
RESTORING OF 2D SURGERY SCENES IMAGES THROUGH INPAINTING USING GAN

dc.contributor.author Mithun, M
dc.contributor.author Chinnu, Jacob
dc.date.accessioned 2024-07-08T05:10:10Z
dc.date.available 2024-07-08T05:10:10Z
dc.date.issued 2024-06-30
dc.identifier.uri http://210.212.227.212:8080/xmlui/handle/123456789/564
dc.description.abstract In recent years, surgical techniques have advanced significantly with the adoption of minimally invasive procedures that prioritize precision and reduced patient recovery time. However, limited visibility remains a critical challenge, as the surgical field is often obstructed by instruments and the surgeon’s hands. Image inpainting, a technique for reconstructing missing or damaged portions of an image, has emerged as a solution to this challenge. Deep learning, particularly Generative Adversarial Networks (GANs), has shown promise in image inpainting. This work proposes a GAN-based model to restore missing regions in 2D surgery scene images, leveraging the Pix2Pix framework to enhance realism and detail preservation in the inpainted scenes. By implementing and evaluating different U-Net architectures, including pre-trained models such as VGG16, VGG19, ResNet50, and Inception ResNet V2 as encoders, we aim to identify the most effective approach for restoring missing regions in 2D surgery scene images. Using the DREAMING dataset, our comprehensive evaluation combines qualitative assessment with quantitative metrics such as Mean Square Error (MSE), Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM). en_US
dc.language.iso en en_US
dc.relation.ispartofseries ;TKM22MEAI09
dc.title RESTORING OF 2D SURGERY SCENES IMAGES THROUGH INPAINTING USING GAN en_US
dc.type Technical Report en_US
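
The abstract describes a Pix2Pix-style U-Net generator with pre-trained encoders and a set of image-quality metrics, but the report itself is not reproduced in this record. The two Python sketches below are therefore illustrative only: they assume a Keras/TensorFlow implementation, a 256x256 input resolution, and scikit-image >= 0.19 for the metric functions. Function names such as build_vgg16_unet_generator and evaluate_inpainting are hypothetical and not taken from the report.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_vgg16_unet_generator(input_shape=(256, 256, 3)):
    """Illustrative U-Net generator with a pre-trained VGG16 encoder (assumed setup)."""
    # Encoder: VGG16 convolutional base with ImageNet weights.
    base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=input_shape)
    # Encoder feature maps reused as skip connections, from shallow to deep.
    skip_names = ["block1_conv2", "block2_conv2", "block3_conv3", "block4_conv3"]
    skips = [base.get_layer(name).output for name in skip_names]
    x = base.get_layer("block5_conv3").output  # bottleneck (16x16x512 for 256x256 input)

    # Decoder: upsample and concatenate with the matching encoder feature map.
    for skip, filters in zip(reversed(skips), [512, 256, 128, 64]):
        x = layers.Conv2DTranspose(filters, 4, strides=2, padding="same",
                                   activation="relu")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    # The last skip is already at full resolution, so finish with a stride-1 output conv.
    # Tanh output follows the usual Pix2Pix convention of images scaled to [-1, 1].
    outputs = layers.Conv2D(3, 3, padding="same", activation="tanh")(x)
    return Model(base.input, outputs, name="vgg16_unet_generator")
```

In a standard Pix2Pix setup, a generator like this would be trained against a PatchGAN discriminator with an adversarial loss plus an L1 reconstruction term; the report may differ in its exact training configuration. The quantitative metrics listed in the abstract can be computed per image pair roughly as follows:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_inpainting(restored, reference):
    """Return MSE, RMSE, PSNR and SSIM for one image pair.
    Both inputs are assumed to be float arrays in [0, 1] with shape (H, W, 3)."""
    mse = float(np.mean((restored - reference) ** 2))
    rmse = float(np.sqrt(mse))
    psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
    # channel_axis requires scikit-image >= 0.19 (older versions used multichannel=True).
    ssim = structural_similarity(reference, restored, data_range=1.0, channel_axis=-1)
    return {"MSE": mse, "RMSE": rmse, "PSNR": psnr, "SSIM": ssim}
```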

