| dc.description.abstract |
In recent years, surgical techniques have advanced significantly with the adoption of minimally invasive procedures that prioritize precision and reduce patient recovery time. However, limited visibility remains a critical challenge, as the operative field is often obstructed by surgical instruments and the surgeon's hands. Image inpainting, a technique for reconstructing missing or damaged portions of an image, has emerged as a solution to this challenge.
Deep learning, particularly Generative Adversarial Networks (GANs), has shown promise in
image inpainting. This work proposes a GAN-based model to restore missing regions in 2D
surgery scene images, leveraging the Pix2Pix GAN framework to enhance the realism and
detail preservation in inpainted surgical scenes. By implementing and evaluating several
U-Net architectures, including pre-trained models such as VGG16, VGG19, ResNet50, and
Inception ResNet V2 as encoders, we aim to identify the most effective approach for restoring
missing regions in 2D surgery scene images. Using the DREAMING dataset, we conduct a
comprehensive evaluation combining qualitative assessment with quantitative metrics such as
Mean Square Error (MSE), Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio
(PSNR), and Structural Similarity Index (SSIM). |
en_US |