Advancing Urban Image Inpainting with MAGT (Mask-Aware Generative Transformer) and CAGAN (Context-Aware GAN): A Deep Learning Approach Using ADE20K and Cityscapes

Authors

  • Mahesh Patil, Oriental University, Indore, India
  • Vikas Tiwari, Oriental University, Indore, India

DOI:

https://doi.org/10.52783/rev-alap.69

Keywords:

Image Inpainting, Generative Adversarial Networks (GANs), Urban Scene Reconstruction, Transformer-based Models, Context-Aware Generation.

Abstract

Urban image inpainting is a critical task in computer vision, enabling the reconstruction of damaged or occluded regions in urban scenes such as roads, buildings, and vehicles. With the advent of deep learning, Generative Adversarial Networks (GANs) have shown promising results, particularly when augmented with attention and contextual learning mechanisms. This paper proposes a dual-model framework combining the Mask-Aware Generative Transformer (MAGT) with a novel Context-Aware GAN (CAGAN). The models are trained and evaluated on diverse urban datasets: ADE20K, Cityscapes, and Stanford Cars. Experimental results demonstrate significant improvements in perceptual and structural quality as measured by SSIM, LPIPS, and FID. This dual-model strategy achieves state-of-the-art performance in challenging inpainting scenarios.
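The abstract evaluates reconstructions with SSIM, among other metrics. As a rough illustration only (not the paper's evaluation code), a simplified single-window SSIM can be sketched in NumPy; the standard metric additionally averages over local Gaussian windows, and the arrays here are synthetic stand-ins for an inpainted patch and its ground truth:

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Simplified SSIM computed over the whole image as one window.

    The full SSIM averages this statistic over local windows; this
    single-window version just illustrates the formula's ingredients:
    luminance (means), contrast (variances), and structure (covariance).
    """
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

# Synthetic example: a "ground-truth" patch and a noisy "reconstruction".
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = np.clip(clean + 0.1 * rng.standard_normal(clean.shape), 0.0, 1.0)

identical_score = global_ssim(clean, clean)  # exactly 1.0 for identical images
noisy_score = global_ssim(clean, noisy)      # below 1.0 for a degraded image
```

Scores approach 1.0 as structural fidelity improves, which is why higher SSIM is reported as better inpainting quality.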

Published

2025-07-11

Section

Research Articles