High-Resolution Image Editing via Multi-Stage Blended Diffusion

We propose a new approach that allows diffusion-based image editing to scale to megapixel images!

Figure: Overview of our proposed multi-stage approach.

Diffusion models have shown great results in image generation and image editing. However, current approaches are limited to low resolutions due to the computational cost of training diffusion models for high-resolution generation. We propose an approach that uses a pre-trained low-resolution diffusion model to edit images in the megapixel range. We first use Blended Diffusion to edit the image at a low resolution, and then upscale it in multiple stages, applying a super-resolution model and Blended Diffusion at each stage. With our approach, we achieve higher visual fidelity than by simply applying off-the-shelf super-resolution methods to the output of the diffusion model, and better global consistency than by running the diffusion model directly at a higher resolution.
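To make the staged structure concrete, here is a minimal sketch of the control flow described above. The function and parameter names (`edit_fn`, `sr_fn`, `base_res`, `target_res`) are hypothetical stand-ins, not the paper's actual implementation: `edit_fn` represents a Blended Diffusion editing pass and `sr_fn` a (here assumed 2x) super-resolution model, both supplied by the caller.

```python
from PIL import Image

def multi_stage_edit(
    image: Image.Image,
    mask: Image.Image,
    prompt: str,
    edit_fn,            # hypothetical: (image, mask, prompt) -> edited image
    sr_fn,              # hypothetical: (image) -> upscaled image (assumed 2x)
    base_res: int = 256,
    target_res: int = 1024,
) -> Image.Image:
    """Edit at the diffusion model's native low resolution, then alternate
    super-resolution and Blended Diffusion passes up to the target size."""
    # Stage 1: perform the edit at the low resolution the model was trained on.
    current = edit_fn(
        image.resize((base_res, base_res)),
        mask.resize((base_res, base_res)),
        prompt,
    )
    # Later stages: upscale, then re-run Blended Diffusion at the new
    # resolution to refine detail inside the masked region.
    res = base_res
    while res < target_res:
        res = min(res * 2, target_res)
        current = sr_fn(current).resize((res, res))
        current = edit_fn(current, mask.resize((res, res)), prompt)
    return current
```

Structuring the loop around a fixed per-stage scale factor reflects the idea that each stage only needs to add a bounded amount of detail, rather than asking the diffusion model to generate megapixel content in one shot.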

Find more details and examples in our paper: https://arxiv.org/abs/2210.12965
