Is One GPU Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models

Athanasios Tragakis1*, Marco Aversa2*, Chaitanya Kaul1, Roderick Murray-Smith1, Daniele Faccio1
1University of Glasgow 2Dotphoton
*Indicates Equal Contribution


NeurIPS 2024

Abstract

In this work, we introduce Pixelsmith, a zero-shot text-to-image generative framework for sampling images at higher resolutions on a single GPU with minimal computational resources. We are the first to show that it is possible to scale the output of a pre-trained diffusion model by a factor of 1000, opening the way to gigapixel image generation at no additional cost. Our cascading method uses the image generated at the lowest resolution as a baseline to sample at higher resolutions. For guidance, we introduce the Slider, a tunable mechanism that fuses the overall structure contained in the first-generated image with enhanced fine details. At each inference step, we denoise patches rather than the entire latent space, keeping memory demands low enough that a single GPU can handle the process, regardless of the image's resolution. Our experimental results show that Pixelsmith not only achieves higher quality and diversity compared to existing techniques, but also reduces sampling time and artifacts.

One GPU is enough!

With Pixelsmith, you can scale pre-trained generative models to produce gigapixel-scale images on a single NVIDIA RTX 3090 (24 GB VRAM).
Note: For the best experience, it’s recommended to view the website on a computer.


Methods

Slider

Adjusting the Slider changes the balance between fine detail and overall structure. Moving it to the left adds fine details but can introduce artifacts, while moving it to the right preserves the overall structure but loses detail. Depending on the image, there is an ideal position that offers the best balance, preserving structure while still adding extra content.
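As a concrete illustration, here is a minimal sketch of how such a guidance rule could look inside a diffusers-style denoising loop. The percentage mapping and the hard switch between guided and free steps are assumptions made for this sketch, not the paper's exact implementation:

import torch

def slider_guidance(latent, base_latent, scheduler, t, step_idx, num_steps, slider=25):
    # Illustrative Slider rule (an assumption, not the paper's exact code).
    # slider in [0, 100]: low values give weak guidance (more new detail,
    # more risk of artifacts); high values give strong guidance (the base
    # image's structure is enforced for longer).
    if step_idx < num_steps * slider / 100:
        # Early steps: re-noise the upsampled base latent to the current
        # noise level and use it as the sample, injecting global structure.
        noise = torch.randn_like(base_latent)
        return scheduler.add_noise(base_latent, noise, t)
    # Late steps: denoise freely so the model can add fine detail.
    return latent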


Interactive example: base resolution (1024×1024, reference) vs. higher resolution (2048×2048, ×4 pixels). The slider value (here 25) interpolates between weak guidance (left) and strong guidance (right).

Improving the base generation

We present a flexible image generation process in which an initial image produced by the base model can be directly enhanced to any higher resolution. This eliminates the need for intermediate resolutions, allowing higher-resolution images to be generated in a single upscaling step.
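What makes arbitrary target resolutions tractable on one GPU is the patch-wise denoising described in the abstract: each step runs the UNet on tiles of the latent rather than the full tensor. Below is a minimal sketch of such a tiled step, assuming a diffusers-style conditional UNet; the tile size, overlap, and averaging of overlapping noise predictions are illustrative assumptions:

import torch

@torch.no_grad()
def denoise_in_patches(latent, t, unet, prompt_emb, patch=128, overlap=32):
    # Illustrative tiled denoising step (names and tile sizes are
    # assumptions, not the paper's exact implementation). Instead of
    # feeding the full high-resolution latent through the UNet, we run
    # it on overlapping tiles, so peak memory stays roughly constant no
    # matter how large the output is; overlaps are averaged to hide seams.
    _, _, H, W = latent.shape
    eps_sum = torch.zeros_like(latent)
    weight = torch.zeros_like(latent)
    stride = patch - overlap
    for y in range(0, max(H - overlap, 1), stride):
        for x in range(0, max(W - overlap, 1), stride):
            y1, x1 = min(y + patch, H), min(x + patch, W)
            y0, x0 = max(y1 - patch, 0), max(x1 - patch, 0)
            tile = latent[:, :, y0:y1, x0:x1]
            # diffusers-style conditional UNet call
            eps = unet(tile, t, encoder_hidden_states=prompt_emb).sample
            eps_sum[:, :, y0:y1, x0:x1] += eps
            weight[:, :, y0:y1, x0:x1] += 1
    return eps_sum / weight  # averaged noise prediction for the full latent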

Multi-step generation

We show a few examples of a two-step image generation process.
Zoom in to see progressively enhanced details (e.g., faces, hands, hair).
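Putting the pieces together, a two-step 1024→2048 run could be structured as follows. This is a sketch under the same assumptions as above: generate_base_latent, upsample, unet, prompt_emb, scheduler, and vae are hypothetical names, and slider_guidance / denoise_in_patches are the illustrative helpers sketched earlier, not the released API:

import torch

prompt = "a photo of an astronaut riding a horse"   # example prompt
base_latent = generate_base_latent(prompt)          # step 1: latent of the 1024x1024 image
guide = upsample(base_latent, scale=2)              # naively upsampled guide latent
hires = torch.randn_like(guide)                     # step 2 starts from fresh noise
num_steps = len(scheduler.timesteps)
for step_idx, t in enumerate(scheduler.timesteps):  # guided re-denoising at 2048x2048
    hires = slider_guidance(hires, guide, scheduler, t, step_idx, num_steps, slider=25)
    eps = denoise_in_patches(hires, t, unet, prompt_emb)
    hires = scheduler.step(eps, t, hires).prev_sample
image = vae.decode(hires)                           # decode to the 2048x2048 image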


Gigapixel images on a single GPU

Move the cursor over the image to zoom in.
The left side of the lens displays the 1024×1024 px image, while the right side shows the 32768×32768 px gigapixel image.
Note: we resized the gigapixel image to a lower resolution for faster visualization.

Gigapixel image

BibTeX

@misc{tragakis2024gpu,
      title={Is One GPU Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models},
      author={Athanasios Tragakis and Marco Aversa and Chaitanya Kaul and Roderick Murray-Smith and Daniele Faccio},
      year={2024},
      eprint={2406.07251},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}