MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation

  • Marco Bellagente *+ 4
  • Manuel Brack * 2,3
  • Hannah Teufel * 1
  • Felix Friedrich 3,6
  • Björn Deiseroth 1,3,6
  • Constantin Eichenberg 1
  • Andrew Dai 1
  • Robert J.N. Baldock 1
  • Souradeep Nanda + 5
  • Koen Oostermeijer 1
  • Andres Felipe Cruz-Salinas 1
  • Patrick Schramowski 2,3,6,8
  • Kristian Kersting ✦ 2,3,6,7
  • Samuel Weinbach ✦ 1
  • 1Aleph Alpha   2German Research Center for Artificial Intelligence (DFKI)
  • 3Computer Science Department, TU Darmstadt   4Stability AI   5University of Texas
  • 6Hessian.AI   7Centre for Cognitive Science, TU Darmstadt   8LAION  
  • (+) Work performed while at Aleph Alpha (*) equal contribution (✦) equal supervision

Abstract

The recent popularity of text-to-image diffusion models (DM) can largely be attributed to the intuitive interface they provide to users. The intended generation can be expressed in natural language, with the model producing faithful interpretations of text prompts. However, expressing complex or nuanced ideas in text alone can be difficult. To ease image generation, we propose MultiFusion that allows one to express complex and nuanced concepts with arbitrarily interleaved inputs of multiple modalities and languages. MultiFusion leverages pre-trained models and aligns them for integration into a cohesive system, thereby avoiding the need for extensive training from scratch. Our experimental results demonstrate the efficient transfer of capabilities from individual modules to the downstream model. Specifically, the fusion of all independent components allows the image generation module to utilize multilingual, interleaved multimodal inputs despite being trained solely on monomodal data in a single language.

Method

To enable multimodal, multilingual prompting, both compute-efficiently and without multimodal downstream training data, we use a custom modular encoder. The base of our encoder is a 13B autoregressive transformer (1.1), pretrained on five languages (English, German, Spanish, Italian, and French). We extend the encoder with an image prefix as well as adapters (1.2) to enable multimodality. Additionally, we finetune the biases (2.1) of the LLM to provide embeddings that capture the semantic meaning of the prompt, simplifying the learning of the mapping from embeddings to image outputs. Finally, to align the pre-trained Stable Diffusion model (1.4) with the embeddings of our modular encoder, we retrain the conditioning by finetuning the cross-attention weights (2.2).

[Figure: MultiFusion architecture overview]
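
To make the modular design above more concrete, the following is a minimal PyTorch sketch of how the pieces could be wired together. The module names, dimensions, prefix length, and the stand-in transformer blocks are illustrative assumptions, not the released implementation; the actual encoder is a 13B pretrained multilingual LM whose output sequence conditions Stable Diffusion through finetuned cross-attention, and only a small fraction of parameters (image prefix, adapters, biases, cross-attention) is trained.

# Illustrative sketch only: names, sizes, and the stand-in transformer blocks
# are assumptions, not the actual 13B MultiFusion encoder.
import torch
import torch.nn as nn

class ImagePrefix(nn.Module):
    # Projects image features into a short sequence of pseudo-token embeddings.
    def __init__(self, image_feat_dim=1024, lm_dim=512, prefix_len=16):
        super().__init__()
        self.prefix_len, self.lm_dim = prefix_len, lm_dim
        self.proj = nn.Linear(image_feat_dim, prefix_len * lm_dim)

    def forward(self, image_feats):                      # (B, image_feat_dim)
        return self.proj(image_feats).view(-1, self.prefix_len, self.lm_dim)

class Adapter(nn.Module):
    # Bottleneck adapter added to each (frozen) LM block.
    def __init__(self, dim=512, bottleneck=64):
        super().__init__()
        self.down, self.up = nn.Linear(dim, bottleneck), nn.Linear(bottleneck, dim)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))     # residual update

class MultiFusionEncoder(nn.Module):
    # Frozen LM backbone; only image prefix, adapters, and biases are trained.
    def __init__(self, vocab=32000, dim=512, layers=4):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab, dim)
        self.blocks = nn.ModuleList(                     # stand-in for the pretrained LM
            [nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True) for _ in range(layers)]
        )
        self.adapters = nn.ModuleList([Adapter(dim) for _ in range(layers)])
        self.image_prefix = ImagePrefix(lm_dim=dim)

        for p in self.parameters():                      # freeze everything ...
            p.requires_grad = False
        for m in (self.image_prefix, self.adapters):     # ... then unfreeze prefix and adapters
            for p in m.parameters():
                p.requires_grad = True
        for name, p in self.named_parameters():          # ... and the LM biases (bias tuning)
            if name.endswith(".bias"):
                p.requires_grad = True

    def forward(self, token_ids, image_feats=None):
        h = self.tok_emb(token_ids)
        if image_feats is not None:                      # prepend visual pseudo-tokens
            h = torch.cat([self.image_prefix(image_feats), h], dim=1)
        for block, adapter in zip(self.blocks, self.adapters):
            h = adapter(block(h))
        return h                                          # conditioning sequence

enc = MultiFusionEncoder()
cond = enc(torch.randint(0, 32000, (1, 16)), torch.randn(1, 1024))
print(cond.shape)   # (1, 32, 512): fed to the finetuned Stable Diffusion cross-attention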

Evaluation

In the following, we provide a concise overview of the quantitative and qualitative evaluation of MultiFusion.

Image Fidelity and Text-to-Image Alignment

First, we measure image fidelity and image-text alignment using the standard metrics FID-30K and CLIP score. We find that MultiFusion prompted with text only performs on par with Stable Diffusion, despite the extension of the encoder to support multiple languages and modalities.
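
For reference, both metrics can be computed with off-the-shelf tooling. The sketch below uses torchmetrics with random dummy tensors in place of real COCO references and model samples, so the printed numbers are meaningless; the actual FID-30K evaluation uses 30K generated images.

# Sketch of the two reported metrics via torchmetrics; dummy tensors stand in
# for the 30K COCO reference images and model samples used in the paper.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.multimodal.clip_score import CLIPScore

fid = FrechetInceptionDistance(feature=2048)
clip_score = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")

real = torch.randint(0, 255, (8, 3, 256, 256), dtype=torch.uint8)      # reference images
fake = torch.randint(0, 255, (8, 3, 256, 256), dtype=torch.uint8)      # generated images
prompts = ["an image of an astronaut riding a horse"] * 8

fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", fid.compute().item())
print("CLIP score:", clip_score(fake, prompts).item())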


Compositional Robustness


Image composition is a known limitation of diffusion models. Through evaluation on our new benchmark MCC-250, we show that multimodal prompting leads to greater compositional robustness, as judged by humans. Each prompt is a complex conjunction of two different objects with different colors; the multimodal prompts contain one visual reference for each object, interleaved with the text input.
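
The object and color lists below are illustrative, not the actual MCC-250 vocabulary; the sketch only shows the structure of the benchmark prompts, where each text conjunction is paired with a multimodal variant that interleaves one visual reference per object.

# Illustrative construction of complex-conjunction prompts in the style of
# MCC-250; the word lists and file paths are placeholders.
from itertools import combinations, product

objects = ["apple", "backpack", "bench", "car", "crown"]
colors = ["red", "green", "blue", "yellow", "purple"]

text_prompts, multimodal_prompts = [], []
for (obj_a, obj_b), (col_a, col_b) in product(combinations(objects, 2),
                                              combinations(colors, 2)):
    text_prompts.append(f"a {col_a} {obj_a} and a {col_b} {obj_b}")
    # Multimodal variant: the same conjunction with one reference image per object.
    multimodal_prompts.append(
        [f"a {col_a} ", f"refs/{obj_a}.jpg", f" and a {col_b} ", f"refs/{obj_b}.jpg"]
    )

print(len(text_prompts), text_prompts[0])   # 100 a red apple and a green backpack
print(multimodal_prompts[0])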

Multilinguality

Below we demonstrate the multilingual alignment of images generated by MultiFusion. All images were generated using the same seed and the respective translation of the prompt 'an image of an astronaut riding a horse'.


We show a comparison of multilingual alignment over DrawBench prompts. MultiFusion achieves comparable alignment of the output images across languages, although the image generation module was trained only on English data. This can be attributed to the strong alignment of multilingual prompts in MultiFusion's embedding space.
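
The comparison above can be reproduced in spirit by fixing the seed for every language. In the sketch below, the translations are ours and a public Stable Diffusion checkpoint serves purely as a stand-in interface: plain Stable Diffusion will not follow the non-English prompts the way MultiFusion does, and the MultiFusion checkpoint itself is not distributed through diffusers.

# Same-seed generation across translated prompts; translations and the stand-in
# checkpoint are ours, not part of the MultiFusion release.
import torch
from diffusers import StableDiffusionPipeline

prompts = {
    "en": "an image of an astronaut riding a horse",
    "de": "ein Bild von einem Astronauten, der auf einem Pferd reitet",
    "es": "una imagen de un astronauta montando a caballo",
    "it": "un'immagine di un astronauta che cavalca un cavallo",
    "fr": "une image d'un astronaute montant à cheval",
}

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

for lang, prompt in prompts.items():
    generator = torch.Generator("cpu").manual_seed(42)   # identical seed for every language
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"astronaut_{lang}.png")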


Attention Manipulation for Multimodal Inference

Attention Manipulation, based on AtMan, allows us to weight image and text tokens at inference time and guide their influence on the resulting generation.
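
The sketch below shows one simplified, single-head variant of such token re-weighting: post-softmax attention weights towards selected prompt tokens are scaled and renormalized. AtMan itself operates on the pre-softmax scores inside the encoder, so treat this purely as an illustration of the idea.

# Simplified single-head illustration of attention manipulation: the weight
# given to selected key tokens is scaled, then the distribution is renormalised.
import torch
import torch.nn.functional as F

def reweighted_attention(q, k, v, token_factors):
    # q: (B, Tq, d), k/v: (B, Tk, d), token_factors: (Tk,) with 1.0 = unchanged
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    attn = F.softmax(scores, dim=-1) * token_factors.view(1, 1, -1)
    attn = attn / attn.sum(dim=-1, keepdim=True)
    return attn @ v

B, Tq, Tk, d = 1, 4, 6, 8
q, k, v = torch.randn(B, Tq, d), torch.randn(B, Tk, d), torch.randn(B, Tk, d)

factors = torch.ones(Tk)
factors[:3] = 2.0    # amplify e.g. the tokens of an image reference
factors[3:] = 0.5    # dampen the remaining text tokens
print(reweighted_attention(q, k, v, factors).shape)   # (1, 4, 8)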


Applications

Finally, we present use cases and applications demonstrating the unique capabilities of MultiFusion.

Interleaved multilingual, multimodal prompting


Image Composition

MultiFusion increases expressiveness in composition through arbitrary and flexible prompting with interleaved sequences of images and text.


Negative Prompting

Negative prompting with images enables more powerful suppression than text prompts alone.
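
Mechanically, negative prompting plugs into classifier-free guidance by replacing the unconditional embedding with the embedding of the negative prompt; with MultiFusion that embedding can come from an image. The sketch below assumes a diffusers-style UNet whose forward call returns an object with a .sample attribute and is only meant to show where the negative embedding enters.

# Classifier-free guidance with a negative conditioning; `unet` is assumed to be
# a diffusers-style UNet2DConditionModel, and `negative_emb` may be the encoder
# embedding of an image rather than of text.
def guided_noise(unet, latents, t, cond_emb, negative_emb, guidance_scale=7.5):
    noise_neg = unet(latents, t, encoder_hidden_states=negative_emb).sample
    noise_pos = unet(latents, t, encoder_hidden_states=cond_emb).sample
    # Step away from the negative concept and towards the positive prompt.
    return noise_neg + guidance_scale * (noise_pos - noise_neg)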


Style Modification

MultiFusion enables simple style transfer through a single reference image capturing all facets of a unique style, such as color palette, composition, and contrast, making elaborate prompts unnecessary. Additionally, MultiFusion enables highly individual prompting such as "in the style of a picture I drew".

Image Variation

MultiFusion produces meaningful image variations without the need for inversion or renoising of the input image.
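
In terms of the encoder sketch from the method section, a variation is produced by prompting with the image itself and simply drawing fresh starting noise for each sample; nothing here is the released implementation, and `enc` refers to the illustrative MultiFusionEncoder defined above.

# Image variations via an image-only prompt (illustrative; reuses `enc` from the
# method sketch). Only the initial noise changes between variations; the input
# image is never inverted or re-noised.
import torch

image_feats = torch.randn(1, 1024)                             # stand-in features of the input image
cond = enc(torch.empty(1, 0, dtype=torch.long), image_feats)   # empty text, image-only prompt

for seed in range(4):
    g = torch.Generator().manual_seed(seed)
    latents = torch.randn(1, 4, 64, 64, generator=g)           # fresh starting noise per variation
    # ...denoise `latents` with the diffusion model conditioned on `cond`...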


Follow-Up Work

To contribute to the fair evaluation and safety of large generative models, we opened our model for evaluation on the Holistic Evaluation of Text-to-Image Models (HEIM) benchmark [results, paper], as well as for a study on the mitigation of inappropriateness in image generation [paper].

Citation

@article{bellagente2023multifusion,
      title={MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation}, 
      author={Marco Bellagente and Manuel Brack and Hannah Teufel and Felix Friedrich and Björn Deiseroth and Constantin Eichenberg and Andrew Dai and Robert Baldock and Souradeep Nanda and Koen Oostermeijer and Andres Felipe Cruz-Salinas and Patrick Schramowski and Kristian Kersting and Samuel Weinbach},
      year={2023},
      journal={arXiv preprint arXiv:2305.15296},
}

Acknowledgements

The website template was borrowed from Jon Barron.