X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers

Abstract

Mirroring the success of masked language models, vision-and-language counterparts like ViLBERT, LXMERT and UNITER have achieved state of the art performance on a variety of multimodal discriminative tasks like visual question answering and visual grounding. Recent work has also successfully adapted such models towards the generative task of image captioning. This begs the question: Can these models go the other way and generate images from pieces of text? Our analysis of a popular representative from this model family, LXMERT, finds that it is unable to generate rich and semantically meaningful imagery with its current training setup. We introduce X-LXMERT, an extension to LXMERT with training refinements including: discretizing visual representations, using uniform masking with a large range of masking ratios, and aligning the right pre-training datasets to the right objectives, which enables it to paint. X-LXMERT's image generation capabilities rival state of the art generative models while its question answering and captioning abilities remain comparable to LXMERT. Finally, we demonstrate the generality of these training refinements by adding image generation capabilities into UNITER to produce X-UNITER.

1 Introduction

The past year has seen a spate of BERT-style (Devlin et al., 2019) transformer-based architectures (Chen et al., 2019; Li et al., 2019) proposed for vision-and-language tasks. These models are typically pre-trained on large image captioning corpora, extending ideas from masked language modeling to mask both the image and text modalities, and produce state-of-the-art results on a variety of vision-and-language tasks including visual question answering, visual grounding and image retrieval. These impressive results, as well as recent probing mechanisms (Ilharco et al., 2020), suggest that these models capture a variety of semantics in images, including objects, attributes and their relationships, and ground these in natural language.

While these models have been extensively evaluated over several discriminative tasks, relatively little attention has been paid to their generative capabilities. Bidirectional transformer models like BERT, which exploit context preceding and following the current token, are not explicitly designed for generation. Recent work for language-only transformers (Dong et al., 2019; Liao et al., 2020) adapts these models towards this capability using sampling procedures. Such techniques have also been adapted successfully for image captioning: inputting an image and sampling the textual side of the model to generate a relevant caption. This begs the question: Can we go the other way and sample images from input pieces of text? That is, do vision-and-language BERT models know how to paint?

In this work, we probe the ability of a powerful and popular representative from this family of models, LXMERT (Tan and Bansal, 2019), to produce high-fidelity and semantically meaningful images conditioned on captions. Interestingly, our analysis leads us to the conclusion that LXMERT in its current form does not possess the ability to paint: it produces images that have little resemblance to natural images. This is a somewhat surprising finding given LXMERT's masked training objectives for both modalities and its impressive performance on tasks that seemingly require a similar skill set.

We find that this is largely due to the regression training objective used by this family of models to predict masked features on the visual side. This is in contrast with the textual side, where they predict masked tokens within a large discrete vocabulary using a classification objective. Regressing features in high dimensional spaces is challenging to optimize and introduces noise at inference. This gets compounded when using iterative sampling procedures to predict the entire set of visual features. A downstream image generator consuming these predictions isn't able to recover from this noise even when fine-tuned on LXMERT's predictions.

We introduce X-LXMERT that builds upon LXMERT and enables it to effectively perform discriminative as well as generative tasks. Our key refinements include: (a) simplifying the visual inputs to use grid features instead of object detection bounding boxes, (b) discretizing visual representations, (c) using uniform masking with a large range of masking ratios to enable the model to predict the entire set of visual clusters at inference time and (d) aligning the right pre-training datasets to the right objectives. When coupled with our proposed image generator, X-LXMERT is able to generate rich imagery that is semantically consistent with the input captions. Importantly, X-LXMERT's image generation capabilities rival state-of-the-art image generation models (designed only for generation), while its question answering capabilities show little degradation compared to LXMERT.

In summary, we present X-LXMERT, a unified multimodal transformer model that can answer questions, and also generate captions and images. Our extensions to enable these capabilities are not tied to LXMERT's underlying architecture. We expect that the entire family of multimodal BERT models can be enhanced with image generative capabilities using our introduced strategy.

2 Related Works

Visual-Language transformer models Recent multi-modal pre-training models show significant improvements on a wide range of downstream tasks, including discriminative tasks (e.g., visual question answering) and generation tasks (e.g., image captioning). Some methods use a single transformer architecture to jointly encode text and image (Li et al., 2019; Su et al., 2019; Alberti et al., 2019; Rahman et al., 2019; Li et al., 2020; Chen et al., 2019; Qi et al., 2020; Huang et al., 2020), while others use two-stream architectures (Lu et al., 2019, 2020; Tan and Bansal, 2019). These models typically consume object detection features. We probe this family of models at the task of image generation and present extensions that enable them to reliably generate images.

Sequence generation with undirected transformers When generating sequences with conventional transformer language models, it is natural to sample tokens from left to right. However, since undirected transformers (e.g., BERT) are not trained with a specific generation order, a line of work has investigated different strategies for sequence generation with undirected models. One approach uses Gibbs sampling starting from an all-mask sequence, while Dong et al. (2019) and Bao et al. (2020) use causal attention during training for left-to-right generation. Liao et al. (2020); Mansimov et al. (2019); Ghazvininejad et al. (2019) sample masks from a uniform distribution during training for arbitrary-order or parallel generation. We adapt these techniques for grid-based image generation.

Text-to-image synthesis Synthesizing images from text descriptions continues to be challenging. Since the pioneering work of Reed et al. (2016), many methods have adopted GANs (Goodfellow et al., 2014) to generate high-fidelity images. Nguyen et al. (2017) generate images that maximize the activation of a pretrained captioning model. Recent works (Zhang et al., 2017; Li et al., 2019) use multi-stage generation, where low-resolution images are initially sampled, then gradually upsampled and improved in later stages. These models are specialized toward image generation, whereas our model can not just generate images, but also answer questions and generate captions. Also, our design is modular in nature: while we use a compact image generator with X-LXMERT, one could replace it with any of the aforementioned model architectures.

Grid visual representation Compared to bounding box representations, which require expensive object detection annotations, grid representations of images can be naturally obtained from CNNs. Huang et al. (2020) have recently shown that these can be almost as powerful as bounding box representations for VQA. Grid representations have been widely used in vision tasks, including self-supervised learning (Oord et al., 2018; Henaff et al., 2019; Trinh et al., 2019; Gidaris et al., 2020; Noroozi and Favaro, 2016) and image generation (van den Oord et al., 2017; Lin et al., 2019). We leverage grid visual representations to enable LXMERT to generate images.

3 Background: Revisiting LXMERT

Over the past year, a large number of transformer-based architectures for multimodal data have produced impressive results across a variety of discriminative tasks. Some of these models have been shown to perform very well at the generative task of image captioning, but little attention has been paid to the reverse generative task: generating images given text. In this work, we first probe one popular representative from this family, LXMERT (Tan and Bansal, 2019), in its ability to paint, and propose extensions that enable it to paint.

Figure 1: Top: Overview of the proposed X-LXMERT model. Blocks in blue are the modifications we make to the LXMERT model to enable it to paint. Bottom: Overview of the image generation architecture. The input to the model is a natural image that is compressed to a quantized latent map of size 8 × 8 by RoI Pooling. We use a generator consisting of multiple residual blocks with SPADE layers, which encode the 8 × 8 grid features.

LXMERT is a cross-modality transformer with inputs: an image $I$ and text $T$. This is represented as the sequence $\{v_1, \ldots, v_T, \mathrm{CLS}, w_1, \ldots, w_T, \mathrm{EOS}\}$, where $\{v_i\}_{i=1}^{T}$ are image region features, $\{w_j\}_{j=1}^{T}$ are word tokens, and CLS and EOS are special tokens. LXMERT outputs embeddings for each input: $\{h^v_i\}_{i=1}^{T}$, $\{h^w_j\}_{j=1}^{T}$, $h_{\mathrm{CLS}}$ and $h_{\mathrm{EOS}}$. $h_{\mathrm{CLS}}$ is used as the cross-modality output. Internally, LXMERT consists of two types of encoders: single-modality encoders for each modality and a cross-modality encoder using bi-directional cross attention to exchange information and align entities across the modalities.

LXMERT is pretrained on several vision-and-language datasets with five objectives: Masked language modeling (MLM) and Masked visual feature regression (MVFR), which reconstruct randomly masked words and regions given the remaining inputs; Masked object classification (MOC), object classification on masked image regions; Image-text matching (ITM), image-caption alignment prediction; and Question answering (QA), answering a question given the image. After pretraining, LXMERT is finetuned for various downstream tasks. Unless noted, we use the default settings and hyperparameters of LXMERT in our experiments.

4 Probing LXMERT's Ability To Paint

In order to probe LXMERT's ability to paint, we first modify its input image representation to a grid based feature set (Section 4.1) and then pass these to an image generator (Section 4.2).

4.1 Grid Image Features

Most popular multimodal BERT models use image features extracted from the output of a Faster R-CNN (Ren et al., 2015) object detector. The detected objects typically have varying locations and sizes. Passing these features into an image generator poses some challenges: (1) LXMERT is not trained to predict the locations of objects, (2) it is not trivial to predict both object classes and their locations simultaneously, and (3) object detections do not cover image backgrounds.

We modify LXMERT to use a uniform N × N grid and use RoI Pooling to extract the grid features. Note that we use the same detection backbone pretrained on the Visual Genome dataset to maintain parity with the original LXMERT. Our experiments in Sec 6 show that moving to a grid-based input causes very little degradation on downstream QA tasks, consistent with recent findings on grid features.
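To make the grid extraction concrete, here is a minimal sketch of pooling an N × N grid of features from a backbone feature map, using torchvision's roi_align as a stand-in for the detection backbone's RoI Pooling layer; the helper names and the single-feature-map assumption are ours, not part of the released implementation.

```python
import torch
from torchvision.ops import roi_align

def uniform_grid_boxes(height, width, n=8):
    """Boxes (x1, y1, x2, y2) for an n x n uniform grid over the image, row-major."""
    ys = torch.linspace(0, float(height), n + 1).tolist()
    xs = torch.linspace(0, float(width), n + 1).tolist()
    boxes = [[xs[j], ys[i], xs[j + 1], ys[i + 1]]
             for i in range(n) for j in range(n)]
    return torch.tensor(boxes)  # (n*n, 4)

def grid_features(feature_map, image_size, n=8):
    """Pool an n x n grid of features from a single backbone feature map.

    feature_map: (1, C, H, W) tensor from the frozen detection backbone.
    image_size:  (height, width) of the original image.
    """
    h, w = image_size
    boxes = uniform_grid_boxes(h, w, n)
    # roi_align expects boxes in image coordinates; spatial_scale maps them
    # onto the (downsampled) feature map.
    scale = feature_map.shape[-1] / w
    pooled = roi_align(feature_map, [boxes], output_size=(1, 1), spatial_scale=scale)
    return pooled.flatten(1)  # (n*n, C) grid features in row-major order
```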

Sampling grid features: Given text input, we sample predicted visual features $\{h^v_i\}_{i=1}^{T}$, where $T = N \times N$ is the number of image regions, using Gibbs sampling in a manner similar to language generation with BERT.

4.2 Image Generation

We use a compact image generator inspired by recent state-of-the-art image synthesis methods leveraging Generative Adversarial Networks (GANs) (Goodfellow et al., 2014). It takes as input an N × N grid of visual features from the pretrained Faster R-CNN network and generates an image. As shown in Fig 1, the input grid features are projected through convolutional layers and then passed to an image generator consisting of multiple residual blocks. Each generator residual block has a SPADE layer (Park et al., 2019), which guides the generator to output high-fidelity images given semantic grid layouts. In our experiments, we use an image generator that takes 8 × 8 grid features and outputs a 256 × 256 image.

Training the image generator: The generator is pre-trained using 8 × 8 ground-truth Faster R-CNN features, akin to teacher forcing, without any inputs from LXMERT. We train the generator with the same loss as Park et al. (2019), but replace the segmentation map with a grid feature map. Fig. 2 (b) shows that our generation architecture can successfully reconstruct images using ground-truth pre-trained grid features. Note that the generator still displays some reconstruction errors compared with modern auto-encoders such as VQ-VAE-2, primarily due to (1) freezing the encoder backbone in order to match LXMERT's training settings and (2) restricting grid features to a low (and manageable) dimension.

Figure 2: Top: Image generation from X-LXMERT. Given the text input and with all visual features masked, we first sample grid features using Gibbs sampling over multiple iterations. The sampled grid features are then fed into the generator to produce the image. Bottom: Sampled images, from left to right: (a) original image, (b) reconstruction from GT features, (c) sampling from LXMERT, (d) sampling from X-LXMERT without uniform masking pretraining, (e) our proposed X-LXMERT, (f) generated image from DM-GAN (Zhu et al., 2019).

4.3 Can LXMERT Paint?

Our experiments in Section 6 reveal that LXMERT is unable to produce visual features that can be converted to a meaningful image by a generator. Figure 2 shows an example. Recall that the LXMERT loss function includes a regression loss, MVFR, that corresponds to regressing target visual features given the textual and visual context. Unfortunately, at inference this loss remains high on the validation set, causing the predicted visual features to be fairly noisy. In addition, the Gibbs sampling procedure causes this error to propagate over the entire set of features. The resulting predictions are not suitable for downstream image generation.

5 X-LXMERT

In this section, we present X-LXMERT¹, which extends LXMERT, enabling it to paint while still maintaining high performance on discriminative tasks. X-LXMERT has three key refinements that enable it to paint (Sec 5.1): discretizing visual representations, using uniform masking with a large range of masking ratios, and aligning the right pre-training datasets to the right objectives. We then leverage Gibbs sampling to generate visual features given textual input (Sec 5.2).

5.1 From LXMERT To X-LXMERT

Discrete visual representations: We observe that the visual features regressed by LXMERT are not suitable for image generation. Instead, akin to VideoBERT (Sun et al., 2019), we first create a visual vocabulary using K-means clustering, approximate the target visual features via a nearest-neighbor search, and modify LXMERT to predict the cluster ID for each masked visual token. A new Cluster-Centroid Classification (CCC) objective replaces the previous regression objective with a high-cardinality classification objective. Our experiments show that discretizing visual representations helps the model predict better visual features, stems the propagation of feature noise over sampling iterations, and generates rich imagery.

Uniform instead of Bernoulli masking: Following BERT, LXMERT uses Bernoulli sampling (with p = 0.15) to determine the positions of masked tokens on the visual and textual sides. In order to generate an image from a caption, all tokens on the vision side must be masked and predicted. A low-probability Bernoulli sampling procedure does not prepare the model well for this generation task, and increasing the probability to very high values leads to poor pre-training. To resolve this, we use uniform masking on the vision modality (sketched below): X-LXMERT first samples the masking ratio from a uniform prior distribution over [0, 1], and then samples the desired number of positions randomly. This subjects the model to a variety of masking ratios, and our experiments reveal that this greatly benefits image generation.

Updating pre-training data: LXMERT uses a variety of data to pre-train the model: QA data from multiple sources, caption data from COCO, and captions from Visual Genome (VG). Since X-LXMERT uses the CCC loss function, predicting visual features given questions like "What is shown in the image?" is very ambiguous and results in models that cannot predict visual clusters. Similarly, many captions from VG (e.g., "A bag" or "Glasses on the hair") tend to describe small regions of the image rather than the whole image, which makes them unsuited to the CCC objective. X-LXMERT therefore drops the QA data and the VG captions for its CCC objective for visual cluster prediction. Removing this data helps significantly.
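As a concrete illustration of the uniform masking scheme described above, the sketch below draws a masking ratio from U(0, 1) and masks that fraction of grid positions; the function and argument names are illustrative rather than taken from the released code.

```python
import torch

def uniform_mask(tokens, mask_id):
    """Uniform masking for the visual modality (sketch).

    Instead of Bernoulli(p=0.15) masking, first draw a masking ratio
    r ~ Uniform(0, 1), then mask that fraction of grid positions, so the
    model sees everything from lightly masked inputs to the fully masked
    input used at generation time.

    tokens: 1-D LongTensor of visual cluster IDs, one per grid position.
    """
    num_positions = tokens.numel()
    ratio = torch.rand(1).item()                        # r ~ U(0, 1)
    num_masked = int(round(ratio * num_positions))
    positions = torch.randperm(num_positions)[:num_masked]
    masked = tokens.clone()
    masked[positions] = mask_id
    return masked, positions                            # inputs and mask targets
```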

5.2 Sampling Strategies For X-LXMERT

Given text input, predicting the entire set of visual features in one step does not produce good results. Instead, we employ Gibbs sampling to iteratively sample features at different spatial locations. In contrast to text generation, where left-to-right is considered a natural order, there is no natural order for generating images. The grid sampling process starts with all N² positions filled with the MASK special token. The model then iteratively updates locations, either one-by-one or several in parallel. Sampling strategies for choosing locations on the square grid fall primarily into two buckets: auto-regressive and non-autoregressive.

Autoregressive sampling In each iteration, a grid position is sampled, masked and predicted; the corresponding MASK token is then replaced with the prediction, and the process is repeated until all locations are updated. In the TL→BR variant, positions are visited in raster order from top-left to bottom-right. In the Random-K variant (Liao et al., 2020), positions are selected in random order and the process is repeated K times, so locations may be updated more than once.

Non-autoregressive sampling In each iteration, multiple positions are sampled, masked with MASK, predicted and then replaced.

Mask-Predict-K (Ghazvininejad et al., 2019): This requires K sampling steps. In the first iteration, all N² locations are updated. We then linearly decay the number of tokens updated per iteration. For example, for a 2 × 2 grid where N² = 4 and K = 4, (4, 3, 2, 1) positions are updated in the successive iterations. Within each iteration, the positions with the lowest confidence are updated. Our experiments show that Mask-Predict-4 consistently produces good results across a variety of generation metrics, and we propose using it for X-LXMERT. Our uniform masking procedure makes the model robust to a varied number of masked locations, which enables Mask-Predict to work well. We notice that this sampling usually generates the primary objects coarsely at the beginning, followed by the rest of the details and background.
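The Mask-Predict-K procedure described above can be sketched as follows; the model interface (text inputs plus current grid tokens in, per-position logits over the visual vocabulary out) is an assumption for illustration.

```python
import torch

@torch.no_grad()
def mask_predict(model, text_inputs, num_pos, mask_id, num_steps=4):
    """Mask-Predict-K sampling for the visual grid (sketch).

    `model(text_inputs, grid_tokens)` is assumed to return per-position
    logits over the visual vocabulary, shape (num_pos, vocab_size).
    """
    tokens = torch.full((num_pos,), mask_id, dtype=torch.long)
    confidence = torch.zeros(num_pos)

    for step in range(num_steps):
        # Linearly decay how many positions get re-masked and re-predicted:
        # e.g. for num_pos=4, num_steps=4 -> 4, 3, 2, 1 updates per step.
        num_update = int(round(num_pos * (num_steps - step) / num_steps))
        # Re-mask the currently least confident positions.
        update_idx = confidence.topk(num_update, largest=False).indices
        tokens[update_idx] = mask_id

        logits = model(text_inputs, tokens)              # (num_pos, vocab)
        probs = logits.softmax(dim=-1)
        sampled = torch.multinomial(probs, 1).squeeze(-1)

        # Commit predictions only at the re-masked positions and record
        # their confidence for the next iteration.
        tokens[update_idx] = sampled[update_idx]
        confidence[update_idx] = probs[update_idx, sampled[update_idx]]

    return tokens  # cluster IDs for every grid position
```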

6 Experiments

We now present an analysis of X-LXMERT for discriminative and generative tasks. Implementation details are provided in the appendix.

6.1 Evaluating Image Generation

We train and evaluate models using the MS COCO captioning dataset (Lin et al., 2014). We compare X-LXMERT with LXMERT and state-of-the-art text-to-image generation methods: StackGAN, PPGN (Nguyen et al., 2017), AttnGAN, ControlGAN (Li et al., 2019), and DM-GAN (Zhu et al., 2019). Image generation is a particularly difficult task to evaluate, due to the variability in acceptable outputs for a given caption, as well as the subjective nature of perceiving image quality. We present a suite of automated and manual metrics to compare models.

[Table 1 fragment: baseline scores include StackGAN 8.5, PPGN 9.6, AttnGAN 25.9 / 35.5, ControlGAN 24.1, and DM-GAN; see Table 1 for the full comparison.]

Automated Metrics: Evaluate Image Quality

We use Inception score (IS) (Salimans et al., 2016) to measure image diversity and Fréchet Inception Distance (FID) (Heusel et al., 2017) to measure authenticity; using Inception v3 (Szegedy et al., 2016) as a surrogate net.

Automated Metrics: Evaluate semantics We use two variants of R-precision, R-prec-easy and R-prec-hard, to evaluate whether the image is well conditioned on the input text. Given a generated image, a positive caption and a set of negatives, R-precision measures the retrieval rate for the positive caption using a surrogate multi-modal network.

We use an independent surrogate, ViLBERT-MT (Lu et al., 2020), for this purpose. R-prec-easy is the variant of R-precision with easy negatives (sampled randomly from the caption set). R-prec-hard is the variant with hard negatives (swapping a word in a caption with another word from the same category, e.g., red ⇒ green). We choose words from one of 4 categories: nouns (80 COCO objects), 64 verbs, 10 colors and 10 numbers. The above automatic metrics, while cheap and reproducible, are noisy because they depend on imperfect surrogate models. The ultimate measure of quality and semantics for image generation continues to be crowd-sourced human studies.

Human Study: Pairwise preferences We conduct human preference evaluations between X-LXMERT and the best performing model in the automated metrics, DM-GAN. We measure (1) semantic preference, by showing two images and asking annotators to select the one that best matches the source caption, and (2) fidelity preference, by showing the two images alone and asking which appears more realistic. Both evaluations also allow a third option (Tie). For each evaluation, 5000 image pairs were used, and 357 unique crowdworkers participated in total (median annotations per worker: 17).

Human Study: HUMMUS The above pairwise test is very useful and widely used to evaluate generative models, but measuring new models becomes challenging, since they must be compared to all previous models. To expand human evaluation, we present a novel metric to test semantic consistency between the caption and image, inspired by masked token modeling, named HUmans Measuring seMantics Using maSking (HUMMUS). To compute HUMMUS, human annotators are shown an image and its caption with a single word masked out. They are asked to complete the partial caption based on information in the image, and a match is counted only when a majority of annotators supply the correct word. The total score is reported as the ratio of these successful matches. The task was run on 2800 image-caption pairs (2289 unique images), with 5 annotators per pair. A total of 280 unique crowdworkers completed the task, with a median of 13 images annotated per worker. A high HUMMUS score indicates that the generated images contain the corresponding semantics well enough to be recognized. The masked word is chosen from one of 3 categories: nouns, verbs and colors.
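A minimal sketch of how the HUMMUS score could be computed from the collected annotations is shown below; the data format and the assumption that answers are already lemmatized are ours.

```python
from collections import Counter

def hummus_score(examples, min_majority=3):
    """HUMMUS (sketch): fraction of image-caption pairs where a majority of
    annotators recover the masked word from the generated image.

    `examples` is assumed to be a list of dicts like
    {"masked_word": "giraffe", "answers": ["giraffe", "deer", "giraffe", ...]}
    with lemmatized words; with 5 annotators per pair, a majority is >= 3.
    """
    matches = 0
    for ex in examples:
        votes = Counter(ex["answers"])
        if votes[ex["masked_word"]] >= min_majority:
            matches += 1
    return matches / len(examples)
```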


MOC, MVFR tasks → CCC task We replace the MOC and MVFR tasks with the CCC task (see Sec. 5.1) for X-LXMERT. For the CCC head, we simply change the output dimension of the fully connected layer used for the MOC task from the number of object classes to the number of clusters (1600 → 10000).
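A minimal sketch of such a CCC head is shown below, assuming LXMERT's 768-dimensional hidden states; the MVFR-style transform layers that precede this classifier are omitted for brevity.

```python
import torch.nn as nn

class CCCHead(nn.Module):
    """Cluster-Centroid Classification head (sketch).

    Mirrors the MOC object-classification head but widens the output layer
    from the 1,600 Visual Genome object classes to the 10,000 visual
    cluster centroids; a cross-entropy loss over cluster IDs replaces the
    MVFR regression loss.
    """
    def __init__(self, hidden_size=768, num_clusters=10000):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_clusters)

    def forward(self, masked_visual_states):
        # (num_masked, hidden_size) -> (num_masked, num_clusters) logits
        return self.classifier(masked_visual_states)
```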

6.2 Evaluating Visual Question Answering

We train and evaluate models for visual question answering using the VQA2.0 (Goyal et al., 2019) and GQA (Hudson and Manning, 2019) datasets, which provide an image and a question and require the model to generate an answer.

6.3 Evaluating Visual Reasoning

We train and evaluate models for visual reasoning using the NLVR2 (Suhr et al., 2019) dataset and report numbers on the dev and test-P splits. The NLVR2 dataset requires models to look at two images and determine whether an accompanying caption is True or False. This is a particularly challenging dataset for present-day vision and language models.

Table 1 provides detailed metrics for X-LXMERT and baselines. It also provides generation metrics for the original image in the dataset for the corresponding input text. Note that X-LXMERT and LXMERT+Grid are the only models that are able to produce results for all tasks.

Image Generation. As seen, X-LXMERT significantly outperforms LXMERT across all generation metrics. X-LXMERT even outperforms two specialized generation models and is comparable to AttnGAN and ControlGAN. Our model scores lower than DM-GAN on the automated metrics (IS and FID), but it is competitive with DM-GAN on the semantic metric (R-prec-hard).² Note that X-LXMERT's image generator is much smaller than the one used by DM-GAN (1.7M vs 22.3M parameters). While the transformer employed in X-LXMERT is large, it is a unified textual and visual encoder used for multiple tasks and is not finetuned for image generation. We expect X-LXMERT's image quality to improve further when coupled with a larger image generator such as the one used by DM-GAN.

Table 1 also presents HUMMUS scores. Here we see that the semantics generated by X-LXMERT are on par with DM-GAN and still significantly better than LXMERT. All models are still a distance away from the original image. HUMMUS matches on the lemmatized forms of masked words to allow for lexical variation, but it misses synonyms and other valid descriptors; this causes the score for the original image to drop to its reported value. See the appendix for R-prec-hard and HUMMUS broken down into categories.

Table 1: Comparing X-LXMERT, LXMERT and baselines on image generation, visual question answering and visual reasoning tasks. The pairwise metric compares LXMERT and DM-GAN; numbers do not sum to 100 due to the TIE option provided to annotators. Note that X-LXMERT and LXMERT*+Grid are the only models that are able to produce results for all tasks. *: Our re-implementation of LXMERT.

6.4 Results

Finally we present human pairwise preference scores between X-LXMERT and DM-GAN (its closest competitor). Here we see that human annotators clearly prefer X-LXMERT to DM-GAN for semantics as well as fidelity.

² Note: R-prec and HUMMUS are reported only for DM-GAN (the strongest of the 5 baselines), since this was the only model with code and pretrained weights. IS and FID numbers are taken from the respective publications. The detailed R-prec-hard numbers across categories are presented in the appendix.


Figure 3: Qualitative examples of images generated by X-LXMERT.
Figure 4: Images generated by X-LXMERT at intermediate stages of sampling.


Figure 5: Captions generated by X-LXMERT using Gibbs sampling. We control the samples by providing different prefix word into the model. Those prefix words are common starting word such as ‘A’, ‘The’, ‘What’, ‘Where’.

In summary, X-LXMERT's generation capabilities rival state-of-the-art specialized generation models. In fact, our human studies demonstrate that X-LXMERT produces better results than even DM-GAN, its closest competitor. Our analysis also shows the limitations of current automatic evaluation metrics for text-to-image synthesis.

Visual Question Answering Table 1 compares models on the VQA2.0 and GQA datasets. Converting LXMERT to use grid inputs causes little or no drop, consistent with prior findings on grid features, and hugely simplifies the pipeline. X-LXMERT shows a 1.5-2.5% drop on these datasets, but its numbers remain very competitive.

Visual Reasoning Table 1 compares models on the NLVR2 dataset. Consistent with VQA, grid inputs cause a slight drop. X-LXMERT shows a roughly 2% drop but retains most of the massive jump obtained by LXMERT on NLVR2 compared to the previous generation of models. Our implementation of X-LXMERT uses a small 8 × 8 grid; increasing the grid size will likely shrink the gaps on VQA2.0, GQA and NLVR2, in line with recent findings.

Ablating X-LXMERT's sampling strategies Table 2 shows that X-LXMERT is fairly robust to the sampling strategy, particularly for image semantics, with the exception of TL→BR, which tends to produce worse results. This is interesting because TL→BR is typically the default strategy used by practitioners. However, the differences between the strategies are quite small.

Fig 3 shows qualitative examples from X-LXMERT compared to DM-GAN. While the images lack fine details, they do a reasonable job of preserving high-level semantics, as revealed by the metrics. We do not show images produced by LXMERT since they largely tend to be incomprehensible. Fig 4 shows intermediate images generated by X-LXMERT. Interestingly, the model first coarsely generates the salient objects in the caption (e.g., giraffe, monitors), followed by details and background. Fig 5 shows captions generated by our model. For each image, we sample text from X-LXMERT using Gibbs sampling and control the samples by providing different prefix words to the model; these are common starting words such as 'A', 'The', 'What', 'Where'. X-LXMERT can produce long, meaningful captions as well as questions (like the ones in VQA datasets).

Table 2: Ablating X-LXMERT’s sampling strategies

7 Conclusion

We develop a probing mechanism and find that LXMERT, a powerful vision-and-language transformer model, is not able to generate meaningful images conditioned on text. We present X-LXMERT, a unified model for image generation, captioning, QA and visual reasoning.

A Qualitative Samples

More qualitative samples In Fig 6, we show more qualitative examples of images generated by DM-GAN, reconstruction from ground-truth clusters, LXMERT, and our proposed X-LXMERT with different sampling strategies.

Figure 6: More qualitative examples of images generated by X-LXMERT.

Image Generation Process Animation

We uploaded an animated GIF file (https://s7.gifyu.com/images/Generation_Process.gif) to demonstrate the intermediate image generation process of our model. The file shows intermediate generation results of X-LXMERT with Random-200 sampling for 4 captions: 'A man dances on top of picnic tables while it snows.', 'A giraffe walking on a road with two cars approaching.', 'A full view of a home office with many computer screens.', and 'A large painted clock tower in the middle of town.'.

B LXMERT / X-LXMERT Details

For a fair comparison, we re-implement LXMERT and LXMERT with grid features. Our models have 226.5M trainable parameters, slightly fewer than the 228M of the original LXMERT implementation due to weight sharing between the MVFR and MOC heads. We use the PyTorch (Paszke et al., 2017) and Huggingface Transformers (Wolf et al., 2019) libraries for our implementation.

B.1 LXMERT Architecture

The LXMERT architecture consists of a text embedder, an object embedder, a transformer backbone, and task-specific heads.

Text embedder A text input is tokenized by the WordPiece tokenizer (Wu et al., 2016), and the special tokens CLS and EOS are added: $\{\mathrm{CLS}, w_1, \ldots, w_T, \mathrm{EOS}\}$. We use the same vocabulary as BERT (bert-base-uncased) and LXMERT, with size 30522. Text is truncated to a maximum length of 20 tokens, including the two special tokens. A 768-dimensional embedding is learned for each token and position, and the final text embedding is the sum of the token and positional embeddings.
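A minimal sketch of this text embedder using the Huggingface tokenizer is shown below; BERT's [CLS]/[SEP] stand in for the CLS/EOS tokens above, and token-type embeddings and layer norm are omitted for brevity.

```python
import torch
import torch.nn as nn
from transformers import BertTokenizer

class TextEmbedder(nn.Module):
    """Text embedder (sketch): WordPiece token + position embeddings, summed."""
    def __init__(self, vocab_size=30522, hidden_size=768, max_len=20):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        self.token_emb = nn.Embedding(vocab_size, hidden_size)
        self.pos_emb = nn.Embedding(max_len, hidden_size)
        self.max_len = max_len

    def forward(self, text):
        # [CLS] w_1 ... w_T [SEP], truncated to 20 tokens in total.
        ids = self.tokenizer(text, truncation=True, max_length=self.max_len,
                             return_tensors="pt")["input_ids"]        # (1, L)
        positions = torch.arange(ids.size(1)).unsqueeze(0)            # (1, L)
        return self.token_emb(ids) + self.pos_emb(positions)          # (1, L, 768)
```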

Object embedder An input image is resized so that its shorter side is 800 pixels and its longer side at most 1333 pixels, preserving the aspect ratio. We use a Faster R-CNN trained on Visual Genome to extract 36 bounding boxes from each image.⁴ We take the fc6 feature, which lies between the RoI-Pool layer and the final object classification head and has 2048 dimensions. This is encoded into a 768-dimensional vector followed by layer norm (Ba et al., 2016). The four bounding box coordinates $(x_0, x_1, y_0, y_1)$ are [0, 1]-normalized by the image width and height, and are likewise encoded into a 768-dimensional vector with a fully connected layer followed by layer norm. The final object embedding is the element-wise average of the object feature and the positional feature.
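A minimal sketch of the object embedder is given below; it assumes boxes in (x0, y0, x1, y1) order and omits dropout.

```python
import torch
import torch.nn as nn

class ObjectEmbedder(nn.Module):
    """Object embedder (sketch): fc6 features + normalized box coordinates."""
    def __init__(self, feat_dim=2048, hidden_size=768):
        super().__init__()
        self.feat_fc = nn.Linear(feat_dim, hidden_size)
        self.feat_ln = nn.LayerNorm(hidden_size)
        self.box_fc = nn.Linear(4, hidden_size)
        self.box_ln = nn.LayerNorm(hidden_size)

    def forward(self, fc6_feats, boxes, image_size):
        """fc6_feats: (36, 2048) RoI features; boxes: (36, 4) as (x0, y0, x1, y1)."""
        h, w = image_size
        norm_boxes = boxes / boxes.new_tensor([w, h, w, h])   # [0, 1]-normalize
        feat = self.feat_ln(self.feat_fc(fc6_feats))
        pos = self.box_ln(self.box_fc(norm_boxes))
        return (feat + pos) / 2                               # element-wise average
```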

Transformer backbone The transformer backbone of LXMERT consists of an object-relation encoder, a language encoder and a cross-modality encoder, which are composed of 9 self-attention layers (Vaswani et al., 2017), 5 self-attention layers, and 5 cross-attention layers, respectively. The self-attention layers are the same as those used in BERT, with hidden dimension 768.

Task-specific heads LXMERT is pretrained with five objectives⁵ (MLM, MVFR, MOC, ITM, QA), as explained in Sec. 3. For the MLM, MVFR, ITM and QA tasks, a task head consisting of two fully connected layers with GeLU activation (Hendrycks and Gimpel, 2016) and layer norm is trained. For the MOC task, a fully connected layer is applied on the output of the MVFR head, similar to the original object detection pipeline.⁶ For the MLM, MVFR and MOC tasks, the heads are applied to the cross-modality encoder outputs corresponding to masked tokens; for the ITM and QA tasks, the heads are applied to the CLS token.

B.2 X-LXMERT Architecture

X-LXMERT shares most components with LXMERT, except for minor modifications below.

Object embedder → Grid embedder We extract 8 × 8 grid features from the fc6 layer of Faster R-CNN by feeding the positional information of the 8 × 8 grid into the RoI-Pool layer. We then quantize these features via a nearest-neighbor search over 10,000 cluster centroids. The remaining components are the same as LXMERT's object embedder.

⁴ We use the PyTorch version (https://gitlab.com/vedanuj/vqa-maskrcnn-benchmark) instead of the Caffe version (https://github.com/peteanderson80/bottom-up-attention) used in the original implementation.

⁵ We do not use the 400 object attributes predicted by Faster R-CNN, which were used by the original implementation.

⁶ The original implementation trains a separate head for the MOC task.


B.3 Datasets

For pretraining, we use the same datasets as LXMERT: vision-and-language datasets whose images come from MS COCO (Lin et al., 2014) or Visual Genome (Krishna et al., 2016). Besides the two original captioning datasets, we also aggregate three large image question answering (image QA) datasets: VQA v2.0 (Goyal et al., 2019), the GQA balanced version (Hudson and Manning, 2019), and VG-QA. Table 3 shows statistics of the datasets. Note that X-LXMERT only uses COCO captions for the CCC task.

Table 3: Dataset statistics used in pretraining. Each image has multiple sentences/questions. ‘Cap’ is caption. ‘VG’ is Visual Genome. Since MS COCO and VG share 51K images, we list it separately to ensure disjoint image splits. This table is from LXMERT (Tan and Bansal, 2019).

B.4 Visual Vocabulary Clustering

To create the visual vocabulary, we run K-means clustering on Faster R-CNN grid features of COCO train2014 images. train2014 has 82,783 images, resulting in 8 × 8 × 82,783 ≈ 5.3M grid features. We use the FAISS (Johnson et al., 2017) library for clustering. We sample 2.6M features from the training data and run 20 iterations, which takes 2 hours.
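A minimal sketch of this clustering and quantization step with FAISS is shown below; the function names and the float32 input assumption are ours.

```python
import faiss
import numpy as np

def build_visual_vocab(grid_feats, num_clusters=10000, niter=20, sample=2_600_000):
    """K-means visual vocabulary over Faster R-CNN grid features (sketch).

    grid_feats: float32 array of shape (num_feats, dim), e.g. the
    8 x 8 x 82,783 ~= 5.3M COCO train2014 grid features.
    """
    d = grid_feats.shape[1]
    # Subsample features for clustering, as described above.
    idx = np.random.choice(len(grid_feats), size=min(sample, len(grid_feats)),
                           replace=False)
    kmeans = faiss.Kmeans(d, num_clusters, niter=niter, verbose=True)
    kmeans.train(grid_feats[idx])
    return kmeans.centroids                      # (num_clusters, dim)

def quantize(grid_feats, centroids):
    """Assign each grid feature to its nearest centroid (cluster ID)."""
    index = faiss.IndexFlatL2(centroids.shape[1])
    index.add(centroids)
    _, ids = index.search(grid_feats, 1)         # nearest-neighbor search
    return ids.squeeze(1)                        # (num_feats,) cluster IDs
```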

B.5 Training

We train LXMERT and X-LXMERT for 20 epochs with mixed precision using Apex⁷ (opt-level O1).

We use the AdamW optimizer (Loshchilov and Hutter, 2019) with $(\beta_1, \beta_2) = (0.9, 0.999)$ and learning rate 1e-5 with a 5% linear warmup schedule. We use gradient clipping with maximum norm 1. Training LXMERT takes 60 hours with batch size 1280, and training X-LXMERT takes 40 hours with batch size 920. We use 4 Titan RTX GPUs (4 × 24GB) for training both models.
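The optimization setup can be sketched as follows; torch.cuda.amp stands in for the Apex O1 mixed precision actually used, and the model is assumed to return its summed pre-training loss.

```python
import torch
from torch.optim import AdamW
from transformers import get_linear_schedule_with_warmup

def make_optimizer(model, total_steps, lr=1e-5, warmup_ratio=0.05):
    """AdamW with (0.9, 0.999) betas, 5% linear warmup, and grad clipping (sketch)."""
    optimizer = AdamW(model.parameters(), lr=lr, betas=(0.9, 0.999))
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(warmup_ratio * total_steps),
        num_training_steps=total_steps)
    scaler = torch.cuda.amp.GradScaler()
    return optimizer, scheduler, scaler

def train_step(model, batch, optimizer, scheduler, scaler):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(**batch)                      # sum of pre-training losses (assumed)
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)                     # clip on unscaled gradients
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    scaler.step(optimizer)
    scaler.update()
    scheduler.step()
```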

B.6 Finetuning

During finetuning on VQA/GQA/NLVR2, a task head consisting of two fully connected layers with GeLU activation and layer norm is trained along with the pre-trained LXMERT or X-LXMERT. For VQA/GQA, the head parameters are initialized from the pretrained QA head. We use the AdamW optimizer with learning rate 5e-4 and train LXMERT and X-LXMERT for 10 epochs on each task. Finetuning on VQA/GQA/NLVR2 takes 3/5/1 hours, respectively, on 4 Titan RTX GPUs (4 × 24GB).

C Generator Details

Our image generation system adopts the GAN (Goodfellow et al., 2014) framework and trains two networks: a generator and a discriminator.

C.1 Generator Architecture

Our generator consists of multiple residual blocks, following SNGAN. It takes (quantized) 8 × 8 grid features from Faster R-CNN as input and outputs 256 × 256 RGB images. We use a generator with 5 residual blocks, where each block bilinearly upsamples the feature map by 2. We use 32 channels with 3 × 3 kernels for every convolution layer in the residual blocks. Note that many existing generator architectures (Wang et al., 2018; Karras et al., 2019a,b) have residual blocks starting from a high channel dimension (e.g., 512, 1024) at low resolution and then gradually decrease the dimension as the feature maps are spatially upsampled. However, we found that using a fixed, small dimension for all residual blocks makes training more stable. Each residual block has a spatially-adaptive instance norm (SPADE) layer (Park et al., 2019; Huang and Belongie, 2017) that guides the residual block using the spatial information of the 8 × 8 grid features. After each spatially-adaptive instance norm, we multiply the feature maps by spatial Gaussian noise to make the model focus less on local texture, following StyleGAN (Karras et al., 2019a). We use spectral normalization after each convolution layer in the generator. Following StyleGAN-v2 (Karras et al., 2019b), we use a skip connection from each residual block to generate the final output. Our generator has 1.7M trainable parameters. The detailed architecture of our generator is illustrated in Fig. 7.
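A minimal sketch of one SPADE residual block conditioned on the 8 × 8 grid features is shown below; the hidden width of the SPADE convolutions, and the omission of the noise multiplication and of the skip-to-RGB path, are simplifications of the full architecture.

```python
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import spectral_norm

class SPADE(nn.Module):
    """Spatially-adaptive (instance) normalization conditioned on grid features (sketch)."""
    def __init__(self, channels, cond_dim, hidden=32):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.shared = nn.Sequential(nn.Conv2d(cond_dim, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, x, cond):
        # Upsample the 8 x 8 conditioning map to the current feature resolution.
        cond = F.interpolate(cond, size=x.shape[-2:], mode="nearest")
        h = self.shared(cond)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

class SPADEResBlock(nn.Module):
    """Residual block: SPADE -> ReLU -> spectral-norm conv, twice, plus a skip."""
    def __init__(self, channels, cond_dim):
        super().__init__()
        self.spade1 = SPADE(channels, cond_dim)
        self.spade2 = SPADE(channels, cond_dim)
        self.conv1 = spectral_norm(nn.Conv2d(channels, channels, 3, padding=1))
        self.conv2 = spectral_norm(nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x, cond):
        h = self.conv1(F.relu(self.spade1(x, cond)))
        h = self.conv2(F.relu(self.spade2(h, cond)))
        out = x + h
        # Each generator block bilinearly upsamples its output by 2.
        return F.interpolate(out, scale_factor=2, mode="bilinear", align_corners=False)
```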

Figure 7: Generator architecture that takes 8x8 grid visual features and generates 256x256 images.

C.2 Discriminator Architecture

The discriminator also consists of multiple residual blocks. We use a discriminator with 5 residual blocks, where each residual block downsamples the feature map by 2. We use 64 channels with 3 × 3 kernels for every convolution layer in the residual blocks, and spectral normalization after each convolution layer. In contrast to the generator, the discriminator (1) uses instance norm (Ulyanov et al., 2016) instead of adaptive instance norm, (2) does not multiply feature maps by Gaussian noise, and (3) does not use skip connections. The output of the 5 residual blocks is an 8 × 8 feature map. Our discriminator has two heads on top of this feature map: (1) an adversarial head that spatially averages the 8 × 8 feature map and predicts whether the input image comes from the original image domain, and (2) a classification head that predicts the cluster IDs of the 8 × 8 spatial layout of the input image. Our discriminator has 0.5M trainable parameters. The detailed architecture of our discriminator is illustrated in Fig. 8.

Figure 8: Discriminator architecture that takes 256x256 images.

C.3 Dataset

We train our model on the COCO train2014 split, which consists of 82,783 images.

C.4 Training

Our generator and discriminator are trained with 4 losses: (1) a hinge adversarial loss (Lim and Ye, 2017; Tran et al., 2017), (2) an AC-GAN loss (Odena et al., 2017), (3) a discriminator feature matching loss (Wang et al., 2018) and (4) a perceptual loss, following Park et al. (2019). Following pix2pixHD (Wang et al., 2018), the coefficients for the losses are (1, 1, 10, 10), respectively. The adversarial loss guides the generator to output images close to the original image domain; the remaining losses guide the generator to output images close to specific target images given the spatial layout inputs. We use ResNet-50 (He et al., 2016) for the perceptual loss. Details of the losses are given in Sec. C.5. We use the Adam optimizer (Kingma and Ba, 2015) with $(\beta_1, \beta_2) = (0, 0.999)$ and the two-time-scale update rule (Heusel et al., 2017) with learning rates of 0.0004 and 0.0001 for the generator and discriminator, respectively. We train the image generator for 60 epochs with batch size 96 on 8 NVIDIA Titan V GPUs (8 × 12GB).

C.5 Losses

In the equations below, $\hat{X}$ and $X$ refer to the generated image and the target image, respectively.

Adversarial loss

EQUATION (2): Not extracted; please refer to original document.

where $D_{\mathrm{Adv}}$ is the discriminator's adversarial head.

AC-GAN Loss

EQUATION (3): Not extracted; please refer to original document.

where $D_{\mathrm{cls}}$ is the discriminator's classification head.

Discriminator feature match loss

EQUATION (4): Not extracted; please refer to original document.

Perceptual loss

$$L^{G}_{FM\text{-}E} = \sum_{k} \frac{1}{H_k W_k C_k} \sum_{h,w,c} \mathrm{huber}\big(E_k(\hat{X}) - E_k(X)\big) \qquad (5)$$

where $E_k$ is the $k$-th residual block (conv2_x, conv3_x, conv4_x, conv5_x) of ResNet-50 (He et al., 2016).
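A sketch of this perceptual loss using torchvision's ResNet-50 is shown below; using smooth_l1_loss with mean reduction as the per-block Huber average over H, W and C is our reading of Eq. (5).

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class PerceptualLoss(nn.Module):
    """Perceptual loss of Eq. (5) (sketch): Huber distance between ResNet-50
    block features (conv2_x ... conv5_x) of the generated and target images,
    averaged over each feature map and summed over blocks."""
    def __init__(self):
        super().__init__()
        net = resnet50(pretrained=True).eval()
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.blocks = nn.ModuleList([net.layer1, net.layer2, net.layer3, net.layer4])
        for p in self.parameters():
            p.requires_grad_(False)   # the loss network stays frozen

    def forward(self, fake, real):
        loss = 0.0
        f, r = self.stem(fake), self.stem(real)
        for block in self.blocks:
            f, r = block(f), block(r)
            # smooth_l1_loss with mean reduction = Huber averaged over the map.
            loss = loss + F.smooth_l1_loss(f, r)
        return loss
```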

Total Loss

$$L^{G} = \lambda_{GAN} L^{G}_{GAN} + \lambda_{ACGAN} L^{G}_{ACGAN} + \lambda_{FM} L^{G}_{FM} + \lambda_{FM\text{-}E} L^{G}_{FM\text{-}E} \qquad (7)$$

where $(\lambda_{GAN}, \lambda_{ACGAN}, \lambda_{FM}, \lambda_{FM\text{-}E}) = (1, 1, 10, 10)$.

D Evaluation Details

D.1 Image Metrics

To calculate the image metrics, we follow prior work and randomly sample 30,000 images from the MS COCO val2014 split, sampling one caption for each image. We then generate images from these 30,000 captions for each method. We use subsets of these 30,000 captions for the automatic image evaluations.

Inception Score (IS) Following prior work, we use all 30,000 generated images. We use the OpenAI implementation⁸ to calculate IS.

Fréchet Inception Distance (FID) Following Zhu et al. (2019), we use all 30,000 generated images. We use a PyTorch port of the official implementation⁹ to calculate FID.

R-precision-easy We use all 30,000 generated images. For R-precision-easy, we sample 99 negative captions for each caption, where all negative captions correspond to different val2014 images.

R-precision-hard For each R-precision-hard category (noun/verb/color/number), we use 1000 randomly sampled captions that contain a word from that category. We then generate 9 negative captions by swapping the detected category word with another word from the same category. We use POS tagging with spaCy¹⁰ to find category words in a caption. Per-category R-precision-hard scores are presented in Table 4.
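A sketch of this hard-negative construction with spaCy is shown below; the small word lists are illustrative stand-ins for the full category vocabularies (80 COCO nouns, 64 verbs, 10 colors, 10 numbers).

```python
import random
import spacy

nlp = spacy.load("en_core_web_sm")

# Illustrative subsets of the category vocabularies.
CATEGORY_WORDS = {
    "noun": {"dog", "cat", "giraffe", "kite", "clock"},
    "color": {"red", "green", "blue", "yellow", "black"},
}
CATEGORY_POS = {"noun": "NOUN", "verb": "VERB", "color": "ADJ", "number": "NUM"}

def hard_negatives(caption, category, k=9):
    """R-precision-hard negatives (sketch): POS-tag the caption with spaCy,
    find a word from the chosen category, and swap it for other words of
    the same category."""
    doc = nlp(caption)
    vocab = CATEGORY_WORDS[category]
    for tok in doc:
        if tok.pos_ == CATEGORY_POS[category] and tok.lemma_ in vocab:
            others = sorted(vocab - {tok.lemma_})
            swaps = random.sample(others, k=min(k, len(others)))
            return [caption.replace(tok.text, swap, 1) for swap in swaps]
    return []  # caption contains no word from this category
```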

Table 4: R-precision-hard per-category scores

D.2 Human Evaluation

We use Amazon Mechanical Turk¹¹ for human evaluation.

HUMMUS score For each HUMMUS category (noun/verb/color), we use 100 randomly sampled images. We then mask out words in the same fashion as in the R-precision-hard metric. A total of 280 unique crowdworkers completed the task, with a median of 13 images annotated per worker. Per-category HUMMUS scores are presented in Table 5. Fig 9 shows a screenshot of the HUMMUS evaluation interface.

Pairwise preference For the semantic preference task, we show the caption and ask annotators 'Which image best matches the caption?'. For the fidelity preference task, we ask annotators 'Which image looks more realistic?' without providing the caption. A total of 357 unique crowdworkers completed the task, with a median of 17 annotations per worker.

Figure 9: Screenshot of HUMMUS score evaluation system
Table 5: Evaluating semantics with HUMMUS.

¹ X-LXMERT is an LXMERT with a "display server".

⁷ https://github.com/NVIDIA/apex

Figure 10: Screenshot of Semantic preference evaluation system
Figure 11: Screenshot of Fidelity preference evaluation system