A Diagram is Worth a Dozen Images

Authors

  • Aniruddha Kembhavi
  • Mike Salvato
  • Eric Kolve
  • Minjoon Seo
  • Hannaneh Hajishirzi
  • Ali Farhadi
  • ECCV 2016

Abstract

Diagrams are common tools for representing complex concepts, relationships and events, often when it would be difficult to portray the same information with natural images. Understanding natural images has been extensively studied in computer vision, while diagram understanding has received little attention. In this paper, we study the problem of diagram interpretation, the challenging task of identifying the structure of a diagram and the semantics of its constituents and their relationships. We introduce Diagram Parse Graphs (DPG) as our representation to model the structure of diagrams. We define syntactic parsing of diagrams as learning to infer DPGs for diagrams and study semantic interpretation and reasoning of diagrams in the context of diagram question answering. We devise an LSTM-based method for syntactic parsing of diagrams and introduce a DPG-based attention model for diagram question answering. We compile a new dataset of diagrams with exhaustive annotations of constituents and relationships for about 5,000 diagrams and 15,000 questions and answers. Our results show the significance of our models for syntactic parsing and question answering in diagrams using DPGs.

1 Introduction

For thousands of years, visual illustrations have been used to depict the lives of people, animals, their environment, and major events. Archaeological discoveries have unearthed cave paintings showing lucid representations of hunting, religious rites, communal dancing, burial, etc. From ancient rock carvings and maps, to modern infographics and 3-D visualizations, to diagrams in science textbooks, the set of visual illustrations is very large, diverse and ever-growing, constituting a considerable portion of visual data. These illustrations often represent complex concepts, such as events or systems, that are otherwise difficult to portray in a few sentences of text or a natural image (Figure 1).

Fig. 1. The space of visual illustrations is very rich and diverse. The top palette shows the inter-class variability for diagrams in our new diagram dataset, AI2D. The bottom palette shows the intra-class variation for the Water Cycles category.

While understanding natural images has been a major area of research in computer vision, understanding rich visual illustrations has received scant attention. From a computer vision perspective, these illustrations are inherently different from natural images and offer a unique and interesting set of problems.

Since visual illustrations are purposefully designed to express information, they typically suppress irrelevant signals such as background clutter, intricate textures and shading nuances. This often makes the detection and recognition of individual elements inherently different from that of their counterparts, objects, in natural images. On the other hand, visual illustrations may depict complex phenomena and higher-order relations between objects (such as temporal transitions, phase transformations and inter-object dependencies) that go well beyond what a single natural image can convey. For instance, one might struggle to find natural images that compactly represent the phenomena seen in some grade school science diagrams, as shown in Figure 1. In this paper, we define the problem of understanding visual illustrations as identifying visual entities and their relations as well as establishing semantic correspondences to real-world concepts.

The characteristics of visual illustrations also afford opportunities for deeper reasoning than provided by natural images. Consider the food web in Figure 1, which represents several relations such as foxes eating rabbits and rabbits eating plants. One can further reason about higher-order relations between entities, such as the effect on the population of foxes caused by a reduction in the population of plants. Similarly, consider the myriad phenomena displayed in a single water cycle diagram in Figure 1. Some of these phenomena are shown to occur on the surface of the earth while others occur either above or below the surface. The main components of the cycle (e.g. evaporation) are labeled and the flow of water is displayed using arrows. Reasoning about these objects and their interactions in such rich scenes provides many exciting research challenges.
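To make this kind of higher-order reasoning concrete, a food web can be modeled as a small directed graph over which such questions become simple traversals. The Python sketch below is illustrative only; the species and the propagation rule are assumptions for exposition, not part of the paper or the AI2D dataset.

```python
# Illustrative sketch: higher-order reasoning over a food-web graph.
# The entities and the propagation rule are assumptions for exposition,
# not the paper's method.

# Directed edges (eater, eaten), i.e. "fox eats rabbit".
food_web = {("fox", "rabbit"), ("rabbit", "plant")}

def consumers_of(species):
    """Return every species that directly eats `species`."""
    return {eater for (eater, eaten) in food_web if eaten == species}

def affected_by_decline(species):
    """Transitively collect species whose population may shrink when
    `species` declines (their food source, direct or indirect, is lost)."""
    affected, frontier = set(), {species}
    while frontier:
        nxt = set()
        for s in frontier:
            for eater in consumers_of(s):
                if eater not in affected:
                    affected.add(eater)
                    nxt.add(eater)
        frontier = nxt
    return affected

# A drop in plants propagates up the chain: rabbits, then foxes.
print(affected_by_decline("plant"))  # {'rabbit', 'fox'}
```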

In this paper, we address the problem of diagram interpretation and reasoning in the context of science diagrams, defined as the two tasks of Syntactic parsing and Semantic interpretation. Syntactic parsing involves detecting and recognizing constituents and their syntactic relationships in a diagram. This is most analogous to the problem of scene parsing in natural images. The wide variety of diagrams as well as the large intra-class variation (Figure 1 shows several varied images depicting a water cycle) make this step very challenging. Semantic interpretation is the task of mapping constituents and their relationships to semantic entities and events (real-world concepts). For example, an arrow in a food chain diagram typically corresponds to the concept of consumption, arrows in water cycles typically refer to phase changes, and arrows in a planetary diagram often refer to rotatory motion. This is a challenging task given the inherent ambiguities in the mapping functions. Hence we study it in the context of diagram question answering.
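The DPG representation can be made concrete as a typed graph: nodes for detected constituents and edges for their relationships. The following Python sketch is a minimal illustration; the constituent and relationship type names are assumptions for exposition, not the paper's exact ontology.

```python
# Minimal sketch of a Diagram Parse Graph (DPG). The constituent and
# relationship type names are illustrative assumptions, not the paper's
# exact schema.
from dataclasses import dataclass, field

@dataclass
class Constituent:
    """A node in the DPG: a detected diagram element."""
    id: int
    kind: str            # e.g. "blob", "text", "arrow"
    box: tuple           # (x, y, w, h) in image coordinates
    text: str = ""       # OCR output for text constituents

@dataclass
class Relationship:
    """An edge in the DPG linking two constituents."""
    kind: str            # e.g. "intra-object label", "arrow connects"
    src: int             # id of the source constituent
    dst: int             # id of the destination constituent

@dataclass
class DPG:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

# A tiny water-cycle fragment: a text label tied to a blob.
dpg = DPG(
    nodes=[Constituent(0, "blob", (120, 40, 60, 30)),
           Constituent(1, "text", (30, 10, 80, 15), text="evaporation")],
    edges=[Relationship("intra-object label", src=1, dst=0)],
)
```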

Fig. 2. An overview of the Dsdp-Net solution to inferring DPGs from diagrams. The LSTM-based network exploits global constraints such as overlap, coverage, and layout to select a subset of relations amongst thousands of candidates to construct a DPG.
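As a rough illustration of the selection idea in the caption, the sketch below scores each candidate relation with an LSTM whose recurrent state can carry context across candidates, so a keep/discard decision can depend on what has already been seen. The feature choice, dimensions and thresholding are assumptions for exposition, not the Dsdp-Net architecture itself.

```python
# Hedged sketch: an LSTM that sequentially scores candidate relations.
# Features, sizes, and the keep/discard head are illustrative assumptions.
import torch
import torch.nn as nn

class CandidateSelector(nn.Module):
    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        # The recurrent state summarizes candidates seen so far, letting
        # each score reflect global cues such as overlap and coverage.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)   # keep/discard logit

    def forward(self, cand_feats):
        # cand_feats: (1, num_candidates, feat_dim), one feature vector
        # per candidate relation (geometry, overlap, layout cues, ...).
        h, _ = self.lstm(cand_feats)
        return self.score(h).squeeze(-1)    # (1, num_candidates) logits

selector = CandidateSelector()
feats = torch.randn(1, 1000, 32)            # 1000 candidate relations
keep = torch.sigmoid(selector(feats)) > 0.5  # selected subset mask
```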
Fig. 3. An overview of the Dqa-Net solution to diagram question answering. The network encodes the DPG into a set of facts, learns to attend to the most relevant fact given a question, and then answers the question.
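The attention step described in the caption can be sketched in a few lines: weight each DPG-derived fact by its similarity to the question, then score candidate answers against the attended fact. The dimensions and the dot-product scoring rule are assumptions for illustration, not the exact Dqa-Net formulation.

```python
# Hedged sketch of attention over DPG facts for question answering.
# Encodings are random stand-ins; in practice they would come from
# learned sentence encoders (an assumption, not the paper's exact model).
import torch
import torch.nn.functional as F

d = 64
facts = torch.randn(10, d)     # encodings of 10 facts from the DPG
question = torch.randn(d)      # encoding of the question
answers = torch.randn(4, d)    # encodings of 4 candidate answers

# Attention: weight each fact by its similarity to the question.
attn = F.softmax(facts @ question, dim=0)   # (10,) attention weights
context = attn @ facts                      # (d,) attended fact vector

# Pick the answer most compatible with the attended fact.
scores = answers @ context                  # (4,) answer scores
best = scores.argmax().item()
```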