Actor and Observer: Joint Modeling of First and Third-Person Videos

Authors

  • Gunnar A. Sigurdsson
  • Abhinav Gupta
  • Cordelia Schmid
  • Ali Farhadi
  • Karteek Alahari
  • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)

Abstract

Several theories in cognitive neuroscience suggest that when people interact with the world, or simulate interactions, they do so from a first-person egocentric perspective, and seamlessly transfer knowledge between the third-person (observer) and first-person (actor) views. Despite this, learning such models for human action recognition has not been achievable due to the lack of data. This paper takes a step in this direction with the introduction of Charades-Ego, a large-scale dataset of paired first-person and third-person videos, involving 112 people and 4000 paired videos. This enables learning the link between the two perspectives, actor and observer. We thereby address one of the biggest bottlenecks facing egocentric vision research, providing a link from first-person data to the abundant third-person data on the web. We use this data to learn a joint representation of first- and third-person videos with only weak supervision, and show its effectiveness for transferring knowledge from the third-person to the first-person domain.

1. Introduction

What is an action? How do we represent and recognize actions? Most current research has focused on a data-driven approach using abundantly available third-person (observer's perspective) videos. But can we really learn how to represent an action without understanding goals and intentions? Can we learn goals and intentions without simulating actions in our own mind? A popular theory in cognitive psychology, the Theory of Mind [30], suggests that humans have the ability to put themselves in each other's shoes, and this is a fundamental attribute of human intelligence. In cognitive neuroscience, the presence of activations in mirror neurons and motor regions even during passive observation suggests the same [33].

When people interact with the world (or simulate these interactions), they do so from a first-person egocentric perspective [16]. Therefore, making strides towards human-like activity understanding might require creating a link between the two worlds of data: first-person and third-person. In recent years, the field of egocentric action understanding [14, 20, 22, 27, 32, 34] has bloomed due to a variety of practical applications, such as augmented/virtual reality. While first-person and third-person data represent two sides of the same coin, these two worlds are hardly connected. Apart from philosophical reasons, there are practical reasons for establishing this connection: if we can create a link, we can use billions of easily available third-person videos to improve egocentric video understanding. Yet, there is no connection. Why is that?

The reason for the lack of a link is the lack of data! In order to establish the link between the first- and third-person worlds, we need aligned first- and third-person videos. In addition, we need a rich and diverse set of actors and actions in these aligned videos in order to generalize. As it turns out, aligned data is much harder to get. In fact, in the egocentric world, collecting a diverse set of actors, and thus a diverse action dataset, is itself a challenge that has not yet been solved. Most datasets are lab-collected and lack diversity, as they contain only a few subjects [8, 10, 27].

In this paper, we address one of the biggest bottlenecks facing egocentric vision research. We introduce a large-scale and diverse egocentric dataset, Charades-Ego, collected using the Hollywood in Homes [37] methodology. We show an overview of the data collection and the learning process in Figure 1, and present examples from the dataset in Figure 2. Our new dataset has 112 actors performing 157 different types of actions. More importantly, we have the same actors perform the same sequence of actions from both the first- and third-person perspectives. Thus, our dataset has semantically similar first- and third-person videos. These "aligned" videos allow us to take the first steps in jointly modeling actions from the first- and third-person perspectives. Specifically, our model, ActorObserverNet, aligns the two domains by learning a joint embedding in a weakly-supervised setting. We show a practical application of joint modeling: transferring knowledge from the third-person domain to the first-person domain for the task of zero-shot egocentric action recognition.
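To make the joint-embedding idea concrete, the following is a minimal PyTorch sketch of a triplet-style embedding that pulls a third-person frame toward its time-aligned first-person frame and pushes it away from a mismatched first-person frame, using only video-level pairing (no action labels) as weak supervision. The encoder architecture, margin, and names such as FrameEncoder and training_step are illustrative assumptions for this sketch, not the exact ActorObserverNet formulation described later in the paper.

```python
# Sketch of a weakly-supervised joint embedding between third-person and
# first-person frames. Assumes time-aligned video pairs: a third-person frame
# is pulled toward its temporally corresponding first-person frame (positive)
# and pushed away from a first-person frame from another time/video (negative).
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Small CNN that maps an RGB frame to an L2-normalized embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, x):
        z = self.fc(self.features(x).flatten(1))
        return nn.functional.normalize(z, dim=1)

# Separate streams for the two viewpoints (they could also share weights).
third_person_net = FrameEncoder()
first_person_net = FrameEncoder()
triplet = nn.TripletMarginLoss(margin=0.5)
opt = torch.optim.SGD(
    list(third_person_net.parameters()) + list(first_person_net.parameters()),
    lr=1e-3, momentum=0.9)

def training_step(third_frame, ego_pos, ego_neg):
    """One weakly-supervised step; inputs are (B, 3, H, W) frame batches."""
    anchor = third_person_net(third_frame)   # third-person anchor
    positive = first_person_net(ego_pos)     # time-aligned first-person frame
    negative = first_person_net(ego_neg)     # mismatched first-person frame
    loss = triplet(anchor, positive, negative)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random tensors standing in for video frames.
if __name__ == "__main__":
    b = lambda: torch.randn(4, 3, 128, 128)
    print(training_step(b(), b(), b()))
```

Once trained, either encoder maps frames from its viewpoint into the shared space, which is what allows third-person supervision (e.g., action classifiers) to be transferred to first-person video.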

Figure 1. Overview of the data collection and learning process for paired first- and third-person videos. (Figure not extracted; please refer to the original paper.)