
Applying artificial vision models to human scene understanding

Elissa M. Aminoff, Mariya Toneva, Abhinav Shrivastava, Xinlei Chen, Ishan Misra, Abhinav Gupta, and Michael J. Tarr
Journal Article, Frontiers in Computational Neuroscience, Vol. 9, No. 8, February 2015

Abstract

How do we understand the complex patterns of neural responses that underlie scene understanding? Studies of the network of brain regions held to be scene-selective, namely the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area (TOS), have typically focused on single visual dimensions (e.g., size) rather than the high-dimensional feature space in which scenes are likely to be neurally represented. Here we leverage well-specified artificial vision systems to explicate a more complex understanding of how scenes are encoded in this functional network. We correlated similarity matrices within three different scene-spaces arising from: (1) BOLD activity in scene-selective brain regions; (2) behaviorally measured judgments of visually perceived scene similarity; and (3) several different computer vision models. These correlations revealed: (1) models that relied on mid- and high-level scene attributes showed the highest correlations with the patterns of neural activity within the scene-selective network; (2) NEIL and SUN, the models that best accounted for the patterns obtained from PPA and TOS, were different from the GIST model that best accounted for the pattern obtained from RSC; (3) the best-performing models outperformed behaviorally measured judgments of scene similarity in accounting for the neural data. One computer vision method, NEIL ("Never-Ending-Image-Learner"), which incorporates visual features learned as statistical regularities across web-scale numbers of scenes, showed significant correlations with neural activity in all three scene-selective regions and was one of the two models best able to account for variance in the PPA and TOS. We suggest that these results are a promising first step in explicating more fine-grained models of neural scene understanding, including developing a clearer picture of the division of labor among the components of the functional scene-selective brain network.
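
The core analysis described in the abstract is a representational-similarity-style comparison: each scene-space (neural, behavioral, or model-derived) is summarized as a pairwise similarity matrix computed over the same set of scene images, and spaces are then compared by correlating those matrices. The following is a minimal sketch of that kind of comparison, not the authors' actual pipeline; the arrays are hypothetical placeholders for fMRI response patterns and model features, and the choices of correlation distance and Spearman's rho are illustrative assumptions.

# Minimal sketch (assumed, not the authors' code) of correlating similarity
# matrices across scene-spaces. Each input array has one row per scene image.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def dissimilarity_matrix(features):
    # Pairwise dissimilarity (1 - Pearson correlation) between scene rows.
    return squareform(pdist(features, metric="correlation"))

def compare_spaces(matrix_a, matrix_b):
    # Spearman correlation between the upper triangles of two matrices.
    iu = np.triu_indices_from(matrix_a, k=1)
    rho, p = spearmanr(matrix_a[iu], matrix_b[iu])
    return rho, p

# Hypothetical data: 100 scenes with arbitrary feature dimensionalities.
rng = np.random.default_rng(0)
neural_patterns = rng.standard_normal((100, 500))  # e.g., voxel responses in a scene-selective region
model_features = rng.standard_normal((100, 512))   # e.g., features from a computer vision model

rho, p = compare_spaces(dissimilarity_matrix(neural_patterns),
                        dissimilarity_matrix(model_features))
print(f"model-brain similarity correlation: rho={rho:.3f}, p={p:.3g}")

In this framing, a model whose feature space groups scenes the way the neural responses do will yield a higher rho, which is the sense in which NEIL, SUN, and GIST are compared against the scene-selective regions above.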

BibTeX

@article{Aminoff-2015-113357,
author = {Elissa M. Aminoff and Mariya Toneva and Abhinav Shrivastava and Xinlei Chen and Ishan Misra and Abhinav Gupta and Michael J. Tarr},
title = {Applying artificial vision models to human scene understanding},
journal = {Frontiers in Computational Neuroscience},
year = {2015},
month = {February},
volume = {9},
number = {8},
}