BOLD5000, a public fMRI dataset while viewing 5000 visual images - Robotics Institute Carnegie Mellon University

Nadine Chang, John A. Pyles, Austin Marcus, Abhinav Gupta, Michael J. Tarr, and Elissa M. Aminoff
Journal Article, Scientific Data, Vol. 6, May, 2019

Abstract

Vision science, particularly machine vision, has been revolutionized by introducing large-scale image datasets and statistical learning approaches. Yet, human neuroimaging studies of visual perception still rely on small numbers of images (around 100) due to time-constrained experimental procedures. To apply statistical learning approaches in neuroscience, the number of images used in neuroimaging must be significantly increased. We present BOLD5000, a human functional MRI (fMRI) study that includes almost 5,000 distinct images depicting real-world scenes. Beyond dramatically increasing image dataset size relative to prior fMRI studies, BOLD5000 also accounts for image diversity, overlapping with standard computer vision datasets by incorporating images from the Scene UNderstanding (SUN), Common Objects in Context (COCO), and ImageNet datasets. The scale and diversity of these image datasets, combined with a slow event-related fMRI design, enable fine-grained exploration into the neural representation of a wide range of visual features, categories, and semantics. Concurrently, BOLD5000 brings us closer to realizing Marr's dream of a singular vision science: the intertwined study of biological and computer vision.
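To illustrate what a slow event-related design implies for experiment timing, the sketch below generates stimulus onset times for one run. The trial structure (1 s stimulus followed by 9 s of fixation) and the per-run trial count are assumptions for illustration, not details stated in this abstract; consult the published dataset description for the actual parameters.

```python
# Minimal sketch of a slow event-related fMRI trial schedule.
# Assumed trial structure (illustrative, not from this abstract):
#   1 s stimulus presentation + 9 s fixation = 10 s per trial.

def trial_onsets(n_trials, stim_dur=1.0, fixation_dur=9.0):
    """Return stimulus onset times (in seconds) for one run.

    Each trial occupies stim_dur + fixation_dur seconds, so onsets
    are evenly spaced with no jitter -- the hallmark of a slow
    event-related design, which lets the hemodynamic response to
    each image largely return to baseline before the next trial.
    """
    trial_len = stim_dur + fixation_dur
    return [i * trial_len for i in range(n_trials)]

# Hypothetical run with 37 images: onsets at 0 s, 10 s, 20 s, ...
onsets = trial_onsets(37)
print(onsets[:3])
```

The long inter-trial interval is what makes the design "slow": it trades scan time for cleanly separable per-image responses, which is why scaling to ~5,000 images required an unusually large scanning effort.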

BibTeX

@article{Chang-2019-121559,
  author  = {Nadine Chang and John A. Pyles and Austin Marcus and Abhinav Gupta and Michael J. Tarr and Elissa M. Aminoff},
  title   = {BOLD5000, a public fMRI dataset while viewing 5000 visual images},
  journal = {Scientific Data},
  year    = {2019},
  month   = {May},
  volume  = {6},
}