fMRI data from "Attention reduces spatial uncertainty in human ventral temporal cortex"
Questions, comments? E-mail Kendrick Kay

Introduction

This web site provides fMRI data from the experiments described in the following paper:

Kay, K.N., Weiner, K.S., & Grill-Spector, K. Attention reduces spatial uncertainty in human ventral temporal cortex. Current Biology (2015).

If you use these data in your research, please cite the above paper. You might also be interested in other available datasets as well as modeling code.

Terms of use: The content provided by this web site is licensed under a Creative Commons Attribution 3.0 Unported License. You are free to share and adapt the content as you please, under the condition that you cite the appropriate manuscript.

Example stimulus movies

Here are movies showing examples of what subjects saw during each experiment. Refer to the paper for further details on the stimuli and experimental design.

pRF-estimation experiment

download high-res movie

task experiment

download high-res movie

interleaved-task experiment

download high-res movie

peripheral-digits experiment (central-digit run)

download high-res movie

peripheral-digits experiment (peripheral-digit run)

download high-res movie

Basic description

Three subjects participated in four separate experiments: (1) pRF-estimation experiment, (2) task experiment, (3) interleaved-task experiment, and (4) peripheral-digits experiment. Functional MRI data were collected from human visual cortex at 3T using a gradient-echo sequence with isotropic 2-mm voxels and a TR (sampling rate) of 2.006553 seconds. The data were acquired in volumes with dimensions 80 voxels x 80 voxels x 26 voxels. The data have been pre-processed, including removal of the initial volumes of each run (to avoid magnetization effects), slice-time correction, motion correction, and spatial undistortion based on fieldmap measurements. Additionally, the data have been analyzed and denoised using the GLM implemented in GLMdenoise, yielding beta weights (response amplitudes) for further analysis. Voxels are identified using 1-based, column-major (MATLAB-style) linear indices into the 3D volume; for example, voxel 82 is located at matrix location (2,2,1).
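
Below is a minimal loading sketch in Python (an illustration, not part of the distribution), assuming the .mat files can be read with scipy.io.loadmat (i.e., they are not saved in MATLAB v7.3/HDF5 format). The voxel index is just the example from the paragraph above.

    import numpy as np
    from scipy.io import loadmat

    # Load one subject's data file (variable names are detailed further below).
    d = loadmat('dataset01.mat', squeeze_me=True)

    # Voxel indices are 1-based, column-major (MATLAB-style) linear indices into
    # the 80 x 80 x 26 volume. For example, voxel 82 maps to matrix location (2,2,1):
    x, y, z = np.unravel_index(82 - 1, (80, 80, 26), order='F')
    print(x + 1, y + 1, z + 1)  # prints: 2 2 1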

The four experiments were conducted in different scan sessions. (The first experiment actually reflects several scan sessions of data that have been averaged together.) All of the data for a given subject have been co-registered (thus, individual voxels can be compared across the four experiments). Based on independent localizer data, voxels in each subject have been assigned to the following regions of interest (ROIs): left- and right-hemisphere V1, V2, V3, hV4, IOG, pFus, and mFus. After fitting the GLM in each experiment, beta weights were divided by the mean signal level in each voxel and multiplied by 100, thereby converting the beta weights to units of percent BOLD change.
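
For concreteness, the conversion to percent BOLD change amounts to the following (the distributed beta weights are already in these units; the numbers here are made up purely for illustration):

    beta_raw = 13.5       # hypothetical GLM amplitude in raw scanner units
    mean_signal = 900.0   # hypothetical mean signal level for the voxel (cf. 'meanvols' below)
    beta_pct = 100.0 * beta_raw / mean_signal  # 1.5, i.e., 1.5% BOLD change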

For the pRF-estimation experiment, there are 196 beta weights ordered in four groups of 7*7=49, where the first group consists of the phase-scrambled faces, the second group consists of the small faces, the third group consists of the medium faces, and the fourth group consists of the large faces. In each group, the order is left to right and then top to bottom.
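
As a sketch of this ordering in Python (continuing from the loading example above; the ROI and voxel choices are arbitrary):

    # 'betamn' (final beta weights; described below) is a 14 x 4 cell matrix in
    # which each element is voxels x amplitudes.
    prf = np.asarray(d['betamn'][0, 0])  # ROI 1, pRF-estimation experiment (voxels x 196)

    # Reshape one voxel's 196 betas into [group, row, column], where group runs over
    # (phase-scrambled, small, medium, large), columns run left to right, and rows
    # run top to bottom.
    grid = prf[0].reshape(4, 7, 7)
    large_top_right = grid[3, 0, 6]  # response to a large face at the top-right grid position

The same left-to-right, top-to-bottom convention applies to the 5*5 grids in the task, interleaved-task, and peripheral-digits experiments, so those betas can be reshaped analogously (e.g., to 3 x 5 x 5 or 2 x 5 x 5).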

For the task experiment, there are 75 beta weights ordered in three groups of 5*5=25, where the first group consists of the medium faces under the digit task, the second group consists of the medium faces under the dot task, and the third group consists of the medium faces under the face task. In each group, the order is left to right and then top to bottom.

For the interleaved-task experiment, the beta weights have the same order as in the task experiment. However, there is additionally a set of weights associated with finite impulse response (FIR) regressors that capture any potential cue-related responses. These FIR regressors start at the onset of the cue and extend 10 time points (TRs) after the onset (thus, there are a total of 11 regressors). There are three sets of FIR regressors, one for each cue type (digit, dot, face).

For the peripheral-digits experiment, there are 50 beta weights ordered in two groups of 5*5=25, where the first group consists of the medium faces under the central-digit task and the second group consists of the medium faces under the peripheral-digit task. In each group, the order is left to right and then top to bottom. Note that unlike the task and interleaved-task experiments, the stimuli in the peripheral-digits experiment are somewhat different across the two tasks: in the central-digit task, there is a stream of digits at the center of the display, whereas in the peripheral-digit task, there is a stream of digits to the left of the center of the display. This should be taken into account when interpreting the beta weights from this experiment.

Note that the same set of voxels is extracted across the four experiments. However, due to missing data at the edges of the scanned volume, some voxels may have NaN values in one or more of the experiments.
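
One way to handle this (a sketch, continuing from the examples above) is to restrict analyses to voxels with complete data in all four experiments:

    # Keep only voxels in a given ROI that have non-NaN betas in every experiment.
    roi = 0  # hypothetical ROI index (0-based into the 14 ROIs)
    n_vox = np.asarray(d['betamn'][roi, 0]).shape[0]
    ok = np.ones(n_vox, dtype=bool)
    for expt in range(4):
        ok &= ~np.isnan(np.asarray(d['betamn'][roi, expt])).any(axis=1)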

Files available for download

dataset01.mat, dataset02.mat, dataset03.mat - Data from subjects 1, 2, and 3, respectively. The contents of the files are detailed below.

dataset01_t1.nii.gz, dataset02_t1.nii.gz, dataset03_t1.nii.gz - T1-weighted whole-brain anatomical volumes for subjects 1, 2, and 3, respectively.

conimages.mat - A file containing the stimuli represented as contrast images. The contents of this file are detailed below.

Contents of the data files

Contents of 'dataset01.mat', 'dataset02.mat', and 'dataset03.mat':

'roivol' is 80 x 80 x 26 with integers between 0 and 14 indicating the ROI assignment. 0 means no ROI assignment.

'roilabels' is a 1 x 14 cell vector of strings with ROI names. The ROIs are ordered with left hemisphere ROIs first and right hemisphere ROIs second.

'vxs' is a 1 x 14 cell vector. The ith element indicates which voxels are in the ith ROI, and is a vector of indices into the 3D volume.
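
If the files follow this description, the indices in 'vxs' should pick out exactly the voxels carrying the corresponding label in 'roivol'. A sketch of that consistency check (continuing from the loading example above; whether it passes depends on the labeling convention, which we have assumed here):

    roivol = d['roivol']            # 80 x 80 x 26 integer volume
    flat = roivol.ravel(order='F')  # column-major flattening to match the 1-based indices
    i = 1                           # hypothetical ROI number (1 through 14)
    idx = np.asarray(d['vxs'][i - 1]).ravel().astype(int) - 1
    assert np.all(flat[idx] == i)   # every listed voxel carries ROI label i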

'meanvols' is a 1 x 4 cell vector with the mean of the pre-processed functional volumes for each of the four experiments. The values reflect raw scanner units. Note that in each experiment, voxels for which a complete set of data is not available (e.g. due to motion) are assigned a value of 0.

'hrfs' is a 1 x 4 cell vector with the HRF estimate for each of the four experiments. The first time point coincides with stimulus onset and subsequent time points occur at the data sampling rate (TR). Each HRF is scaled such that the peak of the HRF is 1. Note that the GLM analysis of the interleaved-task experiment used the HRF estimate from the task experiment; thus, the second and third HRF estimates are identical in 'hrfs'.
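
A sketch of reconstructing the time axis for an HRF estimate (continuing from the loading example above; the TR value comes from the description earlier on this page):

    hrf = np.asarray(d['hrfs'][0]).ravel()  # HRF estimate for the pRF-estimation experiment
    t = np.arange(hrf.size) * 2.006553      # seconds, with t = 0 at stimulus onset
    # Given the scaling described above, hrf.max() should equal 1.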

'betas' is a 14 x 4 cell matrix with different bootstraps of beta weights. The rows correspond to different ROIs; the columns correspond to the four experiments. Each element is voxels x amplitudes x bootstraps.

'betamn' is a 14 x 4 cell matrix with final beta weights. Each element is voxels x amplitudes and is obtained by taking the median across the bootstraps in 'betas'.

'betase' is a 14 x 4 cell matrix with error bars on beta weights. Each element is voxels x amplitudes and is obtained by calculating half of the 68% range of the bootstraps in 'betas'.
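
If desired, 'betamn' and 'betase' should be reproducible from the bootstraps in 'betas' along these lines (a sketch, continuing from the loading example above; the central 68% range corresponds to the 16th to 84th percentiles):

    b = np.asarray(d['betas'][0, 0])  # voxels x amplitudes x bootstraps for one ROI/experiment
    mn = np.median(b, axis=2)         # should match d['betamn'][0, 0]
    se = (np.percentile(b, 84, axis=2) - np.percentile(b, 16, axis=2)) / 2  # cf. d['betase'][0, 0]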

'cuebetas' is a 14 x 4 cell matrix with different bootstraps of cue-related beta weights. Each element is voxels x time-points x cue-type x bootstraps. (Note that only the third column has values since cue-related responses were estimated only for the third experiment.)

'cuebetamn' is a 14 x 4 cell matrix with final cue-related beta weights. Each element is voxels x time-points x cue-type and is obtained by taking the median across the bootstraps in 'cuebetas'.

'cuebetase' is a 14 x 4 cell matrix with error bars on cue-related beta weights. Each element is voxels x time-points x cue-type and is obtained by calculating half of the 68% range of the bootstraps in 'cuebetas'.
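
As an example of working with the cue-related weights (a sketch, continuing from the loading example above; the interleaved-task experiment is the third column, and cue types are ordered digit, dot, face as described earlier):

    import matplotlib.pyplot as plt

    cue = np.asarray(d['cuebetamn'][0, 2])  # ROI 1: voxels x 11 time points x 3 cue types
    t = np.arange(cue.shape[1]) * 2.006553  # seconds from cue onset (one point per TR)
    plt.plot(t, np.nanmean(cue[:, :, 0], axis=0))  # ROI-average time course for the digit cue
    plt.xlabel('Time from cue onset (s)')
    plt.ylabel('Cue-related response (beta weight)')
    plt.show()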

Contents of the stimulus file

Contents of 'conimages.mat':

'conimages' is a 1 x 2 cell vector with contrast images that reflect the spatial extent of the stimuli used in the experiments. These contrast images were used for modeling purposes in the study. The contrast images have a resolution of 800 pixels x 800 pixels, corresponding to 12.5 degrees of visual angle. The format is double and the range of values is [0,1]. The first element of 'conimages' contains contrast images for the pRF-estimation experiment. The second element of 'conimages' contains contrast images for the task, interleaved-task, and peripheral-digits experiments; note that we include only 25 contrast images, as these contrast images are the same across the different tasks in these experiments.
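
A sketch of loading the contrast images and converting between pixels and visual angle (continuing in Python; the degrees-per-pixel value follows directly from the numbers above):

    stim = loadmat('conimages.mat', squeeze_me=True)
    prf_con, shared_con = stim['conimages']  # element 1: pRF experiment; element 2: the 25 shared images
    deg_per_px = 12.5 / 800                  # 0.015625 degrees of visual angle per pixel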