Tutorial Data: Block-design fMRI

This dataset is a compilation of data and results for the PyMVPA Tutorial.

At the moment, the dataset is based on data for a single subject from a study published by Haxby et al. (2001). The full (raw) dataset of this study is also available. However, in contrast to the full data, this single-subject dataset has been preprocessed to a degree that should allow people without prior fMRI experience to perform meaningful analyses. Moreover, it should not require further preprocessing with external tools.

All preprocessing has been performed using tools from FSL. Specifically, the 4D fMRI timeseries has been motion-corrected by applying MCFLIRT to a skull-stripped and thresholded timeseries (non-brain voxels were zeroed out using a brain outline estimate deliberately larger than the brain, to prevent removal of edge voxels that actually cover brain tissue). The estimated motion parameters have subsequently been applied to the original (unthresholded, unstripped) timeseries. For simplicity, the T1-weighted anatomical image has also been projected and resampled into the subject’s functional space.

For surface-based mapping two additional archives are distributed, called ‘tutorial_data_surf_minimal-0.1.tar.gz’ and ‘tutorial_data_surf_complete-0.1.tar.gz’. Both contain surfaces that were reconstructed using FreeSurfer and preprocessed using AFNI and SUMA. The ‘minimal’ archive contains the minimal set of surfaces needed to run doc/examples/searchlight_surf.py. The ‘complete’ archive contains the full output from FreeSurfer’s recon-all and the full output from the anatomical preprocessing by the alignment script in bin/pymvpa2-prep-afni-surf. This output includes left, right, and merged (left combined with right) hemispheres at various resolutions. Surfaces produced by the alignment script are stored in ASCII format and can be read by the module mvpa2/misc/nibabel/surf_fs_asc. The surfaces can be visualized using AFNI’s SUMA (SUrface MApper).
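The ASCII surface format is plain text and simple to inspect by hand: a comment line, a line with the vertex and face counts, then one ``x y z 0`` line per vertex followed by one ``i j k 0`` line per face. The following stdlib-only Python sketch parses a made-up single-triangle surface in this layout (the inline sample is illustrative, not real data, and for actual use the surf_fs_asc module mentioned above is the supported reader):

```python
# A made-up, minimal FreeSurfer-ASCII-style surface: one triangle.
sample = """\
#!ascii version of a surface
3 1
0.0 0.0 0.0 0
1.0 0.0 0.0 0
0.0 1.0 0.0 0
0 1 2 0
"""

lines = sample.splitlines()
# Second line carries the vertex and face counts.
n_vertices, n_faces = (int(v) for v in lines[1].split())
# Vertex lines: x, y, z coordinates (trailing column ignored).
vertices = [tuple(float(v) for v in l.split()[:3])
            for l in lines[2:2 + n_vertices]]
# Face lines: three vertex indices (trailing column ignored).
faces = [tuple(int(v) for v in l.split()[:3])
         for l in lines[2 + n_vertices:2 + n_vertices + n_faces]]
```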

Terms Of Use

The original authors of Haxby et al. (2001) hold the copyright of this dataset and made it available under the terms of the Creative Commons Attribution-Share Alike 3.0 license. The PyMVPA authors have preprocessed the data and released this derivative work under the same licensing terms.

Download

Tarballs are available at:

Tarball Content

data/

Contains data files:

bold.nii.gz
The motion-corrected 4D timeseries (1452 volumes of 40 x 64 x 64 voxels, corresponding to a voxel size of 3.5 x 3.75 x 3.75 mm and a volume repetition time of 2.5 seconds). The timeseries contains all 12 runs of the original experiment, concatenated into a single file. Please note that the timeseries signal is not detrended.
bold_mean.nii.gz
The voxel-wise average image of bold.nii.gz (averaged over time). Generated using AFNI’s 3dTstat.
bold_mc.par
The motion correction parameters. This is a six-column text file with, for each volume, three rotation parameters followed by three translation parameters. This information can be used, e.g., as additional regressors for motion-aware timeseries detrending.
mask*.nii.gz
A number of mask images in the subject’s functional space, including a full-brain mask.
attributes.txt
A two-column text file with the stimulation condition and the corresponding experimental run for each volume in the timeseries image. The labels are given in literal form (e.g. ‘face’).
anat.nii.gz
An anatomical image of the subject, projected and resampled into the same space as the functional images, hence also of the same spatial resolution. The image is not skull-stripped.
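The plain-text files above are simple to work with directly. The following stdlib-only Python sketch illustrates their layout and some bookkeeping on the timeseries dimensions. All literal sample lines are made-up values for illustration, not actual file contents, and the per-run split assumes all 12 runs are equally long:

```python
# Bookkeeping for the concatenated timeseries (numbers from the
# bold.nii.gz description above).
n_volumes, n_runs, tr = 1452, 12, 2.5
vols_per_run = n_volumes // n_runs   # volumes per experimental run
total_duration_s = n_volumes * tr    # total scan time in seconds

# bold_mc.par: six whitespace-separated columns per volume, three
# rotation parameters followed by three translation parameters.
# (These two lines are made-up example values.)
mc_sample = """\
0.001 -0.002 0.000 0.012 -0.034 0.051
0.002 -0.001 0.001 0.010 -0.030 0.048
"""
mc = [[float(v) for v in line.split()] for line in mc_sample.splitlines()]
rotations = [row[:3] for row in mc]
translations = [row[3:] for row in mc]

# attributes.txt: one line per volume with the literal condition label
# and the experimental run (chunk) number. (Made-up example lines.)
attr_sample = """\
rest 0
face 0
house 0
rest 1
"""
targets, chunks = [], []
for line in attr_sample.splitlines():
    label, run = line.split()
    targets.append(label)
    chunks.append(int(run))
```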
freesurfer/

Data used for and generated by FreeSurfer’s recon-all. Only included in the tutorial_data_surf_complete archive.

anat_nii.nii
A high-resolution version of the anatomical image that was used for surface reconstruction with FreeSurfer’s recon-all.
subj1

Contains the output from FreeSurfer’s recon-all. The command used to generate the output was:

recon-all -subject subj1 -i anat_nii.nii -all -cw256

Note that the environmental variable SUBJECTS_DIR was set to point to the current working directory (freesurfer). The version of FreeSurfer used for reconstruction is: freesurfer-Linux-centos4_x86_64-stable-pub-v5.0.0.

subj1/surf/SUMA
Contains the output from AFNI’s @SUMA_Make_Spec_FS program that converts FreeSurfer’s output to AFNI-readable files. It also contains surfaces resampled using MapIcosahedron. For the command used to generate these files, see the suma_surfaces description below.
results/
Some analyses presented in the tutorial take non-negligible time to compute. Therefore, we provide the results of some analyses so they can simply be loaded while following the tutorial (the commands to load them are embedded in the code snippets throughout the tutorial and prefixed with # alt:).
start_tutorial_session.sh
Helper shell script to start an interactive session within IPython to proceed with the tutorial code.
suma_surfaces/

Surfaces generated by the AFNI / SUMA wrapper script in bin/pymvpa2-prep-afni-surf. Most files are available only in the tutorial_data_surf_complete archive. The minimal set for running doc/examples/searchlight_surf.py is provided in the tutorial_data_surf_minimal archive. These surfaces are aligned to bold_mean.nii.gz as indicated by the infix _al in the file name. The contents of this directory can be generated with:

PyMVPAROOT/bin/pymvpa2-prep-afni-surf \
--refdir suma_surfaces \
--surfdir data/freesurfer/subj1/surf \
--epivol data/bold_mean.nii.gz

where PyMVPAROOT is the directory where PyMVPA is installed. Using this script requires that FreeSurfer, AFNI and SUMA are installed. The prefix icoXX_Yh indicates that the surface was generated using AFNI’s MapIcosahedron with XX linear divisions (the ld parameter) and represents the Y hemisphere (l=left, r=right, m=merged). Such a surface has 10*XX**2+2 nodes and 20*XX**2 triangles for a single hemisphere, and twice that number for merged hemispheres. Merged hemispheres contain first the nodes of the left hemisphere, followed by the nodes of the right hemisphere. SUMA .spec files that define several views are also provided for these surfaces. Files were generated using FreeSurfer version stable5 and AFNI version AFNI_2011_12_21_1014 running on a Mac with Mac OS 10.7.5.
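The node and triangle counts follow from subdividing each of the base icosahedron’s 20 triangular faces into XX**2 smaller triangles, giving 10*XX**2 + 2 nodes and 20*XX**2 triangles per hemisphere. A small Python helper makes the formula concrete (the function name is illustrative, not part of PyMVPA):

```python
def ico_counts(ld, merged=False):
    """Node and triangle counts of a MapIcosahedron mesh with
    `ld` linear divisions; merged hemispheres double both counts."""
    nodes = 10 * ld ** 2 + 2
    triangles = 20 * ld ** 2
    factor = 2 if merged else 1
    return factor * nodes, factor * triangles

# ld=1 recovers the plain icosahedron: 12 nodes, 20 triangles.
print(ico_counts(1))
# A typical ico64 hemisphere and the corresponding merged mesh.
print(ico_counts(64))        # (40962, 81920)
print(ico_counts(64, True))  # (81924, 163840)
```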

Instructions

>>> from mvpa2.suite import *
>>> datapath = os.path.join(pymvpa_datadbroot, 'tutorial_data',
...                         'tutorial_data', 'data')
>>> attrs = SampleAttributes(os.path.join(datapath, 'attributes.txt'))
>>> ds = fmri_dataset(samples=os.path.join(datapath, 'bold.nii.gz'),
...                   targets=attrs.targets, chunks=attrs.chunks,
...                   mask=os.path.join(datapath, 'mask_brain.nii.gz'))
>>> print ds.shape
(1452, 39912)
>>> print ds.a.voxel_dim
(40, 64, 64)
>>> print ds.a.voxel_eldim
(3.5, 3.75, 3.75)
>>> print ds.a.mapper
<Chain: <Flatten>-<StaticFeatureSelection>>
>>> print ds.uniquetargets
['bottle' 'cat' 'chair' 'face' 'house' 'rest' 'scissors' 'scrambledpix'
 'shoe']
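The shapes printed above can be cross-checked against the volume geometry: the brain mask keeps 39912 features out of the 40 x 64 x 64 voxels in each volume. A stdlib-only calculation:

```python
# Voxels per volume, from the voxel dimensions reported above.
total_voxels = 40 * 64 * 64          # 163840
# Features surviving the mask, from the dataset shape reported above.
masked_voxels = 39912

# Fraction of the volume covered by the full-brain mask.
fraction = masked_voxels / total_voxels
print(total_voxels, round(100 * fraction, 1))  # 163840 24.4
```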

References

Haxby, J., Gobbini, M., Furey, M., Ishai, A., Schouten, J., and Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293, 2425–2430.

Changelog

0.3*
  • Added tutorial_data_surf_{complete,minimal}-0.1.tar.gz descriptions to this README file.
0.3
  • Removed tutorial_lib.py which is superseded by using mvpa2.tutorial_suite
0.2
  • Updated tutorial code to work with PyMVPA 0.6
  • Removed dependency on PyNIfTI and use NiBabel instead.
0.1
  • Initial release.
