
example4

PURPOSE

% Example 4: Use GLMdenoise, allowing voxel-specific HRFs

SYNOPSIS

This is a script file.

DESCRIPTION

% Example 4: Use GLMdenoise, allowing voxel-specific HRFs

SOURCE CODE

%% Example 4: Use GLMdenoise, allowing voxel-specific HRFs

%% Download dataset (if necessary) and add GLMdenoise to the MATLAB path

setup;

%% Load in the data

% Load in the data
load('exampledataset.mat');

% Check the workspace
whos
%%
0015 
%% Outline the strategy

% Fitting the finite impulse response (FIR) model is at one end of the spectrum of
% allowing flexibility in estimated hemodynamic response functions (HRFs).  However,
% there is a real potential for the FIR model to overfit, especially when there is a
% large number of conditions.  A less aggressive strategy is to allow each voxel
% (or ROI) to have its own flexible HRF but enforce the constraint that the same HRF
% applies to different conditions (thus, the conditions differ only with respect
% to their associated beta weights).
%
% The default behavior of GLMdenoisedata is to optimize the HRF to best fit the
% top 50 voxels in the dataset.  Suppose, however, that we want to instead
% optimize the HRF on a voxel-by-voxel basis.  One challenge in this approach
% is computational time, as an iterative fitting procedure must be performed
% for each voxel in the dataset.  The other challenge is how this can coexist
% with the denoising technique implemented in GLMdenoise.
%
% Our strategy is as follows.  First, we will make an initial call to GLMdenoise,
% assuming a canonical HRF (which is computed based on the stimulus duration and
% the TR).  The point of this initial call is to learn the noise regressors
% (and the specific number of noise regressors) that result in good cross-validation
% performance.  Then, we will re-analyze the data voxel-by-voxel, including the noise
% regressors learned in the first step and tailoring the HRF on a voxel-by-voxel basis.
%% Step 1: Initial call to GLMdenoise to learn the noise regressors

% Compute a canonical HRF.
hrf = getcanonicalhrf(stimdur,tr)';

% Make the initial call to GLMdenoise.  We indicate that the canonical HRF is to be
% assumed.  We also turn off bootstrapping as it is unnecessary here.
results = GLMdenoisedata(design,data,stimdur,tr, ...
            'assume',hrf,struct('numboots',0), ...
            'example4figures');

% Extract the noise regressors based on the results.  Note that results.pcnum is
% the number of noise regressors that was selected by GLMdenoisedata.m.
noisereg = cellfun(@(x) x(:,1:results.pcnum),results.pcregressors,'UniformOutput',0);

% Inspect the dimensionality of the noise regressors.
noisereg
%%
0058 
%% Step 2: Re-analyze the data, tailoring the HRF voxel-by-voxel

% Define some useful constants.
xyzsize = [64 64 4];  % the XYZ dimensions of the dataset
numcond = 35;         % the number of conditions in this dataset

% Define an options struct.  This specifies (1) that the noise regressors determined
% above will serve as extra regressors in the GLM, (2) that the HRF estimated
% from the data should always be used (even if it is very unlike the initial seed),
% and (3) that we want to suppress text output (since many voxels will be analyzed
% in a loop).
opt = struct('extraregressors',{noisereg},'hrfthresh',-Inf,'suppressoutput',1);

% Initialize the results.
hrfs = zeros([xyzsize length(hrf)],'single');
betas = zeros([xyzsize numcond],'single');
R2 = zeros(xyzsize,'single');
% Loop over voxels.  Note: The following loop may take a long time to execute.
% It can be sped up using parfor or cluster-computing solutions, but here we
% use a simple for-loop for simplicity.
cache = [];
for xx=1:xyzsize(1)
  fprintf('#');
  for yy=1:xyzsize(2)
    fprintf('.');
    for zz=1:xyzsize(3)

      % Extract the time-series data for one voxel.
      data0 = cellfun(@(x) flatten(x(xx,yy,zz,:)),data,'UniformOutput',0);

      % Analyze the time-series, specifying that we want to optimize the HRF.
      % We use the canonical HRF as the initial seed for the HRF.
      [results0,cache] = GLMestimatemodel(design,data0,stimdur,tr, ...
                           'optimize',hrf,0,opt,cache);

      % Record the results.
      hrfs(xx,yy,zz,:) = results0.modelmd{1};
      betas(xx,yy,zz,:) = results0.modelmd{2};
      R2(xx,yy,zz) = results0.R2;

    end
  end
end
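As a rough sketch of the parfor speedup mentioned above (assuming the Parallel Computing Toolbox is available), one can parallelize over the first spatial dimension. Each iteration accumulates results for one slice in local temporaries and writes them back with a single sliced assignment, which keeps parfor's variable-classification rules satisfied; note that each iteration rebuilds its own cache, trading some redundant work for independence. The names hrfsP, betasP, and R2P are placeholders, not part of the original script.

```matlab
% Sketch only: parallel version of the voxel loop (assumes the Parallel
% Computing Toolbox; output variable names are placeholders).
hrfsP = zeros([xyzsize length(hrf)],'single');
betasP = zeros([xyzsize numcond],'single');
R2P = zeros(xyzsize,'single');
parfor xx=1:xyzsize(1)
  % Per-slice temporaries (written back in one sliced assignment below).
  hrfslice = zeros([xyzsize(2) xyzsize(3) length(hrf)],'single');
  betaslice = zeros([xyzsize(2) xyzsize(3) numcond],'single');
  R2slice = zeros([xyzsize(2) xyzsize(3)],'single');
  cache0 = [];  % each iteration builds its own cache
  for yy=1:xyzsize(2)
    for zz=1:xyzsize(3)
      data0 = cellfun(@(x) flatten(x(xx,yy,zz,:)),data,'UniformOutput',0);
      [results0,cache0] = GLMestimatemodel(design,data0,stimdur,tr, ...
                            'optimize',hrf,0,opt,cache0);
      hrfslice(yy,zz,:) = results0.modelmd{1};
      betaslice(yy,zz,:) = results0.modelmd{2};
      R2slice(yy,zz) = results0.R2;
    end
  end
  hrfsP(xx,:,:,:) = hrfslice;
  betasP(xx,:,:,:) = betaslice;
  R2P(xx,:,:) = R2slice;
end
```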

%% Inspect the results

% Here we inspect the results for an example voxel.  Notice that the estimated HRF
% is a bit delayed relative to the canonical HRF.  The beta weights across
% the two cases, however, are quite similar.
xx = 53; yy = 27; zz = 1;
figure;
set(gcf,'Units','points','Position',[100 100 900 200]);
subplot(1,2,1); hold on;
plot(0:tr:(length(hrf)-1)*tr,hrf,'r-');
plot(0:tr:(length(hrf)-1)*tr,flatten(hrfs(xx,yy,zz,:)),'k-');
straightline(0,'h','k-');
xlabel('Time from condition onset (s)');
ylabel('Response');
title('HRF (red = canonical, black = estimated)');
subplot(1,2,2); hold on;
plot(flatten(results.modelmd{2}(xx,yy,zz,:)),'r-');
plot(flatten(betas(xx,yy,zz,:)),'k-');
straightline(0,'h','k-');
xlabel('Condition number');
ylabel('BOLD signal (% change)');
title('Beta weights (red = using canonical HRF, black = using estimated HRF)');
%%

Generated on Fri 01-Aug-2014 12:03:17 by m2html © 2005