
olsmatrix2

PURPOSE

function f = olsmatrix2(X)

SYNOPSIS

function f = olsmatrix2(X)

DESCRIPTION

 function f = olsmatrix2(X)

 <X> is samples x parameters

 what we want to do is to perform OLS regression using <X>
 and obtain the parameter estimates.  this is accomplished
 by inv(X'*X)*X'*y = f*y where y is the data (samples x cases).

 what this function does is to return <f> which has dimensions
 parameters x samples.
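
 for illustration, here is a minimal usage sketch (the data are made up):

   X = randn(100,3);      % design matrix, samples x parameters
   y = randn(100,5);      % data, samples x cases
   f = olsmatrix2(X);     % parameters x samples
   h = f*y;               % OLS parameter estimates, parameters x cases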

 we check for a special case, namely, when one or more regressors 
 are all zeros.  if we find that this is the case, we issue a warning
 and simply ignore these regressors when fitting.  thus, the weights
 associated with these regressors will be zeros.
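
 for illustration, a sketch of the zero-regressor case (again with made-up data):

   X = [randn(50,2) zeros(50,1)];   % third regressor is all zeros
   f = olsmatrix2(X);               % issues the warning described above
   % f(3,:) is all zeros, so the estimate for the third parameter is always 0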

 if any warning messages are produced by the inversion process, then we die.
 this is a conservative strategy that ensures that the regression is 
 well-behaved (i.e. has a unique, finite solution).  (note that this does
 not cover the case of zero regressors, which is gracefully handled as
 described above.)
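
 for illustration, a sketch of how the warning trap behaves when the design
 matrix is rank-deficient (made-up example; the duplicated regressor makes
 X'*X singular):

   X = [ones(10,1) ones(10,1)];   % two identical regressors
   lastwarn('');
   f = (X'*X)\X';                 % mldivide warns that the matrix is singular
   % assert(isempty(lastwarn),lastwarn) would now throw an error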

 note that no scale normalization of the regressor columns is performed.
 also, note that we use \ to perform the inversion.
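
 for intuition, \ gives the same result as the explicit inverse on
 well-conditioned problems but avoids forming inv(X'*X) (made-up data):

   X = randn(100,3);
   f1 = inv(X'*X)*X';        % explicit inverse (for comparison only)
   f2 = (X'*X)\X';           % what this function actually computes
   max(abs(f1(:)-f2(:)))     % tiny for well-conditioned X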

 see also olsmatrix.m.

 history:
 2013/05/12 - change how zero-regressors are handled: if zero regressors are
              found, give a warning and ensure that zero weights are assigned to
              them (thus, we no longer result in a crash).

CROSS-REFERENCE INFORMATION

This function calls:
This function is called by:

SOURCE CODE

function f = olsmatrix2(X)

% function f = olsmatrix2(X)
%
% <X> is samples x parameters
%
% what we want to do is to perform OLS regression using <X>
% and obtain the parameter estimates.  this is accomplished
% by inv(X'*X)*X'*y = f*y where y is the data (samples x cases).
%
% what this function does is to return <f> which has dimensions
% parameters x samples.
%
% we check for a special case, namely, when one or more regressors
% are all zeros.  if we find that this is the case, we issue a warning
% and simply ignore these regressors when fitting.  thus, the weights
% associated with these regressors will be zeros.
%
% if any warning messages are produced by the inversion process, then we die.
% this is a conservative strategy that ensures that the regression is
% well-behaved (i.e. has a unique, finite solution).  (note that this does
% not cover the case of zero regressors, which is gracefully handled as
% described above.)
%
% note that no scale normalization of the regressor columns is performed.
% also, note that we use \ to perform the inversion.
%
% see also olsmatrix.m.
%
% history:
% 2013/05/12 - change how zero-regressors are handled: if zero regressors are
%              found, give a warning and ensure that zero weights are assigned to
%              them (thus, we no longer result in a crash).

% bad regressors are those that are all zeros
bad = all(X==0,1);
good = ~bad;

% report warning
if any(bad)
  warning('One or more regressors are all zeros; we will estimate a 0 weight for those regressors.');
end

% do it
if any(bad)

  f = zeros(size(X,2),size(X,1));
  lastwarn('');
  f(good,:) = (X(:,good)'*X(:,good))\X(:,good)';
  assert(isempty(lastwarn),lastwarn);

else

  lastwarn('');
  f = (X'*X)\X';
  assert(isempty(lastwarn),lastwarn);

end

Generated on Fri 01-Aug-2014 12:03:17 by m2html © 2005