I thought it might be useful to have a little tutorial about how the GLM maps are created for the SOE data. I am basing this on the current method in Tony’s scripts.

Processing Scripts

The general process of creating \(\beta\)-maps for each word requires regressing out words, nuisance regressors, and motion.
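
For reference, the model being fit at each voxel is the standard GLM \(y = X\beta + \varepsilon\), where the columns of the design matrix \(X\) are the word, nuisance, and motion regressors described below; the estimated \(\beta\) for a word’s column becomes that word’s value in the \(\beta\)-map.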

Building Word and Nuisance Regressors

The first step is to make 1d regressor files to feed into AFNI. Tony has a script, regressors.py, which has several functions:

  • build_large_file - converts psychopy table into python variable with proper format
  • demean_and_write - groups reaction time into 6 groups (2 per session), and converts reaction times into z-scores
  • word_length - creates word-length as a nuisance regressor
  • word_stim_time - creates 1d arrays for target word and all other words
  • copy_anat - uses 3dcopy to copy anatomy files (not sure why?)

At the end of all this work, we have generated the word and nuisance regressor files that feed into the GLM.
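
To make the stim-timing step concrete, here is a minimal sketch of what a word_stim_time-style function could look like. Everything here is an assumption for illustration (the function name, the onsets data layout, and the file names are not Tony’s actual code); it only shows the AFNI stim_times convention of one row of onset times per run, with '*' marking a run with no events:

    import os

    def write_stim_times(onsets, target, out_dir):
        """Hypothetical sketch: onsets maps word -> list of runs, each run a
        list of onset times in seconds. Writes <target>.1D for the target
        word and AOW.1D pooling every other word (AFNI stim_times format)."""
        n_runs = len(onsets[target])
        # Pool every non-target word's onsets into the AOW regressor
        aow = [sorted(t for w, runs in onsets.items() if w != target
                      for t in runs[i]) for i in range(n_runs)]
        for name, runs in ((target, onsets[target]), ("AOW", aow)):
            with open(os.path.join(out_dir, name + ".1D"), "w") as f:
                for run in runs:
                    # '*' marks a run with no events for this regressor
                    f.write(" ".join("%.2f" % t for t in run) or "*")
                    f.write("\n")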

Building Motion Regressors

The script 3dDcon_Preparation.py (which requires Python 2) generates the motion regressors. This script has two functions:

  • afni_to_nifti - runs the AFNI command 3dAFNItoNIFTI to convert all the pre-processed scans in the SOE320 projects folder into NIfTI files in Tony’s directory
  • prepare_motion_censor_regressor - takes the motion files from the pre-processed directory. The first three steps use the AFNI program 1d_tool.py, which performs the following tasks (a sketch of these calls follows the list):

    De-meaned motion parameters - removes the mean from the 1d files
    Motion parameter derivatives - creates a .1d derivative file
    Create censor file - takes the 1d file and creates a censor 1d file
    Combine censor results - uses 1deval with the expression 1-step(a - 0.1), so time points whose motion value exceeds 0.1 become 0 (censored)
    Combine multiple censor files - uses 1deval to combine censor files via \(a*b\), so a time point survives only if every censor file keeps it
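
As a rough illustration of these two functions (the file names, run count, and the 0.1 censor limit are placeholders, not necessarily the values Tony uses), the underlying calls might look something like:

    import subprocess

    def run(cmd, stdout=None):
        subprocess.run(cmd, stdout=stdout, check=True)

    # afni_to_nifti: convert a pre-processed AFNI dataset to NIfTI
    run(["3dAFNItoNIFTI", "-prefix", "scan.nii.gz", "scan+orig"])

    # De-mean the motion parameters
    run(["1d_tool.py", "-infile", "motion.1D", "-demean",
         "-write", "motion_demean.1D"])
    # First differences of the motion parameters
    run(["1d_tool.py", "-infile", "motion.1D", "-derivative",
         "-write", "motion_deriv.1D"])
    # Build motion_censor.1D from the motion time series
    run(["1d_tool.py", "-infile", "motion.1D", "-set_nruns", "1",
         "-censor_motion", "0.1", "motion"])
    # 1-step(a-0.1): values above 0.1 become 0 (censored), else 1
    with open("outlier_censor.1D", "w") as f:
        run(["1deval", "-a", "outliers.1D", "-expr", "1-step(a-0.1)"], stdout=f)
    # Combine censor files: a TR survives only if both files keep it
    with open("censor_combined.1D", "w") as f:
        run(["1deval", "-a", "motion_censor.1D", "-b", "outlier_censor.1D",
             "-expr", "a*b"], stdout=f)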

3dDeconvolve Script

Tony then has a script for running 3dDeconvolve (3dDcon_SOE_Doug.py) which has the following functions:

  • extract_words - creates sorted list of words based on file names in patient directory
  • separate_words - creates several subgroups of words (number of groups fed in as argument)
  • Set_the_deconvolve_line - This is used to create the design matrix which is fed into the next step. The regressors included here are:

    Censor regressor - the motion censor file
    Target word regressor - the time points for the word of interest (6 time points)
    All other words - a single regressor for the AOW
    Word length regressor - a nuisance regressor for visual information
    Response time regressor - the reaction-time z-scores, thought to correlate with overall word difficulty. Note: this is split into 6 different groups, so it is really 6 different regressors
    Motion demean - not really sure?
    Motion derivative - not sure how this is generated
    • The other key points of this line are that ‘SPMG1’ is used as the HRF, and the output directory is called GLM_wordmaps. The full command this function builds is too long to include here (a trimmed sketch appears after this list), but its output is a file called X.xmat.1d, which is fed into the next function
  • Set_the_Remlfit_line - This takes in the design matrix, the input, and a brain mask. The command it builds will look something like:
$ 3dREMLfit -matrix .../X.xmat.1D \
    -input " " \
    -mask .../SOE_123.mask.nii.gz \
    -fout -rout -tout -Rbuck /jtong/SOE/GLM_wordmaps/Doug_method/123/stats.nii.gz \
    -verb
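
For orientation, here is a heavily trimmed sketch of the kind of command Set_the_deconvolve_line assembles. The regressor names, file names, and counts are placeholders (the real line includes the full set of regressors listed above), and the -x1D_stop detail is an assumption; it mainly shows the SPMG1 HRF and the X.xmat.1d output:

    import subprocess

    # Placeholder sketch; the real command carries many more regressors
    cmd = ["3dDeconvolve",
           "-input", "scan.nii.gz",
           "-censor", "censor_combined.1D",
           "-num_stimts", "3",
           "-stim_times", "1", "TARGET.1D", "SPMG1",   # target word, SPMG1 HRF
           "-stim_label", "1", "target_word",
           "-stim_times", "2", "AOW.1D", "SPMG1",      # all other words
           "-stim_label", "2", "all_other_words",
           "-stim_file", "3", "word_length.1D",        # word-length nuisance
           "-stim_label", "3", "word_length",
           "-ortvec", "motion_demean.1D", "mot_demean",
           "-ortvec", "motion_deriv.1D", "mot_deriv",
           "-x1D", "X.xmat.1D",
           "-x1D_stop"]   # write the design matrix only; 3dREMLfit does the fit
    subprocess.run(cmd, check=True)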

Running the Script

Finally, we can now process a patient using Tony’s script build_torque_Doug.py, which creates and submits a job to the cluster. The final command is:

python3 build_torque_Doug.py SOE_123
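
For a rough picture of what build_torque_Doug.py does (the job name, resource requests, and job body below are invented for illustration; the real script’s details will differ), it presumably writes a small torque/PBS script and submits it with qsub:

    import subprocess
    import sys
    import textwrap

    subject = sys.argv[1]  # e.g. SOE_123
    # Hypothetical job file; resources and steps are placeholders
    job = textwrap.dedent("""\
        #!/bin/bash
        #PBS -N GLM_{subj}
        #PBS -l nodes=1:ppn=4,walltime=12:00:00
        python3 3dDcon_SOE_Doug.py {subj}
        """).format(subj=subject)

    with open(subject + ".pbs", "w") as f:
        f.write(job)
    subprocess.run(["qsub", subject + ".pbs"], check=True)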

Thoughts / Q’s

  • Why is SPMG1 used as the HRF instead of multiple basis functions (which seems more common in the literature)?
  • How/why should the -censor_prev_TR option in 1d_tool.py be used (motion censor script)?