Unlike the labs and exercises you've done so far, most analyses and designs will involve multiple conditions that must be modeled and contrasted appropriately to examine the cognitive processes you are investigating. In this lab, we will cover many of the things you might do in a real experimental design. You should be able to set up these designs in FEAT, the fMRI analysis tool in FSL. You won't need any real data, but you will need to create some (but not all) timing files (e.g., 3 column files) based on our descriptions.
On the FEAT webpage, there are detailed instructions on how to set up these sorts of models, as well as other, more complicated models. We will go through parts of this webpage together, and you will do part of it on your own.
Most of the time in fMRI analyses, you will be asking whether one condition is greater than another or vice versa. In FEAT, these are set up as linear contrasts between your regressors. Throughout this laboratory, you should be thinking about how to set up the contrasts for what we are testing. We will do most of this together, but you should get in the habit of thinking about how you can isolate cognitive processes via subtractive techniques like a simple contrast. A simple overview of the first-level stats and contrasts can be found here. If you're interested in a more detailed explanation, you can take a look at the relevant FSL course slides.
This is the part of the analysis that uses timing information from your experimental paradigm. This stage of the analysis is crucially important since the results from these analyses get “carried up” to higher levels of the analysis pipeline.
You have an experiment with 3 conditions: monetary gains of $5, $10, and $15. These events are randomly interspersed throughout your experiment and there is nothing else that we want to model for this design.
Here are the timing files you would use for this setup:
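For reference, a 3-column timing file is just a plain-text file with one row per event: onset (in seconds), duration (in seconds), and weight. A minimal sketch of writing one such file for the $5 condition, using made-up onsets and durations (the real values come from your paradigm):

```python
# One 3-column file per condition. Rows are: onset (s), duration (s), weight.
# These onsets/durations are hypothetical, purely for illustration.
gain5 = [(12.0, 1.0, 1), (48.0, 1.0, 1), (90.0, 1.0, 1)]

with open("gain5.txt", "w") as f:
    for onset, duration, weight in gain5:
        f.write(f"{onset}\t{duration}\t{weight}\n")
```

You would repeat this for the $10 and $15 conditions, giving three separate EV files.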
Your experiment contains 2 events: 1.) a cue that predicts, with 100% certainty, a reward and 2.) the subsequent reward. Unfortunately, you did not design your experiment very well and these two regressors are highly correlated. Your main aim is to investigate the brain areas that are responsive to receiving a reward, but you are not very interested in the anticipatory effects.
Here are the timing files you would use for this setup:
Most experiments involve more than one condition. In both of the scenarios above, we have more than one condition that we might want to model in our design matrix. You have the timing files for both designs. How do you set up the models? How would you contrast different conditions? Let's work with the data from SCENARIO 1 for this example.
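A contrast is just a weight vector applied to the parameter estimates (betas) of your regressors. A small sketch for SCENARIO 1, assuming the regressors are ordered [$5, $10, $15] in the design matrix (the betas below are made up):

```python
import numpy as np

# Regressor order in the design matrix: [$5, $10, $15].
contrasts = {
    "$10 > $5":  np.array([-1, 1, 0]),
    "$15 > $10": np.array([0, -1, 1]),
    "mean reward response": np.array([1, 1, 1]) / 3,
}

# With hypothetical betas, the contrast value is the dot product c @ beta:
betas = np.array([0.5, 1.0, 1.5])
print(contrasts["$10 > $5"] @ betas)   # 0.5
```

Each row you add in FEAT's contrast tab corresponds to one of these weight vectors.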
Sometimes your regressors will be correlated (e.g., reward anticipation → reward outcome). You can often avoid correlated regressors by carefully designing your experiment; however, this isn't always possible, and you may not care about the variability from one of the conditions. If the regressors are correlated, the model will not be able to partition the variance appropriately (i.e., variability related to one regressor might be captured by the other, or vice versa, leading to misestimation).
In SCENARIO 2, we have this exact problem. How can we try to ameliorate this problem, especially if we're only interested in the reward outcome component of the task? We can make FEAT force the regressors to be orthogonal, but which one should be orthogonalized? We can orthogonalize outcome w.r.t. anticipation, or we can orthogonalize anticipation w.r.t. outcome. Which setup should you use if you're interested in reward outcome?
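To make the question concrete, here is a sketch of what orthogonalization does mechanically: it removes from one regressor its projection onto the other, so the variance they shared is credited entirely to the untouched regressor. (The vectors below are random stand-ins for real EVs.)

```python
import numpy as np

def orthogonalize(ev, wrt):
    """Remove from `ev` its (demeaned) projection onto `wrt`."""
    wrt = wrt - wrt.mean()
    ev = ev - ev.mean()
    return ev - (ev @ wrt) / (wrt @ wrt) * wrt

# Two correlated stand-in regressors (not real EVs):
rng = np.random.default_rng(0)
outcome = rng.standard_normal(100)
anticipation = 0.8 * outcome + 0.2 * rng.standard_normal(100)

anticipation_orth = orthogonalize(anticipation, outcome)
print(abs(np.corrcoef(anticipation_orth, outcome)[0, 1]) < 1e-10)  # True
```

Note which regressor keeps the shared variance in this example, and think about what that implies for the question above.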
Presumably, we could've set up our design better to make the outcome and anticipation less correlated from the outset. This could've been accomplished by jittering the interval between these events. Try this revised outcome EV and see what you get: :biac:courses:outcome.txt
Sometimes we might be interested in linear trends of activation – that is, testing which brain areas respond linearly with the level of a condition (e.g., reward). This sort of pattern is implicit in SCENARIO 1; however, we need to change the model and EV files. Remember that the 3rd column in the 3-column files controls the intensity of the event, so this is the part we can change to look for bigger responses to certain events.
You will make the 3-column files for this, but it shouldn't be too hard. Open the EV files from the first scenario for reference, so you'll know when each event occurred.
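As a sketch of the idea (the onsets below are hypothetical, so take the real ones from the Scenario 1 EV files): all gain events can go into a single 3-column file whose 3rd column encodes the gain level. A common convention, assumed here, is to demean those weights so the linear-trend EV stays separable from a mean-response EV.

```python
# Hypothetical (onset, gain) pairs; real onsets come from the Scenario 1 EVs.
events = [(12.0, 5), (30.0, 15), (55.0, 10), (80.0, 5), (104.0, 10), (131.0, 15)]

mean_gain = sum(g for _, g in events) / len(events)   # 10.0 here
with open("linear_gain.txt", "w") as f:
    for onset, gain in events:
        # 3rd column: demeaned gain level (-5, 0, or +5) instead of all 1s
        f.write(f"{onset}\t1.0\t{gain - mean_gain}\n")
```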
See this FSL forum post for more details about modeling increasing levels of activation.
The main interesting thing you might do at this level is combining across different kinds of runs. For example, you might have a setup where the odd-numbered runs are non-social (e.g., played against a computer opponent) and the even-numbered runs are social (e.g., played against a human opponent).
You have 6 runs in your experiment; however, the subject was doing something fundamentally different in odd vs even runs. Rather than having all 1s in a single column of your design matrix, you now need two columns to pull out these effects. How do you think you would set this up?
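One way to sketch it, assuming runs are entered in order 1 through 6: each run gets a 1 in the column for its run type and a 0 in the other, and contrasts on those two columns pull out the per-type means and their difference.

```python
import numpy as np

# Second-level design for 6 runs: two EVs instead of a single column of 1s.
# Column 1 marks non-social (odd) runs; column 2 marks social (even) runs.
design = np.array([
    [1, 0],   # run 1 (odd, non-social)
    [0, 1],   # run 2 (even, social)
    [1, 0],   # run 3
    [0, 1],   # run 4
    [1, 0],   # run 5
    [0, 1],   # run 6
])

# Contrasts on [non-social, social]:
mean_nonsocial    = [1, 0]
mean_social       = [0, 1]
social_gt_nonsoc  = [-1, 1]
```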
Although you will not be doing third level analyses in this class, there are several analysis principles that can be illustrated when setting up group level analyses. Generally, researchers are interested in the main effects of a condition across their subject sample; however, there is sometimes a need to go past looking at main effects of a particular condition. For example, researchers may want to examine group differences between two populations, or researchers may be interested in how individual differences in some behavioral trait (e.g., sensation seeking) might vary with brain activation (e.g., reward-related activation in the ventral striatum). You will frequently see these techniques applied in papers, and you might one day need to do it yourself.
Setting up main effects is easy. For this, each subject will correspond to one input in your higher-level analysis. So, when you're in the full model setup, you will simply create a column of 1s in the design matrix. You will have to do this for each COPE (Contrast of Parameter Estimates) carried up from your lower-level analyses.
See this page for details.
Imagine you have a design with 20 subjects and you want to examine the main effect of a particular condition (it doesn't matter which condition). How would you set up your model in FEAT?
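A minimal sketch of that group-mean model: one input (lower-level COPE) per subject, a single EV of 1s, and the contrast [1] to test whether the group mean differs from zero.

```python
import numpy as np

n_subjects = 20

# One row per subject; a single EV of 1s models the group mean.
design = np.ones((n_subjects, 1))
contrast = np.array([1])     # tests group mean != 0
print(design.shape)          # (20, 1)
```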
Sometimes you will want to test whether two groups are different. This is relatively easy to set up in FEAT. Let's imagine that our 20 subjects from above also took a sensation-seeking survey (see scores below). We can do a median split on these scores to divide our sample into two groups: high sensation seeking and low sensation seeking. See this page for details on how to set up a test of two groups. Essentially, rather than having one group and one EV in our model, we will now have two. We can then do contrasts on these EVs.
SUBJ  score
1     1
2     1
3     2
4     4
5     4
6     7
7     8
8     8
9     8
10    9
11    11
12    13
13    13
14    14
15    16
16    17
17    19
18    19
19    20
20    20
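A sketch of the median split and the resulting two-group design matrix, using the scores above (subjects assumed to be entered in order 1 through 20):

```python
import numpy as np

scores = np.array([1, 1, 2, 4, 4, 7, 8, 8, 8, 9,
                   11, 13, 13, 14, 16, 17, 19, 19, 20, 20])

median = np.median(scores)                 # 10.0 for these scores
high = (scores > median).astype(int)       # EV1: high sensation seeking
low = (scores < median).astype(int)        # EV2: low sensation seeking
design = np.column_stack([high, low])      # one row per subject, two EVs

# Contrast [1, -1] then tests high > low (and [-1, 1] the reverse).
print(high.sum(), low.sum())               # 10 10
```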
When you use covariates in your group-level analysis, you are testing whether variability in your data can be “explained” by a regressor that models cross-subject differences along some trait (e.g., sensation seeking). This is easy to set up in FEAT; however, the main thing you need to remember is to demean your covariate (i.e., the scores or trait measures should sum to zero across all of your subjects).
Let's take the same 20 subjects from above and, rather than doing a median split based on their sensation-seeking scores, let's add in those scores as a covariate. Remember to demean the scores by subtracting the mean from each individual score.
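Demeaning is a one-liner; using the scores from the table above, the demeaned covariate sums to zero (up to floating-point error), which is exactly the check FEAT expects you to have done:

```python
import numpy as np

scores = np.array([1, 1, 2, 4, 4, 7, 8, 8, 8, 9,
                   11, 13, 13, 14, 16, 17, 19, 19, 20, 20])

demeaned = scores - scores.mean()          # mean is 10.7 for these scores
print(abs(demeaned.sum()) < 1e-9)          # True
```

The `demeaned` values are what you would paste into the covariate EV column in FEAT.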
See this page for further details.