aCompcor ROI Mask - GM Voxels #3195

Open

demidenm opened this issue Jan 6, 2024 · 18 comments

@demidenm

demidenm commented Jan 6, 2024

What happened?

The aCompCor mask may include gray matter voxels for some participants. The documented procedure ("a mask of pixels that likely contain a volume fraction of GM is subtracted from the aCompCor masks. This mask is obtained by dilating a GM mask extracted from the FreeSurfer aseg segmentation, and it ensures components are not extracted from voxels containing a minimal fraction of GM. Finally, these masks are resampled into BOLD space and binarized by thresholding at 0.99") does not appear to work in all cases; in the dataset I am working with, it fails for roughly 1 in 10 to 1 in 15 participants.
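For context, a rough sketch of the subtraction step described above (an illustration only, not fMRIPrep's actual code; file names and the GM threshold are placeholders):

import numpy as np
import nibabel as nib
from scipy.ndimage import binary_dilation
from nilearn.image import resample_to_img

# Placeholder file names for the tissue probability maps and the BOLD reference
gm = nib.load("sub-AB123_label-GM_probseg.nii.gz")
wm = nib.load("sub-AB123_label-WM_probseg.nii.gz")
csf = nib.load("sub-AB123_label-CSF_probseg.nii.gz")
boldref = nib.load("sub-AB123_task-rest_boldref.nii.gz")

# Dilate a binarized GM mask so any voxel with a GM fraction gets excluded
gm_dilated = binary_dilation(gm.get_fdata() > 0.05, iterations=1)  # threshold is a guess

# Candidate WM+CSF mask with the dilated GM subtracted
wmcsf = (wm.get_fdata() + csf.get_fdata()) * ~gm_dilated
wmcsf_img = nib.Nifti1Image(wmcsf.astype("float32"), gm.affine)

# Resample into BOLD space and binarize by thresholding at 0.99
in_bold = resample_to_img(wmcsf_img, boldref, interpolation="continuous")
acompcor_mask = in_bold.get_fdata() >= 0.99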

What command did you use?

fMRIPrep command: /opt/conda/envs/fmriprep/bin/fmriprep /bids_dir /output_dir participant --participant-label AB123 --fs-license-file license.txt --bids-filter-file task_list.json --ignore slicetiming --fd-spike-threshold .9 --output-space MNI152NLin2009cAsym:res-2 --project-goodvoxels --cifti-output 91k -vv -w /wd

What version of fMRIPrep are you running?

23.1.4

How are you running fMRIPrep?

Singularity

Is your data BIDS valid?

Yes

Are you reusing any previously computed results?

No

Please copy and paste any relevant log output.

No response

Additional information / screenshots

Including screenshots of the FreeSurfer WM/GM/CSF contours and of the image showing the aCompCor ROI mask used for aCompCor.

Screenshot 2024-01-06 at 1 09 36 PM

Screenshot 2024-01-06 at 1 09 22 PM

@demidenm demidenm added the bug label Jan 6, 2024
@julfou81
Contributor

Hi @demidenm,

Thank you for this report.
The second image shows correct boundaries for CSF/WM/GM compartments.
Could you detail what we see in the first image and what the colors mean? It looks like the WM/GM boundary overlaid on the fMRI reference image. But if those really are the aCompCor boundaries, then indeed there is an issue.

@demidenm
Author

demidenm commented Jan 11, 2024

@julfou81 - Thanks for the note. Yes, the delineation of the CSF/WM/GM boundary seems reasonable (left, above). However, I suspect it may contain excessively overlapping boundaries as a result of either the FAST segmentation or the fuzziness applied; in the contours above, the magenta color is visually over-represented. Every instance of this type of delineation results in a poor aCompCor ROI (in 1000+ fMRIPrep outputs, I have observed 50-60 of these). The other image (right, above) shows the brain mask and the (anatomical/temporal) CompCor ROI masks overlaid on the reference image: red = BOLD brain mask, magenta = anatomical CompCor ROI, blue = temporal CompCor ROI, and green = brain edge crown.

Note: when the FreeSurfer brain mask and the T1w brain tissue segmentation look like the images below, the aCompCor ROI oddity does not occur and the result is relatively normal.
Screenshot 2024-01-11 at 1 04 40 PM
Screenshot 2024-01-11 at 1 11 26 PM

UPDATE: touched base with @effigies and it seems to be a FAST segmentation failure, as the labels are incorrect. See the example: blue is labeled rec-normalized_label-GM_probseg, green is rec-normalized_label-CSF_probseg, and red is rec-normalized_label-WM_probseg.
Screenshot 2024-01-12 at 9 46 34 AM

@effigies
Member

@oesteban If you have a second, have you seen this GM/CSF swap before? Is there a good way to detect it? I suppose we could take the total tissue volumes and sort those to check for a plausible ordering, WM > GM > CSF...
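For illustration, a minimal sketch of that volume sanity check (hypothetical file names, assuming nibabel/numpy; the exact ordering/thresholds would need tuning):

import numpy as np
import nibabel as nib

def tissue_volume(probseg_path):
    # Total tissue volume (mm^3) implied by a partial-volume/probability map
    img = nib.load(probseg_path)
    voxel_mm3 = float(np.prod(img.header.get_zooms()[:3]))
    return float(img.get_fdata().sum() * voxel_mm3)

vols = {t: tissue_volume(f"sub-AAA_label-{t}_probseg.nii.gz")
        for t in ("CSF", "GM", "WM")}

# In the failures reported here the CSF and GM maps are swapped, so a "CSF"
# map with more total volume than the "GM" map is a strong hint of trouble.
if vols["CSF"] >= vols["GM"]:
    print("Implausible tissue volumes, labels may be swapped:", vols)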

@julfou81
Contributor

@demidenm ,

It is a very interesting case. From what I just read, it seems the origin of the issue is a segmentation failure from FSL FAST. It looks like the underlying T1w image in the case where the FAST segmentation fails (in your first post) has a very different contrast (the WM looks much brighter) than in your second case, where the segmentation looks good. FAST does not use tissue priors and relies only on the image intensity distribution. Do you think that for all the cases where aCompCor (and FAST) failed, there was an odd intensity distribution in the input T1w images?
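For example, a quick way to eyeball the in-brain intensity distribution (a rough sketch with hypothetical file names, not part of fMRIPrep):

import numpy as np
import nibabel as nib

# Hypothetical file names: the T1w that went into fast and a brain mask in the same space
t1w = nib.load("sub-AAA_desc-preproc_T1w.nii.gz").get_fdata()
mask = nib.load("sub-AAA_desc-brain_mask.nii.gz").get_fdata() > 0
vals = t1w[mask]

# FAST models three intensity classes, so a roughly trimodal in-brain histogram
# (CSF/GM/WM peaks) is expected; a compressed or single-peaked distribution is suspicious.
print("p1/p25/p50/p75/p99:", np.percentile(vals, [1, 25, 50, 75, 99]).round(1))
counts, edges = np.histogram(vals, bins=100)  # plot or inspect `counts` for three tissue peaks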

@demidenm
Author

@julfou81 - Thanks for the note. I conducted a sanity check on one of the subjects that clearly has this mix-up.
Screenshot 2024-01-12 at 10 48 43 AM

Taking the *T1w.nii.gz image, I ran the following steps.
brain extraction: bet sub-AAA_rec-normalized_T1w-brain.nii.gz -B
fast segmentation: fast -t 1 -n 3 -g -o ./ sub-AAA_rec-normalized_T1w-brain.nii.gz

Consistent with the pve_[0:2] outputs (0=CSF, 1=GM, and 2=WM), the segmentation seemed to have worked correctly.
pve_0.nii.gz = Green
pve_1.nii.gz = Blue
pve_2.nii.gz = Red
Screenshot 2024-01-12 at 10 55 43 AM

Based on the hypothesis above, pve_0 and pve_1 would be flipped, but in this instance they do not appear to be.

@julfou81
Contributor

julfou81 commented Jan 12, 2024

That was a good test. What you could do now is navigate through the working directory for this subject and look into:

/wd/fmriprep_23_1_wf/single_subject_AAA_wf/anat_preproc_wf/t1w_dseg/

Then look at how the fast command was run (in command.txt) for this subject, and on which preprocessed image, to try to understand what went wrong. Does the input image for this command (sub-AAA_ses-02_T1w_noise_corrected_corrected_xform_masked.nii.gz) have a strange intensity distribution?
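For example, a small sketch to print the recorded command(s) under that node (path as quoted above; adjust to your working directory):

from pathlib import Path

node_dir = Path("/wd/fmriprep_23_1_wf/single_subject_AAA_wf/anat_preproc_wf/t1w_dseg")
for cmd in sorted(node_dir.rglob("command.txt")):
    print(f"--- {cmd} ---")
    print(cmd.read_text())
# The *_masked.nii.gz image referenced by the command typically sits in the same node directory
for nii in sorted(node_dir.rglob("*_masked.nii.gz")):
    print("candidate fast input:", nii)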

@demidenm
Author

@julfou81 - thanks for the note. Unfortunately, the working dir was on tmp scratch, so it isn't easily available. Nevertheless, I will try to rerun soon and hold onto the fMRIPrep working dir to check.

@demidenm
Author

demidenm commented Jan 13, 2024

@julfou81 - Had a chance to rerun it and inspect the ./wd/ more closely. The command differs a touch from mine (e.g., it uses -N). More notably, the file that fMRIPrep uses is a noise-corrected brain.

Here is fmriprep command.txt:
fast -N -p -g -S 1 /wd/fmriprep_23_1_wf/single_subject_AAA_wf/anat_preproc_wf/t1w_dseg/sub-AAA_ses-2YearFollowUpYArm1_rec-normalized_T1w_noise_corrected_corrected_xform_masked.nii.gz

I went ahead and ran fast -N -p 3 -g -S 1 -o on both the fmriprep-corrected (fp) and original (orig) images and compared the outputs. Labels: pve_0.nii.gz = Green
pve_1.nii.gz = Blue
pve_2.nii.gz = Red

./fp/
The fMRIPrep-corrected image has modified intensity and smoothness (direct comparison, left vs. right below)
fp_t1w
The segmentation labels, as above, do fail.
fp_seg

./orig/
The original T1w image is not as smooth as the above.
org_t1w
The resulting segmentations are correct.
org_seg

Original T1w vs fMRIPrep T1w-corrected
orig
fp

@julfou81
Contributor

julfou81 commented Jan 13, 2024

Thank you @demidenm for this thorough analysis! It is very interesting, and it is the first time I have seen such strange behavior from FAST on human brain data.

A good point is that you are able to reproduce this strange behavior by calling FAST outside of fMRIPrep with the same input.
It is rather puzzling, though, because the image corrected by fMRIPrep seems correctly debiased and denoised, with a nice contrast between GM and WM.
What is very strange is that this happens for several subjects but not all of them. It would be interesting to understand what specific property of those images triggers this inversion of labels.
I don't know FAST well enough to propose an explanation of this behavior; you may ask on the FSL forum what they think about it, since the FSL developers respond there and could explain where this behavior comes from.
The next step is, of course, to find a way so that this label inversion does not happen within fMRIPrep for those subjects. I see several options:

  • Either change the options in the FAST call made by fMRIPrep, which would require a pull request against fMRIPrep
  • Or find an option in the fMRIPrep call that makes its behavior more robust for your subjects
  • Or prepare your images before launching fMRIPrep so that this issue with FAST does not occur with the same fMRIPrep call

@demidenm
Author

demidenm commented Jan 13, 2024

@julfou81 - thanks for the note & suggestion! I will see if @oesteban and @effigies arrive at a solution, but adding some kind of subject prep into the pipeline to avoid this may be helpful. TBH, I'm not quite sure what that would be. In the meantime, for my purposes, a non-elegant way to flag this issue may be to run bet+fast on the uncorrected T1w and estimate dice(GM-uncorrected, fMRIPrep GM-corrected); bad ones would be approximately < .50.

@demidenm
Author

demidenm commented Jan 19, 2024

Again, I used a rough approach to calculate the % of subjects in which this occurs, using the .py code below. The grey matter mask was the best approximation, as the bet approach I use differs from FreeSurfer, so the CSF is too variable.

When I manually reviewed a subset of the data, I found that it occurred in ~11% of subjects. Running this check and manually confirming, I found that a GM dice similarity below .25 was a good indicator of which subjects had the issue. I validated it by pulling a subset and found that < .25 was a 1:1 indicator of the failure. The .25-.40 dice range was not as bad, but one would be better off not using those subjects either.

check_proportion

Interestingly enough, the proportion in which this occurs differs across scanners: Philips >> GE >> Siemens. That helps explain why I was seeing ~11% rather than ~8% in the random subsample of 1,000 datasets I manually reviewed, as they were all from GE scanners.
image

import os
import fnmatch
import argparse
import warnings
warnings.filterwarnings("ignore")
from glob import glob
from nipype.interfaces import fsl
from pyrelimri import similarity

# input arguments
parser = argparse.ArgumentParser(description="Script to run similarity estimate between func/anat masks")
parser.add_argument("--in_dat", help="path to subjects session subfolders")
parser.add_argument("--task", help="task label")
parser.add_argument("--run", help="task label", default=None)
parser.add_argument("--stype", help="similarity type, dice or jaccard")

# parse input args
args = parser.parse_args()
input_dir = args.in_dat
task = args.task
run = args.run
stype = args.stype

# folder paths
anat_dir = f'{input_dir}/anat'
func_dir = f'{input_dir}/func'
fmap_dir = f'{input_dir}/fmap'
fsl_out = f'{input_dir}/anat/fsl'

# glob files to set variables 
anat_file = glob(f'{anat_dir}/*_space-MNI152NLin2009cAsym_res-2_desc-brain_mask.nii.gz')[0]

# dont want to run on MNI but subject space, ignore MNI
all_files = glob(f'{anat_dir}/*.nii.gz')
filtered_files = [file for file in all_files if not fnmatch.fnmatch(file, '*space-MNI152*')]

# Check if any files are left after filtering
if filtered_files:
    # Use the filtered files as needed
    fp_gm = next((file for file in filtered_files if '_label-GM_probseg.nii.gz' in file), None)
    fp_wm = next((file for file in filtered_files if '_label-WM_probseg.nii.gz' in file), None)
    fp_csf = next((file for file in filtered_files if '_label-CSF_probseg.nii.gz' in file), None)
    t1w_file = next((file for file in filtered_files if '_T1w.nii.gz' in file), None)

if run is None:
    bold_file = glob(f'{func_dir}/*_task-{task}_space-MNI152NLin2009cAsym_res-2_desc-brain_mask.nii.gz')[0]
else:
    bold_file = glob(f'{func_dir}/*_task-{task}_run-{run}_space-MNI152NLin2009cAsym_res-2_desc-brain_mask.nii.gz')[0]


if not os.path.exists(fsl_out):
    try:
        # Create the fsl output directory if it doesn't exist
        os.makedirs(fsl_out)
        # Create bet output name
        t1w_base = os.path.basename(t1w_file)
        prefix_t1w = t1w_base.rsplit('.', 2)[0]
        t1w_brain_fname = f'{fsl_out}/{prefix_t1w}-brain.nii.gz'
        # Run BET
        bet_run = fsl.BET()
        bet_run.inputs.in_file = t1w_file
        # bet_run.inputs.reduce_bias = True
        bet_run.inputs.robust = True
        bet_run.inputs.out_file = t1w_brain_fname
        bet_out = bet_run.run()
        # Run FAST
        fast_run = fsl.FAST()
        fast_run.inputs.in_files = t1w_brain_fname
        # fast_run.inputs.out_basename = t1w_brain_fname
        fast_out = fast_run.run()
    except Exception as e:
        # exception 
        print(f"An error occurred: {e}")

# sets CSF [0], GM [1], WM [2]
orig_csf = glob(f'{fsl_out}/*_T1w-brain_pve_0.nii.gz')[0]
orig_gm = glob(f'{fsl_out}/*_T1w-brain_pve_1.nii.gz')[0]
orig_wm = glob(f'{fsl_out}/*_T1w-brain_pve_2.nii.gz')[0]

# run similarity
sim_anatfunc = similarity.image_similarity(imgfile1=anat_file,
                                           imgfile2=bold_file,
                                           similarity_type=stype)
origcsf_fpcsf = similarity.image_similarity(imgfile1=orig_csf,
                                            imgfile2=fp_csf,
                                            similarity_type=stype)
origwm_fpwm = similarity.image_similarity(imgfile1=orig_wm,
                                          imgfile2=fp_wm,
                                          similarity_type=stype)
origgm_fpgm = similarity.image_similarity(imgfile1=orig_gm,
                                          imgfile2=fp_gm,
                                          similarity_type=stype)

print(round(sim_anatfunc, 2), round(origcsf_fpcsf, 2), 
      round(origwm_fpwm, 2), round(origgm_fpgm, 2))
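For reference, a hypothetical invocation of the script above (assuming it is saved as check_acompcor_dice.py): python check_acompcor_dice.py --in_dat /path/to/fmriprep/sub-AAA/ses-01 --task rest --run 01 --stype dice. The four printed values are the anat/func brain-mask similarity followed by the CSF, WM, and GM dice; per the thresholds above, a GM dice below ~.25 flags the swapped segmentations.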

@julfou81
Contributor

julfou81 commented Mar 22, 2024

I continue on this thread as I observed a similar pattern for the aCompCor ROI after fmriprep processing.

It is rather strange and interesting: the first execution on the same subject produced the correct aCompCor ROIs, while a second execution, where I asked for a new output space (after removing all the temporary folders from the previous execution), produced wrong ROIs.
I relaunched a third execution (still removing the temporary files from previous executions), and this time the correct ROIs were produced.

What do I think triggered this change of behaviour? The addition of --level full --derivatives /work/$study/derivatives/fmriprep to the fmriprep command line! Do you think this is possible?

I will test more extensively to try to reproduce this bug. Also, even though I requested the aCompCor ROIs as output (--debug compcor), they are not produced, while they were in previous fMRIPrep versions.

What version of fMRIPrep are you running?

23.2.1

How are you running fMRIPrep?

Singularity

Is your data BIDS valid?

Yes

Are you reusing any previously computed results?

Yes (freesurfer output) / No (I removed previously computed output and working directory from previous fmriprep execution)

initial execution command:

/opt/conda/envs/fmriprep/bin/fmriprep --fs-license-file /work/freesurfer/license.txt /work/PhantomPainBrain /work/PhantomPainBrain/derivatives/fmriprep participant --participant-label AMP06 -w /work/temp_data_PhantomPainBrain --mem-mb 50000 --omp-nthreads 10 --nthreads 12 --longitudinal --ignore slicetiming --fd-spike-threshold 0.5 --dvars-spike-threshold 2.0 --bold2t1w-dof 9 --debug compcor --output-spaces T1w fsnative fsaverage fsLR MNI152NLin2009cAsym --cifti-output 91k --project-goodvoxels --fs-subjects-dir /work/PhantomPainBrain/derivatives/fmriprep/sourcedata/freesurfer

Corresponding output of the first execution:
Capture d’écran 2024-03-22 à 15 27 32

second (buggy) command execution command:

/opt/conda/envs/fmriprep/bin/fmriprep --fs-license-file /work/freesurfer/license.txt /work/PhantomPainBrain /work/PhantomPainBrain/derivatives/fmriprep participant --participant-label AMP06 -w /work/temp_data_PhantomPainBrain --mem-mb 50000 --omp-nthreads 10 --nthreads 12 --longitudinal --ignore slicetiming --fd-spike-threshold 0.5 --dvars-spike-threshold 2.0 --bold2t1w-dof 9 --output-spaces MNI152NLin6Asym --fs-subjects-dir /work/PhantomPainBrain/derivatives/fmriprep/sourcedata/freesurfer --debug compcor --level full --derivatives /work/PhantomPainBrain/derivatives/fmriprep

Corresponding output of the second (buggy) command:
Capture d’écran 2024-03-22 à 15 27 55

Third execution command:

/opt/conda/envs/fmriprep/bin/fmriprep --fs-license-file /work/freesurfer/license.txt /work/PhantomPainBrain /work/PhantomPainBrain/derivatives/fmriprep participant --participant-label AMP06 -w /work/temp_data_PhantomPainBrain --mem-mb 50000 --omp-nthreads 10 --nthreads 12 --longitudinal --ignore slicetiming --fd-spike-threshold 0.5 --dvars-spike-threshold 2.0 --bold2t1w-dof 9 --output-spaces MNI152NLin6Asym --fs-subjects-dir /work/PhantomPainBrain/derivatives/fmriprep/sourcedata/freesurfer --debug compcor

Third execution output:

Capture d’écran 2024-03-22 à 15 28 07

@julfou81
Contributor

julfou81 commented Oct 7, 2024

I wanted to follow up on this issue, as this happened to some of our subjects with FMRIPREP v24.0.0.

I take back what I said above: this wrong behavior in fMRIPrep when creating the aCompCor masks is not linked to whether or not we use the --derivatives or --level argument. It seems to happen randomly. For now, the fix we found is simply to delete the working directory (still keeping the FreeSurfer output) and re-run fMRIPrep from scratch for the faulty subject. So far, a single re-execution has been enough to obtain correct results.

@demidenm
Author

demidenm commented Oct 7, 2024

@julfou81 - This is interesting. I wonder what is up with the ABCD Study data, as rerunning didn't really resolve this problem for me :(

@effigies
Member

effigies commented Oct 7, 2024

Hey @pauldmccarthy, I wonder if you could have a look at this thread. In particular #3195 (comment) shows FAST sometimes mislabeling CSF and GM. I'm curious if there's something we can do to either detect this or parametrize FAST to avoid it.

@pauldmccarthy

Hi @effigies, I have seen instances of FAST failing on images where there is not enough variance in one or more of the tissue classes for it to be able to model the distribution of values within each class. Based on the screenshots in #3195 (comment), I'm guessing that this is what is happening here - the different tissue classes are too uniform for FAST to be able to accurately model them.

FAST is also sensitive to NaNs or non-zero values in non-brain regions, although I'm not sure if that's relevant here.

There are a few relevant posts on the FSL mailing list:

I'm not familiar enough with fmriprep to know whether it is possible to pass fast an unprocessed (brain-extracted) T1?

What kind of normalisation/rescaling is being applied? If you can roughly predict the mean intensity of each tissue class, you could pass these values as priors to fast via the -s option. Or, if you have tissue priors in your standard space, and have already calculated a linear registration to that space, you can use the -a / -A options to give FAST those priors (see e.g. the MNI152 priors in $FSLDIR/data/standard/tissuepriors/).
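For example, a minimal nipype sketch of the prior-driven call (assuming nipype's fsl.FAST exposes -a/-A as init_transform/other_priors; file names, the FLIRT matrix, and the prior order are placeholders to check against your FSL version):

import os
from nipype.interfaces import fsl

fsldir = os.environ["FSLDIR"]
# Placeholder prior images; check the exact names and the order FAST expects
priors = [os.path.join(fsldir, "data/standard/tissuepriors", f"avg152T1_{t}.hdr")
          for t in ("csf", "gray", "white")]

fast = fsl.FAST()
fast.inputs.in_files = "sub-AAA_T1w_brain.nii.gz"  # hypothetical brain-extracted T1w
fast.inputs.init_transform = "std2t1w.mat"         # FLIRT affine, standard -> input (-a)
fast.inputs.other_priors = priors                  # alternative prior images (-A)
fast.inputs.use_priors = True                      # keep priors throughout (-P)
res = fast.run()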

I'm happy to play around if you could point me to an image which causes the failure.

@effigies
Member

effigies commented Oct 7, 2024

Here is our T1w template workflow:

https://www.nipreps.org/smriprep/api/smriprep.workflows.anatomical.html#smriprep.workflows.anatomical.init_anat_template_wf

That's actually the 1-image variant of the workflow. Here's the variant for 2+ images:

smriprep-workflows-anatomical-3

We denoise the images first (Rician noise model, I think). If there are multiple, we bias-field correct and then merge them with FreeSurfer's mri_robust_template.

It seems likely to be the DenoiseImage that's the problem here. We could recombine the original T1w images using the calculated transforms and pass that to FAST instead.
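For anyone who wants to reproduce that step in isolation, a rough nipype sketch of Rician denoising with ANTs (parameters and file names are assumptions, not fMRIPrep's exact settings):

from nipype.interfaces.ants import DenoiseImage

denoise = DenoiseImage()
denoise.inputs.input_image = "sub-AAA_T1w.nii.gz"           # hypothetical raw T1w
denoise.inputs.noise_model = "Rician"
denoise.inputs.output_image = "sub-AAA_T1w_denoised.nii.gz"
res = denoise.run()
# Running fast on the denoised vs. raw image should show whether this step
# alone is enough to trigger the label swap reported above.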

@julfou81
Contributor

julfou81 commented Oct 8, 2024

I want to share a new observation I made. I still can't wrap my head around it: I tried very hard to obtain these wrong aCompCor masks with fmriprep v24.0.0 on a single subject where I saw wrong masks once (but didn't save the working directory).
I wonder if, in my case, I fall into this bad scenario for the aCompCor masks when I re-run fmriprep after:

  • I delete the working directory
  • I keep the fmriprep output directory! (even though the 1st execution gave good aCompCor masks!)

(in my case I was rerunning fmriprep because I was fighting another bug: antsAI failing during the N4 correction of the bold images for skullstripping, similar to what was described here, but with no issue on coregistration: #3163)

Every time I relaunched the fmriprep execution after deleting both the working directory and the output directory, the aCompCor masks were good.

I also took the fast input used by fMRIPrep (from a correct execution) and ran fast on it (from my laptop) many times, and never got a wrong segmentation in over 10 tries. I wanted to see whether I could save an input that causes fast to produce a bad segmentation, but with no luck so far. When I do get a wrong execution (bad aCompCor masks), the fast folder, among others, is not present in the working directory!
