Snakemake workflow for fetal BOLD brain segmentation
If you don't have AFNI and FSL installed, you need to use the --use-singularity option when running snakemake.
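For example, combining this option with the inference target described in Step 4:
snakemake all_test --use-singularity --cores all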
Training is best done with a GPU, but inference can run reasonably fast on CPU only.
Step 1: Obtain a copy of this workflow
- Create a new GitHub repository using this workflow as a template.
- Clone the newly created repository to your local system, into the place where you want to perform the data analysis.
Step 2: Configure workflow
Configure the workflow according to your needs by editing the config.yml file, in particular the paths to your NIfTI images.
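As a very rough sketch only, the config might contain entries along these lines. The key holding the NIfTI paths is a hypothetical placeholder, and the structure of download_model (a mapping of model names to URLs) is an assumption; check the shipped config.yml for the actual keys and defaults. Only use_downloaded and download_model are the variable names referenced in Step 4.
# config.yml (illustrative sketch, not the shipped defaults)
in_niftis: /path/to/your/bold_*.nii.gz            # hypothetical key: paths to your nifti images
use_downloaded: model_a                           # which trained model to download and apply (see Step 4)
download_model:
  model_a: https://example.com/trained_model.tar  # placeholder name and URL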
Step 3: Install Python dependencies
You should install your dependencies in a virtual environment. Once you have activated your virtual environment, you can install the dependencies with
pip install .
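For example, using Python's built-in venv module (the environment name .venv is just an illustration):
python3 -m venv .venv
source .venv/bin/activate
pip install .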
A recommended alternative that also takes care of creating a virtual environment is to use Poetry. On macOS or Linux, it can be installed with:
curl -sSL https://install.python-poetry.org | python3 -
Once you have Poetry installed, you can use the following to install the dependencies into a virtual environment and then activate it:
cd nnunet-fetalbrain
poetry install
poetry shell
Step 4: Execute workflow
To run inference on your test datasets, use:
snakemake all_test --cores all
By default, the trained model specified in the config will be downloaded and applied.
If you want to train a new model, set the use_downloaded config variable to a value that is not listed under download_model, then use:
snakemake all_train --cores all
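As with any Snakemake workflow, you can preview the jobs that would be executed with a dry run first (-n is Snakemake's standard dry-run flag):
snakemake all_train --cores all -n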
Code Snippets
# create_json.py: fill the dataset.json template with the training image/label pairs
import json

# load template json
with open(snakemake.input.template_json) as f:
    dataset = json.load(f)

dataset['training'] = [
    {'image': img, 'label': lbl}
    for img, lbl in zip(snakemake.params.training_imgs_nosuffix, snakemake.input.training_lbls)
]
dataset['numTraining'] = len(dataset['training'])

# write modified json
with open(snakemake.output.dataset_json, 'w') as f:
    json.dump(dataset, f, indent=4)
# download the trained model archive
shell: 'wget {params.url}'

# extract the downloaded archive into the trained_model directory
shell: 'mkdir -p trained_model && tar -C trained_model -xvf {input}'

# split a 4D image into 3D volumes along time
shell: 'fslsplit {input} {params.prefix} -t'

# split a 4D image into 3D volumes along time
shell: 'fslsplit {input} {params.prefix} -t'

# resample to 3.5 mm isotropic voxels
shell: '3dresample -dxyz 3.5 3.5 3.5 -prefix {output} -input {input}'

# zero-pad (or crop) to 96 slices in the R-L and A-P directions
shell: '3dZeropad -RL 96 -AP 96 -prefix {output} {input}'

shell: 'cp {input} {output}'

shell: 'cp {input} {output}'

shell: 'cp {input} {output}'

# build dataset.json with the script shown above
script: 'create_json.py'

# run nnU-Net experiment planning and preprocessing
shell:
    '{params.nnunet_env_cmd} && '
    'nnUNet_plan_and_preprocess -t {params.task_num} --verify_dataset_integrity'

# train the given architecture/trainer/task/fold
shell:
    '{params.nnunet_env_cmd} && '
    '{params.rsync_to_tmp} && '
    'nnUNet_train {params.checkpoint_opt} {wildcards.arch} {wildcards.trainer} {wildcards.unettask} {wildcards.fold}'

# package the trained model into a tar archive
shell:
    'tar -cvf {output} -C {params.trained_model_dir} {params.files_to_tar}'

# run inference with the selected checkpoint
shell:
    '{params.nnunet_env_cmd} && '
    'nnUNet_predict -chk {wildcards.checkpoint} -i {params.in_folder} -o {params.out_folder} -t {wildcards.unettask}'

# merge the predicted 3D volumes back into a 4D time series
shell:
    'fslmerge -t {output} {input}'