Introduction
nf-core/slamseq is a bioinformatics analysis pipeline for SLAMseq sequencing data.
The workflow processes SLAMseq datasets using Slamdunk and infers direct transcriptional targets using DESeq2.
The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It comes with Docker containers, making installation trivial and results highly reproducible.
Quick Start
i. Install Nextflow

ii. Install either Docker or Singularity for full pipeline reproducibility (please only use Conda as a last resort; see docs)
iii. Download the pipeline and test it on a minimal dataset with a single command:

```shell
nextflow run nf-core/slamseq -profile test,<docker/singularity/conda/institute>
```
Please check nf-core/configs to see if a custom config file to run nf-core pipelines already exists for your institute. If so, you can simply use `-profile <institute>` in your command. This will enable either Docker or Singularity and set the appropriate execution settings for your local compute environment.
iv. Start running your own analysis!

```shell
nextflow run nf-core/slamseq -profile <docker/singularity/conda/institute> --input design.tsv --genome GRCh38
```
See usage docs for all of the available options when running the pipeline.
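The `--input` design file is a tab-separated table with one sample per row, validated by the pipeline's `check_design.py` step. The sketch below is an illustrative assumption only: the column names are inferred from the sample metadata the pipeline later passes to `slamdunk map` (name, type, time) plus a reads column, and are not the authoritative format; consult the usage docs for the real column set.

```shell
# Sketch of a tab-separated design file. Column names and sample rows are
# assumptions for illustration; see the usage docs for the real format.
cat > design.tsv <<'EOF'
name	type	time	reads
ctrl_0h	pulse	0	ctrl_0h_R1.fastq.gz
treat_12h	pulse	12	treat_12h_R1.fastq.gz
EOF

# Quick sanity check: every row has the same number of tab-separated fields.
awk -F'\t' 'NR==1{n=NF} NF!=n{exit 1}' design.tsv && echo "design.tsv: consistent columns"
```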
Documentation
The nf-core/slamseq pipeline comes with documentation about the pipeline, found in the `docs/` directory:

- Pipeline configuration
Credits
nf-core/slamseq was originally written by Tobias Neumann (@t-neumann) for use at the IMP in Vienna.

Many thanks to others who have helped out along the way, including (but not limited to): @apeltzer, @drpatelh, @pditommaso, @maxulysse, @ewels, @zethson, @bgruening, @micans.
Contributions and Support
If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on Slack (you can join with this invite).
Citation
If you use nf-core/slamseq for your analysis, please cite it using the following DOI: 10.5281/zenodo.3826585
You can cite slamdunk as follows:

Quantification of experimentally induced nucleotide conversions in high-throughput sequencing datasets.
Tobias Neumann, Veronika A. Herzog, Matthias Muhar, Arndt von Haeseler, Johannes Zuber, Stefan L. Ameres & Philipp Rescheneder.
BMC Bioinformatics 2019 May 20. doi: 10.1186/s12859-019-2849-7.
You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.
Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.
Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.
An extensive list of references for the tools used by the pipeline can be found in the `CITATIONS.md` file.
Code Snippets
```groovy
"""
gunzip -c $fasta > ref.fa
"""
```
```groovy
"""
gtf2bed.py $gtf | sort -k1,1 -k2,2n > ${gtf.baseName}.3utr.bed
"""
```
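The `sort -k1,1 -k2,2n` idiom above produces the standard BED coordinate order: lexicographic by chromosome name, then numeric by start position. A minimal stand-alone illustration (the interval values below are made up):

```shell
# Made-up, unsorted 3'UTR-style BED intervals.
printf 'chr2\t300\t500\tutrB\nchr1\t900\t1200\tutrC\nchr1\t100\t250\tutrA\n' > unsorted.bed

# Sort by chromosome (text key), then start coordinate (numeric key),
# exactly as the pipeline does for the 3'UTR annotation.
sort -k1,1 -k2,2n unsorted.bed > sorted.bed
cat sorted.bed
# -> chr1  100   250   utrA
#    chr1  900   1200  utrC
#    chr2  300   500   utrB
```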
```groovy
"""
echo $workflow.manifest.version > v_pipeline.txt
echo $workflow.nextflow.version > v_nextflow.txt
fastqc --version > v_fastqc.txt
trim_galore --version > v_trimgalore.txt
slamdunk --version > v_slamdunk.txt
echo \$(R --version 2>&1) > v_R.txt
R -e 'packageVersion("DESeq2")' | grep "\\[1\\]" > v_DESeq2.txt
multiqc --version > v_multiqc.txt
scrape_software_versions.py &> software_versions_mqc.yaml
"""
```
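Each line above simply redirects a tool's version banner into a small text file, which `scrape_software_versions.py` later collects into a MultiQC report. The same pattern, demonstrated with commonly available tools standing in for fastqc and friends:

```shell
# Redirect version banners into per-tool files, as the pipeline does.
echo "$(bash --version 2>&1 | head -n1)" > v_bash.txt
echo "$(sed --version 2>&1 | head -n1)" > v_sed.txt
cat v_bash.txt v_sed.txt
```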
```groovy
"""
check_design.py $design nfcore_slamseq_design.txt
"""
```
```groovy
"""
mkdir -p TrimGalore
trim_galore \\
    $reads \\
    --stringency 3 \\
    --fastqc \\
    --cores $task.cpus \\
    --output_dir TrimGalore \\
    --basename ${meta.name}
"""
```
```groovy
"""
slamdunk map \\
    -r $fasta \\
    -o map \\
    -5 $params.trim5 \\
    -n 100 \\
    -a $params.polyA \\
    -t $task.cpus \\
    --sampleName ${meta.name} \\
    --sampleType ${meta.type} \\
    --sampleTime ${meta.time} \\
    --skip-sam \\
    $quantseq \\
    $endtoend \\
    $fastq
"""
```
```groovy
"""
slamdunk filter \\
    -o filter \\
    $multimappers \\
    -t $task.cpus \\
    $map
"""
```
```groovy
"""
slamdunk snp \\
    -o snp \\
    -r $fasta \\
    -c $params.min_coverage \\
    -f $params.var_fraction \\
    -t $task.cpus \\
    ${filter[0]}
"""
```
```groovy
"""
slamdunk count -o count \\
    -r $fasta \\
    $snpMode \\
    -b $bed \\
    -l $params.read_length \\
    -c $params.conversions \\
    -q $params.base_quality \\
    -t $task.cpus \\
    ${filter[0]}
"""
```
```groovy
"""
alleyoop collapse \\
    -o collapse \\
    -t $task.cpus \\
    $count

sed -i "1i# name:${name}" collapse/*csv
"""
```
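The `sed -i "1i# name:${name}"` call above prepends a comment header naming the sample to each collapsed counts CSV, so downstream steps can tell the files apart. A stand-alone illustration with GNU sed (the file name, sample name, and contents are made up):

```shell
# A tiny stand-in for a collapsed counts file.
printf 'gene,readCount\nfoo,10\n' > sample1.csv

# Insert a '# name:' comment before line 1, in place, as the pipeline does.
sed -i "1i# name:sample1" sample1.csv
head -n1 sample1.csv
# -> # name:sample1
```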
```groovy
"""
alleyoop rates \\
    -o rates \\
    -r $fasta \\
    -mq $params.base_quality \\
    -t $task.cpus \\
    ${filter[0]}
"""
```
```groovy
"""
alleyoop utrrates \\
    -o utrrates \\
    -r $fasta \\
    -mq $params.base_quality \\
    -b $bed \\
    -l $params.read_length \\
    -t $task.cpus \\
    ${filter[0]}
"""
```
```groovy
"""
alleyoop tcperreadpos \\
    -o tcperreadpos \\
    -r $fasta \\
    $snpMode \\
    -mq $params.base_quality \\
    -l $params.read_length \\
    -t $task.cpus \\
    ${filter[0]}
"""
```
```groovy
"""
alleyoop tcperutrpos \\
    -o tcperutrpos \\
    -r $fasta \\
    -b $bed \\
    $snpMode \\
    -mq $params.base_quality \\
    -l $params.read_length \\
    -t $task.cpus \\
    ${filter[0]}
"""
```
```groovy
"""
alleyoop summary \\
    -o summary.txt \\
    $countFolderFlag \\
    ./filter/*bam
"""
```
```groovy
"""
deseq2_slamdunk.r -t $group -d $conditions -c counts -p $params.pvalue -O $group
"""
```
```groovy
"""
multiqc -m fastqc -m cutadapt -m slamdunk -f $rtitle $rfilename $custom_config_file .
"""
```
```groovy
"""
markdown_to_html.py $output_docs -o results_description.html
"""
```