This is the template for a new Snakemake workflow. Replace this text with a comprehensive description covering the purpose and domain.

Insert your code into the respective folders, i.e. `scripts`, `rules`, and `envs`. Define the entry point of the workflow in the `Snakefile` and the main configuration in the `config.yaml` file.

Inspiration: https://nf-co.re/atacseq
Authors
- Antonie Vietor (@AntonieV)
Usage
If you use this workflow in a paper, don't forget to give credit to the authors by citing the URL of this (original) repository and, if available, its DOI (see above).
Step 1: Obtain a copy of this workflow
- Create a new GitHub repository using this workflow as a template.
- Clone the newly created repository to your local system, into the place where you want to perform the data analysis.
Step 2: Configure workflow
Configure the workflow according to your needs by editing the files in the `config/` folder. Adjust `config.yaml` to configure the workflow execution, and `samples.tsv` to specify your sample setup.
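If you are unsure what the sample sheet should contain, the comments in the shipped `config/config.yaml` and `config/samples.tsv` are the authoritative reference. As a purely hypothetical sketch (the column names here are illustrative assumptions, not this workflow's actual schema), a tab-separated sample sheet often looks like:

```
sample	condition	fastq_1	fastq_2
A1	treatment	A1_R1.fastq.gz	A1_R2.fastq.gz
B1	control	B1_R1.fastq.gz
```

Single-end samples would simply leave the second FASTQ column empty.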
Step 3: Install Snakemake
Install Snakemake using conda:

```shell
conda create -c bioconda -c conda-forge -n snakemake snakemake
```

For installation details, see the instructions in the Snakemake documentation.
Step 4: Execute workflow
Activate the conda environment:

```shell
conda activate snakemake
```

Test your configuration by performing a dry-run via

```shell
snakemake --use-conda -n
```

Execute the workflow locally via

```shell
snakemake --use-conda --cores $N
```

using `$N` cores, or run it in a cluster environment via

```shell
snakemake --use-conda --cluster qsub --jobs 100
```

or

```shell
snakemake --use-conda --drmaa --jobs 100
```

If you want to fix not only the software stack but also the underlying OS, use

```shell
snakemake --use-conda --use-singularity
```

in combination with any of the modes above. See the Snakemake documentation for further details.
Step 5: Investigate results
After successful execution, you can create a self-contained interactive HTML report with all results via:

```shell
snakemake --report report.html
```

This report can, e.g., be forwarded to your collaborators. An example (using some trivial test data) can be seen here.
Step 6: Commit changes
Whenever you change something, don't forget to commit the changes back to your GitHub copy of the repository:

```shell
git commit -a
git push
```
Step 7: Obtain updates from upstream
Whenever you want to synchronize your workflow copy with new developments from upstream, do the following.

- Once, register the upstream repository in your local copy:

  ```shell
  git remote add -f upstream git@github.com:snakemake-workflows/chipseq.git
  ```

  or

  ```shell
  git remote add -f upstream https://github.com/snakemake-workflows/chipseq.git
  ```

  if you have not set up SSH keys.
- Update the upstream version: `git fetch upstream`.
- Create a diff with the current version: `git diff HEAD upstream/master workflow > upstream-changes.diff`.
- Investigate the changes: `vim upstream-changes.diff`.
- Apply the modified diff via: `git apply upstream-changes.diff`.
- Carefully check whether you need to update the config files: `git diff HEAD upstream/master config`. If so, do it manually, and only where necessary, since you would otherwise likely overwrite your settings and samples.
Step 8: Contribute back
In case you have also changed or added steps, please consider contributing them back to the original repository:

- Fork the original repo to a personal or lab account.
- Clone the fork to your local system, to a different place than where you ran your analysis.
- Copy the modified files from your analysis to the clone of your fork, e.g., `cp -r workflow path/to/fork`. Make sure not to accidentally copy config file contents or sample sheets. Instead, manually update the example config files if necessary.
- Commit and push your changes to your fork.
- Create a pull request against the original repository.
Testing
Test cases are in the subfolder `.test`. They are automatically executed via continuous integration with GitHub Actions.
Code Snippets
- `wrapper: "0.64.0/bio/cutadapt/pe"`
- `wrapper: "0.64.0/bio/cutadapt/se"`
- `wrapper: "0.64.0/bio/samtools/view"`
- `wrapper: "0.64.0/bio/bamtools/filter_json"`
- `shell: " ../workflow/scripts/rm_orphan_pe_bam.py {input} {output.bam} {params} 2> {log}"`
- `wrapper: "0.64.0/bio/samtools/sort"`
- `wrapper: "0.64.0/bio/bwa/mem"`
- `wrapper: "0.64.0/bio/picard/mergesamfiles"`
- `wrapper: "0.64.0/bio/picard/markduplicates"`
- `wrapper: "0.64.0/bio/preseq/lc_extrap"`
- `wrapper: "0.64.0/bio/picard/collectmultiplemetrics"`
- `wrapper: "0.64.0/bio/bedtools/genomecov"`
- `shell: "sort -k1,1 -k2,2n {input} > {output} 2> {log}"`
- `wrapper: "0.64.0/bio/ucsc/bedGraphToBigWig"`
- `shell: "find {input} -type f -name '*.bigWig' -exec echo -e 'results/big_wig/\"{{}}\"\t0,0,178' \; > {output} 2> {log}"`
- `wrapper: "0.64.0/bio/deeptools/computematrix"`
- `wrapper: "0.64.0/bio/deeptools/plotprofile"`
- `wrapper: "0.64.0/bio/deeptools/plotheatmap"`
-
  ```python
  shell:
      "( Rscript -e \"library(caTools); source('../workflow/scripts/run_spp.R')\" "
      " -c={input} -savp={output.plot} -savd={output.r_data} "
      " -out={output.res_phantom} -p={threads} 2>&1 ) >{log}"
  ```
- `script: "../scripts/phantompeak_correlation.R"`
-
  ```python
  shell:
      "( gawk -v OFS='\t' '{{print $1, $9}}' {input.data} | cat {input.nsc_header} - > {output.nsc} && "
      " gawk -v OFS='\t' '{{print $1, $10}}' {input.data} | cat {input.rsc_header} - > {output.rsc} 2>&1 ) >{log}"
  ```
- `wrapper: "0.64.0/bio/fastqc"`
- `wrapper: "0.64.0/bio/multiqc"`
- `shell: "curl -L ftp://ftp.ensembl.org/pub/release-{params.release}/fasta/{params.species}/{params.datatype}/{params.spec_up}.{params.build}.{params.datatype}.{params.suffix}.fa.gz |gzip -d > {output} 2> {log}"`
- `wrapper: "0.64.0/bio/reference/ensembl-annotation"`
- `shell: "../workflow/scripts/gtf2bed {input} > {output} 2> {log}"`
- `wrapper: "0.64.0/bio/samtools/faidx"`
- `wrapper: "0.64.0/bio/bwa/index"`
- `shell: "cut -f 1,2 {input} > {output} 2> {log}"`
- `wrapper: "0.64.0/bio/samtools/flagstat"`
- `wrapper: "0.64.0/bio/samtools/idxstats"`
- `wrapper: "0.64.0/bio/samtools/stats"`
- `wrapper: "0.64.0/bio/samtools/index"`
```r
log <- file(snakemake@log[[1]], open="wt")
sink(log)
sink(log, type="message")

system(paste0("cp ", snakemake@input[["header"]], " ", snakemake@output[[1]]))
load(snakemake@input[["data"]])
write.table(crosscorr['cross.correlation'], file=snakemake@output[[1]],
            sep=',', quote=FALSE, row.names=FALSE, col.names=FALSE, append=TRUE)
```
```python
import os
import errno  # needed by makedir's error handling; missing in the original
import pysam
import argparse

############################################
############################################
## PARSE ARGUMENTS
############################################
############################################

Description = 'Remove singleton reads from paired-end BAM file, i.e. if read1 is present in BAM file without read2 and vice versa.'
Epilog = """Example usage: bampe_rm_orphan.py <BAM_INPUT_FILE> <BAM_OUTPUT_FILE>"""

argParser = argparse.ArgumentParser(description=Description, epilog=Epilog)

## REQUIRED PARAMETERS
argParser.add_argument('BAM_INPUT_FILE', help="Input BAM file sorted by name.")
argParser.add_argument('BAM_OUTPUT_FILE', help="Output BAM file sorted by name.")

## OPTIONAL PARAMETERS
argParser.add_argument('-fr', '--only_fr_pairs', dest="ONLY_FR_PAIRS",
                       help="Only keeps pairs that are in FR orientation on same chromosome.",
                       action='store_true')
args = argParser.parse_args()

############################################
############################################
## HELPER FUNCTIONS
############################################
############################################

def makedir(path):
    if not len(path) == 0:
        try:
            #!# AVI: changed because of race conditions if directory exists, original code: os.makedirs(path)
            os.makedirs(path, exist_ok=True)
        except OSError as exception:
            if exception.errno != errno.EEXIST:
                raise

############################################
############################################
## MAIN FUNCTION
############################################
############################################

def bampe_rm_orphan(BAMIn, BAMOut, onlyFRPairs=False):
    ## SETUP DIRECTORY/FILE STRUCTURE
    OutDir = os.path.dirname(BAMOut)
    makedir(OutDir)

    ## COUNT VARIABLES
    totalReads = 0; totalOutputPairs = 0; totalSingletons = 0; totalImproperPairs = 0

    ## ITERATE THROUGH BAM FILE
    EOF = 0
    SAMFin = pysam.AlignmentFile(BAMIn, "rb")  #!# AVI: changed to new API from pysam.Samfile
    SAMFout = pysam.AlignmentFile(BAMOut, "wb", header=SAMFin.header)  #!# AVI: changed to new API from pysam.Samfile
    currRead = next(SAMFin)  #!# AVI: adapted for the use of the iterator, original code: currRead = SAMFin.next()
    for read in SAMFin.fetch(until_eof=True):  #!# AVI: added .fetch() to explicitly use new API
        totalReads += 1
        if currRead.qname == read.qname:
            pair1 = currRead; pair2 = read

            ## FILTER FOR READS ON SAME CHROMOSOME IN FR ORIENTATION
            if onlyFRPairs:
                if pair1.tid == pair2.tid:
                    ## READ1 FORWARD AND READ2 REVERSE STRAND
                    if not pair1.is_reverse and pair2.is_reverse:
                        if pair1.reference_start <= pair2.reference_start:
                            totalOutputPairs += 1
                            SAMFout.write(pair1)
                            SAMFout.write(pair2)
                        else:
                            totalImproperPairs += 1
                    ## READ1 REVERSE AND READ2 FORWARD STRAND
                    elif pair1.is_reverse and not pair2.is_reverse:
                        if pair2.reference_start <= pair1.reference_start:
                            totalOutputPairs += 1
                            SAMFout.write(pair1)
                            SAMFout.write(pair2)
                        else:
                            totalImproperPairs += 1
                    else:
                        totalImproperPairs += 1
                else:
                    totalImproperPairs += 1
            else:
                totalOutputPairs += 1
                SAMFout.write(pair1)
                SAMFout.write(pair2)

            ## RESET COUNTER
            try:
                totalReads += 1
                currRead = next(SAMFin)  #!# AVI: adapted for the use of the iterator, original code: currRead = SAMFin.next()
            except StopIteration:  # fixed: original read "except: StopIteration", which silently caught everything
                EOF = 1

        ## READS WHERE ONLY ONE OF A PAIR IS IN FILE
        else:
            totalSingletons += 1
            pair1 = currRead
            currRead = read

    if not EOF:
        totalReads += 1
        totalSingletons += 1
        pair1 = currRead

    ## CLOSE ALL FILE HANDLES
    SAMFin.close()
    SAMFout.close()

    LogFile = os.path.join(OutDir, '%s_bampe_rm_orphan.log' % (os.path.basename(BAMOut[:-4])))
    SamLogFile = open(LogFile, 'w')
    SamLogFile.write('\n##############################\n')
    SamLogFile.write('FILES/DIRECTORIES')
    SamLogFile.write('\n##############################\n\n')
    SamLogFile.write('Input File: ' + BAMIn + '\n')
    SamLogFile.write('Output File: ' + BAMOut + '\n')
    SamLogFile.write('\n##############################\n')
    SamLogFile.write('OVERALL COUNTS')
    SamLogFile.write('\n##############################\n\n')
    SamLogFile.write('Total Input Reads = ' + str(totalReads) + '\n')
    SamLogFile.write('Total Output Pairs = ' + str(totalOutputPairs) + '\n')
    SamLogFile.write('Total Singletons Excluded = ' + str(totalSingletons) + '\n')
    SamLogFile.write('Total Improper Pairs Excluded = ' + str(totalImproperPairs) + '\n')
    SamLogFile.write('\n##############################\n')
    SamLogFile.close()

############################################
############################################
## RUN FUNCTION
############################################
############################################

bampe_rm_orphan(BAMIn=args.BAM_INPUT_FILE, BAMOut=args.BAM_OUTPUT_FILE, onlyFRPairs=args.ONLY_FR_PAIRS)
```
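Stripped of pysam and file I/O, the core of the script above is a single pass over a name-sorted stream: two adjacent records with the same query name form a pair, and a record whose mate never shows up is a singleton. The pairing logic can be sketched as a pure function (a simplified illustration, not the script itself; the function name is mine):

```python
def count_pairs_and_singletons(read_names):
    """Walk a name-sorted list of read names the way the script walks a
    name-sorted BAM: two adjacent identical names form a kept pair, a
    name without its mate counts as a dropped singleton."""
    pairs = 0
    singletons = 0
    i = 0
    while i < len(read_names):
        if i + 1 < len(read_names) and read_names[i] == read_names[i + 1]:
            pairs += 1       # read1 and read2 are adjacent -> keep the pair
            i += 2
        else:
            singletons += 1  # mate missing -> drop the orphan
            i += 1
    return pairs, singletons

print(count_pairs_and_singletons(["r1", "r1", "r2", "r3", "r3", "r4"]))  # → (2, 2)
```

This also makes clear why the script requires name-sorted input: with coordinate-sorted input, mates would rarely be adjacent and nearly everything would be miscounted as a singleton.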
```python
__author__ = "Antonie Vietor"
__copyright__ = "Copyright 2020, Antonie Vietor"
__email__ = "antonie.v@gmx.de"
__license__ = "MIT"

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=False, stderr=True)

region = snakemake.params.get("region")
region_param = ""
if region and region is not None:
    region_param = ' -region "' + region + '"'

shell(
    "(bamtools filter"
    " -in {snakemake.input[0]}"
    " -out {snakemake.output[0]}"
    + region_param +
    " -script {snakemake.params.json}) {log}"
)
```
```python
__author__ = "Antonie Vietor"
__copyright__ = "Copyright 2020, Antonie Vietor"
__email__ = "antonie.v@gmx.de"
__license__ = "MIT"

import os
from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=False, stderr=True)

genome = ""
input_file = ""

if (os.path.splitext(snakemake.input[0])[-1]) == ".bam":
    input_file = "-ibam " + snakemake.input[0]

if len(snakemake.input) > 1:
    if (os.path.splitext(snakemake.input[0])[-1]) == ".bed":
        input_file = "-i " + snakemake.input.get("bed")
        genome = "-g " + snakemake.input.get("ref")

shell(
    "(genomeCoverageBed"
    " {snakemake.params}"
    " {input_file}"
    " {genome}"
    " > {snakemake.output[0]}) {log}"
)
```
```python
__author__ = "Patrik Smeds"
__copyright__ = "Copyright 2016, Patrik Smeds"
__email__ = "patrik.smeds@gmail.com"
__license__ = "MIT"

from os import path
from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=False, stderr=True)

# Check inputs/arguments.
if len(snakemake.input) == 0:
    raise ValueError("A reference genome has to be provided!")
elif len(snakemake.input) > 1:
    raise ValueError("Only one reference genome can be inputed!")

# Prefix that should be used for the database
prefix = snakemake.params.get("prefix", "")
if len(prefix) > 0:
    prefix = "-p " + prefix

# Construction algorithm that will be used to build the database, default is bwtsw
construction_algorithm = snakemake.params.get("algorithm", "")
if len(construction_algorithm) != 0:
    construction_algorithm = "-a " + construction_algorithm

shell(
    "bwa index"
    " {prefix}"
    " {construction_algorithm}"
    " {snakemake.input[0]}"
    " {log}"
)
```
```python
__author__ = "Johannes Köster, Julian de Ruiter"
__copyright__ = "Copyright 2016, Johannes Köster and Julian de Ruiter"
__email__ = "koester@jimmy.harvard.edu, julianderuiter@gmail.com"
__license__ = "MIT"

from os import path
from snakemake.shell import shell

# Extract arguments.
extra = snakemake.params.get("extra", "")
sort = snakemake.params.get("sort", "none")
sort_order = snakemake.params.get("sort_order", "coordinate")
sort_extra = snakemake.params.get("sort_extra", "")

log = snakemake.log_fmt_shell(stdout=False, stderr=True)

# Check inputs/arguments.
if not isinstance(snakemake.input.reads, str) and len(snakemake.input.reads) not in {
    1,
    2,
}:
    raise ValueError("input must have 1 (single-end) or 2 (paired-end) elements")

if sort_order not in {"coordinate", "queryname"}:
    raise ValueError("Unexpected value for sort_order ({})".format(sort_order))

# Determine which pipe command to use for converting to bam or sorting.
if sort == "none":
    # Simply convert to bam using samtools view.
    pipe_cmd = "samtools view -Sbh -o {snakemake.output[0]} -"
elif sort == "samtools":
    # Sort alignments using samtools sort.
    pipe_cmd = "samtools sort {sort_extra} -o {snakemake.output[0]} -"
    # Add name flag if needed.
    if sort_order == "queryname":
        sort_extra += " -n"
    prefix = path.splitext(snakemake.output[0])[0]
    sort_extra += " -T " + prefix + ".tmp"
elif sort == "picard":
    # Sort alignments using picard SortSam.
    pipe_cmd = (
        "picard SortSam {sort_extra} INPUT=/dev/stdin"
        " OUTPUT={snakemake.output[0]} SORT_ORDER={sort_order}"
    )
else:
    raise ValueError("Unexpected value for params.sort ({})".format(sort))

shell(
    "(bwa mem"
    " -t {snakemake.threads}"
    " {extra}"
    " {snakemake.params.index}"
    " {snakemake.input.reads}"
    " | " + pipe_cmd + ") {log}"
)
```
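The interesting part of this wrapper is the dispatch on `params.sort`, which decides what consumes the SAM stream that `bwa mem` writes to stdout. That decision can be isolated as a small pure function (an illustrative sketch of the same branching, not part of the wrapper; the function name and the simplified `{output}` placeholder are mine):

```python
def choose_pipe_cmd(sort="none", sort_order="coordinate"):
    """Mirror the wrapper's choice of the command piped after bwa mem:
    plain BAM conversion, samtools sort, or picard SortSam."""
    if sort_order not in {"coordinate", "queryname"}:
        raise ValueError("Unexpected value for sort_order ({})".format(sort_order))
    if sort == "none":
        # just convert SAM -> BAM, no sorting
        return "samtools view -Sbh -o {output} -"
    if sort == "samtools":
        # sort by coordinate by default; -n switches to name sorting
        name_flag = "-n " if sort_order == "queryname" else ""
        return "samtools sort " + name_flag + "-o {output} -"
    if sort == "picard":
        return ("picard SortSam INPUT=/dev/stdin"
                " OUTPUT={output} SORT_ORDER=" + sort_order)
    raise ValueError("Unexpected value for params.sort ({})".format(sort))
```

Note that in the real wrapper the `samtools` branch also appends `-n` and a `-T` temp-file prefix to `sort_extra`, which is interpolated into the command only when `shell()` formats it.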
```python
__author__ = "Julian de Ruiter"
__copyright__ = "Copyright 2017, Julian de Ruiter"
__email__ = "julianderuiter@gmail.com"
__license__ = "MIT"

from snakemake.shell import shell

n = len(snakemake.input)
assert n == 2, "Input must contain 2 (paired-end) elements."

log = snakemake.log_fmt_shell(stdout=False, stderr=True)

shell(
    "cutadapt"
    " {snakemake.params.adapters}"
    " {snakemake.params.others}"
    " -o {snakemake.output.fastq1}"
    " -p {snakemake.output.fastq2}"
    " -j {snakemake.threads}"
    " {snakemake.input}"
    " > {snakemake.output.qc} {log}"
)
```
```python
__author__ = "Julian de Ruiter"
__copyright__ = "Copyright 2017, Julian de Ruiter"
__email__ = "julianderuiter@gmail.com"
__license__ = "MIT"

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=False, stderr=True)

shell(
    "cutadapt"
    " {snakemake.params}"
    " -j {snakemake.threads}"
    " -o {snakemake.output.fastq}"
    " {snakemake.input[0]}"
    " > {snakemake.output.qc} {log}"
)
```
```python
__author__ = "Antonie Vietor"
__copyright__ = "Copyright 2020, Antonie Vietor"
__email__ = "antonie.v@gmx.de"
__license__ = "MIT"

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=True, stderr=True)

out_tab = snakemake.output.get("matrix_tab")
out_bed = snakemake.output.get("matrix_bed")

optional_output = ""

if out_tab:
    optional_output += " --outFileNameMatrix {out_tab} ".format(out_tab=out_tab)

if out_bed:
    optional_output += " --outFileSortedRegions {out_bed} ".format(out_bed=out_bed)

shell(
    "(computeMatrix "
    "{snakemake.params.command} "
    "{snakemake.params.extra} "
    "-R {snakemake.input.bed} "
    "-S {snakemake.input.bigwig} "
    "-o {snakemake.output.matrix_gz} "
    "{optional_output}) {log}"
)
```
```python
__author__ = "Antonie Vietor"
__copyright__ = "Copyright 2020, Antonie Vietor"
__email__ = "antonie.v@gmx.de"
__license__ = "MIT"

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=True, stderr=True)

out_region = snakemake.output.get("regions")
out_matrix = snakemake.output.get("heatmap_matrix")

optional_output = ""

if out_region:
    optional_output += " --outFileSortedRegions {out_region} ".format(
        out_region=out_region
    )

if out_matrix:
    optional_output += " --outFileNameMatrix {out_matrix} ".format(
        out_matrix=out_matrix
    )

shell(
    "(plotHeatmap "
    "-m {snakemake.input[0]} "
    "-o {snakemake.output.heatmap_img} "
    "{optional_output} "
    "{snakemake.params}) {log}"
)
```
```python
__author__ = "Antonie Vietor"
__copyright__ = "Copyright 2020, Antonie Vietor"
__email__ = "antonie.v@gmx.de"
__license__ = "MIT"

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=True, stderr=True)

out_region = snakemake.output.get("regions")
out_data = snakemake.output.get("data")

optional_output = ""

if out_region:
    optional_output += " --outFileSortedRegions {out_region} ".format(
        out_region=out_region
    )

if out_data:
    optional_output += " --outFileNameData {out_data} ".format(out_data=out_data)

shell(
    "(plotProfile "
    "-m {snakemake.input[0]} "
    "-o {snakemake.output.plot_img} "
    "{optional_output} "
    "{snakemake.params}) {log}"
)
```
```python
__author__ = "Julian de Ruiter"
__copyright__ = "Copyright 2017, Julian de Ruiter"
__email__ = "julianderuiter@gmail.com"
__license__ = "MIT"

from os import path
from tempfile import TemporaryDirectory

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=False, stderr=True)


def basename_without_ext(file_path):
    """Returns basename of file path, without the file extension."""
    base = path.basename(file_path)
    split_ind = 2 if base.endswith(".fastq.gz") else 1
    base = ".".join(base.split(".")[:-split_ind])
    return base


# Run fastqc, since there can be race conditions if multiple jobs
# use the same fastqc dir, we create a temp dir.
with TemporaryDirectory() as tempdir:
    shell(
        "fastqc {snakemake.params} --quiet -t {snakemake.threads} "
        "--outdir {tempdir:q} {snakemake.input[0]:q}"
        " {log:q}"
    )

    # Move outputs into proper position.
    output_base = basename_without_ext(snakemake.input[0])
    html_path = path.join(tempdir, output_base + "_fastqc.html")
    zip_path = path.join(tempdir, output_base + "_fastqc.zip")

    if snakemake.output.html != html_path:
        shell("mv {html_path:q} {snakemake.output.html:q}")

    if snakemake.output.zip != zip_path:
        shell("mv {zip_path:q} {snakemake.output.zip:q}")
```
```python
__author__ = "Julian de Ruiter"
__copyright__ = "Copyright 2017, Julian de Ruiter"
__email__ = "julianderuiter@gmail.com"
__license__ = "MIT"

from os import path
from snakemake.shell import shell

input_dirs = set(path.dirname(fp) for fp in snakemake.input)
output_dir = path.dirname(snakemake.output[0])
output_name = path.basename(snakemake.output[0])
log = snakemake.log_fmt_shell(stdout=True, stderr=True)

shell(
    "multiqc"
    " {snakemake.params}"
    " --force"
    " -o {output_dir}"
    " -n {output_name}"
    " {input_dirs}"
    " {log}"
)
```
```python
__author__ = "David Laehnemann, Antonie Vietor"
__copyright__ = "Copyright 2020, David Laehnemann, Antonie Vietor"
__email__ = "antonie.v@gmx.de"
__license__ = "MIT"

import sys
from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=False, stderr=True)

res = snakemake.resources.get("mem_gb", "3")
if not res or res is None:
    res = 3

exts_to_prog = {
    ".alignment_summary_metrics": "CollectAlignmentSummaryMetrics",
    ".insert_size_metrics": "CollectInsertSizeMetrics",
    ".insert_size_histogram.pdf": "CollectInsertSizeMetrics",
    ".quality_distribution_metrics": "QualityScoreDistribution",
    ".quality_distribution.pdf": "QualityScoreDistribution",
    ".quality_by_cycle_metrics": "MeanQualityByCycle",
    ".quality_by_cycle.pdf": "MeanQualityByCycle",
    ".base_distribution_by_cycle_metrics": "CollectBaseDistributionByCycle",
    ".base_distribution_by_cycle.pdf": "CollectBaseDistributionByCycle",
    ".gc_bias.detail_metrics": "CollectGcBiasMetrics",
    ".gc_bias.summary_metrics": "CollectGcBiasMetrics",
    ".gc_bias.pdf": "CollectGcBiasMetrics",
    ".rna_metrics": "RnaSeqMetrics",
    ".bait_bias_detail_metrics": "CollectSequencingArtifactMetrics",
    ".bait_bias_summary_metrics": "CollectSequencingArtifactMetrics",
    ".error_summary_metrics": "CollectSequencingArtifactMetrics",
    ".pre_adapter_detail_metrics": "CollectSequencingArtifactMetrics",
    ".pre_adapter_summary_metrics": "CollectSequencingArtifactMetrics",
    ".quality_yield_metrics": "CollectQualityYieldMetrics",
}
progs = set()

for file in snakemake.output:
    matched = False
    for ext in exts_to_prog:
        if file.endswith(ext):
            progs.add(exts_to_prog[ext])
            matched = True
    if not matched:
        sys.exit(
            "Unknown type of metrics file requested, for possible metrics files, "
            "see https://snakemake-wrappers.readthedocs.io/en/stable/wrappers/picard/collectmultiplemetrics.html"
        )

programs = " PROGRAM=" + " PROGRAM=".join(progs)

out = str(snakemake.wildcards.sample)  # as default
output_file = str(snakemake.output[0])
for ext in exts_to_prog:
    if output_file.endswith(ext):
        out = output_file[: -len(ext)]
        break

shell(
    "(picard -Xmx{res}g CollectMultipleMetrics "
    "I={snakemake.input.bam} "
    "O={out} "
    "R={snakemake.input.ref} "
    "{snakemake.params}{programs}) {log}"
)
```
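The heart of this wrapper is the mapping from requested output-file extensions to the Picard `PROGRAM` values that produce them. That lookup can be expressed as a standalone function (an illustrative sketch; the function name and the trimmed-down `demo_map` are mine, the real wrapper uses the full `exts_to_prog` table above):

```python
def programs_for_outputs(outputs, exts_to_prog):
    """Collect the set of Picard PROGRAM values needed for the requested
    metrics files; unknown extensions are rejected, mirroring the
    wrapper's check."""
    progs = set()
    for file in outputs:
        matched = {prog for ext, prog in exts_to_prog.items() if file.endswith(ext)}
        if not matched:
            raise ValueError("Unknown type of metrics file requested: " + file)
        progs |= matched
    return progs

# tiny subset of the wrapper's mapping, for illustration only
demo_map = {
    ".alignment_summary_metrics": "CollectAlignmentSummaryMetrics",
    ".insert_size_metrics": "CollectInsertSizeMetrics",
    ".insert_size_histogram.pdf": "CollectInsertSizeMetrics",
}
```

Because the result is a set, requesting both the insert-size metrics file and its histogram PDF still runs `CollectInsertSizeMetrics` only once.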
```python
__author__ = "Johannes Köster"
__copyright__ = "Copyright 2016, Johannes Köster"
__email__ = "koester@jimmy.harvard.edu"
__license__ = "MIT"

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=True, stderr=True)

shell(
    "picard MarkDuplicates {snakemake.params} INPUT={snakemake.input} "
    "OUTPUT={snakemake.output.bam} METRICS_FILE={snakemake.output.metrics} "
    "{log}"
)
```
```python
__author__ = "Julian de Ruiter"
__copyright__ = "Copyright 2017, Julian de Ruiter"
__email__ = "julianderuiter@gmail.com"
__license__ = "MIT"

from snakemake.shell import shell

inputs = " ".join("INPUT={}".format(in_) for in_ in snakemake.input)
log = snakemake.log_fmt_shell(stdout=False, stderr=True)

shell(
    "picard"
    " MergeSamFiles"
    " {snakemake.params}"
    " {inputs}"
    " OUTPUT={snakemake.output[0]}"
    " {log}"
)
```
```python
__author__ = "Antonie Vietor"
__copyright__ = "Copyright 2020, Antonie Vietor"
__email__ = "antonie.v@gmx.de"
__license__ = "MIT"

import os
from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=False, stderr=True)

params = ""
if (os.path.splitext(snakemake.input[0])[-1]) == ".bam":
    if "-bam" not in (snakemake.input[0]):
        params = "-bam "

shell(
    "(preseq lc_extrap {params} {snakemake.params} {snakemake.input[0]} -output {snakemake.output[0]}) {log}"
)
```
```python
__author__ = "Johannes Köster"
__copyright__ = "Copyright 2019, Johannes Köster"
__email__ = "johannes.koester@uni-due.de"
__license__ = "MIT"

import subprocess
import sys
from snakemake.shell import shell

species = snakemake.params.species.lower()
release = int(snakemake.params.release)
fmt = snakemake.params.fmt
build = snakemake.params.build
flavor = snakemake.params.get("flavor", "")

branch = ""
if release >= 81 and build == "GRCh37":
    # use the special grch37 branch for new releases
    branch = "grch37/"

if flavor:
    flavor += "."

log = snakemake.log_fmt_shell(stdout=False, stderr=True)

suffix = ""
if fmt == "gtf":
    suffix = "gtf.gz"
elif fmt == "gff3":
    suffix = "gff3.gz"

url = "ftp://ftp.ensembl.org/pub/{branch}release-{release}/{fmt}/{species}/{species_cap}.{build}.{release}.{flavor}{suffix}".format(
    release=release,
    build=build,
    species=species,
    fmt=fmt,
    species_cap=species.capitalize(),
    suffix=suffix,
    flavor=flavor,
    branch=branch,
)

try:
    shell("(curl -L {url} | gzip -d > {snakemake.output[0]}) {log}")
except subprocess.CalledProcessError as e:
    if snakemake.log:
        sys.stderr = open(snakemake.log[0], "a")
    print(
        "Unable to download annotation data from Ensembl. "
        "Did you check that this combination of species, build, and release is actually provided?",
        file=sys.stderr,
    )
    exit(1)
```
```python
__author__ = "Michael Chambers"
__copyright__ = "Copyright 2019, Michael Chambers"
__email__ = "greenkidneybean@gmail.com"
__license__ = "MIT"

from snakemake.shell import shell

shell("samtools faidx {snakemake.params} {snakemake.input[0]} > {snakemake.output[0]}")
```
```python
__author__ = "Christopher Preusch"
__copyright__ = "Copyright 2017, Christopher Preusch"
__email__ = "cpreusch[at]ust.hk"
__license__ = "MIT"

from snakemake.shell import shell

shell("samtools flagstat {snakemake.input[0]} > {snakemake.output[0]}")
```
```python
__author__ = "Antonie Vietor"
__copyright__ = "Copyright 2020, Antonie Vietor"
__email__ = "antonie.v@gmx.de"
__license__ = "MIT"

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=False, stderr=True)

shell("samtools idxstats {snakemake.input.bam} > {snakemake.output[0]} {log}")
```
```python
__author__ = "Johannes Köster"
__copyright__ = "Copyright 2016, Johannes Köster"
__email__ = "koester@jimmy.harvard.edu"
__license__ = "MIT"

from snakemake.shell import shell

shell("samtools index {snakemake.params} {snakemake.input[0]} {snakemake.output[0]}")
```
```python
__author__ = "Johannes Köster"
__copyright__ = "Copyright 2016, Johannes Köster"
__email__ = "koester@jimmy.harvard.edu"
__license__ = "MIT"

import os
from snakemake.shell import shell

prefix = os.path.splitext(snakemake.output[0])[0]

# Samtools takes additional threads through its option -@
# One thread for samtools
# Other threads are *additional* threads passed to the argument -@
threads = "" if snakemake.threads <= 1 else " -@ {} ".format(snakemake.threads - 1)

shell(
    "samtools sort {snakemake.params} {threads} -o {snakemake.output[0]} "
    "-T {prefix} {snakemake.input[0]}"
)
```
```python
__author__ = "Julian de Ruiter"
__copyright__ = "Copyright 2017, Julian de Ruiter"
__email__ = "julianderuiter@gmail.com"
__license__ = "MIT"

from snakemake.shell import shell

extra = snakemake.params.get("extra", "")
region = snakemake.params.get("region", "")
log = snakemake.log_fmt_shell(stdout=False, stderr=True)

shell("samtools stats {extra} {snakemake.input} {region} > {snakemake.output} {log}")
```
```python
__author__ = "Johannes Köster"
__copyright__ = "Copyright 2016, Johannes Köster"
__email__ = "koester@jimmy.harvard.edu"
__license__ = "MIT"

from snakemake.shell import shell

shell("samtools view {snakemake.params} {snakemake.input[0]} > {snakemake.output[0]}")
```
```python
__author__ = "Roman Chernyatchik"
__copyright__ = "Copyright (c) 2019 JetBrains"
__email__ = "roman.chernyatchik@jetbrains.com"
__license__ = "MIT"

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=True, stderr=True)

extra = snakemake.params.get("extra", "")
shell(
    "bedGraphToBigWig {extra}"
    " {snakemake.input.bedGraph} {snakemake.input.chromsizes}"
    " {snakemake.output} {log}"
)
```