Gene Expression Analysis Workflow for Gekko gecko Tail Regeneration
A Snakemake workflow to analyze gene expression in tail regeneration of G. gecko. This workflow was built using the Snakemake cookiecutter template and is heavily inspired by this workflow.
Workflow overview
Usage
If you use this workflow in a paper, don't forget to give credit to the authors by citing the URL of this (original) repository and, if available, its DOI (see above).
Step 1: Obtain a copy of this workflow
Clone this repository to your local system, into the place where you want to perform the data analysis. Make sure you have the right access / SSH key.
git clone git@github.com:matinnuhamunada/rnaseq_kadal.git
cd rnaseq_kadal
Step 2: Configure workflow
Configure the workflow according to your needs by editing the files in the config/ folder. Adjust config.yaml to configure the workflow execution, and samples.tsv and units.tsv to specify your sample setup. The parameter samples denotes the location of the .tsv file that lists the samples to analyse; the parameter units gives the paired-end .fastq file locations for each sample.
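As an illustration, the expected columns can be inferred from the workflow's prep_trinity.py script (see Code Snippets below): samples.tsv needs at least an ID and a Condition column, and units.tsv needs ID, unit_name, fq1, and fq2. All IDs, conditions, and paths below are hypothetical, and columns are tab-separated.

samples.tsv:

ID    Condition
S1    regenerating
S2    control

units.tsv:

ID    unit_name    fq1                        fq2
S1    rep1         data/raw/S1_R1.fastq.gz    data/raw/S1_R2.fastq.gz
S2    rep1         data/raw/S2_R1.fastq.gz    data/raw/S2_R2.fastq.gz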
Step 3: Install Snakemake
Installing Snakemake using Mamba is advised. If you don't use Mambaforge, you can install Mamba into any other Conda-based Python distribution with:
conda install -n base -c conda-forge mamba
Then install Snakemake with:
mamba create -c conda-forge -c bioconda -n snakemake snakemake
For installation details, see the instructions in the Snakemake documentation .
Step 4: Execute workflow
Activate the conda environment:
conda activate snakemake
Test your configuration by performing a dry-run via
snakemake --use-conda -n
Execute the workflow locally via
snakemake --use-conda --cores $N
using $N cores, or run it in a cluster environment via
snakemake --use-conda --cluster qsub --jobs 100
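Snakemake can also substitute per-job properties such as {threads} into the cluster command. A sketch (the qsub resource syntax varies between schedulers, so adapt it to your cluster):
snakemake --use-conda --cluster "qsub -l nodes=1:ppn={threads}" --jobs 100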
Step 5: Investigate results
After successful execution, you can create a self-contained interactive HTML report with all results via:
snakemake --report report.html
This report can, e.g., be forwarded to your collaborators.
Code Snippets
wrapper: "v0.85.1/bio/fastqc"
wrapper: "v0.85.1/bio/trimmomatic/pe"
shell:
    """
    multiqc --force -o data/processed/qc -n multiqc.html {params.fastqc_dir} {params.trinity_dir} > {log} 2>&1
    """
wrapper: "v0.85.1/bio/busco"
shell:
    """
    python workflow/scripts/prep_trinity.py {wildcards.name} {wildcards.condition} {output} 2> {log}
    """
shell:
    """
    Trinity --seqType fq --SS_lib_type {params.SS_lib_type} --max_memory {resources.mem_gb} --CPU {threads} \
        --samples_file {input} --output data/processed/trinity/trinity_{wildcards.name}-{wildcards.condition} \
        --min_kmer_cov {params.min_kmer_cov} --bflyCPU {threads} --bflyHeapSpaceMax {resources.mem_gb} \
        --normalize_max_read_cov {params.normalize_max_read_cov} --trimmomatic > {log} 2>> {log}
    """
import pandas as pd
from pathlib import Path
import sys


def get_trinity_samples(run_name, condition, outfile,
                        samples_path="config/samples.tsv",
                        units_path="config/units.tsv"):
    """Write a tab-separated Trinity samples file for every sample matching the given condition."""
    samples = (
        pd.read_csv(samples_path, sep="\t", dtype={"ID": str})
        .set_index("ID", drop=False)
        .sort_index()
    )
    units = (
        pd.read_csv(units_path, sep="\t", dtype={"ID": str})
        .set_index("ID", drop=False)
        .sort_index()
    )
    trinity = []
    subset = samples[samples.loc[:, "Condition"] == condition]
    for s in subset.index:
        s_dict = {}
        s_dict["Condition"] = samples.loc[s, "Condition"]
        s_dict["Replicate"] = units.loc[s, "unit_name"]
        s_dict["F"] = units.loc[s, "fq1"]
        s_dict["R"] = units.loc[s, "fq2"]
        trinity.append(s_dict)
    df = pd.DataFrame(trinity)
    df.to_csv(f"{outfile}", sep="\t", index=False, header=False)
    return None


if __name__ == "__main__":
    get_trinity_samples(sys.argv[1], sys.argv[2], sys.argv[3])
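For a hypothetical condition with two replicates, the script writes a header-less, tab-separated file in the four-column layout Trinity's --samples_file option expects (condition, replicate, forward reads, reverse reads); the values below are illustrative:

regenerating    rep1    data/raw/S1_R1.fastq.gz    data/raw/S1_R2.fastq.gz
regenerating    rep2    data/raw/S2_R1.fastq.gz    data/raw/S2_R2.fastq.gz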
__author__ = "Tessa Pierce"
__copyright__ = "Copyright 2018, Tessa Pierce"
__email__ = "ntpierce@gmail.com"
__license__ = "MIT"

from snakemake.shell import shell
from os import path
import tempfile

log = snakemake.log_fmt_shell(stdout=True, stderr=True)

extra = snakemake.params.get("extra", "")
mode = snakemake.params.get("mode")
assert mode is not None, "please input a run mode: genome, transcriptome or proteins"
lineage = snakemake.params.get("lineage")
assert lineage is not None, "please input the path to a lineage for busco assessment"

stripped_output = snakemake.output[0].rstrip("/")
out = path.basename(stripped_output)
out_dirname = path.dirname(stripped_output)
out_path = " --out_path {} ".format(out_dirname) if out_dirname else ""

download_path_dir = snakemake.params.get("download_path", "")
download_path = (
    " --download_path {} ".format(download_path_dir) if download_path_dir else ""
)

# note: --force allows snakemake to handle rewriting files as necessary
# without needing to specify *all* busco outputs as snakemake outputs
shell(
    "busco --in {snakemake.input} --out {out} --force "
    "{out_path} "
    "--cpu {snakemake.threads} --mode {mode} --lineage {lineage} "
    "{download_path} "
    "{extra} {log}"
)
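The output-path handling above splits the rule's first output into a BUSCO run name and its parent directory. A minimal sketch, assuming a hypothetical output path:

from os import path

# Hypothetical Snakemake output path for the BUSCO rule
stripped = "results/busco/assembly_busco/".rstrip("/")
print(path.basename(stripped))  # assembly_busco -> passed as --out
print(path.dirname(stripped))   # results/busco  -> passed as --out_path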
__author__ = "Julian de Ruiter"
__copyright__ = "Copyright 2017, Julian de Ruiter"
__email__ = "julianderuiter@gmail.com"
__license__ = "MIT"

from os import path
import re
from tempfile import TemporaryDirectory

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=True, stderr=True)


def basename_without_ext(file_path):
    """Returns basename of file path, without the file extension."""
    base = path.basename(file_path)
    # Remove file extension(s) (similar to the internal fastqc approach)
    base = re.sub("\\.gz$", "", base)
    base = re.sub("\\.bz2$", "", base)
    base = re.sub("\\.txt$", "", base)
    base = re.sub("\\.fastq$", "", base)
    base = re.sub("\\.fq$", "", base)
    base = re.sub("\\.sam$", "", base)
    base = re.sub("\\.bam$", "", base)
    return base


# Run fastqc, since there can be race conditions if multiple jobs
# use the same fastqc dir, we create a temp dir.
with TemporaryDirectory() as tempdir:
    shell(
        "fastqc {snakemake.params} -t {snakemake.threads} "
        "--outdir {tempdir:q} {snakemake.input[0]:q}"
        " {log}"
    )

    # Move outputs into proper position.
    output_base = basename_without_ext(snakemake.input[0])
    html_path = path.join(tempdir, output_base + "_fastqc.html")
    zip_path = path.join(tempdir, output_base + "_fastqc.zip")

    if snakemake.output.html != html_path:
        shell("mv {html_path:q} {snakemake.output.html:q}")

    if snakemake.output.zip != zip_path:
        shell("mv {zip_path:q} {snakemake.output.zip:q}")
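Note that the extensions are stripped in order, so a gzipped FASTQ loses both suffixes. A self-contained sketch of the same logic (the helper name and filename are hypothetical):

import re
from os import path

def strip_read_ext(file_path):
    """Mirror of the wrapper's extension stripping."""
    base = path.basename(file_path)
    # .gz/.bz2 are removed first, then the data extension, so "x.fastq.gz" -> "x"
    for ext in (".gz", ".bz2", ".txt", ".fastq", ".fq", ".sam", ".bam"):
        base = re.sub(re.escape(ext) + "$", "", base)
    return base

print(strip_read_ext("data/raw/S1_R1.fastq.gz"))  # S1_R1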
__author__ = "Johannes Köster, Jorge Langa"
__copyright__ = "Copyright 2016, Johannes Köster"
__email__ = "koester@jimmy.harvard.edu"
__license__ = "MIT"

from snakemake.shell import shell
from snakemake_wrapper_utils.java import get_java_opts


# Distribute available threads between trimmomatic itself and any potential pigz instances
def distribute_threads(input_files, output_files, available_threads):
    gzipped_input_files = sum(1 for file in input_files if file.endswith(".gz"))
    gzipped_output_files = sum(1 for file in output_files if file.endswith(".gz"))
    potential_threads_per_process = available_threads // (
        1 + gzipped_input_files + gzipped_output_files
    )
    if potential_threads_per_process > 0:
        # decompressing pigz creates at most 4 threads
        pigz_input_threads = (
            min(4, potential_threads_per_process) if gzipped_input_files != 0 else 0
        )
        pigz_output_threads = (
            (available_threads - pigz_input_threads * gzipped_input_files)
            // (1 + gzipped_output_files)
            if gzipped_output_files != 0
            else 0
        )
        trimmomatic_threads = (
            available_threads
            - pigz_input_threads * gzipped_input_files
            - pigz_output_threads * gzipped_output_files
        )
    else:
        # not enough threads for pigz
        pigz_input_threads = 0
        pigz_output_threads = 0
        trimmomatic_threads = available_threads
    return trimmomatic_threads, pigz_input_threads, pigz_output_threads


def compose_input_gz(filename, threads):
    if filename.endswith(".gz") and threads > 0:
        return "<(pigz -p {threads} --decompress --stdout {filename})".format(
            threads=threads, filename=filename
        )
    return filename


def compose_output_gz(filename, threads, compression_level):
    if filename.endswith(".gz") and threads > 0:
        return ">(pigz -p {threads} {compression_level} > {filename})".format(
            threads=threads, compression_level=compression_level, filename=filename
        )
    return filename


extra = snakemake.params.get("extra", "")
java_opts = get_java_opts(snakemake)
log = snakemake.log_fmt_shell(stdout=True, stderr=True)
compression_level = snakemake.params.get("compression_level", "-5")
trimmer = " ".join(snakemake.params.trimmer)

# Distribute threads
input_files = [snakemake.input.r1, snakemake.input.r2]
output_files = [
    snakemake.output.r1,
    snakemake.output.r1_unpaired,
    snakemake.output.r2,
    snakemake.output.r2_unpaired,
]

trimmomatic_threads, input_threads, output_threads = distribute_threads(
    input_files, output_files, snakemake.threads
)

input_r1, input_r2 = [
    compose_input_gz(filename, input_threads) for filename in input_files
]

output_r1, output_r1_unp, output_r2, output_r2_unp = [
    compose_output_gz(filename, output_threads, compression_level)
    for filename in output_files
]

shell(
    "trimmomatic PE -threads {trimmomatic_threads} {java_opts} {extra} "
    "{input_r1} {input_r2} "
    "{output_r1} {output_r1_unp} "
    "{output_r2} {output_r2_unp} "
    "{trimmer} "
    "{log}"
)
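To make the thread split concrete, here is a small worked example (the budget of 8 threads is a hypothetical value): with two gzipped inputs and four gzipped outputs, 8 // (1 + 2 + 4) = 1, so each pigz process gets one thread and Trimmomatic keeps the remaining two:

# Reuses distribute_threads as defined in the wrapper above; filenames are hypothetical.
inputs = ["r1.fastq.gz", "r2.fastq.gz"]
outputs = ["r1.gz", "r1_unpaired.gz", "r2.gz", "r2_unpaired.gz"]

trim, pigz_in, pigz_out = distribute_threads(inputs, outputs, 8)
# pigz_in  = min(4, 8 // 7) = 1
# pigz_out = (8 - 1*2) // (1 + 4) = 1
# trim     = 8 - 1*2 - 1*4 = 2   (total: 2 + 2*1 + 4*1 = 8)
print(trim, pigz_in, pigz_out)  # 2 1 1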
Support
- Future updates