RNA Biology Pipeline to Characterize Protein-RNA Interactions
1. Getting Started
If you use this workflow in a paper, don't forget to give credit to the authors by citing the URL of this (original) repository and, if available, its DOI (tba later!).
1.1 Download the workflow
Please clone this repository to your local filesystem using the following command:
# Clone Repository from Github
git clone https://github.com/RBL-NCI/iCLIP.git
# Change your working directory to the iCLIP repo
cd iCLIP/
1.2 Add snakemake to PATH
Please make sure that snakemake>=5.19 is in your $PATH. If you are on Biowulf, please load the following environment module:
# Recommend running snakemake>=5.19
module load snakemake/5.24.1
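If you are unsure which snakemake version is on your $PATH, a version-aware comparison can be done with sort -V. This is a minimal sketch: the version_ge helper and the hard-coded version string are illustrative, not part of the workflow (in a real session you would capture the string from `snakemake --version`).

```shell
# Hypothetical helper: succeeds when version $1 >= version $2.
# sort -V does the component-wise (version-aware) comparison.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required="5.19"
# illustrative value; in practice: current="$(snakemake --version)"
current="5.24.1"

if version_ge "$current" "$required"; then
    echo "snakemake $current OK"
else
    echo "snakemake $current is too old; need >=$required" >&2
fi
```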
1.3 Configure workflow
Configure the workflow according to your needs by editing the files in the config/ folder. Adjust snakemake_config.yaml to configure the workflow execution and cluster_config.yml to configure the cluster settings. Create multiplex.tsv and samples.tsv files to specify your sample setup, or edit the example manifests in the manifest/ folder.
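As a sketch only, a tab-separated samples.tsv could be created as below. The column names and barcode values here are hypothetical and NOT the pipeline's actual manifest schema; copy the real layout from the example files in the manifest/ folder.

```shell
# Hypothetical columns for illustration only; consult the example
# manifests shipped in manifest/ for the real schema.
mkdir -p manifest
printf 'sample_id\tgroup\tbarcode\n' >  manifest/samples.tsv
printf 'WT_rep1\tWT\tNNNCGTANN\n'    >> manifest/samples.tsv
printf 'KO_rep1\tKO\tNNNGGCGNN\n'    >> manifest/samples.tsv
```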
1.4 Dry-run the workflow
Run the following command to dry-run the snakemake pipeline:
sh run_snakemake.sh dry-run
Review the log to ensure there are no workflow errors.
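One way to review that log programmatically is to grep it for failure keywords. This is a sketch under assumptions: check_dryrun_log and the error patterns are illustrative, and it presumes you captured the dry-run output to a file yourself (e.g. `sh run_snakemake.sh dry-run > dryrun.log 2>&1`).

```shell
# Hypothetical helper: returns non-zero if a captured dry-run log
# mentions an error or a Snakemake exception.
check_dryrun_log() {
    if grep -qiE 'error|exception' "$1"; then
        echo "dry run reported problems; review $1"
        return 1
    fi
    echo "dry run clean"
}
```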
2. Usage
Submit master job to the cluster:
sh run_snakemake.sh cluster
Submit master job locally:
sh run_snakemake.sh local
3. Contribute
This section is for new developers working with the iCLIP pipeline. If you have added new features or made other changes, please consider contributing them back to the original repository:
- Fork the original repo to a personal or org account.
- Clone the fork to your local filesystem.
- Copy the modified files to the cloned fork.
- Commit and push your changes to your fork.
- Create a pull request to this repository.
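A local rehearsal of those steps might look like the following. Everything here is a stand-in (throwaway repository, placeholder names and e-mail) so nothing touches GitHub; substitute your actual fork URL and branch name when contributing for real.

```shell
# Stand-in for a cloned fork: a throwaway local repo so the sequence
# can be tried safely; replace with your real fork of RBL-NCI/iCLIP.
workdir=$(mktemp -d)
cd "$workdir"
git init -q iCLIP
cd iCLIP
git config user.email "you@example.com"
git config user.name "Your Name"

# copy in modified files, then commit on a feature branch
echo "modified rule" > Snakefile
git checkout -q -b my-feature
git add -A
git commit -q -m "Describe your change"

# on a real fork you would now push and open a pull request:
# git push origin my-feature
```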
Code Snippets
shell: """
set -exo pipefail

if [[ {params.b_qc_flag} == "PROCESS" ]]; then
    gunzip -c {input.fq} \\
        | awk 'NR%4==2 {{print substr($0, {params.start_pos}, {params.bc_len});}}' \\
        | LC_ALL=C sort --buffer-size={params.memG} --parallel={threads} --temporary-directory=/lscratch/${{SLURM_JOB_ID}} -n \\
        | uniq -c > {output.counts};

    Rscript {params.R} --sample_manifest {sample_manifest} \\
        --multiplex_manifest {multiplex_manifest} \\
        --barcode_input {output.counts} \\
        --mismatch {params.mm} \\
        --mpid {wildcards.mp} \\
        --output_dir {params.base} \\
        --qc_dir {params.base}
else
    echo "Barcode QC checking was skipped for this run" > {output.txt}
    touch {output.png}
    touch {output.counts}
fi
"""
shell: """
set -exo pipefail

# set tmp dir
tmp_dir="/lscratch/${{SLURM_JOB_ID}}"
export tmp_dir

# run ultraplex to remove adaptors, separate barcodes
# output files to tmp scratch dir
ultraplex \\
    --threads {threads} \\
    --barcodes {input.barcodes} \\
    --directory $tmp_dir \\
    --inputfastq {input.f1} \\
    --final_min_length {params.ml} \\
    --phredquality {params.pq} \\
    --fiveprimemismatches {params.mm} \\
    --ultra

# move files to final location after they are zipped
mv $tmp_dir/* {params.out_dir}
"""
shell: """
set -exo pipefail

# create empty file
touch {output.fastq}

# move files to project output dir
{params.cmd}
"""
shell: """
set -exo pipefail

# Rename files
{params.cmd}
"""
shell: """
set -exo pipefail

# run FASTQC
fastqc {input.fastq} -o {params.base}
"""
shell: """
set -exo pipefail

# set tmp dir
tmp_dir="/lscratch/${{SLURM_JOB_ID}}"
export tmp_dir

# Unzip input file
gunzip -c {input.filtered} > ${{tmp_dir}}/{params.tmp};

# Run FastQ Screen
fastq_screen --conf {params.conf_species} \\
    --outdir {params.base_species} \\
    --threads {threads} \\
    --subset 1000000 \\
    --aligner bowtie2 \\
    --force \\
    ${{tmp_dir}}/{params.tmp};

fastq_screen --conf {params.conf_rrna} \\
    --outdir {params.base_rrna} \\
    --threads {threads} \\
    --subset 1000000 \\
    --aligner bowtie2 \\
    --force \\
    ${{tmp_dir}}/{params.tmp};

# Remove tmp unzipped file
rm ${{tmp_dir}}/{params.tmp}

# Run FastQ Validator
mkdir -p {params.base_val}
{params.fastq_v} \\
    --disableSeqIDCheck \\
    --noeof \\
    --printableErrors 100000000 \\
    --baseComposition \\
    --avgQual \\
    --file {input.filtered} > {output.log};
"""
shell: """
set -exo pipefail

# set tmp dir
tmp_dir="/lscratch/${{SLURM_JOB_ID}}"
export tmp_dir

# STAR cannot handle sorting large files - allow samtools to sort output files
STAR \
    --runMode alignReads \
    --genomeDir {params.s_index} \
    --sjdbGTFfile {params.s_gtf} \
    --readFilesCommand zcat \
    --readFilesIn {input.f1} \
    --outFileNamePrefix $tmp_dir/{params.out_prefix} \
    --outReadsUnmapped Fastx \
    --outSAMtype BAM Unsorted \
    --alignEndsType {params.s_atype} \
    --alignIntronMax {params.s_intron} \
    --alignSJDBoverhangMin {params.s_sjdb} \
    --alignSJoverhangMin {params.s_asj} \
    --alignTranscriptsPerReadNmax {params.s_transc} \
    --alignWindowsPerReadNmax {params.s_windows} \
    --limitBAMsortRAM {params.s_bam_limit} \
    --limitOutSJcollapsed {params.s_sjcol} \
    --outFilterMatchNmin {params.s_match} \
    --outFilterMatchNminOverLread {params.s_readmatch} \
    --outFilterMismatchNmax {params.s_mismatch} \
    --outFilterMismatchNoverReadLmax {params.s_readmm} \
    --outFilterMultimapNmax {params.s_fmm} \
    --outFilterMultimapScoreRange {params.s_mmscore} \
    --outFilterScoreMin {params.s_score} \
    --outFilterType {params.s_ftype} \
    --outSAMattributes {params.s_att} \
    --outSAMunmapped {params.s_unmap} \
    --outSJfilterCountTotalMin {params.s_sjmin} \
    --outSJfilterOverhangMin {params.s_overhang} \
    --outSJfilterReads {params.s_sjreads} \
    --seedMultimapNmax {params.s_smm} \
    --seedNoneLociPerWindow {params.s_loci} \
    --seedPerReadNmax {params.s_read} \
    --seedPerWindowNmax {params.s_wind} \
    --sjdbScore {params.s_sj} \
    --winAnchorMultimapNmax {params.s_anchor} \
    --quantMode {params.s_quantmod}

# sort file
samtools sort -m 80G -T $tmp_dir $tmp_dir/{params.out_prefix}Aligned.out.bam -o $tmp_dir/{params.out_prefix}Aligned.sortedByCoord.out.bam

# move STAR files and final log file to output
mv $tmp_dir/{params.out_prefix}Aligned.sortedByCoord.out.bam {output.bam}
mv $tmp_dir/{params.out_prefix}Log.final.out {output.log}

# move mates to unmapped file
touch {output.unmapped}
for f in $tmp_dir/{params.out_prefix}Unmapped.out.mate*; do cat $f >> {output.unmapped}; done
"""
shell: """
set -exo pipefail

# set tmp dir
tmp_dir="/lscratch/${{SLURM_JOB_ID}}"
export tmp_dir

# Index
cp {input.bam} {output.bam}
samtools index -@ {threads} {output.bam};

# Run samstats
samtools stats --threads {threads} {output.bam} > {output.samstat}
"""
shell: """
# set fail count
fail=0

# create output file
if [[ -f {output.qc_raw_counts} ]]; then rm {output.qc_raw_counts}; fi
touch {output.qc_raw_counts}

for f in {input.stats}; do
    # check samstats file to determine number of reads and reads mapped
    raw_count=`cat $f | grep "raw total sequences" | awk -F"\t" '{{print $3}}'`
    mapped_count=`cat $f | grep "reads mapped:" | awk -F"\t" '{{print $3}}'`

    # use bc for floating-point division; plain $(( )) would truncate to 0
    found_percentage=$(echo "scale=4; $mapped_count / $raw_count" | bc)

    # check against the set count_threshold; if the mapped fraction is lower than expected, fail
    if [ 1 -eq "$(echo "${{found_percentage}} < {params.count_threshold}" | bc)" ]; then
        flag="sample failed"
        fail=$((fail + 1))
    else
        flag="sample passed"
    fi

    # put data into output
    printf "%s\t%s\t%s\n" "$f" "$found_percentage" "$flag" >> {output.qc_raw_counts}
done

# create pass/fail marker file
if [ "$fail" -gt 0 ]; then
    echo "Check sample log {output.qc_raw_counts} to review what sample(s) failed" > {params.qc_base}_fail.txt
else
    touch {params.qc_base}_pass.txt
fi
"""
shell: """
set -exo pipefail

multiqc -f -v \\
    -c {params.qc_config} \\
    -d -dd 1 \\
    {params.dir_post} \\
    {params.dir_screen_rrna} \\
    {params.dir_screen_species} \\
    -o {params.out_dir}
"""
shell: """
if [[ {params.m_flag} == "Y" ]] && [[ {params.b_flag} == "PROCESS" ]]; then
    Rscript -e 'library(rmarkdown); \
        rmarkdown::render("{params.R}", output_file = "{output.html}", \
        params = list(log_list = "{input.log_list}", \
        b_txt = "{params.txt_bc}"))'
else
    Rscript -e 'library(rmarkdown); \
        rmarkdown::render("{params.R}", output_file = "{output.html}", \
        params = list(log_list = "{input.log_list}"))'
fi

sh {params.qc} {params.log_dir} {params.single_qc} {params.proj_qc}
"""
shell: """
set -exo pipefail

# set tmp dir
tmp_dir="/lscratch/${{SLURM_JOB_ID}}"
export tmp_dir

# Run UMI-tools deduplication
echo "Using the following UMI separator: {params.umi}"
umi_tools dedup \\
    -I {input.f1} \\
    --method unique \\
    --multimapping-detection-method=NH \\
    --umi-separator={params.umi} \\
    -S ${{tmp_dir}}/{params.base}.bam \\
    --log2stderr;

# Sort and index
samtools sort --threads {threads} -m 10G -T ${{tmp_dir}} \\
    ${{tmp_dir}}/{params.base}.bam \\
    -o {output.bam};
samtools index -@ {threads} {output.bam};
"""
shell: """
set -exo pipefail

# set tmp dir
tmp_dir="/lscratch/${{SLURM_JOB_ID}}"
export tmp_dir

# if a read alignment has the tag "NH:i:1" then it is a unique alignment
python {params.pyscript} --inputBAM {input.bam} --outputBAM ${{tmp_dir}}/{params.base}.unique.bam
samtools index -@ {threads} ${{tmp_dir}}/{params.base}.unique.bam;

# Create SAFs
bedtools bamtobed -split -i {input.bam} \\
    | LC_ALL=C sort --buffer-size={params.memG} --parallel={threads} --temporary-directory=$tmp_dir -k1,1V -k2,2n > {output.bed_all};
bedtools bamtobed -split -i ${{tmp_dir}}/{params.base}.unique.bam \\
    | LC_ALL=C sort --buffer-size={params.memG} --parallel={threads} --temporary-directory=$tmp_dir -k1,1V -k2,2n > {output.bed_unique};

# write SAF headers; BEGIN lets awk print without waiting on stdin
awk 'BEGIN{{OFS="\\t"; print "GeneID","Chr","Start","End","Strand"}}' > {output.saf_all}
awk 'BEGIN{{OFS="\\t"; print "GeneID","Chr","Start","End","Strand"}}' > {output.saf_unique}

bedtools merge \\
    -c 6 -o count,distinct \\
    -bed -s -d 50 \\
    -i {output.bed_all} \\
    | awk '{{OFS="\\t"; print $1":"$2"-"$3"_"$5,$1,$2,$3,$5}}' >> {output.saf_all};

bedtools merge \\
    -c 6 -o count,distinct \\
    -bed -s -d 50 \\
    -i {output.bed_unique} \\
    | awk '{{OFS="\\t"; print $1":"$2"-"$3"_"$5,$1,$2,$3,$5}}' >> {output.saf_unique}
"""
shell: """
set -exo pipefail

# set tmp dir
tmp_dir="/lscratch/${{SLURM_JOB_ID}}"
export tmp_dir

cp {input.bed_all} $tmp_dir/all.bed
bgzip --threads {threads} --force $tmp_dir/all.bed
mv $tmp_dir/all.bed.gz {output.bed_all}
tabix -p bed -f {output.bed_all}

cp {input.bed_unique} $tmp_dir/unique.bed
bgzip --threads {threads} --force $tmp_dir/unique.bed
mv $tmp_dir/unique.bed.gz {output.bed_unique}
tabix -p bed -f {output.bed_unique}
"""
shell: """
set -exo pipefail

# Run for allreadpeaks
featureCounts -F SAF \\
    -a {input.saf_all} \\
    -O \\
    -J \\
    --fraction \\
    --minOverlap 1 \\
    -s 1 \\
    -T {threads} \\
    -o {output.all_unique} \\
    {input.bam};

featureCounts -F SAF \\
    -a {input.saf_all} \\
    -M \\
    -O \\
    -J \\
    --fraction \\
    --minOverlap 1 \\
    -s 1 \\
    -T {threads} \\
    -o {output.all_mm} \\
    {input.bam};

featureCounts -F SAF \\
    -a {input.saf_all} \\
    -M \\
    -O \\
    --minOverlap 1 \\
    -s 1 \\
    -T {threads} \\
    -o {output.all_total} \\
    {input.bam};

# Run for uniquereadpeaks
featureCounts -F SAF \\
    -a {input.saf_unique} \\
    -O \\
    -J \\
    --fraction \\
    --minOverlap 1 \\
    -s 1 \\
    -T {threads} \\
    -o {output.unique_unique} \\
    {input.bam};

featureCounts -F SAF \\
    -a {input.saf_unique} \\
    -M \\
    -O \\
    -J \\
    --fraction \\
    --minOverlap 1 \\
    -s 1 \\
    -T {threads} \\
    -o {output.unique_mm} \\
    {input.bam}

featureCounts -F SAF \\
    -a {input.saf_unique} \\
    -M \\
    -O \\
    --minOverlap 1 \\
    -s 1 \\
    -T {threads} \\
    -o {output.unique_total} \\
    {input.bam}
"""
shell: """
Rscript {params.script} \\
    --ref_species {params.ref_sp} \\
    --refseq_rRNA {params.rrna_flag} \\
    --alias_path {params.a_path} \\
    --gencode_path {params.g_path} \\
    --refseq_path {params.rs_path} \\
    --canonical_path {params.c_path} \\
    --intron_path {params.i_path} \\
    --rmsk_path {params.r_path} \\
    --custom_path {params.custom_path} \\
    --out_dir {params.base} \\
    --reftable_path {params.a_config}
"""
shell: '''
# Setup tmp directory
if [ -d "/lscratch/${{SLURM_JOB_ID}}" ]; then
    tmp_dir="/lscratch/${{SLURM_JOB_ID}}/"
else
    tmp_dir="{out_dir}01_preprocess/07_rscripts/"
    if [ ! -d $tmp_dir ]; then mkdir $tmp_dir; fi
fi

# bash script to run bedtools and get the site2peak lookup table;
# it will create {output.splice_table}
bash {params.bashscript} {input.mm_jcounts} {input.mm} {params.sp}_{params.p_type} {params.out} {params.pyscript}

Rscript {params.script} \\
    --rscript {params.functions} \\
    --peak_type {params.p_type} \\
    --peak_unique {input.unique} \\
    --peak_all {input.mm} \\
    --peak_total {input.total} \\
    --join_junction {params.junc} \\
    --anno_anchor {params.anchor} \\
    --read_depth {params.r_depth} \\
    --demethod {params.d_method} \\
    --sample_id {params.sp} \\
    --ref_species {params.ref_sp} \\
    --splice_table {output.splice_table} \\
    --tmp_dir ${{tmp_dir}} \\
    --out_dir {params.out} \\
    --out_file {output.text} \\
    --out_dir_DEP {params.out_de} \\
    --output_file_error {params.error}
'''
shell: '''
# Setup tmp directory
if [ -d "/lscratch/${{SLURM_JOB_ID}}" ]; then
    tmp_dir="/lscratch/${{SLURM_JOB_ID}}/"
else
    tmp_dir="{out_dir}01_preprocess/07_rscripts/"
    if [[ ! -d $tmp_dir ]]; then mkdir $tmp_dir; fi
fi

# bash script to run bedtools and get the site2peak lookup table;
# it will create {output.splice_table}
bash {params.bashscript} {input.mm_jcounts} {input.mm} {params.sp}_{params.p_type} {params.out} {params.pyscript}

Rscript {params.script} \\
    --rscript {params.functions} \\
    --peak_type {params.p_type} \\
    --peak_unique {input.unique} \\
    --peak_all {input.mm} \\
    --peak_total {input.total} \\
    --join_junction {params.junc} \\
    --anno_anchor {params.anchor} \\
    --read_depth {params.r_depth} \\
    --demethod {params.d_method} \\
    --sample_id {params.sp} \\
    --ref_species {params.ref_sp} \\
    --splice_table {output.splice_table} \\
    --tmp_dir ${{tmp_dir}} \\
    --out_dir {params.out} \\
    --out_file {output.text} \\
    --out_dir_DEP {params.out_m} \\
    --output_file_error {params.error}
'''
shell: '''
# Setup tmp directory
if [ -d "/lscratch/${{SLURM_JOB_ID}}" ]; then
    tmp_dir="/lscratch/${{SLURM_JOB_ID}}/"
else
    tmp_dir="{out_dir}01_preprocess/07_rscripts/"
    if [ ! -d $tmp_dir ]; then mkdir $tmp_dir; fi
fi

Rscript {params.script} \\
    --rscript {params.functions} \\
    --peak_type {params.p_type} \\
    --anno_anchor {params.anchor} \\
    --read_depth {params.r_depth} \\
    --sample_id {params.sp} \\
    --ref_species {params.ref_sp} \\
    --anno_dir {params.anno_dir} \\
    --reftable_path {params.a_config} \\
    --gencode_path {params.g_path} \\
    --intron_path {params.i_path} \\
    --rmsk_path {params.r_path} \\
    --tmp_dir ${{tmp_dir}} \\
    --out_dir {params.out} \\
    --out_file {output.outfile} \\
    --anno_strand {params.anno_strand}
'''
shell: '''
# Setup tmp directory
if [ -d "/lscratch/${{SLURM_JOB_ID}}" ]; then
    tmp_dir="/lscratch/${{SLURM_JOB_ID}}"
else
    tmp_dir="{out_dir}01_preprocess/07_rscripts/"
    if [ ! -d $tmp_dir ]; then mkdir $tmp_dir; fi
fi

tmp_dir_s=$tmp_dir/same
tmp_dir_o=$tmp_dir/oppo
mkdir $tmp_dir_s
mkdir $tmp_dir_o

Rscript {params.script} \\
    --rscript {params.functions} \\
    --peak_type {params.p_type} \\
    --anno_anchor {params.anchor} \\
    --read_depth {params.r_depth} \\
    --sample_id {params.sp} \\
    --ref_species {params.ref_sp} \\
    --anno_dir {params.anno_dir} \\
    --reftable_path {params.a_config} \\
    --gencode_path {params.g_path} \\
    --intron_path {params.i_path} \\
    --rmsk_path {params.r_path} \\
    --tmp_dir ${{tmp_dir_s}} \\
    --out_dir {params.out} \\
    --out_file {output.EIOut_SS} \\
    --anno_strand "SameStrand"

Rscript {params.script} \\
    --rscript {params.functions} \\
    --peak_type {params.p_type} \\
    --anno_anchor {params.anchor} \\
    --read_depth {params.r_depth} \\
    --sample_id {params.sp} \\
    --ref_species {params.ref_sp} \\
    --anno_dir {params.anno_dir} \\
    --reftable_path {params.a_config} \\
    --gencode_path {params.g_path} \\
    --intron_path {params.i_path} \\
    --rmsk_path {params.r_path} \\
    --tmp_dir ${{tmp_dir_o}} \\
    --out_dir {params.out} \\
    --out_file {output.EIOut_OS} \\
    --anno_strand "OppoStrand"
'''
shell: '''
# Setup tmp directory
if [ -d "/lscratch/${{SLURM_JOB_ID}}" ]; then
    tmp_dir="/lscratch/${{SLURM_JOB_ID}}/"
else
    tmp_dir="{out_dir}01_preprocess/07_rscripts/"
    if [ ! -d $tmp_dir ]; then mkdir $tmp_dir; fi
fi

Rscript {params.script} \\
    --rscript {params.functions} \\
    --peak_type {params.p_type} \\
    --anno_anchor {params.anchor} \\
    --read_depth {params.r_depth} \\
    --sample_id {params.sp} \\
    --ref_species {params.ref_sp} \\
    --anno_dir {params.anno_dir} \\
    --reftable_path {params.a_config} \\
    --gencode_path {params.g_path} \\
    --intron_path {params.i_path} \\
    --rmsk_path {params.r_path} \\
    --tmp_dir ${{tmp_dir}} \\
    --out_dir {params.out} \\
    --out_file {output.outfile} \\
    --anno_strand {params.anno_strand}
'''
shell: '''
# Setup tmp directory
if [ -d "/lscratch/${{SLURM_JOB_ID}}" ]; then
    tmp_dir="/lscratch/${{SLURM_JOB_ID}}/"
else
    tmp_dir="{out_dir}01_preprocess/07_rscripts/"
    if [ ! -d $tmp_dir ]; then mkdir $tmp_dir; fi
fi

Rscript {params.script} \\
    --rscript {params.functions} \\
    --peak_type {params.p_type} \\
    --anno_anchor {params.anchor} \\
    --read_depth {params.r_depth} \\
    --sample_id {params.sp} \\
    --ref_species {params.ref_sp} \\
    --tmp_dir ${{tmp_dir}} \\
    --out_dir {params.out} \\
    --out_file {output.outfile}
'''
shell: """
Rscript -e 'library(rmarkdown); \
    rmarkdown::render("{params.R}", output_file = "{output.o1}", \
    params = list(samplename = "{params.sp}", \
    peak_in = "{input.peak_in}", \
    output_table = "{output.o2}", \
    readdepth = "{params.r_depth}", \
    PeakIdnt = "{params.p_type}"))'
"""
shell: """
set -exo pipefail

awk '$6=="+"' {input.bed_all} > {output.p_bed}
awk '$6=="-"' {input.bed_all} > {output.n_bed}
"""
shell: """
manorm \\
    --p1 "{params.de_dir}/{params.gid_1}_{params.peak_id}readPeaks_forMANORM_{wildcards.strand}.bed" \\
    --p2 "{params.de_dir}/{params.gid_2}_{params.peak_id}readPeaks_forMANORM_{wildcards.strand}.bed" \\
    --r1 "{params.de_dir}/{params.gid_1}_ReadsforMANORM_{wildcards.strand}.bed" \\
    --r2 "{params.de_dir}/{params.gid_2}_ReadsforMANORM_{wildcards.strand}.bed" \\
    --s1 0 \\
    --s2 0 \\
    -p 1 \\
    -m 0 \\
    -w {params.manorm_w} \\
    -d {params.manorm_d} \\
    -n 10000 \\
    -s \\
    -o {params.base} \\
    --name1 {params.gid_1} \\
    --name2 {params.gid_2}

# rename MAnorm final output file
mv {params.base}{wildcards.group_id}_all_MAvalues.xls {output.mavals}

# rename individual file names for each sample
mv {params.base}{params.gid_1}_MAvalues.xls {params.base}{params.gid_1}_{params.peak_id}readPeaks_MAvalues.xls
mv {params.base}{params.gid_2}_MAvalues.xls {params.base}{params.gid_2}_{params.peak_id}readPeaks_MAvalues.xls

# move folders of figures, filters, tracks to new location;
# remove folders if they already exist
if [[ -d {params.base}output_figures_{params.peak_id}readPeaks ]]; then rm -r {params.base}output_figures_{params.peak_id}readPeaks; fi
if [[ -d {params.base}output_filters_{params.peak_id}readPeaks ]]; then rm -r {params.base}output_filters_{params.peak_id}readPeaks; fi
if [[ -d {params.base}output_tracks_{params.peak_id}readPeaks ]]; then rm -r {params.base}output_tracks_{params.peak_id}readPeaks; fi
mv {params.base}output_figures {params.base}output_figures_{params.peak_id}readPeaks
mv {params.base}output_filters {params.base}output_filters_{params.peak_id}readPeaks
mv {params.base}output_tracks {params.base}output_tracks_{params.peak_id}readPeaks
"""
shell: """
set -exo pipefail

### for sample vs background comparison
featureCounts -F SAF \\
    -a {params.smplSAF} \\
    -O \\
    --fraction \\
    --minOverlap 1 \\
    -s 1 \\
    -T {threads} \\
    -o {output.bkUniqcountsmplPk} \\
    {params.bkbam}

featureCounts -F SAF \\
    -a {params.smplSAF} \\
    -M \\
    -O \\
    --fraction \\
    --minOverlap 1 \\
    -s 1 \\
    -T {threads} \\
    -o {output.bkMMcountsmplPk} \\
    {params.bkbam}

Rscript {params.script} \\
    --samplename {params.gid_1} \\
    --background {params.gid_2} \\
    --peak_anno_g1 {params.anno_dir}/{params.gid_1}_annotation_{params.peak_id}readPeaks_final_table.txt \\
    --peak_anno_g2 {params.anno_dir}/{params.gid_2}_annotation_{params.peak_id}readPeaks_final_table.txt \\
    --Smplpeak_bkgroundCount_MM {output.bkMMcountsmplPk} \\
    --Smplpeak_bkgroundCount_unique {output.bkUniqcountsmplPk} \\
    --pos_manorm {params.de_dir}/{wildcards.group_id}/{wildcards.group_id}_P/{params.gid_1}_{params.peak_id}readPeaks_MAvalues.xls \\
    --neg_manorm {params.de_dir}/{wildcards.group_id}/{wildcards.group_id}_N/{params.gid_1}_{params.peak_id}readPeaks_MAvalues.xls \\
    --output_file {output.post_proc}

### for background vs sample comparison
featureCounts -F SAF \\
    -a {params.bkSAF} \\
    -O \\
    --fraction \\
    --minOverlap 1 \\
    -s 1 \\
    -T {threads} \\
    -o {output.smplUniqcountbkPk} \\
    {params.smplbam}

featureCounts -F SAF \\
    -a {params.bkSAF} \\
    -M \\
    -O \\
    --fraction \\
    --minOverlap 1 \\
    -s 1 \\
    -T {threads} \\
    -o {output.smplMMcountbkPk} \\
    {params.smplbam}

Rscript {params.script} \\
    --samplename {params.gid_2} \\
    --background {params.gid_1} \\
    --peak_anno_g1 {params.anno_dir}/{params.gid_2}_annotation_{params.peak_id}readPeaks_final_table.txt \\
    --peak_anno_g2 {params.anno_dir}/{params.gid_1}_annotation_{params.peak_id}readPeaks_final_table.txt \\
    --Smplpeak_bkgroundCount_MM {output.smplMMcountbkPk} \\
    --Smplpeak_bkgroundCount_unique {output.smplUniqcountbkPk} \\
    --pos_manorm {params.de_dir}/{wildcards.group_id}/{wildcards.group_id}_P/{params.gid_2}_{params.peak_id}readPeaks_MAvalues.xls \\
    --neg_manorm {params.de_dir}/{wildcards.group_id}/{wildcards.group_id}_N/{params.gid_2}_{params.peak_id}readPeaks_MAvalues.xls \\
    --output_file {output.post_procRev}
"""
shell: """
Rscript -e 'library(rmarkdown); \
    rmarkdown::render("{params.R}", \
    output_file = "{output.reportOut}", \
    params = list(peak_in = "{input.post_proc}", \
    PeakIdnt = "{params.p_id}", \
    samplename = "{params.gid_1}", \
    background = "{params.gid_2}", \
    pval = "{params.pval}", \
    FC = "{params.fc}", \
    incd_rRNA = "{params.rrna}"))'

Rscript -e 'library(rmarkdown); \
    rmarkdown::render("{params.R}", \
    output_file = "{output.reportRev}", \
    params = list(peak_in = "{input.post_procRev}", \
    PeakIdnt = "{params.p_id}", \
    samplename = "{params.gid_2}", \
    background = "{params.gid_1}", \
    pval = "{params.pval}", \
    FC = "{params.fc}", \
    incd_rRNA = "{params.rrna}"))'
"""
shell: """
set -exo pipefail

################################################
# Setup tmp directory
################################################
if [ -d "/lscratch/${{SLURM_JOB_ID}}" ]; then
    tmp_dir="/lscratch/${{SLURM_JOB_ID}}"
else
    tmp_dir="{out_dir}01_preprocess/05_tmp_bam"
    if [ ! -d $tmp_dir ]; then mkdir $tmp_dir; fi
fi

# Create header
samtools view -@ {threads} -H {input.f1} > ${{tmp_dir}}/{params.base}.header.txt

# Run
samtools view -@ {threads} -F 16 {input.f1} \\
    | cat ${{tmp_dir}}/{params.base}.header.txt - \\
    | samtools view -Sb > {output.p_bam}

samtools view -@ {threads} -f 16 {input.f1} \\
    | cat ${{tmp_dir}}/{params.base}.header.txt - \\
    | samtools view -Sb > {output.n_bam}
"""
shell: """
Rscript {params.script} \\
    --samplename {params.gid_1} \\
    --background {params.gid_2} \\
    --strand {params.st} \\
    --sample_overlap {params.so} \\
    --samplemanifest {input.s_manifest} \\
    --input_dir {params.base} \\
    --output_table {output.table} \\
    --output_summary {output.summary} \\
    --output_figures {params.figs}
"""
shell: """
Rscript {params.script} \\
    --samplename {params.gid_1} \\
    --background {params.gid_2} \\
    --peak_type {params.p_type} \\
    --join_junction {params.junc} \\
    --samplemanifest {input.s_manifest} \\
    --pos_DB {input.table_p} \\
    --neg_DB {input.table_n} \\
    --anno_dir_sample {params.anno_dir_s} \\
    --reftable_path {params.ref_tab_config} \\
    --anno_dir_project {params.anno_dir_p} \\
    --ref_species {params.ref_sp} \\
    --gencode_path {params.g_path} \\
    --intron_path {params.i_path} \\
    --rmsk_path {params.r_path} \\
    --function_script {params.script_func} \\
    --out_dir {params.base}
"""
shell: """
Rscript -e 'library(rmarkdown); \
    rmarkdown::render("{params.R}", output_file = "{output.o1}", \
    params = list(peak_in = "{input.final}", \
    DEGfolder = "{params.base}", \
    PeakIdnt = "{params.p_id}", \
    samplename = "{params.gid_1}", \
    background = "{params.gid_2}", \
    pval = "{params.pval}", \
    FC = "{params.FC}", \
    incd_rRNA = "{params.rrna}"))'
"""