Automated analysis and result reporting for targeted sequencing data
Help improve this workflow!
This workflow has been published but could be further improved with some additional meta data:- Keyword(s) in categories input, output, operation, topic
You can help improve this workflow by suggesting the addition or removal of keywords, suggest changes and report issues, or request to become a maintainer of the Workflow .
The hemoMIPs pipeline is a fast and efficient analysis pipeline for the analysis of multiplexed and targeted NGS datasets created from Molecular Inversion Probes (MIPs). It runs highly automated using conda und snakemake and can be set to use GATK v4 or GATK v3 for variant calling. It reports benign and likely pathogenic variants in a userfriendly HTML report that shows detailed performance statistics and results.
Pre-requirements
Conda
The pipeline depends on Snakemake , a workflow management system that wraps up all scripts and runs them highly automated, in various environments (workstations, clusters, grid, or cloud). Further, we use Conda as software/dependency managment tool. Conda can install snakemake and all neccessary software with its dependencies automatically. Conda installation guidlines can be found here:
https://conda.io/projects/conda/en/latest/user-guide/install/index.html
Snakemake
After installing Conda, you install Snakemake using Conda and the
environment.yaml
provided in this repository. For this purpose, please clone or download and uncompress the repository first. Then change into the root folder of the local repository.
git clone https://github.com/kircherlab/hemoMIPs
cd hemoMIPs
We will now initiate three Conda environments, which we will need for some preparations as well as getting the Snakemake workflow invoked. The first environment (
hemoMIPs
) will contain only snakemake, the second (
ensemblVEP
) contains Ensembl VEP and htslib, the third (
prepTools
) contains some basic tools for preparing annotations (e.g. bedtools, samtools, htslib, bwa, picard):
conda env create -n hemoMIPs --file environment.yaml
conda env create -n ensemblVEP --file envs/vep.yml
conda env create -n prepTools --file envs/prep.yml
The
ensemblVEP
and
prepTools
environments are only needed for the initial setup and can be deleted afterwards. In case you are having difficulties installing the
hemoMIPs
environment (i.e. snakemake) using the yaml file, please try the following work-a-round:
conda create -n hemoMIPs -c bioconda -c conda-forge snakemake
Annotations of Ensembl VEP
We use
Ensembl Variant Effect Predictor (VEP)
to predict variant effects. You will need to install the annotation caches for VEP before you can run the snakmake workflow. For this purpose, you will need to run a tool from the
ensemblVEP
environment that we created above. Please adjust the path to your location in the command line below (
-c vep_cache/
).
Note that
snakemake
will later install a separate instance of VEP for running the pipeline and that we are only using the above environment to install the caches. Also note, that due to version conflicts with other software, VEP is not included in environments with other software. If you already have the VEP database, simply adjust the path to your database in the config.yml. We run the pipeline using VEP v98. If you wish to use another version or cache, you should up- or downgrade your specific version of VEP and make sure that the other VEP version is correctly referenced in the workflow.
The following commands will download the human VEP cache (approx. 14G), which may take a while.
conda activate ensemblVEP
mkdir vep_cache
vep_install -n -a cf -s homo_sapiens -y GRCh37 -c vep_cache/ –CONVERT
conda deactivate
Other required genome annotations
In the following steps, we are preparing the alignment and variant calling index of the reference genome as well as a VCF with known variants. We are using the above created
prepTools
environment:
conda activate prepTools
For alignment, we use the 1000 Genomes phase 2 build of the human reference
hs37d5.fa.gz
, which includes decoy sequences for sequences missing from the assembly. We will need the bwa index and picard/GATK dictionary index of this file.
mkdir reference_index
cd reference_index
wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/reference/phase2_reference_assembly_sequence/hs37d5.fa.gz
gunzip hs37d5.fa.gz
bwa index hs37d5.fa
samtools faidx hs37d5.fa
picard CreateSequenceDictionary R=hs37d5.fa O=hs37d5.dict
HemoMIPs uses known variants reported by the 1000 Genomes project. To extract known variants for your target region, run the following command using your
target_coords.bed
. Here, we are using the file from the example project:
cd /~PathTo~/hemoMIPs/
mkdir known_variants
cd known_variants
tabix -h ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/release/20110521/ALL.wgs.phase1_release_v3.20101123.snps_indels_sv.sites.vcf.gz -R <( awk 'BEGIN{OFS="\t"}{print $1,$2-50,$3+50}' ../input/example_dataset/target_coords.bed | sort -k1,1 -k2,2n -k3,3n | bedtools merge ) | bgzip -c > phase1_release_v3.20101123.snps_indels_svs.on_target.vcf.gz
tabix -p vcf phase1_release_v3.20101123.snps_indels_svs.on_target.vcf.gz
If you decided to include MIPs to capture specific inversion alleles, you will also need to provide a BWA index for the inversion MIPs as well as the logic of evaluating those in
scripts/processing/summary_report.py
(lines 138-169). If you do not have inversion probes in your design, set the respective parameter (
Inv
) in the config file to "no". In the following, we will assume that you are using the pipeline for the analysis of the hemophilia MIP design and provide the relevant files with your input folder.
The environments needed to prepare the workflow can be removed at this step. Snakemake will install packages required for the workflow automatically during the first run of the pipeline. Do not remove the
hemoMIPs
environment as this is needed to invoke snakemake.
conda deactivate
conda env remove --name ensemblVEP
conda env remove --name prepTools
Config
Almost ready to go. After you prepared the files above, you may need to adjust locations and names of these files in the
config.yml
. Further, you need to specify your run type, i.e. whether you want to analyze paired-end read or single-end read data as well as your index design (single or double index) in the
config.yml
. The original workflow was developed for paired-end 2 x 120bp with one sample index read. The workflow however allows the analysis of single-end reads and up to two index reads/technical reads. If you have other read layouts, you might be able to reorganize your sequence data to match our workflow. For this purpose, see the section on 'Alternative Read Layouts' below.
To adjust single vs. paired-end, type of indexing or to deactivate inversion analysis, set the following parameters in the config file accordingly:
parameters:
inv: "yes" #set to "no" when no inversion design is provided
paired_end_reads: "yes" #set to "no" when single-read sequencing is applied
double_index: "no" #set to yes when double indexing is applied or a second technical read available
Please note that the workflow supports double indexing, i.e. sequence combinations between the two technical reads identify a specific sample, or the provision of a second technical read (e.g. for unique molecular identifiers, UMIs) which is not used for sample assignment but propagated in a separate BAM field for each read. If you are using double indexing set
double_index
to "yes" and provide a three column
sample_index.lst
file (see below). If sequence information from a technical read should be included, also set
double_index
to "yes", but provide a two column
sample_index.lst
file (see below). In this case, the first index read will be used to assign samples, and the sequence of the second read will be included in the BAM files, but will not be evaluated.
An example config can be found in
example_config.yml
. If you would like to run the example data set, please copy it to
config.yml
:
cd /~PathTo~/hemoMIPs/
cp example_config.yml config.yml
List of required input files
You need your NGS fastq files, information about your MIP design and the targeted regions. An example dataset is available with all relevant files in
input/example_dataset
. The required fastq input files should be created using the Illumina
bcl2fastq
tool (without using the tools' demultiplexing functionality). The pipeline can handle paired-end and single-end reads with up to two technical reads/index reads (i.e.
Undetermined_S0_L00{lane}_R1_001.fastq.gz
,
Undetermined_S0_L00{lane}_I1_001.fastq.gz
, additional for paired end read data:
Undetermined_S0_L00{lane}_R2_001.fastq.gz
, in case of a second index read:
Undetermined_S0_L00{lane}_I2_001.fastq.gz
). For instance, a paired-end single index dataset could be created by
bcl2fastq --create-fastq-for-index-reads --use-bases-mask 'Y*,I*,Y*'
.
Put your NGS fastq files in input/ together with:
-
MIP design file as generated by
https://github.com/shendurelab/MIPGEN
namedhemomips_design.txt
-
Named target regions (coordinates) of your MIP experiment named
target_coords.bed
-
A file containing known benign variants (can be left blank) named
benignVars.txt
. -
A barcode sample assignment file named
sample_index.lst
Examples and further information about these files is provided below.
Alternative read layouts
The most complex read layout supported by our workflow involves two reads for paired end sequences and up to two technical reads. From the technical reads either the first (single index) or both identify the sample (double index). If no double indexing is specified, but a second technical read specified, its sequence is propagated with the other read information. Thereby, UMI information can be maintained throughout the processing and later evaluated. If UMI sequences are actually read as part of a paired end or single read run, these might be moved to the second technical read. If double indexing is also used, the two double index sequences might be combined into one virtual read, freeing the second technical read for UMIs.
Below, we provide examples of how this reformatting of the input fastq files is achieved with commonly available bash commands. Please note that the examples assume only one lane, if several lanes need to be reformatted, these could be processed in parallel or with bash for loops.
Combining double indexes to free up a technical read
Combine I1 and I2 fastq files in one file:
paste <( zcat {Undetermined_S0_L001_I1_001.fastq.gz} ) \
<( zcat {Undetermined_S0_L001_I2_001.fastq.gz} ) | \
awk '{ count+=1; if ((count == 1) || (count == 3)) { print $1 } else { print $1$2 }; if (count == 4) { count=0 } }' | \
gzip -c > mod_Undetermined_S0_L001_I1_001.fastq.gz
For the barcode-to-sample assignment (
sample_index.lst
), use the format for a single index run and combine the double index sequences to one long string (Index1: GGATTCTCG and Index2: ACTGGTAGG becomes GGATTCTCGACTGGTAGG).
Cutting UMI sequences out of the main reads
If you have for example 4-bp-UMIs in the beginning of forward and reverse read, combine them to an 8 bp technical read:
paste <( zcat Undetermined_S0_L001_R1_001.fastq.gz ) \
<( zcat Undetermined_S0_L001_R2_001.fastq.gz ) | \
awk '{ count+=1; if ((count == 1) || (count == 3)) { print $1 } else { print substr($1,1,4)""substr($2,1,4) }; if (count == 4) { count=0 } }' | \
gzip -c > mod_Undetermined_S0_L001_I2_001.fastq.gz
Trim these 4 bp from the beginning of the reads:
zcat Undetermined_S0_L001_R1_001.fastq.gz | \
awk '{ count+=1; if ((count == 1) || (count == 3)) { print $1 } else { print substr($1,5) }; if (count == 4) { count=0 } }' | \
gzip -c > mod_Undetermined_S0_L001_R1_001.fastq.gz
zcat Undetermined_S0_L001_R2_001.fastq.gz | \
awk '{ count+=1; if ((count == 1) || (count == 3)) { print $1 } else { print substr($1,5) }; if (count == 4) { count=0 } }' | \
gzip -c > mod_Undetermined_S0_L001_R2_001.fastq.gz
If an 8 bp UMI is only present in the beginning of the forward read, copy these 8 bp into a technical read:
zcat Undetermined_S0_L001_R1_001.fastq.gz | \
awk '{ count+=1; if ((count == 1) || (count == 3)) { print $1 } else { print substr($1,1,8) }; if (count == 4) { count=0 } }' | \
gzip -c > mod_Undetermined_S0_L001_I2_001.fastq.gz
Do not forget to also trim these 8 bp from the forward read:
zcat Undetermined_S0_L001_R1_001.fastq.gz | \
awk '{ count+=1; if ((count == 1) || (count == 3)) { print $1 } else { print substr($1,9) }; if (count == 4) { count=0 } }' | \
gzip -c > mod_Undetermined_S0_L001_R1_001.fastq.gz
MIP probe design information
Information about the designed MIP probes and their location in the reference genome is needed as a tab-separated text file for the script
TrimMIParms.py
. The default input file has the following columns: index, score, chr, ext_probe_start, ext_probe_stop, ext_probe_copy, ext_probe_sequence, lig_probe_start, lig_probe_stop, lig_probe_copy, lig_probe_sequence, mip_scan_start_position, mip_scan_stop_position, scan_target_sequence, mip_sequence, feature_start_position, feature_stop_position, probe_strand, failure_flags, gene_name, mip_name. This format is obtained from MIP designs generated by MIPGEN (Boyle et al., 2014), a tool for MIP probe design available on GitHub (https://github.com/shendurelab/MIPGEN). Alternatively, files containing at least the following named columns can be used: chr, ext_probe_start, ext_probe_stop, lig_probe_start, lig_probe_stop, probe_strand, and mip_name. It is critical, that the reported coordinates and chromosome names match the reference genome used in alignment.
We used Y-chromosome specific targets (SRY) to detect the sex of the samples (see chromosome
Y
in
hemomips_design.txt
). Different Y chromosome targets can be designed for sex determination as the workflow simply counts Y-aligned reads. The pipeline also runs without Y-specific MIPs for sex determination, but in this case will output all samples to be female in the final report.
Named target regions in BED format
Please describe the target regions of your MIP experiments in a BED file. These regions and names will be used in the HTML report. An example of this BED file is provided below (see also
input/example_dataset/target_coords.bed
):
X 154250998 154251277 F8/upstream
X 154250827 154250998 F8/5-UTR
X 154250674 154250827 F8/1
X 154227743 154227906 F8/2
...
X 154088696 154088893 F8/25
X 154065871 154066037 F8/26
X 154064063 154065871 F8/3-UTR
X 154064033 154064063 F8/downstream
X 138612623 138612894 F9/upstream
X 138612894 138612923 F9/5-UTR
X 138612923 138613021 F9/1
X 138619158 138619342 F9/2
...
X 138642889 138643024 F9/7
X 138643672 138644230 F9/8
X 138644230 138645617 F9/3-UTR
X 138645617 138645647 F9/downstream
Known benign variants
A
benignVars.txt
can be used to describe known benign variants. If no such variants are available, an empty file with this name needs to be provided. If variants are provided in this file, these will be printed in gray font in the HTML report and separated in the CSV output files. An example of the format is provided below. The full file for the hemophilia project is available as
input/example_dataset/benignVars.txt
.
X_138633280_A/G
X_154065069_T/G
X_138644836_G/A
X_138645058_GT/-
X_138645060_-/GT
X_138645149_T/C
Barcode to sample assignment
A two or three column tab-separated file is required with the sequencing barcode information. The sample name will be used throughout the processing and reporting. If a two colum tab-separated file is provided, the sample barcode sequence is assumed to be in the first index read of the Illumina sequencing run (I1 FastQ read file). The pipeline can also handle double index designs where sequence combinations in the I1 and I2 files identify a specific sample. An example for the sample assignment files is provided below:
Single Index
#Seq Name
ACTGGTAGG Plate_001_01B.2
GCTCCAACG Plate_001_01C.3
GCGTAAGAT Plate_001_01D.4
TGACCATCA Plate_001_01E.5
GGATTCTCG Plate_001_01F.6
Double Index
#Index1 Index2 Name
GGATTCTCG ACTGGTAGG Plate_001_01A.1
CATGCGAGA GCGTAAGAT Plate_001_01B.2
TGACCATCA TGACCATCA Plate_001_01C.3
CATGCGAGA GGATTCTCG Plate_001_01D.4
Run pipeline
Ready to go! If you run the pipeline on a cluster see the
cluster.json
for an estimate of minimum requirements for the individual jobs. Note that this depends on your dataset size so you may have to adjust this.
To start the pipeline:
conda activate hemoMIPs
# dry run to see if everything works
snakemake --use-conda --configfile config.yml -n
# run the pipeline
snakemake --use-conda --configfile config.yml
We added an example_results folder to the repository to enable users to compare the output of the example_dataset analysis to our results.
Output files
The pipeline outputs varies files in intermediate steps as well as final analysis tables for the user to look at. Here, we describe the output folder structure. For further information about the various analysis steps, see Pipeline description below.
In the
output/
folder
dataset/
folders (named after your individual datasets) will be generated containing all output files.
Within this folder all processed files can be found in
mapping
, with genotyping files stored in
mapping/gatk4
or
mapping/gatk3
, respectively. The analysis tables and html report files can be found in
report
.
Mapping
/output/dataset/mapping/
The reads from the primary input fastq files are converted to BAM format (e.g.
mapping/sample.bam
). In case of multiple lanes, these are split into
mapping/sample_lX.bam
files. In these files, overlapping paired-end reads are merged (overlap consensus) and reads are assigned to samples using read groups. Information from the technical reads (I1/I2) is stored in
XI
and
YI
fields for the sequence and
XJ
and
YJ
fields for quality scores.
Individual (i.e. demultiplexed) sample.bams can be found in
mapping/by_sample/
.
Aligned and MIP arm trimmed files for each sample can be found in BAM format in
mapping/aligned/
. This folder also contains BAM index files. These are index files are for example required to visualize alignments in IGV.
Per sample information about reads aligning to the inversion MIP design (if provided) are stored in
mapping/inversion_mips
as individual BAM files and counts are summarized in
mapping/inversion_mips/inversion_summary_counts.txt
.
Genotyping using GATK4
/output/dataset/mapping/gatk4
Output files generated by GATK4 HaplotypeCaller (emitting all sites) can be found as
realign_all_samples.bam
and
bam.vcf.gz
. \
Genomic Variant Call Format (GVCF) files for each sample are available in
gatk4/gvcf/
as
SAMPLE.g.vcf.gz
files. \
realign_all_samples.all_sites.vcf.gz
is the combined VCF generated by GATK4 CombineGVCFs. \
The final genotyped VCF is called
realign_all_samples.vcf.gz
. \
MIP performance statistics can be found in
realign_all_samples.MIPstats.tsv
. \
Variant Effect Predictions are stored in
realign_all_samples.vep.tsv.gz
.
Genotyping using GATK3
/output/dataset/mapping/gatk3
This folder contains: \
A realigned BAM generated by GATK3 IndelRealigner:
realign_all_samples.bam
. \
A VCF containing genotypes for all sites generated by GATK3 UnifiedGenotyper:
realign_all_samples.all_sites.vcf.gz
. \
The final VCF with non-homozygote reference alleles:
realign_all_samples.vcf.gz
. \
A filtered list of InDels:
realign_all_samples.indel_check.txt
. \
MIP performance statistics:
realign_all_samples.MIPstats.tsv
. \
Variant Effect Predictions:
realign_all_samples.vep.tsv.gz
.
Report
/output/dataset/report
Final analysis tables and html files are stored in the
/report/gatk4
or
/report/gatk3
folder depending on which GATK version is used. A description of output files is available in the sections
Report generation
and
Report tables in text format
below.
Pipeline description
Primary sequence processing
The primary inputs are raw FastQ files from the sequencing run as well as a sample-to-barcode assignment. In primary processing, reads are converted to BAM format, demultiplexed (storing sample information as read group information), and overlapping paired-end reads are merged and consensus called (Kircher, 2012).
Alignment and MIP arm trimming
Processed reads are aligned to the reference genome (here GRCh37 build from the 1000 Genomes Project Phase II release) using Burrows-Wheeler Alignment (BWA) 0.7.5 mem (Li and Durbin, 2010). As MIP arm sequence can result in incorrect variant identification (by hiding existing variation below primer sequence), MIP arm sequences are trimmed based on alignment coordinates and new BAM files are created. In this step, we are using MIP design files from MIPgen (Boyle et al., 2014) by default. MIP representation statistics (text output file) are calculated from the aligned files. Further, reads aligning to Y-chromosome-unique probes (SRY) are counted for each sample and reported (text output file). In a separate alignment step, all reads are aligned to a reference sequence file describing only the structural sequence variants as mutant and reference sequences. Results are summarized over all samples with the number of reads aligning to each sequence contig in a text report.
Coverage analysis and variant calling
Coverage differences between MIPs are handled by down sampling regions of excessive coverage. Variants are genotyped using GATK (McKenna et al., 2010) UnifiedGenotyper (v3.4-46) in combination with IndelRealigner (v3.2-2). Alternatively, GATK v4.0.4.0 HaplotypeCaller is used in gVCF mode in combination with CombineGVCFs and GenotypeGVCFs. The gvcf output files are provided in the
output/dataset/mapping/gatk4/gvcf/
output folder for further sample specific information.
The hemophilia datasets perform similar when run either with the GATK3 or GATK4 workflow. However, in low quality genotype calls the performance might vary and a different call set might be obtained. In a reanalysis performed on one of the hemophilia sequencing experiments, the sample specific genotype agreement is above 0.99 (36 different out of 64,308 genotype calls) between the two GATK versions, with high agreement in associated genotype qualities. We therefore choose GATK4 as the standard setting for the workflow as this versions maintains support, is 50x faster and easier to upgrade.
Variant annotations of the called variants, including variant effect predictions and HGVS variant descriptions are obtained from Ensembl Variant Effect Predictor (McLaren et al., 2016).
Report generation
Different HTML reports are generated for visualization, interpretation and better access to all information collected in previous steps. There are two entry points to this information, organized as two different HTML reports – one summarizing all variant calls and MIP performance across samples and the other summarizing per-sample results in an overview table. The first report (
summary.html
) provides a more technical sample and variant summary, per region coverage and MIP performance statistics. This report across all samples can be used to assess assay performance (e.g. underperforming MIPs could be redesigned in future assays) and allows identification of suspiciously frequent variants (common variants or systematic errors).
The second report (
report.html
) provides an overview of results for each sample, highlighting putative deleterious variants and taking previously defined common/known benign variants out of focus (gray font). Additional information is provided about potential structural variants and incompletely covered regions. This table also provides an overall sample status field with information about passing and failing samples, as well as flags indicating outlier MIP performances.
Both reports provide links to individual report pages of each sample. The individual reports (
ind_SAMPLENAME.html
), provide quality measures like overall coverage, target region coverage, read counts underlying the inferred sample sex (counting Y aligned reads) and MIP performance statistics (over- or underperforming MIPs in this sample), but most importantly provide detailed information on the identified variants, structural variant call results and regions without coverage (potential deletions).
Report tables in text format
In additional to the HTML output files for visualization, results are also presented in computer readable CSV format (comma separated) files. These CSV files can be joined by either the variant or sample specific identifier columns. The following results are summarized in the respective table files:
-
ind_status.csv
outputs the sample sex inferred from SRY counts, reports outlier MIP performance, number of genotype (GT) calls, covered sites within the MIP design regions, average coverage, heterozygous sites, incompletely covered regions, deletions as well as a textual summary in a sample quality flag (e.g. OK, Failed Inversions, Check MIPs). -
variant_calls.csv
andvariant_calls_benign.csv
contain all or just benign variants, respectively, with location, genotype, quality scores, allelic depth, coverage and status information. -
variant_annotation.csv
provides additional annotations to called variants based on reference and alternative allele information. These annotations include gene name, exonic location, cDNA and CDS position, HGVS Transcript and Protein information, variant rsID, and 1000G allele frequency. -
inversion_calls.csv
contains count results for MIPs targeting predefined structural variants.
Optional
GATK v3
GATK v4 is included as a conda environment which automatically installs GATK v4.0.4.0 and all its dependencies.
If you prefer to run the original pipeline using GATK v3 (i.e. GATK 3.2.2 and GATK 3.4-46) you need to change
config.yml
to additionaly include "gatk3" or replace the gatk4 entry. Note that GATK 3.2.2 and 3.4-46 are no longer available for download from the BROAD websites. We therefore provide the required JAR files with this repository rather than obtaining them through Conda.
Shed Skin
Shed Skin is an experimental compiler, that can translate pure, but implicitly statically typed Python (2.4-2.6) programs into optimized C++. To fasten (~5x) the read overlapping process one of our python scripts can be translated to C++ with Shed Skin and cross-compiled. This will speed up the analysis but is not crucial for its implementation.
We are providing an example how we were able to cross-compile using shedskin. Please note that this example assumes that miniconda was installed. If you are using another source for Conda, you might need to adjust paths. Further, we need an environment with python v2.6 and the requirements for Shed Skin, which we provide as
envs/shedskin.yml
. Be sure that you are in your root hemoMIPs pipeline folder when executing the following commands.
# create a new environment
conda env create -f envs/shedskin.yml -n shedskin
mkdir -p ~/miniconda3/envs/shedskin/etc/conda/activate.d
mkdir -p ~/miniconda3/envs/shedskin/etc/conda/deactivate.d
echo '#!/bin/sh
export LD_LIBRARY_PATH="$HOME/miniconda3/envs/shedskin/lib:$LD_LIBRARY_PATH"' > ~/miniconda3/envs/shedskin/etc/conda/activate.d/env_vars.sh
echo '#!/bin/sh
unset LD_LIBRARY_PATH' > ~/miniconda3/envs/shedskin/etc/conda/deactivate.d/env_vars.sh
conda activate shedskin
Then download and install Shed Skin v0.9.4 into the bin directory of the hemoMIPs pipeline.
# Download Shed Skin 0.9.4
wget https://github.com/shedskin/shedskin/releases/download/v0.9.4/shedskin-0.9.4.tgz
# Create bin folder
mkdir -p bin
# Extract Shed Skin and remove file
tar -xzf shedskin-0.9.4.tgz -C bin
rm shedskin-0.9.4.tgz
# Install Shed Skin
cd bin/shedskin-0.9.4
python setup.py install
Now we can test the shed Skin installation. We have to modify the
Makefile
to point to the GC library.
shedskin -L ~/miniconda3/envs/shedskin/include test
sed -i '3s|$| -L ~/miniconda3/envs/shedskin/lib|' Makefile
make
./test
The result should look similar to:
*** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See LICENSE)
[analyzing types..]
********************************100%
[generating c++ code..]
[elapsed time: 1.29 seconds]
hello, world!
If the installation or test fails please have a look a the Shed Skin Dokumentation .
Compiling MergeTrimReads.py script
Now we need to compile the
MergeTrimReads.py
script as an extension module using Shed Skin:
# Go to the script folder
cd scripts/pipeline2.0
# create Makefile and edit it
shedskin -e -L ~/miniconda3/envs/shedskin/include MergeTrimReads
sed -i '3s|$| -L ~/miniconda3/envs/shedskin/lib|' Makefile
# Compile!
make
cd ../../
Code Snippets
12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 | import sys,os from optparse import OptionParser import pysam import gzip from collections import defaultdict def sameSign(val1,val2): if val1 > 0 and val2 > 0: return True elif val1 < 0 and val2 < 0: return True else: return False def sharedPrefix(s1,s2): minLength = min(len(s1),len(s2)) shared = 0 for ind in range(minLength-1): if s1[ind] == s2[ind]: shared+=1 else: break if minLength == 1: return max(0,shared-1) else: return shared def sharedSuffix(s1,s2): minLength = min(len(s1),len(s2))-1 shared = 0 for ind in range(minLength*-1,0)[::-1]: if s1[ind] == s2[ind]: shared+=1 else: break return shared parser = OptionParser() parser.add_option("-b","--BAM",dest="BAM",help="BAM file",default="realign_all_samples.bam") parser.add_option("-s","--sites",dest="sites",help="VCF file of variants (only InDels are extracted, def realign_all_samples.vcf.gz)",default="realign_all_samples.vcf.gz") parser.add_option("-o","--outfile",dest="outfile",help="Write output to file (def STDOUT)",default='') (options, args) = parser.parse_args() outfile = sys.stdout if options.outfile == "" else open(options.outfile,'w') if not os.path.exists(options.BAM) or not (os.path.exists(options.BAM+".bai") or os.path.exists(options.BAM.replace(".bam",".bai"))): sys.stderr.write("Error: input BAM file not found\n") sys.exit() if not os.path.exists(options.sites): sys.stderr.write("Error: input VCF file not found\n") sys.exit() variants = [] infile = gzip.open(options.sites) for line in infile: if line.startswith("#"): continue else: fields = line.rstrip().split("\t") if len(fields[3]) != len(fields[4]): ref = fields[3] for alt in fields[4].upper().split(','): if len(ref) == len(alt): continue else: trimValue = sharedPrefix(ref,alt) if trimValue != 0: nref = ref[trimValue:] nalt = alt[trimValue:] else: nref,nalt = ref,alt trimValue2 = sharedSuffix(nref,nalt) if trimValue2 != 0: nref = nref[:-trimValue2] nalt = nalt[:-trimValue2] if len(nalt) == len(nref) and len(ref) == 1: continue elif (trimValue == 0): variants.append((fields[0],int(fields[1]),nref,nalt)) else: variants.append((fields[0],int(fields[1])+trimValue,nref,nalt)) infile.close() #print variants #print variants[2:3] #sys.exit() #variants = [("X",154158201,"T","TG")] #variants = [("X",138645010,"TATATATAATATATATATAAA","T")] #variants = [("X",154158192,"C","CTTT")] infile = pysam.Samfile(options.BAM, "rb" ) for chrom,pos,ref,alt in variants: event = len(alt)-len(ref) res = defaultdict(int) start = pos-1 end = pos+max(len(ref),len(alt))-1 for pileupcolumn in infile.pileup(chrom,start,end,max_depth=10**9): if start-5 <= pileupcolumn.pos <= end+5: for pileupread in pileupcolumn.pileups: #print pileupread.indel,event,sameSign(pileupread.indel,event) if sameSign(pileupread.indel,event): RG = None for key,value in pileupread.alignment.tags: if key == "RG": RG=value res[RG]+=1 #res = filter(lambda (x,y):y>=3,res.iteritems()) res = map(lambda (x,y):"%d:%s"%(y,x),res.iteritems()) outfile.write("%s_%d_%s/%s %s\n"%(chrom,pos,ref,alt," ".join(res))) infile.close() if options.outfile != "": outfile.close() |
3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 | import sys for line in sys.stdin: chrom,posrange = line.strip().split(":") start,end = map(int,posrange.split("-")) if end-start > 300: csize = end-start number = (csize/200)+1 nsize = csize/number hstart,hend = start-50,start+nsize+50 print "%s:%d-%d"%(chrom,hstart,hend) while (hend < end): hstart,hend = hend-100,hend-50+nsize print "%s:%d-%d"%(chrom,hstart,hend) else: print "%s:%d-%d"%(chrom,start,end) |
10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 | import sys, os from optparse import OptionParser import gzip import pysam import math from collections import defaultdict from AnalysisLib import get_from_tabix,eval_1000G_frequencies from bx.intervals.intersection import Intersecter, Interval genomeBuild = "GRCh37" #commonVars = set(['rs6048','rs6049','rs1800291','rs1800292','rs1800297','rs1050705','rs1396947','rs440051']) def prefix(alleles): if len(alleles) > 1: check_shared = alleles[0] while len(check_shared) > 0: if reduce(lambda x,y: x and y.startswith(check_shared),alleles,True): return check_shared else: check_shared = check_shared[:-1] return "" else: return "" def sharedPrefix(s1,s2): minLength = min(len(s1),len(s2)) shared = 0 for ind in range(minLength-1): if s1[ind] == s2[ind]: shared+=1 else: break if minLength == 1: return max(0,shared-1) else: return shared def sharedSuffix(s1,s2): minLength = min(len(s1),len(s2))-1 shared = 0 for ind in range(minLength*-1,0)[::-1]: if s1[ind] == s2[ind]: shared+=1 else: break return shared def splitFields(x): helper = x.partition("=") return helper[0],helper[2] def eval_sex_check(filename): res = {} infile = open(filename) for line in infile: fields = line.split() if len(fields) == 3: sample = fields[0] if sample.endswith(".bam"): sample = fields[0][:-4] if sample.endswith(".M"): sample = ".".join(sample.split(".")[:-1]) total = int(fields[2]) sry = int(fields[1]) state = "?" info = "SRY/Total: %d/%d = %.2f%%"%(sry,total,0 if total == 0 else sry/float(total)*100) if (total > 0) and (sry > total*0.001): state = 'M' elif (sry == 0) and (total > 1000): state = 'F' elif (total > 1000) and (sry < total*0.0001): state = 'F?' res[sample] = state,info infile.close() return res def median(vals): sorted_values = list(vals) sorted_values.sort() if len(sorted_values) == 0: return None elif len(sorted_values) % 2 == 0: return (sorted_values[len(sorted_values)//2-1]+sorted_values[len(sorted_values)//2])*0.5 else: return sorted_values[(len(sorted_values)-1)//2] def percentile(vals,percentile): sorted_values = list(vals) sorted_values.sort() if len(sorted_values) == 0 or (0 > percentile) or (percentile > 1.0): return None else: return sorted_values[min(int(round((len(sorted_values)-1)*percentile)),len(sorted_values)-1)] def cleanOtherAlleleString(helper): return helper.replace(",<NON_REF>","").replace(",*","") parser = OptionParser("%prog [options]") parser.add_option("--vcf", dest="vcf", help="Filename of input multi-sample VCF file with all sites (def 'realign_all_samples.all_sites.vcf.gz')",default="realign_all_samples.all_sites.vcf.gz") parser.add_option("--vep", dest="vep", help="VEP results (def 'realign_all_samples.vep.tsv.gz')",default="realign_all_samples.vep.tsv.gz") parser.add_option("-i","--inversions", dest="inversions", help="Analysis results for inversion MIPs (def 'inversion_mips/inversion_summary_counts.txt')",default="inversion_mips/inversion_summary_counts.txt") parser.add_option("-s","--sample_sex", dest="sample_sex", help="Analysis results for sex check (def 'samples_sex_check.txt')",default="samples_sex_check.txt") parser.add_option("-t","--target", dest="target", help="BED file of target regions (def 'target_coords.bed')",default="target_coords.bed") parser.add_option("-f", "--factor", dest="factor", help="Allowed deviation for MIP performance (def 10)",default=10,type="int") parser.add_option("-m","--mipstats", dest="mipstats", help="File with MIP performance counts (def 'realign_all_samples.MIPstats.tsv')",default="realign_all_samples.MIPstats.tsv") parser.add_option("-c","--indelCheck", dest="indelCheck", help="Only report indels with count evidence (def 'realign_all_samples.indel_check.txt')",default="realign_all_samples.indel_check.txt") parser.add_option("-d", "--design", dest="design", help="MIP design file (default hemomips_design.txt)",default="hemomips_design.txt") parser.add_option("--TG", dest="TG", help="1000 Genomes variant tabix file",default="") parser.add_option("-b", "--benign",dest="benign", help="List of benign variants",default="") #parser.add_option("--freq", dest="freq", help="Maximum 1000 Genomes allele frequency (def 0.05)",type="float",default=0.05) (options, args) = parser.parse_args() benignVars = set() if os.path.exists(options.benign): infile = open(options.benign) for line in infile: benignVars.add(line.rstrip()) infile.close() print benignVars sex_check = eval_sex_check(options.sample_sex) ################################################################################################################################ # Adapt these lines to specify the combinations of the Inversion based on the MIP design inversion_names = ["inv22_ID+IU","inv22_ED+2U","inv22_ED+3U","inv22_ID+2U","inv22_ID+3U","inv22_ED+IU","inv1_1IU+1ID","inv1_1IU+1ED"] INT22_inversion_types = [ ("INT22-1#1" , [False,True,False,False,True,False, None,None] ), ("INT22-1#2" , [False,True,False,False,False,True, None,None] ), ("INT22-1#3" , [False,True,False,False,True,True, None,None] ), ("INT22-1#4" , [False,False,False,False,True,True, None,None] ), ("INT22-2#5" , [False,False,True,True,False,False, None,None] ), ("INT22-2#6" , [False,False,True,True,False,True, None,None] ), ("INT22-2#7" , [False,False,True,False,False,True, None,None] ), ("INT22-2#8" , [False,False,False,True,False,True, None,None] ), ("INT22-unknown" , [False,False,False,False,False,True, None,None] ) ] noINT22_inversion_types = [ ("benign_dup" , [True,True,True,False,False,True, None,None] ), ("noINT22#1" , [True,True,True,False,False,False, None,None] ), ("noINT22#2" , [True,True,False,False,False,False, None,None] ), ("noINT22#3" , [True,False,False,False,False,False, None,None] ), ("noINT22#4" , [True,False,True,False,False,False, None,None] ), ("noINT22#5" , [False,True,True,False,False,False, None,None] ) ] INT22failed_inversion_types = [ ("INT22-FAILED#1" , [False,True,False,False,False,False, None,None] ), ("INT22-FAILED#2" , [False,False,True,False,False,False, None,None] ), ("INT22-FAILED#3" , [False,False,False,True,False,False, None,None] ), ("INT22-FAILED#4" , [False,False,False,False,True,False, None,None] ), ("INT22-FAILED#5" , [False,False,False,False,False,False, None,None] ) ] INT1_inversion_types = [ ("INT1" , [None,None,None,None,None,None, False,True] ), ("noINT1" , [None,None,None,None,None,None, True,False] ), ("INT1-FAILED" , [None,None,None,None,None,None, False,False] ), ("Conflict: INT1" , [None,None,None,None,None,None, True,True] ) ] if not os.path.exists(options.vep) or not os.path.exists(options.vcf): sys.stderr.write("Error: VEP and/or VCF input files not available!\n") sys.exit() TGTabix = None if not os.path.exists(options.TG+".tbi"): sys.stderr.write("1000 Genomes tabix: Require valid path to compressed tabix file and tabix index file.\n") sys.exit() else: sys.stderr.write('1000 Genomes variants tabix file (%s)...\n'%(options.TG)) TGTabix = pysam.Tabixfile(options.TG,'r'),None,None,None,"1000 Genomes variants" bedanno = None coverage_stats_by_region = {} sorted_regions = [] name2region = {} if options.target != "" and os.path.exists(options.target): bedanno = {} infile = open(options.target) for line in infile: fields = line.rstrip().split('\t') if len(fields) > 3: chrom = fields[0] start = int(fields[1]) end = int(fields[2]) name = fields[3] if chrom not in bedanno: bedanno[chrom] = Intersecter() bedanno[chrom].add_interval( Interval(start+1, end+1, value = name) ) sorted_regions.append(name) name2region[name] = (chrom,start,end) if name not in coverage_stats_by_region: coverage_stats_by_region[name] = defaultdict(int) infile.close() inversion_obs = {} if os.path.exists(options.inversions): infile = open(options.inversions) for line in infile: fields = line.split() if len(fields) >= 1: individual = fields[0] if individual.endswith(".M"): individual = ".".join(individual.split(".")[:-1]) inversion_obs[individual] = defaultdict(int) for counts in fields[1:]: count,name = counts.split(':') count = int(count) inversion_obs[individual][name]=count infile.close() MIPcoords = {} if os.path.exists(options.design): # DESIGN FILE ARM RANGES ARE 1-BASED, CLOSED INTERVALS fchrom,ligStart,ligEnd,extStart,extEnd,fstrand,fmipname = 2,7,8,3,4,17,-1 infile = open(options.design) for line in infile: fields = line.rstrip().split("\t") if len(fields) > 8: if line.startswith(">") or line.startswith('#'): for ind,elem in enumerate(fields): if elem == "chr" or elem == "chrom": fchrom = ind elif elem == "ext_probe_start": extStart = ind elif elem == "ext_probe_stop": extEnd = ind elif elem == "lig_probe_start": ligStart = ind elif elem == "lig_probe_stop": ligEnd = ind elif elem == "probe_strand": fstrand = ind elif elem == "mip_name": fmipname = ind else: chrom,lstart,lend,estart,eend,strand,mipname = fields[fchrom],int(fields[ligStart])-1,int(fields[ligEnd]),int(fields[extStart])-1,int(fields[extEnd]),fields[fstrand],fields[fmipname] if strand == "+": MIPcoords[mipname] = chrom,lend,estart+1 else: MIPcoords[mipname] = chrom,lstart,eend infile.close() else: sys.stderr.write("MIP design file (%s) not available.\n"%(options.design)) sys.exit() allsamples = [] sample2ind = {} TotalSample = [] MIPcounts = {} failedMIPs = defaultdict(list) failedMIPs_summary = defaultdict(list) if os.path.exists(options.mipstats): ###################### # Reading MIP counts # ###################### infile = open(options.mipstats) for line in infile: if line.startswith("#"): allsamples = map(lambda x: x if not x.endswith(".M") else ".".join(x.split(".")[:-1]),line.rstrip().split("\t")[1:]) TotalSample = len(allsamples)*[0] for ind,sample in enumerate(allsamples): sample2ind[sample] = ind #print allsamples #print TotalSample else: fields = line.rstrip().split("\t") if len(fields) == len(allsamples)+1: MIPcounts[fields[0]] = map(int,fields[1:]) for ind,count in enumerate(MIPcounts[fields[0]]): TotalSample[ind]+=count #else: #print len(fields), len(allsamples)+1 infile.close() ###################################### # By-plate MIP performance analysis # ###################################### plates = set() for i in allsamples: plates.add("_".join(i.split("_")[:2])) plates = list(plates) plates.sort() for plate in plates: for mip,counts in sorted(MIPcounts.iteritems()): if mip not in MIPcoords: continue if mip.startswith("Y"): continue vals = [] psamples = [] for ind,count in enumerate(counts): if allsamples[ind].startswith(plate): vals.append(count/float(TotalSample[ind])) psamples.append(allsamples[ind]) ## Infer variance across samples med,lowsig,uppersig = median(vals),percentile(vals,0.341),percentile(vals,0.682) if med == 0: continue #sig = ((med-lowsig) + (uppersig-med))*0.5 siglow,sighigh = med-lowsig, uppersig-med factor = options.factor # qnorm(.975) = 1.959964, qnorm(.995) = 2.575829 #intlow,inthigh = max(0.0,med-sig*factor),min(med+sig*factor,1.0) intlow,inthigh = max(0.0,med-siglow*factor),min(med+sighigh*factor,1.0) outliers = 0 for ind,individual in enumerate(psamples): if (intlow > vals[ind]) or (vals[ind] > inthigh): outliers+=1 if vals[ind] > inthigh: failedMIPs[individual].append("+:"+mip) failedMIPs_summary["+:"+mip].append(individual) elif vals[ind] < intlow: failedMIPs[individual].append("-:"+mip) failedMIPs_summary["-:"+mip].append(individual) else: sys.stderr.write("Error: MIP stats file (%s) not available.\n"%(options.design)) GT_stats = {} GT_stats_gene = defaultdict(int) genes = set() coverage_stats = {} coverage_holes = {} het_stats = {} var_stats = {} variants = {} VEP = {} if options.vep.endswith('.gz'): infile = gzip.open(options.vep) else: infile = open(options.vep) VEPheader = None for line in infile: if line.startswith('##'): continue elif line.startswith('#Chrom'): VEPheader = line[1:].rstrip().split("\t") VEPheader_html = map(lambda x: x.replace("_","<br>"),VEPheader) else: if VEPheader == None: sys.stderr.write("Error: VEP file misses header.\n") sys.exit() else: fields = line.rstrip().replace("%3D","=").split("\t") vepline = dict(zip(VEPheader,fields)) if 'Uploaded_variation' in vepline and genomeBuild+"_"+vepline['Uploaded_variation'] not in VEP: chrom,pos,alleles = vepline['Uploaded_variation'].split("_") pos = int(pos) alleles =alleles.split("/") TGTabix,variantLines = get_from_tabix(TGTabix,chrom,pos) #print variantLines,chrom,pos,alleles F1000g,ASN_AF,AMR_AF,AFR_AF,EUR_AF,isLowCov = eval_1000G_frequencies(variantLines,alleles[0],alleles[-1]) #print F1000g,vepline['Uploaded_variation'] #if F1000g <= options.freq: if F1000g > 0: fields[-1]+=';1000G_AF=%.5f'%F1000g fields[-1]=fields[-1].lstrip(";") isBenign = False if vepline['Uploaded_variation'] in benignVars: isBenign = True VEP[genomeBuild+"_"+vepline['Uploaded_variation']] = fields,isBenign else: sys.stderr.write("Error: Unexpected duplication of annotation line or misformed line in VEP.\n") sys.exit() infile.close() fchrom = 0 fpos = 1 fname = 2 fref = 3 falt = 4 fqual = 6 finfo = 7 fformat = 8 findividual = 9 sites = set() sites_gene = defaultdict(set) InDelCheck_counts = {} if os.path.exists(options.indelCheck): infile = open(options.indelCheck) for line in infile: fields = line.split() fdict = dict(map(lambda x: (":".join(x.split(":")[1:]),int(x.split(":")[0])),map(lambda x: x if not x.endswith(".M") else ".".join(x.split(".")[:-1]),fields[1:]))) InDelCheck_counts["%s_%s"%(genomeBuild,fields[0])] = fdict infile.close() if options.vcf.endswith('.gz'): infile = gzip.open(options.vcf) else: infile = open(options.vcf) header = None individuals = [] is_gatk4 = False varcalls_gatk4 = {} for line in infile: if line.startswith('##'): continue elif line.startswith('#CHROM'): header = line[1:].rstrip().split("\t") individuals = map(lambda x: x if not x.endswith(".M") else ".".join(x.split(".")[:-1]), header[findividual:]) for individual in individuals: GT_stats[individual] = 0 coverage_stats[individual] = 0 coverage_holes[individual] = [] het_stats[individual] = 0 var_stats[individual] = 0 variants[individual] = [] else: if header == None: sys.stderr.write("Error: VCF file misses header.\n") sys.exit() else: fields = line.rstrip().split("\t") chrom,pos = fields[0],int(fields[1]) if bedanno != None: if chrom not in bedanno: continue region_name = None for cinterval in bedanno[chrom].find(pos,pos+1): region_name = cinterval.value if region_name == None: continue gene_name = region_name.split("/")[0] genes.add(gene_name) sites_gene[gene_name]=set() #VCFline = dict(zip(header,fields)) if ('<NON_REF>' in fields[falt]): is_gatk4 = True alleles = [fields[fref]]+fields[falt].split(',')[:-1] elif (fields[falt] != '.'): alleles = [fields[fref]]+fields[falt].split(',') else: alleles = [fields[fref]] allele_dict = dict(map(lambda (x,y):(str(x),y), enumerate(alleles))) if is_gatk4 and len(varcalls_gatk4) == 0: finalVariantsFilename = options.vcf.replace(".all_sites","") if os.path.exists(finalVariantsFilename) and options.vcf.endswith('.gz'): finalVars = gzip.open(finalVariantsFilename) elif os.path.exists(finalVariantsFilename): finalVars = open(finalVariantsFilename) for vline in finalVars: if vline.startswith('#'): continue vfields = vline.rstrip().split("\t") varcalls_gatk4[(vfields[fchrom],vfields[fpos],vfields[fref])]=vfields finalVars.close() if (len(varcalls_gatk4) > 0): sys.stderr.write("Read %d variants from filtered variant output file (%s, assuming GATK 4).\n"%(len(varcalls_gatk4),finalVariantsFilename)) count_GT = 0 sample_GTs = [] sample_coverages = [] formatfields = fields[fformat].split(':') #print fields[:10] #print varcalls_gatk4[(fields[fchrom],fields[fpos],fields[fref])] for ind,individual in enumerate(individuals): values = dict(zip(formatfields[:len(fields[ind+findividual].split(':'))],fields[ind+findividual].split(':')[:len(formatfields)])) if ('<NON_REF>' in fields[4]) and ((fields[fchrom],fields[fpos],fields[fref]) not in varcalls_gatk4): # HOMOZYGOTE REFERENCE CALLS GATK 4 if ('DP' in values) and (int(values['DP']) >= 3): sample_GTs.append(1) count_GT += 1 sample_coverages.append(int(values['DP'])) else: sample_GTs.append(0) sample_coverages.append(0) elif ((fields[fchrom],fields[fpos],fields[fref]) in varcalls_gatk4) or fields[ind+findividual] != "./.": # COMPATIBILITY WITH GATK 3 callQual = fields[fqual] if (fields[fchrom],fields[fpos],fields[fref]) in varcalls_gatk4: vfields = varcalls_gatk4[(fields[fchrom],fields[fpos],fields[fref])] alleles = [vfields[fref]]+cleanOtherAlleleString(vfields[falt]).split(',') allele_dict = dict(map(lambda (x,y):(str(x),y), enumerate(alleles))) callQual = vfields[fqual] values = dict(zip(vfields[fformat].split(":")[:len(vfields[ind+findividual].split(':'))],vfields[ind+findividual].split(':')[:len(vfields[fformat].split(":"))])) if (('AD' in values) and ('GT' in values) and (sum(map(lambda x: 0 if not x.isdigit() else int(x),values['AD'].split(","))) >= 3)) or (('AD' not in values) and ('GT' in values) and ('DP' in values) and values['DP'].isdigit() and (int(values['DP']) >= 3)): obsalleles = [] sample_GTs.append(1) for gt in values['GT'].split('/'): if gt in allele_dict: obsalleles.append(allele_dict[gt]) if len(obsalleles) == 2: count_GT += 1 if ('GQ' in values) and (values['GQ'] != ".") and (float(values['GQ']) >= 30) and ('DP' in values) and (int(values['DP']) >= 8) and (len(set(obsalleles)) != 1): het_stats[individual]+=1 for alt in set(obsalleles): if alt != fields[fref]: ppos,pref,palt = fields[fpos],fields[fref],alt if not(len(palt) == len(pref) and len(pref) == 1): trimValue = sharedPrefix(pref,palt) if trimValue != 0: pref = pref[trimValue:] palt = palt[trimValue:] ppos = str(int(ppos)+trimValue) trimValue2 = sharedSuffix(pref,palt) if trimValue2 != 0: pref = pref[:-trimValue2] palt = palt[:-trimValue2] pfix = len(prefix([pref,palt])) varstring = "%s_%s_%s_%s/%s"%(genomeBuild,fields[fchrom],ppos,pref,palt) varObs = None if varstring not in InDelCheck_counts else (0 if individual not in InDelCheck_counts[varstring] else InDelCheck_counts[varstring][individual]) if varstring not in VEP: varstring = "%s_%s_%d_%s/%s"%(genomeBuild,fields[fchrom],int(ppos)+pfix,pref[pfix:],palt[pfix:]) if varstring not in VEP: varstring = "%s_%s_%d_%s/%s"%(genomeBuild,fields[fchrom],int(ppos)+pfix,"-",palt[pfix:]) if varstring not in VEP: varstring = "%s_%s_%d_%s/%s"%(genomeBuild,fields[fchrom],int(ppos)+pfix,pref[pfix:],"-") if varstring not in VEP: sys.stderr.write("Can not retrieve variant from VEP output %s_%s_%s_%s/%s\n"%(genomeBuild,fields[fchrom],fields[fpos],pref,palt)) continue if varObs != None and float(varObs)/int(values['DP']) < 0.05: continue if (('GQ' in values) and (float(values['GQ']) < 30)) or (('DP' in values) and (int(values['DP']) < 8)): callQual = "LowQual" else: if callQual == ".": callQual = "OK" var_stats[individual]+=1 variants[individual].append((varstring,values['GT'],values['GQ'],values['AD'],values['DP'],callQual)) else: sample_GTs.append(0) if ('DP' in values) and (int(values['DP']) >= 3): sample_coverages.append(int(values['DP'])) else: sample_coverages.append(0) else: sample_GTs.append(0) sample_coverages.append(0) if count_GT > len(individuals)//2 and (chrom,pos) not in sites: sites.add((chrom,pos)) sites_gene[gene_name].add((chrom,pos)) coverage_stats_by_region[region_name]['Count']+=1 for ind,individual in enumerate(individuals): GT_stats[individual]+=sample_GTs[ind] GT_stats_gene[(gene_name,individual)]+=sample_GTs[ind] coverage_stats[individual]+=sample_coverages[ind] coverage_stats_by_region[region_name][individual+':Cov']+=sample_coverages[ind] coverage_stats_by_region[region_name][individual+':GT']+=sample_GTs[ind] if sample_GTs[ind] == 0: if len(coverage_holes[individual]) > 0: last = coverage_holes[individual][-1] if last[0] == chrom and last[2]+1 == pos: coverage_holes[individual][-1]=(chrom,last[1],pos) else: coverage_holes[individual].append((chrom,pos,pos)) else: coverage_holes[individual].append((chrom,pos,pos)) infile.close() total_sites = len(sites) total_sites_genes = dict(map(lambda x: (x,len(sites_gene[x])),sites_gene.keys())) try: os.makedirs('report') except: pass outfile3 = open('report/report.html','w') outfile3.write("<html>\n <head>\n <title>Sample Summary Report</title>\n </head>\n<body>\n") outfile3.write("""<p align="center"><h1>Sample Summary Report</h1></p>\n""") outfile3.write("""<table cellpadding="5" border="3">\n""") outfile3.write("""<tr><th>Individual</th><th>Sex</th><th>Short variants</th><th>Incomplete coverage</th><th>Deletions<br>(<50% covered)</th><th>INT1</th><th>INT22</th><th>Status</th></tr>\n""") outfile = open('report/summary.html','w') outfile.write("<html>\n <head>\n <title>Summary Report</title>\n </head>\n<body>\n") outfile.write("""<p align="center"><h1>Summary Report</h1></p>\n""") outfile.write("""<p>Total number of samples: <strong>%d</strong></p>\n"""%(len(individuals))) outfile.write("""<p>Total number of sites considered [cov in >50%% samples]: <strong>%d</strong></p>\n"""%(total_sites)) outfile.write("""<p></p>\n""") outfile.write("""<p><h2><a href="report.html">Sample summary</a></h2></p>\n""") outfile.write("""<table cellpadding="5" border="3">\n""") outfile.write("""<tr><th>SampleID</th><th>Sex</th><th>GTs</th><th>%GTs</th><th>Ave.Cov</th><th>Hets</th><th>Variants</th><th>VariantList (incl. low quality)</th></tr>\n""") #sorted_regions = coverage_stats_by_region.keys() #sorted_regions.sort() variantStats = {} coveragRegionStats = {} inversionMipStats = {} tbl_out1 = open('report/ind_status.csv','w') tbl1_header = ["TubeID", "SampleID", "Sex", "SRY/Total", "Number of over-/under-performing MIPs", "Performance outlier MIPs", "Number of Sites with GTs", "Percent sites with GT"] for gene in sorted(genes): tbl1_header.append("%s-Number of Sites with GTs"%gene) tbl1_header.append("%s-Percent sites with GT"%gene) tbl1_header = tbl1_header+["Average Coverage", "Average Coverage of GT Calls", "Number Hets called", "Number short variants called", "Incomplete Coverage", "Deletions (<50% Covered)", "Status", "Notes"] # "Well", "Plate" tbl_out1.write('"%s"\r\n'%('","'.join(tbl1_header))) tbl_out2 = open('report/inversion_calls.csv','w') tbl2_header = ["ID", "SampleID", "Inversion MIP Reads", "Inversion Results", "Status"] tbl_out2.write('"%s"\r\n'%('","'.join(tbl2_header))) tbl_out3 = open('report/variant_calls.csv','w') tbl3_header = ["ID", "SampleID", "Location", "GT", "GQ", "AD", "DP", "Status"] tbl_out3.write('"%s"\r\n'%('","'.join(tbl3_header))) tbl_out3_benign = open('report/variant_calls_benign.csv','w') tbl_out3_benign.write('"%s"\r\n'%('","'.join(tbl3_header))) tbl_out4 = open('report/variant_annotation.csv','w') tbl4_header = ["Location", "Build", "Chrom", "Pos", "Ref", "Alt", "Gene", "Region", "cDNA", "CDS", "Protein", "HGVS Transcript", "HGVS Protein", "rsID", "AF1000G", "Notes"] tbl_out4.write('"%s"\r\n'%('","'.join(tbl4_header))) for location,(vepline,isBenign) in VEP.iteritems(): build,chrom,pos,ref_alt = location.split("_") ref,alt = ref_alt.split('/') tbl4_fields = [location,build,chrom,pos,ref,alt] helper = dict(map(lambda x:splitFields(x),vepline[-1].split(";"))) tbl4_fields.append(helper["SYMBOL"]) if "EXON" in helper: tbl4_fields.append("Exon %s"%(helper["EXON"])) elif "INTRON" in helper: tbl4_fields.append("Intron %s"%(helper["INTRON"])) else: tbl4_fields.append("unknown") if vepline[10] != "-": tbl4_fields.append(vepline[10]) else: tbl4_fields.append('') if vepline[11] != "-": tbl4_fields.append(vepline[11]) else: tbl4_fields.append('') if vepline[12] != "-": tbl4_fields.append(vepline[12]) else: tbl4_fields.append('') if ("HGVSc" in helper): tbl4_fields.append(":".join(helper["HGVSc"].split(":")[1:])) else: tbl4_fields.append("") if ("HGVSp" in helper): tbl4_fields.append(":".join(helper["HGVSp"].split(":")[1:])) else: tbl4_fields.append("") rsIDs = [] for cID in vepline[-2].split(','): if cID.startswith('rs'): rsIDs.append(cID) tbl4_fields.append(','.join(rsIDs)) if ("1000G_AF" in helper): tbl4_fields.append(helper["1000G_AF"]) else: tbl4_fields.append("") tbl4_fields.append("") tbl_out4.write('"%s"\r\n'%('","'.join(tbl4_fields))) tbl_out4.close() for individual in individuals: status = None # None - OK, True - check, False - failed status_flags = set() tbl1_fields = [''] outfile2 = open('report/ind_%s.html'%(individual),'w') tbl1_fields.append(individual) #well = individual.split("_")[2].split(".")[0] #well = well[-1]+well[:-1] #tbl1_fields.append(well) #tbl1_fields.append(str(int(individual.split("_")[1]))) outfile2.write("<html>\n <head>\n <title>Report for %s</title>\n </head>\n<body>\n"%individual) outfile2.write("""<p align="center"><h1>Report for %s</h1></p>\n"""%individual) long_sex = "%s (%s)"%(sex_check[individual][0],sex_check[individual][1]) tbl1_fields.append(sex_check[individual][0]) tbl1_fields.append(sex_check[individual][1].split()[1]) outfile2.write("""<p>Sex determined from SRY MIPs: <strong>%s</strong></p>\n"""%long_sex) if sex_check[individual][0].endswith("?"): status = True status_flags.add("sex") tobs = 0 if individual not in failedMIPs else len(failedMIPs[individual]) if tobs > 1: if status == None: status = True status_flags.add("MIPs") outfile2.write("""<p>Number of target region performance outlier MIPs: <font color="#FF6600"><strong>%d</strong></font></p>\n"""%(tobs)) else: outfile2.write("""<p>Number of target region performance outlier MIPs: <strong>%d</strong></p>\n"""%(tobs)) tbl1_fields.append(str(tobs)) if len(failedMIPs[individual]) > 10: tbl1_fields.append(",".join(failedMIPs[individual][:10])+",...") else: tbl1_fields.append(",".join(failedMIPs[individual])) outfile2.write("""<p>Number of sites with GTs: <strong>%d (%0.2f%%)</strong></p>\n"""%(0 if total_sites == 0 else GT_stats[individual],0 if total_sites == 0 else GT_stats[individual]/float(total_sites)*100)) tbl1_fields.append(str(0 if total_sites == 0 else GT_stats[individual])) tbl1_fields.append(str(0 if total_sites == 0 else GT_stats[individual]/float(total_sites)*100)) for gene in sorted(genes): tbl1_fields.append(str(0 if total_sites_genes[gene] == 0 else GT_stats_gene[(gene,individual)])) tbl1_fields.append(str(0 if total_sites_genes[gene] == 0 else GT_stats_gene[(gene,individual)]/float(total_sites_genes[gene])*100)) outfile2.write("""<p>Number of sites with GTs [%s]: <strong>%d (%0.2f%%)</strong></p>\n"""%(gene,0 if total_sites_genes[gene] == 0 else GT_stats_gene[(gene,individual)],0 if total_sites_genes[gene] == 0 else GT_stats_gene[(gene,individual)]/float(total_sites_genes[gene])*100)) outfile2.write("""<p>Average coverage: <strong>%0.2f</strong></p>\n"""%(0 if total_sites == 0 else coverage_stats[individual]/float(total_sites))) tbl1_fields.append("%0.2f"%(0 if total_sites == 0 else coverage_stats[individual]/float(total_sites))) outfile2.write("""<p>Average coverage of GT calls: <strong>%0.2f</strong></p>\n"""%(0 if GT_stats[individual] == 0 else coverage_stats[individual]/float(GT_stats[individual]))) tbl1_fields.append("%0.2f"%(0 if GT_stats[individual] == 0 else coverage_stats[individual]/float(GT_stats[individual]))) if het_stats[individual] > 0 and sex_check[individual][0].startswith('M'): outfile2.write("""<p>Number of hets called: <font color="#FF6600"><strong>%d</strong></font></p>\n"""%(het_stats[individual])) else: outfile2.write("""<p>Number of hets called: <strong>%d</strong></p>\n"""%(het_stats[individual])) tbl1_fields.append(str(het_stats[individual])) outfile2.write("""<p>Number of short variants called: <strong>%d</strong></p>\n"""%(var_stats[individual])) tbl1_fields.append(str(var_stats[individual])) if len(variants[individual])-var_stats[individual] > 0: outfile2.write("""<p>Number of low quality variants not counted above: <font color="#FF6600"><strong>%d</strong></font></p>\n"""%(len(variants[individual])-var_stats[individual])) else: outfile2.write("""<p>Number of low quality variants not counted above: <strong>%d</strong></p>\n"""%(len(variants[individual])-var_stats[individual])) #tbl1_fields.append(str(len(variants[individual])-var_stats[individual])) outfile2.write("""<p></p>\n""") outfile2.write("""<p><h2>Variants identified in target region:</h2></p>\n""") if het_stats[individual] > 0 and sex_check[individual][0].startswith('M'): status = True status_flags.add("variants") outfile3.write("""<tr><td><a href="ind_%s.html">%s</a></td><td>%s</td>\n"""%(individual,individual,sex_check[individual][0])) o3_variants = [] if len(variants[individual]) > 0: outfile2.write("""<table cellpadding="5" border="3">\n""") outfile2.write("<tr><th>"+"</th><th>".join(["GT","GQ","AD","DP","Status"]+VEPheader_html[:3]+VEPheader_html[5:])+"</th></tr>\n") for variant,gt,gq,ad,dp,cstatus in variants[individual]: tbl3_fields = [''] tbl3_fields.append(individual) tbl3_fields.append(variant) tbl3_fields.append(gt) tbl3_fields.append(gq) tbl3_fields.append(ad) tbl3_fields.append(dp) tbl3_fields.append(cstatus) if cstatus == "LowQual": cstatus = """<font color="#FF6600">%s</font>"""%cstatus helper,isBenign = list(VEP[variant][0]),VEP[variant][1] VEP[variant][1] helper[-1] = helper[-1].replace(";","<br>") helper[-2] = helper[-2].replace(",","<br>") bgcolor = "" if isBenign: bgcolor = 'bgcolor="#f0f0f0"' tbl_out3_benign.write('"%s"\r\n'%('","'.join(tbl3_fields))) else: tbl_out3.write('"%s"\r\n'%('","'.join(tbl3_fields))) outfile2.write("<tr><td %s>"%(bgcolor)+("</td><td %s>"%(bgcolor)).join([gt,gq,ad,dp,cstatus]+helper[:3]+helper[5:])+"</td></tr>\n") if variant not in variantStats: helper = dict(map(lambda x:splitFields(x),VEP[variant][0][-1].split(";"))) variantStats[variant] = [helper["SYMBOL"],VEP[variant][0][9],1] else: variantStats[variant][-1]+=1 if float(gq) >= 30 and int(dp) >= 8: helper = dict(map(lambda x:splitFields(x),VEP[variant][0][-1].split(";"))) if isBenign: ovariant_text = '<font color="#737373"><strong>'+helper["SYMBOL"] else: ovariant_text = "<strong>"+helper["SYMBOL"] if "EXON" in helper: ovariant_text +=" (E:%s)"%(helper["EXON"]) elif "INTRON" in helper: ovariant_text +=" (I:%s)"%(helper["INTRON"]) if ("HGVSc" in helper) and ("HGVSp" in helper): ovariant_text +=" %s / %s"%(":".join(helper["HGVSc"].split(":")[1:]),":".join(helper["HGVSp"].split(":")[1:])) elif "HGVSp" in helper: ovariant_text +=" %s"%(":".join(helper["HGVSp"].split(":")[1:])) elif "HGVSc" in helper: ovariant_text +=" %s"%(":".join(helper["HGVSc"].split(":")[1:])) if isBenign: ovariant_text += "</strong></font> [%s %s:%s %s]"%tuple(variant.split("_")) else: ovariant_text += "</strong> [%s %s:%s %s]"%tuple(variant.split("_")) o3_variants.append(ovariant_text) else: if status == None: status = True status_flags.add("variants") outfile2.write("</table>\n") outfile2.write("""<p>GT - Genotype call encoded as allele values separated by "/". The allele values are 0 for the reference allele, 1 for the first alternative allele, 2 for the second allele.<br> GQ - Conditional genotype quality, encoded as a phred quality -10*log_10(p) (genotype call is wrong, conditioned on the site's being variant); we only report GQ >= 30 on the summary page.<br> AD - Read depth for each variant at this position (first reference, followed by alternative alleles).<br> DP - Read depth at this position for this sample; we only report DP >= 8 on the summary page.</p>\n""") else: outfile2.write("<p>None</p>\n") outfile3.write("<td>"+(" " if len(o3_variants)==0 else "<br>".join(o3_variants))+"</td>") o3_variants = [] o3_variants_del = [] tbl_variants_del = '' outfile2.write("""<p></p>\n""") outfile2.write("""<p><h2>Coverage by target region</h2></p>\n""") outfile2.write("""<table cellpadding="5" border="3">\n""") outfile2.write("<tr><th>"+"</th><th>".join(["Region","Length","Sites","Called","Fraction","Ave.Cov"])+"</th></tr>\n") lind, lind_del = None, None for ind,region in enumerate(sorted_regions): cvalue = coverage_stats_by_region[region][individual+":Cov"] gvalue = coverage_stats_by_region[region][individual+":GT"] tvalue = coverage_stats_by_region[region]['Count'] chrom,start,end = name2region[region] length = end-start if tvalue > 0: outfile2.write("<tr><td>%s</td><td>%d</td><td>%d</td><td>%d</td><td>%.2f%%</td><td>%.2f</td></tr>\n"%(region,length,tvalue,gvalue,gvalue/float(tvalue)*100,cvalue/float(tvalue))) else: outfile2.write("<tr><td>%s</td><td>%d</td><td>%d</td><td>%d</td><td>%.2f%%</td><td>%.2f</td></tr>\n"%(region,length,tvalue,gvalue,0,0)) if gvalue < tvalue: if lind == ind-1: start = o3_variants[-1].split(" - ")[0] o3_variants[-1] = start+" - "+region else: o3_variants.append(region) lind = ind if gvalue < tvalue*0.5: if lind_del == ind-1: start = o3_variants_del[-1].split(" - ")[0] o3_variants_del[-1] = start+" - "+region else: o3_variants_del.append(region) lind_del = ind if region not in coveragRegionStats: coveragRegionStats[region] = [tvalue,0,0,0] coveragRegionStats[region][1]+=cvalue coveragRegionStats[region][2]+=gvalue coveragRegionStats[region][3]+=1 outfile2.write("</table>\n") outfile2.write("""<p></p>\n""") outfile2.write("""<p><h2>Regions with missing genotype calls</h2></p>\n""") covholes = [] tbl1_fields.append('') if len(coverage_holes[individual]) > 0: outfile2.write("""<table cellpadding="5" border="3">\n""") outfile2.write("<tr><th>"+"</th><th>".join(["Chrom","Start","End","Region"])+"</th></tr>\n") for chrom,start,end in coverage_holes[individual]: regionstr = None if bedanno != None: if chrom not in bedanno: continue for cinterval in bedanno[chrom].find(start,start+1): regionstr = cinterval.value for cinterval in bedanno[chrom].find(end,end+1): if regionstr == None or regionstr == cinterval.value: regionstr = cinterval.value else: regionstr = regionstr+" - "+cinterval.value if regionstr == None: regionstr = "NA" covholes.append("%s:%d-%d (%s)"%(chrom,start,end,regionstr)) outfile2.write("<tr><td>%s</td><td>%d</td><td>%d</td><td>%s</td><tr>\n"%(chrom,start,end,regionstr)) tbl1_fields[-1]+="%s(%s:%d-%d),"%(regionstr,chrom,start,end) outfile2.write("</table>\n") else: outfile2.write("""<p>None</p>\n""") tbl1_fields[-1]=tbl1_fields[-1].rstrip(',') tbl1_fields.append(",".join(map(lambda x: x.replace(" - ","-"),o3_variants_del))) if len(covholes) < 10: outfile3.write("<td>"+(" " if len(covholes)==0 else "<br>".join(covholes))+"</td>") else: outfile3.write("<td>"+(" " if len(o3_variants)==0 else "<br>".join(o3_variants))+"</td>") outfile3.write("<td>"+(" " if len(o3_variants_del)==0 else "<br>".join(o3_variants_del))+"</td>") if individual in inversion_obs: tbl2_fields_inv1 = ['',individual,'','','failed'] tbl2_fields_inv22 = ['',individual,'','','failed'] outfile2.write("""<p></p>\n""") outfile2.write("""<p><h2>Inversion MIP results</h2></p>\n""") tobs = sum(inversion_obs[individual].values()) outfile2.write("""<p>Total inversion MIP reads: <strong>%d</strong></p>\n"""%(tobs)) minObsInv1,minObsInv22 = None,None if tobs > 0: outfile2.write("""<p>INV1 MIPs:</p>\n<table cellpadding="5" border="3">\n""") outfile2.write("""<tr><th>MIP name</th><th>Count</th></tr>\n""") tbl2helper = "" for invkey in ["inv1_1IU+1ID","inv1_1IU+1ED"]: outfile2.write("<tr><td>%s</td><td>%d</td><tr>\n"%(invkey,inversion_obs[individual][invkey])) if inversion_obs[individual][invkey] > 0: tbl2helper += "%s:%d,"%(invkey,inversion_obs[individual][invkey]) if minObsInv1 == None: minObsInv1 = inversion_obs[individual][invkey] minObsInv1 = min(minObsInv1, inversion_obs[individual][invkey]) outfile2.write("</table>\n") outfile2.write("<p></p>\n") tbl2helper=tbl2helper.strip(',') if tbl2helper == "": tbl2_fields_inv1[-1]='Failed' else: tbl2_fields_inv1[2]=tbl2helper if minObsInv1 < 8: tbl2_fields_inv1[-1]='Low' else: tbl2_fields_inv1[-1]='OK' outfile2.write("""<p>INV22 MIPs:</p>\n<table cellpadding="5" border="3">\n""") outfile2.write("""<tr><th>MIP name</th><th>Count</th></tr>\n""") tbl2helper = "" for invkey in ["inv22_ID+IU","inv22_ED+2U","inv22_ED+3U","inv22_ID+2U","inv22_ID+3U","inv22_ED+IU"]: outfile2.write("<tr><td>%s</td><td>%d</td><tr>\n"%(invkey,inversion_obs[individual][invkey])) if inversion_obs[individual][invkey] > 0: tbl2helper += "%s:%d,"%(invkey,inversion_obs[individual][invkey]) if minObsInv22 == None: minObsInv22 = inversion_obs[individual][invkey] minObsInv22 = min(minObsInv22, inversion_obs[individual][invkey]) outfile2.write("</table>\n") tbl2helper=tbl2helper.strip(',') if tbl2helper == "": tbl2_fields_inv22[-1]='Failed' else: tbl2_fields_inv22[2]=tbl2helper if minObsInv22 < 8: tbl2_fields_inv22[-1]='Low' else: tbl2_fields_inv22[-1]='OK' inv1found = set() #Check INT1 for invname,flags in INT1_inversion_types: check = True for key,value in zip(inversion_names,flags): if (value and inversion_obs[individual][key] == 0) or (value == False and inversion_obs[individual][key] > 0): check = False break if check: inv1found.add(invname) break #Check INT22 found = False inv22found = set() for invname,flags in INT22_inversion_types+noINT22_inversion_types+INT22failed_inversion_types: check = True for key,value in zip(inversion_names,flags): if (value and inversion_obs[individual][key] == 0) or (value == False and inversion_obs[individual][key] > 0): check = False break if check: inv22found.add(invname.split("#")[0]) found = True if not found: inv22found.add("Conflict: INT22") #tbl2_fields_inv22[2]='' tbl2_fields_inv22[-1]='Conflict' outfile2.write("""<p><h3>Resulting inversion calls</h3></p>\n""") for invname in (inv1found | inv22found): invfields = invname.split("Conflict: ") conflict = len(invfields) > 1 cinvname = invfields[-1] if conflict and status == None: status = True status_flags.add("inversions") if "unknown" in cinvname and status == None: status = True status_flags.add("inversions") if "FAILED" in cinvname: status = False status_flags.add("inversions") if cinvname not in inversionMipStats: inversionMipStats[cinvname] = [0,0] inversionMipStats[cinvname][0] += 1 if conflict: inversionMipStats[cinvname][1] += 1 outfile2.write("<p><em>%s</em></p>\n"""%(invname)) tbl2_fields_inv1[-2]=",".join(inv1found) tbl2_fields_inv22[-2]=",".join(inv22found) tbl_out2.write('"%s"\r\n'%('","'.join(tbl2_fields_inv1))) tbl_out2.write('"%s"\r\n'%('","'.join(tbl2_fields_inv22))) outfile2.write("""<p></p>\n""") outfile2.write("""<p><h2>Under-/over-performing target-region MIPs</h2></p>\n""") tobs = 0 if individual not in failedMIPs else len(failedMIPs[individual]) if tobs > 0: outfile2.write("""<table cellpadding="5" border="3">\n""") outfile2.write("<tr><th>MIP</th><th>+/-</th><th>Coordinates</th><th>Region</th><th>Count/Total</th><tr>\n") sind = sample2ind[individual] for mip in failedMIPs[individual]: direction = mip[0] mip = mip[2:] if mip in MIPcoords: chrom,start,end = MIPcoords[mip] region_name = None if chrom in bedanno: for cinterval in bedanno[chrom].find(start,end): region_name = cinterval.value if region_name == None: for cinterval in bedanno[chrom].find(start-50,end+50): region_name = cinterval.value if region_name == None: region_name = "" gene_name = region_name.split("/")[0] outfile2.write("<tr><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%d/%d</td><tr>\n"%(mip,direction,"%s:%d-%d"%MIPcoords[mip],region_name,MIPcounts[mip][sind],TotalSample[sind])) outfile2.write("""</table>\n""") else: outfile2.write("""<p>None</p>\n""") outfile2.write("""<p> </p>\n""") outfile2.write("""<p>Go to <a href="report.html">Sample Summary Report</a></p>\n""") outfile2.write("""<p>Go to <a href="summary.html">Summary Report</a></p>\n""") outfile2.write("</body>\n</html>\n") outfile2.close() outfile3.write("<td>"+(" " if len(inv1found)==0 else "<br>".join(map(lambda x: x if x.startswith("no") else "<strong>%s</strong>"%x,list(inv1found))))+"</td>") outfile3.write("<td>"+(" " if len(inv22found)==0 else "<br>".join(map(lambda x: x if x.startswith("no") else "<strong>%s</strong>"%x,list(inv22found))))+"</td>") if status == None: outfile3.write("""<td><font color="#008000">OK</font></td>""") tbl1_fields.append('OK') elif status == True: outfile3.write("""<td><font color="#FF6600">CHECK<br>%s</font></td>"""%(",".join(status_flags))) tbl1_fields.append('CHECK: %s'%(",".join(status_flags))) else: outfile3.write("""<td><font color="#FF0000">FAILED<br>%s</font></td>"""%(",".join(status_flags))) tbl1_fields.append('FAILED: %s'%(",".join(status_flags))) tbl1_fields.append('') tbl_out1.write('"%s"\r\n'%('","'.join(tbl1_fields))) outfile3.write("</tr>\n") outfile.write("""<tr><td><a href="ind_%s.html">%s</a></td><td>%s</td><td>%d</td><td>%0.2f%%</td><td>%0.4f</td><td>%d</td><td>%d</td><td>%s</td></tr>\n"""%( individual, individual, sex_check[individual][0], GT_stats[individual], 0 if total_sites == 0 else GT_stats[individual]/float(total_sites)*100, 0 if total_sites == 0 else coverage_stats[individual]/float(total_sites), het_stats[individual], var_stats[individual],", ".join(map(lambda x: "%s:%s %s"%(tuple(x[0].split("_")[1:])),variants[individual])))) outfile.write("</table>\n") outfile.write("""<p></p>\n""") outfile.write("""<p><h2>Variant summary</h2></p>\n""") outfile.write("""<table cellpadding="5" border="3">\n""") outfile.write("<tr><th>"+"</th><th>".join(["Variant","Gene","Effect","Count"])+"</th></tr>\n") to_sort = map(lambda (var,(gene,effect,count)):(count,var,gene,effect),variantStats.iteritems()) to_sort.sort() for count,var,gene,effect in to_sort[::-1]: varfields = var.split("_") var = "%s:%s %s"%(varfields[1],varfields[2],varfields[3]) outfile.write('<tr><td>%s</td><td>%s</td><td>%s</td><td>%d</td></tr>\n'%(var,gene,effect,count)) outfile.write("</table>\n") outfile.write("""<p></p>\n""") outfile.write("""<p><h2>Per region coverage summary</h2></p>\n""") outfile.write("""<table cellpadding="5" border="3">\n""") outfile.write("<tr><th>"+"</th><th>".join(["Region","Length","Sites","Called","Fraction","Ave.Cov"])+"</th></tr>\n") for region in sorted_regions: tvalue,cvalue,gvalue,count = coveragRegionStats[region] chrom,start,end = name2region[region] length = end-start if tvalue > 0 and count > 0: outfile.write("<tr><td>%s</td><td>%d</td><td>%d</td><td>%.2f</td><td>%.2f%%</td><td>%.2f</td></tr>\n"%(region,length,tvalue,gvalue/float(count),gvalue/float(tvalue*count)*100,cvalue/float(tvalue*count))) else: outfile.write("<tr><td>%s</td><td>%d</td><td>%d</td><td>%.2f</td><td>%.2f%%</td><td>%.2f</td></tr>\n"%(region,length,tvalue,0,0,0)) outfile.write("</table>\n") outfile.write("""<p></p>\n""") outfile.write("""<p><h2>Inversion MIP summary</h2></p>\n""") outfile.write("""<table cellpadding="5" border="3">\n""") outfile.write("<tr><th>"+"</th><th>".join(["Inversion type","Count","Conflict","%Conflict"])+"</th></tr>\n") to_sort = inversionMipStats.keys() to_sort.sort() for invname in to_sort: values = inversionMipStats[invname] if values[1] > 0: outfile.write("<tr><td>%s</td><td>%d</td><td>%d</td><td>%.2f%%</td></tr>\n"%(invname,values[0],values[1],values[1]/float(values[0])*100)) else: outfile.write("<tr><td>%s</td><td>%d</td></tr>\n"%(invname,values[0])) outfile.write("</table>\n") outfile.write("""<p></p>\n""") outfile.write("""<p><h2>MIP performance summary</h2></p>\n""") outfile.write("""<table cellpadding="5" border="3">\n""") outfile.write("<tr><th>"+"</th><th>".join(["MIP","+/-","Coordinates","Region","Observed","Individuals"])+"</th></tr>\n") to_sort = map(lambda (x,y):(len(y),x,y),failedMIPs_summary.iteritems()) to_sort.sort() for count,mip,failedIndividuals in to_sort[::-1]: direction = mip[0] mip = mip[2:] #print mip,direction,MIPcoords if mip in MIPcoords: chrom,start,end = MIPcoords[mip] region_name = None if chrom in bedanno: for cinterval in bedanno[chrom].find(start,end): region_name = cinterval.value if region_name == None: for cinterval in bedanno[chrom].find(start-50,end+50): region_name = cinterval.value if region_name == None: region_name = "" gene_name = region_name.split("/")[0] outfile.write("<tr><td>%s</td><td>%s</td><td>%s</td><td>%s</td><td>%d</td><td>%s</td><tr>\n"%(mip,direction,"%s:%d-%d"%MIPcoords[mip],region_name,count,", ".join(failedIndividuals))) outfile.write("</table>\n") outfile.write("""<p> </p>""") outfile.write("""<p>Go to <a href="report.html">Sample Summary Report</a></p>""") outfile.write("""</body>\n</html>\n""") outfile.close() outfile3.write("""</table>\n""") outfile3.write("""<p> </p>""") outfile3.write("""<p>Go to <a href="summary.html">Summary Report</a></p>""") outfile3.write("</body>\n</html>\n") outfile3.close() tbl_out1.close() tbl_out2.close() tbl_out3.close() tbl_out3_benign.close() |
11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 | import sys, os from optparse import OptionParser parser = OptionParser("%prog [options]") parser.add_option("-s","--SNVs", dest="SNVs", help="Path to SNV output file (def STDOUT)",default=None) parser.add_option("-i","--InDels", dest="InDels", help="Path to InDel output file (def STDOUT)",default=None) (options, args) = parser.parse_args() def is_nucleotide(seq): for base in seq.upper(): if base not in 'ACGT': return False return True def sharedPrefix(s1,s2): minLength = min(len(s1),len(s2)) shared = 0 for ind in range(minLength-1): if s1[ind] == s2[ind]: shared+=1 else: break if minLength == 1: return max(0,shared-1) else: return shared def sharedSuffix(s1,s2): minLength = min(len(s1),len(s2))-1 shared = 0 for ind in range(minLength*-1,0)[::-1]: if s1[ind] == s2[ind]: shared+=1 else: break return shared ################### # VCF FIELDS ################### fchr = 0 fpos = 1 fdbsnp = 2 fref_allele = 3 falt_allele = 4 fgeno_qual = 5 fflag = 6 finfo = 7 fformat = 8 fvalues = 9 outsnvs = sys.stdout outindels = sys.stdout if options.SNVs != None: outsnvs = open(options.SNVs,'w') if options.InDels != None: outindels = open(options.InDels,'w') for line in sys.stdin: if line.upper().startswith("#CHROM"): fields = line.split() outsnvs.write("\t".join(fields[:falt_allele+1])+"\n") if options.SNVs != None or options.InDels != None: outindels.write("\t".join(fields[:falt_allele+1])+"\n") if line.startswith("#"): continue fields = line.split() if len(fields) > falt_allele and is_nucleotide(fields[fref_allele]): ref = fields[fref_allele].upper() for alt in fields[falt_allele].upper().split(','): if ref == alt: continue if is_nucleotide(alt): if len(alt) == len(ref) and len(ref) == 1: outsnvs.write("%s\t%s\t.\t%s\t%s\n"%(fields[fchr],fields[fpos],ref,alt)) else: trimValue = sharedPrefix(ref,alt) if trimValue != 0: nref = ref[trimValue:] nalt = alt[trimValue:] else: nref,nalt = ref,alt trimValue2 = sharedSuffix(nref,nalt) if trimValue2 != 0: nref = nref[:-trimValue2] nalt = nalt[:-trimValue2] if len(nalt) == len(nref) and len(ref) == 1: outsnvs.write("%s\t%d\t.\t%s\t%s\n"%(fields[fchr],int(fields[fpos])+trimValue,nref,nalt)) elif (trimValue == 0): outindels.write("%s\t%s\t.\t%s\t%s\n"%(fields[fchr],fields[fpos],nref,nalt)) else: outindels.write("%s\t%d\t.\t%s\t%s\n"%(fields[fchr],int(fields[fpos])+trimValue,nref,nalt)) if options.SNVs != None: outsnvs.close() if options.InDels != None: outindels.close() |
49 50 51 52 53 54 55 56 57 58 59 60 | shell:""" set +o pipefail read1length=$(zcat {input.R1} | head -n 2 | tail -n 1 | awk '{{ print length($1) }}') index1length=$(zcat {input.I} | head -n 2 | tail -n 1 | awk '{{ print length($1) }}') read2start=$((read1length+index1length+1)) ( paste <( zcat {input.R1} ) \ <( zcat {input.I} ) \ <( zcat {input.R2} ) | \ awk '{{ count+=1; if ((count == 1) || (count == 3)) {{ print $1 }} else {{ print $1$2$3 }}; if (count == 4) {{ count=0 }} }}' | \ scripts/pipeline2.0/SplitFastQdoubleIndexBAM.py --bases_after_index=ATCTCGTATGCCGTCTTCTGCTTG --bases_after_2ndindex='' -l $index1length -m 0 -s $read2start --summary -i {input.lst} -q 10 -p --remove | scripts/pipeline2.0/MergeTrimReadsBAM.py --mergeoverlap -p \ > {output.bam} ) 2> {log} """ |
74 75 76 77 78 79 80 81 82 83 84 85 86 87 | shell:""" set +o pipefail read1length=$(zcat {input.R1} | head -n 2 | tail -n 1 | awk '{{ print length($1) }}') index1length=$(zcat {input.I1} | head -n 2 | tail -n 1 | awk '{{ print length($1) }}') index2length=$(zcat {input.I2} | head -n 2 | tail -n 1 | awk '{{ print length($1) }}') read2start=$((read1length+index1length+1)) ( paste <( zcat {input.R1} ) \ <( zcat {input.I1} ) \ <( zcat {input.R2} ) \ <( zcat {input.I2} ) | \ awk '{{ count+=1; if ((count == 1) || (count == 3)) {{ print $1 }} else {{ print $1$2$3$4 }}; if (count == 4) {{ count=0 }} }}' | \ scripts/pipeline2.0/SplitFastQdoubleIndexBAM.py --bases_after_index=ATCTCGTATGCCGTCTTCTGCTTG --bases_after_2ndindex='' -l $index1length -m $index2length -s $read2start --summary -i {input.lst} -q 10 -p --remove | scripts/pipeline2.0/MergeTrimReadsBAM.py --mergeoverlap -p \ > {output.bam} ) 2> {log} """ |
100 101 102 103 104 105 106 107 108 | shell:""" set +o pipefail index1length=$(zcat {input.I} | head -n 2 | tail -n 1 | awk '{{ print length($1) }}') ( paste <( zcat {input.R1} ) \ <( zcat {input.I} ) | \ awk '{{ count+=1; if ((count == 1) || (count == 3)) {{ print $1 }} else {{ print $1$2 }}; if (count == 4) {{ count=0 }} }}' | \ scripts/pipeline2.0/SplitFastQdoubleIndexBAM.py --bases_after_index=ATCTCGTATGCCGTCTTCTGCTTG --bases_after_2ndindex='' -l $index1length -m 0 --summary -i {input.lst} -q 10 -p --remove | scripts/pipeline2.0/MergeTrimReadsBAM.py --mergeoverlap -p \ > {output.bam} ) 2> {log} """ |
122 123 124 125 126 127 128 129 130 131 132 | shell:""" set +o pipefail index1length=$(zcat {input.I1} | head -n 2 | tail -n 1 | awk '{{ print length($1) }}') index2length=$(zcat {input.I2} | head -n 2 | tail -n 1 | awk '{{ print length($1) }}') ( paste <( zcat {input.R1} ) \ <( zcat {input.I1} ) \ <( zcat {input.I2} ) | \ awk '{{ count+=1; if ((count == 1) || (count == 3)) {{ print $1 }} else {{ print $1$2$3 }}; if (count == 4) {{ count=0 }} }}' | \ scripts/pipeline2.0/SplitFastQdoubleIndexBAM.py --bases_after_index=ATCTCGTATGCCGTCTTCTGCTTG --bases_after_2ndindex='' -l $index1length -m $index2length --summary -i {input.lst} -q 10 -p --remove | scripts/pipeline2.0/MergeTrimReadsBAM.py --mergeoverlap -p \ > {output.bam} ) 2> {log} """ |
143 | shell: "samtools merge -c {output} {input}" |
150 151 152 | shell:""" ( echo -e "@HD\tVN:1.4\tSO:queryname"; bwa mem {input.fasta} <( echo -e '@test\nNNNNN\n+\n!!!!!') 2> /dev/null | head -n -2; tail -n +2 {input.lst} | awk 'BEGIN{{ FS="\\t"; OFS="\\t" }}{{ print "@RG","ID:"$NF,"PL:Illumina","LB:"$NF,"SM:"$NF}}' ) > {output} """ |
171 172 173 | shell: """ samtools view -u -F {params.samflag} -r {params.plate} {input.bam} | scripts/pipeline2.0/FilterBAM.py -q --qual_number 5 --qual_cutoff=15 -p > {output} """ |
185 186 187 | shell:""" bwa mem -L 80 -M -C {input.fasta} <( samtools view -F {params.samflag} {input.bam} | awk 'BEGIN{{ OFS="\\n"; FS="\\t" }}{{ helper = ""; for (i=12; i <= NF; i++) {{ helper = helper""$i"\\t" }}; sub("\\t$","",helper); print "@"$1" "helper,$10,"+",$11 }}' ) | samtools view -u - | samtools sort - | scripts/pipeline2.0/TrimMIParms.py -d {input.design} -p | samtools reheader {input.new_header} - | samtools sort -o {output} - """ |
193 | shell: "samtools index {input} {output}" |
206 | shell: "( for i in {input.bams}; do echo $( basename $i ) $(samtools view $i Y | wc -l) $(samtools view -F 4 $i | wc -l); done )> {output}" |
219 220 221 222 223 224 225 226 227 228 | shell: """ ( grep "@RG" {input.sam}; \ bwa mem -M -L 80 -C {input.inv} <( samtools view -r {params.plate} -F 1 {input.bam} | awk 'BEGIN{{ OFS="\\n" }}{{ if (length($10) >= 75) {{ print "@"$1,$10,"+",$11 }} }}' \ ) | awk '{{ if (($0 ~ /^@/) || ($3 ~ /^inv/)) print }}'; \ bwa mem -M -L 80 -p -C {input.inv} <( \ samtools view -r {params.plate} -f 1 {input.bam} | awk 'BEGIN{{ OFS="\\n" }}{{ print "@"$1,$10,"+",$11 }}' \ ) | awk '{{ if (($0 !~ /^@/) && ($3 ~ /^inv/)) print }}' \ ) | samtools view -b -F 768 - | samtools sort -O bam -o {output} - """ |
239 240 241 242 243 244 | shell: """ (for bam in {input.bam}; do i=`basename $bam ".bam"`; echo $i $( ( samtools view -F 513 $bam | awk 'BEGIN{{ FS="\\t" }}{{ split($12,a,":"); if (($6 !~ /S/) && (a[1] == "NM") && (a[3] <= 10)) {{ print $3 }} }}'; samtools view -f 2 -F 512 $bam | awk 'BEGIN{{ FS="\\t"; OFS="\\t" }}{{ split($12,a,":"); if (($6 !~ /S/) && (a[1] == "NM") && (a[3] <= 10)) {{ print $1,$3 }} }}' | sort | uniq -c | awk '{{ if ($1 == 2) print $3 }}' ) | sort | uniq -c | awk '{{ print $1":"$2 }}' ); done )> {output} """ |
250 251 252 | shell:""" awk '{{ print $NF }}' {input.lst} | tail -n +2 > {output} """ |
263 264 265 | shell: """ sort -k2,2n {input} | cut -f 1-3 | bedtools merge | awk '{{ print $1":"$2-50"-"$3+50 }}' > {output} """ |
278 | shell: "gatk HaplotypeCaller -R {input.fasta} -L {input.targets} $(ls -1 {input.bamin} | xargs -n 1 echo -I ) --genotyping-mode DISCOVERY --output-mode EMIT_ALL_SITES -bamout {output.bamout} -O {output.vcf} --disable-optimizations" |
292 293 294 | shell:""" gatk --java-options "-Xmx8G" HaplotypeCaller --min-base-quality-score 5 --base-quality-score-threshold 6 --max-reads-per-alignment-start 0 --kmer-size 10 --kmer-size 11 --kmer-size 12 --kmer-size 13 --kmer-size 14 --kmer-size 15 --kmer-size 25 --kmer-size 35 --max-num-haplotypes-in-population 512 --sample-name {params.plate} -ERC GVCF -R {input.fasta} -I {input.bam} -L {input.targets} -O {output.vcfgz} """ |
304 | shell:"gatk CombineGVCFs --break-bands-at-multiples-of 1 -R {input.fasta} $(ls -1 {input.gvcf} | xargs -n 1 echo -V ) -O {output.bamout}" |
313 | shell:"gatk GenotypeGVCFs --standard-min-confidence-threshold-for-calling 10 -R {input.fasta} -V {input.vcf} -O {output}" |
321 | shell:"cat {input} | python scripts/processing/splitIntervals.py > {output}" |
331 | shell: "java -Xmx8G -jar scripts/GenomeAnalysisTK-3.2-2.jar -T IndelRealigner -R {input.fasta} -DBQ 3 -filterNoBases -maxReads 1500000 -maxInMemory 1500000 -targetIntervals {input.intervals} $(ls -1 {input.bam} | xargs -n 1 echo -I ) -o {output} -dt BY_SAMPLE -dcov 500" |
339 | shell: "java -Xmx6G -jar scripts/GenomeAnalysisTK-3.4-46.jar -T UnifiedGenotyper -R {input.fasta} -I {input.bam} -L {input.targets} -o >( bgzip -c > {output} ) -glm BOTH -rf BadCigar --max_alternate_alleles 15 --output_mode EMIT_ALL_SITES -dt NONE" |
345 346 | shell: """ zcat {input} | awk 'BEGIN{{ FS="\\t" }}{{ if ($1 ~ /^#/) {{ print }} else {{ if ($5 != ".") print }} }}' | bgzip -c > {output}""" |
353 | shell: "tabix -p vcf {input} > {output}" |
359 | shell: "tabix -p vcf {input} > {output}" |
371 | shell:"scripts/pipeline2.0/MIPstats.py {input} -o {output}" |
378 | shell: "scripts/processing/checkPileUpAtInDels.py -b {input.bam} -s {input.sites} -o {output}" |
396 397 | shell: """zcat {input.vcf} | python scripts/processing/VCF2vepVCF.py | vep --no_stats --fasta {input.fasta2} --quiet --buffer 2000 --dir_cache {input.cache} --cache --offline --db_version={params.vepvers} --species {params.vepspecies} --assembly {params.vepassembly} --format vcf --symbol --hgvs --regulatory --af --sift b --polyphen b --ccds --domains --numbers --canonical --shift_hgvs 1 --output_file >( awk 'BEGIN{{ FS="\\t"; OFS="\\t"; }}{{ if ($1 ~ /^##/) {{ print }} else if ($1 ~ /^#/) {{ sub("^#","",$0); print "#Chrom","Start","End",$0 }} else {{ split($2,a,":"); if (a[2] ~ /-/) {{ split(a[2],b,"-"); print a[1],b[1],b[2],$0 }} else {{ print a[1],a[2],a[2],$0 }} }} }}' | grep -E "(^{params.transcripts})" | bgzip -c > {output} ) --force_overwrite """ |
417 418 | shell: """scripts/processing/summary_report.py --benign {input.benign} --vcf {input.vcf} --vep {input.vep} --inversions {input.inv} --sample_sex {input.sex} --target {input.target} --mipstats {input.mips} --design {input.hemomips} --TG {input.tg} && mv report/ {output.folder} """ |
435 436 | shell: """scripts/processing/summary_report.py --benign {input.benign} --vcf {input.vcf} --vep {input.vep} --inversions {input.inv} --sample_sex {input.sex} --target {input.target} --mipstats {input.mips} --indelCheck {input.indel} --design {input.hemomips} --TG {input.tg} && mv report/ {output.folder} """ |
Support
- Future updates
Related Workflows





