
Annotate: A Plugin for the Ion Torrent Browser

This blog post details a plugin I have developed for the Torrent Browser. There are currently two plugins that do variant calling: (1) Germ-lineVariantCaller, a general variant caller, and (2) AmpliSeqCancerVariantCaller, which is specific to the AmpliSeq Cancer Kit. The "Annotate" plugin supplements the two variant caller plugins currently available, as it addresses three important questions in disease genetics.

Novel versus Common Variants

The first question is whether a variant is novel or common in the population. This can be determined by checking whether the variant exists in dbSNP (version 132). A tool that can differentiate between novel and common variants saves time, as novel variants are more likely to be disease causing than common variants. The Genome Analysis Toolkit (GATK) has an option to incorporate annotation from a VCF file through the -D option, but I decided against using this as the chromosome order in the dbSNP VCF file MUST match the reference file used for variant calling. This creates a little dilemma, as the hg19 reference stored on the Torrent Server is ordered differently to the dbSNP VCF file from the GATK 1.2 resource bundle. For this plugin, I decided to index the VCF file using tabix and look up the variants outside the GATK framework.
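The indexing and lookup are one-liners (a minimal sketch; the dbSNP file name and the query position are just examples):

# compress the dbSNP VCF with bgzip, index it with tabix,
# then fetch any position directly without loading the whole file
bgzip dbsnp132.hg19.vcf
tabix -p vcf dbsnp132.hg19.vcf.gz
tabix dbsnp132.hg19.vcf.gz chr7:117199644-117199644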

Functional Consequence of Variant

The second question is whether a variant lies within a gene and, if so, what the functional consequence is. For example, does the variant result in an amino acid change (i.e. a non-synonymous variant)? Common tools are SNPEff (latest update on Christmas Day!!) and ANNOVAR. Although SNPEff uses Gencode annotation and is therefore more comprehensive, it is quite hard to summarize the information and the majority of the transcripts (ENST) are non-coding, so for this plugin I have decided to go with ANNOVAR, which uses RefSeq (NM) annotations.
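The gene-based annotation step itself is short (a sketch; the file names are assumptions, and the RefSeq database must be downloaded into humandb/ first):

# convert the VCF to ANNOVAR's input format,
# then annotate against RefSeq (ANNOVAR's default gene definition)
convert2annovar.pl -format vcf4 variants.vcf > variants.avinput
annotate_variation.pl -geneanno -buildver hg19 variants.avinput humandb/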

Functional Impact of Novel Non-Synonymous Variants

The third question is whether a novel non-synonymous variant is likely to have a functional impact on the resulting protein. This can be achieved using functional impact prediction tools. I have decided to use PolyPhen2 and SIFT for predictions, as pre-computed values are available as text files on the ANNOVAR download page. I have decided not to use ANNOVAR itself to retrieve the functional impact predictions, as its implementation is unusually slow. To speed things up, I sorted the SIFT and PolyPhen2 prediction text files and then indexed them using tabix. This allows variants to be searched far more efficiently within the now sorted text files.
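The same bgzip/tabix trick used for dbSNP works on any sorted tab-delimited file once tabix is told which columns hold the chromosome and position (a sketch; the file name and column numbers are assumptions based on the ANNOVAR download format):

# sort by chromosome then position, compress, and index
sort -k1,1 -k2,2n hg19_ljb_pp2.txt | bgzip > hg19_ljb_pp2.txt.gz
tabix -s 1 -b 2 -e 2 hg19_ljb_pp2.txt.gz
# fetch the PolyPhen2 prediction for a single variant
tabix hg19_ljb_pp2.txt.gz 7:117199644-117199644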

Screenshots

Figure 1. Result from the C01-288 run of the AmpliSeq kit available for download in the Ion Community. All GATK variants called are KNOWN.

Figure 2. Result from the BUT-317 run of CFTR amplicon sequencing available for download in the Ion Community. Only one variant was called by GATK, and it was a novel variant. As this is a screenshot, you can’t see the tooltips for PolyPhen2 (PP2) and SIFT. D = Damaging, and SIFT scores < 0.05 are considered damaging.

The plugin can be obtained via the Ion Community at this link. If you find this plugin useful, please vote for it in the Ion Community by clicking "like".

We will be using this plugin in an upcoming project using custom-designed AmpliSeq primers on 10 large muscle-disease-causing genes across our undiagnosed patient cohort. Big thanks to Kelly and Life Technologies for awarding an Application Grant to our lab for this project 😀

Licensing

The Annotate plugin is a shell script which calls a collection of tools. It is important for organizations using this to have a look at the licenses and conditions of use for the following tools: ANNOVAR, PolyPhen2, SIFT, GATK, samtools, Picard Tools and tabix. For instance, ANNOVAR may not be free to use for commercial organizations: "ANNOVAR is open-source and free for non-profit use. If you use it for commercial purposes, please contact Ellen Purpus, director of OTT (PURPUS@email.chop.edu) directly for license related issues."

Thanks to David from EdgeBio for the feedback. EdgeBio created the first community-developed plugin, SNPEff, a neat plugin, and you can check out more details in their blog post.

Disclaimer: For the good of all mankind! This is purely my opinion and interpretations. We sit on the shoulders of giants – this plugin is a script composed of available open source tools and resources.


Ion Torrent Sequencing on Humans

Bored to death of seeing public releases of sequencing runs from E. coli coming off desktop sequencers? Today, Life Technologies released through the Ion Community a sequencing run that wasn’t from E. coli!! Thank goodness. Ion Torrent released a human shotgun sequencing run from NS12911 (aka Venter, published in PLoS Biology). Unfortunately, it was only two separate runs (C18-99 and C24-141) from a 318 chip, so there is bugger all in terms of coverage. I am extremely grateful for the release of the data set (kudos to Matt at Life Tech for the early access :D), though it would have been much nicer if they had released results from a custom capture, because at least then it wouldn’t be totally useless for analysis. However, all is not lost, as the coverage of the mitochondrial genome was sufficient to do some analysis (Figure 1).

Figure 1. Coverage (determined by BEDtools) from the two supplied BAMs. The bwa-sw aligned BAMs produce almost identical coverage and make a bit of a mess when plotted as well, so they are not shown.

Mitochondrial Variants

What has made the analysis of the mitochondrial genome (chrM) a bit annoying is that the two supplied BAM files don’t appear to be aligned to the hg19 chrM reference. According to the BAM header, the chrM they were aligned to was 16569 bases in length (@SQ     SN:chrM LN:16569). This is two bases less than the hg19 or hg18 chrM obtained from the UCSC Genome Browser. Since I was missing this version of chrM, I decided to create my own alignments using bwa-sw, in addition to tmap, with subsequent variant calling performed by the Genome Analysis Toolkit (GATK). The table below shows the results (the tmap alignment of C24 is missing as it was still grinding away while I was writing this 😕)
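For anyone wanting to check this themselves, the contig names and lengths are recorded in the BAM header (a quick sketch; the BAM file name is assumed):

# print the header and pull out the chrM @SQ line
samtools view -H C18-99.bam | grep chrM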

Run                       Variants Called
C18-99 (VCF supplied)     29 (including 3 INDELS)
C24-141 (VCF supplied)    30 (including 3 INDELS)
C18-99 (bwa-sw/GATK)      41
C24-141 (bwa-sw/GATK)     39
C18-99 (tmap/GATK)        35

It is hard to determine the overlap between the variants called in the supplied VCFs (i.e. by mpileup) and the ones called by GATK, as the difference between the two chrM references creates an off-by-1-2-base difference in the coordinates. On inspection, the majority of the mpileup calls are due to differences between the two references, evident in what is marked as the ref or alt base. Below is the Venn diagram showing the variants called by GATK in the two runs. Using the Integrative Genomics Viewer (IGV), the variants outside the intersection had reads supporting them in the run in which they were not called. The only exception is the 10279C>A variant, which appears to have borderline read support in each of the sequencing runs.

Figure 2. Relationship between the variants called on chrM from the two runs (i.e. C18 and C24). Ideally all variants should be in the intersection.

One noticeable difference is that GATK, although the -glm BOTH option was turned on, did not call any insertions or deletions (INDELS) on chrM. Using the Integrative Genomics Viewer (IGV), there do not appear to be enough reads supporting the deletions at positions 3105 and 3108. In contrast, the deletion at position 9905 had sufficient reads to support it. However, there appears to be an unusual amount of noise surrounding it in the form of colored bars (i.e. undercalls/overcalls) and black lines (deletions). For those who haven’t used IGV before, the bars/lines running horizontally are the reads, which are mostly colored grey as they usually completely match the reference.

Systematic Biases?

A public release of data would not be complete unless it included an E. coli data set. This release included a 194X coverage PGM run from a 318 chip (C22-169). Despite the very high coverage, the supplied VCF file showed 36 INDELS, which were all deletions. There seems to be a bias towards undercalling G or C bases, as they account for 33/36, while 4/36 were A or T undercalls. There was one deletion that involved undercalling both a G and a T, hence the appearance that I can’t add. 😳 These variants were counted manually and without a calculator, so there may be a mistake anyway 🙂 Using IGV, I had a look at the sequence context for the A/T undercalls. All three (1829754, 3545779, 4497732) have the exact same sequence context, that is AAAATTTT (click on each link to view the IGV screen shot). There is a possibility that errors in mapping to low complexity or repetitive regions may also explain some of these instances.
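If you want to reproduce the count without my dodgy arithmetic, the deleted bases can be pulled straight out of the VCF (a rough sketch; it assumes the deletions are recorded as simple anchored VCF records like REF=AG ALT=A, and the file name is assumed):

# count the supplied deletions by the identity of the deleted base(s)
grep -v "^#" C22-169.vcf \
  | awk 'length($4) > length($5) {print substr($4, length($5) + 1)}' \
  | fold -w1 | sort | uniq -c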

Using the same methodology to identify the G/C undercalls will help to identify the systematic biases that still remain in base calling. This, in combination with Torrent Scout and the wealth of Test Fragment data available, would be a good avenue to pursue for the Accuracy challenge. I’ll insert some details on the methods a little later.

Next week I’ll post regarding the contributions signal processing and base calling make to accuracy. Until then, back to my PhD thesis and having no life 😥

Materials and Methods

The hg19 reference file labelled ucsc.hg19.fasta was taken from the GATK 1.3 resource bundle directory.

tmap alignment

#tmap parameters taken from the supplied BAM file header
tmap mapall -R LB:hg19 -R CN:IonSoCal -R PU:PGM/318B \
  -R ID:16A7I -R SM:polymerase -R DT:2011-11-09T09:33:45 \
  -R DS:100KMTph755uMedta559S788Q -R PL:IONTORRENT \
  -n 6 -f ucsc.hg19.fasta -r in.fastq -v map1 map2 map3 > out.sam

#Create a sorted BAM file compatible with GATK (Picard)
java -jar AddOrReplaceReadGroups.jar I=out.sam O=out.bam \
  SORT_ORDER=coordinate RGPL=454 RGSM=NS12911 \
  RGPU=PGM/318B RGLB=hg19

bwasw alignment

#the reference must first be indexed: bwa index ucsc.hg19.fasta
bwa bwasw -t 8 ucsc.hg19.fasta in.fastq > out.sam

#Create a sorted BAM file compatible with GATK (Picard)
java -jar AddOrReplaceReadGroups.jar I=out.sam O=out.bam \
  SORT_ORDER=coordinate RGPL=454 RGSM=NS12911 RGPU=Random \
  RGLB=hg19

#all BAMs used as input to GATK must be indexed first
samtools index in.bam
#GATK indel local realignment against known indels
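#(the realignment commands themselves were not given here; below is a
# sketch of the standard GATK 1.x two-step workflow, with the
# known-indels file name assumed)
java -jar GenomeAnalysisTK.jar -T RealignerTargetCreator \
  -R ucsc.hg19.fasta -I in.bam -known known_indels.vcf \
  -o realign.intervals
java -jar GenomeAnalysisTK.jar -T IndelRealigner \
  -R ucsc.hg19.fasta -I in.bam -known known_indels.vcf \
  -targetIntervals realign.intervals -o realigned.bam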

#Mark Duplicates (Picard; a metrics file is required)
java -jar MarkDuplicates.jar I=in.bam O=out.bam \
  METRICS_FILE=dup.metrics

GATK variant calling

#Variant calling restricted to chrM
java -jar GenomeAnalysisTK.jar -et NO_ET -T UnifiedGenotyper -nt 8 \
  -glm BOTH -R ucsc.hg19.fasta -I in.bam -o out.vcf \
  -L chrM:1-16571

Coverage Plot (Figure 1)

coverageBed -abam in.bam -b mt.bed -d > out.txt
#mt.bed (BED start coordinates are 0-based)
chrM    0       16571

Comparing two variant call sets (Figure 2)

java -jar GenomeAnalysisTK.jar -et NO_ET -T CombineVariants \
  -R ucsc.hg19.fasta --variant:C18 C18.vcf --variant:C24 C24.vcf \
  -o merged.vcf

Disclaimer: For the good of all mankind! This is purely my opinion and interpretations. I dedicate this post to the fish and chips I had two weeks ago at some random place in New Zealand called Tauranga!

Ion Torrent and the Democratization of Sequencing

Back from my North East American trip and still jet lagged, so I’ll return to the blogosphere with a non-technical post. The term “democratizing sequencing” is synonymous with the Ion Torrent. This probably doesn’t mean Life Technologies are pitching to a bunch of hippie scientists trying to relive the 70s, so what does it mean instead? The definitions of “democracy” usually refer to a form of government, so this more general definition would be more suitable: “The practice or principles of social equality”. This post will cover the following components of social equality: economic equality, freedom of speech and freedom of information. This month has seen a massive effort introducing initiatives to emphasize these components.

Economic equality

This world map shows the positions of all the next generation sequencers in the world. It relies on facilities self-reporting, so it is not entirely accurate, but it is close because people like to brag 🙂 There are two things you may notice on this map:

  1. The richer countries tend to have more sequencers. This is not surprising as they tend to have more of everything, including obese people 😛
  2. Within each country, it tends to be the richer institutes and universities that have these machines. In the case of my home city, Sydney, there are three sites, with us way out in suburbia.

Given the correlation between high impact publications and next generation sequencing, why aren’t there more in Sydney? Simple answer: it costs at least 1 million dollars to build the infrastructure, and then there are ongoing costs. In Australia, this would require many investigators to get together to apply for a massive grant. Too many egos involved, and that’s why it rarely happens. The other alternative is to sell 2 million dollars worth of charity chocolates. This would require you to sell one chocolate to approximately every adult in Sydney. If this charity model is successful, we will have an even bigger type 2 diabetes problem 😥

What most researchers in Australia have to settle for is sending samples to sequencing centres such as the Ramaciotti Centre and the Australian Genome Research Facility (AGRF), which provide a great service for Australian researchers. Then why get a sequencer, most researchers ask? We got a sequencer as a way of controlling each step of the workflow and, more importantly, the time frames in which projects can be completed. Ever collaborated in Science before? Felt disappointed by how long things take? Well, you are not the only one!! Then you would understand why controlling time frames is SO important for scientists. Most have realized this but have never had the money to act upon it. The Ion Torrent, marketed at USD $50K, is the first time a lab in Australia can seriously say let’s get a sequencing machine. The Illumina MiSeq and Roche Junior are also competitively priced. A carefully planned strategy aligned with local sequencing facilities will now give everyone an equal opportunity to publish in good genetics journals, as economics is no longer a barrier.

Freedom of speech

The advent of the Internet has amplified the freedom of speech of everyone! Something we should not take for granted. In the past (i.e. early 90s), if I wanted to communicate information I would use the following:

  • Publish a book, journal article, TV or radio
  • Local newspaper, public notice boards and town hall meetings
  • Letter box drops
  • Tell my mom!

There would be no way a teenager would have the ability to use the first option of communication if all they wanted to say was that “they had an epic World of Warcraft Raid” or share a recording of them “owning a n00b on StarCraft 2“. Unfortunately, they now can; it’s called Twitter, Facebook and YouTube 😛

Life Technologies has embraced the Internet and freedom of speech through the Ion Community. This site allows members to provide feedback and report problems that they are having with the Ion Torrent. The comments made by members are NOT censored in any way. This allows people like me to say absolutely whatever we want. Most of the time I alternate between skeptic hater and annoying bug. Many are still afraid to speak their minds or even contribute, which is a shame. It is good to say stuff, but it is worthless if you cannot reach your target audience. In other words, the reason you complain is because you want something to be done. From my experience, Life Technologies are very fast to respond to comments and try their best to help.

In addition, Ion Torrent is providing strong support to the blogging community. This takes the form of early access to data and resources, allowing bloggers to do what they do best… review and complain 😀 The release of affordable sequencing technology has seen a massive explosion in technical blogging. I think there are a few reasons for this:

  1. First and foremost, it’s affordable, therefore a lot of people want to know more about it and want the opinion of the wise Internet. No one nowadays goes to a restaurant, books a hotel or buys anything without reading a review on the Internet. Next generation sequencing is no different!
  2. It may be Science, but no one can wait for a suppressed report in a journal article, which usually goes something like this: “we suggest perhaps maybe the Ion Torrent would be good for X, however further research will be required”.
  3. The release of publicly available data sets and, for the first time in the history of Biotech, the exact data sets used to generate the application notes and brochures! This is a gold mine for reviewing and complaining 😀
  4. The support of Life Technologies, Illumina and Roche, some more than others. I think they have realized… bloggers are like good global marketers, the only difference is we pay them absolutely nothing and people tend to believe them more!

Lastly, the greatest display of freedom of speech was allowing me to present at the Ion User Group Meeting. To put everything in context, I am only a PhD student and quite unpredictable at times. I was given carte blanche, so I really could have said anything I felt like during the 10 minutes. Saying “I was busting to take a piss” during my talk shows I had freedom of speech.

Freedom of information

Currently, Biotech companies have two types of customers: their preferred ones and the rest. The preferred customers usually get access to technology and information that other customers will only see at a later date. How do they pick these preferred customers? Who knows! But I know one thing: these customers are usually the richer ones that can afford to do field testing for them. Having this information early gives these preferred customers an unfair advantage in terms of producing preliminary data for grant applications. These are usually the institutes that DO NOT require an advantage to compete for grants. This model is extremely non-democratic and not COOL :(, although it makes economic sense to Biotech companies. There are two initiatives which Ion Torrent launched recently:

  1. Ion AmpliSeq Custom Kit Developer Access
  2. Ion 318 Chip Developer Access

In each of these initiatives, all customers are treated equally and will therefore be provided information whether they are preferred customers or not. The main emphasis is on giving back to the community, in other words sharing what you have learned while having early access to the technology. A huge difference from using it to benefit only yourself! This will definitely rock the boat amongst the preferred customers, but it is the only way democracy and freedom of information can be achieved. Illumina, being more established in sequencing, will have a very difficult time doing this, assuming they actually care about democratizing sequencing.

You can put a pipette (noun) in the hand of the scientist but you can’t make them pipette (verb)!

The paradigm shift in the business model implemented by Life Technologies is contingent upon Ion Torrent PGM purchases and the success of the Ion Community. To help with the steep learning curve required for sample preparation, Ion Torrent has an Application Grant Program. The emphasis again is on giving back to the community what you have learned. This will greatly help small labs like ours develop successful workflows so we can produce preliminary data and be competitive for large government grants. The grant program is a great incentive to buy a PGM over the MiSeq or Junior.

The Ion Community, like all online forums and communities in general, suffers from the problem of participation. It’s in human nature to take more than give. Due to Internet lurking, forums typically follow the 1% rule, or the 90:9:1 rule: 1% contribute, 9% edit/moderate, 90% just view. The Ion Community, despite its steady increase in membership, suffers from this same problem. It is no surprise the most active thread is the one where you get to boast how great your chip runs are, with the possibility of winning a pack of chips. Thankfully, Ion Torrent has learned from this and introduced an initiative called RecogitION, a program which aims to reward regular contributors. This reward system was extremely successful in the Sun Java forum I used to frequent to complain on. I nearly earned myself a free T-shirt 😦 Some people’s problems are just too difficult! Despite its extremely lame name, RecogitION will make for a more successful, active community.

Scientists have recognized Ion Torrent, through semiconductors, as revolutionizing sequencing. After everything is said and done, it may be recognized instead as the first Biotech to make a bold move in embracing the Internet culture and what it stands for: DEMOCRACY.

Disclaimer: For the good of all mankind! This is purely my opinion and interpretations. I dedicate this post to the all you can eat mud crabs in Rock Hall, Maryland. I try to send you bankrupt by eating all the crabs… only got to number 6 😦

Ion Torrent – Rapid Software Improvements

This is the second post of what is now to be a four-part series looking at how Ion Torrent accuracy has improved over time. In this edition, I will show what a massive difference software can make with this technology. The results presented here were only possible because the software is open source. In addition, Mike and Mel have given me early access to binaries (ion-Analysis v1.61) that will be released in Torrent Suite v1.5. A huge thank you to Mel and Mike!

There are three major areas where software can improve:

  1. Throughput – Identify more Ion Sphere Particles (ISPs) with library fragments therefore increasing the total number of reads.
  2. Quality – More and longer reads aligning back to the reference sequence.
  3. Speed – Reduce the computational time to analyze the data.

The way I am going to present this data is to keep the data set the same (i.e. the input DAT files) BUT perform the analysis using different versions of the software, i.e. ion-Analysis. The ion-Analysis binary is responsible for ISP finding, signal processing and base calling. I have discussed signal processing and base calling in my previous blog posts. I have also briefly touched on bead finding (identifying ISPs) but will go into more detail in my Signal Processing blog series. The three versions I have used are:

  1. ion-Analysis v1.40 (from Torrent Suite v1.3) REL: 20110414
  2. ion-Analysis v1.52 (from Torrent Suite v1.4) REL: 20110712
  3. ion-Analysis v1.61 (pre-release Torrent Suite v1.5) DATED: 20110914

Method

# The datadir contains a 314 run of the DH10B library.
# Execution of the ion-Analysis program creates rawlib.sff and rawtf.sff
Analysisv1.nn datadir --no-subdir > Analysis.out 2> Analysis.err

# rename the files before trimming
mv rawlib.sff rawlib_untrimmed.sff
mv rawtf.sff rawtf_untrimmed.sff

# trim the reads based on quality and minimum length,
# and remove the 3' adapter
SFFTrim --in-sff rawlib_untrimmed.sff --out-sff rawlib.sff \
  --flow-order TACGTACGTCTGAGCATCGATCGATGTACAGC --key TCAG \
  --qual-cutoff 9 --qual-window-size 30 --adapter-cutoff 16 \
  --adapter ATCACCGACTGCCCATAGAGAGGCTGAGAC --min-read-len 5

# create the fastq file from the SFF file
SFFRead rawlib.sff -q rawlib.fastq

# perform tmap (v0.0.28) alignment and dump quality metrics;
# Q47.histo.dat is used as input to a modified libGraphs.py
# python script to produce the AQ47 quality distribution
alignmentQC.pl -i rawlib.fastq -g e_coli_dh10b \
  -q 7,10,17,20,23,47

# produce read length metrics;
# readLen.txt is used as input to a modified
# trimmedReadLenHisto.py python script to produce
# the read length distribution
SFFSummary -o quality.summary --sff-file rawlib.sff \
  --read-length 50,100,150 --min-length 0,0,0 \
  --qual 0,17,20 -d readLen.txt

Throughput

The table below shows that between versions 1.40 and 1.52 there was a modest increase in the number of ISPs identified (i.e. occupied wells), resulting in an increase in final library reads. There was a slight decrease in version 1.61, but as I will show in the next section, it is quality and not quantity that really matters. Between versions 1.52 and 1.61 there is a massive difference in the filtering metrics. The blame has shifted from Poor Signal reads to Mixed/Clonal reads. This has massive consequences for how throughput can be increased further. The problem of poor signal reads is largely due to the quality of the raw data and the downstream computational processing, while mixed/clonal reads are due to sample preparation. There is a possibility that there is a bug in the pre-release code.

Ion Sphere Particle (ISP) Metrics

Metric                           v1.40     v1.52     v1.61
Library ISPs                     737,584   860,574   843,844
Filtered: Too short              < 0.1%    3.24%     0.80%
Filtered: Keypass failure        10.0%     8.78%     0.60%
Filtered: Mixed/Clonal           6.70%     9.80%     40.16%
Filtered: Poor Signal Profile    31.70%    26.19%    8.26%
Final Library Reads              381,629   447,484   423,564

Read Length Distribution

The improvements in software have allowed not only more reads but also slightly longer reads. This can be observed as a slight shift of the distribution to the right and also a widening of the peak near 100 bp. Interestingly, version 1.61 also has a small shoulder at 150 bp.

Quality

Based on Full Library alignment to Reference (Quality: AQ20)

Metric                           v1.40     v1.52     v1.61
Total Number of Bases [Mbp]      24.84     29.38     32.46
Mean Length [bp]                 77        77        88
Longest Alignment [bp]           119       137       144
Mean Coverage Depth              5.30×     6.30×     6.90×
Percentage of Library Covered    98.96%    99.48%    99.66%

Based on Full Library alignment to Reference (Quality: AQ47)

Metric                           v1.40     v1.52     v1.61
Total Number of Bases [Mbp]      23.21     27.29     29.68
Mean Length [bp]                 72        72        82
Longest Alignment [bp]           119       133       138
Mean Coverage Depth              5.00×     5.80×     6.30×
Percentage of Library Covered    98.64%    99.26%    99.48%

Quality (AQ47) length distribution

At first glance the length distribution looks slightly better in version 1.40 compared to the later version 1.52. The peak is much higher and broader at around 100 bp for version 1.61. An important thing to note is that at some point read length will be restricted by the library fragment length. For example, if the average library fragment length is 150 bp, it would be impossible to get a 400 bp read!


Speed (Computational Time)

There is a massive reduction in computational time between versions 1.40 and 1.52. This was when the NVIDIA Tesla GPU was employed through the use of the CUDA SDK. The use of GPU computing has been highly beneficial for bioinformatics programs such as HMMER. In the case of Ion Torrent, the biggest reduction is observed in the processing of the raw flowgrams (i.e. signal processing). This requires loading data from all chip wells for 20 flows (i.e. 20 DAT files) into memory and performing parameter fitting (the Levenberg–Marquardt algorithm) using matrix operations and linear algebra from the armadillo, BLAS and LAPACK/lapackpp libraries. In addition, there are modest improvements between versions 1.52 and 1.61. This may be due to the tree dephasing algorithm used for base calling, as most of the time reduction was observed in this stage. The name “tree” suggests a non-greedy algorithm was implemented. See my previous post regarding the greedy implementation.

Stage                  v1.40   v1.52   v1.61
Bead find              0.8     0.9     0.8
Bead Categorization    <0.1    <0.1    <0.1
Raw Flowgrams          48.2    22.5    24.9
Base calling           48.3    22.4    6.6
Total time             97.3    45.8    32.3

Note: Time is in minutes. Raw Flowgrams is the signal processing stage.

Besides more efficient algorithms, run time depends on the number of wells and flows to process. As Ion Torrent aims to increase overall throughput by increasing the number of reads (i.e. wells) and read lengths (i.e. flows), it is crucial to have computationally efficient algorithms that are constantly improving.

Life Grand Challenges – Accuracy

In this analysis there are two software versions that were or will be released shortly after the quarterly close of the accuracy challenge. This allows the unique opportunity to ask the question: would Ion Torrent software developers win the accuracy challenge themselves? In other words, how feasible is it, given the time limits, to achieve the goals set in the accuracy challenge? The goal is the equivalent of achieving a number of 100AQ23 reads greater than or equal to the number of 100AQ20 reads in the previous software release. This is equivalent to the goals set by the challenge, as software is released approximately every quarter.

Metric          v1.40     v1.52     v1.61
100AQ20 Reads   111,559   130,087   189,383
100AQ23 Reads   67,426    75,051    121,884

The 75,051 100AQ23 reads achieved by version 1.52 do not come close to the 111,559 100AQ20 reads achieved by version 1.40. Interestingly, the 121,884 100AQ23 reads achieved by version 1.61 are very close to the benchmark set by version 1.52 (i.e. 130,087 reads). If averaged over several input data sets, this may well have won the ONE million dollar accuracy challenge!! This shows the feasibility of the accuracy challenge and confirmed my initial thoughts: with a moving target, after the first two quarters it will be next to IMPOSSIBLE. There go my chances, so back to coming up with Facebook apps that may appeal to teenagers with too much time on their hands 😛

Conclusion

There are several limitations to the analysis I have performed. First, the different versions of ion-Analysis may have been optimized with different goals in mind. For example, version 1.61 may have been optimized for the new longer read chemistry, new flow rates, the 316 chip and the soon-to-be-released 318 chip. However, it does a pretty good job of analyzing our 314 data set. Second, performance on a DH10B library may not be a good reflection of how it may perform on “real world data”, or even human data, which may have different challenges. Third, this is only the result from one input data set and therefore may not be representative of average performance. Fourth, when I was supplied with the pre-release binary, the guys at Ion Torrent forgot to include the improved phred prediction table; I instead used the one from version 1.52. Improved quality prediction may lead to different read lengths after trimming, further improving the read length metrics. Lastly, the preparation of the samples before pressing RUN places an upper limit on how good the results can be. This also includes the size selection of the fragments during library preparation. In other words, don’t expect to see 400 bp reads! The girl who prepared this is experienced in molecular biology, but this was her first Next Generation Sequencing experiment! This is a testimony to how simple the technology is to use and how great the girls in our lab are 😀

Again, big thanks to Mel and Mike, who made a pre-release of version 1.61 available to me. In the next post, I will discuss the thing that shall not be named… HP, and that does not stand for Harry Potter 😛

Disclaimer: For the good of all mankind! This is purely my opinion and interpretations. Having early access to the ion-Analysis has made me one of them 😦

The Ion Torrent Accuracy Challenge – A non-biological explanation

My computing friends have read my blog and don’t understand it, but they think my lame jokes are quite good. They’re my friends so they MUST think the lameness is funny 😛 Anyway, in this blog post I will attempt to explain the Accuracy challenge in a more computer science friendly, albeit nerdier, way. Most of the biological jargon is good to know but not absolutely necessary to solve the problem. The below is aimed at people who are familiar with the ACM problem sets (sorry, I fixed the dodgy link, my bad). I used to solve these problems when I was working at IBM; it was a good way of staying awake after a big lunch. My point with this post is to show that this could be pitched to the talented people who participate in ACM challenges if approached correctly.

Problem:

King Koopa (Bowser?) – Has no respect for thesis writing. This guy is really bad!

Yoshi – I think this dude is a lizard

The pipe used to hide Yoshi’s PhD thesis!

The evil King Koopa has stolen Yoshi’s PhD thesis before he was able to submit it. Yoshi, being a lizard, has a language composed of 4 letters {A,C,T,G} from which words are constructed. Koopa decides to shred Yoshi’s thesis into one million words, place each one of them in one of these green tubes and seal them.

In order to rewrite his thesis, Yoshi must infer what word is in each tube. This can be achieved by tapping one of the four sides of the tube over an 11 second period (a round). Yoshi taps at a rate of 15 taps per second and repeats this procedure on the tube sides for 260 rounds. Therefore each tube receives a total of 15 x 11 x 260 = 42,900 taps, spread equally across the 4 sides.

The echo caused by the tapping allows Yoshi to infer the letters (because that’s what lizards do!). The inferred letter length can be 0, 1, 2, etc. depending on how loud the echoes are at crucial taps in a round of tapping. Each of the 4 sides corresponds to one of the letters in Yoshi’s 4-letter language. Yoshi tests each letter, cycling through them in a logical fashion.

Input:

Measured echo intensities can be thought of as a two dimensional array.

int data[n][m]

0 <= n < 260 – The round number. This determines the letter currently being decoded (i.e. the side of the tube being tapped on).

0 <= m < 11 x 15 – The tap number. Each round of tapping lasts 11 seconds, so at a tapping rate of 15 taps/sec there are 11 x 15 = 165 taps in total per round.

Output:

Below is the result of 4 different rounds of tapping the same side of the green tube, corresponding to the letter T in Yoshi’s alphabet. Yoshi has very good ears and is able to tell the difference between the subwords T, TTT, TTTTT and when there is no T at the current position of the word. Although Yoshi has good ears, over time the echoes created by previous taps persist for longer, making it hard for Yoshi to hear echoes within and between rounds. In addition, in later rounds Yoshi gets tired and doesn’t tap as hard, so the echoes produced aren’t as loud, making it difficult for Yoshi to discriminate between letter lengths. Often Yoshi gives up before getting to the end of the word and takes guesses at times.

The accuracy challenge is to help Yoshi filter out the persistent echoes within and between rounds so he can concentrate on the “true echo”. It is also to get Yoshi to HARDEN UP and not get so tired in the later rounds of tapping.

Figure 1. Yeah, I didn’t have time to relabel the axes. Anyway, this shows the echo intensities plotted from 4 rounds of tube tapping. Each round lasts 11 seconds at 15 taps per second.

Disclaimer: For the good of all mankind! This is purely my opinion and interpretations. If Koopa did that with my thesis, I would hit him with the red shell!

Ion Torrent – QV Prediction Algorithm

In this stand-alone blog post, I will attempt to detail the predicted quality value (phred scoring) algorithm that the Ion Torrent is currently using. As quality values are one of the battlegrounds on which the Next Generation Sequencing wars (Clone Wars is way cooler!) are currently being fought, it would be good to explain the difficulty in using this as a benchmark. Illumina has fought this battle on predicted quality values. This is a good ground to fight on, considering Illumina’s prediction algorithm is mature and quite good at predicting the empirical quality (Figure 1). Illumina has pointed this out in their recent application note. A good prediction algorithm matters when there is no reference sequence to compare your target against (aka de novo sequencing). In addition, “the point of predicted accuracy is that many tools use this in their calculations. The more accurate these estimates, the happier those tools are. Of course, you can always go and recalibrate everything, but that’s an extra step one would rather avoid.” (Thanks Keith for the input, taken from the comments section). Ion Torrent has fought on the empirical quality battleground. Their argument is: who cares what the predicted values are, actual values are more important. This is a great point, given economists spend most of their time explaining why things they predicted yesterday didn’t happen today. On the rare occasion when they get it right… their egos expand faster than the rate the Universe expanded slightly after the big bang! 😀 The reason why Ion Torrent has fought this battle on the empirical battleground is mainly due to the current weakness in their quality prediction algorithms (Figure 2).

Figure 1. Illumina phred score prediction is closer to the empirically derived values. This is read 1 from the MiSeq DH10B data set.

Figure 2. The Ion Torrent prediction algorithm under-predicts quality by approximately 10 phred points.

Since Ion Torrent have released the source code, I am able to interpret how per-base quality values are calculated. These quality values are determined after carry forward, incomplete extension and droop correction (aka CAFIE or phase correction). The quality values are recorded along with the corrected signal incorporation values in the SFF file.

Please note all equations are MY INTERPRETATION of the source code and since I didn’t write the code, I am probably incorrect sometimes.

Big thanks to Eugene (see comments below) from Life Technologies for correcting and providing an example for Predictors 4 and 5.

There are six metrics that are used to predict the per base quality values:

  1. Residue (float) – the distance of the corrected incorporation value from the nearest integer.
  2. Local noise (float) – the maximum residue amongst the previous, current and next corrected incorporation values (a window of 3 bases).
  3. Global noise (float) – calculated from the mean and standard deviation of all zero-mer and 1-mer signals for this well/read.
  4. “The homopolymer length, but it is assigned to the last base in the homopolymer (since there is a much higher chance of being off by 1 in the homopolymer length than by 2 or more).” All other bases in the homopolymer are assigned the value 1.
  5. “The homopolymer length the last time this nucleotide was incorporated – this is basically a penalty for incomplete incorporation.”
  6. Local noise (float) – calculated similarly to (2) but with a radius of 10 bases.

An example of predictors 4 and 5 is detailed below:

Base:        A A A A T A C C C
Predictor 4: 1 1 1 4 1 1 1 1 3
Predictor 5: 0 0 0 0 0 4 0 0 0

Note: Predictor 5 is dependent on flow order so in the above case it depends where in the 32 redundant flow cycle these bases were called.

Once these six metrics have been calculated for a flow/base call, the values are compared to an empirically derived phred table (note: each flow produces a base call, many just with a value of zero). There are currently two versions of this phred lookup table. The comparison starts from the top of the table (i.e. phred score 33) and works its way down until the six metrics are below the minimum criteria for that phred score. The maximum phred score is 33, while the minimum is 7 and 5 for phred table versions 1 and 2, respectively. As the Ion Torrent is quite new, it is understandable that the phred scoring algorithm still needs more calibration. Therefore, it is quite unfair to compare Illumina’s predicted QVs against Ion Torrent’s.
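To make the lookup concrete, here is a toy sketch of that scan in awk. The table format, column order and direction of the comparisons are my assumptions (the real table ships with the Torrent Suite source): each row is a phred score followed by six cut-off values, ordered from the highest score down.

# toy lookup: report the first (highest) phred score whose six
# cut-offs accommodate this base's six predictor values
awk -v p1=0.08 -v p2=0.12 -v p3=0.15 -v p4=1 -v p5=0 -v p6=0.10 '
  p1 <= $2 && p2 <= $3 && p3 <= $4 &&
  p4 <= $5 && p5 <= $6 && p6 <= $7 { print "QV " $1; exit }
' phred_table.txt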

Disclaimer: For the good of all mankind! This is purely my opinion and interpretations. This is an independent analysis using Novoalign kept simple so others can reproduce the results.

Ion Torrent Signal Processing (Part 1) – Background and Nucleotide Incorporation Models

This is the first part of a planned three-part blog series on Ion Torrent signal processing. In this first part, I will discuss the important aspects of the background and foreground models using key mathematical equations and pseudocode. In the second part, I will outline the high level process of signal processing, which includes the key parameters that must be fitted. In the final part, I will discuss the major assumptions and where the model breaks down.

The goal of Ion Torrent signal processing is to summarize time series data (Figure 1) into just ONE value, which is then stored in the 1.wells file. The 454 equivalent is the .cwf file (thanks flxlex); however, the difference is that Life Technologies has made their signal processing OPEN through the release of their source code. Without the source code, I would just be speculating in this blog series. So yay for available source code, and kudos to Ion Community contributors Mel, Mike and particularly Simon for answering all my questions in great detail.

In my opinion signal processing is the root cause of two problems:

  1. Reads that must be filtered out due to a poor signal profile*. This can account for up to 30% of the reads, as observed in the long read data set that was released.
  2. The resulting base calls, particularly towards the end of the reads. There is only so much signal normalization and correction (covered in the Fundamentals of Base Calling series) that can be performed.

Therefore, improvements made here will have the biggest effect on improving accuracy and increasing the number of reads. In other words, if you improve on this you can have ONE million dollars.

Ion Torrent – Signal Challenge

The major challenge of signal processing is that the foreground signal is not much bigger than the background signal. This is like trying to have a conversation with someone in a crowded noisy bar with loud music. It is very difficult but not impossible, for two reasons:

  1. You start getting used to the background sound and learn to ignore it.
  2. You know what your friend sounds like and focus on only the key words in the sentence.

In reality though I refuse to try and instead nod my head away pretending to listen 😛 However, the Ion Torrent signal processing works on a similar principle.

Figure 1A. Uncorrected signal from the first 100 flows of a live well. This was from a 4-flow cycle (Q1 2011) and thus 25 flows per nucleotide. If you look hard enough there are small bumps between 1500-2000 ms that represent nucleotide incorporation.

Figure 1B. A typical baseline corrected measurement from an occupied well (red) and an adjacent empty well (black). The tiny red bump between 1500-2000 ms represents a nucleotide incorporation.

Background Model

The background model aims to approximate what the signal would look like for a given flow if there were NO nucleotide incorporation. The problem is what to use as a point of reference. The best and most intuitive source is a zero-mer signal from the well itself, as this would encapsulate all the well-specific variance and parameters. A known zero-mer signal can be taken from the key flows (i.e. the first 7 flows). The only drawback is that each well is a dynamic system which changes over time due to slight variance in flow parameters and the changing state of the system. Another possibility is to re-estimate the zero-mer signal every N flows. The problem with this approach is that later on there will be no TRUE zero-mer signal, as there will be contributions from lagging strands. The surrounding empty wells are the only candidate left.

The loading of chip wells with Ion Sphere Particles is a probabilistic event, and not all particles fall into wells. Due to the size of the particles and wells, it is physically impossible to fit two particles in a well. Therefore, a well should either be empty or have one particle in it. The way the Ion Torrent detects whether a well is empty or not is by washing NaOH across the chip and measuring the signal delay compared to neighboring wells (Figure 2). An empty well has less buffering capacity and therefore should respond earlier than its occupied neighbors. There is sometimes a grey area in between, and the Ion Torrent analysis uses clustering to best deal with this grey area.

Figure 2. The voltage response from the NaOH wash at the start to detect occupied and empty wells. I’ll explain in more detail in next blog post. The putative empty wells (colored black) respond earlier and much faster than occupied wells (rainbow colored).  The well represented as a red dotted line lies in the “grey zone”, i.e. hard to classify as either empty or occupied.

Background Signal

There are three major contributors to the background signal

  1. Measured average signal from neighboring empty wells (ve). This signal must be time shifted as it will be subtracted to leave foreground signal.
  2. Dynamic voltage change (delta v). Can’t explain it beyond that 😦
  3. Crosstalk flux (xtflux)

I will let the mathematics do all the talking below 🙂 This is a screen capture of a LaTeX document I produced a few months ago, so I don’t remember much 😥 Please note all equations are MY INTERPRETATION of the source code and since I didn’t write the code, I am probably incorrect sometimes.


Foreground Signal – Nucleotide Incorporation Model

The foreground signal is calculated by subtracting the background signal from the measured signal for an occupied well. Using this model, we can determine the value A, which represents the nucleotide incorporation value (aka the uncorrected signal) that gets stored in the 1.wells file.

During each nucleotide flow, the polymerase adds nucleotides in a relatively synchronous manner and therefore produces a combined signal for the well, observed as a detectable voltage change. What I mean by “relatively” is that most nucleotides are incorporated soon after the nucleotide is flowed in, while some take a little longer to incorporate, which is usually the case with homopolymers. This looks like a sudden spike followed by an exponential decay (Figure 3). This foreground nucleotide incorporation is modeled as a Poisson distribution using empirically derived nucleotide base (A,C,T,G) specific parameters such as Km values (plagiarized from myself :lol:).
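A cartoon of that shape (NOT the actual model from the source code, just an illustration of a spike at incorporation time t_0 followed by an exponential decay, scaled by the incorporation value A):

f(t) \approx A \, e^{-(t - t_0)/\tau} \quad \text{for } t \ge t_0, \qquad f(t) \approx 0 \quad \text{for } t < t_0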

Figure 3. Signal produced by subtracting an empty well from an occupied live well (i.e. subtracting the dotted black line from the red line in Figure 1). The peak is ~60. The average key flow peak in a typical Ion Torrent report is calculated in a similar way. This is from a Q1 2011 DAT file, so it is not sampled at the more desirable rate.

Nucleotide Specific Parameters

Nucleotide Incorporation Simulation

The goal is to find A that best reduces the error. I will let the mathematics below speak for itself.

In the next blog post in this series, I will list the major parameters used in signal processing. These are the mysterious unaccounted-for variables in all the above equations, along with a high level description of how parameter fitting is performed.

Disclaimer: For the good of all mankind! This is purely my opinion and interpretations.  I have tried my best to keep all analyses correct. The mathematical interpretation was done some time ago when I was in my “happy place”. Now I’m not in that “happy place” so don’t remember a thing!