DNA mixtures

Grits reports on the latest developments in forensics at a hearing of the Texas Forensic Science Commission, and what it means to the legal system in Texas and elsewhere.

First, a bit of background. DNA testing looks at two metrics on X and Y axes: Whether alleles are present at various loci, and the quantity of DNA available for testing at that spot. (The latter is complicated by allele drop-in, drop-out, and stacking, terms I’m only beginning to understand.) When examining the peak height of DNA quantity on the test results, DPS’ old method did not impose a “stochastic” threshold, which as near as I can tell is akin to the mathematical sin of interpreting a poll without ensuring a random sample. (The word “stochastic” was tossed around blithely as though everyone knew what it meant.) Basically, DPS did not discard data which did not appear in sufficient quantity; their new threshold is more than triple the old one.
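To make the threshold idea concrete, here is a minimal sketch of what "discarding data that doesn't appear in sufficient quantity" means in practice. This is an illustration, not DPS's actual procedure: the peak heights, loci, and threshold values below are all invented (the only thing taken from the hearing is that the new threshold is more than triple the old one).

```python
OLD_THRESHOLD = 50   # hypothetical RFU value for the old threshold
NEW_THRESHOLD = 175  # hypothetical: "more than triple the old one"

# (locus, allele, peak height in RFU) -- made-up example data
peaks = [
    ("D8S1179", "13", 420),
    ("D8S1179", "14", 160),
    ("D21S11", "29", 95),
    ("D21S11", "31", 60),
]

def usable_alleles(peaks, threshold):
    """Keep only alleles whose peak height clears the stochastic threshold.

    Peaks below the threshold are at risk of drop-out (a real allele that
    failed to register), so they are too unreliable to count as evidence.
    """
    return [(locus, allele)
            for locus, allele, height in peaks
            if height >= threshold]

print(len(usable_alleles(peaks, OLD_THRESHOLD)))  # 4: all clear the old bar
print(len(usable_alleles(peaks, NEW_THRESHOLD)))  # 1: only the tallest peak survives
```

The point of the threshold is the same as the polling analogy: data below it may be missing or distorted (drop-in, drop-out, stacking), so treating it as solid evidence inflates the apparent strength of a match.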

That new methodology could change probability ratios for quite a few other cases, the panel predicted. One expert showed slides demonstrating how four different calculation methods could generate wildly different results, to my mind calling into question how accurate any of them are if they’re all considered valid. Applying the stochastic threshold in one real-world case which he included as an example reduced the probability of a match from one in 1.40 x 10^9 to one in 38.6. You can see where a jury might view those numbers differently.
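To see why the number can collapse that dramatically, note that a combined match probability is (roughly) the product of per-locus frequency estimates, so each locus that fails the stochastic threshold and drops out of the calculation weakens the statistic multiplicatively. The sketch below uses invented per-locus numbers, not figures from the actual case:

```python
def one_in(per_locus_probs):
    """Combine per-locus probabilities into a 'one in N' match figure.

    The combined probability is the product of the per-locus values;
    the headline number is its reciprocal.
    """
    p = 1.0
    for q in per_locus_probs:
        p *= q
    return 1.0 / p

# Hypothetical per-locus frequency estimates for a profile.
per_locus_probs = [0.05, 0.08, 0.10, 0.06, 0.12, 0.09]

# All six loci usable: roughly one in 3.9 million.
print(f"all loci usable: 1 in {one_in(per_locus_probs):,.0f}")

# Only two loci clear the threshold: one in 250.
print(f"two loci usable: 1 in {one_in(per_locus_probs[:2]):,.1f}")
```

Losing even a few loci to the higher threshold shrinks the "one in N" figure by orders of magnitude, which is consistent with the scale of the change the panel described.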

Not every calculation will change that much, and some will change in the other direction. An improper statistical method generates all types of error, not just those which benefit defendants. Some people who were excluded may become undetermined, and some undetermined samples may point to suspects when they’re recalculated. The panel seemed to doubt there were examples where a positive association would flip all the way to excluded, but acknowledged it was mathematically possible.

DPS has identified nearly 25,000 cases where they’ve analyzed DNA mixtures. Since DPS typically handles about half the state’s caseload, the panel estimated the statewide total may be double that when all is said and done. Not all of those are problematic, and in some cases the evidence wasn’t used in court. But somebody has to check. Ch. 64 of the Code of Criminal Procedure grants a right to counsel for purposes of seeking a DNA test, including when, “although previously subjected to DNA testing, [the evidence] can be subjected to testing with newer testing techniques that provide a reasonable likelihood of results that are more accurate and probative than the results of the previous test.” So there’s a certain inevitability about the need to recalculate those numbers.

See here for the Texas Tribune story that Grits references – WFAA also covered the hearing – and be sure to read the whole post. There’s a lot of scientific info out there if you google “DNA mixtures”, but I’m not informed enough to point you to something useful. As noted, DNA is still very exact when comparing known samples, or in isolating a suspect from a rape kit. It’s when there are multiple unknown DNA donors that things get complicated, and there isn’t a single standard for that now. What we do know is that the methods that had been used to provide match/elimination probabilities were not accurate, and some number of convictions in Texas and elsewhere will need to be reviewed in light of reinterpreted DNA evidence. Ultimately, questions about what the standards are and how the evidence should be analyzed will be settled by the courts, from the CCA to SCOTUS. This will be a long and occasionally messy process, and we’re at the very beginning of it. On the plus side, this should provide all kinds of fodder for mystery writers and TV showrunners. So at least there’s that.
