
Following up on my last post, in which I asked why there were still no national standards for forensic science five years after the National Academy of Sciences' 2009 report Strengthening Forensic Science in the United States, and with scandal after scandal in crime labs all over the country, there may be light on the horizon.  On January 10, the U.S. Department of Justice (DOJ) and the National Institute of Standards and Technology (NIST) announced the formation of the National Commission on Forensic Science.

According to the announcement issued by DOJ and NIST:

Members of the commission will work to improve the practice of forensic science by developing guidance concerning the intersections between forensic science and the criminal justice system. The commission also will work to develop policy recommendations for the U.S. Attorney General, including uniform codes for professional responsibility and requirements for formal training and certification.

John P. Holdren, Assistant to the President for Science and Technology and Director of the White House Office of Science and Technology Policy, said that the Commission “will help ensure that the forensic sciences are supported by the most rigorous standards available—a foundational requirement in a nation built on the credo of ‘justice for all.’ ”

The formation of the Commission could be a significant milestone in the march toward the use of real science and defensible national standards in forensic labs.  But its very creation and structure may limit what it can achieve: it is not a body created by Congress with the power to write and implement standards or to regulate anything.  Rather, it is a federal advisory committee, formed under the Federal Advisory Committee Act of 1972.  (A quick primer on the Act is here.)  It investigates and debates designated topics, and then reports its recommendations to the federal department(s) that formed it (in this case, DOJ and NIST).  Those agencies may choose to adopt some, all, or none of the Commission's suggestions.

Still, this is a hopeful sign that we might be heading in the right direction.  At the very least, we will see a national conversation among the Commission's many members, who come from a variety of backgrounds in government, science, the legal system, and elsewhere.  See the list of more than thirty Commission members at the bottom of this announcement.

I hope readers will weigh in on the following questions: realistically, what will come from the Commission?  Will the government adopt its recommendations?  Will the recommendations include national standards to regulate forensic testing, assure quality control, and the like? In the end, will the work that you foresee coming from the Commission improve the U.S.'s largely unregulated system?

Here we are, more than a month after chemist Annie Dookhan, formerly of the Massachusetts State Drug Laboratory, pleaded guilty to producing fraudulent forensic testing results and went to prison.  The scandal, potentially involving tens of thousands of cases, has resulted in the release of hundreds of convicted persons. All of this has reportedly cost the state of Massachusetts more than $8 million, and the state has budgeted almost $9 million more for the continuing damage.  Readers have seen coverage of the Massachusetts scandal, and several others, here and here and here.

But Dookhan and the Massachusetts Drug Laboratory are far from alone.  According to a report by National Public Radio, there have been twelve major crime lab scandals in the U.S. in just the last two years.  With all of this damage — to individual cases and defendants, to state and local budgets, and to the public trust — occurring all over the country, how have policy makers at the national level responded?

Well, they haven’t.  At least not yet.

Almost five years after the release of the National Academy of Sciences' report "Strengthening Forensic Science in the United States: A Path Forward," which recommended (among other things) national standards for forensic labs, a National Institute of Forensic Science, and the independence of every crime lab from police agencies and prosecution offices, none of this has happened.  Nor is accreditation of laboratories required.

According to National Public Radio, Senators John Cornyn of Texas and Patrick Leahy of Vermont "are working to introduce legislation this year" that could address some of these problems.  But with the parade of crime lab scandals continuing without letup, why has it taken nearly five years since the NAS report to reach even this preliminary point?

Readers, would mandatory national standards help?  Are they appropriate?  What about requiring accreditation?

If you are from outside the U.S., does your country set mandatory national standards for crime labs?  Is accreditation required?

The time has long since passed for us to do something about this set of problems in the U.S.  We simply cannot afford the damage to the credibility of our criminal justice system, or the costs of reviewing cases and releasing convicted prisoners — some of whom may very well be guilty, but whose cases are tainted.

The International Association of Chiefs of Police (IACP) is one of the leading organizations for law enforcement professionals in the U.S. and around the world.  I regularly turn to their model policy and training documents when working on those issues for police agencies.  So it's a big deal to see their new report, prepared in conjunction with their partner, the U.S. Department of Justice Office of Justice Programs, announcing a new effort in which the IACP will play a leading role in fixing the problems in police investigation that cause wrongful convictions.

The report, titled "National Summit on Wrongful Convictions: Building a Systemic Approach to Prevent Wrongful Convictions," takes a comprehensive view of the issues that must be addressed to avoid convicting the wrong people, and announces a series of recommendations designed to bring that goal within reach.  It is based on work at a summit that gathered people from the IACP, DOJ, and a host of experts.  In a preliminary statement in the report, the President of the IACP and the Assistant Attorney General for the Office of Justice Programs outlined how the report came to be and what it does.

This event gathered 75 subject matter experts from all key disciplines to address and examine the causes of and solutions to wrongful convictions across the entire spectrum of the justice system. Summit participants worked diligently during this one-day intensive event to craft 30 focused policy recommendations that guide the way to our collective mission to continually improve the criminal justice system. The summit focused on four critical areas: (1) making rightful arrests, (2) correcting wrongful arrests, (3) leveraging technology and forensic science, and (4) re-examining closed cases. The 30 resulting recommendations directly address these areas and lay a critical foundation for required changes in investigative protocols, policies, training, supervision, and assessment.

The report makes thirty recommendations on a number of topics: eyewitness identifications, false confessions, preventing investigative bias, improving DNA testing procedures, CODIS, correcting wrongful arrests, leveraging technology and forensic science, and re-examining closed cases with an openness to new information.

The report is absolutely essential reading for anyone interested in wrongful convictions and what can be done to correct them.  Readers of my book Failed Evidence will also recognize that the emergence of this consensus at the top of the law enforcement profession is exactly what I have called for: “Police and Prosecutors Must Lead the Effort” (pp. 158-159).

The use of DNA identification as a forensic tool, beginning in 1989, changed the way that we think about guilt, innocence, and traditional police investigation.  It isn’t just the 311 wrongful convictions that DNA identification has confirmed; it’s the far more numerous cases in which DNA has determined guilt — sometimes in cases years or decades old.

Now DNA identification is about to change again: it will become even more powerful still.  A new way of processing and interpreting DNA has arrived that will make our current techniques look weak by comparison.

On Friday, Nov. 8, I attended an incredibly interesting talk by Dr. Ria David.  Dr. David is one of the co-founders of Cybergenetics, a company based in Pittsburgh.  Cybergenetics has perfected computer-based techniques and technologies that will change the way DNA is analyzed.  With Cybergenetics' TrueAllele® system, the analysis relies on the power of computers instead of interpretation done by humans.  The talk was sponsored by the Center for Women's Entrepreneurship (CWE) at Chatham University.  (Disclosure: my wife runs CWE; my wife and I know Dr. David and her Cybergenetics co-founder, Dr. Mark Perlin, but neither my wife nor I have any personal or financial ties of any kind to Cybergenetics.)

Most of us know that a DNA sample allows forensic scientists to say things like "the odds that this sample came from anyone other than the defendant are fifty million to one."  Pretty powerful stuff — until you learn that Cybergenetics' system will allow prosecutors to offer juries odds not of tens of millions to one, but of trillions or even quadrillions to one.  In addition, Cybergenetics' approach allows analysts to pull apart mixtures of DNA from different people — common at crime scenes, and something current DNA technology often can't handle.  Readers of my book Failed Evidence: Why Law Enforcement Resists Science can get a little more information in Chapter 7, pp. 186-190; you can get the book here.
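To make the arithmetic behind those odds concrete, here is a minimal sketch of the classic "product rule" that underlies traditional match statistics: multiply the genotype frequency at each independent locus.  To be clear, this is a textbook illustration, not Cybergenetics' method — TrueAllele's probabilistic models are far more sophisticated — and the allele frequencies below are invented for the example, though the locus names are real CODIS markers.

```python
# Back-of-the-envelope random match probability (RMP) via the
# product rule: multiply genotype frequencies across independent loci.
# Allele frequencies here are invented for illustration only; real
# labs draw them from population databases.

# (p, q): frequencies of the two alleles observed at each locus.
# For a heterozygote, the genotype frequency is 2*p*q.
loci = [
    ("D3S1358", 0.25, 0.21),
    ("vWA",     0.28, 0.11),
    ("FGA",     0.19, 0.14),
    ("D8S1179", 0.31, 0.09),
    ("D21S11",  0.25, 0.16),
]

rmp = 1.0
for name, p, q in loci:
    rmp *= 2 * p * q  # heterozygote frequency at this locus

print(f"Random match probability: {rmp:.2e}")
print(f"Roughly 1 in {1 / rmp:,.0f}")
# Just five loci already give odds of several hundred thousand to one;
# carrying the same multiplication across all 13 CODIS core loci is
# how reported odds reach the millions, trillions, and beyond.
```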

Cybergenetics' DNA system has found ready acceptance in the United Kingdom, but adoption has been slower in the U.S. There has been considerable resistance — something readers of Failed Evidence are quite familiar with — particularly at the FBI, which governs current DNA protocols and use.

There is much more to how Cybergenetics' TrueAllele system works and what it can do; I'd urge readers to take a good look at Cybergenetics' web site, which gives details on their work and on the many criminal cases and mass disaster identification cases (including the identification of remains at the World Trade Center site) in which it has been used.  Once law enforcement sees what this new method of using DNA can do, and once resistance to change is overcome, DNA will be able to identify many more guilty criminals, as well as exonerate many more of the wrongfully convicted — and it will do so with more certainty than we ever thought possible.
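Why are mixtures so hard?  A toy example helps.  The sketch below — my own simplified illustration, not Cybergenetics' algorithm — enumerates the pairs of genotypes that could explain the alleles observed in a two-person mixture at a single locus, weighting each pair by (invented) allele frequencies.  Probabilistic genotyping systems do something conceptually similar, but across every locus at once and with quantitative data such as peak heights, which is why serious computing power matters.

```python
from itertools import combinations_with_replacement

# Toy deconvolution of a two-person DNA mixture at one locus.
# Invented allele frequencies, for illustration only:
freq = {"A": 0.30, "B": 0.15, "C": 0.05, "D": 0.50}
observed = {"A", "B", "C"}  # alleles detected in the mixed stain

def genotype_prob(g):
    """Hardy-Weinberg prior for a genotype (a pair of alleles)."""
    a, b = g
    return freq[a] ** 2 if a == b else 2 * freq[a] * freq[b]

genotypes = list(combinations_with_replacement(sorted(freq), 2))

# Keep every pair of genotypes whose combined alleles exactly
# explain what was observed, weighted by the genotype priors.
weights = {}
for g1 in genotypes:
    for g2 in genotypes:
        if set(g1) | set(g2) == observed:
            weights[(g1, g2)] = genotype_prob(g1) * genotype_prob(g2)

total = sum(weights.values())
for (g1, g2), w in sorted(weights.items(), key=lambda kv: -kv[1])[:5]:
    print(f"contributors {g1} + {g2}: weight {w / total:.3f}")
```

Even in this stripped-down version, several genotype pairs can explain the same stain; the statistical work is in weighing them honestly rather than picking one by eye.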


In today's world, we think of DNA matching as the gold standard of identification: a scientifically precise method of matching human tissue or fluids left at a crime scene to a particular individual.  But in an op-ed article published in the New York Times last week, "High-Tech, High-Risk Forensics," UC Hastings law professor Osagie Obasogie reminds us that even DNA identification isn't foolproof.  It's an excellent point and one worth remembering.  But then he stumbles over the details, labeling DNA identification high risk because many people might supposedly share the same DNA profile.  Risk is always present — but is the risk in DNA identification high because many people might have the same DNA profile?  No.

Professor Obasogie tells us the story of Lukis Anderson, a man arrested for a murder near San Jose, California, after his DNA was found on the victim's body.  Only after Mr. Anderson was arrested and jailed did his alibi surface: at the time of the murder, he was a patient at a hospital, suffering from severe intoxication, and there were voluminous records to prove it.  He was freed after five months, and prosecutors now think the most likely explanation is that the paramedics who transported him to the hospital were the same ones called to the crime scene later that night; the DNA was likely transferred to the victim from the paramedics' clothing or equipment.

Professor Obasogie decries "the certainty with which prosecutors charged Mr. Anderson with murder," because it "highlights the very real injustices that can occur when we place too much faith in DNA forensic technologies." This is hard to argue with; for those who can remember this far back, it recalls lawyer Barry Scheck cross-examining Dennis Fung in the O.J. Simpson murder trial over the sloppy collection of DNA and other evidence.  Scheck and other lawyers argued that the DNA that seemed to implicate Simpson just couldn't be trusted: "garbage in, garbage out." Professor Obasogie is not saying that the paramedics did anything wrong or were sloppy in any conventional sense; rather, he's arguing (correctly) that contamination can happen even when we're not aware of the possibility.  Caution is always advisable, even with DNA.

But then Professor Obasogie begins to argue that there are deeper problems with DNA identification than just contamination.

Consider the frequent claim that it is highly unlikely, if not impossible, for two DNA profiles to match by coincidence. A 2005 audit of Arizona’s DNA database showed that, out of some 65,000 profiles, nearly 150 pairs matched at a level typically considered high enough to identify and prosecute suspects. Yet these profiles were clearly from different people.

Professor David H. Kaye of Penn State Dickinson School of Law has pointed out the problem with this argument on his blog Forensic Science, Statistics & the Law.  In a blog post on July 26, Professor Kaye explains why Professor Obasogie is wrong on this point.  In his words:

The 150 or so matches were, in fact, mismatches. That is, they were partial matches that actually excluded every “matching” pair. Only if an analyst improperly ignored the nonmatching parts of the profiles or if these did not appear in a crime-scene sample could they be reported to match.

There is much more to Professor Kaye’s thorough and lucid explanation.  If you are interested in a real understanding of how DNA actually works, I strongly recommend Professor Kaye’s post.
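One piece of statistical intuition is worth making concrete here: a database trawl compares every profile against every other, so the opportunities for coincidental partial matches number in the billions.  The back-of-the-envelope sketch below is my own illustration, not Professor Kaye's; the per-pair probability is an assumed order of magnitude, not a measured figure.

```python
import math

profiles = 65_000
pairs = math.comb(profiles, 2)  # every profile compared with every other
print(f"Pairwise comparisons: {pairs:,}")  # about 2.1 billion

# Assumed probability that two unrelated profiles coincide at, say,
# 9 of 13 loci (an illustrative order of magnitude, not real data):
p_partial = 1e-7

print(f"Expected partial matches by chance: {pairs * p_partial:,.0f}")
# Even a one-in-ten-million per-pair probability yields a couple of
# hundred coincidental partial matches across two billion comparisons.
```

Partial matches on that scale are what the arithmetic predicts — which is very different from saying that full 13-locus profiles from different people routinely match.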

The U.S. Department of Justice (DOJ), in partnership with the Innocence Project and the National Association of Criminal Defense Lawyers (NACDL), will review 2,000 cases in which microscopic hair analysis of crime scene evidence was conducted by the FBI Laboratory.  The review, prompted by the DNA-based exoneration of several men convicted on the basis of hair microscopy, will focus on “specific cases in which FBI Laboratory reports and testimony included statements that were scientifically invalid.”  The Innocence Project’s announcement of the review is here; a representative news article is here.

In a move that shows just how seriously the DOJ is taking this review, it has done something unheard of:

Because of the importance attached to these cases, the DOJ has agreed, for the first time in its history, not to raise procedural objections, such as statute of limitations and procedural default claims, in response to the petitions of criminal defendants seeking to have their convictions overturned because of faulty FBI microscopic hair comparison laboratory reports and/or testimony.

Translation: DOJ is not going to fight this review in any of these cases; they’re going to be part of it.

It’s hard to describe the magnitude of the shift in outlook this represents.  Usually, as readers of Failed Evidence know, law enforcement (and I include DOJ in that phrase) resists science-based review and testing; that’s the thrust of the book.  I am happy to say that this is refreshingly different.  According to Peter Neufeld, Co-Director of the Innocence Project, “[t]he government’s willingness to admit error and accept its duty to correct those errors in an extraordinarily large number of cases is truly unprecedented.  It signals a new era in this country that values science and recognizes that truth and justice should triumph over procedural obstacles.”

Of course, this review will not affect cases in which hair analysis was handled by state crime labs.  But here's hoping the states will take this as an example, as the Grits for Breakfast blog argues ought to happen in Texas.

For a sense of the damage that sloppy hair analysis and testimony about it has done in prior cases, listen to this NPR story and interview about the case of Dennis Fritz, in Ada, Oklahoma.  John Grisham’s nonfiction book “The Innocent Man” is an excellent read about the case.

Maybe this is the beginning of a trend.  Hats off to DOJ, the Innocence Project, and NACDL.

When a former high-ranking Justice Department official speaks of a “revolution” in criminal justice, with the whole field turning toward science, could it mean less failed evidence in the future?  What does it mean for those concerned with faulty forensic science?

Laurie Robinson served as Assistant Attorney General in both the Clinton and Obama Justice Departments, where she oversaw the Office of Justice Programs (OJP), the research, statistics, and criminal justice assistance arm of the Department.  That made her remarks to the Delaware Center for Justice the other day worth noticing.  According to the Wilmington News Journal, "[w]e're seeing something akin to a revolution in criminal justice in this country," said Robinson, now on the faculty of George Mason University. "We're at an important crossroads, one where ideology has taken a back seat, and science and pragmatism have come to the fore."

Robinson's web page at George Mason says her tenure at OJP "was marked by a focus on science and evidence-based programming."  She was in Delaware to discuss the state's re-entry programs and other initiatives to reduce recidivism, and few would disagree that those important programs need scientific and statistical support.  But I wonder whether Robinson would be as optimistic about science's role in forensic methods, which have played a part in about half of all wrongful convictions across the U.S.  Surely, forensic science needs "science and evidence-based" support and examination — badly.

It has now been more than four years since the release of the landmark 2009 National Academy of Sciences report Strengthening Forensic Science in the United States: A Path Forward, which found that except for DNA and chemical analysis, most of what we think of as forensic science isn't science at all.  In that time, little seems to have changed; scandals in crime labs continue to pile up in jurisdictions across the country (see my posts about lab scandals just this past year in Massachusetts and Minnesota, for example).  In a particularly compelling piece in the Huffington Post, Radley Balko discusses the long-running crime lab scandal in Mississippi and puts it in context: Mississippi's scandal "is just the latest in a long, sad line of such stories" that Balko has already chronicled.

What's to be done?  As a start, I argued in Chapter 7 of Failed Evidence that all federal money that goes to law enforcement should carry with it a requirement of compliance with best practices in police investigation and forensic science.  Not all law enforcement agencies run their own forensic labs (and, as the NAS report said in Chapter 6, "Improving Methods, Practice, and Performance in Forensic Science," labs should be independent of law enforcement).  But for those that do, compliance with standards designed to prevent systematic error, human bias, fraud, and improper scientific testing and testimony should be mandatory.

That would start a revolution right there.  Because what law enforcement agency could afford to just turn down federal funding, in budgetary times like these?


Of all of the methods studied in the 2009 National Academy of Sciences’ report on forensic science, forensic odontology — the analysis of bite marks to identify the perpetrators of crimes — has come under some of the harshest criticism.  And now an effort is underway to keep it out of courtrooms for good.

This article by the Associated Press examined “decades of court records, archives, news reports and filings by the Innocence Project in order to compile the most comprehensive count to date of those exonerated after being convicted or charged based on bite mark evidence.”   Typically, the evidence consisted of testimony by forensic dentists identifying marks on the bodies of victims (usually in cases of murder or sexual assault) as coming from the defendant’s teeth.  According to the AP, since 2000, DNA identification has overturned one after another of these cases, throwing the entire “discipline” into question, and rendering it nearly obsolete.

Critics make two main claims against forensic odontology.  First, there is no scientific proof that a bite mark can be matched to any particular set of teeth.  There is no data, no experimental evidence — nothing — on which to base the idea that forensic odontology can make reliable identifications.  Second, human skin — almost always the site of the bite mark in question — does not reliably "record" bite marks; the skin changes shape, consistency, color, and even size after the mark is made, even after death, making bite marks, and the method itself, inherently unstable.

Yet the dentists, who belong to the American Board of Forensic Odontology, insist that their methods of identifying the source of bite marks are reliable; it's just that some forensic dentists are, well, not very good, or biased.  "The problem lies in the analyst or the bias," said Dr. Frank Wright, a forensic dentist in Cincinnati. "So if the analyst is … not properly trained or introduces bias into their exam, sure, it's going to be polluted, just like any other scientific investigation. It doesn't mean bite mark evidence is bad."  It's the familiar refrain: it's just a few bad apples, not the whole barrel.  But according to the AP:

Only about 100 forensic dentists are certified by the odontology board, and just a fraction are actively analyzing and comparing bite marks. Certification requires no proficiency tests. The board requires a dentist to have been the lead investigator and to have testified in one current bite mark case and to analyze six past cases on file — a system criticized by defense attorneys because it requires testimony before certification…The consequences for being wrong are almost nonexistent. Many lawsuits against forensic dentists employed by counties and medical examiner’s offices have been thrown out because as government officials, they’re largely immune from liability.  Only one member of the American Board of Forensic Odontology has ever been suspended, none has ever been decertified, and some dentists still on the board have been involved in some of the most high-profile and egregious exonerations on record.

It’s worth noting that there’s nothing wrong with other uses of forensic dentistry, such as identifying human remains from teeth by matching them to existing dental records; that type of forensic work is generally rock solid and remains unchallenged by critics.  But on identification of perpetrators through bite mark analysis, the question is different: when a forensic “science” has as dismal a record as forensic odontology does, and no scientific proof of its validity exists, can anything justify allowing the use of bite mark evidence to convict a person of a crime?

The National Registry of Exonerations, a joint project of the law schools at the University of Michigan and Northwestern, reported last week that in 2012, law enforcement cooperated in some way in a higher percentage of exonerations than in the past.  Does this mean less resistance to science by law enforcement?

The answer is that we can’t tell from this data.  But the report is worth looking at nonetheless.

Here is what the report says about law enforcement cooperation (I have removed the bullets, spacing, etc.):

In  2012 there was a dramatic increase in the number and the proportion of exonerations that prosecutors or police participated in obtaining.  Of the 63 exonerations in 2012, prosecutors or police initiated or cooperated in 34, or 54%. Over the past 24 years, prosecutors and police have cooperated in 30% of the exonerations we know about (317/1050). Last year for the first time they cooperated in a majority of exonerations, and the number of such cases is a large increase from the previous high (22 of 57 in 2008, or 39%).

This is all to the good.  But some aspects of the findings counsel caution.  First, the author(s) of the report freely admit they don't know why this is happening.  It could have many causes.

This increase [in cooperation] may be due to a confluence of related factors: changes in state laws that facilitate post-conviction DNA testing, the emergence of Conviction Integrity Units in several large prosecutorial offices, and, perhaps, a change in how law enforcement officers view the possibility of false convictions at trial.

And just to be clear, it's good that they admit the cause or causes are not clear.  Too often, those working with statistics take the opposite tack.

As for resistance to science, however, the report may indicate that the cases where science and forensics matter most still do not get cooperation from law enforcement.  The first clue is that 57% of the exonerations in 2012 were homicide cases, and another 24% were sexual assaults — 81% in all.  These are the types of cases in which resistance to science on eyewitness identification, interrogation, and forensics can matter the most.  These cases also have the highest public profile.  And there, perhaps, is the rub: "Official cooperation is least common among exonerations for highly aggravated and publicized crimes – murders with death sentences and mass child sex abuse prosecutions – and most common among exonerations for robberies and drug crimes."

The best way to learn whether this increased cooperation represents any lessening of the resistance to science would be to look at the individual exoneration cases for 2012: do they feature law enforcement cooperation on these science-based issues, or something else — for example, information from witness interviews illegally withheld from the defendant in a previous trial?

Perhaps we will see that in the next report from the Registry.

When I'm in Cincinnati for talks on Failed Evidence tonight, April 4, at 7:00 pm at the Clifton Cultural Arts Center, and tomorrow, April 5, at noon at the University of Cincinnati School of Law, one topic sure to come up is an article from the April 3 New York Times, "Advances in Science of Fire Free a Convict After 42 Years."  Louis Taylor was serving 28 life sentences for the deaths caused by a fire in a Tucson, Arizona hotel in December 1970.  Taylor, then just 16 years old, was convicted of arson based on faulty forensic science.

Mr. Taylor has been released from prison.  He is now 58 years old.

The story highlights the state of arson investigation, past and present.

A few years ago, the National Academy of Sciences turned its attention to the misuse of science in courtrooms, saying that pseudoscientific theories had been used to convict people of crimes they may not have committed. By then, a small group of fire engineers had already begun to discredit many of the assumptions employed in fire investigations, like the practice of using the amount of heat radiated by a fire to assess if an accelerant had been used.

Unlike DNA evidence, which can exonerate one person and sometimes incriminate another, the evidence collected in some arson investigations does not yield precise results. Often much of the evidence has been lost or destroyed. In the case of the hotel fire here, all that is left are photographs, reports and chemical analysis, all of them assembled to prove arson.

As a result, “we can’t definitely say what really caused the fire,” said John J. Lentini, a veteran fire investigator who wrote a report on Mr. Taylor’s case. “But what we can do is discredit the evidence” used to support the charge.

The case recalls the story of Cameron Todd Willingham, executed in Texas in 2004 for the deaths of his children in a fire.  Experts have called the arson analysis in that case terribly flawed — just as in Mr. Taylor's case.

The science of fire investigation is light years ahead of where it was even a decade ago.  It's time for all of the old cases in which verdicts depended on outmoded and discredited methods of arson investigation to be re-examined, and if necessary overturned.