Forensic science is supposed to be a scientific process. But for decades, critics have complained that evidence isn’t always evaluated under rigorous laboratory conditions and that many methods of analysis lack empirical support.
The consequences of faulty forensic evidence have been severe. Forty-five percent of wrongful convictions later overturned by DNA testing were found to involve inaccurate forensic evidence. Advocacy groups such as the Innocence Project argue that many forensic techniques, such as bitemark analysis, are unreliable and unscientific.
In 2016, the President’s Council of Advisors on Science and Technology (PCAST) examined the scientific validity of commonly used forensic methods and found that most lacked sufficient empirical support.
In recent years, an increase in peer-reviewed studies has helped to make forensic science more consistent and systematic. Here are five techniques scientists are continuing to test and improve:
1. Single-Source DNA
At a crime scene, evidence technicians collect samples of blood, hair, semen or skin cells to create a DNA profile. In this process, the DNA is chemically extracted from the sample, and a polymerase chain reaction is used to amplify the DNA segments. The resulting DNA fragments are measured and analyzed by a software program. Many states now have databases with DNA profiles of past arrestees, and the profile in question can be compared to others for a match.
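The database comparison described above can be pictured as matching allele pairs measured at a set of genetic markers (STR loci). Below is a minimal illustrative sketch, not a real forensic tool: the locus names are genuine CODIS core loci, but the allele values and database entries are invented for the example.

```python
# Illustrative sketch of DNA profile matching (not a real forensic tool).
# A single-source profile is modeled as allele pairs at several STR loci;
# database matching is an exact comparison of those pairs.
# Locus names (D8S1179, TH01, FGA) are real CODIS core loci, but all
# allele values and database entries below are invented examples.

def match_profile(evidence, database):
    """Return IDs of database profiles whose alleles match the evidence
    at every locus typed in the evidence sample."""
    matches = []
    for person_id, profile in database.items():
        if all(
            locus in profile and sorted(profile[locus]) == sorted(alleles)
            for locus, alleles in evidence.items()
        ):
            matches.append(person_id)
    return matches

evidence = {"D8S1179": (13, 14), "TH01": (6, 9.3), "FGA": (21, 22)}
database = {
    "arrestee_001": {"D8S1179": (13, 14), "TH01": (6, 9.3), "FGA": (21, 22)},
    "arrestee_002": {"D8S1179": (12, 15), "TH01": (7, 8), "FGA": (20, 24)},
}

print(match_profile(evidence, database))  # → ['arrestee_001']
```

Real laboratories match on many more loci and report statistical match probabilities rather than simple yes/no comparisons, which is why PCAST's caution about contamination and mislabeling matters: an exact match on a tainted sample is still wrong.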
PCAST deemed single-source DNA a reliable forensic technique well supported by empirical research. However, the council also cautioned the method could be subject to human error if samples are tainted in the field or mislabeled in the lab.
2. Ballistics Forensics
The type of weapon used in a crime can be informative to police, prosecutors and judges. Ballistics analysis is becoming increasingly sophisticated, and its reliability is improving. In a 2022 study in the Journal of Forensic Sciences, a research team found that professional analysts had a strong accuracy rate in identifying bullets and cartridge cases.
The research team recruited 173 participants through professional organizations such as the Association of Firearm and Toolmark Examiners. The team mailed a test packet to participants containing 15 cartridge case sets and 15 bullet sets. About 80 percent of the participants did not make errors in identifying either the bullets or the cartridge cases.
The research team found that most errors belonged to just a handful of participants. Thirteen participants were responsible for more than half of the errors, and six participants made 30 percent of the errors. These results were consistent with other firearm studies.
3. Bloodstain Pattern Analysis (BPA)
At a suspected murder scene, BPA can provide additional information to law enforcement. BPA, for example, can determine whether a fatal gunshot resulted from suicide or homicide. Or if a defendant is claiming self-defense, BPA might indicate whether a struggle indeed occurred.
BPA has been accepted in courtroom testimony for over 150 years, but the PCAST report criticized the technique’s high error rates. Scientists have responded with empirical research to measure accuracy rates and identify consistent flaws in the process.
In a 2021 study in Forensic Science International, an interdisciplinary team sought to determine how often analysts were incorrect or contradicted each other. They recruited 75 practicing analysts who reviewed 192 bloodstain patterns.
In samples with known causes, the participants had an 11.2 percent error rate. They also contradicted each other 7.8 percent of the time. The authors concluded the error and contradiction rates could be lessened if the BPA discipline streamlined its terminology and limited semantic disagreements, which they warned could have serious consequences in the real world.
4. Fingerprint Analysis
With this method, investigators identify a fingerprint or partial fingerprint left at a crime scene or on a murder weapon. They then compare the print to complete sets made from suspects or stored in databases. The technique has been used in the U.S. for over a century, but only in the past 20 years have scientists begun to scrutinize the process.
The 2004 bombing of Madrid’s commuter train system prompted scientists to rethink the method’s reliability. Spanish investigators struggled to identify a suspect. They turned to international investigative agencies, such as the FBI, and provided their evidence, including a latent fingerprint found on a bag of explosives.
The FBI matched the print to a lawyer in Oregon whose full set of prints was on record from his past military service. But the lawyer didn’t have a valid U.S. passport at the time of the bombing and said he hadn’t left the country since the early 1990s. The FBI later had to apologize for the erroneous accusation, and the forensic field began questioning the accuracy of fingerprint analysis.
In the 2016 PCAST report, the council noted that the FBI had conducted empirical studies since the bombing to improve reliability. The report concluded that the method had become “foundationally valid” but warned that juries must be informed that false positives occur. In one study the council cited, the false positive rate was one error per 306 cases; in another, it was as high as 1 in 18.
5. Multiple-Source DNA
At crime scenes, investigators might encounter more than one DNA profile. For example, there might be blood samples from the victim and the perpetrator. Or there can be evidence of “touch DNA,” meaning skin cells left behind when multiple people touch the same object.
The PCAST report noted the growing interest in “touch DNA” but found that multiple-source DNA was most reliable when limited to two profiles. The council warned that DNA analysis of “complex mixtures” was not yet foundationally valid and could lead to erroneous results.
Touch DNA, for example, was used to convict American university student Amanda Knox for the 2007 murder of her roommate in Italy. Knox was later exonerated.