Why forensic evidence may not be as certain as we’d like to think it is

Forensic evidence can be open to the same subjectivity that affects other types of evidence

Dr Itiel Dror, cognitive neuroscientist at University College London

There are few certainties in a criminal trial – witnesses forget, barristers spin and defendants lie. So thank heavens for forensic experts, those objective specialists whose evidence is based in science and presented as irrefutable fact.

What, though, if the experts are as biased as the rest of us? According to a growing number of studies, forensic evidence is vulnerable to the same subjectivity that plagues other types of evidence, such as eyewitness testimony.

Experts in the field of cognitive bias believe forensic scientists are subconsciously influenced by many factors which can affect the decisions they come to and the evidence they give in court. The main problem is that when an examiner receives a piece of evidence, they are also given details about the crime such as how violent it was, whether there were eye-witnesses or if a suspect has confessed.

Innocent bits of information can have a massive influence on findings, even with DNA and fingerprinting, the gold standards of forensics

Such information can subconsciously prejudice even the most experienced of scientists. Facts as basic as which side the evidence comes from, the prosecution or defence, can influence the findings.


Dr Itiel Dror, a cognitive neuroscientist at University College London, was one of the first people to raise the problem 10 years ago. Since then he has worked with forensic labs and police services around the world, including the FBI and London Metropolitan Police, in an effort to mitigate this bias.

“Rather than looking at the evidence alone, forensic scientists are influenced by expectations and information provided by the context of the case which makes them see things differently and not see things at all,” he says. “So really, we’re contaminating their minds and not enabling them to look at the evidence impartially.”

According to Dror, one of the biggest problems is that forensic evidence is usually treated in court as beyond reproach and frequently not even challenged by the defence. It is so powerful it can force defendants to plead guilty in the hope of a lesser sentence, even if they are innocent.

“Often forensic examiners overstate the evidence and they rarely present the limitations and uncertainty.

“Every science has uncertainties, but they forget their role. They think they’re there to help the prosecution or defence, rather than presenting the evidence of what they’ve done, what they know and what they don’t know. Their evidence comes across as impartial, objective and very strong and it’s taken as that by the jury and the judge. The examiners are playing a game which is not scientific, which is actually anti-scientific.”

In one of Dror’s most famous studies, he took sets of fingerprints which had been examined by forensic scientists five years before and found to be matching. He gave the same prints to the same unsuspecting experts and this time told them they needed to examine them because the FBI had mistakenly identified them as matching.

Four out of the five experts changed their previous conclusions and said the prints did not match. The only thing that changed between the two examinations was the information about the FBI findings and, with it, the clear suggestion that the prints did not match.

Dror and his colleagues have since completed dozens of studies which have reinforced the view that seemingly innocent bits of information can have a massive influence on findings, even with those gold standards of forensics, DNA and fingerprinting.

Juries are rarely told that matching a crime scene DNA sample often comes down to a judgment call. DNA at crime scenes is frequently mixed together with other people’s biological material, meaning that matching it becomes much more complicated and subjective than matching two pristine lab samples.

The same goes for fingerprints. Pairs of prints taken in the calm surrounds of a police station are easy to compare, but criminals are rarely so obliging. Crime scene prints can differ due to elasticity of the skin, the angle the print is left at and the material it is left on. They can also be smeared or mixed with other prints.

“People say the fingerprint doesn’t lie, I say the fingerprint doesn’t talk,” Dror says. “The problem is prints can be very similar and the examiner has to decide if they’re similar enough to come from the same person. That’s where the subjectivity comes in.”

The most infamous recent example of forensic bias is the case of Brandon Mayfield, an American lawyer who converted to Islam and represented clients accused of terrorism-related offences.

Following the Madrid train bombings in 2004, the FBI matched prints taken from a bag of detonators found at the scene to those of Mayfield, despite the fact that he hadn’t left the US in over a decade. The prints were even confirmed as a match by an independent examiner. The FBI maintained the prints were a 100 per cent match, right up until the Spanish authorities arrested the real suspect, an Algerian national.

According to Dror, Mayfield is just the tip of the iceberg. “How big it is under the water we don’t know. With the Mayfield example, we only found out about it because of very special circumstances; because the Spanish police found the real man.”

It is a fairly terrifying prospect, but Dror believes there are ways we can stop it happening. First, ensure forensic examiners receive only the information that is absolutely necessary to do their jobs. A forensic case manager can be put in overall charge of the evidence and given all the details. They could direct what work needs to be done, and by whom, without contaminating the examiners.

Second, there is linear sequential unmasking (LSU). Dror describes this as a system of examining evidence in the order least likely to cause bias. Instead of examining two samples simultaneously and looking for a match, the examiner looks at the sample from the crime scene, categorises it and then moves on to the suspect’s sample and categorises that. Only then are the two samples compared.
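For readers who prefer to see the ordering spelled out step by step, the sketch below models that LSU workflow in Python. It is purely illustrative and not software used by any forensic laboratory; the class and method names (LinearSequentialExamination, record_crime_scene_features and so on) are invented for this example, and the "features" are stood in for by plain strings.

```python
# Illustrative sketch of a linear sequential unmasking (LSU) workflow.
# All names here are hypothetical; real forensic casework is far more involved.

class LinearSequentialExamination:
    """Enforces the LSU ordering: the crime-scene sample is documented
    and locked before the examiner ever records the suspect's sample."""

    def __init__(self):
        self._crime_scene_features = None
        self._crime_scene_locked = False
        self._suspect_features = None

    def record_crime_scene_features(self, features):
        if self._crime_scene_locked:
            raise RuntimeError("Crime-scene record is locked and cannot be revised.")
        self._crime_scene_features = list(features)

    def lock_crime_scene_record(self):
        # Once locked, the crime-scene interpretation cannot be adjusted
        # to fit whatever the suspect's sample later turns out to show.
        self._crime_scene_locked = True

    def record_suspect_features(self, features):
        if not self._crime_scene_locked:
            raise RuntimeError("Document and lock the crime-scene sample first.")
        self._suspect_features = list(features)

    def compare(self):
        if self._suspect_features is None:
            raise RuntimeError("Both samples must be documented before comparison.")
        # A toy comparison: in practice this judgment is far more nuanced.
        return set(self._crime_scene_features) & set(self._suspect_features)


# Example run: the object forces the examiner through the unbiased ordering.
exam = LinearSequentialExamination()
exam.record_crime_scene_features(["whorl pattern", "delta left of core", "scar across ridge"])
exam.lock_crime_scene_record()
exam.record_suspect_features(["whorl pattern", "delta left of core"])
print(exam.compare())
```

The point of the locking step is the same as Dror’s: the examiner commits to an interpretation of the crime-scene material before the suspect’s sample has any chance to pull that interpretation in a particular direction.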

When Dror first started suggesting that forensics were not infallible, he ran into a lot of resistance. One lab director wrote that stripping out the gruesome details of a case would make a forensic examiner’s job too boring. Others sent the doctor hate mail.

So how does Ireland compare? Dr Sheila Willis, the director general of Ireland's Forensic Science Laboratory, says the laboratory is aware of the issue and some measures are in place to mitigate cognitive bias. "It is a subject that any thinking forensic scientist would have given thought to as they aim to produce objective findings," she says. "That said, bias is unavoidable and the important thing is to be conscious of it and put measures in place to reduce its effect."

The lab uses a form of LSU similar to that outlined by Dror. Furthermore, DNA samples are generally produced by one person and examined by another and scientists also cross-check each other’s work.

Willis herself recently led a Europe-wide initiative to standardise the language used when reporting forensic examinations to investigating parties. The aim is to make the reports more precise and to allow uncertainty in the findings to be expressed.

Willis believes making examiners work without context could do more harm than good. “In my opinion, it is more dangerous to produce results in a blind fashion without context.”

Blind testing is “science 101”, according to Dror. “This is done in any scientific domain. In the medical domain, we use placebos, for example, because we know people are affected by bias. Except for some reason it has escaped forensic science for a century and now we’re trying to put it in.”