Would it surprise you to learn the federal government has been spending millions of dollars to develop a voice stress-based credibility-assessment technology to vet foreign individuals seeking entry into the United States from places like Syria? Hardly. But it might surprise you to learn the money has been spent despite the fact that such technology already exists and has proven itself over and over again in places like Afghanistan, Iraq and Guantanamo Bay.
During an exhaustive four-year investigation of the federal government’s use of credibility-assessment technologies, including the polygraph, I found numerous individuals — most of whom worked with or for government agencies — eager to disparage the idea that one can detect deception by measuring stress in the human voice. Toward the end of my investigation, I learned about a government-funded effort at the University of Arizona to develop a voice stress-based technology despite the fact that such a technology already exists and has proven itself to the point that more state and local law enforcement agencies use it than use the polygraph.
Slightly modified with the addition of links in place of footnotes for stand-alone publication, details of my brief electronic exchanges with a man involved in the aforementioned research at the U of A appear below as excerpted from my second nonfiction book, The Clapper Memo:
If, as polygraph loyalists have claimed for decades, it is not possible to detect stress in the human voice, then why have so many taxpayer dollars been dedicated to pairing the study of the human voice with credibility-assessment technologies?
Seeking an answer to that question, I contacted Jay F. Nunamaker, Ph.D., lead researcher at the National Center for Border Security and Immigration (a.k.a. “BORDERS”) at the University of Arizona in Tucson. In reply to my inquiry on August 6, 2012, Dr. Nunamaker shared details about the project.
He began by explaining that the program has received funding from several sources, including — but not limited to — the U.S. Department of Homeland Security (DHS), the Intelligence Advanced Research Projects Activity (IARPA), the National Science Foundation (NSF), and no fewer than three branches of the U.S. military.
Next, he described the history of the project.
“We started down this path to develop a non-intrusive, non-invasive next-generation polygraph about 10 years ago with funding from the Polygraph Institute at Ft. Jackson,” he wrote.
If, per Dr. Nunamaker, the effort began 10 years ago at the Polygraph Institute, that means it got its start at about the same time the 2003 National Research Council report, “The Polygraph and Lie Detection,” was published — a report which found, among other things, that the majority of 57 research studies touted by the American Polygraph Association were “unreliable, unscientific and biased.”
In a message dated August 31, 2012, Dr. Nunamaker offered more details about his research.
“The UA team has created an Automated Virtual Agent for Truth Assessment in Real-Time (AVATAR) that uses an embodied conversational agent–an animated human face backed by biometric sensors and intelligent agents–to conduct interviews,” he explained. “It is currently being used at the Nogales, Mexico-U.S. border and is designed to detect changes in arousal, behavior and cognitive effort that may signal stress, risk or credibility.”
In the same message, Dr. Nunamaker pointed me to a then-recent article in which the AVATAR system was described as one that uses “speech recognition and voice-anomaly-detection software” to flag certain exchanges “as questionable and worthy of follow-up interrogation.”
Those exchanges, according to the article, “are color coded green, yellow or red to highlight the potential severity of questionable responses.” Sound familiar?
Further into the article, reporter Larry Greenemeier relied upon Aaron Elkins, a post-doctoral researcher who helped develop the system, to provide an explanation of how anomaly detection is employed by AVATAR.
After stating that it is based on vocal characteristics, Elkins explained a number of ways in which a person’s voice might tip off the program. One of his explanations was particularly interesting.
“The kiosk’s speech recognition software monitors the content of an interviewee’s answers and can flag a response indicating when, for example, a person acknowledges having a criminal record.”
Elkins clarified his views further during an interview eight days later.
“I will stress that is a very large leap to say that they’re lying…or what they’re saying is untrue — but what it does is draw attention that there is something going on,” he said. At the end of that statement, reporter Som Lisaius added seven words, “precisely the intent behind any credibility assessment,” with which I’m certain every Computer Voice Stress Analyzer® examiner I’ve interviewed during the past four years would agree.
Even to the most impartial observer, Elkins’ explanations confirm beyond a shadow of a doubt that BORDERS researchers believe stress can be detected in the vocal utterances of individuals facing real-life jeopardy.
NOTE: Though I tried twice between August 2012 and February 2013 to find out from officials at the BORDERS program how much funding they had received from the U.S. Department of Homeland Security and all other sources since the inception of the program, I received no replies to my inquiries.
To learn more about why federal government agencies are funding this kind of research despite the fact that a polygraph replacement already exists and has proven itself in a wide range of applications, one must understand that a technological “turf war” is to blame — one that has been raging silently for more than 40 years. Details of that turf war can be found inside The Clapper Memo.
For links to other articles of interest as well as photos and commentary, join me on Facebook and Twitter. Please show your support by buying my books and encouraging your friends and loved ones to do the same. Thanks in advance!