The Dangerous Junk Science Behind 'Vocal Risk Assessment' Technology

Can the United States government prevent crimes before they even happen? In George Orwell’s classic fiction, such sweeping surveillance power belonged to the regime of Big Brother, the fictitious authoritarian leader who commanded the Thought Police in their crusade against wrong-thinkers.

As journalist Ava Kofman of The Intercept acknowledges, “the idea may seem ludicrous — like something out of the science fiction of ‘1984’ in which Big Brother detects any unconscious look ‘that carried with it the suggestion of abnormality’ — yet some companies [and governments] have recently begun to answer this ‘thought crime’ question in the affirmative.” Kofman’s investigation centers on AC Global Risk, a recent start-up that claims to “determine your level of ‘risk’ as an employee or an asylum-seeker” based on an undisclosed methodology that somehow analyzes vocal data.

This is called the Remote Risk Assessment (RRA), a roughly 10-minute screening in which subjects answer “automated, yes-or-no interview questions” while the system “measures the characteristics” of their voice and produces an evaluation report. In recent interviews, the company’s CEO, Alex Martin, has touted the program as a “highly accurate, scalable, cost-effective” form of vetting that can “forever change for the better how human risk is measured.”
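
AC Global Risk has never published that methodology, so any reconstruction is guesswork. Still, for readers unfamiliar with how such “voice analytics” products are typically assembled, the purely hypothetical sketch below (using the open-source librosa library; every feature choice, weight, and cutoff is my own assumption, not the company’s) shows how easily a handful of acoustic statistics can be dressed up as a single “risk score,” which is precisely what the experts object to: the number looks authoritative, yet nothing in it is tied to deceit or danger.

```python
# Purely hypothetical illustration: AC Global Risk discloses no methodology,
# so the features, weights, and cutoff below are arbitrary assumptions made
# only to show what a "voice characteristics -> risk score" pipeline looks like.
import numpy as np
import librosa  # open-source audio-analysis library


def acoustic_features(wav_path: str) -> np.ndarray:
    """Summarize one recorded answer as a few coarse voice statistics."""
    y, sr = librosa.load(wav_path, sr=16000)               # mono waveform
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # timbre summary
    zcr = librosa.feature.zero_crossing_rate(y)            # rough noisiness proxy
    rms = librosa.feature.rms(y=y)                         # loudness envelope
    # Mean and variance of each track give a fixed-length feature vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1),
                           [zcr.mean(), zcr.var(), rms.mean(), rms.var()]])


def risk_score(wav_paths: list[str], weights: np.ndarray, bias: float) -> float:
    """Collapse per-question features into one number with a logistic model.
    Meaningful weights would require validated labels for "risk"; there is no
    public evidence that any such ground truth exists."""
    feats = np.mean([acoustic_features(p) for p in wav_paths], axis=0)
    return float(1.0 / (1.0 + np.exp(-(feats @ weights + bias))))

# A single arbitrary cutoff would then sort interviewees into "risk" tiers.
# The output looks precise, but nothing above actually measures deception.
```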

For the betting man, this wager seems too good to be true… and that’s because it is, according to the academics Kofman cites. The U.S. government, however, doesn’t seem to care.

As Kofman explains, the fundamental “junk science” behind RRA technology plays second fiddle to the government’s appetite for free-flowing military-industrial-complex dollars, an appetite fed by corporate-state sugar daddies with a vested interest in helping their hawkish friends in high places.

The firm’s advisory board is packed with neoconservative figures instrumental in the Bush-era Iraq War, including former Defense Secretary Robert Gates, former National Security Advisor Stephen Hadley, and former Secretary of State Condoleezza Rice. Kofman also found that the company has “advertised contracts with the U.S. Special Operations Command in Afghanistan, the Ugandan Wildlife Authority, and the security teams at Palantir, Apple, Facebook, and Google, among others.”

While President Donald Trump has shown deep disdain for the Bush legacy, once declaring the Iraq War a “big fat mistake” that “deserved impeachment,” his administration’s policy speaks louder than his words ever could.

“In response to President Trump’s repeated calls for the ‘extreme vetting’ of immigrants,” Kofman writes, “the company has pitched itself as the ultimate solution for ‘the monumental refugee crisis the U.S. and other countries are currently experiencing.’”

The cited proposal offers the U.S. Department of Homeland Security (DHS) the chance to use the company’s technology, which has not itself been vetted for accuracy, in exchange for state investment. The offer appears to remain under consideration despite its flaws. Kofman’s experts, noted for their work in vocal analytics, algorithmic bias, and machine learning, “find the trend toward digital polygraph tests troubling, pointing to the faulty methodology” that the elusive AC Global Risk declines to disclose.

“There is some information in dynamic changes in the voice and they’re detecting it. This is perfectly plausible,” explained Alex Todorov, a Princeton University psychologist who studies social perception. “But the question is, How unambiguous is this information at detecting the category of people they’ve defined as risky? There is always ambiguity in these signals.”
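
Todorov’s point about ambiguity can be made concrete with a toy simulation. The figures below are invented solely for illustration (the one-percent base rate, the amount of overlap, and the threshold are all my own assumptions): even when the vocal signal carries some real information, heavily overlapping distributions plus a rare target category mean that most people flagged as “high risk” are nothing of the sort.

```python
# Toy illustration of Todorov's point: an informative but ambiguous signal
# still yields mostly false alarms. Every number here is invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
base_rate = 0.01                      # assume 1% of screened people are truly "risky"
risky = rng.random(n) < base_rate

# Overlapping signal distributions: "risky" people score higher only on average.
signal = rng.normal(loc=np.where(risky, 1.0, 0.0), scale=1.0)

threshold = 2.0                       # flag only the most extreme scores
flagged = signal > threshold

true_pos = np.sum(flagged & risky)
false_pos = np.sum(flagged & ~risky)
precision = true_pos / max(true_pos + false_pos, 1)
recall = true_pos / max(np.sum(risky), 1)

print(f"flagged: {flagged.sum()}  precision: {precision:.1%}  recall: {recall:.1%}")
# With this much overlap and a 1% base rate, the vast majority of "high risk"
# flags land on people who are not risky, no matter where the threshold sits.
```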

The government seems to be under the impression that RRA technology is akin to the Automated Virtual Agent for Truth Assessments in Real-Time (AVATAR), an A.I. program used to “measure changes in the voice, posture, and facial gestures of travellers” in order to flag those who “appeared untruthful or seemed to pose a potential risk” when crossing borders. In 2012, DHS tested this similar technology with volunteers at the U.S.-Mexico border, and the European Union found it could “reduce the workload and subjective errors caused by human agents.”

This raises the question: why doesn’t the government simply improve its current technology instead of outsourcing the effort to these unchecked bodies? Dare to suggest it might be to financially benefit the Bush-era old guard, and you’re suddenly persona non grata, dismissed as fake news.

Sure, Secretary Rice was quick to virtue-signal during the campaign, saying “Enough! Donald Trump should not be President” and calling on him to withdraw because he lacked the “dignity and stature to run for the highest office,” yet her own business partners seem quite enthusiastic to brown-nose for his military money. Rice’s own withdrawal from such a hawkish deal would speak louder than any Facebook post.

Experts fear that Customs and Border Protection agents, “already using information about how someone speaks or looks as a pretext to search individuals in the 100-mile border zone, or to deny individuals entry to the U.S.,” would naturally use this untested technology to perpetuate the same “biases” and “pervasive” conduct the government claims it wants to avoid.

“They’re defining risk as self-evident, as though it’s a universal quality,” said Joseph Pugliese, an Australian academic whose work focuses on biometric discrimination. “It assumes that people already know what risk is, whereas of course the question of who defines the parameters of risk and what constitutes those is politically loaded. When they say they are triaging for risk, there is a self-evident notion that they have an objective purchase on the signs that constitute ‘criminal intent,’” Pugliese continued, “but we don’t know what actual signs would constitute these criminal predictors.”

“It’s bogus bullshit,” said Björn Schuller, a University of Augsburg professor who works on vocal emotion detection. “From an ethical point of view, it’s very dubious and shady to give the impression that recognizing deception from only the voice can be done with any accuracy. Anyone who says they can do this should themselves be seen as a risk.”