We all know that artificial intelligence, in its many forms, is the wave of the future. Certain industries – trucking and retail are two of the ripest for AI – will be all but completely automated in the not-so-distant future. But a circulating Telegraph report that ‘robocops’ of sorts will be integrated into the London police force brings to light ethical and technical issues that many, including Elon Musk and Stephen Hawking, have long warned about.
While predictions of machines overtaking man seem a bit far-fetched, or at least premature, warnings about the use of artificial intelligence should be heeded. It is likely that the widespread implementation of AI will leave many workers structurally boxed out of the workforce: McKinsey & Company estimated that 45% of workplace activity could be automated, should current technologies be tailored to do so. It will take some time to perfect such bot technologies, as Facebook Messenger’s 70% failure rate exemplified, but automation is coming. And the technology remains fallible in other ways: Microsoft’s Tay chatbot, which ‘learns’ modes of communication and lines of thought through interaction with human users, was shut down after it began unknowingly spouting ‘offensive’ and ‘racist’ rhetoric.
Offensive words are far from Isaac Asimov’s darkest vision of robotic takeover. But the episode shows the fallibility of current technology, and it raises questions that should not be taken lightly.
How deeply do we want to ingrain artificial intelligence, especially systems used for surveillance and policing, in our societies? Should a line be drawn when it comes to issues as critical and potentially life-threatening as policing?
Undoubtedly, the use of robots in policing can help save lives, as was on display in Dallas in July 2016. As NPR details, police used a bomb robot to detonate a device that killed a suspected sniper who had taken aim at 12 officers during a rally, killing five of them. The Dallas Police Department said it believed the use of the robot likely prevented further injury to officers, who had engaged in a five-hour standoff before sending in the bomb robot.
Policing has already entered a new era in which data collection and analysis are being used to ‘prevent crime.’ New York was a testing ground for a Microsoft-created Domain Awareness System, or DAS. And while in theory it could be of great use, Fast Company highlights how it could also be seen as an infringement on privacy and, potentially, civil rights.
‘Although DAS is officially being touted as an anti-terrorism solution, it will also give the NYPD access to technologies that–depending on the individual’s perspectives–veer on science fiction or Big Brother to combat street crime.
According to publicly available documents, the system will collect and archive data from thousands of NYPD- and private-operated CCTV cameras in New York City, integrate license plate readers, and instantly compare data from multiple non-NYPD intelligence databases. Facial recognition technology is not utilized and only public areas will be monitored, officials say. Monitoring will take place 24 hours a day, seven days a week at a specialized location in Lower Manhattan.’
It is not only the police who will have access to ‘incidental data’ that can be used ‘for a legitimate law enforcement or public safety purpose.’ Private stakeholders will also serve as staffers in the command center where such video and other information is stored and processed; those stakeholders reportedly include the Federal Reserve, the Bank of New York, Goldman Sachs, Pfizer, and Citigroup.
For those cynical about the motives and agendas of the world’s most powerful organizations and corporations, this seems like more than a breach of privacy. It seems like a conflict of interest to commingle video used for ‘anti-terrorism’ and ‘public safety’ purposes with the private sector. Officials say facial recognition technology is not being used for any purpose, but those with a healthy dose of skepticism may choose not to believe such a claim.
Bill Gates, whose company was in charge of putting together such technology, tells us not to worry about artificial intelligence. Surely, Microsoft’s role in integrating technology with policing has something to do with such a stance. Maybe he’s being straight. Who’s to say?
According to the Telegraph, there are legitimate concerns surrounding the expansion of AI and robotics, especially in policing. As the report itself admits, the London model for using artificial intelligence in policing has some obvious flaws.
The bots’ primary use would be to ‘answer 999 calls, detect crimes and identify offenders.’ Market Armor details how the ‘Durham Constabulary is also planning to use AI for deciding whether to keep suspects in custody.’ They add that, per the report, AI could be used to assist ‘investigations by “joining the dots” in police databases, the risk assessment of offenders, forensic analysis of devices, transcribing and analysis of CCTV and surveillance, security checks and the automation of many administrative tasks.’
Detainment determinations and risk assessment, along with noted racial biases and a potential inability to reason effectively with humans, seem to be the most alarming aspects of the AI as proposed. The bigger concern for most, though, will be the growing feeling that we are being watched 24/7 regardless of our relative criminality.
To most, these changes reek of police-state measures designed to collect as much data as possible and monitor the population’s every movement. And, at the very least, it seems creepy. The Minority Report and I, Robot metaphors are easy to make, but it is the humans who control these monitoring systems who are perhaps most to be feared.
A) Can artificial intelligence systems be properly regulated, something even proponents admit is critically necessary as the technology spreads, particularly into policing?
B) Will the commingling of private enterprise, artificial intelligence, and policing be used only for noble purposes?
These are fair questions to ask, and they should be answered unequivocally before plans for robo-policing are put into action. But, as has been reported, the plans are already underway, and the answers to these questions remain opaque at best.