Mountain View, CA — At least a dozen Google employees have resigned in protest over the company's work with the Department of Defense on Project Maven, a controversial military pilot program for which Google supplies artificial intelligence. The resignations come after thousands of employees signed a letter last month asking the company to cancel the Pentagon contract and institute a policy against working for the military.
“We can no longer ignore our industry’s and our technologies’ harmful biases, large-scale breaches of trust, and lack of ethical safeguards. These are life and death stakes,” the petition read.
[Related: US Army Developing Drones With AI Targeting]
Project Maven, developed to scan drone footage, identify targets, and classify images of objects and people, was launched in April 2017. According to a Pentagon memo, it aims to “augment or automate Processing, Exploitation and Dissemination (PED) for unmanned aerial vehicles (UAVs) in support of the Defeat-ISIS campaign” in order to “reduce the human factors burden of [full motion video] analysis, increase actionable intelligence, and enhance military decision-making.”
More than 1,000 academics and researchers penned an open letter supporting the Google employees and calling on the company to cease work on Project Maven. The letter touches on the implications of Google’s work with the Pentagon:
With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage.
While the reports on Project Maven currently emphasize the role of human analysts, these technologies are poised to become a basis for automated target recognition and autonomous weapon systems. As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems. According to Defense One, the DoD already plans to install image analysis technologies on-board the drones themselves, including armed drones. We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control. If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection – no technology has higher stakes – than algorithms meant to target and kill at a distance and without public accountability.
The DoD contracts under consideration by Google, and similar contracts already in place at Microsoft and Amazon, signal a dangerous alliance between the private tech industry, currently in possession of vast quantities of sensitive personal data collected from people across the globe, and one country’s military. They also signal a failure to engage with global civil society and diplomatic institutions that have already highlighted the ethical stakes of these technologies.
A few of the Google employees who chose to resign in protest spoke to Gizmodo anonymously about the reasoning behind their decisions.
“At some point, I realized I could not in good faith recommend anyone join Google, knowing what I knew. I realized if I can’t recommend people join here, then why am I still here?” one resigning Google employee told Gizmodo.
“I tried to remind myself right that Google’s decisions are not my decisions. I’m not personally responsible for everything they do. But I do feel responsibility when I see something that I should escalate it,” another said.
“Actions speak louder than words, and that’s a standard I hold myself to as well. I wasn’t happy just voicing my concerns internally. The strongest possible statement I could take against this was to leave,” a resigning employee added.