The U.S. Army recently announced that it is developing unmanned aerial vehicles, better known as drones, capable of identifying and firing upon targets they select using artificial intelligence (AI). Once complete, the drones, which are currently controlled by humans, would decide who to kill with little to no human involvement, opening the door to mass killings with minimal accountability and transparency.
The project, titled “Automatic Targeting Recognition of Personnel and Vehicles from an Unmanned Aerial System Using Learning Algorithms,” would involve partnerships with both private and public research institutions to help develop image-based AI targeting systems. The end result is expected to be drones that use neural networks and related AI techniques, creating a deadly aerial weapon capable of acting as judge, jury and executioner without human input.
While the use of armed drones has been a fixture of both covert and overt U.S. military action abroad, the Army’s description of the project forebodes the use of the deadly and controversial technology within the United States. It states that “one of the desired characteristics of the system is to use the flexibility afforded by the learning algorithms to allow for the quick adjustment of the target set or the taxonomy of the target set DRCI categories or classes. This could allow for the expansion of the system into a Homeland Security environment.”
Another implication is that technology companies involved in creating or maintaining the drones’ AI systems, along with the engineers and scientists who develop aspects of those systems, could be labeled valid military targets for their role in helping to build the machines. As The Conversation notes:
Companies like Google, its employees or its systems, could become liable to attack from an enemy state. For example, if Google’s Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.
In addition, the Department of Defense has reportedly been developing an AI system for automatically identifying targets, one that is set to be populated by massive data sets drawn from blogs, websites, and public social media posts on sites such as Twitter, Facebook and Instagram. The system would use that data to make predictions, much like the predictive-policing AI already developed by major Pentagon contractor Palantir.
The reported system, which is planned to be used to control the Pentagon’s increasing investments in robotic soldiers and tanks, will also seek to “predict human responses to our actions.” As journalist Nafeez Ahmed has noted, the ultimate idea, as revealed by the Department of Defense’s own documents, is to identify potential targets, i.e., persons of interest and their social connections, in real time by using social media as “intelligence.” The Army’s upcoming work on automated drones will be just one part of this larger system, which has global and unprecedented implications for the future of the U.S. military and its actions both domestically and abroad.