Tag Archives: Artificial Intelligence

Google Employees Resign In Protest Of Pentagon AI Project

Mountain View, CA — At least a dozen Google employees have resigned in protest of the company's collaboration with the Department of Defense, under which Google supplies artificial intelligence for a controversial military pilot program known as Project Maven. The resignations come after thousands of employees signed a letter last month asking the company to cancel the Pentagon contract and institute a policy against working for the military.

“We can no longer ignore our industry’s and our technologies’ harmful biases, large-scale breaches of trust, and lack of ethical safeguards. These are life and death stakes,” the petition read.

[Related: US Army Developing Drones With AI Targeting]

Project Maven, developed to scan drone footage, identify targets, and classify images of objects and people, was launched in April 2017. According to a Pentagon memo, it aims to “augment or automate Processing, Exploitation and Dissemination (PED) for unmanned aerial vehicles (UAVs) in support of the Defeat-ISIS campaign” in order to “reduce the human factors burden of [full motion video] analysis, increase actionable intelligence, and enhance military decision-making.”
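
To make concrete what “automating Processing, Exploitation and Dissemination” of full-motion video involves, here is a minimal, purely illustrative sketch of a frame-by-frame object-detection loop built from off-the-shelf components (OpenCV for frame capture, a pretrained torchvision detector). The file name and confidence threshold are hypothetical placeholders; this is a generic example of the kind of technique described, not Project Maven's actual software.

```python
# Illustrative only: generic object detection over video frames, not Project Maven.
import cv2                      # frame capture and decoding
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Off-the-shelf detector pretrained on the COCO dataset (80 everyday object classes).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

cap = cv2.VideoCapture("drone_footage.mp4")   # hypothetical input file
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        detections = model([to_tensor(rgb)])[0]
    # Each detection is a bounding box, a class label index, and a confidence score.
    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        if score > 0.8:                        # arbitrary confidence cutoff
            print(int(label), float(score), box.tolist())
cap.release()
```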

More than 1,000 academics and researchers penned an open letter supporting the Google employees and calling on the company to cease work on Project Maven. The letter touches on the implications of Google’s work with the Pentagon:

With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage.

While the reports on Project Maven currently emphasize the role of human analysts, these technologies are poised to become a basis for automated target recognition and autonomous weapon systems. As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems. According to Defense One, the DoD already plans to install image analysis technologies on-board the drones themselves, including armed drones. We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control. If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection – no technology has higher stakes – than algorithms meant to target and kill at a distance and without public accountability.

The DoD contracts under consideration by Google, and similar contracts already in place at Microsoft and Amazon, signal a dangerous alliance between the private tech industry, currently in possession of vast quantities of sensitive personal data collected from people across the globe, and one country’s military. They also signal a failure to engage with global civil society and diplomatic institutions that have already highlighted the ethical stakes of these technologies.

A few of the Google employees who chose to resign in protest spoke to Gizmodo anonymously about the reasoning behind their decision.

“At some point, I realized I could not in good faith recommend anyone join Google, knowing what I knew. I realized if I can’t recommend people join here, then why am I still here?” one resigning Google employee told Gizmodo.

“I tried to remind myself right that Google’s decisions are not my decisions. I’m not personally responsible for everything they do. But I do feel responsibility when I see something that I should escalate it,” another said.

“Actions speak louder than words, and that’s a standard I hold myself to as well. I wasn’t happy just voicing my concerns internally. The strongest possible statement I could take against this was to leave,” a resigning employee added.

US Army Developing Drones With AI Targeting

The U.S. Army recently announced that it is working on developing unmanned aerial vehicles, better known as drones, that will be capable of identifying and subsequently firing upon targets they select using artificial intelligence (AI). Once complete, the drones, which are currently controlled by humans, would decide whom to kill with little to no human involvement, opening the door to mass killings with minimal accountability and transparency.

The project, titled “Automatic Targeting Recognition of Personnel and Vehicles from an Unmanned Aerial System Using Learning Algorithms,” would form partnerships with private and public research institutions to help develop image-based AI targeting systems. The end result is expected to be drones that use neural-network-based AI, creating a deadly aerial weapon capable of acting as judge, jury, and executioner without human input.

While the use of armed drones has been a fixture of both covert and overt U.S. military action abroad, the Army’s description of the project forebodes the use of the deadly and controversial technology within the United States. It states that “one of the desired characteristics of the system is to use the flexibility afforded by the learning algorithms to allow for the quick adjustment of the target set or the taxonomy of the target set DRCI categories or classes. This could allow for the expansion of the system into a Homeland Security environment.”
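
The “flexibility afforded by the learning algorithms” that the project description highlights corresponds, in practice, to ordinary transfer learning: the final classification layer of a pretrained network is swapped out and retrained against a new list of target classes. The sketch below shows that generic pattern with an entirely hypothetical class list; it is an illustration of the technique, not the Army's actual system.

```python
# Illustrative transfer-learning sketch: retarget a pretrained image classifier
# to a new (hypothetical) set of classes by replacing and retraining only its head.
import torch
import torch.nn as nn
import torchvision

NEW_TARGET_CLASSES = ["sedan", "truck", "person", "clutter"]   # hypothetical taxonomy

model = torchvision.models.resnet18(weights="DEFAULT")
for param in model.parameters():
    param.requires_grad = False                # freeze the pretrained feature extractor

# Swap the final layer so the network predicts over the new class list.
model.fc = nn.Linear(model.fc.in_features, len(NEW_TARGET_CLASSES))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step; random tensors stand in for labeled imagery.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(NEW_TARGET_CLASSES), (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```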

Another implication is that technology companies involved in creating or maintaining the AI systems for the drones, as well as the engineers and scientists who build aspects of those systems, could be labeled valid military targets for their role in helping to build the machines. As The Conversation notes:

Companies like Google, its employees or its systems, could become liable to attack from an enemy state. For example, if Google’s Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.

In addition, the Department of Defense has reportedly been working on an AI system for automatically identifying targets, to be populated by massive data sets that include blogs, websites, and public social media posts from sites like Twitter, Facebook, and Instagram. The system would use that data to carry out predictive analysis, much like the predictive-policing AI already developed by major Pentagon contractor Palantir.

The reported system, which is planned to be used to control the Pentagon’s increasing investments in robotic soldiers and tanks, will also seek to “predict human responses to our actions.” As journalist Nafeez Ahmed has noted, the ultimate idea, as revealed by the Department of Defense’s own documents, is to identify potential targets, i.e., persons of interest and their social connections, in real time by using social media as “intelligence.” The Army’s upcoming work on automated drones will be just one part of this larger system, which has global and unprecedented implications for the future of the U.S. military and its actions both domestically and abroad.

New Bill Aims to Create National Security Commission on AI

Washington, D.C. – The chair of the House Armed Services Subcommittee on Emerging Threats and Capabilities, Rep. Elise Stefanik (R-NY), introduced artificial intelligence (AI) legislation on Wednesday, warning that the ongoing revolution in AI will require a national security commission to study America’s national security needs with respect to intelligent machines.

Stefanik is seeking to add the proposed AI legislation to this year’s national defense authorization bill. Defense News reports the legislation would “develop a commission to review advances in AI, identify the nation’s AI needs and make recommendations to organize the federal government for the threat.”

“Artificial Intelligence is a constantly developing technology that will likely touch every aspect of our lives,” Stefanik said in a statement. “AI has already produced many things in use today, including web search, object recognition in photos or videos, prediction models, self-driving cars, and automated robotics. It is critical to our national security but also to the development of our broader economy that the United States becomes the global leader in further developing this cutting edge technology.”

The Defense News report also notes:

Stefanik’s proposed national security commission on artificial intelligence would address and identify America’s national security needs with respect to AI. It would also look at ways to preserve America’s technological edge, foster AI investments and research, and establish data standards and open incentives to share data…

The bill also aims to air ethical considerations related to AI and machine learning, and identify and understand the risks of AI advances under the law of armed conflict and international humanitarian law.

Former Deputy Defense Secretary Bob Work, of the Center for a New American Security, has previously spoken about establishing an AI Center of Excellence, but also believes more government leadership is needed from the top in establishing a “national AI agency.”

“To have a national response you have to have a national push from above,” Work told Defense News. “In my view it must start from the White House.”

“One of the things that the national AI agency should do is to address this problem: How do we address IP? How do we address combining all of the strengths of our DoD labs and our technology sector for the betterment of the country? This is across all things, in medicine, in finance, in transportation. And yes, hopefully, in defense,” Work said at the launch of a new CNAS artificial intelligence working group on March 15.

“We’re going to need something, help from Congress, either a caucus or someone who takes this as a leadership position,” Work cautioned. “To have a national response, you have to have a national push from above.”


Elon Musk Warns “AI is Far More Dangerous than Nukes” at SXSW

Austin, TX – Speaking at the South by Southwest (SXSW) conference and festival on March 11, billionaire polymath pioneer Elon Musk warned that “AI is far more dangerous than nukes,” describing the dangers posed by the rapidly developing field of artificial intelligence (AI). “I’m very close to the cutting edge in AI and it scares the hell out of me,” Musk told the crowd.

“Narrow AI is not a species-level risk. It will result in dislocation… lost jobs… better weaponry and that sort of thing. It is not a fundamental, species-level risk, but digital super-intelligence is,” Musk told the audience in Austin.

“I think the danger of AI is much bigger than the danger of nuclear warheads by a lot,” Musk said. “Nobody would suggest we allow the world to just build nuclear warheads if they want, that would be insane. And mark my words: AI is far more dangerous than nukes.”

Musk made clear that he believes rapid advances in artificial intelligence are outpacing any potential regulation of the technology, creating a dangerous gap and, in his view, an urgent need for oversight.

“I’m not normally an advocate of regulation and oversight,” Musk said. “There needs to be a public body that has insight and oversight to confirm that everyone is developing AI safely.”

Musk highlighted the case of Google’s AlphaGo, AI-powered software that plays the ancient Chinese board game Go, reputedly the world’s most demanding strategy game, as evidence of the exponential learning capacity of machines. In 2017, AlphaGo clinched a decisive victory over the world’s top-ranked Go player.

While some AI experts dismiss such fears, downplaying potential threats posed by artificial intelligence to humanity, Musk believes these “experts” are falling victim to their own delusions of intellectual superiority over machines, calling their thought process “fundamentally flawed.”

“The biggest issue I have with AI experts… is that they think they’re smarter than they are. This tends to plague smart people,” says Musk. “They’re defining themselves by their intelligence… and they don’t like the idea that a machine could be smarter than them, so they discount the idea. And that’s fundamentally flawed.”

During his SXSW talk, Musk also discussed the possibility of a new Dark Ages and the role of a Mars colony as a “hedge” against civilizational collapse on Earth.

“I think it’s unlikely that we won’t have another world war again… there probably will be at some point,” Musk said. “[So] there’s likely to be another Dark Ages,” he added. “Particularly if there’s a Third World War… [And] we want to make sure that there’s enough of a seed of human civilization somewhere else so that we can bring civilization back and shorten the length of the dark ages.”

1,000 Experts Sign Letter Warning Of ‘Artificial Intelligence Arms Race’

By Jonah Bennett

More than 1,000 experts in the artificial intelligence industry and other fields have put their names down on a letter pushing for an outright ban on research involving autonomous weapons systems.

These experts are gathering at the International Joint Conference on Artificial Intelligence on Tuesday, where the signed letter, crafted by the Future of Life Institute, is set to be presented, The Guardian reports.

The warning doesn’t include a call for prohibiting technology like cruise missiles or drones, as those require humans to make targeting decisions behind the scenes. Instead, the authors of the letter argue that the central problem is that AI weapons will likely be feasible in the near future and would count as the third revolution in warfare.

Part of the problem with AI weapons, the letter notes, is that while the technology would reduce casualties suffered by the operators, it would simultaneously lower the barriers to military action, thus prompting a net increase in war. Moreover, if one country starts researching AI weapons, other countries will likely be forced to follow, which would create a dangerous arms race.

Earlier this year, Elon Musk, founder of Tesla Motors, gave the Future of Life Institute a $10 million donation to support research into keeping AI safe for humanity. Musk, one of the signatories, has previously argued that AI is “potentially more dangerous than nukes.” The letter also features support from Stephen Hawking and Apple co-founder Steve Wozniak.

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” the authors write. “Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.”

The letter points out that prohibition isn’t a hopeless task, since similar bans have been achieved for chemical and biological weapons. Not all countries are convinced of the threat AI weapons pose: at a UN conference in April, the UK pushed back against a proposed ban.


Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience. For licensing opportunities of our original content, please contact licensing@dailycallernewsfoundation.org.