Washington, D.C. – The chair of the House Armed Services Subcommittee on Emerging Threats and Capabilities, Rep. Elise Stefanik (R-NY), introduced artificial intelligence (AI) legislation on Wednesday, arguing that the accelerating AI revolution requires a national security commission to study America’s national security needs with respect to intelligent machines.

Stefanik is seeking to add the proposed AI legislation to this year’s national defense authorization bill. Defense News reports the legislation would “develop a commission to review advances in AI, identify the nation’s AI needs and make recommendations to organize the federal government for the threat.”

“Artificial Intelligence is a constantly developing technology that will likely touch every aspect of our lives,” Stefanik said in a statement. “AI has already produced many things in use today, including web search, object recognition in photos or videos, prediction models, self-driving cars, and automated robotics. It is critical to our national security but also to the development of our broader economy that the United States becomes the global leader in further developing this cutting edge technology.”

The Defense News report also notes:

Stefanik’s proposed national security commission on artificial intelligence would address and identify America’s national security needs with respect to AI. It would also look at ways to preserve America’s technological edge, foster AI investments and research, and establish data standards and open incentives to share data…

The bill also aims to air ethical considerations related to AI and machine learning, and identify and understand the risks of AI advances under the law of armed conflict and international humanitarian law.

Former Deputy Defense Secretary Bob Work, of the Center for a New American Security (CNAS), has previously spoken about establishing an AI Center of Excellence, but believes more top-level government leadership is needed to establish a “national AI agency.”

“To have a national response you have to have a national push from above,” Work told Defense News. “In my view it must start from the White House.”

“One of the things that the national AI agency should do is to address this problem: How do we address IP? How do we address combining all of the strengths of our DoD labs and our technology sector for the betterment of the country? This is across all things, in medicine, in finance, in transportation. And yes, hopefully, in defense,” Work said at the launch of a new CNAS artificial intelligence working group on March 15.

“We’re going to need something, help from Congress, either a caucus or someone who takes this as a leadership position,” Work cautioned. “To have a national response, you have to have a national push from above.”

While speaking at the South by Southwest (SXSW) conference and festival on March 11, inventor Elon Musk warned of the dangers of the rapidly developing field, declaring that “AI is far more dangerous than nukes.” “I’m very close to the cutting edge in AI and it scares the hell out of me,” Musk said.

“Narrow AI is not a species-level risk. It will result in dislocation… lost jobs… better weaponry and that sort of thing. It is not a fundamental, species-level risk, but digital super-intelligence is,” Musk told the audience in Austin.

“I think the danger of AI is much bigger than the danger of nuclear warheads by a lot,” Musk said. “Nobody would suggest we allow the world to just build nuclear warheads if they want, that would be insane. And mark my words: AI is far more dangerous than nukes.”
