Elon Musk Leaves OpenAI Over Conflict of Interest
Written by Sue Gee   
Friday, 23 February 2018

Elon Musk has resigned from the board of OpenAI, the non-profit organization he co-founded in 2015. He will continue to donate to and advise the organization, which recently co-authored a major report on the threats posed by artificial intelligence, a topic on which Musk has forceful views.


The news of Musk's departure from the OpenAI board came in a post on the OpenAI blog which explained:

As Tesla continues to become more focused on AI, this will eliminate a potential future conflict for Elon.

Conflict for Musk would seem inevitable on two fronts.

One is, as stated, Tesla's own advances in AI as part of its self-driving Autopilot project. The latest Autopilot software, which has been subject to delay because, according to Musk, it was "significantly more complicated than anticipated", features a new architecture powered by Tesla's own neural net and computer vision technology. The effort is being spearheaded by Tesla's Director of AI, Andrej Karpathy, who was one of the initial group of seven researchers at OpenAI but was hired by Tesla in June 2017.

The other is that Musk continues to regard AI as the

“biggest existential threat”

to humanity, one that poses

"vastly more risk than North Korea"

while OpenAI itself has, since its formation, open sourced OpenAI Gym for reinforcement learning, made significant progress in robotics and developed AI that beat some of the world's best human players of the popular video game Dota 2.
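
For readers who haven't come across it, OpenAI Gym's contribution was to standardise a very small interface for reinforcement learning experiments: an agent observes an environment, takes an action and receives a reward, over and over. The snippet below is a minimal, illustrative sketch of that reset/step loop using the classic Gym API; the CartPole environment and the random policy are chosen purely for illustration and are not taken from anything OpenAI has published about its own agents.

import gym

# Create a standard benchmark environment (CartPole is purely illustrative).
env = gym.make("CartPole-v1")
observation = env.reset()  # start a new episode

for _ in range(200):
    # A random action stands in for whatever policy a learning agent would supply.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)  # advance one timestep
    if done:
        observation = env.reset()  # episode over: pole fell or time limit reached

env.close()

The point of the interface is that any algorithm written against reset() and step() can be tested against hundreds of environments without modification, which is what made Gym a convenient common benchmark.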

While applauding the fact that OpenAI's bot was the first to beat the world's best players in a competitive eSport, Musk also warned that such increasingly powerful artificial intelligence would eventually need to be reined in for our own safety, tweeting:

"Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too."

While noting that Musk would continue to fund its efforts, the OpenAI blog post also announced a number of new donors, including video game developer Gabe Newell, Skype founder Jaan Tallinn, and the former US and Canadian Olympians Ashton Eaton and Brianne Theisen-Eaton, who have retired from sport, moved to San Francisco and started a tech company. OpenAI said it was broadening its base of funders in order to ramp up investments in:

“people and the compute resources necessary to make consequential breakthroughs in artificial intelligence.”  

The post also stated:

"in the coming months you can also expect us to articulate the principles with which we’ll be approaching the next phase of OpenAI, and the policy areas in which we wish to see changes to ensure AI benefits all of humanity."

The news of Musk's resignation from the OpenAI board comes within days of the publication of The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, a report co-authored by 26 experts on the security implications of emerging technologies, including three members of OpenAI.

The timing is probably not significant, since the 100-page report grew out of a two-day event held in February 2017 under the auspices of the Future of Humanity Institute, University of Oxford and the Centre for the Study of Existential Risk. Other contributors to the report came from the Electronic Frontier Foundation, the Center for a New American Security and Stanford University - 14 institutions in all, spanning academia, civil society, and industry.

The report's executive summary opens with:

Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats.

In the OpenAI blog post "Preparing for Malicious Uses of AI", contributing authors Jack Clark, Michael Page and Dario Amodei pull out the following recommendations from the report as ones that companies, research organizations, individual practitioners, and governments can take to ensure a safer world:

  • Acknowledge AI’s dual-use nature: AI is a technology capable of immensely positive and immensely negative applications. We should take steps as a community to better evaluate research projects for perversion by malicious actors, and engage with policymakers to understand areas of particular sensitivity. As we write in the paper: “Surveillance tools can be used to catch terrorists or oppress ordinary citizens. Information content filters could be used to bury fake news or manipulate public opinion. Governments and powerful private actors will have access to many of these AI tools and could use them for public good or harm.” Some potential solutions to these problems include pre-publication risk assessments for certain bits of research, selectively sharing some types of research with a significant safety or security component among a small set of trusted organizations, and exploring how to embed norms into the scientific community that are responsive to dual-use concerns.
  • Learn from cybersecurity: The computer security community has developed various practices that are relevant to AI researchers, which we should consider implementing in our own research. These range from “red teaming” by intentionally trying to break or subvert systems, to investing in tech forecasting to spot threats before they arrive, to conventions around the confidential reporting of vulnerabilities discovered in AI systems, and so on.
  • Broaden the discussion: AI is going to alter the global threat landscape, so we should involve a broader cross-section of society in discussions. Parties could include those involved in civil society, national security experts, businesses, ethicists, the general public, and other researchers.

The blog post commits to begin engaging with a wider audience on these issues, which seems entirely compatible with the aim of ensuring that AI benefits all of humanity. So no conflict of interest there.

 


More Information

OpenAI Supporters

Global AI experts sound the alarm

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

Related Articles

AI Goes Open Source To The Tune Of $1 Billion 

OpenAI Bot Triumphant Playing Dota 2

OpenAI Universe - New Way of Training AIs

OpenAI Gym Gives Reinforcement Learning A Work Out

 


Last Updated ( Friday, 23 February 2018 )