Superintelligent AI may arrive within seven years; OpenAI plans to invest heavily to keep it from going out of control

**Source:** Financial Association

Edited by Huang Junzhi

ChatGPT developer OpenAI said on Wednesday (the 5th) that it plans to devote significant resources to a new research team dedicated to ensuring its artificial intelligence (AI) remains safe for humans, with the eventual goal of having AI supervise itself.

"The immense power of a superintelligence could ... lead to human disempowerment, or even extinction," wrote OpenAI co-founder Ilya Sutskever and Jan Leike, head of the AI Consistency team tasked with improving system security, in a blog post.

"Currently, we do not have a solution for manipulating or controlling a potentially superintelligent AI and preventing it from getting out of hand," they wrote.

AI alignment refers to ensuring that an AI system's behavior is consistent with its designers' interests and expectations.

20% of computing power to be devoted to keeping AI under control

They predict that superintelligent AI (that is, systems smarter than humans) could arrive within this decade (by 2030), and that controlling it will require better techniques than humans have today, so breakthroughs are needed in so-called alignment research, which focuses on ensuring that artificial intelligence remains beneficial to humans.

According to the pair, with Microsoft's backing, **OpenAI will devote 20% of its computing power over the next four years to solving the problem of AI going out of control.** In addition, the company is forming a new team, called the Superalignment team, to organize this work.

The team's goal is to build a "human-level" automated alignment researcher that can then be scaled up with large amounts of computing power. OpenAI says this means it will first train AI systems using human feedback, then train AI systems to assist human evaluation, and finally train AI systems to carry out alignment research themselves.
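The "human feedback" step OpenAI refers to is commonly implemented by first training a reward model on human preference comparisons, as in reinforcement learning from human feedback (RLHF). The sketch below is purely illustrative (it is not OpenAI's code; the model, features, and data are hypothetical stand-ins) and shows the core of that step: learning to score outputs so that human-preferred responses receive higher rewards.

```python
# Hypothetical minimal sketch: training a reward model from pairwise human
# preferences, the first stage of a feedback-based alignment pipeline.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "responses" represented as fixed-size feature vectors (stand-in for
# real model outputs / embeddings).
FEATURE_DIM = 16
reward_model = nn.Sequential(nn.Linear(FEATURE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_loss(reward_preferred, reward_rejected):
    # Bradley-Terry style objective: the human-preferred response should
    # receive a higher score than the rejected one.
    return -torch.nn.functional.logsigmoid(reward_preferred - reward_rejected).mean()

for step in range(200):
    # Synthetic stand-in for human-labeled comparison pairs.
    preferred = torch.randn(64, FEATURE_DIM) + 0.5   # responses humans preferred
    rejected = torch.randn(64, FEATURE_DIM) - 0.5    # responses humans rejected

    loss = preference_loss(reward_model(preferred), reward_model(rejected))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model can then score new outputs; in RLHF it would be
# used to fine-tune the AI system with reinforcement learning.
```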

Experts voice doubts

However, the plan drew skepticism from experts as soon as it was announced. Connor Leahy, an AI safety advocate, said OpenAI's plan is fundamentally flawed because an early, "human-level" AI could spin out of control and wreak havoc before it could ever be used to solve AI safety problems.

"You have to solve the alignment problem before you can build human-level intelligence; otherwise, by default, you can't control it. I personally don't think this is a particularly good or safe plan," he said in an interview.

The potential dangers of AI have long been a top concern for AI researchers and the public. In April, a group of AI industry leaders and experts signed an open letter calling for a moratorium on training AI systems more powerful than OpenAI's new GPT-4 model for at least six months, citing their potential risks to society and humanity.

A recent poll found that more than two-thirds of Americans are concerned about the possible negative effects of AI, and 61% believe it could threaten human civilization.
