Superintelligent AI May Arrive Within Seven Years; OpenAI Plans to Spend Heavily to Keep It from Going Out of Control
**Source:** Cailian Press
Edited by Huang Junzhi
ChatGPT developer OpenAI said on Wednesday (July 5) that it plans to invest significant resources and create a new research team to ensure its artificial intelligence (AI) remains safe for humans, with the eventual goal of using AI to supervise itself.
"Currently, we do not have a solution for manipulating or controlling a potentially superintelligent AI and preventing it from getting out of hand," they wrote.
20% of Computing Power Devoted to Keeping AI Under Control
They predict that superintelligent AI (that is, systems smarter than humans) may arrive within this decade (by 2030), and that controlling it will require better techniques than humanity currently possesses. That, in turn, demands breakthroughs in so-called AI alignment research, which focuses on ensuring that AI remains beneficial to humans.
According to the company, with backing from Microsoft, **OpenAI will devote 20% of its computing power over the next four years to solving the problem of AI going out of control.** In addition, it is assembling a new team, called the Superalignment team, to organize this work.
Experts Question the Plan
However, the plan drew skepticism from experts as soon as it was announced. Connor Leahy, an AI safety advocate, said OpenAI's plan is fundamentally flawed because a rudimentary AI that reaches "human level" could spin out of control and wreak havoc before it could be put to work solving AI safety problems.
"You have to solve the alignment problem before you build human-level intelligence; otherwise you can't control it by default. I personally don't think this is a particularly good or safe plan," he said in an interview.
The potential dangers of AI have long been a top concern for AI researchers and the public. In April, a group of AI industry leaders and experts signed an open letter calling for a moratorium on training AI systems more powerful than OpenAI's new GPT-4 model for at least six months, citing their potential risks to society and humanity.
A recent poll found that more than two-thirds of Americans are concerned about the possible negative impacts of AI, and 61% believe it could threaten human civilization.