Opinion | The Double-Edged Sword of Artificial General Intelligence: Controlling Development and Avoiding Crisis
Author: Hu Yi
Source: The Paper
Recently, Sébastien Bubeck, head of the Machine Learning Theory Group at Microsoft Research Redmond, together with Yuanzhi Li, a winner of a 2023 Sloan Research Fellowship, published a paper that attracted wide attention. The paper, titled "Sparks of Artificial General Intelligence: Early Experiments with GPT-4," runs to 154 pages. Some curious readers discovered from the LaTeX source code that the original title was actually "First Contact with AGI".
The publication of this paper marks an important milestone in artificial intelligence research, offering valuable insight into the development and application of artificial general intelligence (AGI). At the same time, the original title, "First Contact with AGI", underscores how important and forward-looking the exploration of AGI is.
I patiently read a translated version of the paper. To be honest, I only half understood it. Let me first state its core proposition: GPT-4 exhibits a form of general intelligence, showing sparks of artificial general intelligence. This is manifested in its core mental capabilities (such as reasoning, creativity, and deduction), the range of topics on which it has acquired expertise (such as literature, medicine, and coding), and the variety of tasks it can perform (such as playing games, using tools, and explaining itself).
After reading it, I revisited my earlier article "The Future Can Be Expected | Artificial Intelligence Painting: Let Everyone Become an Artist" and asked myself: was it a bit narrow to file large models such as ChatGPT under AI-generated content (AIGC)? As the Microsoft Research paper argues, GPT-4 is not just AIGC; it looks more like a prototype of artificial general intelligence.
Before explaining what AGI means, let me introduce readers of The Paper's "The Future Can Be Expected" column to three terms: ANI, AGI, and ASI.
Artificial Intelligence (AI) is usually divided into three levels:
① Weak artificial intelligence (Artificial Narrow Intelligence, ANI);
② Artificial General Intelligence (AGI);
③ Artificial Superintelligence (ASI).
Next, I will briefly introduce the differences and development of these three levels.
① Weak artificial intelligence (ANI):
Weak AI is by far the most common form of AI. It focuses on performing a single task or solving problems in a specific domain, such as image recognition, speech recognition, or machine translation. Such systems may outperform humans at particular tasks, but they work only within narrow limits and cannot handle problems they were not designed to solve. Many smart products and services on the market today, such as virtual assistants (Siri, Microsoft Xiaoice, etc.), smart speakers (Tmall Genie, Xiaoai Speaker, etc.), and AlphaGo, fall into this category. The limitation of weak artificial intelligence is that it lacks comprehensive understanding and judgment, and performs well only on specific tasks.
With the continuous development of big data, algorithms, and computing power, weak artificial intelligence is gradually permeating every area of daily life. In finance, healthcare, education, entertainment, and other fields, we have already seen many successful applications.
② General Artificial Intelligence (AGI):
General artificial intelligence, also known as strong AI, refers to artificial intelligence that matches human intelligence and can perform all the intellectual behaviors of a normal human. This means an AGI can learn, understand, adapt, and solve problems across domains, just as humans do. Unlike ANI, AGI can independently take on a wide variety of tasks rather than being limited to one specific field. It is generally agreed that AGI has not yet been achieved, though many technology companies and scientists are working hard to get closer to that goal.
③ Artificial Superintelligence (ASI):
Artificial superintelligence refers to an AI system that far surpasses human intelligence in every field. It could not only complete any task a human can, but also far exceed humans in creativity, decision-making ability, and learning speed. The emergence of ASI might trigger unprecedented technological breakthroughs and social changes, and solve hard problems that humans cannot.
However, ASI also brings a range of potential risks. It could erode human value and dignity, invite the abuse and misuse of artificial intelligence, and might even lead to AI turning against us.
We used to believe that artificial intelligence would pass through a long and complicated process of development, from weak AI to general AI to superintelligence, and that along the way we would have plenty of time to prepare, from laws and regulations down to each individual's psychological readiness. Recently, however, I have a strong feeling that we may be only a few steps away from general artificial intelligence; it may take 20 years or less, and if someone says the process will take only 5-10 years, I wouldn't completely rule that out.
OpenAI wrote in "Planning for AGI and Beyond": "AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity."
However, that document emphasizes a "gradual transition" rather than dwelling on AGI's raw capabilities: it aims to "give people, policymakers, and institutions time to understand what's happening, to personally experience the benefits and downsides of these systems, to adapt our economy, and to put regulation in place."
In my view, the message between the lines is that the technology leading to AGI is already in hand, but OpenAI is deliberately slowing the pace so that human society has time to adapt. They intend to balance technological progress against social readiness, allowing more time for legal, ethical, and social discussion, and taking necessary measures to deal with the challenges that may arise.
In the Analects of Confucius, in the chapter where the Ji clan plans to attack Zhuanyu, there is a line: "the tiger and the rhinoceros have escaped from their cage." Now all kinds of GPTs have, like tigers, escaped from their cages. As Yuval Noah Harari, author of Homo Deus: A Brief History of Tomorrow, put it: artificial intelligence is mastering language with an ability that exceeds the average human level; by mastering language, it already has the capacity to form intimate relationships with hundreds of millions of people at scale, and it holds a key that can break into the systems of human civilization. He further warned that the biggest difference between nuclear weapons and artificial intelligence is that nuclear weapons cannot create more powerful nuclear weapons, but AI can produce more powerful AI, so we need to act quickly before it gets out of hand.
On March 22, the Future of Life Institute issued an open letter to society titled "Pause Giant AI Experiments: An Open Letter", calling on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. Landon Klein, director of US policy at the Future of Life Institute, said: "We see the current moment as akin to the beginning of the nuclear age..." The letter has been signed by more than a thousand people, including Elon Musk (founder of Tesla), Yoshua Bengio (2018 Turing Award winner), and other well-known figures.
If enough people realize that we are about to enter an era of "AI nuclear proliferation", then we really do need to explore one possibility: establishing an international body, analogous to the International Atomic Energy Agency, to supervise all AI companies based on indicators such as the number of GPUs they use and the energy they consume, with systems that exceed capability thresholds subject to audit. Through an organization like this, we could work together to ensure that AI technologies benefit humanity rather than harm it.
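To make this concrete, here is a minimal sketch in Python of how such threshold-based screening might work. Everything in it is assumed for illustration: the indicator set, the `TrainingRunReport` structure, and especially the threshold values are invented and do not come from any existing regulation or proposal.

```python
# A toy sketch of IAEA-style screening of reported AI training runs.
# All indicator names and threshold values below are hypothetical,
# chosen only to illustrate the "exceed a threshold -> audit" idea.

from dataclasses import dataclass

# Hypothetical capability thresholds (not from any real rulebook).
GPU_THRESHOLD = 10_000          # accelerators used in one training run
ENERGY_THRESHOLD_MWH = 1_000    # energy consumed by the run, in MWh
FLOP_THRESHOLD = 1e25           # total training compute, in FLOPs


@dataclass
class TrainingRunReport:
    """What a company might be required to report for each large run."""
    company: str
    gpu_count: int
    energy_mwh: float
    training_flops: float


def requires_audit(report: TrainingRunReport) -> bool:
    """Flag the run for audit if any single indicator exceeds its threshold."""
    return (
        report.gpu_count > GPU_THRESHOLD
        or report.energy_mwh > ENERGY_THRESHOLD_MWH
        or report.training_flops > FLOP_THRESHOLD
    )


# Example: a frontier-scale run that would be flagged for audit.
run = TrainingRunReport("ExampleLab", gpu_count=25_000,
                        energy_mwh=5_000, training_flops=3e25)
print(requires_audit(run))  # True
```

The comparison itself is trivial; the hard part of any real regime would be agreeing internationally on which indicators must be reported, how they are verified, and where the thresholds sit.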
Some scientists argue that as we advance AI research and applications, we must prevent AI systems' capabilities from exceeding human control. They have offered a string of metaphors for the catastrophe they see unfolding: "a 10-year-old trying to play chess against Stockfish 15", "the 11th century trying to fight the 21st century", "Australopithecus trying to fight Homo sapiens". These scientists want us to stop imagining AGI as "an inanimate thinker living on the Internet" and instead picture an entire alien civilization that thinks a million times faster than humans, initially confined to computers.
Other scientists, however, are optimistic. In their view, AI research and development should continue, driving scientific progress and technological innovation. They offer a counter-analogy: imagine that when the automobile was born, a coachman had proposed suspending all drivers from driving for six months. Looking back now, wouldn't that be a mantis trying to stop a chariot? They believe that a transparent and accountable research approach can address the underlying problems and keep AI technologies controllable, and that it is precisely through continuous experimentation and practice that we can better understand and respond to the challenges AI may bring.
Recently, the World Health Organization (WHO) issued a warning about the use of artificial intelligence in public healthcare, pointing out that relying on AI-generated data for decision-making carries risks of bias and misuse. In a statement, the WHO said it is critical to assess the risks of using generative large language models (LLMs) such as ChatGPT in order to protect and promote human well-being, and emphasized that steps are needed to ensure accuracy, reliability, and impartiality in the use of these technologies, so as to protect the public interest and advance medicine.
Many countries have already moved to regulate artificial intelligence. On March 31, Italy's data protection authority announced a temporary ban on ChatGPT, effective immediately. Meanwhile, EU officials are working on a new draft law, the Artificial Intelligence Act (AI Act), which would, among other things, prohibit certain AI services and lay down related legal norms.
The U.S. Department of Commerce issued a notice soliciting public comment on, among other things, whether new AI models that pose a risk of harm should be reviewed before release. It also pledged to crack down on harmful AI products that violate civil rights and consumer protection laws.
On May 16, OpenAI CEO and co-founder Sam Altman testified before the US Congress for the first time about the potential dangers of artificial intelligence. He acknowledged that as AI advances, people are worried and anxious about how it will change the way we live. He argued that government intervention can prevent AI from "wild self-replication and self-infiltration", and proposed creating an entirely new regulatory body that would impose the necessary safeguards and issue licenses for AI systems, with the power to revoke them.
At the hearing, Altman was asked what he feared most about AI's potential. He did not elaborate, saying only that "if this technology goes wrong, it can go quite wrong" and could "cause significant harm to the world".
Before this hearing, China's regulatory moves in artificial intelligence had also drawn wide attention. On April 11, to promote the healthy development and standardized application of generative AI, and in accordance with the Cybersecurity Law of the People's Republic of China and other laws and regulations, the Cyberspace Administration of China released the draft "Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment)". The draft makes clear the state's support and encouragement for the generative AI industry, for example: "The state supports independent innovation, promotion, application, and international cooperation in basic technologies such as AI algorithms and frameworks, and encourages the priority adoption of safe and reliable software, tools, computing, and data resources."
The draft requires that content generated by generative AI reflect core socialist values and must not contain content that subverts state power, overthrows the socialist system, incites secession, undermines national unity, promotes terrorism or extremism, promotes ethnic hatred or discrimination, or contains violence, obscenity and pornography, false information, or content that may disrupt economic and social order. It also requires providers to file security assessments, take measures to prevent discrimination, and respect privacy rights.
The measures these countries are taking remind us that the current development of artificial intelligence brings great opportunities and great challenges. We urgently need clear ethical guidelines and regulations to ensure that AI technologies are used properly and transparently. We face a series of important questions: How do we ensure data privacy and security? How do we deal with algorithmic bias and unfairness? How do we ensure that AI decision-making is transparent and explainable? These questions need to be answered through clear regulations and institutions.
As I write this, my thoughts jump, unbidden, to the first stanza of the poem "The Long Season" from the recent hit Chinese drama of the same name:
Snap your fingers, he said
Let's snap our fingers
Distant things will be shattered
The people in front of me don't know it yet
Here I will make a bold prediction: when 2024 arrives and we look back to choose 2023's word of the year, ChatGPT will be among the top ten buzzwords, and may even become the word of the year or the "person" of the year.
Let's take the word "AI" apart. If A stands for Angel, then I can stand for Iblis, the devil. Today, artificial intelligence is developing at an astonishing pace, and the "angel" and the "devil" in it live in a striking symbiosis. Facing this reality, we need to act toward one goal: to enjoy a long, golden harvest from artificial intelligence, rather than stumble unprepared into a cold winter.
(The author, Hu Yi, is a big-data practitioner who likes to imagine the future.)