Bill Gates: The risks of artificial intelligence are real, but manageable
Written by: Bill Gates
Source: Gatesnotes
The risks posed by artificial intelligence can seem overwhelming. What happens to people whose jobs are taken over by smart machines? Will artificial intelligence affect election results? What if a future artificial intelligence decides it no longer needs humans and wants to get rid of us?
These are legitimate questions, and the concerns they raise need to be taken seriously. But there is good reason to believe we can address them: this is not the first time a major innovation has introduced new threats that had to be controlled. We have been here before.
Whether it was the advent of the automobile or the rise of the personal computer and the internet, people have lived through other transformative moments that, despite plenty of upheaval, ended up for the better. Shortly after the first cars hit the road, the first crashes happened. But instead of banning cars, we adopted speed limits, safety standards, driver's license requirements, drunk-driving laws, and other rules of the road.
We are now in the early stages of another profound transformation: the age of artificial intelligence. It is akin to the uncertain era before speed limits and seat belts. AI is changing so quickly that it is not clear what will happen next. We face big questions about how the current technology works, how people will use it maliciously, and how artificial intelligence will change society and us as individuals.
In moments like these, it's natural to feel uneasy. But history shows that it is possible to address the challenges posed by new technologies.
I once wrote an article about how artificial intelligence will completely change our lives. It will help solve problems in health, education, climate change, and more that have seemed intractable in the past. The Gates Foundation has made this a priority, and our CEO Mark Suzman recently shared his thoughts on AI's role in reducing inequality.
I will have more to say about the benefits of AI in the future, but in this post I want to address some of the concerns I hear and read about most often, many of which I share, and explain how I think about them.
One thing that is clear from everything written so far about the risks of AI is that no one has all the answers. Another is that the future of artificial intelligence is not as grim as some people imagine, nor as rosy as others hope. The risks are real, but I am optimistic that they can be managed. As I discuss each concern, I will keep coming back to a few recurring themes.
In this article, I will focus on risks that already exist or soon will. I will not discuss what happens when we develop an AI that can learn any subject or task, unlike today's special-purpose AIs. Whether we reach that point in a decade or a century, society will have profound questions to consider. What if a superintelligent AI sets its own goals? What if those goals conflict with humanity's? Should we build a superintelligence at all?
However, thinking about these longer-term risks should not come at the expense of more immediate risks.
AI-generated deepfakes and misinformation could undermine elections and democracy
The use of technology to spread lies and disinformation is nothing new. People have been doing this through books and leaflets for centuries. This became easier with the advent of word processors, laser printers, email and social networking.
Artificial intelligence has taken the problem of faked text and extended it, so that almost anyone can now create fake audio and video, known as deepfakes. If you get a voice message that sounds like your child saying, "I've been kidnapped, please send $1,000 to this bank account within the next 10 minutes and don't call the police," its emotional impact will be far more powerful than that of an email saying the same thing.
On a larger scale, AI-generated deepfakes could be used to try to influence elections. Of course, it doesn't take sophisticated technology to cast doubt on an election's legitimate winner, but artificial intelligence will make it much easier.
Fake footage of well-known politicians has already surfaced. Imagine that on the morning of an election, a video showing a candidate robbing a bank goes viral. It would be false, but it could take news organizations and the campaign hours to prove it. How many people would see the video and change their vote at the last minute? It could tip the scales, especially in a tight race.
Recently, when OpenAI co-founder Sam Altman testified before a U.S. Senate committee, senators from both parties spoke about AI’s impact on elections and democracy. I hope this topic continues to be on everyone's agenda.
We certainly haven't solved the problem of misinformation and deepfakes. But two things make me cautiously optimistic. One is that people are capable of learning not to take everything at face value. For years, email users fell for scams from supposed Nigerian princes who promised huge rewards in exchange for a credit card number. But eventually, most people learned to look twice at those emails. As the deception got more sophisticated, many of its targets got wiser. We will need to build the same muscle for deepfakes.
The other thing that gives me hope is that AI can help identify deepfakes as well as create them. Intel, for example, has developed a deepfake detector, and the U.S. government agency DARPA is working on technology to identify whether video or audio has been tampered with.
It will be an iterative process: someone finds a way to detect fakery, someone else figures out how to counter it, someone develops countermeasures to that, and so on. It won't be perfect, but we won't be at our wits' end either.
AI will make it easier to attack people and governments
Today, when hackers want to find an exploitable flaw in software, they do it by "brute force" -- writing code that pounds away at potential weaknesses until one gives. It involves going down a lot of dead ends, so it takes time and patience.
Security professionals who want to counter hackers have to do the same thing. Every software patch you install on your phone or laptop represents many hours of that searching.
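As a rough illustration of this brute-force approach, here is a minimal sketch of a random fuzzer in Python. The `parse_record` function is a hypothetical stand-in for the software under test, with a deliberate bug planted in it; real-world fuzzers such as AFL or libFuzzer are far more sophisticated, but the loop below captures the basic idea of hammering a target with inputs until something breaks.

```python
import random
import string

def parse_record(data: str) -> int:
    """Hypothetical function under test: parses 'name:age' records.
    It has a deliberate bug -- it assumes exactly one ':' is present."""
    name, age = data.split(":")  # ValueError if the input doesn't contain exactly one ':'
    return int(age)              # ValueError if the age isn't a number

def random_input(max_len: int = 12) -> str:
    """Generate a random candidate input."""
    chars = string.ascii_letters + string.digits + ":;,"
    return "".join(random.choice(chars) for _ in range(random.randint(0, max_len)))

def fuzz(trials: int = 10_000) -> None:
    """Brute-force loop: throw random inputs at the target until it crashes."""
    for i in range(trials):
        data = random_input()
        try:
            parse_record(data)
        except Exception as exc:  # a crash marks a potential vulnerability
            print(f"trial {i}: input {data!r} crashed with {exc!r}")
            return
    print("no crash found")

if __name__ == "__main__":
    fuzz()
```

Most random inputs crash this toy target almost immediately; against hardened real-world software, the same loop can run for days before a single weakness gives, which is why speeding it up matters so much to both attackers and defenders.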
AI models will accelerate this process by helping hackers write more effective code. They will also be able to use public information about individuals, such as where they work and who their friends are, to craft phishing attacks more advanced than the ones we see today.
The good news is that AI cuts both ways. Security teams in government and the private sector need the latest tools to find and fix vulnerabilities before criminals can exploit them. I hope the software security industry expands the work it is already doing here; it should be a top concern.
This is also why we should not try to temporarily pause progress in artificial intelligence, as some have proposed. Cybercriminals won't stop building new tools. Nor will people who want to use AI to design nuclear weapons or bioterror attacks. The effort to stop them has to continue at the same pace.
There is a related risk at the global level: an arms race in AI that could be used to design and launch cyberattacks against other countries. Every government wants the most powerful technology available so it can deter attacks from its adversaries. This incentive not to let anyone get ahead could spark a race to create ever more dangerous cyberweapons, leaving everyone worse off.
It's a scary idea, but we have history as a guide. As flawed as the world's nuclear nonproliferation regime is, it has prevented the all-out nuclear war that my generation grew up terrified of. Governments should consider creating a global AI agency similar to the International Atomic Energy Agency.
AI will take people's jobs
For the next few years, the main impact of artificial intelligence on work will be to help people do their jobs more efficiently. That is true whether you work in a factory or in an office handling sales calls and accounts payable. Eventually, AI will be good enough at expressing ideas to compose emails and manage your inbox for you. By writing a request in plain English or any other language, you will be able to generate the presentation you need.
As I argued in my February article, rising productivity is good for society. It gives people more time to do other things, at work and at home. And the demand for people who help others -- teaching, caring for patients, supporting the elderly -- will never go away. But some workers will need support and retraining as we make this transition to an AI-driven workplace. That is a job for governments and businesses to manage so that workers are not left behind, without the kind of disruption to people's lives that accompanied the decline of American manufacturing jobs.
Also, keep in mind that this is not the first time a new technology has caused a big shift in the labor market. I don't think AI's impact will be as dramatic as the Industrial Revolution's, but it will certainly be as big as the introduction of the personal computer. Word processing applications didn't do away with office work, but they changed it forever. Employers and employees had to adapt, and they did. The shift caused by AI will be a bumpy transition, but there is every reason to think we can reduce the disruption to people's lives and livelihoods.
AI will inherit our biases and make things up
Hallucinations — when an AI confidently makes claims that simply aren't true — usually happen because the machine doesn't understand your request. Ask an AI to write a short story about a vacation to the moon, and it might give you a very imaginative answer. But ask it to help plan your trip to Tanzania, and it might try to send you to a hotel that doesn't exist.
Another risk of artificial intelligence is that it reflects or even reinforces people's prejudices about certain genders, races, ethnicities, etc.
To understand why hallucinations and biases occur, it helps to know how the most common AI models work today. They are essentially very sophisticated versions of the code that lets your email app predict the next word you're about to type: they scan enormous amounts of text -- in some cases, nearly all the text available online -- and analyze it to find patterns in human language.
When you ask an AI a question, it looks at the words you used and then searches for chunks of text that are often associated with those words. If you write "list the ingredients for pancakes," it might notice that words like "flour," "sugar," "salt," "baking powder," "milk," and "eggs" often appear alongside that phrase. Then, based on what it knows about the order in which those words usually appear, it generates an answer. (AI models that work this way use what is called a transformer; GPT-4 is one such model.)
This process explains why an AI may hallucinate or come across as biased. It has no real context for the question you asked. If you tell an AI that it made a mistake, it might reply, "Sorry, that was a typo." But that is a hallucination too -- it didn't actually type anything. It says that because it has scanned enough text to know that "sorry, that was a typo" is the sort of sentence people write after being corrected.
Likewise, AI models inherit whatever prejudices are baked into the text they are trained on. If a model reads a lot of articles about doctors, and those articles mostly mention male doctors, its answers will assume that most doctors are men.
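To make the mechanism concrete, here is a toy sketch in Python: a bigram model that predicts each next word purely from co-occurrence counts in its training text. It is a drastic simplification of a transformer (real models learn weights over vast corpora rather than counting word pairs), and the tiny corpus is invented for illustration. But it shows both behaviors described above: the output is driven entirely by statistical patterns, and skewed training text produces skewed answers.

```python
from collections import Counter, defaultdict

# Tiny "training corpus". Note the deliberate skew: the doctor is usually "he".
corpus = (
    "the doctor said he was busy . "
    "the doctor said he would call . "
    "the doctor said he was busy . "
    "the nurse said she would call . "
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    return following[word].most_common(1)[0][0]

def generate(start: str, length: int = 6) -> str:
    """Chain predictions together, one word at a time."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(generate("doctor"))
# -> "doctor said he was busy . the"
# The model "assumes" the doctor is male only because its training text
# mentioned male doctors more often. The bias is inherited, not reasoned.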
Although some researchers argue that hallucinations are an inherent problem, I disagree. I'm optimistic that over time, AI models can learn to distinguish fact from fiction. For example, OpenAI has done promising research in this area.
Other groups, including the Alan Turing Institute and the National Institute of Standards and Technology, are also working to address bias. One approach is to build human values and higher-level reasoning into AI. It's similar to the way a self-aware human works: maybe you assume most doctors are male, but you are aware enough of that assumption to know you have to deliberately fight it. Artificial intelligence can operate in a similar way, especially if the models are designed by people from diverse backgrounds.
In the end, everyone using AI needs to be aware of the issue of bias and be an informed user. The papers you ask an AI to draft may be full of bias and factual errors. You need to examine the AI's biases as well as your own.
Students will not learn to write because AI will do it for them
Many teachers worry that AI will upend their work with students. In an age when anyone with an internet connection can use AI to write a respectable first draft of an essay, what's to stop students from turning one in as their own work?
There are already AI tools that can learn to tell whether an essay is written by a human or a computer, so teachers can tell when students are doing their own homework. But some teachers aren't trying to discourage students from using AI in their writing — they're actually encouraging it.
In January, a veteran English teacher named Cherie Shields wrote in Education Week about how she uses ChatGPT in her classroom. It helps her students with everything from getting started on an essay to writing outlines, and it even gives them feedback on their work.
"Teachers must embrace AI technology as another tool that students can use," she wrote. “Just as we once taught students how to do a good Google search, teachers should design clear lessons around how ChatGPT bots can assist with essay writing. Acknowledging the existence of AI and helping students use it could revolutionize the way we teach.” Not really Every teacher has time to learn and use new tools, but educators like Cherie Shields make a good argument that those who do have the time will benefit greatly.
It reminds me of the 1970s and 1980s, when electronic calculators became widespread. Some math teachers worried that students would stop learning basic arithmetic, but others embraced the new technology and focused on the thinking skills behind the arithmetic.
AI can also help with writing and critical thinking. Especially in these early days, while hallucinations and bias are still a problem, educators can have AI generate essays and then fact-check them together with students. Educational nonprofits such as Khan Academy and the OER Project, which I fund, provide teachers and students with free online tools that put a strong emphasis on testing claims. There is no more important skill than knowing how to tell real from fake.
We really need to make sure that educational software helps close the achievement gap, not make it worse. Today's software is primarily geared toward students who are already motivated to learn. It can create a study plan for you, point you to good resources, and test your knowledge. However, it does not yet know how to engage you in subjects that do not interest you yet. This is a problem that developers need to address so that all types of students can benefit from AI.
What's next?
I believe there is more reason than not to be optimistic that we can manage the risks of AI while maximizing its benefits. But we need to move fast.
Governments need to build expertise in artificial intelligence so they can make informed laws and regulations in response to the new technology. They will need to grapple with misinformation and deepfakes, security threats, changes to the job market, and the impact on education. Just one example: the law needs to be clear about which uses of deepfakes are legal and about how deepfakes should be labeled, so everyone understands when something they are seeing or hearing is fake.
Political leaders need to be able to engage in informed, thoughtful dialogue with constituents. They also need to decide how much to cooperate with other countries on these issues, rather than go it alone.
In the private sector, AI companies need to work safely and responsibly. This includes protecting people's privacy, ensuring AI models reflect fundamental human values, minimizing bias to benefit as many people as possible, and preventing technology from being exploited by criminals or terrorists. Companies across many sectors of the economy need to help their employees transition to an AI-centric workplace so no one is left behind. Customers should always know they are interacting with AI and not humans.
Finally, I encourage everyone to follow the development of artificial intelligence as closely as they can. It is the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks. The benefits of AI will be enormous, and the best reason to believe we can manage the risks is that we have done it before.