Artificial intelligence is creating Internet spam: Low-quality AI-generated websites are growing rapidly, supported by advertising
Source: The Paper
Reporter: Fang Xiao
"Websites create smooth, feature-rich platforms and open the door for anyone to join. They put boxes in front of us, we fill those boxes with words and pictures, and people come to see what's going on in those boxes." Content. These companies chase scale because once enough people gather anywhere, there’s usually a way to make money off them. But AI changes those assumptions.”
Even if the web is littered with AI crap, it may prove beneficial, spurring the development of better-funded platforms.
News-site rating service NewsGuard released its misinformation monitoring results for June 2023, finding that 141 brands are providing advertising revenue to low-quality, artificial intelligence (AI)-generated websites, thereby supporting the growth of these unreliable sites.
These sites, which operate with little human oversight, generate thousands of articles a day on average, including misinformation, particularly misleading medical and health information.
The technology outlet The Verge said that artificial intelligence is killing the old web while the new web struggles to be born: "Generative AI models are changing the economics of the web, making it cheaper to produce low-quality content. We are only just beginning to see the impact of these changes."
217 unreliable AI-generated news and information sites found
The NewsGuard analysis found that the ads appearing on these AI-generated content sites seemed to be placed programmatically, meaning the companies did not choose to run their ads on these sites; the systems serving the ads placed them there automatically. Most of the ads were delivered through Google's advertising tools.
NewsGuard defines unreliable AI-generated news and information (UAIN) sites as sites that operate with little or no human oversight and publish articles written mostly or entirely by bots. In the past month alone, NewsGuard analysts have raised the count on the newly launched UAIN site tracker from 49 to 217.
While many advertisers and their ad agencies maintain “exclusion lists” of “brand-unsafe” sites, these lists are often not kept up to date and apparently have not kept pace with the proliferation of UAIN sites.
During May and June 2023, NewsGuard analysts identified 393 programmatic ads (programmatic advertising uses technology to automate the buying and selling of digital ad space) from 141 major brands appearing on 55 of the 217 UAIN sites NewsGuard has identified. The ads were served to NewsGuard analysts browsing from four countries: the United States, Germany, France and Italy.
NewsGuard did not name the companies whose ads it found, which included a range of blue-chip advertisers: six large banks and financial-services companies, four luxury department stores, three leading sportswear brands, three appliance makers, two of the world's largest consumer technology companies, two global e-commerce companies, two top US broadband providers, three streaming services owned by US broadcast networks, a Silicon Valley digital platform and a large European supermarket chain.
Programmatic advertising uses algorithms and an advanced auction process to deliver highly targeted digital ads directly to individual users rather than specific websites. This means that the ad effectively "follows" users as they browse the Internet. Because the process is so opaque, brands may not know that they are funding the spread of UAIN sites, as ads are bought through third parties and involve multiple intermediaries.
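To make that mechanism concrete, the toy sketch below illustrates why user-keyed targeting combined with a stale exclusion list can land a brand's ad on a newly created UAIN-style site. It is a minimal illustration under assumed names: the domains, the bid check and the exclusion list are hypothetical, not any real ad exchange's API or NewsGuard's methodology.

```python
# Illustrative sketch only: a toy model of programmatic placement with a
# brand "exclusion list". Site names and the matching logic are hypothetical.

STALE_EXCLUSION_LIST = {"known-spam-site.example"}   # last updated long ago

def place_ad(user_interests: set[str], page: dict) -> bool:
    """Decide whether a brand's ad is served on the page a user is visiting.

    Targeting is keyed to the *user*, not the site, so the ad follows the
    user onto whatever page they open, unless the domain is explicitly
    excluded.
    """
    if page["domain"] in STALE_EXCLUSION_LIST:
        return False                                  # blocked by the exclusion list
    return bool(user_interests & page["topics"])      # matched to the user's interests


# A newly created AI content farm is not yet on the stale list, so the ad runs
# and the brand unknowingly funds the site.
visited = {"domain": "fresh-ai-content-farm.example", "topics": {"health"}}
print(place_ad({"health", "travel"}, visited))        # True
```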
Fake authors spreading questionable health information
All 393 ads NewsGuard found appeared directly next to articles containing AI-generated misinformation.
Some UAIN sites carrying ads for major brands appear to be using artificial intelligence tools to rewrite articles from mainstream news outlets. For example, an article published by AlaskaCommons.com appears to be an AI-rewritten version of an article in the US edition of the British tabloid The Sun, with similar images and wording. Articles on AlaskaCommons.com frequently list the author as Ingrid Taylor, a byline credited with 4,364 articles since the beginning of the year, including 108 on June 15, 2023 alone.
Some sites generate an average of more than 1,200 articles per day with little visible human editorial oversight. By comparison, The New York Times typically publishes about 150 articles per day, according to April 2022 data.
Most AI-generated websites are low quality but do not spread misinformation. However, NewsGuard found that MedicalOutline.com promotes unproven and potentially harmful natural health remedies, with headlines such as "Can lemons cure skin allergies?", "What are 5 natural remedies for ADHD?" and "How you can prevent cancer naturally".
It is very simple for UAIN sites to quickly monetize their content. On the Google AdSense login page, Google says it's easy for websites to earn programmatic advertising revenue: "All you have to do is put the AdSense code on your website and it will start working immediately."
Since it first began tracking UAIN sites in May 2023, NewsGuard has identified about 25 new sites per week. In early May its report covered only 49 sites that "appeared to be almost entirely written by artificial intelligence software," and the total number of UAIN sites is likely far higher than the 217 NewsGuard currently identifies.
NewsGuard classifies a website as unreliable AI-generated news and information only if it meets all four of the following criteria: first, there is clear evidence that a significant portion of the site's content is produced by AI; second, there is strong evidence that the content is published without human oversight; third, the site is presented in a way that would lead ordinary readers to believe its content was produced by human writers or journalists; and fourth, the site does not clearly disclose that its content is produced by artificial intelligence.
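As a reading aid, here is a minimal sketch of that "all four criteria must hold" rule. The dataclass and field names are illustrative assumptions for this article, not NewsGuard's actual tooling.

```python
# Illustrative sketch only: encoding the four published UAIN criteria as an
# all-of check. Names are hypothetical, not NewsGuard's internal system.
from dataclasses import dataclass

@dataclass
class SiteAssessment:
    substantial_ai_content: bool      # clear evidence much of the content is AI-produced
    no_human_oversight: bool          # strong evidence content is published unedited
    presented_as_human_written: bool  # ordinary readers would assume human authors
    no_ai_disclosure: bool            # no clear statement that content is AI-generated

def is_uain(site: SiteAssessment) -> bool:
    """A site is flagged only when every one of the four criteria holds."""
    return all((
        site.substantial_ai_content,
        site.no_human_oversight,
        site.presented_as_human_written,
        site.no_ai_disclosure,
    ))

# A site that openly discloses its use of AI fails the fourth criterion
# and is therefore not counted, however automated it is.
print(is_uain(SiteAssessment(True, True, True, no_ai_disclosure=False)))  # False
```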
Not necessarily a bad thing
In recent months, several media outlets have reported that AI-generated content is polluting the internet. On June 26 local time, James Vincent, a senior reporter at The Verge, published a strongly worded commentary. He writes: "ChatGPT is being used to generate sites full of crap. Etsy (the handmade-goods e-commerce site) is flooded with 'AI-generated crap.' Chatbots cite each other in an ooze of misinformation. LinkedIn is using AI to stimulate its tired users. Snapchat and Instagram want bots to talk to you when your friends are away... The Internet Archive is fighting data scrapers, and AI is tearing Wikipedia apart. The old web is dying, and the new web is struggling to be born."
Of course, Vincent argues, the web has been dying for years, strangled by apps that siphon traffic away from websites and by algorithms that reward "shortening attention spans." But in 2023 it is dying again, and a new catalyst is at work: artificial intelligence.
He draws an analogy: "Websites create smooth, feature-rich platforms and open the door for anyone to join. They put boxes in front of us, we fill those boxes with words and pictures, and people come to see what's in those boxes. These companies chase scale because once enough people gather anywhere, there's usually a way to make money off them. But AI changes those assumptions."
That is because, given funding and computing power, AI systems, especially today's popular generative models, scale effortlessly. They produce vast amounts of text and images, and soon music and video as well. Their output could crowd out or outperform the news, information and entertainment platforms people rely on today. Yet these systems are often of poor quality. "These models are trained on layers of data laid down in the last internet era, which they reproduce imperfectly. Companies scrape information from the open web and distill it into machine-generated content that is cheap to produce but less reliable. This product then competes for attention with the platforms and people that came before it," Vincent pointed out.
The most successful sites tend to be those that use scale to their advantage, whether by multiplying social connections or product selection, or by sorting the vast swarm of information that makes up the internet itself. But that scale has relied on masses of humans to create the underlying value, and when it comes to sheer volume of production, humans clearly cannot beat AI.
But Vincent also notes at the end that this is not necessarily a bad thing. "Some would say it's just the way the world works, pointing out that the web itself killed what came before, and often for the better. Print encyclopedias, for example, are all but extinct, but I prefer the breadth and accessibility of Wikipedia to the heft and assurance of Encyclopaedia Britannica. As with AI-generated writing, there are ways to improve it too, from better citation capabilities to more human oversight. Besides, even if the web is littered with AI junk, that might prove beneficial, spurring the development of better-funded platforms. For example, if Google keeps giving you junk search results, you might be more inclined to pay for sources you trust and go to them directly."
At the end of the day, the changes AI is currently causing are just the latest in a long struggle in the history of the web. Essentially, it's a battle over information—about who makes it, how to get it, and who gets paid.