Going all out to build custom AWS chips, Amazon is chasing Microsoft and Google in generative AI
In two small rooms in a nondescript office building in Austin, Texas, a handful of Amazon employees are designing two types of microchips for training and accelerating generative artificial intelligence. The two custom chips, code-named Inferentia and Trainium, offer Amazon Web Services customers an alternative to Nvidia graphics processors for training large language models, which are becoming harder and more expensive to source.
"The whole world wants more chips for generative AI, whether it's graphics processing units or Amazon's own designs," AWS CEO Adam Selipsky said in a June interview. "I think we're better placed than any other company in the world to give our customers this capability that everyone wants."
Other companies, however, have moved faster and spent more to capitalize on the AI boom. When OpenAI launched ChatGPT last November, Microsoft drew wide attention for hosting the explosive AI chatbot; it has reportedly invested $13 billion in OpenAI. Microsoft quickly added generative AI models to its own products, integrating them into Bing in February.
That same month, Google launched its own large language model, Bard, and later invested $300 million in OpenAI competitor Anthropic.
It was not until April of this year that Amazon announced its own large language model, Titan, and at the same time launched a service called Bedrock to help developers use generative artificial intelligence to enhance software capabilities.
"Amazon isn't used to chasing markets, it's used to creating them," said Chirag Dekate, a Gartner vice president and analyst. "I think for the first time in a long time, they're finding themselves at a disadvantage and are now trying to catch up."
Meta also recently released its own large language model, Llama 2, an open-source ChatGPT competitor that is now available for testing on Microsoft's Azure public cloud.
Chips represent "true differentiation"
In the long run, Amazon's custom chips could give it an edge in generative artificial intelligence, Dekate said. "I think the real differentiation is the technical capabilities they have, because Microsoft doesn't have Trainium or Inferentia," he explained.
Back in 2013, AWS quietly began producing custom silicon with a piece of specialized hardware called Nitro. Amazon says Nitro is now its highest-volume AWS chip: there is at least one in every AWS server, and more than 20 million are in use.
In 2015, Amazon acquired Annapurna Labs, an Israeli chip startup. Then in 2018, Amazon launched Graviton, a server chip built on the architecture of British chip designer Arm, as a competitor to x86 CPUs from giants like Intel and AMD.
"Arm chips could account for as much as 10% of total server sales, and a significant portion of that will come from Amazon," said Stacy Rasgon, a senior analyst at Bernstein Research. "So on the CPU side, they've done a pretty good job."
Also in 2018, Amazon unveiled chips focused on artificial intelligence; Google had released its first Tensor Processing Unit (TPU) two years earlier. Microsoft has yet to announce Athena, the artificial intelligence chip it is reportedly developing with AMD.
Amazon has a chip lab in Austin, Texas, where it develops and tests Trainium and Inferentia. Matt Wood, the company's vice president of products, explained what the two chips do.
"Machine learning is divided into these two different stages: you train machine learning models, and then you run inference against those trained models," he said. "Compared with other ways of training machine learning models on AWS, Trainium improves price/performance by about 50%."
Trainium debuted in 2021, following the launch of the second-generation Inferentia in 2019. Inferentia allows customers to deliver "very low-cost, high-throughput, low-latency machine learning inference, which is all the predictions you get when you feed a prompt into a generative AI model; all of that gets processed, and then you get a response," Wood said.
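Wood's two stages can be illustrated in a few lines of plain Python. This is a minimal, illustrative sketch of training followed by inference (here, a simple least-squares line fit); it is not tied to AWS services or to how Trainium and Inferentia actually work internally.

```python
# The two machine-learning stages Wood describes:
# (1) training fits model parameters to data;
# (2) inference applies the trained model to new inputs.

def train(xs, ys):
    """Training stage: fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b  # the "trained model" is just these two parameters

def infer(model, x):
    """Inference stage: apply the trained model to a new input."""
    a, b = model
    return a * x + b

model = train([1, 2, 3, 4], [2, 4, 6, 8])  # training: learns a = 2, b = 0
print(infer(model, 10))                    # inference on a new input: 20.0
```

Chips like Trainium target the first stage, which is compute-heavy and done once per model; chips like Inferentia target the second, which runs on every user request.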
For now, however, Nvidia's GPUs are still the undisputed king when it comes to training models. In July, AWS unveiled new AI acceleration hardware based on Nvidia's H100.
"In the past 15 years, Nvidia has built a huge software ecosystem around its chips that no other company has. Right now, the biggest winner in AI is Nvidia," Rasgon said.
Amazon has the cloud computing advantage
However, AWS' dominance in cloud computing is a big advantage for Amazon.
"Amazon doesn't need extra attention; the company already has a very strong installed cloud base. All they need to do is figure out how to use generative artificial intelligence to help their existing customers expand into value-creating activities."
When choosing a generative AI provider among Amazon, Google, and Microsoft, millions of AWS customers may gravitate toward Amazon because they are already familiar with the platform, where they run other applications and store their data.
"It's a question of speed," explained Mai-Lan Tomsen Bukovec, vice president of technology at AWS. "How quickly these companies can develop these generative AI applications comes down to starting with the data they already have in AWS and driving it with the compute and machine learning tools we provide."
According to data provided by Gartner, AWS is the world's largest cloud computing provider, accounting for 40% of the market in 2022. Although Amazon's operating profit has declined year-on-year for three consecutive quarters, AWS still accounted for 70% of Amazon's $7.7 billion operating profit in the second quarter. AWS has historically had much higher operating margins than Google Cloud.
In addition, AWS has a growing portfolio of developer tools focused on generative artificial intelligence. "Let's rewind the clock to before ChatGPT. It's not like after that happened we suddenly rushed out a plan, because you can't design a new chip that quickly, let alone build a foundational service in two to three months," said Swami Sivasubramanian, AWS vice president for databases, analytics, and machine learning.
Bedrock gives AWS customers access to large language models developed by Anthropic, Stability AI, and AI21 Labs, as well as Amazon's own Titan. "We don't believe that one model will rule the world. We want our customers to have state-of-the-art models from multiple providers, because they will choose the right tool for the right job," Sivasubramanian said.
One of Amazon's newest AI offerings is AWS HealthScribe, a service launched in July to help doctors draft summaries of patient visits using generative AI. Amazon also has a machine learning center, SageMaker, which provides algorithms, models and other services.
Another important tool is CodeWhisperer, which Amazon says enables developers to complete tasks an average of 57 percent faster. Last year, Microsoft also reported that its coding tool, GitHub Copilot, had boosted productivity.
In June of this year, AWS announced a $100 million generative artificial intelligence innovation center. AWS CEO Selipsky said: "We have many customers who want generative AI technology, but they don't necessarily know what it means for them in the context of their own businesses. So we will bring in solutions architects, engineers, strategists, and data scientists to work with them one-on-one."
CEO Jassy personally led the team to build a large language model
While AWS has so far focused primarily on developing tools rather than building a ChatGPT competitor, a recently leaked internal email revealed that Amazon CEO Andy Jassy is directly overseeing a new central team that is also building scalable large language models.
During the second-quarter earnings call, Jassy had said that a "substantial portion" of AWS's business is now driven by artificial intelligence and the more than 20 machine learning services it supports, whose customers include Philips, 3M, Old Mutual and HSBC.
The explosion of artificial intelligence has brought with it a host of security concerns, with companies worried about employees feeding proprietary information into the training data used by public large language models.
"I can't tell you how many Fortune 500 companies I've talked to that have banned ChatGPT," said AWS CEO Selipsky. "Anything you do, whatever model you use, will run in your own isolated virtual private cloud environment. It will be encrypted, with the same AWS access controls."
For now, Amazon is only accelerating its push into generative AI, claiming that "more than 100,000" customers are currently using machine learning on AWS. While that's a fraction of AWS's millions of customers, analysts say that could change.
"We don't see companies saying: 'Oh wait, Microsoft is already leading in generative AI, let's go change our infrastructure strategy and migrate everything to Microsoft.' If you are already an Amazon customer, you are likely to explore the Amazon ecosystem more broadly." (Text / Jinlu)