OpenAI's new model development has hit a snag: is sparsity the key to reducing the cost of large models?

Even OpenAI has failed models.

Original source: Heart of the Machine

Image source: Generated by Unbounded AI

Training and running large models is extremely expensive. OpenAI has tried to reduce these costs, but so far without success.

Late last year, as ChatGPT was becoming a global sensation, OpenAI engineers began working on a new AI model, codenamed Arrakis. Arrakis was meant to enable OpenAI to run its chatbots at a lower cost.

But according to people familiar with the matter, OpenAI canceled the release of Arrakis in mid-2023 because the model did not run as efficiently as the company had expected.

This failure cost OpenAI valuable time and forced it to divert resources to developing different models.

The stakes were high for investor Microsoft as well: Arrakis' R&D program was a valuable part of the two companies' negotiations over a $10 billion investment and product deals. According to a Microsoft employee familiar with the matter, Arrakis' failure disappointed some Microsoft executives.

What's more, Arrakis' failure is a reminder that the future of AI could be fraught with pitfalls that are difficult to predict.

**What kind of model is Arrakis?**

According to people familiar with the matter, OpenAI hoped Arrakis would be a model with performance comparable to GPT-4 but higher operating efficiency. The key approach in Arrakis was to take advantage of sparsity.

Sparsity is a machine learning concept that other AI developers such as Google also openly discuss and use. Google executive Jeff Dean has said: "Sparse computing will become an important trend in the future."

OpenAI began researching sparsity early on, introducing sparse computing kernels back in 2017. Arrakis could have allowed OpenAI to deploy its technology more widely, because the company could power its software with a limited number of dedicated server chips.
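To see why sparsity reduces cost, consider a minimal sketch (illustrative only, not OpenAI's actual kernels): when most of a weight matrix is zero, a sparse matrix-vector product only has to touch the nonzero entries, so compute scales with the number of nonzeros rather than the full matrix size.

```python
import numpy as np

# Minimal illustration of weight sparsity (not OpenAI's actual kernels):
# when most of a weight matrix is zero, a sparse matrix-vector product
# only has to touch the nonzero entries.

rng = np.random.default_rng(0)
d = 1024
W = rng.standard_normal((d, d))

# Zero out ~90% of the weights to make the matrix sparse.
W[rng.random((d, d)) < 0.9] = 0.0

x = rng.standard_normal(d)

# Dense path: touches all d*d entries.
y_dense = W @ x

# Sparse path: store only the nonzero values and their indices, so the
# work scales with the number of nonzeros instead of d*d.
rows, cols = np.nonzero(W)
vals = W[rows, cols]
y_sparse = np.zeros(d)
np.add.at(y_sparse, rows, vals * x[cols])

assert np.allclose(y_dense, y_sparse)
print(f"fraction of weights actually used: {vals.size / W.size:.2f}")
```

Dedicated sparse kernels, like the block-sparse GPU kernels OpenAI released in 2017, exploit this structure at the block level, so the skipped work translates into real savings on hardware.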

Currently, a common way to increase sparsity is the mixture-of-experts (MoE) technique. However, Ion Stoica, a professor of computer science at the University of California, Berkeley, has said: "In general, the greater the number of experts, the sparser and more efficient the model, but it may lead to less accurate results."
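As a rough sketch of how an MoE layer achieves sparsity (a hypothetical minimal implementation with assumed names and sizes; `MoELayer`, `d_model=64`, `num_experts=8`, and `k=2` are illustrative, not details of Arrakis or GPT-4): a learned router scores the experts for each token, and only the top-k experts actually run.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Sparse MoE layer: each token runs through only its top-k experts."""

    def __init__(self, d_model=64, d_hidden=256, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, num_experts)  # scores experts per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.router(x)                  # (tokens, num_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                sel = idx[:, slot] == e          # tokens routed to expert e
                if sel.any():                    # only these tokens pay for expert e
                    out[sel] += weights[sel, slot].unsqueeze(-1) * expert(x[sel])
        return out

layer = MoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

With k fixed, each token activates only k of the num_experts expert networks, so adding experts increases total capacity without a proportional increase in per-token compute. That is the trade-off Stoica describes: a sparser model is cheaper per token, but routing each token to only a few experts leaves more room for routing mistakes to hurt accuracy.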

Around the spring of this year, OpenAI researchers began training Arrakis, using advanced computing hardware to help the model process large amounts of data. According to people familiar with the matter, the company expected training Arrakis to be much cheaper than training GPT-4. However, the research team soon realized that the model was not performing well enough to achieve the expected gains. After the team spent about a month trying to solve the problem, OpenAI's senior leadership decided to stop training it.

The good news is that OpenAI can integrate its work on Arrakis into other models, such as the upcoming multimodal large model Gobi.

Arrakis fell short of OpenAI's expectations because the company was pushing the model's sparsity, meaning that only a portion of the model is used to generate each response, which reduces running costs, two people familiar with the matter said. Why the model worked in early tests but performed poorly later is unknown.

It is worth mentioning that, according to people familiar with the matter, the public name OpenAI considered for Arrakis was GPT-4 Turbo.

**How important is it to reduce costs?**

With growing concerns about the technology's cost and the proliferation of open-source alternatives, making its models cheaper and more efficient is a top priority for OpenAI.

According to people familiar with the matter, Microsoft uses OpenAI's GPT models to power AI features in Office 365 applications and other services, and it had expected Arrakis to improve those features' performance while reducing their cost.

At the same time, Microsoft has started developing its own LLMs, which may cost less to run than OpenAI's models.

Although this setback has not slowed OpenAI's business this year, the company could lose ground as competition in the LLM field intensifies, especially as technology giants such as Google and Microsoft accelerate their own research and development.
