OpenAI's first developer conference: GPT-4 Turbo, GPT Store
Source: Newin
At OpenAI's first developer day, Sam Altman walked through a series of major announcements: GPT-4 Turbo, a new model with more capability, longer context, and more control; the Assistants API, which simplifies building assistive agents for developers; and the ability to build GPTs through conversation. OpenAI emphasized that natural language will be an important way of interacting with computers in the future. Altman announced the upcoming GPT Store, which will let users share and discover innovative GPT apps, with revenue-sharing incentives. He also discussed deepening the collaboration with Microsoft and showed off the new text-to-speech model and improved function calling. The event can be summarized in the following sections:
Here is the full recap of OpenAI's first developer day:
First, Altman recalled the November 30, 2022 release of ChatGPT as a low-key research preview, and proudly pointed to the March 2023 launch of GPT-4, still the most capable model in the world.
Altman also recapped the voice and vision capabilities added to ChatGPT over the past few months, giving it the ability to see, hear, and speak, and announced the launch of DALL·E 3, the world's most advanced image model, now integrated into ChatGPT.
For enterprise customers, OpenAI launched ChatGPT Enterprise, which offers faster GPT-4 access, longer context windows, and stronger enterprise-grade security and privacy protections. Altman revealed that about 2 million developers are using their APIs, more than 92% of Fortune 500 companies are building solutions on their products, and ChatGPT now has about 100 million weekly active users. He noted in particular that this was achieved entirely through word of mouth: users find the product useful and recommend it to friends. He concluded that while the numbers are impressive, what matters more is how people are using these products and leveraging AI, and then showed a video illustrating these advances.
In the video, Sam Altman showed how AI can profoundly affect people's personal lives and creativity. One case tells of a man who uses ChatGPT to express love and support to his father in his father's native language, Tagalog, with its complex grammatical structure, asking for a tone that is non-romantic but respectful and affectionate. The case demonstrates ChatGPT's ability to understand and apply cultural and linguistic nuances.
According to Altman, ChatGPT's creative applications are remarkable and can help creators expand their thinking and build confidence. One example is a person who uses ChatGPT for everyday tasks like checking what is missing from the fridge, planning a vegetarian recipe, and even creating spreadsheets and writing code; the person in another clip discovers ChatGPT's warmth, patience, knowledgeability, and responsiveness. For a 4.0 student and mother of four, ChatGPT's ability to answer questions and explain concepts reduces her reliance on tutors and buys her more time with her family and herself. Finally, a man in the video describes how, after spinal cord and brain surgery, he was limited to using his left hand; with ChatGPT's voice input and conversational features, he has been greatly aided. These stories illustrate ChatGPT's potential to help with daily life, support learning, and overcome obstacles, while demonstrating how AI can connect and empower users on a global scale like never before.
Sam Altman then shared how people are leveraging OpenAI's technology, emphasizing that this is exactly why they do what they do, before announcing a series of new developments. Altman said they have spent a lot of time talking to developers around the world and listening to their feedback, which profoundly shaped what they were about to show.
OpenAI launched a new model, GPT-4 Turbo, which addresses the needs of many developers. Altman detailed six major updates. The first is context length: GPT-4 supports contexts of up to 8,000 tokens, and in some cases up to 32,000, while GPT-4 Turbo supports contexts of up to 128,000 tokens, equivalent to about 300 pages of a standard book and 16 times the 8,000-token context. Beyond the longer window, the model's accuracy over long contexts has also improved significantly.
The second update is more control. To give developers more control over model responses and outputs, they introduced JSON mode, a new feature that ensures the model responds with valid JSON, greatly simplifying API calls. Function calling has also improved: multiple functions can now be called at the same time, and the model follows instructions better. They also introduced reproducible outputs: passing a seed parameter makes the model return consistent output, which clearly provides a higher degree of control over the model's behavior. This feature is currently in beta.
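A minimal sketch of how these two controls might be exercised through the OpenAI Python SDK, assuming the DevDay-era model name gpt-4-1106-preview; the response_format and seed parameters follow the SDK's documented options:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask for guaranteed-valid JSON (JSON mode) and a reproducible completion (seed).
response = client.chat.completions.create(
    model="gpt-4-1106-preview",           # assumed GPT-4 Turbo preview name at launch
    seed=42,                              # same seed + same inputs -> consistent output (beta)
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Reply in JSON with keys 'city' and 'population'."},
        {"role": "user", "content": "Give me a fact about Tokyo."},
    ],
)
print(response.choices[0].message.content)  # a valid JSON string
```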
In the coming weeks, they will roll out a new feature that lets log probabilities (logprobs) be viewed in the API. The third update is better world knowledge. To give the model access to more accurate world knowledge, they introduced a retrieval feature that lets knowledge be pulled from external documents or databases. They have also updated the knowledge cutoff: GPT-4 Turbo's world knowledge now extends to April 2023 and will continue to improve.
The fourth update is new modalities: DALL·E 3, GPT-4 Turbo's vision capabilities, and the new text-to-speech model are all coming to the API today. A handful of customers are already using DALL·E 3 to programmatically generate images and designs; today, Coca-Cola is launching a campaign that lets customers use DALL·E 3 to generate Diwali greeting cards.
Of course, their safety systems help developers prevent applications from being misused, and these tools are available in the API. GPT-4 Turbo can now accept image input via the API and generate captions, classifications, and analysis. For example, Be My Eyes uses this technology to help people who are blind or have low vision with everyday tasks, such as identifying the product in front of them. And with the new text-to-speech model, you can generate natural-sounding audio from text in the API, with six preset voices to choose from.
Altman played an audio sample that showed off the naturalness of the new text-to-speech model. This technology makes app interactions more natural and accessible, and unlocks many use cases like language learning and voice assistants.
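As an illustration, a hedged sketch of both new modalities via the Python SDK, assuming the launch-era model names gpt-4-vision-preview and tts-1 and the preset voice "alloy"; the image URL is a placeholder:

```python
from openai import OpenAI

client = OpenAI()

# 1) Image input: ask GPT-4 Turbo with vision to describe an image.
caption = client.chat.completions.create(
    model="gpt-4-vision-preview",         # assumed launch-era vision model name
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the product in this photo."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
description = caption.choices[0].message.content
print(description)

# 2) Text-to-speech: render that description as natural-sounding audio.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",                        # one of the six preset voices
    input=description,
)
speech.stream_to_file("description.mp3")
```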
Altman also announced Whisper v3, the next version of their open-source speech recognition model, and said it will be coming to the API soon. This version improves performance across many languages, and he thinks developers will really like it.
Next, he discussed customization. Fine-tuning for GPT-3.5 launched a few months ago and has performed well; starting today, it extends to the 16K version of the model. They are also inviting users who actively use fine-tuning to apply for the GPT-4 fine-tuning experimental access program. The fine-tuning API is ideal for improving a model's performance with relatively little data across a variety of applications, whether that means learning entirely new areas of knowledge or working with large amounts of proprietary data.
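For context, a minimal sketch of the fine-tuning API flow in Python, assuming a prepared JSONL file of chat-formatted examples; gpt-3.5-turbo-1106 is assumed here as the launch-era 16K-context model name:

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of {"messages": [...]} training examples.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tuning job on the 16K-context GPT-3.5 model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo-1106",   # assumed launch-era 16K model name
)
print(job.id, job.status)         # poll until status == "succeeded"
```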
For the fifth update, Altman announced a new initiative called Custom Models, in which OpenAI's researchers work closely with individual companies to build specialized custom models for specific use cases using OpenAI's tools. This includes modifying every step of the model training process, doing domain-specific pre-training, customizing the post-training reinforcement learning process, and more. He admitted that at first they will not be able to work with many companies; it will be a lot of work and not cheap, at least initially. But businesses looking to push things to the current limits should reach out.
In addition, Altman announced higher rate limits. They will double the tokens-per-minute limit for all existing GPT-4 customers, making it easier to do more, and customers can request further rate-limit and quota changes directly in their API account settings. Alongside the rate limits, they also introduced Copyright Shield: if a customer faces a legal claim for copyright infringement, OpenAI will step in to defend the customer and cover the costs incurred. This applies to both ChatGPT Enterprise and the API. He also clearly reminded everyone that OpenAI never trains on data from the API or ChatGPT Enterprise.
Altman then turned to a developer request bigger than all the others: pricing. He announced that GPT-4 Turbo is not only smarter than GPT-4 but also cheaper, with a 3x reduction in prompt token prices and a 2x reduction in completion token prices. The new pricing is $0.01 per 1,000 prompt tokens and $0.03 per 1,000 completion tokens, making GPT-4 Turbo's blended rate more than 2.75 times cheaper than GPT-4. They worked very hard to achieve this and hope everyone is excited about it.
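To make the numbers concrete, a quick back-of-the-envelope calculation using the prices quoted above (per 1,000 tokens):

```python
# GPT-4 Turbo pricing as quoted at DevDay, per 1,000 tokens.
PROMPT_RATE = 0.01       # USD per 1K prompt tokens
COMPLETION_RATE = 0.03   # USD per 1K completion tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost of a single GPT-4 Turbo request."""
    return (prompt_tokens / 1000) * PROMPT_RATE + (completion_tokens / 1000) * COMPLETION_RATE

# e.g. a 100K-token prompt (roughly a 300-page book) with a 1K-token answer:
print(f"${request_cost(100_000, 1_000):.2f}")  # -> $1.03
```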
They had to choose between price and speed and decided to prioritize price first, but speed improvements are coming next. He also announced a price cut for GPT-3.5 Turbo 16K, with a 3x reduction on input tokens and a 2x reduction on output tokens, which means the GPT-3.5 16K model is now cheaper than the previous GPT-3.5 4K model. Running a fine-tuned GPT-3.5 Turbo 16K model is also cheaper than the older fine-tuned 4K version. He hopes these changes address everyone's feedback and is excited to bring the improvements to everyone.
Having introduced all of this, he noted that OpenAI is lucky to have a partner who plays a vital role in making it possible, and brought out a special guest: Satya Nadella, CEO of Microsoft.
Nadella recalled his first encounter with OpenAI, when Altman asked him for some Azure credits, and noted how far they have come since then. He praised OpenAI for building something magical and shared two of Microsoft's perspectives on the partnership. First, workloads: they work together to build the systems that support the models OpenAI is building, from power to data centers to racks, accelerators, and networks. Microsoft's goal is to build the best systems so that OpenAI can build the best models and make them available to developers. Second, Microsoft is itself a developer building products. Nadella mentioned that when he first saw GitHub Copilot running on GPT, his conviction in this entire generation of foundation models completely changed. They are committed to building their products on top of OpenAI's APIs, and they hope to make GitHub Copilot Enterprise available for all attendees to try.
Altman also asked Nadella for his thoughts on the future of the partnership and of AI. Nadella emphasized that Microsoft is fully committed to providing the systems and computing resources OpenAI needs to make bold progress on its roadmap. They are committed to providing the best training and inference systems, and the most compute, so that OpenAI can continue to push the frontier. Nadella believes the true value of AI lies in its ability to empower people, which aligns with Microsoft's mission to empower every person and every organization on the planet to achieve more. He noted that safety is a key focus of the collaboration and a joint effort from the start, not an afterthought. Nadella's words underscored the depth and purpose of the OpenAI and Microsoft partnership, demonstrating the two companies' shared vision for driving AI's growth and adoption.
Altman then turned back to ChatGPT: although this was a developer conference, they made some improvements there too. ChatGPT now uses GPT-4 Turbo with all the latest improvements, including the new knowledge cutoff, and will continue to be updated; these changes took effect the same day. ChatGPT can now browse the web, write and run code, analyze data, generate images, and much more, as needed. They also heard user feedback that the model selector was extremely annoying, so it has been removed: starting today, users no longer have to click through a drop-down menu, and ChatGPT automatically knows which capability to use and when.
Altman pointed out that while price is important, it is not the biggest developer request. They believe that if people are given better tools, they will do amazing things. People want AI that is smarter, more personalized, more customizable, and able to do more on the user's behalf; eventually, you will simply ask the computer for what you need, and it will do all of these tasks for you. In the field of AI, these capabilities are often referred to as "agents." On safety, OpenAI believes gradual, iterative deployment is the best approach, and that it is especially important to move cautiously toward this agent-driven future, which will require a great deal of technical work and thoughtful deliberation by society.
So they took a first small step toward that future. Altman was thrilled to introduce GPTs: tailored versions of ChatGPT built for a specific purpose. You can build a customized version of ChatGPT for almost anything with instructions, expanded knowledge, and actions, and then publish it for others to use. Because GPTs combine instructions, expanded knowledge, and actions, they can be more useful, better suited to particular contexts, and offer better control.
GPTs will make it easier to accomplish all kinds of tasks, or just make things more fun, and you can use them directly in ChatGPT. In effect, you can program a GPT with language, simply by talking to it. It is easy to customize the behavior to suit your needs, which makes GPTs very easy to build and empowers everyone.
Altman went on to say they would show what GPTs are, how to use them, and how to build them, then discuss how GPTs will be distributed and discovered. After that, for developers, they would show how to build these agent-like experiences into their own apps.
He presented a few examples. Their partners at Code.org work to expand computer science curricula in schools, with courses used by tens of millions of students around the world. Code.org built a lesson-planning GPT to help teachers create a more engaging experience for middle schoolers. For example, if a teacher asks for a creative way to explain for loops, the GPT will do so; in this case, it explains them in terms of a video game character repeatedly picking up coins, which is very easy for an eighth-grader to understand. This GPT combines Code.org's extensive curriculum and expertise, letting teachers adapt it to their needs quickly and easily.
Next, Canva built a GPT where you can start a design by describing what you want in natural language. If you say, "Make a poster for the Dev Day reception this afternoon" and provide some details, it will generate a few starting options by calling Canva's API.
Altman noted that the concept may be familiar to some: they have evolved plugins into custom actions for GPTs. You can keep chatting with this GPT to see different iterations, and when you see one you like, you can click through to Canva for the full design experience.
Then they wanted to show a GPT live. Zapier has built a GPT that lets you perform actions across 6,000 apps, unlocking a wide range of integration possibilities. Altman introduced Jessica, a solutions architect at OpenAI, to run the demo.
Solutions architect Jessica Shei took the stage and jumped into the demo, pointing out that GPTs live in the top-left corner of the interface and showing an example called Zapier AI Actions. She pulled up her calendar for the day and mentioned that she had connected the GPT to her calendar.
During the demo, Jessica asked about the day's schedule. She emphasized that GPTs are built with security in mind: the system asks for the user's permission before any action is taken or data is shared. She allowed the GPT to access her schedule and explained that GPTs are designed to take instructions from the user and decide which function to invoke to perform the appropriate action.
Next, Jessica showed the GPT successfully connecting to her calendar and extracting event information. She also instructed it to check the calendar for conflicts and showed that it identified one. She then demonstrated how to let a person named Sam know that she needed to leave early, switching to a conversation with Sam to make the request.
When the GPT completed the request, Jessica asked Sam whether he had received the notification, and Sam confirmed receipt. Jessica used the example to illustrate the potential of GPTs and said she looked forward to seeing what others would build.
Sam then introduced more examples of GPTs. He mentioned that beyond the ones demonstrated, many GPTs are being created and many more will be soon. Realizing that many people who want to build a GPT cannot program, they made it possible to program a GPT through conversation. Altman believes natural language will be an important part of how people use computers in the future and sees this as an interesting early example.
Next, Altman showed how to build a GPT. He wanted to create one that helps founders and developers get advice when launching new projects. He went into the GPT builder, told it what he wanted, and the builder started writing detailed instructions based on his description. It also suggested the name "Startup Mentor" and began populating the preview with information and possible questions. Altman uploaded a transcript of a previous speech he gave on entrepreneurship so the GPT could base its advice on it, and added "give concise and constructive feedback" to the instructions.
Altman then tried the GPT in the preview tab and was pleased with the results. He decided to publish it only to himself for now, so he could refine it further and share it later. He mentioned that he had always wanted to create a bot like this and was happy to finally make it happen. Altman emphasized that GPTs let people easily share and discover the interesting things they do with ChatGPT. People can keep GPTs private, share their creations publicly via links, or, with ChatGPT Enterprise, make GPTs just for their company. They plan to launch the GPT Store later this month, where people can list their GPTs and OpenAI will feature the best and most popular ones.
Altman also said that OpenAI will make sure GPTs in the store follow its policies, and that OpenAI believes in revenue sharing: it will pay a portion of revenue to those who build the most useful and most popular GPTs. They look forward to fostering a vibrant ecosystem through the GPT Store and will share more information soon.
Altman also emphasized that this is a developer conference, and they are bringing the same concepts to the API. He noted that many people have already built agent-like experiences on the API, such as Shopify's Sidekick, Discord's Clyde, and Snap's My AI, a customizable chatbot that can be added to group chats and make recommendations. These experiences are great, but building them is often hard, sometimes taking months and teams of dozens of engineers. To simplify this, they launched the new Assistants API.
The Assistants API includes persistent threads, built-in retrieval, a code interpreter (a working Python interpreter in a sandboxed environment), and the improved function calling discussed earlier.
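A minimal sketch of that flow with the Python SDK's beta Assistants endpoints, assuming the launch-era model name gpt-4-1106-preview; polling is simplified for brevity:

```python
import time
from openai import OpenAI

client = OpenAI()

# Create an assistant with the code interpreter tool enabled.
assistant = client.beta.assistants.create(
    name="Travel helper",
    instructions="You help users plan trips and do quick calculations.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# Each user gets a persistent thread; the API manages conversation state.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="If I share $500 among 5 winners, how much does each get?",
)

# Run the assistant on the thread and poll until it finishes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# The assistant's reply is the newest message on the thread.
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```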
Next, Romain, Head of Developer Experience at OpenAI, showed how this works. Romain said he is encouraged to see so many people incorporating AI into their applications, and announced that they are not only introducing new modalities in the API but are also excited to improve the developer experience so it is easier to build assistive agents. He then jumped straight into a live build.
Romain introduced Wanderlust, a travel app he is building for explorers around the world. He showed destination ideas generated with GPT-4 and illustrations generated programmatically with the new DALL·E 3 API. He then showed how to enhance the app by adding a simple assistant: he switched to the new Assistants playground, created an assistant, gave it a name, provided initial instructions, selected the model, enabled the code interpreter and retrieval tools, and saved.
Romain went on to explain how to integrate the assistant into the application, walking through some of the code and demonstrating how to create a new thread for each new user and add their messages to it. He also showed how to run the assistant at any time to return a response to the application.
Next, Romain showed off function calling, a feature he particularly likes. Function calls now guarantee valid JSON output, and multiple functions can be called at the same time. He then demonstrated how the assistant knows which functions to call to annotate the map on the right, adding markers to it in real time.
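As a rough illustration of that pattern, a sketch of a Chat Completions call with a hypothetical add_marker function that the app would execute to pin locations on its map; the tool schema format is the SDK's documented one, while the function itself is invented for this example:

```python
import json
from openai import OpenAI

client = OpenAI()

# Describe the app's function to the model with a JSON Schema.
tools = [{
    "type": "function",
    "function": {
        "name": "add_marker",  # hypothetical app-side function
        "description": "Add a pin to the map at the given coordinates.",
        "parameters": {
            "type": "object",
            "properties": {
                "lat": {"type": "number"},
                "lon": {"type": "number"},
                "label": {"type": "string"},
            },
            "required": ["lat", "lon", "label"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Show Paris and Tokyo on the map."}],
    tools=tools,
)

# The model may return several tool calls at once (parallel function calling).
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)  # guaranteed-parseable JSON
    print(f"app would run {call.function.name}({args})")
```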
Romain also covered retrieval, which gives assistants knowledge beyond the immediate user message. He uploaded a PDF, which the system read and displayed on screen, and then dragged and dropped Airbnb booking information into the conversation as well.
Romain emphasized that developers typically have to compute embeddings and set up chunking algorithms themselves; now all of that is handled by the new stateful API. He also showed the developer dashboard, where you can inspect the steps the tools took, including the functions that were called and the PDF files that were uploaded.
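A hedged sketch of wiring that up, assuming the beta Assistants retrieval tool and its launch-era file_ids parameter; booking.pdf is a placeholder file:

```python
from openai import OpenAI

client = OpenAI()

# Upload a document for the assistant to retrieve from; chunking and
# embeddings are handled server-side by the stateful API.
doc = client.files.create(
    file=open("booking.pdf", "rb"),   # placeholder document
    purpose="assistants",
)

assistant = client.beta.assistants.create(
    name="Trip concierge",
    instructions="Answer questions using the user's uploaded bookings.",
    tools=[{"type": "retrieval"}],    # launch-era retrieval tool
    file_ids=[doc.id],
    model="gpt-4-1106-preview",
)
```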
Romain then discussed a feature many developers have been waiting for: the code interpreter, now also available in the API. It lets the AI write and execute code, and even generate files, on the fly. He demonstrated how the code interpreter kicks in when a request requires currency conversion and date arithmetic. Finally, Romain recapped how quickly you can create an agent that manages conversation state for users, leverages external tools such as knowledge retrieval and the code interpreter, and calls your own functions to get things done.
Romain also showed a feature combining the newly released modalities and function calling: a custom assistant he built for Dev Day, driven by voice instead of a chat interface. He showed a simple Swift app that takes microphone input, with terminal logs revealing what happens in the background: Whisper converts the voice input to text, a GPT-4 Turbo assistant handles the request, and the new TTS API gives it a voice.
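A rough sketch of that voice loop in Python rather than Swift, assuming the hosted whisper-1 and tts-1 model names; input.wav is a placeholder recording, and the assistant step is reduced to a single chat completion:

```python
from openai import OpenAI

client = OpenAI()

# 1) Speech to text with Whisper.
transcript = client.audio.transcriptions.create(
    model="whisper-1",
    file=open("input.wav", "rb"),     # placeholder microphone recording
)

# 2) Let GPT-4 Turbo handle the request (stands in for the assistant here).
reply = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": transcript.text}],
)
answer = reply.choices[0].message.content

# 3) Text to speech with one of the six preset voices.
speech = client.audio.speech.create(model="tts-1", voice="nova", input=answer)
speech.stream_to_file("reply.mp3")
```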
Romain also demonstrated how the assistant can connect to the internet and take real-world actions for the user: he had the assistant give $500 in OpenAI credits to five random Dev Day attendees, and it completed the task successfully.
Finally, in his closing remarks at OpenAI's developer day, Sam Altman said the Assistants API is in beta and that he is excited to see how developers will use it. He emphasized that GPTs and assistants are precursors on the way to more sophisticated agents that will be able to plan and execute ever more complex tasks for users.
Altman reiterated the importance of gradual, iterative deployment and encouraged people to start using these agents now, so we can adapt to a world where they become more capable. He assured the audience that OpenAI will continue to update its systems based on user feedback, noting that while OpenAI has outstanding talent density, it still takes enormous effort and coordination to achieve all of this. He feels he has the best colleagues in the world and is incredibly grateful to work with them.
As for why OpenAI's team works so hard: they believe AI will be part of a technological and social revolution that changes the world in many ways. Altman returned to the earlier point that giving people better tools lets them change the world. He believes AI will bring personal empowerment and agency at an unprecedented scale, elevating humanity to a new level. As intelligence becomes ubiquitous, we will all have superpowers on demand. He is excited to see how people will use this technology, and about the new future everyone is building together.