OpenAI has launched a new AI model for ChatGPT. The update brings GPT-4-level capabilities to everyone, including free users, according to chief technology officer Mira Murati during a livestreamed event.
She highlighted that the new model, GPT-4o, is “much faster,” with enhanced capabilities across text, video, and audio. OpenAI also plans to eventually let users video chat with ChatGPT.
The “o” in GPT-4o stands for “omni,” and the new model enables ChatGPT to handle 50 languages with improved speed and quality.
Murati said the model will also be available via OpenAI’s API, allowing developers to build applications on top of it. She added that GPT-4o is twice as fast as, and half the cost of, GPT-4 Turbo.
OpenAI researcher Mark Chen noted that the model can “perceive your emotion” and can handle being interrupted mid-response. The team also had it analyze a user’s facial expression and comment on their emotions.
The company plans to test Voice Mode in the coming weeks, offering early access to paid subscribers of ChatGPT Plus.
OpenAI stated that the new model can respond to users’ audio prompts “in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation.”
Additionally, OpenAI’s new model can function as a translator, even in audio mode. Chen demonstrated the tool’s ability to listen to Murati speaking Italian while he spoke English, translating into their respective languages as they conversed.
Team members also showcased the model’s ability to solve math equations and assist with writing code, positioning it as a stronger competitor to Microsoft’s GitHub Copilot.