- The Advanced Voice Mode for ChatGPT, which enables realistic conversations, is rolling out in a limited alpha to some paying users.
- With Advanced Voice Mode, ChatGPT can hold natural conversations, sense and respond to your emotions, and let you interrupt it mid-response.
- The vision mode from the demos, which lets users share videos or their screens with ChatGPT, is missing from the alpha.
Remember the GPT-4o voice demo OpenAI showed a couple of months ago, the one that shook the world with how human it sounded? You know, the one that could briefly make you forget you were talking to an AI, and that reminded many people of Scarlett Johansson's AI character from the movie Her (and drew criticism for the same reason).
OpenAI is finally starting to roll it out in alpha to a small number of ChatGPT Plus users, as the company shared on X. The company had initially planned to begin the rollout in June, but delayed it for safety reasons and to ensure it could "reach [OpenAI's] bar to launch". In the meantime, OpenAI worked with a team of external red teamers to improve the model so it would detect and refuse certain content.
Now, some paying users are starting to get access to the model in their ChatGPT app, though for now the alpha will stay small. The company says it plans to gradually expand access and bring the feature to all paying users later in the fall.
There doesn't seem to be any way to request access to the Advanced Voice Mode. If you are among the small number of users selected for the alpha, you'll receive an email with instructions as well as a notification in your ChatGPT app inviting you to try it out.
The Advanced Voice Mode will only speak in four preset voices: Juniper, Ember, Cove, and Breeze. Notably, OpenAI removed Sky, the voice that sounded like Scarlett Johansson, shortly after the demo debuted at the Spring event and the actor sent letters to the company asking how the voice was made (to which OpenAI apparently never responded). Sky is still not part of the roster, it seems.
GPT-4o also won't be able to produce output in any voice other than the presets, a measure meant to protect user privacy. There are also guardrails in place to block requests for violent or copyrighted content.
If you have access to the Advanced Voice Mode in your ChatGPT app, go ahead and give it a try. While it won't be capable of everything shown in the demos until it gains its vision capabilities, it still seems quite impressive, judging by videos shared by users who have access.