OpenAI's ChatGPT to respond to video feeds in real time
New York, US: OpenAI has added a big new feature to its chatbot, ChatGPT: it can now see and interact with you through live video. The company first hinted at this capability about seven months ago, and it's finally here.
At a digital event, OpenAI showcased this new trick. The chatbot can now see objects with a smartphone camera and chat about what it sees! This can help people with things like replying to app messages or learning to make coffee. This new video ability will slowly become available to those who subscribe to ChatGPT Plus and Pro.
OpenAI plans to roll it out to business and education users around January, according to its official statement. When OpenAI launched ChatGPT two years ago, that debut sparked a wave of investment in text-based chatbots. OpenAI and other companies have since been adding skills like responding to sounds, pictures, and now video to their chatbots.
These added abilities make their services more engaging and useful. A recent Bloomberg News report shared OpenAI's announcement, which is part of the company's 12-day series of live product events. Alongside the upgraded ChatGPT, OpenAI also rolled out a new premium ChatGPT Pro subscription tier and Sora, a new AI tool that generates videos.
In a recent demonstration on CBS's "60 Minutes," OpenAI President Greg Brockman used Advanced Voice Mode with vision to quiz Anderson Cooper on anatomy. As Cooper sketched body parts on a blackboard, ChatGPT could "make sense" of what he was drawing.
"It's exactly in the right spot," ChatGPT said. "It's all in the head. And as far as shape goes, it's an early start; the brain is a bit more of an oval."
When Advanced Voice Mode finally launched in early fall for some ChatGPT users, it didn't include the visual analysis feature. In the build-up to Thursday's release, OpenAI had focused on bringing the voice-only Advanced Voice Mode experience to more platforms and to users in the EU.