OpenAI’s ChatGPT will ‘see, hear and speak’ in a major update. The update brings ChatGPT closer to popular artificial intelligence (AI) assistants such as Apple’s (AAPL.O) Siri, enabling the chatbot to hold voice conversations with users and interact with images.
In a blog post on Monday, OpenAI said the voice capability “opens doors to many creative and accessibility-focused applications.”
Similar AI services, such as Apple’s Siri, Google Assistant from Google (GOOGL.O), and Alexa from Amazon.com (AMZN.O), are integrated with the devices on which they operate and are frequently used to set alarms and reminders and to retrieve information from the internet.
Since its launch last year, businesses have adopted ChatGPT for a range of tasks, from summarizing documents to writing computer code, sparking a race among large technology firms to launch their own products and services based on generative AI.
ChatGPT’s new voice feature can read users’ text input aloud, narrate bedtime stories, and settle dinner-table debates.
According to OpenAI, the underlying technology is already being used by Spotify (SPOT.N) to let the platform’s podcasters translate their content into other languages.
With the new image functionality, users can take photographs of the world around them and ask the chatbot to, for example, “explore the contents of your fridge to plan a meal,” “troubleshoot why your grill won’t start,” or “analyze a complex graph for work-related data.”
At the moment, the best-known tool for extracting information from images is Alphabet’s Google Lens.
ChatGPT’s Plus and Enterprise subscribers will gain access to the new capabilities as they roll out over the next two weeks.