Meta AI, the company's AI assistant, has a new voice mode, allowing it to respond to questions verbally across its platforms, including Instagram, Messenger, WhatsApp, and Facebook. Users can choose from various voices, including AI clones of celebrities like Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key, and Kristen Bell.
Meta AI's voice mode differs from OpenAI's Advanced Voice Mode for ChatGPT, which has a more expressive, emotive delivery. Instead, it works more like Google's Gemini Live: the user's speech is transcribed, answered by the model, and the reply is read aloud in a synthetic voice.
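Conceptually, that kind of voice mode chains three stages: speech-to-text, text generation, and text-to-speech. The sketch below illustrates the idea with hypothetical stand-in functions; it is not Meta's or Google's actual implementation, and the function names and the `voice` parameter are assumptions for illustration only.

```python
# Hypothetical sketch of a transcribe -> answer -> read-aloud voice pipeline.
# Every function here is a stand-in stub, not a real Meta or Google API.

def transcribe(audio: bytes) -> str:
    """Speech-to-text stage (stand-in for an ASR model)."""
    return "What's the tallest mountain on Earth?"

def generate_answer(prompt: str) -> str:
    """Text-generation stage (stand-in for the assistant's language model)."""
    return "Mount Everest, at roughly 8,849 meters."

def synthesize(text: str, voice: str = "celebrity_voice_a") -> bytes:
    """Text-to-speech stage that reads the answer in a chosen synthetic voice."""
    return text.encode("utf-8")  # placeholder audio payload

def voice_mode_turn(user_audio: bytes) -> bytes:
    """One conversational turn: transcribe the question, answer it, speak the reply."""
    question = transcribe(user_audio)
    answer = generate_answer(question)
    return synthesize(answer)

if __name__ == "__main__":
    reply_audio = voice_mode_turn(b"...user speech...")
    print(f"Synthesized {len(reply_audio)} bytes of reply audio")
```

The practical difference from ChatGPT's Advanced Voice Mode is that a pipeline like this generates plain text and only then converts it to audio, so the spoken reply carries less of the expressive tone that an end-to-end speech model can produce.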
Meta is betting that these high-profile voices will attract users; reports suggest the company paid millions of dollars to license the celebrity likenesses. Whether the strategy pays off remains to be seen.
Beyond voice, Meta AI has new image analysis capabilities: users can share a photo of a flower or a dish, and the assistant will try to identify it or explain how to cook it. The feature is promising but occasionally produces inaccurate results.
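At its simplest, a feature like this pairs an image with a question and sends both to a multimodal model. The snippet below is an illustrative sketch of that request shape; `ImageQuery` and `identify_image` are hypothetical stand-ins, not part of any Meta API.

```python
# Illustrative-only sketch of an image-plus-question query of the kind described
# above; identify_image is a hypothetical stub, not a real multimodal API call.

from dataclasses import dataclass

@dataclass
class ImageQuery:
    image_path: str   # photo the user shares (e.g. a flower or a dish)
    question: str     # what they want to know about it

def identify_image(query: ImageQuery) -> str:
    """Stand-in for a multimodal model call; returns a canned answer here."""
    return f"{query.image_path} looks like a paella; here's a rough recipe outline."

if __name__ == "__main__":
    answer = identify_image(ImageQuery("dinner.jpg", "What is this and how do I cook it?"))
    print(answer)
```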
Meta is also piloting a Meta AI translation tool that automatically dubs voices in Instagram Reels: it translates the creator's speech into another language, simulates their voice in that language, and adjusts lip movements to match the new audio. The feature is currently in small tests, focusing on Latin American creators in the U.S. and translating between English and Spanish.
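A dubbing flow like that can be thought of as four stages chained together: transcription, translation, voice-preserving speech synthesis, and lip re-animation. The rough sketch below stubs out each stage to show the shape of the pipeline; all of the function names are assumptions for illustration, not Meta's implementation.

```python
# Rough sketch of the Reels dubbing flow described above, with every stage
# stubbed out; these names are hypothetical, not Meta's actual components.

def transcribe(audio: bytes) -> str:
    return "Hola, bienvenidos a mi canal."          # ASR stand-in

def translate(text: str, target_lang: str) -> str:
    return "Hi, welcome to my channel."             # machine-translation stand-in (es -> en)

def clone_voice(text: str, reference_audio: bytes) -> bytes:
    return text.encode("utf-8")                     # voice-preserving TTS stand-in

def lip_sync(video: bytes, dubbed_audio: bytes) -> bytes:
    return video + dubbed_audio                     # lip re-animation stand-in

def dub_reel(video: bytes, audio: bytes, target_lang: str = "en") -> bytes:
    """Transcribe the creator's speech, translate it, re-synthesize it in a voice
    that mimics the creator, then adjust lip movements to match the new audio."""
    text = transcribe(audio)
    translated = translate(text, target_lang)
    dubbed_audio = clone_voice(translated, audio)
    return lip_sync(video, dubbed_audio)

if __name__ == "__main__":
    dubbed_reel = dub_reel(b"<video frames>", b"<creator audio>", "en")
    print(f"Dubbed reel payload: {len(dubbed_reel)} bytes")
```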
With the addition of voice mode, Meta AI aims to provide a comprehensive AI experience across Meta's platforms, competing with ChatGPT and similar assistants. Its voice mode may not be as expressive as ChatGPT's, but it offers a functional alternative, and Meta's investment in celebrity voices, together with its expanding image analysis and translation capabilities, further strengthens its position in the AI landscape.