Google Rolls Out Gemini's Real-Time AI Video Features
Google has begun rolling out real-time AI video features for its Gemini AI model, as confirmed by a company spokesperson to The Verge. These features, which include the ability to "see" and understand information from a user's computer screen and smartphone camera, are being released to Google One AI Premium subscribers.
The rollout marks a significant step forward for Google's AI capabilities, building on its Project Astra technology, first showcased a year ago. Gemini's new screen-reading ability was first reported by a Reddit user; Google had previously said the feature would reach advanced subscribers in late March. A video posted by the user shows the AI extracting information from a computer screen.
The second feature, live video, lets Gemini interpret a real-time feed from a smartphone camera and answer questions about what it sees. In Google's own demonstration video, a user asks the assistant for advice on choosing a paint color for pottery.
"Google’s rollout of these features is a fresh example of the company’s big AI assistant lead," The Verge reports. Google is currently ahead of competitors such as Amazon, which is preparing a limited early-access launch of its Alexa Plus upgrade, and Apple, which has delayed its upgraded Siri. Samsung, notably, still ships its own Bixby assistant, but Gemini is the default assistant on Samsung phones.