Development efforts are underway to significantly expand the capabilities of Google's Gemini assistant, with a focus on proactive assistance, agent-based task management, and advanced screen automation. These features are in development across multiple versions of the Google app beta, and several were previewed at Google I/O 2025.
Proactive Assistance and Daily Brief
Analysis of the Google app version 17.18 beta reveals code references to a new "Proactive Assistance" feature. The feature is designed to provide personalized suggestions based on user activity, utilizing data from applications such as Gmail and Calendar, as well as on-screen content and notifications.
Google has stated that data for this feature is processed in an encrypted space on the device and is not used for generative AI training or human review.
A demonstration at Google I/O 2025 showed Gemini checking a user's Calendar for an upcoming test and sending a notification with a link to a generated practice quiz. Separately, the existing "Your Day" proactive feed has been renamed "Daily brief," which may be an initial implementation of the Proactive Assistance feature.
Gemini Agent and "Remy" Development
Internal documents and code analysis of the Google app version 17.20 beta indicate a significant upgrade to the "Gemini Agent" feature, which was launched as an experimental feature with Gemini 3 in November. According to a report by Business Insider, Google employees are testing an agent codenamed "Remy."
Internal descriptions characterize Remy as a "24/7 personal agent for work, school, and daily life, powered by Gemini." The agent is described as being "deeply integrated across Google" and capable of monitoring for important events, handling complex tasks, learning user preferences over time, and taking actions on the web and with connected apps. These actions may include communicating, sharing documents, and making purchases.
The agent accesses information from chats, connected apps, personal context, location, and uploaded files. The feature remains experimental, with warnings that users are responsible for supervising tasks and that the agent may make mistakes. Users can manage and delete data the agent learns from interactions.
Experimental Labs Features
The Gemini web application has updated its Tools menu to include a new "Experimental Labs" section. This section, identified by a "Labs" beaker badge, contains features under active development:
- Agent: Available with AI Ultra subscription
- Dynamic view or Visual layout: Available to all users
- Personal Intelligence: Available to all paid subscribers
The beta version of the Google app (version 17.2) also reveals upcoming "Labs" features for the Gemini Live service on Android, including:
- Live Thinking Mode: A version of Gemini Live that spends more time reasoning before responding in order to give more detailed answers, potentially using Gemini's Thinking or Pro models.
- Live Experimental Features: Expected to include multimodal memory, improved noise handling, the ability to respond when it detects something visually, and personalized results from integrated Google apps. These capabilities are linked to Gemini 3 Flash and Pro.
- UI Control: Enables the Gemini agent to control the phone to complete specific tasks.
- Deep Research: Allows users to delegate complex research tasks to Gemini.
Screen Automation and Android Integration
Google is developing a "screen automation" feature for Gemini on Android devices, identified in the Google app 17.4 beta under the codename "bonobo." This functionality aims to allow Gemini to assist with tasks such as placing orders or booking rides within specific applications. Android 16 QPR3 is reportedly laying the groundwork for this integration.
Users retain the ability to interrupt Gemini's automation and assume manual control. Google has advised that users are accountable for actions performed on their behalf and recommends close supervision. On the privacy side, when "Keep Activity" is enabled, screenshots may be reviewed by trained personnel to improve Google services.
Additionally, details have emerged regarding an Android desktop interface integration. The Google app beta (version 17.5) includes strings suggesting users will be able to access Gemini for tasks such as writing, planning, and brainstorming via a Gemini icon in the status bar or a keyboard shortcut.
Developer Frameworks
Google is introducing developer capabilities designed to connect applications with AI agents and assistants:
AppFunctions: An Android 16 platform feature and Jetpack library that allows applications to expose specific functions that agents and AI assistants can access and execute directly on the device.
Use cases include task management, media playback, cross-app workflows, and calendar scheduling. The Samsung Gallery app on the Galaxy S26 and other devices running OneUI 8.5 and higher utilizes AppFunctions.
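The core idea behind AppFunctions, an app declaring named, typed entry points that an agent can invoke directly instead of driving the app's UI, can be illustrated with a minimal plain-Java sketch. The registry, function names, and parameters below are invented for illustration; the actual API is provided by the `androidx.appfunctions` Jetpack library and Android 16 platform services and differs from this model.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Conceptual sketch of the AppFunctions pattern: an app exposes named,
// typed functions that an on-device agent can discover and call directly.
// All names here are illustrative, not the real androidx.appfunctions API.
public class AppFunctionSketch {

    // A minimal registry standing in for the platform's function index.
    static class FunctionRegistry {
        private final Map<String, Function<Map<String, String>, String>> functions =
                new HashMap<>();

        // An app declares a function under a stable name.
        void expose(String name, Function<Map<String, String>, String> fn) {
            functions.put(name, fn);
        }

        // An agent resolves a user request to a function name plus arguments
        // and executes it on-device, bypassing the app's UI entirely.
        String invoke(String name, Map<String, String> args) {
            Function<Map<String, String>, String> fn = functions.get(name);
            if (fn == null) {
                throw new IllegalArgumentException("No app function named " + name);
            }
            return fn.apply(args);
        }
    }

    public static void main(String[] args) {
        FunctionRegistry registry = new FunctionRegistry();

        // A hypothetical task-management app exposes a "createTask" function.
        registry.expose("createTask", params ->
                "Created task '" + params.get("title") + "' due " + params.get("due"));

        // The agent fills in parameters from the user's request and calls it.
        Map<String, String> params = new HashMap<>();
        params.put("title", "Buy groceries");
        params.put("due", "tomorrow");
        System.out.println(registry.invoke("createTask", params));
    }
}
```

The design point this models is the contrast drawn in the text: a declared function gives the agent a structured, reliable contract, whereas UI Automation (below) is the fallback when no such integration exists.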
UI Automation: A framework for scenarios where dedicated app integrations are not yet available, enabling AI agents to execute generic tasks on installed applications. Android 17 is expected to broaden these capabilities.
Legacy Voice and Avatar Changes
The Google app beta (version 17.18) indicates that older "Legacy voices" (named Ursa, Nova, Vega, Pegasus, Orion, Eclipse, Capella, Lyra, Dipper, and Orbit) will no longer be available. The removal date has not been specified.
Google is also developing a feature for integrating 3D representations of users into generative content, now referenced as "Avatar" in recent app versions. The creation process involves using a phone's camera for a head scan, similar to the "Likenesses" feature introduced for Android XR. Users may be able to insert their avatars into Gemini-generated content using prompts such as "@me."
Data and Privacy
Across these features, Google has emphasized that certain data processing occurs entirely on-device in an encrypted environment and is not used for generative AI training or human review. Users can toggle features on or off and select which apps contribute data.
A "Personalize chat when helpful" toggle has been added, enabling users to manage whether Gemini utilizes their connected apps for the current conversation.
Note: This information is based on analysis of decompiled APK code and internal documents, and may not reflect final shipping features.