The voice agent is currently in AI Preview. It works well with clear, direct commands and gets better with every update. Right now it handles one command at a time. Back-and-forth conversations and more natural understanding are coming as training continues.
Using the voice agent
Speak naturally
Describe your reminder in plain language. For example: “Remind me to call the dentist tomorrow at 2pm.”
What it can do
The voice agent handles more than just creating reminders. It can manage your reminders and categories end to end. During the preview period, the agent works best with simple, direct commands. You don’t need to use exact wording, but being clear about what you want helps it get things right. As training continues, it’ll get smarter at understanding less precise, more conversational requests.

Reminders
- Create: “Remind me to pick up groceries at 5pm today”
- Edit: “Change my dentist reminder to next Thursday at 2pm”
- Delete: “Delete the grocery reminder”
- Complete: “Mark the dentist appointment as done”
- Incomplete: “Unmark the grocery reminder”
- Snooze: “Snooze the vet reminder for 30 minutes”
Categories
- Create: “Create a category called Work”
- Edit: “Rename my Work category to Office”
- Delete: “Delete the Personal category”
Recurring reminders
- “Remind me to take out the trash every Tuesday at 7pm”
- “Set a daily reminder to take my medication at 9am”
How it works
Your voice is processed by Gemini to understand what you want and pull out the right details, like the title, date, and time. Those details are used to create your reminder automatically.

Voice recordings are processed in real time and never stored. Your audio is discarded immediately after processing. See the Privacy Policy for details.
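Conceptually, the pipeline turns a transcript of your command into a few structured fields, which then become the reminder. Here is a minimal, illustrative sketch of that extraction step. The function and field names are hypothetical, and a simple pattern match stands in for the real app's Gemini processing:

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReminderDraft:
    """Structured fields pulled out of a voice command (illustrative only)."""
    title: str
    date: Optional[str]   # e.g. "tomorrow"
    time: Optional[str]   # e.g. "2pm"

def extract_reminder(transcript: str) -> ReminderDraft:
    """Toy stand-in for the Gemini step: pull the title, date,
    and time out of a clear, direct command."""
    m = re.match(
        r"remind me to (?P<title>.+?)"          # what to be reminded about
        r"(?: (?P<date>today|tomorrow))?"        # optional date
        r"(?: at (?P<time>\d{1,2}(?::\d{2})?(?:am|pm)))?$",  # optional time
        transcript.strip().rstrip("."),
        re.IGNORECASE,
    )
    if not m:
        raise ValueError("command not understood")
    return ReminderDraft(m["title"], m["date"], m["time"])
```

For example, "Remind me to call the dentist tomorrow at 2pm" would yield the title "call the dentist", the date "tomorrow", and the time "2pm"; the real agent handles far looser phrasing than this toy pattern.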
Water Drops
Every voice command uses one Water Drop. Think of them as fuel for the voice agent. They’re included with your subscription and reset regularly. If you run through them faster than expected, you can grab more anytime from the app.

Where it’s headed
The voice agent is just getting started. The goal is to make it feel like an actual assistant: you say whatever’s on your mind and it figures out the rest, no careful prompting required. Right now it handles clear, direct commands reliably. With continued training, it’ll get progressively smarter at understanding vague requests, messy phrasing, and the way people actually talk when they’re not thinking about how to word things for a computer. Here’s what’s on the roadmap:

- Smarter understanding: Less need to be precise. Say “push the groceries thing to later” and it’ll know what you mean
- Multi-step commands: Handle several actions in one go, like “Remind me to call the vet tomorrow at 3pm and also set a weekly grocery run for Saturdays”
- Back-and-forth conversations: Ask follow-up questions, make changes, and refine your reminders through natural dialogue
- Context awareness: Better understanding of recurring patterns, relative dates, and your personal habits
