The query "create my dream house" used to return something generic. Now Gemini will check your Gmail, Google Photos and Calendar before generating, and try to infer exactly what you mean. This is convenient, and that is exactly why it is worth understanding how it works.
What changed technically
Google added image generation based on Nano Banana to the Personal Intelligence feature, which lets Gemini pull personal context from Gmail, Photos, Calendar, Drive and other services. If your Google Photos library has tagged people or pets, you can write "draw me and my family doing our favorite activity" and Gemini will generate the image without you uploading any reference photos.
The technical advantage Google cites is that Nano Banana reuses the Gemini model's language understanding for more accurate prompt interpretation, unlike standalone image generators, where the language model and the image model are decoupled.
Who has access and when
Image generation is available to Google AI Plus, Pro and Ultra subscribers in the US and will roll out to them over the coming days. Google also plans to bring it to Gemini in the Chrome browser and to more users later. The feature remains unavailable in the EU, UK and Japan due to stricter privacy regulations.
Where Google draws the line — and how clear it is
The key question is not "is it convenient" but "what exactly does Google do with the photos." There is an official answer, but it comes with a caveat.
"The Gemini app does not train the model directly on your private Google Photos library. We train on limited information, including specific queries in Gemini and model responses."
— official Google statement
The technical difference between "training on data" and "using data for inference" is real, but it does not mean the data goes unprocessed. If you connect Google Photos to another Google service, that connection is governed by the other service's policy, not Photos', and Google may train on certain data within that connection.
In parallel, Google is developing on-device image generation with Gemini Nano on Pixel and Android devices, with no data sent to the cloud. This is a different approach: fast, private, serverless. For now, though, it exists only in the pipeline.
What you can control right now
- Connecting Google Photos to Personal Intelligence is opt-in, not automatic.
- The "Sources" button shows exactly which photo Gemini selected as the reference for a generation.
- Google warns that Gemini may pick the wrong context and offers a feedback mechanism.
- You can delete activity in the Gemini Apps Activity section, but this does not undo requests that have already been processed.
The real test of this feature will come not when it draws the "right" dream house, but when it draws something the user didn't expect and it becomes clear what data underlies the result. If Google shows the full chain (which signal, from which source, with what weight), that will be a real argument for trust. If not, the difference between "reads" and "learns" will remain rhetorical.