Gemini’s AI saree trend is viral on Instagram – but how can you keep your photos safe?
If you’ve been active on social media recently, chances are you’ve either experimented with Google’s Nano Banana or at least come across the viral vintage saree AI edits filling your feed.
The “Nano Banana” craze comes from an AI photo-editing feature powered by Google’s Gemini 2.5 Flash Image model, which goes by that nickname. It reimagines selfies as toy-like 3D figurines, complete with shiny plastic-like skin, oversized eyes, and exaggerated cartoon features. Riding on its popularity, users also began generating vintage saree portraits — stylised images that depict mostly women in traditional sarees, often set against old-school or film-inspired backdrops.
But like many viral AI fads, the trend has also sparked debates around digital safety and the privacy of personal photos.
How Safe Is the Gemini Nano Banana Tool?
Tech firms such as Google and OpenAI say they provide safeguards for uploaded content, but experts stress that safety also depends on personal practices and the intentions of those accessing the images.
Google’s Nano Banana images carry an invisible digital watermark called SynthID, along with metadata tags, which are designed to identify the content as AI-generated.
“All images created or edited with Gemini 2.5 Flash Image include an invisible SynthID digital watermark to clearly identify them as AI-generated. Build with confidence and provide transparency for your users,” information on aistudio.google.com states.
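Those metadata tags are the only part of the labelling most people can inspect on their own, since the SynthID detector itself is not broadly available (more on that below). As a rough, illustrative sketch only, the snippet below uses the Python Pillow library to dump whatever EXIF and other text-like metadata an image file carries; the file name is a placeholder, and finding (or not finding) such tags is not proof either way — it only shows what the exporting tool chose to record.

```python
# Illustrative sketch only: this is NOT Google's SynthID detector (which is not
# publicly available). It simply prints the ordinary metadata attached to an
# image file, where some AI tools record provenance notes.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> None:
    img = Image.open(path)

    # EXIF tags: software, camera model, timestamps, GPS and similar details.
    for tag_id, value in img.getexif().items():
        name = TAGS.get(tag_id, tag_id)
        print(f"EXIF {name}: {value}")

    # Other metadata blocks Pillow parsed (XMP, comments, etc.) land in
    # img.info; print a truncated view of anything text-like.
    for key, value in img.info.items():
        if isinstance(value, (str, bytes)):
            print(f"{key}: {str(value)[:200]}")

inspect_metadata("downloaded_image.jpg")  # placeholder file name
```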
Does Watermarking Really Work?
Although SynthID is invisible to the eye, it can confirm whether AI was involved in creating or editing content when checked with specific detection tools, according to spielcreative.com. This gives platforms and individuals a way to trace an image’s origin.
But the tool to detect the watermark is not publicly available. So while the watermark exists, most everyday users cannot confirm it, Tatler Asia reports.
Critics also highlight weaknesses. Watermarks, they say, can be faked, removed, or ignored. Wired quoted Ben Colman, CEO of Reality Defender, as saying that watermarks’ “real-world applications fail from the onset.”
Experts agree watermarking alone is not enough. “Nobody thinks watermarking alone will be sufficient,” UC Berkeley professor Hany Farid told Wired. He added that combining watermarking with other technologies could make it harder to produce convincing fakes.
How Can You Keep Your Photos Safe?
Be selective: Avoid uploading sensitive or private images.
Strip metadata: Remove location tags and device details before uploading (a short code sketch follows this list).
Check privacy settings: Limit who can see your content online.
Keep originals: Retain copies of your images to spot changes or misuse.
Read terms: Understand if the platform gains rights to your images or uses them for training AI.
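For the “strip metadata” tip above, here is a minimal, illustrative sketch using the Python Pillow library; the file names are placeholders, and dedicated tools such as exiftool can do the same job. It copies only the pixel data into a fresh file, so location tags and device details are left behind.

```python
# Minimal sketch of stripping metadata before uploading a photo.
# File names are placeholders; adjust paths and formats to your own images.
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    img = Image.open(src)
    # Copy only the pixel data into a new image; EXIF, XMP and other metadata
    # blocks (GPS location, device model, timestamps) are not carried over.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst)

strip_metadata("selfie.jpg", "selfie_no_metadata.jpg")
```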