Gmail has a nifty feature that automatically generates a summary of your email using Gemini. (Image Source: Google)
Google has been steadily adding new AI features to the mobile Gmail app. Earlier this year, in June, the company rolled out a feature that uses Gemini to show a summarised version of an email or a long thread. While the functionality is useful, a newly discovered security flaw shows that Gmail’s AI email summaries can be exploited to display harmful instructions and inject links to malicious websites.
According to Marco Figueroa, Mozilla’s GenAI Bug Bounty Programs Manager, a security researcher demonstrated how a prompt injection vulnerability in Google Gemini for Workspace allowed hackers to “hide malicious instructions inside an email” — instructions that were triggered when users clicked the “Summarize this email” option in Gmail.
How does this work?
The attack involves a threat actor crafting an email whose body ends with invisible instructions for Gemini, hidden using HTML and CSS: the font size is set to zero and the text colour to white, so the directive never appears to the recipient reading the message.
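For illustration, the snippet below sketches what such hidden markup might look like. The wrapper tag, the inline styles and the placeholder text are assumptions chosen for demonstration, not the researcher’s actual payload.

```python
# A rough illustration of the hiding trick described above.
visible_text = "<p>Hi team, the updated schedule is below.</p>"

# Invisible to a human reader (zero font size, white-on-white text), but still
# present in the raw HTML that a summarisation model may receive as input.
hidden_directive = (
    '<span style="font-size:0px; color:#ffffff;">'
    "[hidden instruction text aimed at the summariser would sit here]"
    "</span>"
)

email_body = f"<html><body>{visible_text}{hidden_directive}</body></html>"
print(email_body)
```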
Because these emails carry no attachments, they are highly likely to bypass Google’s spam filters and reach the target’s inbox. When the recipient opened the message and asked Gemini to generate a summary, the AI tool was found to obey the hidden instructions.
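As a minimal sketch of how such messages could be screened before they reach a summariser — an assumed heuristic, not Google’s actual defence or the researcher’s method — the code below flags HTML bodies containing inline styles a human reader cannot see.

```python
import re

# Assumed patterns for invisible inline styling: zero font size,
# white text and fully transparent text.
HIDDEN_STYLE_PATTERNS = [
    r"font-size\s*:\s*0",        # text rendered at zero size
    r"color\s*:\s*#?f{3,6}\b",   # white text, e.g. #fff or #ffffff
    r"color\s*:\s*white\b",
    r"opacity\s*:\s*0(\.0+)?\b", # fully transparent text
]

def has_hidden_text(html_body: str) -> bool:
    """Return True if any inline style matches a known hidden-text pattern."""
    for style in re.findall(r'style\s*=\s*"([^"]*)"', html_body, re.IGNORECASE):
        if any(re.search(p, style, re.IGNORECASE) for p in HIDDEN_STYLE_PATTERNS):
            return True
    return False

sample = '<p>Hello</p><span style="font-size:0; color:#ffffff">do X</span>'
print(has_hidden_text(sample))  # True: treat the message with suspicion
```

Heuristics like these only catch the specific styling tricks they enumerate, so they illustrate the filtering problem rather than solve it.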