Google Uncovers Hackers Misusing Gemini AI

Google has identified numerous state-sponsored hacking groups attempting to exploit its Gemini AI platform for malicious purposes, including aiding in malware development. However, the tech giant reports that these efforts have not resulted in significant cyber threats, emphasizing that AI remains a tool rather than a game-changer for cybercriminals.


According to Google’s findings, hackers from Iran, North Korea, China, and Russia have leveraged Gemini for tasks such as translating content, refining phishing attacks, and writing computer code. The company traced this activity to over 10 Iranian hacking groups, 20 Chinese government-backed groups, and nine North Korean threat actors.

“Iranian APT (advanced persistent threat) actors were the heaviest users of Gemini, employing it for research on defense organizations, vulnerability analysis, and crafting content for disinformation campaigns,” Google noted in a blog post on Wednesday.

Despite these attempts, Google maintains that Gemini has primarily provided these actors with productivity enhancements rather than direct hacking capabilities. “At present, they primarily use AI for research, troubleshooting code, and creating and localizing content,” the company stated.


Image credit: Jaque Silva | NurPhoto

Google also highlighted the limitations hackers faced when trying to use Gemini for more advanced malicious tasks. While the AI assisted them in understanding complex topics and generating basic code, its security safeguards prevented its misuse for actual cyberattacks. Attempts to exploit Gemini for advanced phishing techniques, coding malicious software, or bypassing Google’s security measures were unsuccessful.

“Some malicious actors unsuccessfully attempted to prompt Gemini for guidance on abusing Google products, such as advanced phishing techniques for Gmail, assistance coding a Chrome infostealer, and methods to bypass Google's account creation verification methods,” the company reported. “These attempts were unsuccessful. Gemini did not produce malware or other content that could plausibly be used in a successful malicious campaign.”

However, Google acknowledged that AI tools like Gemini could accelerate cyber threat actors’ workflows, allowing them to operate more efficiently and at a greater scale. For instance, an Iran-based propaganda operation used Gemini to enhance translations, ensuring its disinformation campaigns reached broader audiences. Meanwhile, North Korean-linked hackers relied on the chatbot to draft cover letters and gather job-seeking advice for LinkedIn—potentially aiding their efforts to secure remote IT positions at U.S. companies, a growing concern among federal investigators.

“The group also used Gemini for information about overseas employee exchanges. Many of the topics would be common for anyone researching and applying for jobs,” Google noted.

Google’s findings echo similar discoveries by OpenAI. A year ago, OpenAI detected multiple state-sponsored groups attempting to misuse ChatGPT. However, its investigation revealed that these hackers were primarily using the chatbot for minor productivity boosts rather than executing advanced cyberattacks.

In response to these threats, Google has reinforced its AI security protocols. The company continually tests Gemini’s defenses to prevent misuse and collaborates with law enforcement when necessary. “We investigate abuse of our products, services, users, and platforms, including malicious cyber activities by government-backed threat actors, and work with law enforcement when appropriate,” Google stated.

Additionally, Google actively works to disrupt these cyber threats by removing suspected malicious actors from its platforms. This proactive stance underscores the company’s commitment to keeping its AI tools secure while acknowledging the evolving risks posed by state-sponsored hackers.
