top of page
Search

OpenAI says prompt injections that can trick AI browsers like ChatGPT Atlas may never be fully solved—experts say risks are a feature not a bug

  • lastmansurfing
  • Dec 24, 2025
  • 1 min read



OpenAI has said that some attack methods against AI browsers like ChatGPT Atlas are likely here to stay, raising questions about whether AI agents can ever safely operate across the open web. 


The main issue is a type of attack called “prompt injection,” where hackers hide malicious instructions in websites, documents, or emails that can trick the AI agent into doing something harmful.

 

For example, an attacker could embed hidden commands in a webpage—perhaps in text that is invisible to the human eye but looks legitimate to an AI—that override a user’s instructions and tell an agent to share a user’s emails, or drain someone’s bank account.


Read more | FORTUNE




 
 
  • Twitter
  • Instagram

© 2026 Unmissable OpenAI

bottom of page