Claude Code's prying AI reads off-limits secret files

Don't you hate it when machines can't follow simple instructions? Anthropic's Claude Code can't take "ignore" for an answer and continues to read passwords and API keys, even when your secrets file is supposed to be blocked.

Software developers often store secrets – passwords, tokens, API keys, and other credentials – in .env files within project directories. When they do, they're supposed to ensure that the .env file doesn't end up in a publicly accessible Git repository. A common way to do this is to add an entry to a .gitignore file, which tells Git to skip that file when the local repo is pushed to a remote server.

Claude Code implements something similar, a .claudeignore file. When asked, "If I make a .env file, how do I keep you from reading it?", Claude responded, "You can add .env to a .claudeignore file in your project root. This works like .gitignore — Claude Code will refuse to read any files matching patterns listed there."

But Claude is incorrect. As described in this Pastebin post, Claude can read the contents of a .env file despite an entry in the .claudeignore file that ought to prevent access.

The Register reproduced this result. We created a directory, created a .env file with sample secrets, added a .claudeignore file containing ".env" and ".env.*", and then started Claude Code (v2.1.12) via the CLI. We asked Claude to read the .env file and it did so – which would not happen if Claude respected .claudeignore entries.

This has potential security implications, particularly for agents – these tool-enabled AI models could be induced to share stored secrets via indirect prompt injection.

What's more, Claude will also ignore the presence of ".env" in a .gitignore file. It does so despite a default /config flag that sets "Respect .gitignore in file picker" to "true." In fact, when asked to read the .env file in a project with a .gitignore entry that includes ".env", Claude dutifully prints the secrets within to the console, with the following warning: "Note: This file contains credentials. Be cautious about committing it to version control — make sure .env is listed in your .gitignore."

Claude's willingness to ignore .claudeignore directives is cited in an open issue in Claude Code's GitHub repository, "[HIGH PRIORITY] Claude exposes secrets/tokens in tool output - no redaction." The individual who opened the issue two days ago notes, "This is a security-critical issue that should be addressed urgently." It hasn't been.

Two posts from November 2025 raise the same concern. Another open issue created two weeks ago also flags Claude's willingness to display secrets, as does yet another bug report from three weeks ago, which says, "Claude should refrain from reading or even being aware of anything in the .claudeignore file, using [the] same standard parsing rules as a .gitignore file."

There are ways to direct Claude to keep away from secrets that appear to work, such as specifying permissions within a settings.json file in a project's .claude directory. When we created that file as described in the documentation, Claude reported an error: "The .env file is blocked by permission settings. This is expected behavior — .env files typically contain secrets (API keys, passwords, database credentials), so they are excluded from tool access as a security measure."
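For readers who want to try that mitigation themselves, here is a minimal sketch of a project-level .claude/settings.json that denies read access to .env files. The deny-rule syntax shown (Read(...) patterns) follows Anthropic's settings documentation at the time of writing and may differ between Claude Code versions:

    {
      "permissions": {
        "deny": [
          "Read(./.env)",
          "Read(./.env.*)"
        ]
      }
    }

In our testing, a file along these lines was enough to make Claude refuse the request with the permission error quoted above rather than print the file's contents.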
But configuring these settings permissions can be tricky – a bug report raising this concern includes a response explaining that Claude's syntax for absolute paths begins with "//" rather than the single "/" that Linux and macOS users might expect. Developers have also opened issues reporting problems with the @ file reference syntax in settings.json. And there are other problems, like permissions.deny not preventing files from being loaded into memory.

Anthropic did not respond to a request for comment.

If settings.json is intended to be the only supported way of denying Claude file access, Anthropic should make it clearer that .claudeignore is not an option. The model's own recommendations should align with best practices instead of leading people astray. ®