How I Built an Open-Source AI Safety Tool That Stops Agents From Accessing Your Secrets
AI coding agents can accidentally access sensitive files like .env. Samar creates a shadow workspace so agents can edit code safely without ever seeing your secrets.
The name Samar comes from the Indonesian word meaning vague, obscured, or disguised. In local language usage it evokes something that is purposely hidden or not fully revealed — which is exactly the goal of the tool: to disguise your sensitive files from prying AI agents while still letting them work on your code.
Background: Why AI Security Matters
AI coding assistants like Claude, Gemini, or Devin are rapidly becoming everyday development tools. They streamline tasks, suggest changes, and even fix bugs automatically. But there’s a hidden assumption that these agents “won’t read your sensitive files.” In reality, most agents are goal-based systems: if reading a secret file helps them complete a task, they will try to access it unless it’s technically blocked. This means environment variables, API keys, and private config files could be at risk—despite prompts telling the model not to read them.
This isn’t a theoretical risk—it’s a real gap in how AI agents interact with your filesystem.
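To make "technically blocked" concrete, here is a minimal sketch of the shadow-workspace idea, assuming a simple copy-and-mask approach: the project is copied into a temporary directory and anything that looks like a secret is replaced with a harmless stub before an agent ever touches it. The patterns, function names, and placeholder text below are my own illustration, not Samar's actual code.

```python
# Hypothetical sketch of a shadow workspace; not Samar's actual implementation.
import shutil
import tempfile
from pathlib import Path

# Illustrative patterns only; a real tool would make these configurable.
SENSITIVE_PATTERNS = [".env", ".env.*", "*.pem", "*.key", "secrets.*"]

def is_sensitive(path: Path) -> bool:
    """True if the file name matches any of the sensitive patterns."""
    return any(path.match(pattern) for pattern in SENSITIVE_PATTERNS)

def create_shadow_workspace(project_root: str) -> Path:
    """Copy the project into a temp directory, masking sensitive files.

    The agent is then pointed at the returned shadow copy, so it can read
    and edit code freely without ever seeing the real secrets.
    """
    src = Path(project_root).resolve()
    shadow = Path(tempfile.mkdtemp(prefix="shadow-")) / src.name
    shutil.copytree(src, shadow)

    for file in shadow.rglob("*"):
        if file.is_file() and is_sensitive(file):
            # Keep the file so paths and configs still resolve,
            # but strip the contents the agent should never see.
            file.write_text("# redacted: secrets stay outside the shadow workspace\n")
    return shadow

if __name__ == "__main__":
    print(create_shadow_workspace("."))
```

Whether the masking happens by copying, by an overlay filesystem, or by intercepting file reads is an implementation detail. The point is that the block is enforced at the filesystem level instead of relying on a prompt that asks the model nicely.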
Here's the proof of why I built Samar:

The image above shows me explicitly asking Gemini-CLI to read the secret files.
But here's the interesting part. In another project, I asked the agent to fix something. As I mentioned, an agent is a goal-based system: if reading the secret files helps it finish the task, it will read them.
Here's another screenshot I attached as proof, showing the contradiction with the first image:

The agent even mentioned, "but I can if you explicitly ask." Yes, it respects the secrets by default. I'm a technical person. But what if non-technical vibe coders out there just blindly accept every suggestion the agent provides?
