PROMPT SPACE
$15.00 · developer-tools · Universal

Secret Leak Guard

Locally scan your repository for leaked API keys, tokens, and secrets before committing or publishing code.

skill install https://www.promptspace.in/skills/secret-leak-guard

What it does

Secret Leak Guard provides a high-performance, local-first scanning layer to prevent credentials, API keys, and sensitive tokens from ever leaving your machine. It identifies high-entropy strings and known secret patterns within your codebase before they are pushed to remote repositories.
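The listing does not publish the scanner's implementation, but the two techniques it names can be sketched in a few lines. The following is an illustrative example only, assuming a Shannon-entropy cutoff of 4.0 bits per character and one known pattern (AWS access key IDs); the skill's actual rules, patterns, and thresholds may differ.

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

# One known-secret pattern (AWS access key IDs) plus an entropy threshold.
# Both are assumptions for illustration, not the skill's real rule set.
AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")
ENTROPY_THRESHOLD = 4.0

def find_candidates(line: str) -> list[str]:
    """Flag known-pattern matches, then any long token whose entropy
    suggests it is a random credential rather than ordinary code."""
    hits = [m.group() for m in AWS_KEY.finditer(line)]
    for token in re.findall(r"[A-Za-z0-9+/=_\-]{20,}", line):
        if shannon_entropy(token) >= ENTROPY_THRESHOLD and token not in hits:
            hits.append(token)
    return hits
```

Pattern rules catch well-known key formats even when their entropy is modest, while the entropy pass catches unknown random-looking tokens; real scanners combine both for exactly that reason.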

Why use this skill

Relying on generic LLM prompts for security review is risky: they can miss subtle patterns or mishandle leaked data. This skill automates detection with deterministic precision and returns only masked findings, so the AI agent never sees or logs a full secret. It helps you maintain compliance and stop credential leaks before they ever reach a remote repository.

Supported Environment

  • Works across any local directory or Git repository
  • Supports modern development environments like Cursor, Claude Code, and VS Code
  • Optional JSON output for integration into CI/CD pipelines or custom developer workflows
  • Zero network calls for scanning, ensuring complete data privacy

The Output

You receive a clean, actionable report of potential leaks, categorized by file and line number, with sensitive values safely masked for review.

Use cases

  • Prevent accidental exposure of API keys and credentials in public repos
  • Audit local projects for hardcoded secrets before deployment
  • Generate structured reports of sensitive data findings in JSON format
  • Enforce security compliance during the local development lifecycle

Example

Prompt

Scan my current directory for leaked secrets before I push this commit.

Sample output preview is available after purchase.

Frequently asked questions

How does Secret Leak Guard keep my secrets private?

Secret Leak Guard performs high-entropy scanning and pattern matching locally on your machine, so sensitive credentials are never sent to the LLM or pushed to remote repositories.