Prompt Protection in Python

An example CLI tool in Python that demonstrates how to use Pangea's Secure Audit Log and Redact services to capture and filter what users are sending to LLMs:

  • Secure Audit Log — Create an event when a human prompt is received.
  • Redact — Remove sensitive information from prompts.
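The flow those two services implement can be sketched in a few lines. The snippet below is a minimal, self-contained stand-in, not the tool's actual code: `redact_person` mimics the Redact "Person" rule with a hard-coded pattern instead of calling the Pangea SDK, and `build_audit_event` shapes the event the way it appears in the Secure Audit Log example later in this README.

```python
import re


def redact_person(prompt: str) -> str:
    """Stand-in for Pangea Redact's "Person" rule. The real service
    detects arbitrary names; this sketch only matches one."""
    return re.sub(r"John Smith", "<PERSON>", prompt)


def build_audit_event(old: str, new: str) -> dict:
    """Event payload logged to Secure Audit Log: the original prompt
    and its redacted form."""
    return {
        "message": "Received and redacted prompt to LLM.",
        "old": old,
        "new": new,
    }


prompt = "Give me information on John Smith."
redacted = redact_person(prompt)  # "Give me information on <PERSON>."
event = build_audit_event(prompt, redacted)
```

In the real tool the redacted text, not the original prompt, is what gets forwarded to the LLM.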

Prerequisites

  • Python v3
  • A Pangea account with the Secure Audit Log and Redact services enabled, plus an API token for each
  • An OpenAI API key

Setup

git clone https://github.com/pangeacyber/python-prompt-protection.git
cd python-prompt-protection

If using pip:

python -m venv .venv
source .venv/bin/activate
pip install .

Or, if using uv:

uv sync
source .venv/bin/activate

Then the app can be executed with:

python prompt_protection.py "Give me information on John Smith."

Usage

Usage: prompt_protection.py [OPTIONS] PROMPT

Options:
  --model TEXT             OpenAI model.  [default: gpt-4o-mini; required]
  --audit-token TEXT       Pangea Secure Audit Log API token. May also be set
                           via the `PANGEA_AUDIT_TOKEN` environment variable.
                           [required]
  --audit-config-id TEXT   Pangea Secure Audit Log configuration ID.
  --redact-token TEXT      Pangea Redact API token. May also be set via the
                           `PANGEA_REDACT_TOKEN` environment variable.
                           [required]
  --redact-config-id TEXT  Pangea Redact configuration ID.
  --pangea-domain TEXT     Pangea API domain. May also be set via the
                           `PANGEA_DOMAIN` environment variable.  [default:
                           aws.us.pangea.cloud; required]
  --openai-api-key TEXT    OpenAI API key. May also be set via the
                           `OPENAI_API_KEY` environment variable.  [required]
  --help                   Show this message and exit.
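Each credential option can come from either a CLI flag or an environment variable, with the flag taking precedence. A hypothetical helper (not the tool's code, which relies on Click to do this) illustrates the resolution order:

```python
import os


def resolve_option(cli_value, env_var, default=None):
    """Mirror how Click resolves an option: an explicit flag wins,
    then the environment variable, then the default."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_var, default)


os.environ["PANGEA_DOMAIN"] = "aws.us.pangea.cloud"
domain = resolve_option(None, "PANGEA_DOMAIN")  # falls back to the env var
```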

Example

With the "Person" Redact rule enabled, names are redacted. A prompt like the following:

python prompt_protection.py "Give me information on John Smith."

will result in "Give me information on <PERSON>." being sent to the LLM instead. The redaction is also recorded in Secure Audit Log like so:

{
  "envelope": {
    "event": {
      "message": "Received and redacted prompt to LLM.",
      "old": "Give me information on John Smith.",
      "new": "Give me information on <PERSON>."
    }
  }
}
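When auditing redactions after the fact, the `old`/`new` pair nested in the envelope is what you usually want. A small hypothetical accessor (`redaction_diff` is not part of the Pangea SDK) shows how to pull it out of a record shaped like the JSON above:

```python
def redaction_diff(record: dict) -> tuple[str, str]:
    """Hypothetical accessor: extract the (original, redacted) prompt
    pair from a Secure Audit Log record shaped like the example."""
    event = record["envelope"]["event"]
    return event["old"], event["new"]


record = {
    "envelope": {
        "event": {
            "message": "Received and redacted prompt to LLM.",
            "old": "Give me information on John Smith.",
            "new": "Give me information on <PERSON>.",
        }
    }
}
original, redacted = redaction_diff(record)
```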
