Welcome to prompt-injection-lab, a project focused on prompt injection lab.
The prompt-injection-lab project aims to provide a hands-on learning experience for prompt injection techniques. Prompt injection is a security vulnerability that occurs when an application fails to properly sanitize user input before using it in a command prompt or shell. This can lead to various security risks, such as command injection, remote code execution, or privilege escalation.
In this project, we are exploring different techniques to enhance the security of a web application against prompt injection. Initially, the web app is vulnerable to Server-Side Template Injection (SSTI) in the response output.
By default this lab uses Google Gemini. Why? cuz it's free 😂
- Lab 1: Vulnerable LLM App
- Lab 2: Vulnerable LLM App with Rebuff - LLM Prompt Injection Detector
- Lab 3: Vulnerable LLM App with Guardrails AI - Adding guardrails to LLM (OpenAI)
- Lab 4: Vulnerable LLM App with Guardrails AI - Adding guardrails to LLM (Google Gemini)
Please note that for the Rebuff
and Guardrails-AI
labs, some dependencies and official documentation may be outdated. However, I have made every effort to ensure that they still work.
To ensure a smooth experience, please refer to the README.md
and requirements.txt
files for each lab. They contain important information and instructions to help you get started.
If you encounter any issues or have any questions, please don't hesitate to reach out for assistance.
To get started with prompt-injection-lab, please follow the instructions below:
- Check out each lab folders
README.md
- Enjoy!
We welcome contributions from the community to enhance the prompt-injection-lab project. If you have any suggestions, bug reports, or would like to contribute code, please refer to our Contribution Guidelines.
prompt-injection-lab is released under the MIT License.