This project is currently taken down. My apologies.
forked from PaulPauls/llama3_interpretability_sae
stanley-fork/llama3_interpretability_sae
About
A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
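Since the repository itself is taken down, the description above is all that remains; for context, the sparse-autoencoder component it refers to is conventionally a single overcomplete linear encoder/decoder trained to reconstruct a model's residual-stream activations under an L1 sparsity penalty. The sketch below is a generic, hypothetical illustration of that setup in PyTorch (module name, dimensions, and loss coefficient are illustrative assumptions, not the repo's actual code):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Generic SAE sketch: overcomplete dictionary trained with an
    L1 sparsity penalty on feature activations. Illustrative only;
    not taken from the (now unavailable) repository."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)  # d_hidden > d_model
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        x_hat = self.decoder(f)          # reconstruction of the input
        return x_hat, f

# Toy usage: reconstruct random "activations" with a sparsity term.
sae = SparseAutoencoder(d_model=16, d_hidden=64)
x = torch.randn(8, 16)                   # stand-in for LLM activations
x_hat, f = sae(x)
loss = ((x_hat - x) ** 2).mean() + 1e-3 * f.abs().mean()
```

In practice the input `x` would be residual-stream activations captured from a Llama 3.2 layer rather than random noise, and the L1 coefficient would be tuned to trade off reconstruction error against feature sparsity.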