Llama-v2-7B-Chat: State-of-the-art large language model useful on a variety of language understanding and generation tasks
Llama 2 is a family of LLMs. The "Chat" suffix indicates that the model is optimized for chatbot-like dialogue. The model is quantized to 4-bit weights and 16-bit activations, making it suitable for on-device deployment. For the prompt and output lengths specified below, the time to first token is Llama-PromptProcessor-Quantized's latency, and the average time per additional token is Llama-TokenGenerator-KVCache-Quantized's latency.
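These two latencies combine into an end-to-end estimate: the prompt processor runs once, then the KV-cache token generator runs once per additional output token. A minimal sketch of that arithmetic (the function name and the latency figures are illustrative placeholders, not measured values for this model):

```python
def estimate_generation_time_ms(ttft_ms: float, per_token_ms: float,
                                num_output_tokens: int) -> float:
    """Estimate end-to-end generation latency in milliseconds.

    ttft_ms: prompt-processor latency (time to first token).
    per_token_ms: token-generator latency per additional token.
    """
    if num_output_tokens < 1:
        return 0.0
    # First token comes from the prompt processor; each remaining
    # token costs one token-generator pass against the KV cache.
    return ttft_ms + (num_output_tokens - 1) * per_token_ms


# Hypothetical figures: 1000 ms to first token, 50 ms per token, 128 tokens out.
print(estimate_generation_time_ms(1000.0, 50.0, 128))  # → 7350.0
```

Substitute the per-device latencies reported in the performance tables to estimate total response time for a given output length.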
This is based on the implementation of Llama-v2-7B-Chat found here. This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found here.
Sign up for early access to run these models on a hosted Qualcomm® device.
- The license for the original implementation of Llama-v2-7B-Chat can be found here.
- The license for the compiled assets for on-device deployment can be found here.
- Join our AI Hub Slack community to collaborate, post questions, and learn more about on-device AI.
- For questions or feedback, please reach out to us.