
Qualcomm® AI Hub Models

Llama 2 is a family of LLMs. The "Chat" at the end indicates that the model is optimized for chatbot-like dialogue. The model is quantized to 4-bit weights and 16-bit activations, making it suitable for on-device deployment. For the prompt and output lengths specified below, the time to first token is Llama-PromptProcessor-Quantized's latency, and the average time per additional token is Llama-TokenGenerator-KVCache-Quantized's latency.
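
To make the relationship between these two latencies and end-to-end response time concrete, here is a minimal arithmetic sketch. The variable names and numbers below are hypothetical placeholders, not measured values from this repository; substitute the latencies reported for your target device.

```python
# Hypothetical sketch: estimating end-to-end response latency from the two
# reported numbers. All values below are placeholders, not measurements.

prompt_processor_latency_s = 1.5   # time to first token (placeholder)
token_generator_latency_s = 0.05   # average time per additional token (placeholder)
output_tokens = 256                # desired response length in tokens (placeholder)

# Total response time = time to first token
#                       + (remaining tokens) * per-token generation latency
total_latency_s = (
    prompt_processor_latency_s
    + (output_tokens - 1) * token_generator_latency_s
)
print(f"Estimated end-to-end latency: {total_latency_s:.2f} s")
```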

This is based on the implementation of Llama-v2-7B-Chat found here. This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found here.

Sign up for early access to run these models on a hosted Qualcomm® device.

License

  • The license for the original implementation of Llama-v2-7B-Chat can be found here.
  • The license for the compiled assets for on-device deployment can be found here.

References

Community