llama3.2 - Error: cannot find tensor lm_head.weight (HF/Candle Rust) #2514
Unanswered
louis030195 asked this question in Q&A
hi, i'm trying to integrate this https://github.com/huggingface/candle/tree/main/candle-examples/examples/llama with llama 3.2 3b in my code, but i'm getting

Error: cannot find tensor lm_head.weight

😦

the example itself works for me. i use --features metal (macbook pro m3 max, 32 gb ram). this is my code: https://github.com/mediar-ai/screenpipe/blob/main/screenpipe-core/src/llama.rs

i just copy-pasted the example (it's 99% similar), but somehow i'm getting this error. any idea why?
Replies: 1 comment

Yes, the new llama models share the lm_head with the embedding matrix. The GitHub version of candle has been modified to handle this, so you should probably update to use it.
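For context, here is a minimal sketch of what "sharing the lm_head with the embedding matrix" means when loading weights through candle_nn::VarBuilder. The helper name build_lm_head and its parameter list are illustrative assumptions, not candle's actual API; the real fix lives in the updated llama example, which reads tie_word_embeddings from the model's config.json.

```rust
use candle_core::Result;
use candle_nn::{Embedding, Linear, VarBuilder};

// Builds the output projection for a Llama-style model. When the checkpoint
// ties the word embeddings (as Llama 3.2 does, via `tie_word_embeddings` in
// config.json), there is no `lm_head.weight` tensor to load, so the token
// embedding matrix is reused as the output projection instead.
fn build_lm_head(
    vb: VarBuilder,
    embed_tokens: &Embedding,
    hidden_size: usize,
    vocab_size: usize,
    tie_word_embeddings: bool,
) -> Result<Linear> {
    if tie_word_embeddings {
        // The (vocab_size, hidden_size) embedding matrix doubles as the
        // lm_head weight; nothing extra is read from the safetensors files.
        Ok(Linear::new(embed_tokens.embeddings().clone(), None))
    } else {
        // Older llama checkpoints ship a dedicated lm_head.weight tensor.
        candle_nn::linear_no_bias(hidden_size, vocab_size, vb.pp("lm_head"))
    }
}
```

In practice the simplest fix is the one suggested above: point your candle dependencies at the git repository (or a release that includes this change) so the llama model code handles the tied embeddings itself instead of expecting a separate lm_head.weight tensor.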