
Commit facb8b5

convert : fix autoawq gemma (ggml-org#6704)
* fix autoawq quantized gemma model convert error

  Quantizing a Gemma model with AutoAWQ writes an lm_head.weight tensor into model-00001-of-00002.safetensors. convert-hf-to-gguf.py has no mapping for lm_head.weight, so the conversion fails. Skipping this tensor avoids the error.

* change code to full string match and print necessary message

  Change the check to a full-string match and print a short message informing users that lm_head.weight has been skipped.

---------

Co-authored-by: Zheng.Deng <32841220+CUGfred@users.noreply.github.com>
1 parent 532c173 commit facb8b5

1 file changed, +6 -0 lines changed


convert-hf-to-gguf.py (+6)
@@ -2458,6 +2458,12 @@ def write_tensors(self):
         tensor_map = gguf.get_tensor_name_map(self.model_arch, block_count)
 
         for name, data_torch in self.get_tensors():
+            # lm_head is not used in llama.cpp, while autoawq will include this tensor in model
+            # To prevent errors, skip loading lm_head.weight.
+            if name == "lm_head.weight":
+                print(f"Skipping get tensor {name!r} in safetensors so that convert can end normally.")
+                continue
+
             old_dtype = data_torch.dtype
 
             # convert any unsupported data types to float32
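
For context, a minimal sketch (not part of the commit) of how one might confirm that an AWQ-quantized Gemma export actually contains the lm_head.weight tensor the converter cannot map. The shard filename comes from the commit message; everything else in the snippet is a hypothetical illustration, not code from the repository.

# Minimal sketch, assuming a local AWQ-quantized Gemma export on disk.
# The shard name is taken from the commit message; adjust it for other exports.
from safetensors import safe_open

shard = "model-00001-of-00002.safetensors"
with safe_open(shard, framework="pt") as f:
    tensor_names = list(f.keys())

# convert-hf-to-gguf.py has no mapping for lm_head.weight, so its presence
# here is what aborted the conversion before this commit.
print("lm_head.weight present:", "lm_head.weight" in tensor_names)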
