Segmentation fault (core dumped) appearing randomly #2005

Open
AleefBilal opened this issue Apr 25, 2025 · 2 comments

Comments


AleefBilal commented Apr 25, 2025

Description:
I'm experiencing random assertion failures and segmentation faults when streaming responses from a fine-tuned Llama 3.1 70B GGUF model. The failure is raised by GGML's matrix-multiplication shape check.
Sometimes it prints the GGML assertion shown below, but most of the time it just prints Segmentation fault (core dumped) and my pipeline crashes.

Environment:

  • llama_cpp_python version: 0.3.4
  • GPU: NVIDIA A40
  • Model: Custom fine-tuned Llama3.1 70B GGUF (originally fine-tuned with Unsloth at 4k context, running at 16k n_ctx)
  • OS: Ubuntu
  • Python version: 3.11

Error Log:

llama-cpp-python/llama-cpp-python/vendor/llama.cpp/ggml/src/ggml.c:3513: GGML_ASSERT(a->ne[2] == b->ne[0]) failed
Segmentation fault (core dumped)

Reproduction Steps:

  1. Load the fine-tuned 70B GGUF model with:
    from llama_cpp import Llama
    llm = Llama(
        model_path="llama3.1_70B_finetuned.Q4_K_M.gguf",
        n_ctx=16384,
        n_gpu_layers=-1,
        logits_all=True,
    )
  2. Start streaming chat completion:
    for chunk in llm.create_chat_completion(
        messages=[...],
        stream=True,
        max_tokens=1000
    ):
        print(chunk)
  3. Error occurs randomly during streaming (usually after several successful chunks)

Additional Context:

  • The model was fine-tuned using Unsloth with 4k context length
  • Converted to GGUF using llama.cpp's convert script
  • Works fine for non-streaming inference
  • Error appears more frequently with longer context (>8k tokens)
  • Memory usage appears normal before crash (~80GB GPU mem for 70B Q4_K_M)

Debugging Attempts:

  1. Tried different n_ctx values (4096, 8192, 16384)
  2. Verified model integrity with llama.cpp's main example
  3. Added thread locking around model access (no effect; a sketch of what I tried is below)
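
For reference, the locking from attempt 3 looked roughly like this (a sketch; the lock and function names are just illustrative):

    import threading
    from llama_cpp import Llama

    llm = Llama(
        model_path="llama3.1_70B_finetuned.Q4_K_M.gguf",
        n_ctx=16384,
        n_gpu_layers=-1,
    )
    llm_lock = threading.Lock()

    def stream_completion(messages, max_tokens=1000):
        # The lock is acquired when the first chunk is requested and stays held
        # until the generator is exhausted or closed, so no other thread can
        # start a generation on the same llama.cpp context mid-stream.
        with llm_lock:
            for chunk in llm.create_chat_completion(
                messages=messages, stream=True, max_tokens=max_tokens
            ):
                yield chunk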

System Info:

CUDA 12.2
Python 3.11

Request:
Could you help investigate:

  1. Potential causes for the GGML tensor dimension mismatch
  2. Whether this relates to the context length difference between fine-tuning (4k) and inference (16k) (a quick check is sketched after this list)
  3. Any known issues with streaming large (70B) models
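
For question 2, this is the kind of check I can run on my side (a sketch; it assumes the installed llama-cpp-python exposes n_ctx_train() and n_ctx()):

    from llama_cpp import Llama

    # Load with the context length the model was fine-tuned at (4k) to rule out
    # the 4k-vs-16k mismatch as the trigger; n_ctx_train() reports the training
    # context size stored in the GGUF metadata.
    llm = Llama(
        model_path="llama3.1_70B_finetuned.Q4_K_M.gguf",
        n_ctx=4096,
        n_gpu_layers=-1,
    )
    print("trained context:", llm.n_ctx_train(), "runtime context:", llm.n_ctx())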

shamitv (Contributor) commented Apr 26, 2025

Is it reproducible with the latest version (0.3.8)?

AleefBilal (Author) commented

@shamitv
It's possible, but so far I've only tested with 0.2.9 and 0.3.4.
I mainly want to understand what could be causing this error.
I've been using llama-cpp-python in many projects for a long time, but it only happens in one project where I stream the output and call the model again and again very quickly (my use case is to get output from Llama 70B as fast as possible).
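
If overlapping generations on the shared Llama instance turn out to be the trigger, one way I can rule that out is to funnel every call through a single worker thread that owns the instance, so two generations never run at the same time even when requests arrive back to back (a rough sketch; the queue-based hand-off and names are just illustrative):

    import queue
    import threading
    from llama_cpp import Llama

    llm = Llama(
        model_path="llama3.1_70B_finetuned.Q4_K_M.gguf",
        n_ctx=16384,
        n_gpu_layers=-1,
    )
    requests = queue.Queue()  # items: (messages, per-request output queue)

    def worker():
        # Single owner of the Llama instance: streams are produced one at a
        # time, so generations never overlap on the same llama.cpp context.
        while True:
            messages, out = requests.get()
            for chunk in llm.create_chat_completion(
                messages=messages, stream=True, max_tokens=1000
            ):
                out.put(chunk)
            out.put(None)  # sentinel: this stream is finished
            requests.task_done()

    threading.Thread(target=worker, daemon=True).start()

    def ask(messages):
        # Safe to call from any thread; chunks come back through a private queue.
        out = queue.Queue()
        requests.put((messages, out))
        while (chunk := out.get()) is not None:
            yield chunk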

AleefBilal changed the title from "Random GGML assertion failure and segfault during streaming with fine-tuned 70B model" to "Segmentation fault (core dumped) appearing randomly" on May 9, 2025