Issues: b4rtaz/distributed-llama
#190: Inconsistent struct layouts can break cross-architecture usage (opened Mar 22, 2025 by antoine-sac)
#186: In chat mode, the LLM agent seems to keep talking to itself without stopping (opened Mar 10, 2025 by Marvin-BW)
#179: Not able to see scaling performance with NUC (12th Gen) with deepseek_r1_distill_llama_8b_q40 (opened Mar 3, 2025 by deepaks2)
#151: Weird bug where malformed API request causes model to analyze error message (opened Jan 22, 2025 by jkeegan)
#96: [New Feature] Add new route for dllama api for embedding models (opened Jul 1, 2024 by testing0mon21)
#40: Unknown header keys while converting llama 3 70b to distributed format (opened May 8, 2024 by DifferentialityDevelopment)