
llama.cpp server-cuda-b5174 (Public, Latest)

Install from the command line
$ docker pull ghcr.io/ggml-org/llama.cpp:server-cuda-b5174
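
The server-cuda image runs llama-server; a minimal sketch of launching it is shown below. The model path, port, and -ngl layer count are example values, and running the CUDA image assumes the NVIDIA Container Toolkit is installed so that --gpus all works:

$ docker run --gpus all -v /path/to/models:/models -p 8080:8080 \
    ghcr.io/ggml-org/llama.cpp:server-cuda-b5174 \
    -m /models/your-model.gguf --host 0.0.0.0 --port 8080 -ngl 99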

Recent tagged image versions

  • Published 4 days ago · Digest sha256:a5c1fdef90559aba62e5fe1a881ef069be5e174a96e62de5bd78e0e399991c11 · 1,347 version downloads
  • Published 4 days ago · Digest sha256:81aa219cd1a01b638fcacd73c5ba2588baa6e76e238f141669bd52907dd9b05c · 342 version downloads
  • Published 4 days ago · Digest sha256:abe8330b48241b3df134bee4e02fcd5133f1e1852fc546172190fbd41fa47782 · 406 version downloads
  • Published 4 days ago · Digest sha256:2c2157e65b09987af0644d08e9f0eab3adbc0d5207277bc3749fc5d9740702f3 · 354 version downloads
  • Published 4 days ago · Digest sha256:a2b96f4838141f9496abe9d82ec8e494c305f61ea04a50d8f3dce9e5fb17358b · 342 version downloads
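
Any of the digests above can also be pulled directly, which pins an exact, immutable image rather than a tag that may be republished. For example, using the first digest from the list:

$ docker pull ghcr.io/ggml-org/llama.cpp@sha256:a5c1fdef90559aba62e5fe1a881ef069be5e174a96e62de5bd78e0e399991c11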


Details

Last published: 4 days ago
Discussions: 2.2K
Issues: 768
Total downloads: 192K