
Actions: fairydreaming/llama.cpp

All workflows

Showing runs from all workflows
247 workflow run results

Nix aarch64 builds
Nix aarch64 builds #27: Scheduled
June 14, 2024 12:46 4m 5s master
llama : add method for getting padding token id
flake8 Lint #32: Commit 442e276 pushed by fairydreaming
June 14, 2024 12:07 19s t5
llama : add method for getting padding token id
Code Coverage #32: Commit 442e276 pushed by fairydreaming
June 14, 2024 12:07 2m 8s t5
convert-hf : correct wrong variable name
flake8 Lint #31: Commit b885765 pushed by fairydreaming
June 14, 2024 11:52 17s t5
convert-hf : correct wrong variable name
Python check requirements.txt #16: Commit b885765 pushed by fairydreaming
June 14, 2024 11:52 3m 20s t5
convert-hf : correct wrong variable name
Code Coverage #31: Commit b885765 pushed by fairydreaming
June 14, 2024 11:52 2m 2s t5
llama : move encoder output from llama_batch to llama_context, add is…
Code Coverage #30: Commit b6694e2 pushed by fairydreaming
June 14, 2024 10:53 2m 3s t5
June 14, 2024 10:53 17s t5
llama, convert-hf : correct tensor dimensions to support T5WithLMHead…
Code Coverage #29: Commit 2bd023d pushed by fairydreaming
June 14, 2024 07:57 2m 3s t5
llama, convert-hf : correct tensor dimensions to support T5WithLMHead…
Python check requirements.txt #15: Commit 2bd023d pushed by fairydreaming
June 14, 2024 07:57 3m 18s t5
June 14, 2024 07:57 19s t5
Benchmark
Benchmark #26: Scheduled
June 14, 2024 02:36 3s master
Close inactive issues
Close inactive issues #23: Scheduled
June 14, 2024 01:44 12s master
Add initial support for T5ForConditionalGeneration.
Code Coverage #28: Commit 3d96a66 pushed by fairydreaming
June 13, 2024 20:08 2m 6s t5
Add initial support for T5ForConditionalGeneration.
Python check requirements.txt #14: Commit 3d96a66 pushed by fairydreaming
June 13, 2024 20:08 3m 18s t5
Add initial support for T5ForConditionalGeneration.
flake8 Lint #28: Commit 3d96a66 pushed by fairydreaming
June 13, 2024 20:08 19s t5
rpc : fix ggml_backend_rpc_supports_buft() (#7918)
flake8 Lint #27: Commit 172c825 pushed by fairydreaming
June 13, 2024 19:53 17s t5
rpc : fix ggml_backend_rpc_supports_buft() (#7918)
Code Coverage #27: Commit 172c825 pushed by fairydreaming
June 13, 2024 19:53 1m 55s t5
rpc : fix ggml_backend_rpc_supports_buft() (#7918)
Benchmark #25: Commit 172c825 pushed by fairydreaming
June 13, 2024 19:21 7s master
rpc : fix ggml_backend_rpc_supports_buft() (#7918)
EditorConfig Checker #3: Commit 172c825 pushed by fairydreaming
June 13, 2024 19:21 24s master
rpc : fix ggml_backend_rpc_supports_buft() (#7918)
flake8 Lint #26: Commit 172c825 pushed by fairydreaming
June 13, 2024 19:21 31s master
rpc : fix ggml_backend_rpc_supports_buft() (#7918)
Nix CI #3: Commit 172c825 pushed by fairydreaming
June 13, 2024 19:21 3m 42s master
rpc : fix ggml_backend_rpc_supports_buft() (#7918)
Python check requirements.txt #13: Commit 172c825 pushed by fairydreaming
June 13, 2024 19:21 3m 47s master
rpc : fix ggml_backend_rpc_supports_buft() (#7918)
CI #3: Commit 172c825 pushed by fairydreaming
June 13, 2024 19:21 56m 45s master
rpc : fix ggml_backend_rpc_supports_buft() (#7918)
Publish Docker image #3: Commit 172c825 pushed by fairydreaming
June 13, 2024 19:21 1h 8m 45s master