Actions: gtygo/llama.cpp

Server

8 workflow runs

cuda : optimize argmax (#10441)
Server #8: Commit a5e4759 pushed by gtygo
master · November 22, 2024 04:16 · 8m 31s

delete unused white space
Server #7: Commit 804ddd7 pushed by gtygo
master · August 10, 2024 09:03 · 9m 32s

Reuse querybatch to reduce frequent memory allocation
Server #6: Commit 88105b7 pushed by gtygo
master · August 9, 2024 17:44 · 1h 7m 4s

retrieval
Server #5: Commit fe6dc61 pushed by gtygo
master · August 9, 2024 17:12 · 33m 38s

llama : add support for lora adapters in T5 model (#8938)
Server #4: Commit 6afd1a9 pushed by gtygo
master · August 9, 2024 17:07 · 13m 17s

make : fix llava obj file race (#8946)
Server #3: Commit 272e3bd pushed by gtygo
master · August 9, 2024 16:11 · 9m 37s

sync : ggml
Server #2: Commit 4305b57 pushed by gtygo
master · August 9, 2024 08:44 · 11m 19s

llama-bench : add support for getting cpu info on Windows (#8824)
Server #1: Commit 506122d pushed by gtygo
master · August 7, 2024 05:48 · 26m 30s