
Commit b88ec9f

docs: change usage code
1 parent 513a6d4 commit b88ec9f

1 file changed: +8 −29 lines

README.md

Lines changed: 8 additions & 29 deletions
@@ -57,41 +57,20 @@ An example of Ruby code that generates sentences with the quantization model is
 ```ruby
 require 'llama_cpp'
 
-model_params = LLaMACpp::ModelParams.new
-model = LLaMACpp::Model.new(model_path: '/home/user/llama.cpp/models/open_llama_7b/ggml-model-q4_0.bin', params: model_params)
+LlamaCpp.ggml_backend_load_all
 
-context_params = LLaMACpp::ContextParams.new
-context_params.seed = 42
-context = LLaMACpp::Context.new(model: model, params: context_params)
+model_params = LlamaCpp::LlamaModelParams.new
+model = LlamaCpp::llama_model_load_from_file('/home/user/llama.cpp/models/open_llama_7b/ggml-model-q4_0.bin', model_params)
 
-puts LLaMACpp.generate(context, 'Hello, World.')
-```
-
-## Examples
-There is a sample program in the [examples](https://github.com/yoshoku/llama_cpp.rb/tree/main/examples) directory that allow interactvie communication like ChatGPT.
-
-```sh
-$ git clone https://github.com/yoshoku/llama_cpp.rb.git
-$ cd examples
-$ bundle install
-$ ruby chat.rb --model /home/user/llama.cpp/models/open_llama_7b/ggml-model-q4_0.bin --seed 2023
-...
-User: Who is the originator of the Ruby programming language?
-Bob: The originator of the Ruby programming language is Mr. Yukihiro Matsumoto.
-User:
-```
-
-![llama_cpp_chat_example](https://github.com/yoshoku/llama_cpp.rb/assets/5562409/374ae3d8-63a6-498f-ae6e-5552b464bdda)
+context_params = LlamaCpp::LlamaContextParams.new
+context = LlamaCpp.llama_init_from_model(model, context_params)
 
-Japanse chat is also possible using the [Vicuna model on Hugging Face](https://huggingface.co/CRD716/ggml-vicuna-1.1-quantized).
+puts LLaMACpp.generate(context, 'Hello, World.')
 
-```sh
-$ wget https://huggingface.co/CRD716/ggml-vicuna-1.1-quantized/resolve/main/ggml-vicuna-7b-1.1-q4_0.bin
-$ ruby chat.rb --model ggml-vicuna-7b-1.1-q4_0.bin --file prompt_jp.txt
+LlamaCpp.llama_free(context)
+LlamaCpp.llama_model_free(model)
 ```
 
-![llama_cpp rb-jpchat](https://github.com/yoshoku/llama_cpp.rb/assets/5562409/526ff18c-2bb2-4b06-8933-f72960024033)
-
 ## Contributing
 
 Bug reports and pull requests are welcome on GitHub at https://github.com/yoshoku/llama_cpp.rb.
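
For reference, the usage snippet in README.md reads as follows after this commit, assembled from the added and unchanged lines of the hunk above. The comments are editorial annotations, and the `LLaMACpp.generate` call is reproduced verbatim from the diff's added line.

```ruby
require 'llama_cpp'

# Load every available ggml backend before working with the model.
LlamaCpp.ggml_backend_load_all

# Load the quantized model from disk using default model parameters.
model_params = LlamaCpp::LlamaModelParams.new
model = LlamaCpp::llama_model_load_from_file('/home/user/llama.cpp/models/open_llama_7b/ggml-model-q4_0.bin', model_params)

# Create an inference context on the model using default context parameters.
context_params = LlamaCpp::LlamaContextParams.new
context = LlamaCpp.llama_init_from_model(model, context_params)

puts LLaMACpp.generate(context, 'Hello, World.')

# Free the context and the model when finished.
LlamaCpp.llama_free(context)
LlamaCpp.llama_model_free(model)
```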
