
Commit f272457 (1 parent: 87060f7)

Update README.md

Add recommendations for VLMs.

File tree

1 file changed: +1 −1 lines changed


README.md

+1-1
@@ -8,7 +8,7 @@ NB: The phone numbers are placeholders for the actual phone numbers.
 You need some VRAM to run this project. You can get VRAM from [here](https://vast.ai/)
 We recommend 400MB-8GB of VRAM for this project. It can run on CPU however, I recommend smaller models for this.

-[Mistral 7B](https://ollama.com/library/mistral), **llama 3.2 3B/1B**, [**Qwen 2.5: 0.5/1.5B**](https://ollama.com/library/qwen2.5:1.5b), [nemotron-mini 4b](https://ollama.com/library/nemotron-mini) and [llama3.1 8B](https://ollama.com/library/llama3.1) are the recommended models for this project.
+[Mistral 7B](https://ollama.com/library/mistral), **llama 3.2 3B/1B**, [**Qwen 2.5: 0.5/1.5B**](https://ollama.com/library/qwen2.5:1.5b), [nemotron-mini 4b](https://ollama.com/library/nemotron-mini) and [llama3.1 8B](https://ollama.com/library/llama3.1) are the recommended models for this project. As for the VLMs (Vision Language Models), in the workflow consider using [llama3.2-vision](https://ollama.com/library/llama3.2-vision) or [Moondream2](https://ollama.com/library/moondream).

 Ensure ollama is installed on your laptop/server and running before running this project. You can install ollama from [here](https://ollama.com)

 Learn more about tool calling: <https://gorilla.cs.berkeley.edu/leaderboard.html>
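Since the README tells you to have Ollama installed and running before starting, a quick way to check the setup is to hit Ollama's local REST API directly. The sketch below builds a request for the `/api/generate` endpoint; the model name (`qwen2.5:1.5b`, one of the small models the diff recommends) and the prompt are illustrative, and actually sending the request assumes the Ollama daemon is listening on its default port 11434.

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint.

    Assumes a local Ollama server on the default port (11434) and a model
    already fetched with `ollama pull <model>`.
    """
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON object instead of a stream
    }).encode("utf-8")
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("qwen2.5:1.5b", "Say hello in one word.")

# Sending the request requires the Ollama daemon to be running:
#   resp = json.load(urllib.request.urlopen(req))
#   print(resp["response"])
```

For the vision models mentioned in the diff, the same endpoint also accepts an `images` field of base64-encoded images alongside the prompt.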
