How do I configure llama.cpp to use my iGPU instead of the GPU? #12443

Answered by 0cc4m
ZecaStevenson asked this question in Q&A

You can select which Vulkan device(s) llama.cpp uses with the GGML_VK_VISIBLE_DEVICES environment variable, similar to how CUDA_VISIBLE_DEVICES works for CUDA. In your case, GGML_VK_VISIBLE_DEVICES=0 selects your iGPU, GGML_VK_VISIBLE_DEVICES=1 selects your dGPU (which is also the default), and GGML_VK_VISIBLE_DEVICES=0,1 (or 1,0 to swap the order) exposes both.
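
A minimal usage sketch, assuming a Vulkan build of llama.cpp on Linux; the llama-cli binary name and model.gguf path are placeholders for your own build and model:

    # Run on the iGPU (Vulkan device 0), offloading all layers
    GGML_VK_VISIBLE_DEVICES=0 ./llama-cli -m model.gguf -ngl 99

    # Run on the dGPU (Vulkan device 1, the default)
    GGML_VK_VISIBLE_DEVICES=1 ./llama-cli -m model.gguf -ngl 99

    # Expose both devices, listing the dGPU first
    GGML_VK_VISIBLE_DEVICES=1,0 ./llama-cli -m model.gguf -ngl 99

The variable is read at startup, so it must be set in the environment of the llama.cpp process itself rather than changed afterwards.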
