I would like to use a local AI model running on my own server. Can someone point me to how I can connect Ollama or LM Studio to TestSpark?
I found the following Python example. I think I can use LM Studio with TestSpark in the same way. Is my understanding correct?
Is there any documentation for this?
# Example: reuse your existing OpenAI setup
from openai import OpenAI

# Point to the local server
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
    messages=[
        {"role": "system", "content": "Always answer in rhymes."},
        {"role": "user", "content": "Introduce yourself."}
    ],
    temperature=0.7,
)

print(completion.choices[0].message)
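For Ollama, the same OpenAI-compatible approach should work. Below is a minimal sketch, assuming Ollama is running locally on its default port 11434 and a model such as "mistral" has already been pulled; the port, model name, and api_key value are assumptions about a typical local setup, not TestSpark configuration.

# Minimal sketch: point the OpenAI client at Ollama's OpenAI-compatible endpoint.
# Assumes Ollama is running locally on the default port 11434 and a model such as
# "mistral" has already been pulled (e.g. with `ollama pull mistral`).
from openai import OpenAI

# Ollama ignores the API key, but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

completion = client.chat.completions.create(
    model="mistral",
    messages=[
        {"role": "user", "content": "Introduce yourself."}
    ],
    temperature=0.7,
)

print(completion.choices[0].message)

Note that this snippet only verifies that the local server answers OpenAI-style chat requests; whether TestSpark can use it depends on whether its LLM settings allow overriding the API base URL.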
almas changed the title to "[Question] How to use Local LLM tool like Ollama, LM Studio with TestSpark?" on Feb 3, 2025.