
[Question] How to use Local LLM tool like Ollama, LM Studio with TestSpark? #438

Open
1 of 5 tasks
almas opened this issue Feb 3, 2025 · 0 comments
almas commented Feb 3, 2025

Involved Module

  • UI
  • EvoSuite
  • LLM
  • Kex
  • Other (please explain)

Description

I would like to use a local AI model running on my own server. Can someone point me to how I can connect Ollama or LM Studio to TestSpark?
I found the following Python example. I think I can use LM Studio with TestSpark this way. Is my understanding correct?
Is there any documentation for this?

# Example: reuse your existing OpenAI setup
from openai import OpenAI

# Point the client at LM Studio's local OpenAI-compatible server.
# LM Studio does not validate the API key, but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
  model="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
  messages=[
    {"role": "system", "content": "Always answer in rhymes."},
    {"role": "user", "content": "Introduce yourself."}
  ],
  temperature=0.7,
)

print(completion.choices[0].message)
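
A similar call should work against Ollama, which also exposes an OpenAI-compatible endpoint (by default at http://localhost:11434/v1). A minimal sketch, assuming Ollama is running locally and a model such as mistral has already been pulled (the model name and port here are assumptions, not TestSpark settings):

# Minimal sketch: the same OpenAI client, pointed at Ollama's local OpenAI-compatible endpoint.
# Assumes `ollama serve` is running and `ollama pull mistral` has been done;
# Ollama ignores the API key, but the client needs a placeholder value.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

completion = client.chat.completions.create(
  model="mistral",
  messages=[
    {"role": "user", "content": "Introduce yourself."}
  ],
  temperature=0.7,
)

print(completion.choices[0].message.content)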
almas changed the title [Question] How to use Local LLM like Ollama, LM Studio with TestSpark? → [Question] How to use Local LLM tool like Ollama, LM Studio with TestSpark? Feb 3, 2025