🦙 Run Ollama large language models in GitHub Actions.
## Usage

```yaml
# .github/workflows/ollama.yml
on: push

jobs:
  ollama:
    runs-on: ubuntu-latest
    steps:
      - name: Run LLM
        uses: ai-action/ollama-action@v1
        id: llm
        with:
          model: llama3.2
          prompt: Explain the basics of machine learning.

      - name: Get response
        env:
          response: ${{ steps.llm.outputs.response }}
        run: echo "$response"
```
Run a prompt against a model:

```yaml
- uses: ai-action/ollama-action@v1
  id: explanation
  with:
    model: tinyllama
    prompt: "What's a large language model?"

- name: Get response
  env:
    response: ${{ steps.explanation.outputs.response }}
  run: echo "$response"
```
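Beyond logging, the response can be rendered on the workflow run's summary page. A minimal sketch, assuming a prior step with `id: explanation` as above, using the built-in `GITHUB_STEP_SUMMARY` file:

```yaml
- name: Publish response to job summary
  env:
    response: ${{ steps.explanation.outputs.response }}
  run: |
    # Append the answer to the run summary (rendered as Markdown)
    echo "## Model response" >> "$GITHUB_STEP_SUMMARY"
    echo "$response" >> "$GITHUB_STEP_SUMMARY"
```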
## Inputs

See [`action.yml`](action.yml).
### `model`

**Required:** The language model to use.

```yaml
- uses: ai-action/ollama-action@v1
  with:
    model: llama3.2
```
### `prompt`

**Required:** The input prompt to generate text from.

```yaml
- uses: ai-action/ollama-action@v1
  with:
    prompt: Tell me a joke.
```

To set a multiline prompt:

```yaml
- uses: ai-action/ollama-action@v1
  with:
    prompt: |
      Tell me
      a joke.
```
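Because `prompt` is an ordinary input, workflow expressions can be interpolated into it. A sketch that asks the model to summarize the pushed commit (the `github.event.head_commit.message` context is only populated on `push` events; the step id `summary` is an arbitrary choice here):

```yaml
- uses: ai-action/ollama-action@v1
  id: summary
  with:
    model: llama3.2
    prompt: |
      Summarize this commit message in one sentence:
      ${{ github.event.head_commit.message }}
```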
### `version`

**Optional:** The Ollama version to use. See available versions.

```yaml
- uses: ai-action/ollama-action@v1
  with:
    version: 0.5.11
```
## Outputs

### `response`

The generated response message.

```yaml
- uses: ai-action/ollama-action@v1
  id: answer
  with:
    model: llama3.2
    prompt: What's 1+1?

- name: Get response
  env:
    response: ${{ steps.answer.outputs.response }}
  run: echo "$response"
```
The environment variable is wrapped in double quotes to preserve newlines.
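To keep the response around after the run finishes, one option is writing it to a file and uploading it with `actions/upload-artifact`. A sketch assuming the `answer` step above (the `response.md` file name and `llm-response` artifact name are arbitrary choices):

```yaml
- name: Save response
  env:
    response: ${{ steps.answer.outputs.response }}
  run: echo "$response" > response.md

- name: Upload response
  uses: actions/upload-artifact@v4
  with:
    name: llm-response
    path: response.md
```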