Commit c0ee3b9

ensure valid link to docs with latest/ in url

1 parent d1b8728 · commit c0ee3b9

File tree

4 files changed, +9 −9 lines changed


Diff for: README.md

+5 −5

@@ -21,11 +21,11 @@ Made with ❤👷️ by the team at [.txt](https://dottxt.co).
 pip install outlines
 ```
 
-First time here? Go to our [setup guide](https://dottxt-ai.github.io/outlines/welcome)
+First time here? Go to our [setup guide](https://dottxt-ai.github.io/outlines/latest/welcome/)
 
 ## Features
 
-- [x] 🤖 [Multiple model integrations](https://dottxt-ai.github.io/outlines/installation): OpenAI, transformers, llama.cpp, exllama2, mamba
+- [x] 🤖 [Multiple model integrations](https://dottxt-ai.github.io/outlines/latest/installation): OpenAI, transformers, llama.cpp, exllama2, mamba
 - [x] 🖍️ Simple and powerful prompting primitives based on the [Jinja templating engine](https://jinja.palletsprojects.com/)
 - [x] 🚄 [Multiple choices](#multiple-choices), [type constraints](#type-constraint) and dynamic stopping
 - [x] ⚡ Fast [regex-structured generation](#efficient-regex-structured-generation)
@@ -35,7 +35,7 @@ First time here? Go to our [setup guide](https://dottxt-ai.github.io/outlines/we
 - [x] 💾 Caching of generations
 - [x] 🗂️ Batch inference
 - [x] 🎲 Sample with the greedy, multinomial and beam search algorithms (and more to come!)
-- [x] 🚀 [Serve with vLLM](https://dottxt-ai.github.io/outlines/reference/serve/vllm), with official Docker image, [`outlinesdev/outlines`](https://hub.docker.com/r/outlinesdev/outlines)!
+- [x] 🚀 [Serve with vLLM](https://dottxt-ai.github.io/outlines/latest/reference/serve/vllm), with official Docker image, [`outlinesdev/outlines`](https://hub.docker.com/r/outlinesdev/outlines)!
 
 
 Outlines has new releases and features coming every week. Make sure to ⭐ star and 👀 watch this repository, follow [@dottxtai][dottxt-twitter] to stay up to date!
@@ -338,7 +338,7 @@ answer = outlines.generate.text(model)(prompt, max_tokens=100)
 ## Join us
 
 - 💡 **Have an idea?** Come chat with us on [Discord][discord]
-- 🔨 **Want to contribute?** Consult our [contribution guide](https://dottxt-ai.github.io/outlines/community/contribute/).
+- 🔨 **Want to contribute?** Consult our [contribution guide](https://dottxt-ai.github.io/outlines/latest/community/contribute/).
 - 🐞 **Found a bug?** Open an [issue](https://github.com/dottxt-ai/outlines/issues)
 
 
@@ -353,7 +353,7 @@ answer = outlines.generate.text(model)(prompt, max_tokens=100)
 }
 ```
 
-[documentation]: https://dottxt-ai.github.io/outlines/welcome/
+[documentation]: https://dottxt-ai.github.io/outlines/latest/welcome/
 [documentation-badge]: https://img.shields.io/readthedocs/outlines
 [contributors]: https://github.com/dottxt-ai/outlines/graphs/contributors
 [contributors-badge]: https://img.shields.io/github/contributors/dottxt-ai/outlines?style=flat-square&logo=github&logoColor=white&color=ECEFF4
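Every hunk in this commit applies the same mechanical rewrite: insert `latest/` between the Outlines docs root and the page path, leaving unrelated URLs (Jinja, Docker Hub, GitHub) alone. A minimal sketch of that rewrite as a helper — `add_latest` is a hypothetical illustration, not code from the commit:

```python
DOCS_ROOT = "https://dottxt-ai.github.io/outlines/"

def add_latest(url: str) -> str:
    """Insert 'latest/' after the Outlines docs root.

    Leaves non-docs URLs, and URLs that already point at latest/,
    untouched.
    """
    if url.startswith(DOCS_ROOT) and not url.startswith(DOCS_ROOT + "latest/"):
        return DOCS_ROOT + "latest/" + url[len(DOCS_ROOT):]
    return url
```

Note that one README hunk also normalizes a trailing slash (`welcome` → `latest/welcome/`); that is a separate cleanup from the `latest/` insertion sketched here.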

Diff for: docs/cookbook/simtom.md

+2 −2

@@ -17,9 +17,9 @@ SimToM calls an LLM with two consecutive prompts:
 
 To implement SimToM with Outlines, we will need to:
 
-1. Write the prompts with [prompt functions](https://dottxt-ai.github.io/outlines/reference/prompting/).
+1. Write the prompts with [prompt functions](https://dottxt-ai.github.io/outlines/latest/reference/prompting/).
 2. Define the JSON object each prompt will return using Pydantic.
-3. Generate responses with a Mistral model using the [transformers integration](https://dottxt-ai.github.io/outlines/reference/models/transformers/).
+3. Generate responses with a Mistral model using the [transformers integration](https://dottxt-ai.github.io/outlines/latest/reference/models/transformers/).
 
 Let's dive into it!
 
Diff for: outlines/fsm/guide.py

+1 −1

@@ -107,7 +107,7 @@ def __init__(self, cfg_string: str, tokenizer):
         """
         warnings.warn(
             "Outlines' public *community-contributed* CFG structured generation is experimental. "
-            "Please review https://dottxt-ai.github.io/outlines/reference/cfg#disclaimer"
+            "Please review https://dottxt-ai.github.io/outlines/latest/reference/generation/cfg#disclaimer"
         )
 
         self.cfg_string = cfg_string
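Because this URL lives inside a `warnings.warn` message rather than in rendered docs, one way to check it is to capture the warning and inspect its text. A hedged sketch — `make_cfg_guide` below is an illustrative stand-in for the real constructor, not the class from `outlines/fsm/guide.py`:

```python
import warnings

def make_cfg_guide(cfg_string: str) -> str:
    # Mirror the pattern in outlines/fsm/guide.py: warn that the
    # community-contributed CFG guide is experimental, citing the
    # versioned (latest/) docs URL introduced by this commit.
    warnings.warn(
        "Outlines' public *community-contributed* CFG structured generation is experimental. "
        "Please review https://dottxt-ai.github.io/outlines/latest/reference/generation/cfg#disclaimer"
    )
    return cfg_string

# Capture the warning instead of letting it print, then inspect it.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    make_cfg_guide("start: WORD")
```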

Diff for: outlines/models/exllamav2.py

+1 −1

@@ -302,7 +302,7 @@ def exl2(
         raise ImportError(
             "The `exllamav2`, `transformers` and `torch` libraries needs to be installed in order to use `exllamav2` models. "
             "Please run `pip install transformers torch git+https://github.com/lapp0/exllamav2@sampler-logits-processor` "
-            "Documentation: https://dottxt-ai.github.io/outlines/reference/models/exllamav2/"
+            "Documentation: https://dottxt-ai.github.io/outlines/latest/reference/models/exllamav2/"
         )
     config = ExLlamaV2Config(model_path)
     if max_chunk_size is not None:

0 commit comments
