add doc
NathanHB committed Feb 10, 2025
1 parent e5b63ef commit 9a7ace0
Showing 3 changed files with 5 additions and 1 deletion.
File renamed without changes
File renamed without changes
6 changes: 5 additions & 1 deletion docs/source/use-litellm-as-backend.mdx
Lighteval allows you to use litellm, a backend that lets you call all LLM APIs using the OpenAI format (Bedrock, Hugging Face, VertexAI, TogetherAI, Azure, OpenAI, Groq, etc.).

Documentation for available APIs and compatible endpoints can be found [here](https://docs.litellm.ai/docs/).
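Because every provider is exposed through the OpenAI format, the request shape stays the same and only the provider-prefixed model string changes. A minimal sketch of that idea (the model strings below are illustrative examples, not an exhaustive or guaranteed-current list):

```python
# OpenAI-style chat payload -- the same shape is forwarded to every backend.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# litellm routes requests based on a provider prefix in the model string
# (illustrative values; see the litellm docs for the full provider list):
models = [
    "openai/gpt-4o",
    "groq/llama3-8b-8192",
    "huggingface/meta-llama/Meta-Llama-3-8B-Instruct",
]

# With litellm installed and the matching provider API key exported,
# a call would look like:
#   import litellm
#   response = litellm.completion(model=models[0], messages=messages)
#   print(response.choices[0].message.content)
```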

## Quick use

```yaml
model:
  frequency_penalty: 0.0
```
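For context, the `frequency_penalty` field in the fragment above sits inside a fuller model config file. A sketch of what such a file might look like (the field names and values here are assumptions based on typical lighteval configs, so check the current schema before relying on them):

```yaml
model:
  base_params:
    model_name: "openai/gpt-3.5-turbo"  # provider-prefixed litellm model string (assumed)
  generation:
    temperature: 0.5
    max_new_tokens: 256
    top_p: 0.9
    frequency_penalty: 0.0
```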
## Use the Hugging Face Inference API

With this backend you can also access Hugging Face inference servers. Let's look at how to evaluate DeepSeek-R1-Distill-Qwen-32B.

First, let's see how to access the model; we can find this information on [the model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B).
![Step 1]("../assets/litellm-guide-1.png")
![Step 1]("/imgs/litellm-guide-1.png")
Great! Now we can simply copy and paste the `base_url` and our API key to evaluate our model.
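Concretely, the `base_url` and the key would go into the model config. A hypothetical sketch (the field names and the endpoint URL are assumptions, not a confirmed schema; take the real values from the model card and the lighteval docs):

```yaml
model:
  base_params:
    model_name: "openai/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"  # assumed litellm-style name
    base_url: "https://api-inference.huggingface.co/v1/"            # assumption: copy from the model card
    api_key: "<your-hf-token>"                                      # placeholder
```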
