diff --git a/docs/assets/litellm-guide-1.png b/docs/imgs/litellm-guide-1.png
similarity index 100%
rename from docs/assets/litellm-guide-1.png
rename to docs/imgs/litellm-guide-1.png
diff --git a/docs/assets/litellm-guide-2.png b/docs/imgs/litellm-guide-2.png
similarity index 100%
rename from docs/assets/litellm-guide-2.png
rename to docs/imgs/litellm-guide-2.png
diff --git a/docs/source/use-litellm-as-backend.mdx b/docs/source/use-litellm-as-backend.mdx
index eb3bfb3b..2d941707 100644
--- a/docs/source/use-litellm-as-backend.mdx
+++ b/docs/source/use-litellm-as-backend.mdx
@@ -4,6 +4,8 @@
 Lighteval allows to use litellm, a backend allowing you to call all LLM APIs
 using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, Groq etc.].
 
+Documentation for available APIs and compatible endpoints can be found [here](https://docs.litellm.ai/docs/).
+
 ## Quick use
 
 ```bash
@@ -35,11 +37,13 @@ model:
     frequency_penalty: 0.0
 ```
 
+## Use the Hugging Face Inference API
+
 With this you can also access HuggingFace Inference servers, let's look at how
 to evaluate DeepSeek-R1-Distill-Qwen-32B.
 
 First, let's look at how to acess the model, we can find this from [the model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B).
 
-![Step 1]("../assets/litellm-guide-1.png")
+![Step 1](/imgs/litellm-guide-1.png)
 
 Great ! Now we can simply copy paste the base_url and our api key to eval our model.
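
The patched page states that litellm wraps every provider behind the OpenAI request format. As a note for reviewers, here is a minimal stdlib-only sketch of the chat-completion payload such a backend exchanges with an endpoint; the `base_url` and `api_key` values are placeholders (copy the real ones from your Inference Endpoint page), and `frequency_penalty` mirrors the YAML config shown in the page.

```python
import json

# Placeholder credentials -- substitute the base_url and API key displayed on
# your Hugging Face Inference Endpoint / model card page.
base_url = "https://<your-endpoint>.endpoints.huggingface.cloud/v1"
api_key = "hf_xxx"  # hypothetical token

# OpenAI-format chat-completion body: the common shape used for Bedrock,
# Hugging Face, VertexAI, TogetherAI, Azure, OpenAI, Groq, etc.
payload = {
    "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    "messages": [{"role": "user", "content": "Say hello."}],
    "temperature": 0.0,
    "frequency_penalty": 0.0,  # same sampling parameter as in the YAML config
}

print(json.dumps(payload, indent=2))
```

In practice litellm builds and sends this body for you; the sketch only shows why one set of sampling parameters in the config can target any of the listed providers.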