
Commit a45ef1f (parent f592649)

Point to v0.7.1 vllm docs

File tree

1 file changed: +3 -3


docs/user_guides/mlops/serving/predictor.md (+3 -3)
@@ -96,7 +96,7 @@ To create your own it is recommended to [clone](../../projects/python/python_env
 !!! note
     Only available for LLM deployments.
 
-You can select a configuration file to be added to the [artifact files](deployment.md#artifact-files). If a predictor script is provided, this configuration file will be available inside the model deployment at the local path stored in the `CONFIG_FILE_PATH` environment variable. If a predictor script is **not** provided, this configuration file will be directly passed to the vLLM server. You can find all configuration parameters supported by the vLLM server in the [vLLM documentation](https://docs.vllm.ai/en/v0.6.4/serving/openai_compatible_server.html).
+You can select a configuration file to be added to the [artifact files](deployment.md#artifact-files). If a predictor script is provided, this configuration file will be available inside the model deployment at the local path stored in the `CONFIG_FILE_PATH` environment variable. If a predictor script is **not** provided, this configuration file will be directly passed to the vLLM server. You can find all configuration parameters supported by the vLLM server in the [vLLM documentation](https://docs.vllm.ai/en/v0.7.1/serving/openai_compatible_server.html).
 
 <p align="center">
 <figure>
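For context, a minimal sketch of what reading this configuration file from a predictor script could look like. The `Predict` class shape and the YAML parsing are assumptions for illustration; only the `CONFIG_FILE_PATH` environment variable comes from the documentation text above.

```python
# Minimal sketch (assumptions noted above): a predictor script that loads
# the server configuration file exposed at CONFIG_FILE_PATH.
import os

import yaml  # assumes a YAML config file; with a predictor script any format is allowed


class Predict(object):  # illustrative class shape, not a confirmed API
    def __init__(self):
        # Hopsworks stores the local path of the selected configuration
        # file in the CONFIG_FILE_PATH environment variable.
        config_path = os.environ["CONFIG_FILE_PATH"]
        with open(config_path, "r") as f:
            self.config = yaml.safe_load(f)

    def predict(self, inputs):
        # Use self.config values to parameterize inference here.
        raise NotImplementedError
```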
@@ -283,7 +283,7 @@ Hopsworks Model Serving supports deploying models with a Flask server for python
 | Flask | ✅ | python-based (scikit-learn, xgboost, pytorch...) |
 | TensorFlow Serving | ✅ | keras, tensorflow |
 | TorchServe | ❌ | pytorch |
-| vLLM | ✅ | vLLM-supported models (see [list](https://docs.vllm.ai/en/v0.6.4/models/supported_models.html)) |
+| vLLM | ✅ | vLLM-supported models (see [list](https://docs.vllm.ai/en/v0.7.1/models/supported_models.html)) |
 
 ## Serving tool
 
@@ -330,7 +330,7 @@ Depending on the model server, a **server configuration file** can be selected t
 The configuration file can be of any format, except in vLLM deployments **without a predictor script** for which a YAML file is ==required==.
 
 !!! note "Passing arguments to vLLM via configuration file"
-    For vLLM deployments **without a predictor script**, the server configuration file is ==required== and it is used to configure the vLLM server. For example, you can use this configuration file to specify the chat template or LoRA modules to be loaded by the vLLM server. See all available parameters in the [official documentation](https://docs.vllm.ai/en/v0.6.4/serving/openai_compatible_server.html#command-line-arguments-for-the-server).
+    For vLLM deployments **without a predictor script**, the server configuration file is ==required== and it is used to configure the vLLM server. For example, you can use this configuration file to specify the chat template or LoRA modules to be loaded by the vLLM server. See all available parameters in the [official documentation](https://docs.vllm.ai/en/v0.7.1/serving/openai_compatible_server.html#command-line-arguments-for-the-server).
 
 ### Environment variables
 
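For context, a hypothetical example of the YAML configuration file this note describes. vLLM reads such a file's keys as if they were command-line arguments of its server; every model name, path, and value below is a placeholder, not something prescribed by this commit.

```yaml
# Hypothetical vLLM server configuration file (all values are placeholders).
# Each key corresponds to a command-line argument of the vLLM server.
model: /mnt/models/my-llm             # placeholder model path
chat-template: ./chat_template.jinja  # custom chat template, as mentioned above
lora-modules:                         # LoRA adapters to load, as mentioned above
  - name=my-adapter,path=/mnt/adapters/my-adapter
max-model-len: 4096
```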