In this guide you will learn how to export a Torch model and register it in the Model Registry.
## Code

### Step 1: Connect to Hopsworks

=== "Python"
    ```python
    import hopsworks

    project = hopsworks.login()

    # get Hopsworks Model Registry handle
    mr = project.get_model_registry()
    ```

### Step 2: Train

Define your Torch model and run the training loop.

=== "Python"
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Define the model architecture
    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 6, 5)
            ...

        def forward(self, x):
            x = self.pool(F.relu(self.conv1(x)))
            ...
            return x

    # Instantiate the model
    net = Net()

    # Run the training loop
    for epoch in range(n):
        ...
    ```

### Step 3: Export to local path

Export the Torch model to a directory on the local filesystem.

=== "Python"
    ```python
    import os

    # Create a local directory to export the model to
    model_dir = "./model"
    os.makedirs(model_dir, exist_ok=True)

    # Save the model weights into the export directory
    torch.save(net.state_dict(), model_dir + "/model.pt")
    ```

### Step 4: Register model in registry

Use the `ModelRegistry.torch.create_model(..)` function to register a model as a Torch model. Define a name and optionally attach metrics for your model, then invoke the `save()` function, passing the path to the local directory the model was exported to.
You can attach an [Input Example](../input_example.md) and a [Model Schema](../model_schema.md) to your model to document the shape and type of the data the model was trained on.
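
For example, a minimal registration step could look like the following sketch (the model name and the metric value are illustrative placeholders):

=== "Python"
    ```python
    # Metrics computed on the evaluation set (illustrative values)
    metrics = {'accuracy': 0.92}

    # Register the model in the Model Registry as a Torch model
    tch_model = mr.torch.create_model("tch_model", metrics=metrics)

    # Upload the exported model directory to the registry
    tch_model.save(model_dir)
    ```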

docs/user_guides/mlops/serving/deployment.md (+14 -9)

@@ -5,7 +5,7 @@
 In this guide, you will learn how to create a new deployment for a trained model.
 
 !!! warning
-    This guide assumes that a model has already been trained and saved into the Model Registry. To learn how to create a model in the Model Registry, see [Model Registry Guide](../registry/frameworks/tf.md)
+    This guide assumes that a model has already been trained and saved into the Model Registry. To learn how to create a model in the Model Registry, see [Model Registry Guide](../registry/index.md#exporting-a-model)
 
 
 Deployments are used to unify the different components involved in making one or more trained models online and accessible to compute predictions on demand. For each deployment, there are four concepts to consider:
@@ -41,8 +41,8 @@
 !!! notice "Deployment name validation rules"
     A valid deployment name can only contain characters a-z, A-Z and 0-9.
 
-!!! info "Predictor script for Python models and LLMs"
-    For Python models and LLMs, you must select a custom [predictor script](#predictor) that loads and runs the trained model by clicking on `From project` or `Upload new file`, to choose an existing script in the project file system or upload a new script, respectively.
+!!! info "Predictor script for Python models"
+    For Python models, you must select a custom [predictor script](#predictor) that loads and runs the trained model by clicking on `From project` or `Upload new file`, to choose an existing script in the project file system or upload a new script, respectively.
 
 If you prefer, change the name of the deployment, model version or [artifact version](#model-artifact). Then, click on `Create new deployment` to create the deployment for your model.
 
@@ -76,10 +76,10 @@
 […]
 Once you are done with the changes, click on `Create new deployment` at the bottom of the page to create the deployment for your model.
@@ -174,7 +174,12 @@
 
 ## Artifact Files
 
-Artifact files are files involved in the correct startup and running of the model deployment. The most important files are the **predictor** and **transformer scripts**. The former is used to load and run the model for making predictions. The latter is typically used to transform model inputs at inference time.
+Artifact files are files involved in the correct startup and running of the model deployment. The most important files are the **predictor** and **transformer scripts**. The former is used to load and run the model for making predictions. The latter is typically used to apply transformations on the model inputs at inference time, before making predictions. Predictor and transformer scripts run on separate components and, therefore, scale independently of each other.
+
+!!! tip
+    Whenever you provide a predictor script, you can include the transformations of model inputs in the same script, as long as they don't need to be scaled independently from the model inference process.
+
+Additionally, artifact files can also contain a **server configuration file** that helps detach the configuration used within the model deployment from the model server or the implementation of the predictor and transformer scripts. Inside a model deployment, the local path to the configuration file is stored in the `CONFIG_FILE_PATH` environment variable (see [environment variables](../serving/predictor.md#environment-variables)).
 
 Every model deployment runs a specific version of the artifact files, commonly referred to as artifact version. ==One or more model deployments can use the same artifact version== (i.e., same predictor and transformer scripts). Artifact versions are unique for the same model version.
 
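
As a rough illustration of that tip, a predictor script that applies its own input transformations could look like the sketch below. The class layout follows the predictor template shown later in this diff; the model format, file name, and scaling step are assumptions for the example:

```python
import os
import joblib  # assumption: the trained model was serialized with joblib

class Predict(object):
    def __init__(self):
        # Model files are available at the path stored in MODEL_FILES_PATH
        self.model = joblib.load(os.path.join(os.environ["MODEL_FILES_PATH"], "model.pkl"))

    def predict(self, inputs):
        # Inline input transformation (instead of a separate transformer script)
        scaled_inputs = [[value / 255.0 for value in row] for row in inputs]
        return self.model.predict(scaled_inputs).tolist()
```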

@@ -189,7 +194,7 @@
 All files under `/Models` are managed by Hopsworks. Changes to artifact files cannot be reverted and can have an impact on existing model deployments.
 
 !!! tip "Additional files"
-    Currently, the artifact files only include predictor and transformer scripts. Support for additional files (e.g., configuration files or other resources) is coming soon.
+    Currently, the artifact files can only include predictor and transformer scripts, and a configuration file. Support for additional files (e.g., other resources) is coming soon.
@@ -85,7 +86,22 @@
 </figure>
 </p>
 
-### Step 5 (Optional): Enable KServe
+
+### Step 5 (Optional): Select a configuration file
+
+!!! note
+    Only available for LLM deployments.
+
+You can select a configuration file to be added to the [artifact files](deployment.md#artifact-files). If a predictor script is provided, this configuration file will be available inside the model deployment at the local path stored in the `CONFIG_FILE_PATH` environment variable. If a predictor script is **not** provided, this configuration file will be directly passed to the vLLM server. You can find all configuration parameters supported by the vLLM server in the [vLLM documentation](https://docs.vllm.ai/en/v0.6.4/serving/openai_compatible_server.html).
+
+<p align="center">
+  <figure>
+    <img style="max-width: 78%; margin: 0 auto" src="../../../../assets/images/guides/mlops/serving/deployment_simple_form_vllm_conf_file.png" alt="Server configuration file in the simplified deployment form">
+    <figcaption>Select a configuration file in the simplified deployment form</figcaption>
+  </figure>
+</p>
+
+### Step 6 (Optional): Enable KServe
 
 Other configuration such as the serving tool, is part of the advanced options of a deployment. To navigate to the advanced creation form, click on `Advanced options`.
 
@@ -105,7 +121,7 @@
 </figure>
 </p>
 
-### Step 6 (Optional): Other advanced options
+### Step 7 (Optional): Other advanced options
 
 Additionally, you can adjust the default values of the rest of components:
 
@@ -143,50 +159,71 @@
 
         def __init__(self):
             """ Initialization code goes here"""
-            pass
+            # Model files can be found at os.environ["MODEL_FILES_PATH"]
+            # self.model = ... # load your model
 
         def predict(self, inputs):
             """ Serve predictions using the trained model"""
-            pass
+            # Use the model to make predictions
+            # return self.model.predict(inputs)
     ```
-=== "Generate (vLLM deployments only)"
+=== "Predictor (vLLM deployments only)"
     ``` python
-    from typing import Iterable, AsyncIterator, Union
-
-    from vllm import LLM
-
+    import os
+    from vllm import __version__, AsyncEngineArgs, AsyncLLMEngine
+    from typing import Iterable, AsyncIterator, Union, Optional
     from kserve.protocol.rest.openai import (
         CompletionRequest,
         ChatPrompt,
         ChatCompletionRequestMessage,
     )
     from kserve.protocol.rest.openai.types import Completion
+    from kserve.protocol.rest.openai.types.openapi import ChatCompletionTool
 […]
-| vLLM | ✅ | vLLM-supported models (see [list](https://docs.vllm.ai/en/latest/models/supported_models.html)) |
+| vLLM | ✅ | vLLM-supported models (see [list](https://docs.vllm.ai/en/v0.6.4/models/supported_models.html)) |
 
 ## Serving tool
 
@@ -279,7 +316,17 @@
 | | TensorFlow Serving | ❌ |
 | KServe | Fast API | ✅ (only required for artifacts with multiple models) |
 | | TensorFlow Serving | ❌ |
-| | vLLM | ✅ (required) |
+| | vLLM | ✅ (optional) |
+
+### Server configuration file
+
+Depending on the model server, a **server configuration file** can be selected to help detach the configuration used within the model deployment from the model server or the implementation of the predictor and transformer scripts. In other words, by modifying the configuration file of an existing model deployment you can adjust its settings without making changes to the predictor or transformer scripts. Inside a model deployment, the local path to the configuration file is stored in the `CONFIG_FILE_PATH` environment variable (see [environment variables](#environment-variables)).
+
+!!! warning "Configuration file format"
+    The configuration file can be of any format, except in vLLM deployments **without a predictor script**, for which a YAML file is ==required==.
+
+!!! note "Passing arguments to vLLM via configuration file"
+    For vLLM deployments **without a predictor script**, the server configuration file is ==required== and is used to configure the vLLM server. For example, you can use this configuration file to specify the chat template or LoRA modules to be loaded by the vLLM server. See all available parameters in the [official documentation](https://docs.vllm.ai/en/v0.6.4/serving/openai_compatible_server.html#command-line-arguments-for-the-server).
 
 ### Environment variables
 
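
For instance, a predictor or transformer script could read this configuration file roughly as follows — a sketch that assumes a YAML file and the PyYAML package being available in the deployment's environment:

```python
import os
import yaml  # assumption: the configuration file is YAML and PyYAML is installed

# Local path to the server configuration file inside the deployment
config_path = os.environ["CONFIG_FILE_PATH"]

with open(config_path, "r") as f:
    config = yaml.safe_load(f)

# Use the values to tune behavior without changing the script itself
batch_size = config.get("batch_size", 32)  # hypothetical setting
```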
@@ -291,6 +338,7 @@
 […]
+| | vLLM | ✅ | `vllm-inference-pipeline` or `vllm-openai` | any `inference-pipeline` image |
 
 !!! note
     The selected Python environment is used for both predictor and transformer. Support for selecting a different Python environment for the predictor and transformer is coming soon.