Commit 7318007

rename misc to pandas
1 parent 4717e21 commit 7318007

2 files changed (+4 −4 lines)


docs/concepts/dev/inside.md (+2 −2)
@@ -10,15 +10,15 @@ Hopsworks provides a Jupyter notebook development environment for programs writt
 
 Hopsworks provides source code control support using Git (GitHub, GitLab or BitBucket). You can securely check out code into your project and commit and push updates to your source code repository.
 
-### Bundled FTI Pipeline Environments
+### FTI Pipeline Environments
 
 Hopsworks assumes that an ML system consists of three independently developed and operated ML pipelines:
 
 * Feature pipeline: takes as input raw data that it transforms into features (and labels)
 * Training pipeline: takes as input features (and labels) and outputs a trained model
 * Inference pipeline: takes new feature data and a trained model and makes predictions
 
-In order to facilitate the development of these pipelines Hopsworks bundles several python environments containing necessary dependencies. Each environment may then also be customized further by installing additional dependencies from PyPi, Conda, Wheel files, GitHub repos or a custom Dockerfile. Internal compute such as Jobs and Jupyter is run in one of these environments and changes are applied transparently when you install new libraries using our APIs. That is, there is no need to write a Dockerfile, users install libraries directly in one or more of the environments. You can setup custom development and production environments by creating separate projects.
+To facilitate the development of these pipelines, Hopsworks bundles several Python environments containing the necessary dependencies. Each of these environments may then be customized further by cloning it and installing additional dependencies from PyPI, Conda channels, Wheel files, GitHub repos or a custom Dockerfile. Internal compute such as Jobs and Jupyter runs in one of these environments, and changes are applied transparently when you install new libraries using our APIs. That is, there is no need to write a Dockerfile; users install libraries directly in one or more of the environments. You can set up custom development and production environments by creating separate projects or by creating multiple clones of an environment within the same project.
 
 ### Jobs
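The feature/training/inference split described in the hunk above can be sketched in plain Python. This is an illustrative sketch only, using scikit-learn and the Iris dataset rather than Hopsworks APIs; all function names are ours, not part of any Hopsworks environment.

```python
# Minimal FTI sketch: three independently runnable pipeline functions.
# Illustrative only -- uses scikit-learn, not the Hopsworks APIs.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression


def feature_pipeline():
    """Feature pipeline: raw data in, features (and labels) out."""
    raw = load_iris()
    return raw.data, raw.target


def training_pipeline(features, labels):
    """Training pipeline: features (and labels) in, trained model out."""
    model = LogisticRegression(max_iter=1000)
    model.fit(features, labels)
    return model


def inference_pipeline(model, new_features):
    """Inference pipeline: new feature data + trained model in, predictions out."""
    return model.predict(new_features)


features, labels = feature_pipeline()
model = training_pipeline(features, labels)
predictions = inference_pipeline(model, features[:5])
print(list(predictions))
```

In Hopsworks, each of these functions would typically live in its own job or notebook, each running in the environment bundled for that pipeline stage.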

docs/user_guides/projects/python/python_env_overview.md (+2 −2)

@@ -39,15 +39,15 @@ The `MODEL TRAINING` environments can be used in [Jupyter notebooks](../jupyter/
 
 * `tensorflow-training-pipeline` to train TensorFlow models
 * `torch-training-pipeline` to train PyTorch models
-* `misc-training-pipeline` to train XGBoost, Catboost and SkLearn models
+* `pandas-training-pipeline` to train XGBoost, Catboost and SkLearn models
 
 ### Model inference
 
 The `MODEL INFERENCE` environments can be used in a deployment using a custom predictor script.
 
 * `tensorflow-inference-pipeline` to serve TensorFlow models
 * `torch-inference-pipeline` to serve PyTorch models
-* `misc-inference-pipeline` to train XGBoost, Catboost and SkLearn models
+* `pandas-inference-pipeline` to serve XGBoost, Catboost and SkLearn models
 * `minimal-inference-pipeline` to install your own custom framework
 
 ## Next steps
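The hunk above mentions deployments that use a custom predictor script. As a purely hypothetical sketch of what such a script might look like in the `pandas-inference-pipeline` environment: the `Predict` class name, its load-once/predict-per-request shape, and the stand-in model trained inline are all assumptions for illustration; the actual contract is defined by the Hopsworks serving documentation, and in a real deployment the model would be loaded from the model registry.

```python
# Hypothetical predictor-script sketch (NOT the confirmed Hopsworks contract).
# A stand-in scikit-learn model is trained inline so the sketch runs on its own.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier


class Predict:
    def __init__(self):
        # Real deployments would load a model artifact here instead.
        X, y = load_iris(return_X_y=True)
        self.model = DecisionTreeClassifier(random_state=0).fit(X, y)

    def predict(self, inputs):
        """Return one class prediction per input row."""
        return self.model.predict(inputs).tolist()


predictor = Predict()
print(predictor.predict([[5.1, 3.5, 1.4, 0.2]]))
```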
