
Commit 27744e1

add environment override in doc example

Committed Oct 2, 2024 · 1 parent 30b5675 · commit 27744e1

File tree

3 files changed: +15 -4 lines
 

docs/user_guides/projects/jobs/notebook_job.md (+5 -1)

@@ -138,16 +138,20 @@ uploaded_file_path = dataset_api.upload("notebook.ipynb", "Resources")
 
 ### Step 2: Create Jupyter Notebook job
 
-In this snippet we get the `JobsApi` object to get the default job configuration for a `PYTHON` job, set the Jupyter Notebook script to run and create the `Job` object.
+In this snippet we get the `JobsApi` object to get the default job configuration for a `PYTHON` job, set the Jupyter notebook file to run, override the environment it runs in, and finally create the `Job` object.
 
 ```python
 
 jobs_api = project.get_jobs_api()
 
 notebook_job_config = jobs_api.get_configuration("PYTHON")
 
+# Set the application file
 notebook_job_config['appPath'] = uploaded_file_path
 
+# Override the Python job environment
+notebook_job_config['environmentName'] = "python-feature-pipeline"
+
 job = jobs_api.create_job("notebook_job", notebook_job_config)
 
 ```
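
For context, the end-to-end flow that the updated notebook_job.md describes looks roughly like this. A minimal sketch, assuming a reachable Hopsworks cluster and an existing project environment named `python-feature-pipeline`; `hopsworks.login()`, `dataset_api.upload()` and `job.run()` are standard hopsworks client calls.

```python
import hopsworks

# Connect to the cluster (prompts for an API key if none is configured)
project = hopsworks.login()

# Upload the notebook to the project's Resources dataset
dataset_api = project.get_dataset_api()
uploaded_file_path = dataset_api.upload("notebook.ipynb", "Resources")

# Start from the default PYTHON job configuration, then override
# the application file and the environment it runs in
jobs_api = project.get_jobs_api()
notebook_job_config = jobs_api.get_configuration("PYTHON")
notebook_job_config['appPath'] = uploaded_file_path
notebook_job_config['environmentName'] = "python-feature-pipeline"

job = jobs_api.create_job("notebook_job", notebook_job_config)

# Trigger an execution and block until the notebook run finishes
execution = job.run(await_termination=True)
```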

docs/user_guides/projects/jobs/pyspark_job.md (+5 -1)

@@ -175,16 +175,20 @@ uploaded_file_path = dataset_api.upload("script.py", "Resources")
 
 ### Step 2: Create PySpark job
 
-In this snippet we get the `JobsApi` object to get the default job configuration for a `PYSPARK` job, set the python script to run and create the `Job` object.
+In this snippet we get the `JobsApi` object to get the default job configuration for a `PYSPARK` job, set the PySpark script to run, override the environment it runs in, and finally create the `Job` object.
 
 ```python
 
 jobs_api = project.get_jobs_api()
 
 spark_config = jobs_api.get_configuration("PYSPARK")
 
+# Set the application file
 spark_config['appPath'] = uploaded_file_path
 
+# Override the PySpark job environment
+spark_config['environmentName'] = "spark-feature-pipeline"
+
 job = jobs_api.create_job("pyspark_job", spark_config)
 
 ```
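
Creating the job is separate from running it. A short sketch of triggering the PySpark job defined above, assuming `project` comes from `hopsworks.login()` as in the previous example; `get_job()` and `run()` are hopsworks client calls, and the argument string here is purely illustrative.

```python
# Continuing from the snippet above: look up the job and trigger a run
jobs_api = project.get_jobs_api()
job = jobs_api.get_job("pyspark_job")

# args is an illustrative string handed to the script's command line
execution = job.run(args="--start_date 2024-10-01", await_termination=True)

# After await_termination returns, the execution reports its final state
print(execution.success)
```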

docs/user_guides/projects/jobs/python_job.md (+5 -2)

@@ -123,19 +123,22 @@ uploaded_file_path = dataset_api.upload("script.py", "Resources")
 
 ```
 
-
 ### Step 2: Create Python job
 
-In this snippet we get the `JobsApi` object to get the default job configuration for a `PYTHON` job, set the python script to run and create the `Job` object.
+In this snippet we get the `JobsApi` object to get the default job configuration for a `PYTHON` job, set the Python script to run, override the environment it runs in, and finally create the `Job` object.
 
 ```python
 
 jobs_api = project.get_jobs_api()
 
 py_job_config = jobs_api.get_configuration("PYTHON")
 
+# Set the application file
 py_job_config['appPath'] = uploaded_file_path
 
+# Override the Python job environment
+py_job_config['environmentName'] = "python-feature-pipeline"
+
 job = jobs_api.create_job("py_job", py_job_config)
 
 ```
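
Since the snippets index into the object returned by `get_configuration()` like a plain dict, its keys can be inspected before deciding what to override. A small sketch under that assumption, continuing from the upload step in the docs (`project` and `uploaded_file_path` as above).

```python
jobs_api = project.get_jobs_api()
py_job_config = jobs_api.get_configuration("PYTHON")

# List the default configuration keys and values before overriding any
for key, value in sorted(py_job_config.items()):
    print(f"{key}: {value!r}")

# Override only what is needed; the other keys keep their defaults
py_job_config['appPath'] = uploaded_file_path
py_job_config['environmentName'] = "python-feature-pipeline"
```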
