Commit 35173c4

Update Deployment Docs (#170)
* Update docs
1 parent 8ce60e9

File tree

2 files changed: +6 −6 lines


deploy_ai_search_indexes/README.md

Lines changed: 4 additions & 4 deletions
@@ -14,7 +14,7 @@
 **Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**

 3. Adjust `image_processing.py` with any changes to the index / indexer. The `get_skills()` method implements the skills pipeline. Make any adjustments here in the skills needed to enrich the data source.
-4. Run `deploy.py` with the following args:
+4. Run `uv run deploy.py` with the following args:
    - `index_type image_processing`. This selects the `ImageProcessingAISearch` sub class.
    - `enable_page_wise_chunking True`. This determines whether page-wise chunking is applied in ADI, or whether the inbuilt TextSplit skill is used. This suits documents that are inherently page-wise, e.g. pptx files.
    - `rebuild`. Whether to delete and rebuild the index.
@@ -34,7 +34,7 @@
 **Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**

 3. Adjust `text_2_sql_schema_store.py` with any changes to the index / indexer. The `get_skills()` method implements the skills pipeline. Make any adjustments here in the skills needed to enrich the data source.
-4. Run `deploy.py` with the following args:
+4. Run `uv run deploy.py` with the following args:

    - `index_type text_2_sql_schema_store`. This selects the `Text2SQLSchemaStoreAISearch` sub class.
    - `rebuild`. Whether to delete and rebuild the index.
@@ -53,7 +53,7 @@
 **Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**

 3. Adjust `text_2_sql_column_value_store.py` with any changes to the index / indexer.
-4. Run `deploy.py` with the following args:
+4. Run `uv run deploy.py` with the following args:

    - `index_type text_2_sql_column_value_store`. This selects the `Text2SQLColumnValueStoreAISearch` sub class.
    - `rebuild`. Whether to delete and rebuild the index.
@@ -71,7 +71,7 @@
 **Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**

 3. Adjust `text_2_sql_query_cache.py` with any changes to the index. **There is an optional provided indexer or skillset for this cache. You may instead want the application code to write directly to it. See the details in the Text2SQL README for different cache strategies.**
-4. Run `deploy.py` with the following args:
+4. Run `uv run deploy.py` with the following args:

    - `index_type text_2_sql_query_cache`. This selects the `Text2SQLQueryCacheAISearch` sub class.
    - `rebuild`. Whether to delete and rebuild the index.
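The four `deploy.py` invocations above share one argument surface: an `index_type` selecting the sub class, an optional `enable_page_wise_chunking` boolean, and a `rebuild` toggle. As a hedged sketch only, the interface could be modelled with `argparse`; the actual flag spellings, types, and defaults live in `deploy.py` itself and everything below is an assumption for illustration:

```python
import argparse

# Hypothetical model of deploy.py's documented arguments; the real
# parser in the repository may differ in flag names and defaults.
parser = argparse.ArgumentParser(description="Deploy an AI Search index")
parser.add_argument(
    "--index_type",
    choices=[
        "image_processing",
        "text_2_sql_schema_store",
        "text_2_sql_column_value_store",
        "text_2_sql_query_cache",
    ],
    required=True,
    help="Selects which AISearch sub class to deploy",
)
parser.add_argument(
    "--enable_page_wise_chunking",
    type=lambda v: v.lower() == "true",  # accepts "True"/"False" strings
    default=False,
    help="Use page-wise chunking in ADI instead of the inbuilt TextSplit skill",
)
parser.add_argument(
    "--rebuild",
    action="store_true",
    help="Delete and rebuild the index before deploying",
)

# Parse a sample command line mirroring the image_processing example above.
args = parser.parse_args(
    ["--index_type", "image_processing",
     "--enable_page_wise_chunking", "True",
     "--rebuild"]
)
print(args.index_type, args.enable_page_wise_chunking, args.rebuild)
```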

text_2_sql/data_dictionary/README.md

Lines changed: 2 additions & 2 deletions
@@ -232,7 +232,7 @@ To generate a data dictionary, perform the following steps:

 2. Package and install the `text_2_sql_core` library. See [build](https://docs.astral.sh/uv/concepts/projects/build/) if you want to build as a wheel and install on an agent. Or you can run from within a `uv` environment and skip packaging.
    - Install the optional dependencies if you need a database connector other than TSQL: `uv sync --extra <DATABASE ENGINE>`
-
+
 3. Run `uv run data_dictionary <DATABASE ENGINE>`
    - You can pass the following command line arguments:
      - `--output_directory` or `-o`: Optional directory that the script will write the output files to.
@@ -242,7 +242,7 @@ To generate a data dictionary, perform the following steps:
      - `entities`: A list of entities to extract. Defaults to None.
      - `excluded_entities`: A list of entities to exclude.
      - `excluded_schemas`: A list of schemas to exclude.
-
+
 4. Upload these generated data dictionary files to the relevant containers in your storage account. Wait for them to be automatically indexed with the included skillsets.

 > [!IMPORTANT]
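Step 4 above assumes the `data_dictionary` run has left a set of files in the output directory (`-o`/`--output_directory`) ready to upload. As an illustration only, a run might write one JSON file per extracted entity; the entity fields and file layout below are hypothetical, since the real output schema is defined by `text_2_sql_core`:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical per-entity records; the real fields come from the
# database schema extraction in text_2_sql_core, not shown here.
entities = [
    {"entity": "dbo.Customers", "description": "Customer master data"},
    {"entity": "dbo.Orders", "description": "Sales orders"},
]

# Stand-in for the -o/--output_directory argument.
output_directory = Path(tempfile.mkdtemp())

# Write one JSON file per entity, as a plausible upload layout.
for item in entities:
    out_file = output_directory / f"{item['entity']}.json"
    out_file.write_text(json.dumps(item, indent=2))

written = sorted(p.name for p in output_directory.glob("*.json"))
print(written)
```

Files shaped like this would then be uploaded to the storage account containers, where the included skillsets pick them up for indexing.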
