**deploy_ai_search_indexes/README.md** (4 additions & 4 deletions)
@@ -14,7 +14,7 @@ The associated scripts in this portion of the repository contains pre-built scri
**Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**

3. Adjust `image_processing.py` with any changes to the index / indexer. The `get_skills()` method implements the skills pipeline. Make any adjustments here to the skills needed to enrich the data source.
- 4. Run `deploy.py` with the following args:
+ 4. Run `uv run deploy.py` with the following args:
   - `index_type image_processing`. This selects the `ImageProcessingAISearch` subclass.
   - `enable_page_wise_chunking True`. This determines whether page-wise chunking is applied in ADI, or whether the inbuilt TextSplit skill is used. This suits documents that are inherently page-wise, e.g. pptx files.
   - `rebuild`. Whether to delete and rebuild the index.
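
Putting those arguments together, a full invocation might look like the sketch below; the `--` flag prefixes and the way the boolean `rebuild` value is passed are assumptions, so verify them against the argument parser in `deploy.py`.

```bash
# Hypothetical invocation: flag syntax is assumed, check deploy.py's argument parser.
# Run from deploy_ai_search_indexes/src/deploy_ai_search_indexes.
uv run deploy.py \
  --index_type image_processing \
  --enable_page_wise_chunking True \
  --rebuild
```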
@@ -34,7 +34,7 @@ The associated scripts in this portion of the repository contains pre-built scri
**Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**

3. Adjust `text_2_sql_schema_store.py` with any changes to the index / indexer. The `get_skills()` method implements the skills pipeline. Make any adjustments here to the skills needed to enrich the data source.
- 4. Run `deploy.py` with the following args:
+ 4. Run `uv run deploy.py` with the following args:
   - `index_type text_2_sql_schema_store`. This selects the `Text2SQLSchemaStoreAISearch` subclass.
   - `rebuild`. Whether to delete and rebuild the index.
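
A corresponding sketch for the schema store index, under the same assumptions about flag syntax:

```bash
# Hypothetical invocation: flag syntax is assumed.
uv run deploy.py --index_type text_2_sql_schema_store --rebuild
```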
@@ -53,7 +53,7 @@ The associated scripts in this portion of the repository contains pre-built scri
**Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**

3. Adjust `text_2_sql_column_value_store.py` with any changes to the index / indexer.
- 4. Run `deploy.py` with the following args:
+ 4. Run `uv run deploy.py` with the following args:
   - `index_type text_2_sql_column_value_store`. This selects the `Text2SQLColumnValueStoreAISearch` subclass.
   - `rebuild`. Whether to delete and rebuild the index.
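
And similarly for the column value store index (again, the flag syntax is an assumption):

```bash
# Hypothetical invocation: flag syntax is assumed.
uv run deploy.py --index_type text_2_sql_column_value_store --rebuild
```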
@@ -71,7 +71,7 @@ The associated scripts in this portion of the repository contains pre-built scri
**Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**

3. Adjust `text_2_sql_query_cache.py` with any changes to the index. **There is an optional provided indexer or skillset for this cache. You may instead want the application code to write directly to it. See the details in the Text2SQL README for different cache strategies.**
- 4. Run `deploy.py` with the following args:
+ 4. Run `uv run deploy.py` with the following args:
   - `index_type text_2_sql_query_cache`. This selects the `Text2SQLQueryCacheAISearch` subclass.
   - `rebuild`. Whether to delete and rebuild the index.
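
For the query cache index, a comparable sketch (flag syntax assumed); whether you deploy the optional indexer or have the application write to the cache directly is a separate choice covered in the Text2SQL README:

```bash
# Hypothetical invocation: flag syntax is assumed.
uv run deploy.py --index_type text_2_sql_query_cache --rebuild
```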
**text_2_sql/data_dictionary/README.md** (2 additions & 2 deletions)
@@ -232,7 +232,7 @@ To generate a data dictionary, perform the following steps:
2. Package and install the `text_2_sql_core` library. See [build](https://docs.astral.sh/uv/concepts/projects/build/) if you want to build as a wheel and install on an agent. Or you can run from within a `uv` environment and skip packaging.
   - Install the optional dependencies if you need a database connector other than TSQL. `uv sync --extra <DATABASE ENGINE>`
3. Run `uv run data_dictionary <DATABASE ENGINE>`
   - You can pass the following command line arguments:
     - `--output_directory` or `-o`: Optional directory that the script will write the output files to.
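
A minimal sketch of steps 2 and 3 run from within the `uv` environment; `<DATABASE ENGINE>` must be replaced with a supported connector extra, and the output path is only an illustrative choice:

```bash
# Sketch only: <DATABASE ENGINE> is a placeholder for a supported connector extra.
uv sync --extra <DATABASE ENGINE>
uv run data_dictionary <DATABASE ENGINE> -o ./generated_data_dictionaries
```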
@@ -242,7 +242,7 @@ To generate a data dictionary, perform the following steps:
     - `entities`: A list of entities to extract. Defaults to None.
     - `excluded_entities`: A list of entities to exclude.
     - `excluded_schemas`: A list of schemas to exclude.
4. Upload these generated data dictionary files to the relevant containers in your storage account. Wait for them to be automatically indexed with the included skillsets.
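
As an illustration of the filtering arguments, a run might look like the sketch below; the `--` flag prefixes, the space-separated list format, and the schema and entity names are assumptions rather than documented syntax, so check them against the script's argument parser.

```bash
# Sketch only: flag prefixes, list format, and the example names are assumptions.
uv run data_dictionary <DATABASE ENGINE> \
  -o ./generated_data_dictionaries \
  --excluded_schemas staging \
  --excluded_entities SalesLT.ErrorLog
```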