Shenxiaohu jira new #1998

Open · wants to merge 30 commits into base: master

Changes from all commits (30 commits)
- bdbc6eb string-function_dev (cool-joker, Jan 17, 2025)
- e08c0a0 string_function_dev_2 (cool-joker, Jan 17, 2025)
- 41591c5 tstring_function_dev_3 (cool-joker, Jan 17, 2025)
- 0cf619e string_function_total (cool-joker, Jan 17, 2025)
- 9678e5b Merge branch 'apache:master' into shenxiaohu-jira_new (cool-joker, Jan 18, 2025)
- 5c328cb string_function_total_2 (cool-joker, Jan 18, 2025)
- 5e0ebf3 Merge branch 'shenxiaohu-jira_new' of https://github.com/cool-joker/d… (cool-joker, Jan 18, 2025)
- 3471dd7 json_function_part2_zh (cool-joker, Jan 18, 2025)
- f418b0a json_function_en_dev (cool-joker, Jan 18, 2025)
- 9f4fc93 json_function_en_3.0 (cool-joker, Jan 18, 2025)
- 02cc51d josn_function_en_2.1 (cool-joker, Jan 18, 2025)
- 99701b1 changes_fix_new_0121 (cool-joker, Jan 21, 2025)
- 9aa1f52 Merge branch 'shenxiaohu-jira_new' of https://github.com/cool-joker/d… (cool-joker, Jan 21, 2025)
- d7a66f6 en_fix (cool-joker, Jan 22, 2025)
- b2604e5 Merge branch 'apache:master' into shenxiaohu-jira_new (cool-joker, Jan 22, 2025)
- a1403b7 zh_fix (cool-joker, Jan 22, 2025)
- 8ed2bc2 Merge branch 'shenxiaohu-jira_new' of https://github.com/cool-joker/d… (cool-joker, Jan 22, 2025)
- 4a8b326 Refresh cluster management_dev&3.0 (cool-joker, Jan 23, 2025)
- ef77238 Merge branch 'apache:master' into shenxiaohu-jira_new (cool-joker, Jan 23, 2025)
- 2aed7b5 zh_fix_parameters (cool-joker, Jan 23, 2025)
- 9ea3af7 Merge branch 'shenxiaohu-jira_new' of https://github.com/cool-joker/d… (cool-joker, Jan 23, 2025)
- 8669621 zh_fix_0123 (cool-joker, Jan 23, 2025)
- 2d5f583 Merge branch 'apache:master' into shenxiaohu-jira_new (cool-joker, Jan 23, 2025)
- 3be6f43 Merge branch 'master' into shenxiaohu-jira_new (cool-joker, Feb 6, 2025)
- a0b248d changes-trim (cool-joker, Feb 6, 2025)
- e85b218 Merge branch 'apache:master' into shenxiaohu-jira_new (cool-joker, Feb 6, 2025)
- e5dfe75 en_part_01 (cool-joker, Feb 8, 2025)
- a23b92a en_part_02 (cool-joker, Feb 8, 2025)
- 56992b9 en_changs (cool-joker, Feb 12, 2025)
- 275ac4a Merge remote-tracking branch 'origin' into shenxiaohu-jira_new (cool-joker, Feb 12, 2025)
@@ -1,6 +1,6 @@
---
{
"title": "ALTER STORAGE POLICY",
"title": "ALTER-STORAGE-POLICY",
Contributor (review comment): Remove the hyphens from the title.

"language": "en"
}
---
@@ -26,29 +26,42 @@ under the License.

## Description

This statement is used to modify an existing cold and hot separation migration strategy. Only root or admin users can modify resources.
This statement is used to modify an existing hot-cold tiered migration policy. Only root or admin users can modify resources.

## Syntax
```sql
ALTER STORAGE POLICY 'policy_name'
PROPERTIES ("key"="value", ...);
ALTER STORAGE POLICY '<policy_name>' PROPERTIES ("<key>"="<value>"[, ... ]);
```

## Example
## Required Parameters
| Parameter Name | Description |
|-------------------|--------------------------------------------------------------|
| `<policy_name>` | The name of the storage policy. This is the unique identifier of the storage policy you want to modify, and an existing policy name must be specified. |

1. Modify the name to coolown_datetime Cold and hot separation data migration time point:
## Optional Parameters

| Parameter Name | Description |
|-------------------|--------------------------------------------------------------|
| `retention_days` | Data retention period. Defines the duration for which the data is kept in storage. Data exceeding this period will be automatically deleted. |
| `redundancy_level`| Redundancy level. Defines the number of data replicas to ensure high availability and fault tolerance. For example, a value of 2 means each data block has two replicas. |
| `storage_type` | Storage type. Specifies the storage medium used, such as SSD, HDD, or hybrid storage. This affects performance and cost. |
| `cooloff_time` | Cool-off time. The time interval between when data is marked for deletion and when it is actually deleted. This helps prevent data loss due to accidental operations. |
| `location_policy` | Geographical location policy. Defines the geographical placement of data, such as cross-region replication for disaster recovery. |

Contributor (review comment): Do not document the statement's parameters as a table; follow the format used in other documents, for example: https://doris.apache.org/docs/sql-manual/sql-statements/job/CREATE-JOB

Contributor (review comment on lines +41 to +49): These keys are all candidates for the `<key>` in `PROPERTIES ("<key>"="<value>"[, ... ])`, so organize this section as follows:

1. In Required Parameters, add `PROPERTIES ("<key>"="<value>"[, ... ])`.
2. Then place this table directly after the required parameters table.


## Examples

1. Modify the cooldown_datetime for hot-cold tiered data migration:
```sql
ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES("cooldown_datetime" = "2023-06-08 00:00:00");
```
2. Modify the name to coolown_countdown of hot and cold separation data migration of ttl
2. Modify the cooldown_ttl countdown for hot-cold tiered data migration:

```sql
ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = "10000");
```
```sql
ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = "1h");
ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = "3d");
```
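
3. Modify one of the optional keys listed under Optional Parameters above, for example `retention_days` (a sketch that assumes these keys are accepted in `PROPERTIES`, as that table describes):

```sql
ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("retention_days" = "30");
```
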
## Keywords

```sql
ALTER, STORAGE, POLICY
ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = "3d");
```

## Best Practice
@@ -26,65 +26,71 @@ specific language governing permissions and limitations
under the License.
-->

## CREATE-STORAGE-VAULT
## Description

### Description
This command is used to create a storage vault. This document describes the syntax for creating a self-managed storage vault in Doris.

This command is used to create a storage vault. The subject of this document describes the syntax for creating Doris self-maintained storage vault.

## Syntax

```sql
CREATE STORAGE VAULT [IF NOT EXISTS] vault
[properties]
CREATE STORAGE VAULT [IF NOT EXISTS] <`vault_name`> [ <`properties`> ]
```

Contributor (suggested change): `CREATE STORAGE VAULT [IF NOT EXISTS] <vault_name> [ <properties> ]` (drop the backticks inside the placeholders)


#### properties

| param | is required | desc |
|:-------|:------------|:-------------------------------------------------------|
| `type` | required | Only two types of vaults are allowed: `S3` and `HDFS`. |

##### S3 Vault

| param | is required | desc |
|:----------------|:------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `s3.endpoint` | required | The endpoint used for object storage. <br/>**Notice**, please don't provide the endpoint with any `http://` or `https://`. And for Azure Blob Storage, the endpoint should be `blob.core.windows.net`. |
| `s3.region` | required | The region of your bucket.(Not required when you'r using GCP or AZURE). |
| `s3.root.path` | required | The path where the data would be stored. |
| `s3.bucket` | required | The bucket of your object storage account. (StorageAccount if you're using Azure). |
| `s3.access_key` | required | The access key of your object storage account. (AccountName if you're using Azure). |
| `s3.secret_key` | required | The secret key of your object storage account. (AccountKey if you're using Azure). |
| `provider` | required | The cloud vendor which provides the object storage service. The supported values include `COS`, `OSS`, `S3`, `OBS`, `BOS`, `AZURE`, `GCP` |
| `use_path_style` | optional | Indicate using `path-style URL`(private environment recommended) or `virtual-hosted-style URL`(public cloud recommended), default `true` (`path-style`) |

##### HDFS Vault

| param | is required | desc |
|:---------------------------------|:------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `fs.defaultFS` | required | Hadoop configuration property that specifies the default file system to use. |
| `path_prefix` | optional | The path prefix to where the data would be stored. It would be the root_path of your Hadoop user if you don't provide any prefix. |
| `hadoop.username` | optional | Hadoop configuration property that specifies the user accessing the file system. It would be the user starting Hadoop process if you don't provide any user. |
| `hadoop.security.authentication` | optional | The authentication way used for hadoop. If you'd like to use kerberos you can provide with `kerboros`. |
| `hadoop.kerberos.principal` | optional | The path to your kerberos principal. |
| `hadoop.kerberos.keytab` | optional | The path to your kerberos keytab. |

### Example

1. create a HDFS storage vault.
## Required Parameters

| Parameter | Description |
|-------|-----------------------|
| `vault_name` | The name of the storage vault. This is the unique identifier for the new storage vault you are creating. |
Contributor (suggested change): write the parameter as `<vault_name>` rather than `vault_name` in the table above.


## Optional Parameters
| Parameter | Description |
|-------------------|--------------------------------------------------------------|
| `[IF NOT EXISTS]` | If the specified storage vault already exists, the creation operation will not be executed, and no error will be thrown. This prevents duplicate creation of the same storage vault. |
| `PROPERTIES` | A set of key-value pairs used to set or update specific properties of the storage vault. Each property consists of a key (<key>) and a value (<value>), separated by an equals sign (=). Multiple key-value pairs are separated by commas (,). |
Contributor (suggested change): name the parameter `<properties>` instead of `PROPERTIES`, and wrap `<key>` and `<value>` in backticks.


### S3 Vault

| Parameter | Required | Description |
|:----------------|:-----|:--------------------------------------------------------------------------------------------------------|
| `s3.endpoint` | Required | The endpoint for object storage. <br/>**Note:** Do not provide an endpoint starting with `http://` or `https://`. For Azure Blob Storage, the endpoint is fixed as `blob.core.windows.net`. |
| `s3.region` | Required | The region of your storage bucket. (Not required if using GCP or AZURE). |
| `s3.root.path` | Required | The path to store data. |
| `s3.bucket` | Required | The bucket of your object storage account. (For Azure, this is the StorageAccount). |
| `s3.access_key` | Required | The access key for your object storage account. (For Azure, this is the AccountName). |
| `s3.secret_key` | Required | The secret key for your object storage account. (For Azure, this is the AccountKey). |
| `provider` | Required | The cloud provider offering the object storage service. Supported values are `COS`,`OSS`,`S3`,`OBS`,`BOS`,`AZURE`,`GCP` |
| `use_path_style` | Optional | Use `path-style URL` (for private deployment environments) or `virtual-hosted-style URL` (recommended for public cloud environments). Default value is `true` (path-style). |
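
For reference, the two URL styles differ only in where the bucket name appears (general S3 URL convention, not specific to Doris): a path-style URL has the form `https://<endpoint>/<bucket>/<key>`, while a virtual-hosted-style URL has the form `https://<bucket>.<endpoint>/<key>`.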

### HDFS Vault

| Parameter | Required | Description |
|:---------------------------------|:-----|:------------------------------------------------------|
| `fs.defaultFS` |Required| Hadoop configuration property specifying the default file system to use. |
| `path_prefix` |Optional| The prefix path for storing data. If not specified, the default path under the user account will be used. |
| `hadoop.username` |Optional| Hadoop configuration property specifying the user to access the file system. If not specified, the user who started the Hadoop process will be used. |
| `hadoop.security.authentication` |Optional| The authentication method for Hadoop. If you want to use Kerberos, you can specify kerberos. |
| `hadoop.kerberos.principal` |Optional| The path to your Kerberos principal. |
| `hadoop.kerberos.keytab` |Optional| The path to your Kerberos keytab. |

## Examples

1. Create an HDFS storage vault.
```sql
CREATE STORAGE VAULT IF NOT EXISTS hdfs_vault_demo
PROPERTIES (
"type" = "hdfs", -- required
"fs.defaultFS" = "hdfs://127.0.0.1:8020", -- required
"path_prefix" = "big/data", -- optional
"path_prefix" = "big/data", -- optional, generally fill in according to the business name
"hadoop.username" = "user" -- optional
"hadoop.security.authentication" = "kerberos" -- optional
"hadoop.kerberos.principal" = "hadoop/127.0.0.1@XXX" -- optional
"hadoop.kerberos.keytab" = "/etc/emr.keytab" -- optional
);
```

2. create a S3 storage vault using OSS.
2. Create an OSS storage vault.
```sql
CREATE STORAGE VAULT IF NOT EXISTS oss_demo_vault
PROPERTIES (
@@ -96,11 +102,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
"s3.root.path" = "oss_demo_vault_prefix", -- required
"s3.bucket" = "xxxxxx", -- required, Your OSS bucket name
"provider" = "OSS", -- required
"use_path_style" = "false" -- optional, OSS suggest setting `false`
"use_path_style" = "false" -- optional, OSS recommended to set false
);
```

3. create a S3 storage vault using COS.
3. Create a COS storage vault.
```sql
CREATE STORAGE VAULT IF NOT EXISTS cos_demo_vault
PROPERTIES (
@@ -112,11 +118,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
"s3.root.path" = "cos_demo_vault_prefix", -- required
"s3.bucket" = "xxxxxx", -- required, Your COS bucket name
"provider" = "COS", -- required
"use_path_style" = "false" -- optional, COS suggest setting `false`
"use_path_style" = "false" -- optional, COS recommended to set false
);
```

4. create a S3 storage vault using OBS.
4. Create an OBS storage vault.
```sql
CREATE STORAGE VAULT IF NOT EXISTS obs_demo_vault
PROPERTIES (
@@ -128,11 +134,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
"s3.root.path" = "obs_demo_vault_prefix", -- required
"s3.bucket" = "xxxxxx", -- required, Your COS bucket name
"provider" = "OBS", -- required
"use_path_style" = "false" -- optional, OBS suggest setting `false`
"use_path_style" = "false" -- optional, OBS recommended to set false
);
```

5. create a S3 storage vault using BOS.
5. Create a BOS storage vault.
```sql
CREATE STORAGE VAULT IF NOT EXISTS bos_demo_vault
PROPERTIES (
@@ -144,11 +150,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
"s3.root.path" = "bos_demo_vault_prefix", -- required
"s3.bucket" = "xxxxxx", -- required, Your BOS bucket name
"provider" = "BOS", -- required
"use_path_style" = "false" -- optional, BOS suggest setting `false`
"use_path_style" = "false" -- optional, BOS recommended to set false
);
```

6. create a S3 storage vault using AWS.
6. Create an AWS S3 storage vault.
```sql
CREATE STORAGE VAULT IF NOT EXISTS s3_demo_vault
PROPERTIES (
@@ -160,10 +166,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
"s3.root.path" = "s3_demo_vault_prefix", -- required
"s3.bucket" = "xxxxxx", -- required, Your s3 bucket name
"provider" = "S3", -- required
"use_path_style" = "false" -- optional, S3 suggest setting `false`
"use_path_style" = "false" -- optional, S3 recommended to set false
);
```
7. create a S3 storage vault using MinIO.

7. Create a MinIO storage vault.
```sql
CREATE STORAGE VAULT IF NOT EXISTS minio_demo_vault
PROPERTIES (
@@ -175,11 +182,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
"s3.root.path" = "minio_demo_vault_prefix", -- required
"s3.bucket" = "xxxxxx", -- required, Your minio bucket name
"provider" = "S3", -- required
"use_path_style" = "true" -- required, minio suggest setting `true`
"use_path_style" = "true" -- required, minio recommended to set false
);
```

8. create a S3 storage vault using AZURE.
8. Create an Azure storage vault.
```sql
CREATE STORAGE VAULT IF NOT EXISTS azure_demo_vault
PROPERTIES (
@@ -194,7 +201,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
);
```

9. create a S3 storage vault using GCP.
9. Create a GCP storage vault.
```sql
CREATE STORAGE VAULT IF NOT EXISTS gcp_demo_vault
PROPERTIES (
@@ -208,7 +215,3 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
"provider" = "GCP" -- required
);
```

### Keywords

CREATE, STORAGE VAULT
@@ -26,28 +26,28 @@ under the License.

## Description

This statement is used to set the default storage vault in Doris. The default storage vault is used to store data for internal or system tables. If the default storage vault is not set, Doris will not function properly. Once the default storage vault is set, it cannot be removed.
This statement is used to set the default storage vault in Doris. The default storage vault is used to store data for internal or system tables. If the default storage vault is not set, Doris will not be able to operate normally. Once a default storage vault is set, it cannot be removed.


## Syntax

```sql
SET vault_name DEFAULT STORAGE VAULT
SET <vault_name> AS DEFAULT STORAGE VAULT
```

> Note:
>
## Required Parameters

| Parameter Name | Description |
|-------------------|--------------------------------------------------------------|
| `<vault_name>` | The name of the storage vault. This is the unique identifier of the vault you want to set as the default storage vault. |

## Usage Notes

1. Only ADMIN users can set the default storage vault.

## Example
## Examples

1. Set the storage vault named 's3_vault' as the default storage vault.
1. Set the storage vault named s3_vault as the default storage vault.

```sql
SET s3_vault AS DEFAULT STORAGE VAULT;
```
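
To confirm which vault is currently the default, you can list the vaults (a sketch assuming the `SHOW STORAGE VAULTS` statement documented elsewhere in this manual, whose output marks the default vault):

```sql
SHOW STORAGE VAULTS;
```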

## Related Commands

## Keywords

SET, DEFAULT, STORAGE, VAULT
@@ -26,36 +26,34 @@ under the License.

## Description

This statement is used to display the hotspot information of file cache.
This statement is used to display hot spot information for file caches.

## Syntax

```sql
SHOW CACHE HOTSPOT '/[compute_group_name/table_name]';
SHOW CACHE HOTSPOT '/[<compute_group_name>/<table_name>]';
```

## Parameters

1. compute_group_name : Name of compute group.
2. table_name : Name of table.
| Parameter Name | Description |
|---------------------------|--------------------------------------------------------------|
| `<compute_group_name>` | The name of the compute group. |
| `<table_name>` | The name of the table. |

## Examples

## Example
1. Display cache hot spot information for the entire system:

1. View the table creation statement of a table

```sql
SHOW CACHE HOTSPOT '/';
```

2. Display cache hot spot information for a specific compute group `my_compute_group`:

```sql
SHOW CACHE HOTSPOT '/my_compute_group/';
```
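
3. The path can also include a table name to narrow the output to a single table (a sketch based on the syntax above; `my_table` is a hypothetical table in `my_compute_group`):

```sql
SHOW CACHE HOTSPOT '/my_compute_group/my_table';
```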

## References

- [WARMUP CACHE](../Database-Administration-Statements/WARM-UP-COMPUTE-GROUP.md)
- [MANAGING FILE CACHE](../../../compute-storage-decoupled/file-cache.md)

## Keywords

SHOW, CACHE, HOTSPOT