
spellcheck
jovezhong committed Feb 9, 2025
1 parent 283d531 commit 431fa36
Showing 8 changed files with 78 additions and 30 deletions.
4 changes: 2 additions & 2 deletions docs/enterprise-v2.6.md
@@ -52,10 +52,10 @@ Compared to the [2.5.12](/enterprise-v2.5#2_5_12) release:
* Monitoring and Management:
* Added [system.stream_state_log](/system-stream-state-log) and [system.stream_metric_log](/system-stream-metric-log) system streams for comprehensive resource monitoring.
* Implemented Kafka offset tracking in [system.stream_state_log](/system-stream-state-log), exportable via [timeplus diag](/cli-diag) command.
* A `_tp_sn` column is added to each stream (except external streams or random streams), as the sequence number in the unified streaming and historical storages. This column is used for data replication among the cluster. By default, it is hidden in the query results. You can show it by setting `SETTINGS asterisk_include_tp_sn_column=true`. This setting is required when you use `INSERT..SELECT` SQL to copy data between streams: `INSERT INTO stream2 SELECT * FROM stream1 SETTINGS asterisk_include_tp_sn_column=true`.
* A `_tp_sn` column is added to each stream (except external streams or random streams), as the sequence number in the unified streaming and historical storage. This column is used for data replication among the cluster. By default, it is hidden in the query results. You can show it by setting `SETTINGS asterisk_include_tp_sn_column=true`. This setting is required when you use `INSERT..SELECT` SQL to copy data between streams: `INSERT INTO stream2 SELECT * FROM stream1 SETTINGS asterisk_include_tp_sn_column=true`.
* New Features:
* Support for continuous data writing to remote Timeplus deployments via setting a [Timeplus external stream](/timeplus-external-stream) as the target in a materialized view.
* New [EMIT PERIODIC .. REPEAT](/query-syntax#emit_periodic_repeat) syntax for emiting the last aggregation result even when there is no new event.
* New [EMIT PERIODIC .. REPEAT](/query-syntax#emit_periodic_repeat) syntax for emitting the last aggregation result even when there is no new event.
* Able to create or drop databases via SQL in a cluster. The web console will be enhanced to support different databases in the next release.
* Historical data of a stream can be removed by `TRUNCATE STREAM stream_name`.
* Able to add new columns to a stream via `ALTER STREAM stream_name ADD COLUMN column_name data_type`, on both single-node deployments and multi-node clusters (see the SQL sketch after this list).
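
A minimal SQL sketch of these operations, assuming two existing streams named `stream1` and `stream2`; the `region` column and the `analytics` database are likewise placeholders:

```sql
-- Copy data between streams; the setting makes SELECT * include the hidden _tp_sn column,
-- which is required when copying data between streams
INSERT INTO stream2
SELECT * FROM stream1
SETTINGS asterisk_include_tp_sn_column=true;

-- Remove the historical data of a stream
TRUNCATE STREAM stream1;

-- Add a new column to an existing stream
ALTER STREAM stream1 ADD COLUMN region string;

-- Create and drop a database in the cluster
CREATE DATABASE analytics;
DROP DATABASE analytics;
```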
4 changes: 2 additions & 2 deletions docs/k8s-helm.md
@@ -196,10 +196,10 @@ helm -n $NS upgrade -f values.yaml $RELEASE timeplus/timeplus-enterprise
Due to a [limitation of Kubernetes StatefulSets](https://github.com/kubernetes/kubernetes/issues/68737), you will need to manually update the PV size for timeplusd. Note that this will cause downtime for Timeplus Enterprise.

1. Make sure `global.pvcDeleteOnStsDelete` is not set or is set to `false`. You can double-check this by running `kubectl -n <ns> get sts timeplusd -ojsonpath='{.spec.persistentVolumeClaimRetentionPolicy}'` and confirming that both `whenDeleted` and `whenScaled` are `retain`. This is extremely important; otherwise your PV may be deleted and all the data will be lost.
1. Run `kubectl -n <ns> delete sts/timeplusd` to temporarily delete the statefulset. Wait until all timeplusd pods are terminated. This step is neccesary to workaround the Kubernetes limitation.
1. Run `kubectl -n <ns> delete sts/timeplusd` to temporarily delete the statefulset. Wait until all timeplusd pods are terminated. This step is necessary to work around the Kubernetes limitation.
1. Run `kubectl -n <ns> get pvc` to list all the PVCs and their corresponding PVs. For each PV you want to resize, run `kubectl -n <ns> edit pvc <pvc>` to update `spec.resources.requests.storage`. Note that all timeplusd replicas need to have the same storage size, so make sure all updated PVCs request the same size.
1. Run `kubectl get pv <pv> -o=jsonpath='{.spec.capacity.storage}'` to make sure all corresponding PVs have been updated. It takes a while before Kubernetes updates the capacity field of the PVC, so as long as you can see that the underlying storage size has been updated, you can proceed to the next step.
1. Update the the `timeplusd.storage.stream.size` and/or `timeplusd.storage.stream.history.size` in `values.yaml` that you used to deploy Timeplus Enterprise.
1. Update the `timeplusd.storage.stream.size` and/or `timeplusd.storage.stream.history.size` in `values.yaml` that you used to deploy Timeplus Enterprise.
1. Run the helm upgrade command to upgrade the deployment. A new StatefulSet will be created to pick up the PV size changes. The full sequence is consolidated in the sketch below.
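
For reference, the steps above consolidated into a single sketch. `$NS` and `$RELEASE` are the namespace and Helm release variables used earlier in this guide, and `history-timeplusd-0` is a hypothetical PVC name; adjust both to match your deployment:

```bash
# 1. Confirm the PVCs will be retained when the StatefulSet is deleted or scaled
kubectl -n $NS get sts timeplusd -ojsonpath='{.spec.persistentVolumeClaimRetentionPolicy}'

# 2. Temporarily delete the StatefulSet and wait for all timeplusd pods to terminate
kubectl -n $NS delete sts/timeplusd

# 3. List the PVCs, then edit each one to raise spec.resources.requests.storage
kubectl -n $NS get pvc
kubectl -n $NS edit pvc history-timeplusd-0

# 4. Verify the underlying PV capacity has been updated
kubectl get pv <pv> -o=jsonpath='{.spec.capacity.storage}'

# 5. Update timeplusd.storage.stream.size (and/or ...history.size) in values.yaml, then upgrade
helm -n $NS upgrade -f values.yaml $RELEASE timeplus/timeplus-enterprise
```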

### Upgrade Timeplus Enterprise
2 changes: 1 addition & 1 deletion docs/sql-create-mutable-stream.md
@@ -29,7 +29,7 @@ Supported column types for primary key:
* integers (int8/16/32/64, uint8/16/32/64)
* floating point (float32, float64)
* date and datetime types (date, date32, datetime, datetime64)
* string and fixedstring
* string and fixed_string
* enum8, enum16
* decimal32, decimal64
* bool
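
As an illustrative sketch, assuming the `CREATE MUTABLE STREAM ... PRIMARY KEY (...)` form (the stream and column names below are hypothetical), a mutable stream keyed on a string plus an integer column could look like this:

```sql
CREATE MUTABLE STREAM device_state
(
  device_id string,
  site_id uint32,
  updated_at datetime64(3),
  temperature float64
)
PRIMARY KEY (device_id, site_id);
```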
2 changes: 1 addition & 1 deletion docs/sql-drop-function.md
@@ -7,7 +7,7 @@ Example:
DROP FUNCTION test_add_five_5;
```

## Drop a function forcely {#force}
## Drop a function forcefully {#force}
If the UDF or UDAF is used in other queries, you can still force-drop it.

```sql
2 changes: 1 addition & 1 deletion docs/timeplus-external-stream.md
@@ -67,5 +67,5 @@ SELECT * FROM local_stream WHERE http_code>=400;
* [window functions](/functions_for_streaming) like tumble/hop are not working yet.
* can't read virtual columns on remote streams.
* [table function](/functions_for_streaming#table) is not supported in timeplusd 2.3.21 or earlier versions; it has been supported since timeplusd 2.3.22.
* Timeplus Proton eariler than 1.6.9 doesn't support the Timeplus external stream.
* Timeplus Proton earlier than 1.6.9 doesn't support the Timeplus external stream.
* In Timeplus Proton, if your materialized view queries a Timeplus external stream, the checkpoint of the external stream may not be properly persisted. Timeplus Enterprise is not affected, and we are working on a fix for Proton.
2 changes: 1 addition & 1 deletion docs/v2-release-notes.md
@@ -13,7 +13,7 @@ We released a patch release for Timeplus Enterprise [v2.4](/enterprise-v2.4#2_4_
If you are running Timeplus Enterprise v2.4, we recommend upgrading to this version.

### Timeplus Grafana plugin v2.1.1
The new version of Grafana plugin imporoved the batching strategy to render results from streaming queries. If there is any error in the SQL syntax, the error message will be shown in the Grafana query panel.
The new version of Grafana plugin improved the batching strategy to render results from streaming queries. If there is any error in the SQL syntax, the error message will be shown in the Grafana query panel.

## Jan 20, 2025

55 changes: 33 additions & 22 deletions spellchecker/config.yml
@@ -6,37 +6,48 @@ reports:
- spellchecker/report.json
- spellchecker/report.junit.xml
ignore:
- key\d
- "[A-Z0-9]{6}"
- "\\b\\w+=\\w+\\b"
- "\\b\\w+\\."
- .*_logstore_.*
- .*_port
- .*_sn
- 1f71acbf-59fc-427d-a634-1679b48029a9
- 2a02:aa08:e000:3100
- c\d+
- u\d+
- t\d+
- \d+GB
- \d+Gi
- \d+MB
- \d+ms
- \ds
- array\d
- array_.*
- to_.*
- json_extract_.*
- b\d{5}
- c\d+
- changelog_[_\d]+
- cloud_step\d
- datestring\d
- dateString\d
- exp\d+
- ingest_.*
- json\d
- json_extract_.*
- key\d
- kvstore_.*
- log\d+
- logstore_.*
- metastore_.*
- mv\d+
- mv_.*
- node\d+
- number\d
- option\d
- user\d
- role\d
- s3_.*
- scalar\d
- \d+ms
- "[A-Z0-9]{6}"
- b\d{5}
- step\d
- array\d
- number\d
- string\d
- datestring\d
- json\d
- dateString\d
- changelog_[_\d]+
- t\d+
- to_.*
- u\d+
- user\d
- v[\.\d]+
- cloud_step\d
- role\d
- mv\d+
- node\d+
- "\\b\\w+\\."
- "\\b\\w+=\\w+\\b"
37 changes: 37 additions & 0 deletions spellchecker/dic.txt
@@ -18,6 +18,7 @@ activemq
ActiveMQ
AdminClient
adv_udaf
air-gapped
Airbyte
AirByte
aiven
@@ -48,6 +49,7 @@ arrayName
asin
ASOF
asof
AssumeRole
async
Async
atan
@@ -85,6 +87,7 @@ Benthos
Benthos-based
beta1
beta2
big-endian
bigint
BigTable
Binance
@@ -107,11 +110,13 @@ Changelog
changelog_2_4_15
changelog_2_4_16
changelog_kv
ChatGPT
checkpointing
CICD
cid
cidr
CIDR
ckpt_sn
classpath
cli
CLI
@@ -127,11 +132,14 @@ clickstream
closable
cloud_sla
Cloudflare
CloudFront
cloudtrail
Cloudtrail
CloudTrail
CloudWatch
CockroachDB
codec
Codec
codegen
Codegen
codename
@@ -161,6 +169,7 @@ CTEs
Ctrl
customizable
Databricks
Datadog
dataframe
DataGrip
datalineage
@@ -190,7 +199,10 @@ ddl
de-serializing
Debezium
debezium
decimal32
decimal64
decode_url_component
dedicatedly
dedup
deduplication
defaultValue
@@ -215,6 +227,8 @@ dropdown
dropdowns
dsn
EBS
EC2
ECS
EKS
Emet-Labs
emit_after_wm
@@ -252,6 +266,7 @@ from_unix_timestamp64_micro
from_unix_timestamp64_milli
from_unix_timestamp64_nano
frontend
fsync
Fullscreen
fullscreen
GCS
@@ -291,8 +306,11 @@ Guo
gzip
GZip
Gzip
half_md5
hardcoded
hardcoding
HashiCorp
HashIndex
highlevel
homebrew
Homebrew
Expand All @@ -309,7 +327,9 @@ https
Huatai
hubspot
HubSpot
IAM
Idempotency
idempotently
influxdb
InfluxDB
infograph
@@ -367,6 +387,8 @@ joda
Joda
js
JS
js-udaf
js-udf
JSON
json
json_has
@@ -414,6 +436,8 @@ livepeer
Livepeer
log_dir
logics
logstore
Logstore
logstore_retention_bytes
logstore_retention_ms
lookups
@@ -442,6 +466,8 @@ microservices
Minfeng
minikube
Minio
minio
modularized
mouseover
MQ
mqtt
@@ -504,6 +530,7 @@ onprem
openjdk
openssl
OPENSSLDIR
OpenTelemetry
os
oss
OSS
@@ -535,6 +562,8 @@ pre-existing
pre-filled
pre-made
pre-processor
preallocate
prebuilt
preconfigured
prefetch_count
PrettyCompact
@@ -555,6 +584,7 @@ PulsarSlackWebhookTimeplus
PV
pvc
PVCs
PVs
Q1
quantile
queryable
@@ -591,6 +621,8 @@ restPath
RHEL
roadmap
roadmaps
RocksDB
RocNet
rpconnect
s-agg-recent
s-downsampling
@@ -655,6 +687,8 @@ start_lon
startCondtion
stateful
StatefulSet
Statefulset
statefulset
stochasticLinearRegression
str
stream_ttl
@@ -666,6 +700,7 @@ StreamNATSCSV
strictnesses
sub-commmands
subcommand
subdirectories
sublicense
subnet
subqueries
@@ -700,6 +735,7 @@ Tigergraph
Timeplus
timeplus
timeplus-appserver
timeplus-connector
timeplus-io
Timeplus-native-jdbc
timeplus-native-jdbc
@@ -820,6 +856,7 @@ WIP
workspaceID
workspaces
x64
x86_64
Xie
xirr
xz
