From e59a29450a886c8d998b61158d8d77a4b44248e4 Mon Sep 17 00:00:00 2001
From: Jialin Ma <281648921@qq.com>
Date: Tue, 10 Dec 2024 17:03:58 +0800
Subject: [PATCH 1/6] New Table Model Deployment and Operations Document
---
.../Cluster-Deployment_timecho.md | 377 ++++++++++
.../Database-Resources.md | 194 +++++
.../Environment-Requirements.md | 191 +++++
.../IoTDB-Package_timecho.md | 42 ++
.../Monitoring-panel-deployment.md | 680 +++++++++++++++++
.../Stand-Alone-Deployment_timecho.md | 244 +++++++
.../Cluster-Deployment_timecho.md | 362 ++++++++++
.../Database-Resources.md | 193 +++++
.../Environment-Requirements.md | 205 ++++++
.../IoTDB-Package_timecho.md | 45 ++
.../Monitoring-panel-deployment.md | 682 ++++++++++++++++++
.../Stand-Alone-Deployment_timecho.md | 217 ++++++
12 files changed, 3432 insertions(+)
create mode 100644 src/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
create mode 100644 src/UserGuide/Master/Table/Deployment-and-Maintenance/Database-Resources.md
create mode 100644 src/UserGuide/Master/Table/Deployment-and-Maintenance/Environment-Requirements.md
create mode 100644 src/UserGuide/Master/Table/Deployment-and-Maintenance/IoTDB-Package_timecho.md
create mode 100644 src/UserGuide/Master/Table/Deployment-and-Maintenance/Monitoring-panel-deployment.md
create mode 100644 src/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
create mode 100644 src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
create mode 100644 src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Database-Resources.md
create mode 100644 src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Environment-Requirements.md
create mode 100644 src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/IoTDB-Package_timecho.md
create mode 100644 src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Monitoring-panel-deployment.md
create mode 100644 src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
diff --git a/src/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
new file mode 100644
index 000000000..19e2f6f63
--- /dev/null
+++ b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
@@ -0,0 +1,377 @@
+
+# Cluster Deployment
+
+This section describes how to manually deploy an instance that includes 3 ConfigNodes and 3 DataNodes, commonly known as a 3C3D cluster.
+
+## Note
+
+1. Before installation, ensure that the system environment is properly configured by referring to [System Requirements](./Environment-Requirements.md)
+
+2. It is recommended to use `hostname` rather than a raw IP when configuring addresses during deployment; this avoids the problem that the database fails to start after the host IP is changed later. To set the host name, configure `/etc/hosts` on the target server. For example, if the local IP is 192.168.1.3 and the host name is iotdb-1, you can use the following command to set the server's host name, and then configure IoTDB's `cn_internal_address` and `dn_internal_address` using that host name.
+
+ ``` shell
+ echo "192.168.1.3 iotdb-1" >> /etc/hosts
+ ```
+
+3. Some parameters cannot be modified after the first startup. Please refer to the "Parameter Configuration" section below for settings.
+
+4. Whether on Linux or Windows, ensure that the IoTDB installation path contains no spaces or Chinese characters, to avoid software exceptions.
+
+5. Please note that when installing and deploying IoTDB (including activating and using the software), the same user must be used for all operations. You can:
+
+- Use the root user (recommended): using root avoids permission issues.
+- Use a fixed non-root user:
+  - Use the same user for all operations: make sure the same user is used for start, activation, stop, and other operations, and do not switch users.
+  - Avoid sudo: try to avoid sudo, which executes commands with root privileges and may cause permission confusion or security issues.
+
+6. It is recommended to deploy a monitoring panel, which monitors important operational indicators and lets you keep track of database status at any time. The monitoring panel can be obtained by contacting the business department; the steps for deploying it are described in [Monitoring Panel Deployment](./Monitoring-panel-deployment.md)
+
+## Preparation Steps
+
+1. Prepare the IoTDB database installation package: timechodb-{version}-bin.zip (the installation package can be obtained from [IoTDB-Package](./IoTDB-Package_timecho.md))
+2. Configure the operating system environment according to the environment requirements (the system environment configuration can be found in [Environment Requirements](./Environment-Requirements.md))
+
+## Installation Steps
+
+Assuming there are three Linux servers now, the IP addresses and service roles are assigned as follows:
+
+| Node IP | Host Name | Service |
+| ------------- | --------- | -------------------- |
+| 11.101.17.224 | iotdb-1   | ConfigNode, DataNode |
+| 11.101.17.225 | iotdb-2   | ConfigNode, DataNode |
+| 11.101.17.226 | iotdb-3   | ConfigNode, DataNode |
+
+### Set Host Name
+
+On three machines, configure the host names separately. To set the host names, configure `/etc/hosts` on the target server. Use the following command:
+
+```Bash
+echo "11.101.17.224 iotdb-1" >> /etc/hosts
+echo "11.101.17.225 iotdb-2" >> /etc/hosts
+echo "11.101.17.226 iotdb-3" >> /etc/hosts
+```
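+
+In addition to the `/etc/hosts` entries, you can also set each machine's own host name so that it matches the mapping above. A minimal sketch, assuming a systemd-based distribution where `hostnamectl` is available (run the matching command on each server):
+
+```Bash
+hostnamectl set-hostname iotdb-1   # on 11.101.17.224
+hostnamectl set-hostname iotdb-2   # on 11.101.17.225
+hostnamectl set-hostname iotdb-3   # on 11.101.17.226
+```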
+
+### Configuration
+
+Unzip the installation package and enter the installation directory
+
+```Bash
+unzip timechodb-{version}-bin.zip
+cd timechodb-{version}-bin
+```
+
+#### Environment script configuration
+
+- `./conf/confignode-env.sh` configuration
+
+ | **Configuration** | **Description** | **Default** | **Recommended value** | **Note** |
+ | :---------------- | :----------------------------------------------------------- | :---------- | :----------------------------------------------------------- | :---------------------------------- |
+ | MEMORY_SIZE       | The total amount of memory that an IoTDB ConfigNode can use | -           | Set as needed; the system will allocate memory based on this value | Takes effect after restarting the service |
+
+- `./conf/datanode-env.sh` configuration
+
+ | **Configuration** | **Description** | **Default** | **Recommended value** | **Note** |
+ | :---------------- | :----------------------------------------------------------- | :---------- | :----------------------------------------------------------- | :---------------------------------- |
+ | MEMORY_SIZE       | The total amount of memory that an IoTDB DataNode can use | -           | Set as needed; the system will allocate memory based on this value | Takes effect after restarting the service |
+
+#### General Configuration (./conf/iotdb-system.properties)
+
+- Cluster Configuration
+
+ | **Configuration** | **Description** | 11.101.17.224 | 11.101.17.225 | 11.101.17.226 |
+ | ------------------------- | ------------------------------------------------------------ | -------------- | -------------- | -------------- |
+ | cluster_name | Cluster Name | defaultCluster | defaultCluster | defaultCluster |
+ | schema_replication_factor | The number of metadata replicas; the number of DataNodes should not be less than this value | 3              | 3              | 3              |
+ | data_replication_factor   | The number of data replicas; the number of DataNodes should not be less than this value | 2              | 2              | 2              |
+
+#### ConfigNode Configuration
+
+| **Configuration** | **Description** | **Default** | **Recommended value** | 11.101.17.224 | 11.101.17.225 | 11.101.17.226 | Note |
+| ------------------- | ------------------------------------------------------------ | --------------- | ------------------------------------------------------------ | ------------- | ------------- | ------------- | ---------------------------------------- |
+| cn_internal_address | The address used by ConfigNode for communication within the cluster | 127.0.0.1 | The IPV4 address or host name of the server where it is located, and it is recommended to use host name | iotdb-1 | iotdb-2 | iotdb-3 | Cannot be modified after initial startup |
+| cn_internal_port | The port used by ConfigNode for communication within the cluster | 10710 | 10710 | 10710 | 10710 | 10710 | Cannot be modified after initial startup |
+| cn_consensus_port | The port used for ConfigNode replica group consensus protocol communication | 10720 | 10720 | 10720 | 10720 | 10720 | Cannot be modified after initial startup |
+| cn_seed_config_node | The address of the ConfigNode that the node connects to when registering to join the cluster, i.e. `cn_internal_address:cn_internal_port` | 127.0.0.1:10710 | The first ConfigNode's `cn_internal_address:cn_internal_port` | iotdb-1:10710 | iotdb-1:10710 | iotdb-1:10710 | Cannot be modified after initial startup |
+
+#### DataNode Configuration
+
+| **Configuration** | **Description** | **Default** | **Recommended value** | 11.101.17.224 | 11.101.17.225 | 11.101.17.226 | Note |
+| ------------------------------- | ------------------------------------------------------------ | --------------- | ------------------------------------------------------------ | ------------- | ------------- | ------------- | ---------------------------------------- |
+| dn_rpc_address | The address of the client RPC service | 127.0.0.1 | Recommend using the **IPV4 address or hostname** of the server where it is located | iotdb-1 | iotdb-2 | iotdb-3 | Restarting the service takes effect |
+| dn_rpc_port | The port of the client RPC service | 6667 | 6667 | 6667 | 6667 | 6667 | Restarting the service takes effect |
+| dn_internal_address | The address used by DataNode for communication within the cluster | 127.0.0.1 | The IPV4 address or host name of the server where it is located, and it is recommended to use host name | iotdb-1 | iotdb-2 | iotdb-3 | Cannot be modified after initial startup |
+| dn_internal_port | The port used by DataNode for communication within the cluster | 10730 | 10730 | 10730 | 10730 | 10730 | Cannot be modified after initial startup |
+| dn_mpp_data_exchange_port | The port used by DataNode to receive data streams | 10740 | 10740 | 10740 | 10740 | 10740 | Cannot be modified after initial startup |
+| dn_data_region_consensus_port | The port used by DataNode for data replica consensus protocol communication | 10750 | 10750 | 10750 | 10750 | 10750 | Cannot be modified after initial startup |
+| dn_schema_region_consensus_port | The port used by DataNode for metadata replica consensus protocol communication | 10760 | 10760 | 10760 | 10760 | 10760 | Cannot be modified after initial startup |
+| dn_seed_config_node             | The address of the ConfigNode that the node connects to when registering to join the cluster, i.e. `cn_internal_address:cn_internal_port` | 127.0.0.1:10710 | The first ConfigNode's `cn_internal_address:cn_internal_port` | iotdb-1:10710 | iotdb-1:10710 | iotdb-1:10710 | Cannot be modified after initial startup |
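+
+Putting the settings above together, a sketch of the entries to edit in `./conf/iotdb-system.properties` on the first node (iotdb-1) is shown below; the other two nodes differ only in the address values:
+
+```Properties
+cluster_name=defaultCluster
+schema_replication_factor=3
+data_replication_factor=2
+cn_internal_address=iotdb-1
+cn_internal_port=10710
+cn_consensus_port=10720
+cn_seed_config_node=iotdb-1:10710
+dn_rpc_address=iotdb-1
+dn_rpc_port=6667
+dn_internal_address=iotdb-1
+dn_internal_port=10730
+dn_mpp_data_exchange_port=10740
+dn_data_region_consensus_port=10750
+dn_schema_region_consensus_port=10760
+dn_seed_config_node=iotdb-1:10710
+```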
+
+> ❗️Attention: Editors such as VSCode Remote do not automatically save modified files. Please make sure the modified files are saved persistently; otherwise the configuration items will not take effect.
+
+### Start ConfigNode
+
+Start the seed ConfigNode on iotdb-1 first, making sure it starts before any other node, and then start the second and third ConfigNodes in sequence:
+
+```Bash
+cd sbin
+
+./start-confignode.sh -d    # The -d parameter starts the service in the background
+```
+
+If the startup fails, please refer to [Common Questions](#common-questions).
+
+### Start DataNode
+
+Enter the `sbin` directory of IoTDB and start the three DataNodes in sequence:
+
+```Bash
+cd sbin
+
+./start-datanode.sh -d    # The -d parameter starts the service in the background
+```
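+
+After the three ConfigNodes and three DataNodes are all started, you can verify that every node has joined the cluster through the CLI (a sketch, assuming the default user root/root):
+
+```Bash
+# Connect to any node and list the cluster members
+./start-cli.sh -h iotdb-1
+IoTDB> show cluster
+# All 3 ConfigNodes and 3 DataNodes should be listed with Status = Running
+```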
+
+### Activate Database
+
+#### Method 1: Activation by license file
+
+- After starting the three ConfigNodes and DataNodes in sequence, copy the `activation` folder and the `system_info` file of each machine and send them to Timecho staff;
+
+- The staff will return a license file for each node (3 license files in total);
+
+- Put the three license files into the `activation` folder of the corresponding ConfigNode nodes;
+
+#### Method 2: Activation via the CLI
+
+- Obtain the machine codes of the three machines in sequence: enter the CLI of the IoTDB tree model (`./start-cli.sh` on Linux, `start-cli.bat` on Windows) and execute the following statement:
+  - Note: when the CLI is started with `-sql_dialect table`, this operation is temporarily not supported
+
+```Bash
+show system info
+```
+
+- The machine code of the current machine is displayed in the output:
+
+```Bash
++--------------------------------------------------------------+
+| SystemInfo|
++--------------------------------------------------------------+
+|01-TE5NLES4-UDDWCMYE,01-GG5NLES4-XXDWCMYE,01-FF5NLES4-WWWWCMYE|
++--------------------------------------------------------------+
+Total line number = 1
+It costs 0.030s
+```
+
+- Enter the CLI of the IoTDB tree model on the other two nodes in sequence, execute the same statement, and send the machine codes of all three machines to Timecho staff
+
+- The staff will return three activation codes, which normally correspond to the order of the three machine codes provided. Paste each activation code into the CLI of the corresponding node, as prompted below:
+
+  - Note: the activation code must be wrapped in single quotes ('), as shown below:
+
+ ```Bash
+ IoTDB> activate '01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA==='
+ ```
+
+### Verify Activation
+
+When the `Result` field displays `success`, activation is successful:
+
+![](https://alioss.timecho.com/docs/img/%E9%9B%86%E7%BE%A4-%E9%AA%8C%E8%AF%81.png)
+
+## Node Maintenance Steps
+
+### ConfigNode Node Maintenance
+
+ConfigNode node maintenance is divided into two types of operations: adding and removing ConfigNodes, with two common use cases:
+
+- Cluster expansion: For example, when there is only one ConfigNode in the cluster, and you want to increase the high availability of ConfigNode nodes, you can add two ConfigNodes, making a total of three ConfigNodes in the cluster.
+
+- Cluster failure recovery: When the machine where a ConfigNode is located fails, making the ConfigNode unable to run normally, you can remove this ConfigNode and then add a new ConfigNode to the cluster.
+
+> ❗️Note: after completing ConfigNode maintenance, make sure there are either 1 or 3 ConfigNodes running normally in the cluster. Two ConfigNodes do not provide high availability, and more than three ConfigNodes cause performance loss.
+
+#### Adding ConfigNode Nodes
+
+Script command:
+
+```shell
+# Linux / MacOS
+# First switch to the IoTDB root directory
+sbin/start-confignode.sh
+
+# Windows
+# First switch to the IoTDB root directory
+sbin/start-confignode.bat
+```
+
+#### Removing ConfigNode Nodes
+
+First connect to the cluster through the CLI and confirm the internal address and port number of the ConfigNode you want to remove by using `show confignodes`:
+
+```Bash
+IoTDB> show confignodes
++------+-------+---------------+------------+--------+
+|NodeID| Status|InternalAddress|InternalPort| Role|
++------+-------+---------------+------------+--------+
+| 0|Running| 127.0.0.1| 10710| Leader|
+| 1|Running| 127.0.0.1| 10711|Follower|
+| 2|Running| 127.0.0.1| 10712|Follower|
++------+-------+---------------+------------+--------+
+Total line number = 3
+It costs 0.030s
+```
+
+Then use the script to remove the ConfigNode. Script command:
+
+```Bash
+# Linux / MacOS
+sbin/remove-confignode.sh [confignode_id]
+
+# Windows
+sbin/remove-confignode.bat [confignode_id]
+
+```
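+
+For example, to remove the follower ConfigNode with NodeID 2 from the `show confignodes` output above (a usage sketch):
+
+```Bash
+sbin/remove-confignode.sh 2
+```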
+
+### DataNode Node Maintenance
+
+There are two common scenarios for DataNode node maintenance:
+
+- Cluster expansion: For the purpose of expanding cluster capabilities, add new DataNodes to the cluster
+
+- Cluster failure recovery: When a machine where a DataNode is located fails, making the DataNode unable to run normally, you can remove this DataNode and add a new DataNode to the cluster
+
+> ❗️Note: for the cluster to work normally, during DataNode maintenance and after it is completed, the number of DataNodes running normally should be no less than the number of data replicas (usually 2) and no less than the number of metadata replicas (usually 3).
+
+#### Adding DataNode Nodes
+
+Script command:
+
+```Bash
+# Linux / MacOS
+# First switch to the IoTDB root directory
+sbin/start-datanode.sh
+
+# Windows
+# First switch to the IoTDB root directory
+sbin/start-datanode.bat
+```
+
+Note: After adding a DataNode, as new writes arrive (and old data expires, if TTL is set), the cluster load will gradually balance towards the new DataNode, eventually achieving a balance of storage and computation resources on all nodes.
+
+#### Removing DataNode Nodes
+
+First connect to the cluster through the CLI and confirm the RPC address and port number of the DataNode you want to remove with `show datanodes`:
+
+```Bash
+IoTDB> show datanodes
++------+-------+----------+-------+-------------+---------------+
+|NodeID| Status|RpcAddress|RpcPort|DataRegionNum|SchemaRegionNum|
++------+-------+----------+-------+-------------+---------------+
+| 1|Running| 0.0.0.0| 6667| 0| 0|
+| 2|Running| 0.0.0.0| 6668| 1| 1|
+| 3|Running| 0.0.0.0| 6669| 1| 0|
++------+-------+----------+-------+-------------+---------------+
+Total line number = 3
+It costs 0.110s
+```
+
+Then use the script to remove the DataNode. Script command:
+
+```Bash
+# Linux / MacOS
+sbin/remove-datanode.sh [datanode_id]
+
+# Windows
+sbin/remove-datanode.bat [datanode_id]
+```
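+
+For example, to remove the DataNode with NodeID 3 from the `show datanodes` output above (a usage sketch):
+
+```Bash
+sbin/remove-datanode.sh 3
+```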
+
+## Common Questions
+
+1. Repeated activation-failure prompts during deployment
+
+   - Check the installation directory owner: use `ls -al` to confirm that the owner of the installation package root directory is the current user.
+
+   - Check the activation directory: confirm that all files under `./activation` are owned by the current user.
+
+2. ConfigNode fails to start
+
+ Step 1: Please check the startup log to see if any parameters that cannot be changed after the first startup have been modified.
+
+ Step 2: Please check the startup log for any other abnormalities. If there are any abnormal phenomena in the log, please contact Timecho Technical Support personnel for consultation on solutions.
+
+ Step 3: If it is the first deployment or data can be deleted, you can also clean up the environment according to the following steps, redeploy, and restart.
+
+ Step 4: Clean up the environment:
+
+   a. Terminate all ConfigNode and DataNode processes.
+
+ ```Bash
+ # 1. Stop the ConfigNode and DataNode services
+ sbin/stop-standalone.sh
+
+ # 2. Check for any remaining processes
+ jps
+ # Or
+   ps -ef|grep iotdb
+
+   # 3. If there are any remaining processes, manually kill them
+   kill -9 <pid>
+ # If you are sure there is only one iotdb on the machine, you can use the following command to clean up residual processes
+ ps -ef|grep iotdb|grep -v grep|tr -s ' ' ' ' |cut -d ' ' -f2|xargs kill -9
+ ```
+
+ b. Delete the data and logs directories.
+
+   Explanation: Deleting the data directory is necessary; deleting the logs directory only removes old logs and is optional.
+
+ ```Bash
+ cd /data/iotdb
+ rm -rf data logs
+ ```
+
+## Appendix
+
+### Introduction to ConfigNode Parameters
+
+| Parameter | Description | Is it required |
+| :-------- | :---------------------------------------------- | :------------- |
+| -d | Start in daemon mode, running in the background | No |
+
+### Introduction to DataNode Parameters
+
+| Abbreviation | Description | Is it required |
+| :----------- | :----------------------------------------------------------- | :------------- |
+| -v | Show version information | No |
+| -f | Run the script in the foreground, do not put it in the background | No |
+| -d | Start in daemon mode, i.e. run in the background | No |
+| -p | Specify a file to store the process ID for process management | No |
+| -c | Specify the path to the configuration file folder, the script will load the configuration file from here | No |
+| -g | Print detailed garbage collection (GC) information | No |
+| -H | Specify the path of the Java heap dump file, used when JVM memory overflows | No |
+| -E | Specify the path of the JVM error log file | No |
+| -D | Define system properties, in the format key=value | No |
+| -X | Pass -XX parameters directly to the JVM | No |
+| -h | Help instruction | No |
\ No newline at end of file
diff --git a/src/UserGuide/Master/Table/Deployment-and-Maintenance/Database-Resources.md b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Database-Resources.md
new file mode 100644
index 000000000..59a380dbb
--- /dev/null
+++ b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Database-Resources.md
@@ -0,0 +1,194 @@
+
+# Database Resources
+## CPU
+
+| Number of timeseries (frequency <= 1Hz) | CPU | Nodes (standalone mode) | Nodes (double active) | Nodes (distributed) |
+| ---------------------------------------- | ------------------------------------------------ | ----------------------- | --------------------- | ------------------- |
+| Within 100,000 | 2-4 cores | 1 | 2 | 3 |
+| Within 300,000 | 4-8 cores | 1 | 2 | 3 |
+| Within 500,000 | 8-26 cores | 1 | 2 | 3 |
+| Within 1,000,000 | 16-32 cores | 1 | 2 | 3 |
+| Within 2,000,000 | 32-48 cores | 1 | 2 | 3 |
+| Within 10,000,000 | 48 cores | 1 | 2 | Please contact Timecho Business for consultation |
+| Over 10,000,000 | Please contact Timecho Business for consultation | - | - | - |
+
+
+## Memory
+
+| Number of timeseries (frequency <= 1Hz) | Memory | Nodes (standalone mode) | Nodes (double active) | Nodes (distributed) |
+| ---------------------------------------- | ------------------------------------------------ | ----------------------- | --------------------- | ------------------- |
+| Within 100,000 | 4GB-8GB | 1 | 2 | 3 |
+| Within 300,000 | 12GB-32GB | 1 | 2 | 3 |
+| Within 500,000 | 24GB-48GB | 1 | 2 | 3 |
+| Within 1,000,000 | 32GB-96GB | 1 | 2 | 3 |
+| Within 2,000,000 | 64GB-128GB | 1 | 2 | 3 |
+| Within 10,000,000 | 128GB | 1 | 2 | Please contact Timecho Business for consultation |
+| Over 10,000,000 | Please contact Timecho Business for consultation | - | - | - |
+
+
+## Storage (Disk)
+### Storage space
+Calculation formula: Number of measurement points × Sampling frequency (Hz) × Size of each data point (bytes; varies by data type, see the table below) × Storage duration (seconds) × Number of replicas (usually 1 for a single node, 2 for a cluster) ÷ Compression ratio (can be estimated at 5-10; may be higher in practice)
+
+Data point size calculation:
+
+| Data type | Timestamp (bytes) | Value (bytes) | Total size of a data point (bytes) |
+| ------------ | ----------------- | ------------- | ----------------------------------- |
+| Boolean | 8 | 1 | 9 |
+| INT32/FLOAT | 8 | 4 | 12 |
+| INT64/DOUBLE | 8 | 8 | 16 |
+| TEXT | 8 | a (average) | 8 + a |
+
+
+Example: 1000 devices, each with 100 measurement points, 100,000 series in total, INT32 type, sampling frequency 1Hz (once per second), stored for 1 year, with 3 replicas.
+- Complete formula: 1000 devices × 100 measurement points × 12 bytes per data point × 86400 seconds per day × 365 days per year × 3 replicas ÷ 10 (compression ratio) ≈ 11 TB
+- Simplified formula: 1000 × 100 × 12 × 86400 × 365 × 3 ÷ 10 ≈ 11 TB
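+
+The same estimate can be scripted. A minimal sketch in Bash, using the assumptions of the example above (integer arithmetic, decimal terabytes):
+
+```Bash
+# Storage estimate: points * bytes_per_point * seconds_per_year * replicas / compression
+devices=1000; points_per_device=100; bytes_per_point=12
+seconds_per_year=$((86400 * 365)); replicas=3; compression=10
+total_bytes=$((devices * points_per_device * bytes_per_point * seconds_per_year * replicas / compression))
+echo "≈ $((total_bytes / 10**12)) TB"   # prints ≈ 11 TB, matching the worked example
+```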
+### Storage Configuration
+If the number of measurement points exceeds 10 million, or the query load is high, SSDs are recommended.
+## Network (Network card)
+If the write throughput does not exceed 10 million points per second, configure a 1Gbps network card; when it exceeds 10 million points per second, a 10Gbps network card is required.
+| **Write throughput (data points per second)** | **NIC rate** |
+| ------------------- | ------------- |
+| <10 million | 1Gbps |
+| >=10 million | 10Gbps |
+## Other instructions
+IoTDB clusters can be scaled up in seconds, and no data migration is required when adding nodes. Therefore, you do not need to worry that the cluster capacity estimated from existing data will become a hard limit; you can add new nodes to the cluster whenever you need to scale up.
\ No newline at end of file
diff --git a/src/UserGuide/Master/Table/Deployment-and-Maintenance/Environment-Requirements.md b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Environment-Requirements.md
new file mode 100644
index 000000000..539d03b09
--- /dev/null
+++ b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Environment-Requirements.md
@@ -0,0 +1,191 @@
+
+# System Requirements
+
+## Disk Array
+
+### Configuration Suggestions
+
+IoTDB has no strict requirements on disk array configuration. It is recommended to use multiple disk arrays to store IoTDB data so that writes to multiple disk arrays can proceed concurrently. Refer to the following suggestions:
+
+1. Physical environment
+   - System disk: it is recommended to use two disks as Raid1, considering only the space occupied by the operating system itself; do not reserve system-disk space for IoTDB.
+   - Data disks:
+     - Raid is recommended to protect the data on the disks.
+     - It is recommended to provide multiple disks (1-6) or disk groups for IoTDB. (It is not recommended to build one disk array over all disks, as this limits IoTDB's maximum performance.)
+2. Virtual environment
+   - It is recommended to mount multiple hard disks (1-6).
+
+### Configuration Example
+
+- Example 1: Four 3.5-inch hard disks
+
+Only a few hard disks are installed on the server. Configure Raid5 directly.
+The recommended configurations are as follows:
+| **Use classification** | **Raid type** | **Disk number** | **Redundancy** | **Available capacity** |
+| ----------- | -------- | -------- | --------- | -------- |
+| system/data disk | RAID5 | 4 | 1 (one disk may fail) | 3 |
+
+- Example 2: Twelve 3.5-inch hard disks
+
+The server is configured with twelve 3.5-inch disks.
+Two disks are recommended as a Raid1 system disk group. The remaining ten disks can be divided into two Raid5 groups of five disks each; each group provides the usable capacity of four disks.
+The recommended configurations are as follows:
+| **Use classification** | **Raid type** | **Disk number** | **Redundancy** | **Available capacity** |
+| -------- | -------- | -------- | --------- | -------- |
+| system disk | RAID1 | 2 | 1 | 1 |
+| data disk | RAID5 | 5 | 1 | 4 |
+| data disk | RAID5 | 5 | 1 | 4 |
+- Example 3: Twenty-four 2.5-inch hard disks
+
+The server is configured with 24 2.5-inch disks.
+Two disks are recommended as a Raid1 system disk group. The remaining 22 disks can be divided into three Raid5 groups of seven disks each; each group provides the usable capacity of six disks. The one remaining disk can be left idle or used to store write-ahead logs.
+The recommended configurations are as follows:
+| **Use classification** | **Raid type** | **Disk number** | **Redundancy** | **Available capacity** |
+| -------- | -------- | -------- | --------- | -------- |
+| system disk | RAID1 | 2 | 1 | 1 |
+| data disk | RAID5 | 7 | 1 | 6 |
+| data disk | RAID5 | 7 | 1 | 6 |
+| data disk | RAID5 | 7 | 1 | 6 |
+| data disk | NoRaid | 1 | 0 | 1 |
+
+## Operating System
+
+### Version Requirements
+
+IoTDB supports operating systems such as Linux, Windows, and MacOS, while the enterprise version supports domestic CPUs such as Loongson, Phytium, and Kunpeng. It also supports domestic server operating systems such as Neokylin, KylinOS, UOS, and Linx.
+
+### Disk Partition
+
+- The default standard partition mode is recommended. LVM extension and hard disk encryption are not recommended.
+- The system disk needs only the space used by the operating system, and does not need to reserve space for the IoTDB.
+- Each disk group corresponds to only one partition. Data disks (with multiple disk groups, corresponding to raid) do not need additional partitions. All space is used by the IoTDB.
+The following table lists the recommended disk partitioning methods.
+
+| Disk classification | Disk group   | Mount point | Capacity                          | File system type |
+| ------------------- | ------------ | ----------- | --------------------------------- | ---------------- |
+| System disk         | Disk group 0 | /boot       | 1GB                               | Default          |
+| System disk         | Disk group 0 | /           | Remaining space of the disk group | Default          |
+| Data disk           | Disk group 1 | /data1      | Full space of disk group 1        | Default          |
+| Data disk           | Disk group 2 | /data2      | Full space of disk group 2        | Default          |
+| ......              | ......       | ......      | ......                            | ......           |
+
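+As a hedged example of preparing one data-disk mount point (assuming the raid device appears as `/dev/sdb` and the ext4 file system is used; adapt to your actual device names and distribution defaults):
+
+```Bash
+mkfs.ext4 /dev/sdb                                       # Create the file system on the data disk group
+mkdir -p /data1                                          # Create the mount point
+mount /dev/sdb /data1                                    # Mount the data disk
+echo "/dev/sdb /data1 ext4 defaults 0 0" >> /etc/fstab   # Remount automatically on boot
+```
+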
+### Network Configuration
+
+1. Disable the firewall
+
+```Bash
+# View firewall
+systemctl status firewalld
+# Disable firewall
+systemctl stop firewalld
+# Disable firewall permanently
+systemctl disable firewalld
+```
+2. Ensure that the required port is not occupied
+
+(1) Check the ports occupied by the cluster: In the default cluster configuration, ConfigNode occupies ports 10710 and 10720, and DataNode occupies ports 6667, 10730, 10740, 10750, 10760, 9090, 9190, and 3000. Ensure that these ports are not occupied. Check methods are as follows:
+
+```Bash
+lsof -i:6667 or netstat -tunp | grep 6667
+lsof -i:10710 or netstat -tunp | grep 10710
+lsof -i:10720 or netstat -tunp | grep 10720
+# If the command outputs, the port is occupied.
+```
+
+(2) Check the port used by the cluster deployment tool: when using the cluster management tool opskit to install and deploy the cluster, enable the SSH remote connection service and open port 22.
+
+```Bash
+yum install openssh-server # Install the ssh service
+systemctl start sshd # Enable port 22
+```
+
+3. Ensure that servers are connected to each other
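+
+A quick connectivity check from each server (a sketch, using the host names configured earlier):
+
+```Bash
+for host in iotdb-1 iotdb-2 iotdb-3; do
+  ping -c 1 "$host" > /dev/null && echo "$host reachable" || echo "$host UNREACHABLE"
+done
+```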
+
+### Other Configuration
+
+1. Disable the system swap memory
+
+```Bash
+echo "vm.swappiness = 0">> /etc/sysctl.conf
+# The swapoff -a and swapon -a commands are executed together to dump the data in swap back to memory and to empty the data in swap.
+# Do not omit the swappiness setting and just execute swapoff -a; Otherwise, swap automatically opens again after the restart, making the operation invalid.
+swapoff -a && swapon -a
+# Make the configuration take effect without restarting.
+sysctl -p
+# Check memory allocation, expecting swap to be 0
+free -m
+```
+2. Set the maximum number of open files to 65535 to avoid the error of "too many open files".
+
+```Bash
+# View current restrictions
+ulimit -n
+# Temporary changes
+ulimit -n 65535
+# Permanent modification
+echo "* soft nofile 65535" >> /etc/security/limits.conf
+echo "* hard nofile 65535" >> /etc/security/limits.conf
+# View after exiting the current terminal session, expect to display 65535
+ulimit -n
+```
+## Software Dependence
+
+Install the Java runtime environment (Java version >= 1.8) and make sure the JDK environment variables are set. (JDK 17 is recommended for V1.3.2.2 and later; with some earlier JDK versions, performance degrades in certain scenarios and DataNodes may fail to stop.)
+
+```Bash
+# The following is an example of installing JDK-17 on CentOS 7:
+tar -zxvf jdk-17_linux-x64_bin.tar.gz   # Decompress the JDK archive
+vim ~/.bashrc                           # Edit the environment configuration
+# Add the following JDK environment variables, then save and exit:
+#   export JAVA_HOME=/usr/lib/jvm/jdk-17.0.9
+#   export PATH=$JAVA_HOME/bin:$PATH
+source ~/.bashrc                        # Make the configuration take effect
+java -version                           # Verify the JDK installation
+```
\ No newline at end of file
diff --git a/src/UserGuide/Master/Table/Deployment-and-Maintenance/IoTDB-Package_timecho.md b/src/UserGuide/Master/Table/Deployment-and-Maintenance/IoTDB-Package_timecho.md
new file mode 100644
index 000000000..57cad838b
--- /dev/null
+++ b/src/UserGuide/Master/Table/Deployment-and-Maintenance/IoTDB-Package_timecho.md
@@ -0,0 +1,42 @@
+
+# Obtain TimechoDB
+## How to obtain TimechoDB
+The enterprise edition installation package can be obtained through a product trial application or by directly contacting your Timecho business contact.
+
+## Installation Package Structure
+The directory structure after unpacking the installation package is as follows:
+| **Directory/File** | **Type** | **Description** |
+| :--------------: | -------- | ------------------------------------------------------------ |
+| activation | folder | Directory of activation files, including the generated machine code and the enterprise activation code obtained from the business side (this directory is only generated after ConfigNode is started to obtain the activation code) |
+| conf | folder | Configuration file directory, including configuration files such as ConfigNode, DataNode, JMX, and logback |
+| data | folder | Default data file directory, containing the data files of ConfigNode and DataNode (only generated after the program is started) |
+| lib | folder | IoTDB executable library file directory |
+| licenses | folder | Open-source community license file directory |
+| logs | folder | Default log file directory, containing the log files of ConfigNode and DataNode (only generated after the program is started) |
+| sbin | folder | Main script directory, including start, stop, and other scripts |
+| tools | folder | Directory of peripheral system tools |
+| ext | folder | Related files for pipe, trigger, and UDF plugins (created by the user when needed) |
+| LICENSE | file | License file |
+| NOTICE | file | Notice file |
+| README_ZH\.md | file | Chinese README in Markdown format |
+| README\.md | file | English README |
+| RELEASE_NOTES\.md | file | Release notes |
diff --git a/src/UserGuide/Master/Table/Deployment-and-Maintenance/Monitoring-panel-deployment.md b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Monitoring-panel-deployment.md
new file mode 100644
index 000000000..4e9a50a1a
--- /dev/null
+++ b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Monitoring-panel-deployment.md
@@ -0,0 +1,680 @@
+
+# Monitoring Panel Deployment
+
+The IoTDB monitoring panel is one of the supporting tools of the IoTDB Enterprise Edition. It aims to solve the monitoring problems of IoTDB and its operating system, covering operating system resource monitoring, IoTDB performance monitoring, and hundreds of kernel monitoring indicators, in order to help users monitor the health of the cluster and carry out cluster optimization and operation. This article takes a common 3C3D cluster (3 ConfigNodes and 3 DataNodes) as an example to introduce how to enable the system monitoring module in an IoTDB instance and use Prometheus + Grafana to visualize the system monitoring indicators.
+
+## Installation Preparation
+
+1. Install IoTDB: IoTDB Enterprise Edition V1.0 or above must be installed first. You can contact business or technical support to obtain it.
+2. Obtain the IoTDB monitoring panel installation package: the monitoring panel is provided with the enterprise edition of the IoTDB database. You can contact business or technical support to obtain it.
+
+## Installation Steps
+
+### Step 1: IoTDB enables monitoring indicator collection
+
+1. Enable the monitoring configuration items. The monitoring-related configuration items in IoTDB are disabled by default. Before deploying the monitoring panel, you need to enable the relevant items (note that the services must be restarted after the monitoring configuration is enabled).
+
+| **Configuration** | Located in the configuration file | **Description** |
+| :--------------------------------- | :-------------------------------- | :----------------------------------------------------------- |
+| cn_metric_reporter_list | conf/iotdb-system.properties | Uncomment the configuration item and set the value to PROMETHEUS |
+| cn_metric_level | conf/iotdb-system.properties | Uncomment the configuration item and set the value to IMPORTANT |
+| cn_metric_prometheus_reporter_port | conf/iotdb-system.properties      | Uncomment the configuration item and keep the default 9091, or set another port that does not conflict with ports already in use |
+| dn_metric_reporter_list | conf/iotdb-system.properties | Uncomment the configuration item and set the value to PROMETHEUS |
+| dn_metric_level | conf/iotdb-system.properties | Uncomment the configuration item and set the value to IMPORTANT |
+| dn_metric_prometheus_reporter_port | conf/iotdb-system.properties      | Uncomment the configuration item and keep the default 9092, or set another port that does not conflict with ports already in use |
+
+Taking the 3C3D cluster as an example, the monitoring configuration that needs to be modified is as follows:
+
+| Node IP | Host Name | Cluster Role | Configuration File Path | Configuration |
+| ----------- | --------- | ------------ | -------------------------------- | ------------------------------------------------------------ |
+| 192.168.1.3 | iotdb-1 | confignode | conf/iotdb-system.properties | cn_metric_reporter_list=PROMETHEUS cn_metric_level=IMPORTANT cn_metric_prometheus_reporter_port=9091 |
+| 192.168.1.4 | iotdb-2 | confignode | conf/iotdb-system.properties | cn_metric_reporter_list=PROMETHEUS cn_metric_level=IMPORTANT cn_metric_prometheus_reporter_port=9091 |
+| 192.168.1.5 | iotdb-3 | confignode | conf/iotdb-system.properties | cn_metric_reporter_list=PROMETHEUS cn_metric_level=IMPORTANT cn_metric_prometheus_reporter_port=9091 |
+| 192.168.1.3 | iotdb-1 | datanode | conf/iotdb-system.properties | dn_metric_reporter_list=PROMETHEUS dn_metric_level=IMPORTANT dn_metric_prometheus_reporter_port=9092 |
+| 192.168.1.4 | iotdb-2 | datanode | conf/iotdb-system.properties | dn_metric_reporter_list=PROMETHEUS dn_metric_level=IMPORTANT dn_metric_prometheus_reporter_port=9092 |
+| 192.168.1.5 | iotdb-3 | datanode | conf/iotdb-system.properties | dn_metric_reporter_list=PROMETHEUS dn_metric_level=IMPORTANT dn_metric_prometheus_reporter_port=9092 |
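+
+A sketch for applying these settings on each node with `sed` (assuming the entries already exist, possibly commented out, in `conf/iotdb-system.properties`):
+
+```Bash
+cd /data/iotdb   # example installation directory
+sed -i \
+  -e 's/^#\?cn_metric_reporter_list=.*/cn_metric_reporter_list=PROMETHEUS/' \
+  -e 's/^#\?cn_metric_level=.*/cn_metric_level=IMPORTANT/' \
+  -e 's/^#\?cn_metric_prometheus_reporter_port=.*/cn_metric_prometheus_reporter_port=9091/' \
+  -e 's/^#\?dn_metric_reporter_list=.*/dn_metric_reporter_list=PROMETHEUS/' \
+  -e 's/^#\?dn_metric_level=.*/dn_metric_level=IMPORTANT/' \
+  -e 's/^#\?dn_metric_prometheus_reporter_port=.*/dn_metric_prometheus_reporter_port=9092/' \
+  conf/iotdb-system.properties
+```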
+
+2. Restart all nodes. After modifying the monitoring configuration on all three nodes, restart the ConfigNode and DataNode services on every node:
+
+```Bash
+./sbin/stop-standalone.sh #Stop confignode and datanode first
+./sbin/start-confignode.sh -d #Start confignode
+./sbin/start-datanode.sh -d #Start datanode
+```
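+
+Before continuing, you can confirm that each node exposes its metrics endpoint (a sketch; the ports match the configuration above):
+
+```Bash
+curl -s http://iotdb-1:9091/metrics | head -n 5   # ConfigNode metrics
+curl -s http://iotdb-1:9092/metrics | head -n 5   # DataNode metrics
+```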
+
+3. After restarting, confirm the running status of each node through the client. If the status is Running, it indicates successful configuration:
+
+![](https://alioss.timecho.com/docs/img/%E5%90%AF%E5%8A%A8.PNG)
+
+### Step 2: Install and configure Prometheus
+
+> Taking Prometheus installed on server 192.168.1.3 as an example.
+
+1. Download the Prometheus installation package (V2.30.3 or above is required) from the Prometheus official website: https://prometheus.io/docs/introduction/first_steps/
+2. Unzip the installation package and enter the unzipped folder:
+
+```Shell
+tar xvfz prometheus-*.tar.gz
+cd prometheus-*
+```
+
+3. Modify the configuration file prometheus.yml as follows:
+   1. Add a confignode job to collect ConfigNode monitoring data
+   2. Add a datanode job to collect DataNode monitoring data
+
+```YAML
+global:
+ scrape_interval: 15s
+ evaluation_interval: 15s
+scrape_configs:
+ - job_name: "prometheus"
+ static_configs:
+ - targets: ["localhost:9090"]
+ - job_name: "confignode"
+ static_configs:
+ - targets: ["iotdb-1:9091","iotdb-2:9091","iotdb-3:9091"]
+ honor_labels: true
+ - job_name: "datanode"
+ static_configs:
+ - targets: ["iotdb-1:9092","iotdb-2:9092","iotdb-3:9092"]
+ honor_labels: true
+```
+
+4. Start Prometheus. By default, Prometheus retains monitoring data for 15 days; in production environments it is recommended to increase the retention to 180 days or more so that historical monitoring data can be tracked for a longer period. The startup command is as follows:
+
+```Shell
+./prometheus --config.file=prometheus.yml --storage.tsdb.retention.time=180d
+```
+
+5. Confirm successful startup. Open http://192.168.1.3:9090 in a browser to access Prometheus, then open the Targets page under Status. When all states are Up, the configuration and connectivity are correct.
+
+6. Clicking a left-hand link on the Targets page opens the metrics endpoint of the corresponding node:
+
+![](https://alioss.timecho.com/docs/img/%E8%8A%82%E7%82%B9%E7%9B%91%E6%8E%A7.png)
+
+### Step 3: Install Grafana and configure the data source
+
+> Taking Grafana installed on server 192.168.1.3 as an example.
+
+1. Download the Grafana installation package (version 8.4.2 or higher is required) from the Grafana official website: https://grafana.com/grafana/download
+2. Unzip and enter the corresponding folder
+
+```Shell
+tar -zxvf grafana-*.tar.gz
+cd grafana-*
+```
+
+3. Start Grafana:
+
+```Shell
+./bin/grafana-server web
+```
+
+4. Log in to Grafana. Open http://192.168.1.3:3000 (or the modified port) in a browser; the default initial username and password are both `admin`.
+
+5. Configure the data source. Find Data sources under Connections, add a new data source, and select Prometheus as the type.
+
+![](https://alioss.timecho.com/docs/img/%E6%B7%BB%E5%8A%A0%E9%85%8D%E7%BD%AE.png)
+
+When configuring the data source, pay attention to the URL where Prometheus is located. After configuring it, click Save & Test; a "Data source is working" prompt indicates a successful configuration.
+
+![](https://alioss.timecho.com/docs/img/%E9%85%8D%E7%BD%AE%E6%88%90%E5%8A%9F.png)
+
+### Step 4: Import IoTDB Grafana Dashboards
+
+1. Enter Grafana and select Dashboards:
+
+ ![](https://alioss.timecho.com/docs/img/%E9%9D%A2%E6%9D%BF%E9%80%89%E6%8B%A9.png)
+
+2. Click the Import button on the right side
+
+ ![](https://alioss.timecho.com/docs/img/Import%E6%8C%89%E9%92%AE.png)
+
+3. Import Dashboard using upload JSON file
+
+ ![](https://alioss.timecho.com/docs/img/%E5%AF%BC%E5%85%A5Dashboard.png)
+
+4. Select the JSON file of one of the panels in the IoTDB monitoring panel, using the Apache IoTDB ConfigNode Dashboard as an example (refer to the installation preparation section in this article for the monitoring panel installation package):
+
+ ![](https://alioss.timecho.com/docs/img/%E9%80%89%E6%8B%A9%E9%9D%A2%E6%9D%BF.png)
+
+5. Select Prometheus as the data source and click Import
+
+ ![](https://alioss.timecho.com/docs/img/%E9%80%89%E6%8B%A9%E6%95%B0%E6%8D%AE%E6%BA%90.png)
+
+6. Afterwards, you can see the imported Apache IoTDB ConfigNode Dashboard monitoring panel
+
+ ![](https://alioss.timecho.com/docs/img/%E9%9D%A2%E6%9D%BF.png)
+
+7. Similarly, you can import the Apache IoTDB DataNode Dashboard, Apache Performance Overview Dashboard, and Apache System Overview Dashboard to see the corresponding monitoring panels:
+
+8. At this point, all IoTDB monitoring panels have been imported and monitoring information can now be viewed at any time.
+
+ ![](https://alioss.timecho.com/docs/img/%E9%9D%A2%E6%9D%BF%E6%B1%87%E6%80%BB.png)
+
+## Appendix, Detailed Explanation of Monitoring Indicators
+
+### System Dashboard
+
+This panel displays the current usage of system CPU, memory, disk, and network resources, as well as partial status of the JVM.
+
+#### CPU
+
+- CPU Core: Number of CPU cores
+- CPU Load:
+  - System CPU Load: The average CPU load and busyness of the entire system during the sampling time
+  - Process CPU Load: The proportion of CPU occupied by the IoTDB process during the sampling time
+- CPU Time Per Minute: The total CPU time of all processes in the system per minute
+
+#### Memory
+
+- System Memory: The current usage of system memory.
+  - Committed vm size: The size of virtual memory allocated by the operating system to running processes.
+  - Total physical memory: The total amount of available physical memory in the system.
+  - Used physical memory: The total amount of memory already used by the system, including the memory actually used by processes and the memory occupied by operating system buffers/cache.
+- System Swap Memory: Swap space memory usage.
+- Process Memory: The memory usage of the IoTDB process.
+  - Max Memory: The maximum amount of memory that the IoTDB process can request from the operating system (configured in the datanode-env/confignode-env configuration files).
+  - Total Memory: The total amount of memory that the IoTDB process has currently requested from the operating system.
+  - Used Memory: The total amount of memory currently used by the IoTDB process.
+
+#### Disk
+
+- Disk Space:
+  - Total disk space: The maximum disk space that IoTDB can use.
+  - Used disk space: The disk space already used by IoTDB.
+- Log Number Per Minute: The average number of logs at each level of IoTDB per minute during the sampling time.
+- File Count: Number of IoTDB-related files
+  - all: Total number of files
+  - TsFile: Number of TsFiles
+  - seq: Number of sequential TsFiles
+  - unseq: Number of unsequential TsFiles
+  - wal: Number of WAL files
+  - cross-temp: Number of temp files from cross-space merges
+  - inner-seq-temp: Number of temp files from merges within the sequential space
+  - inner-unseq-temp: Number of temp files from merges within the unsequential space
+  - mods: Number of tombstone files
+- Open File Count: Number of file handles opened by the system
+- File Size: The size of IoTDB-related files. Each sub-item corresponds to the size of one file category.
+- Disk I/O Busy Rate: Equivalent to the %util indicator in iostat; it reflects, to some extent, how busy the disk is. Each sub-item is the indicator for one disk.
+- Disk I/O Throughput: The average I/O throughput of each disk in the system over a period of time. Each sub-item is the indicator for one disk.
+- Disk I/O Ops: Equivalent to the r/s, w/s, rrqm/s, and wrqm/s indicators in iostat; the number of I/O operations the disk performs per second. Read and write refer to single I/O operations; because of the scheduling algorithms of block devices, several adjacent I/Os may be merged into one, and merge-read and merge-write count such merged operations.
+- Disk I/O Avg Time: Equivalent to await in iostat; the average latency of each I/O request, recorded separately for read and write requests.
+- Disk I/O Avg Size: Equivalent to avgrq-sz in iostat; the average size of each I/O request, recorded separately for read and write requests.
+- Disk I/O Avg Queue Size: Equivalent to avgqu-sz in iostat; the average length of the I/O request queue.
+- I/O System Call Rate: The frequency of the process's read and write system calls, similar to IOPS.
+- I/O Throughput: The I/O throughput of the process, divided into actual-read/write and attempt-read/write. Actual read and actual write refer to the number of bytes for which the process actually causes block-device I/O, excluding the parts served by the page cache.
+
+#### JVM
+
+- GC Time Percentage: The proportion of time the node JVM spent on GC within the past one-minute window
+- GC Allocated/Promoted Size Detail: The average size of objects promoted to the old generation per minute by the node JVM, as well as the size of objects newly allocated in the young generation, the old generation, and non-generational spaces
+- GC Data Size Detail: The size of long-lived objects in the node JVM and the maximum allowed size of each generation
+- Heap Memory: JVM heap memory usage.
+  - Maximum heap memory: The maximum available heap memory size for the JVM.
+  - Committed heap memory: The size of heap memory committed by the JVM.
+  - Used heap memory: The size of heap memory already used by the JVM.
+  - PS Eden Space: The size of the PS Eden area.
+  - PS Old Space: The size of the PS Old area.
+  - PS Survivor Space: The size of the PS Survivor area.
+  - ...(CMS/G1/ZGC, etc.)
+- Off Heap Memory: Off-heap memory usage.
+  - direct memory: Off-heap direct memory.
+  - mapped memory: Off-heap mapped memory.
+- GC Number Per Minute: The average number of garbage collections per minute by the node JVM, including YGC and FGC
+- GC Time Per Minute: The average time the node JVM spends on garbage collection per minute, including YGC and FGC
+- GC Number Per Minute Detail: The average number of garbage collections per minute by the node JVM, broken down by cause, including YGC and FGC
+- GC Time Per Minute Detail: The average time the node JVM spends on garbage collection per minute, broken down by cause, including YGC and FGC
+- Time Consumed Of Compilation Per Minute: The total time the JVM spends compiling per minute
+- The Number of Class:
+  - loaded: The number of classes currently loaded by the JVM
+  - unloaded: The number of classes unloaded by the JVM since system startup
+- The Number of Java Thread: The current number of live threads in IoTDB. Each sub-item is the number of threads in each state.
+
+#### Network
+
+Eno refers to the network card connected to the public network; lo is the loopback (virtual) interface.
+
+- Net Speed: The send and receive speed of the network card
+- Receive/Transmit Data Size: The size of data packets sent or received by the network card, counted from system restart
+- Packet Speed: The send and receive speed of the network card in packets; one RPC request can correspond to one or more packets
+- Connection Num: The current number of socket connections of the selected process (IoTDB uses TCP only)
+
+### Performance Overview Dashboard
+
+#### Cluster Overview
+
+- Total CPU Core:Total CPU cores of cluster machines
+- DataNode CPU Load:CPU usage of each DataNode node in the cluster
+- Disk
+ - Total Disk Space: Total disk size of cluster machines
+ - DataNode Disk Usage: The disk usage rate of each DataNode in the cluster
+- Total Timeseries: The total number of time series managed by the cluster (including replicas); the actual number of time series must be calculated in conjunction with the number of metadata replicas
+- Cluster: Number of ConfigNode and DataNode nodes in the cluster
+- Up Time: The duration of cluster startup until now
+- Total Write Point Per Second: The total number of points written per second in the cluster (including replicas); the actual write rate must be analyzed in conjunction with the number of data replicas
+- Memory
+ - Total System Memory: Total memory size of cluster machine system
+ - Total Swap Memory: Total size of cluster machine swap memory
+ - DataNode Process Memory Usage: Memory usage of each DataNode in the cluster
+- Total File Number: Total number of files managed by the cluster
+- Cluster System Overview: Overview of the cluster machines, including average DataNode memory usage and average machine disk usage
+- Total DataBase: The total number of databases managed by the cluster (including replicas)
+- Total DataRegion: The total number of DataRegions managed by the cluster
+- Total SchemaRegion: The total number of SchemaRegions managed by the cluster
+
+#### Node Overview
+
+- CPU Core: The number of CPU cores in the machine where the node is located
+- Disk Space: The disk size of the machine where the node is located
+- Timeseries: Number of time series managed by the machine where the node is located (including replicas)
+- System Overview: System overview of the machine where the node is located, including CPU load, process memory usage ratio, and disk usage ratio
+- Write Point Per Second: The write speed per second of the machine where the node is located (including replicas)
+- System Memory: The system memory size of the machine where the node is located
+- Swap Memory: The swap memory size of the machine where the node is located
+- File Number: Number of files managed by nodes
+
+#### Performance
+
+- Session Idle Time: The total idle time and total busy time of the node's session connections
+- Client Connection: The client connection status of the node, including the total number of connections and the number of active connections
+- Time Consumed Of Operation: The time consumption of various types of node operations, including average and P99
+- Average Time Consumed Of Interface: The average time consumption of each Thrift interface of the node
+- P99 Time Consumed Of Interface: The P99 time consumption of each Thrift interface of the node
+- Task Number: The number of system tasks for each node
+- Average Time Consumed of Task: The average time spent on various system tasks of a node
+- P99 Time Consumed of Task: P99 time consumption for various system tasks of nodes
+- Operation Per Second: The number of operations per second for a node
+- Mainstream Process
+ - Operation Per Second Of Stage: The number of operations per second for each stage of the node's main process
+ - Average Time Consumed Of Stage: The average time consumption of each stage in the main process of a node
+ - P99 Time Consumed Of Stage: P99 time consumption for each stage of the node's main process
+- Schedule Stage
+ - OPS Of Schedule: The number of operations per second in each sub stage of the node schedule stage
+  - Average Time Consumed Of Schedule Stage: The average time consumption of each sub-stage in the node schedule stage
+ - P99 Time Consumed Of Schedule Stage: P99 time consumption for each sub stage of the schedule stage of the node
+- Local Schedule Sub Stages
+ - OPS Of Local Schedule Stage: The number of operations per second in each sub stage of the local schedule node
+ - Average Time Consumed Of Local Schedule Stage: The average time consumption of each sub stage in the local schedule stage of the node
+ - P99 Time Consumed Of Local Schedule Stage: P99 time consumption for each sub stage of the local schedule stage of the node
+- Storage Stage
+ - OPS Of Storage Stage: The number of operations per second in each sub stage of the node storage stage
+ - Average Time Consumed Of Storage Stage: Average time consumption of each sub stage in the node storage stage
+ - P99 Time Consumed Of Storage Stage: P99 time consumption for each sub stage of node storage stage
+- Engine Stage
+ - OPS Of Engine Stage: The number of operations per second in each sub stage of the node engine stage
+ - Average Time Consumed Of Engine Stage: The average time consumption of each sub stage in the engine stage of a node
+ - P99 Time Consumed Of Engine Stage: P99 time consumption of each sub stage in the node engine stage
+
+#### System
+
+- CPU Load: CPU load of the node
+- CPU Time Per Minute: The CPU time per minute of the node; the maximum value is related to the number of CPU cores
+- GC Time Per Minute: The average GC time per minute of the node, including YGC and FGC
+- Heap Memory: The node's heap memory usage
+- Off Heap Memory: The node's off-heap memory usage
+- The Number Of Java Thread: The number of Java threads on the node
+- File Count: The number of files managed by the node
+- File Size: The size of files managed by the node
+- Log Number Per Minute: The number of logs of each type per minute on the node
+
+### ConfigNode Dashboard
+
+This panel displays the performance of all management nodes in the cluster, including partitioning, node information, and client connection statistics.
+
+#### Node Overview
+
+- Database Count: The number of databases on the node
+- Region
+  - DataRegion Count: The number of DataRegions on the node
+  - DataRegion Current Status: The status of the DataRegions of the node
+  - SchemaRegion Count: The number of SchemaRegions on the node
+  - SchemaRegion Current Status: The status of the SchemaRegions of the node
+- System Memory: The system memory size of the node
+- Swap Memory: The swap memory size of the node
+- ConfigNodes: The running status of the ConfigNodes in the cluster where the node is located
+- DataNodes: The DataNodes of the cluster where the node is located
+- System Overview: System overview of the node, including system memory, disk usage, process memory, and CPU load
+
+#### NodeInfo
+
+- Node Count: The number of nodes in the cluster where the node is located, including ConfigNode and DataNode
+- ConfigNode Status: The status of the ConfigNode node in the cluster where the node is located
+- DataNode Status: The status of the DataNode node in the cluster where the node is located
+- SchemaRegion Distribution: The distribution of SchemaRegions in the cluster where the node is located
+- SchemaRegionGroup Leader Distribution: The distribution of leaders in the SchemaRegionGroup of the cluster where the node is located
+- DataRegion Distribution: The distribution of DataRegions in the cluster where the node is located
+- DataRegionGroup Leader Distribution: The distribution of leaders in the DataRegionGroup of the cluster where the node is located
+
+#### Protocol
+
+- Client Count
+ - Active Client Num: The number of active clients in each thread pool of a node
+ - Idle Client Num: The number of idle clients in each thread pool of a node
+ - Borrowed Client Count: Number of borrowed clients in each thread pool of the node
+ - Created Client Count: Number of created clients for each thread pool of the node
+ - Destroyed Client Count: The number of destroyed clients in each thread pool of the node
+- Client Time Statistics
+ - Client Mean Active Time: The average active time of clients in each thread pool of a node
+ - Client Mean Borrow Wait Time: The average borrowing waiting time of clients in each thread pool of a node
+ - Client Mean Idle Time: The average idle time of clients in each thread pool of a node
+
+#### Partition Table
+
+- SchemaRegionGroup Count: The number of SchemaRegionGroups in the Database of the cluster where the node is located
+- DataRegionGroup Count: The number of DataRegionGroups in the Database of the cluster where the node is located
+- SeriesSlot Count: The number of SeriesSlots in the Database of the cluster where the node is located
+- TimeSlot Count: The number of TimeSlots in the Database of the cluster where the node is located
+- DataRegion Status: The DataRegion status of the cluster where the node is located
+- SchemaRegion Status: The status of the SchemaRegions of the cluster where the node is located
+
+#### Consensus
+
+- Ratis Stage Time: Time consumed by each stage of Ratis on the node
+- Write Log Entry: Time consumed to write a log entry in Ratis on the node
+- Remote / Local Write Time: Time consumed by remote and local writes in Ratis on the node
+- Remote / Local Write QPS: Remote and local write QPS of Ratis on the node
+- RatisConsensus Memory: Memory usage of the Ratis consensus protocol on the node
+
+### DataNode Dashboard
+
+This panel displays the monitoring status of all data nodes in the cluster, including write time, query time, number of stored files, etc.
+
+#### Node Overview
+
+- The Number Of Entity: Number of entities managed by the node
+- Write Point Per Second: Number of points written per second on the node
+- Memory Usage: Memory usage of the node, including the memory used by each part of IoTConsensus, the total memory used by SchemaRegions, and the memory used by each database
+
+#### Protocol
+
+- Node Operation Time Consumption
+ - The Time Consumed Of Operation (avg): The average time spent on various operations of a node
+ - The Time Consumed Of Operation (50%): The median time spent on various operations of a node
+ - The Time Consumed Of Operation (99%): P99 time consumption for various operations of nodes
+- Thrift Statistics
+ - The QPS Of Interface: QPS of various Thrift interfaces of nodes
+ - The Avg Time Consumed Of Interface: The average time consumption of each Thrift interface of a node
+ - Thrift Connection: Number of Thrift connections of each type on the node
+ - Thrift Active Thread: Number of active Thrift threads of each type on the node
+- Client Statistics
+ - Active Client Num: The number of active clients in each thread pool of a node
+ - Idle Client Num: The number of idle clients in each thread pool of a node
+ - Borrowed Client Count: Number of borrowed clients in each thread pool of the node
+ - Created Client Count: Number of created clients for each thread pool of the node
+ - Destroyed Client Count: The number of destroyed clients in each thread pool of the node
+ - Client Mean Active Time: The average active time of clients in each thread pool of a node
+ - Client Mean Borrow Wait Time: The average borrowing waiting time of clients in each thread pool of a node
+ - Client Mean Idle Time: The average idle time of clients in each thread pool of a node
+
+#### Storage Engine
+
+- File Count: Number of files of each type managed by the node
+- File Size: Total size of files of each type managed by the node
+- TsFile
+ - TsFile Total Size In Each Level: Total size of the TsFiles at each level managed by the node
+ - TsFile Count In Each Level: Number of TsFiles at each level managed by the node
+ - Avg TsFile Size In Each Level: Average size of the TsFiles at each level managed by the node
+- Task Number: Number of tasks on the node
+- The Time Consumed of Task: Time consumed by tasks on the node
+- Compaction
+ - Compaction Read And Write Per Second: Compaction read and write speed of the node per second
+ - Compaction Number Per Minute: Number of compactions on the node per minute
+ - Compaction Process Chunk Status: Number of chunks in each state compacted by the node
+ - Compacted Point Num Per Minute: Number of points compacted by the node per minute
+
+#### Write Performance
+
+- Write Cost(avg): Average write time of the node, including writing the WAL and the memtable
+- Write Cost(50%): Median write time of the node, including writing the WAL and the memtable
+- Write Cost(99%): P99 write time of the node, including writing the WAL and the memtable
+- WAL
+ - WAL File Size: Total size of the WAL files managed by the node
+ - WAL File Num: Number of WAL files managed by the node
+ - WAL Nodes Num: Number of WAL nodes managed by the node
+ - Make Checkpoint Costs: Time consumed to create each type of checkpoint on the node
+ - WAL Serialize Total Cost: Total time spent on WAL serialization on the node
+ - Data Region Mem Cost: Memory usage of each DataRegion of the node, the total memory usage of the DataRegions of the current instance, and the total memory usage of the DataRegions of the current cluster
+ - Serialize One WAL Info Entry Cost: Time for the node to serialize one WAL info entry
+ - Oldest MemTable Ram Cost When Cause Snapshot: MemTable size when the node's WAL triggers a snapshot of the oldest MemTable
+ - Oldest MemTable Ram Cost When Cause Flush: MemTable size when the node's WAL triggers a flush of the oldest MemTable
+ - Effective Info Ratio Of WALNode: Effective information ratio of each WALNode on the node
+ - WAL Buffer
+ - WAL Buffer Cost: Time consumed by the node's WAL flush SyncBuffer, covering both the synchronous and asynchronous modes
+ - WAL Buffer Used Ratio: Usage ratio of the node's WAL buffer
+ - WAL Buffer Entries Count: Number of entries in the node's WAL buffer
+- Flush Statistics
+ - Flush MemTable Cost(avg): Total time spent on flush on the node and the average time spent in each sub-stage
+ - Flush MemTable Cost(50%): Total time spent on flush on the node and the median time spent in each sub-stage
+ - Flush MemTable Cost(99%): Total time spent on flush on the node and the P99 time spent in each sub-stage
+ - Flush Sub Task Cost(avg): Average time consumed by each flush subtask on the node, including the sort, encoding, and IO stages
+ - Flush Sub Task Cost(50%): Median time consumed by each flush subtask on the node, including the sort, encoding, and IO stages
+ - Flush Sub Task Cost(99%): P99 time consumed by each flush subtask on the node, including the sort, encoding, and IO stages
+- Pending Flush Task Num: Number of flush tasks in the blocked state on the node
+- Pending Flush Sub Task Num: Number of flush subtasks in the blocked state on the node
+- Tsfile Compression Ratio Of Flushing MemTable: Compression ratio of the TsFile produced when the node flushes a MemTable
+- Flush TsFile Size Of DataRegions: Size of the TsFile produced by each flush in each DataRegion of the node
+- Size Of Flushing MemTable: Size of the MemTable being flushed on the node
+- Points Num Of Flushing MemTable: Number of points flushed in each DataRegion of the node
+- Series Num Of Flushing MemTable: Number of time series in the MemTables being flushed in each DataRegion of the node
+- Average Point Num Of Flushing MemChunk: Average number of points per MemChunk flushed by the node
+
+#### Schema Engine
+
+- Schema Engine Mode: Metadata engine mode of the node
+- Schema Consensus Protocol: Metadata consensus protocol of the node
+- Schema Region Number: Number of SchemaRegions managed by the node
+- Schema Region Memory Overview: Memory usage of the SchemaRegions of the node
+- Memory Usage per SchemaRegion: Average memory usage of each SchemaRegion on the node
+- Cache MNode per SchemaRegion: Number of cached MNodes in each SchemaRegion of the node
+- MLog Length and Checkpoint: Total length and checkpoint position of the current mlog of each SchemaRegion of the node (valid only for SimpleConsensus)
+- Buffer MNode per SchemaRegion: Number of buffered MNodes in each SchemaRegion of the node
+- Activated Template Count per SchemaRegion: Number of activated templates in each SchemaRegion of the node
+- Time Series statistics
+ - Timeseries Count per SchemaRegion: Average number of time series per SchemaRegion on the node
+ - Series Type: Number of time series of each type on the node
+ - Time Series Number: Total number of time series on the node
+ - Template Series Number: Total number of template time series on the node
+ - Template Series Count per SchemaRegion: Number of time series created through templates in each SchemaRegion of the node
+- IMNode Statistics
+ - Pinned MNode per SchemaRegion: Number of pinned IMNodes in each SchemaRegion of the node
+ - Pinned Memory per SchemaRegion: Memory usage of the pinned IMNodes in each SchemaRegion of the node
+ - Unpinned MNode per SchemaRegion: Number of unpinned IMNodes in each SchemaRegion of the node
+ - Unpinned Memory per SchemaRegion: Memory usage of the unpinned IMNodes in each SchemaRegion of the node
+ - Schema File Memory MNode Number: Number of globally pinned and unpinned IMNodes
+ - Release and Flush MNode Rate: Number of IMNodes released and flushed by the node per second
+- Cache Hit Rate: Cache hit rate of the node
+- Release and Flush Thread Number: Current number of active Release and Flush threads on the node
+- Time Consumed of Release and Flush (avg): Average time consumed by node-triggered cache release and buffer flush
+- Time Consumed of Release and Flush (99%): P99 time consumed by node-triggered cache release and buffer flush
+
+#### Query Engine
+
+- Time Consumption In Each Stage
+ - The time consumed of query plan stages(avg): The average time spent on node queries at each stage
+ - The time consumed of query plan stages(50%): Median time spent on node queries at each stage
+ - The time consumed of query plan stages(99%): P99 time consumption for node query at each stage
+- Execution Plan Distribution Time
+ - The time consumed of plan dispatch stages(avg): The average time spent on node query execution plan distribution
+ - The time consumed of plan dispatch stages(50%): Median time spent on node query execution plan distribution
+ - The time consumed of plan dispatch stages(99%): P99 of node query execution plan distribution time
+- Execution Plan Execution Time
+ - The time consumed of query execution stages(avg): The average execution time of node query execution plan
+ - The time consumed of query execution stages(50%): Median execution time of node query execution plans
+ - The time consumed of query execution stages(99%): P99 of node query execution plan execution time
+- Operator Execution Time
+ - The time consumed of operator execution stages(avg): The average execution time of node query operators
+ - The time consumed of operator execution(50%): Median execution time of node query operator
+ - The time consumed of operator execution(99%): P99 of node query operator execution time
+- Aggregation Query Computation Time
+ - The time consumed of query aggregation(avg): The average computation time for node aggregation queries
+ - The time consumed of query aggregation(50%): Median computation time for node aggregation queries
+ - The time consumed of query aggregation(99%): P99 of node aggregation query computation time
+- File/Memory Interface Time Consumption
+ - The time consumed of query scan(avg): The average time spent querying file/memory interfaces for nodes
+ - The time consumed of query scan(50%): Median time spent querying file/memory interfaces for nodes
+ - The time consumed of query scan(99%): P99 time consumption for node query file/memory interface
+- Number Of Resource Visits
+ - The usage of query resource(avg): The average number of resource visits for node queries
+ - The usage of query resource(50%): Median number of resource visits for node queries
+ - The usage of query resource(99%): P99 number of resource visits for node queries
+- Data Transmission Time
+ - The time consumed of query data exchange(avg): The average time spent on node query data transmission
+ - The time consumed of query data exchange(50%): Median query data transmission time for nodes
+ - The time consumed of query data exchange(99%): P99 for node query data transmission time
+- Number Of Data Transfers
+ - The count of Data Exchange(avg): The average number of data transfers queried by nodes
+ - The count of Data Exchange: The quantile of the number of data transfers queried by nodes, including the median and P99
+- Task Scheduling Quantity And Time Consumption
+ - The number of query queue: Node query task scheduling quantity
+ - The time consumed of query schedule time(avg): The average time spent on scheduling node query tasks
+ - The time consumed of query schedule time(50%): Median time spent on node query task scheduling
+ - The time consumed of query schedule time(99%): P99 of node query task scheduling time
+
+#### Query Interface
+
+- Load Time Series Metadata
+ - The time consumed of load timeseries metadata(avg): The average time taken for node queries to load time series metadata
+ - The time consumed of load timeseries metadata(50%): Median time spent on loading time series metadata for node queries
+ - The time consumed of load timeseries metadata(99%): P99 time consumption for node query loading time series metadata
+- Read Time Series
+ - The time consumed of read timeseries metadata(avg): The average time taken for node queries to read time series
+ - The time consumed of read timeseries metadata(50%): The median time taken for node queries to read time series
+ - The time consumed of read timeseries metadata(99%): P99 time consumption for node query reading time series
+- Modify Time Series Metadata
+ - The time consumed of timeseries metadata modification(avg): The average time taken for node queries to modify time series metadata
+ - The time consumed of timeseries metadata modification(50%): Median time spent on querying and modifying time series metadata for nodes
+ - The time consumed of timeseries metadata modification(99%): P99 time consumption for node query and modification of time series metadata
+- Load Chunk Metadata List
+ - The time consumed of load chunk metadata list(avg): The average time it takes for node queries to load Chunk metadata lists
+ - The time consumed of load chunk metadata list(50%): Median time spent on node query loading Chunk metadata list
+ - The time consumed of load chunk metadata list(99%): P99 time consumption for node query loading Chunk metadata list
+- Modify Chunk Metadata
+ - The time consumed of chunk metadata modification(avg): The average time it takes for node queries to modify Chunk metadata
+ - The time consumed of chunk metadata modification(50%): Median time spent on node queries modifying Chunk metadata
+ - The time consumed of chunk metadata modification(99%): P99 time consumption for node query and modification of Chunk metadata
+- Filter According To Chunk Metadata
+ - The time consumed of chunk metadata filter(avg): The average time spent on node queries filtering by Chunk metadata
+ - The time consumed of chunk metadata filter(50%): Median filtering time for node queries based on Chunk metadata
+ - The time consumed of chunk metadata filter(99%): P99 time consumption for node query filtering based on Chunk metadata
+- Constructing Chunk Reader
+ - The time consumed of construct chunk reader(avg): The average time spent on constructing Chunk Reader for node queries
+ - The time consumed of construct chunk reader(50%): Median time spent on constructing Chunk Reader for node queries
+ - The time consumed of construct chunk reader(99%): P99 time consumption for constructing Chunk Reader for node queries
+- Read Chunk
+ - The time consumed of read chunk(avg): The average time taken for node queries to read Chunks
+ - The time consumed of read chunk(50%): Median time spent querying nodes to read Chunks
+ - The time consumed of read chunk(99%): P99 time spent on querying and reading Chunks for nodes
+- Initialize Chunk Reader
+ - The time consumed of init chunk reader(avg): The average time spent initializing Chunk Reader for node queries
+ - The time consumed of init chunk reader(50%): Median time spent initializing Chunk Reader for node queries
+ - The time consumed of init chunk reader(99%): P99 time spent initializing Chunk Reader for node queries
+- Constructing TsBlock Through Page Reader
+ - The time consumed of build tsblock from page reader(avg): The average time it takes for node queries to construct TsBlock through Page Reader
+ - The time consumed of build tsblock from page reader(50%): The median time spent on constructing TsBlock through Page Reader for node queries
+ - The time consumed of build tsblock from page reader(99%): P99 time spent on constructing TsBlock through Page Reader for node queries
+- Constructing TsBlock Through Merge Reader
+ - The time consumed of build tsblock from merge reader(avg): The average time taken for node queries to construct TsBlock through Merge Reader
+ - The time consumed of build tsblock from merge reader(50%): The median time spent on constructing TsBlock through Merge Reader for node queries
+ - The time consumed of build tsblock from merge reader(99%): P99 time spent on constructing TsBlock through Merge Reader for node queries
+
+#### Query Data Exchange
+
+Time consumed by data exchange during queries.
+
+- Obtain TsBlock through source handle
+ - The time consumed of source handle get tsblock(avg): The average time taken for node queries to obtain TsBlock through source handle
+ - The time consumed of source handle get tsblock(50%): The median time taken for node queries to obtain TsBlock through source handle
+ - The time consumed of source handle get tsblock(99%): P99 time taken for node queries to obtain TsBlock through source handle
+- Deserialize TsBlock through source handle
+ - The time consumed of source handle deserialize tsblock(avg): The average time taken for node queries to deserialize TsBlock through source handle
+ - The time consumed of source handle deserialize tsblock(50%): The median time taken for node queries to deserialize TsBlock through source handle
+ - The time consumed of source handle deserialize tsblock(99%): P99 time spent on deserializing TsBlock through source handle for node query
+- Send TsBlock through sink handle
+ - The time consumed of sink handle send tsblock(avg): The average time taken for node queries to send TsBlock through sink handle
+ - The time consumed of sink handle send tsblock(50%): The median time taken for node queries to send TsBlock through sink handle
+ - The time consumed of sink handle send tsblock(99%): P99 time taken for node queries to send TsBlock through sink handle
+- Callback data block event
+ - The time consumed of on acknowledge data block event task(avg): The average time taken for node query callback data block event
+ - The time consumed of on acknowledge data block event task(50%): Median time spent on node query callback data block event
+ - The time consumed of on acknowledge data block event task(99%): P99 time consumption for node query callback data block event
+- Get Data Block Tasks
+ - The time consumed of get data block task(avg): The average time taken for node queries to obtain data block tasks
+ - The time consumed of get data block task(50%): The median time taken for node queries to obtain data block tasks
+ - The time consumed of get data block task(99%): P99 time consumption for node query to obtain data block task
+
+#### Query Related Resource
+
+- MppDataExchangeManager: Number of shuffle sink handles and source handles during node queries
+- LocalExecutionPlanner: Remaining memory that the node can allocate to query fragments
+- FragmentInstanceManager: Query fragment context information and the number of query fragments running on the node
+- Coordinator: Number of queries recorded on the node
+- MemoryPool Size: Status of the query-related memory pools of the node
+- MemoryPool Capacity: Size of the query-related memory pools of the node, including the maximum and remaining available values
+- DriverScheduler: Number of queued query tasks on the node
+
+#### Consensus - IoT Consensus
+
+- Memory Usage
+ - IoTConsensus Used Memory: Memory usage of IoTConsensus on the node, including the total memory usage, the queue usage, and the synchronization usage
+- Synchronization Status Between Nodes
+ - IoTConsensus Sync Index: SyncIndex size of each DataRegion of IoTConsensus on the node
+ - IoTConsensus Overview: Total synchronization gap and number of cached requests of IoTConsensus on the node
+ - IoTConsensus Search Index Rate: Growth rate of the write SearchIndex of each DataRegion of IoTConsensus on the node
+ - IoTConsensus Safe Index Rate: Growth rate of the synchronization SafeIndex of each DataRegion of IoTConsensus on the node
+ - IoTConsensus LogDispatcher Request Size: Size of the requests that IoTConsensus on the node uses to synchronize each DataRegion to other nodes
+ - Sync Lag: Synchronization gap of each DataRegion of IoTConsensus on the node
+ - Min Peer Sync Lag: Minimum synchronization gap between each DataRegion of IoTConsensus on the node and each of its replicas
+ - Sync Speed Diff Of Peers: Maximum difference in synchronization progress from each DataRegion of IoTConsensus on the node to its replicas
+ - IoTConsensus LogEntriesFromWAL Rate: Rate at which IoTConsensus on the node obtains logs from the WAL for each DataRegion
+ - IoTConsensus LogEntriesFromQueue Rate: Rate at which IoTConsensus on the node obtains logs from the queue for each DataRegion
+- Different Execution Stages Take Time
+ - The Time Consumed Of Different Stages (avg): Average time consumed by each execution stage of IoTConsensus on the node
+ - The Time Consumed Of Different Stages (50%): Median time consumed by each execution stage of IoTConsensus on the node
+ - The Time Consumed Of Different Stages (99%): P99 time consumed by each execution stage of IoTConsensus on the node
+
+#### Consensus - DataRegion Ratis Consensus
+
+- Ratis Stage Time: Time consumed by each stage of Ratis on the node
+- Write Log Entry: Time consumed by each stage of writing a log entry in Ratis on the node
+- Remote / Local Write Time: Time consumed by remote and local writes in Ratis on the node
+- Remote / Local Write QPS: Remote and local write QPS of Ratis on the node
+- RatisConsensus Memory: Memory usage of Ratis on the node
+
+#### Consensus - SchemaRegion Ratis Consensus
+
+- Ratis Stage Time: Time consumed by each stage of Ratis on the node
+- Write Log Entry: Time consumed by each stage of writing a log entry in Ratis on the node
+- Remote / Local Write Time: Time consumed by remote and local writes in Ratis on the node
+- Remote / Local Write QPS: Remote and local write QPS of Ratis on the node
+- RatisConsensus Memory: Memory usage of Ratis on the node
\ No newline at end of file
diff --git a/src/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
new file mode 100644
index 000000000..571f67246
--- /dev/null
+++ b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
@@ -0,0 +1,244 @@
+
+# Stand-Alone Deployment
+
+This chapter will introduce how to start an IoTDB standalone instance, which includes 1 ConfigNode and 1 DataNode (commonly known as 1C1D).
+
+## Note
+
+1. Before installation, ensure that the system is complete by referring to [System Requirements](./Environment-Requirements.md).
+
+2. It is recommended to prioritize using `hostname` for IP configuration during deployment, which can avoid the problem of a later host IP change causing the database to fail to start. To set the host name, you need to configure /etc/hosts on the target server. For example, if the local IP is 192.168.1.3 and the host name is iotdb-1, you can use the following command to set the server's host name and configure the `cn_internal_address`, `dn_internal_address`, and `dn_rpc_address` of IoTDB using the host name.
+
+ ```shell
+ echo "192.168.1.3 iotdb-1" >> /etc/hosts
+ ```
+
+3. Some parameters cannot be modified after the first startup. Please refer to the "Parameter Configuration" section below for settings.
+
+4. Whether in Linux or Windows, ensure that the IoTDB installation path does not contain spaces or Chinese characters to avoid software exceptions.
+
+5. Please note that when installing and deploying IoTDB (including activating and using the software), you must use the same user for all operations. You can:
+
+ - Use the root user (recommended): using the root user avoids permission-related issues.
+ - Use a fixed non-root user:
+   - Use the same user for operations: ensure that the same user is used for start, activation, stop, and other operations, and do not switch users.
+   - Avoid using sudo: sudo executes commands with root privileges, which may cause permission confusion or security issues.
+
+6. It is recommended to deploy a monitoring panel, which can monitor important operational indicators and keep track of the database operation status at any time. The monitoring panel can be obtained by contacting the business department, and the steps for deploying it can be found in [Monitoring Board Install and Deploy](./Monitoring-panel-deployment.md).
+
+## Installation Steps
+
+### 1. Unzip the installation package and enter the installation directory
+
+```Plain
+unzip timechodb-{version}-bin.zip
+cd timechodb-{version}-bin
+```
+
+### 2. Parameter Configuration
+
+#### Memory Configuration
+
+- conf/confignode-env.sh(or .bat)
+
+ | **Configuration** | **Description** | **Default** | **Recommended value** | Note |
+ | :---------------: | :----------------------------------------------------------: | :---------: | :----------------------------------------------------------: | :---------------------------------: |
+ | MEMORY_SIZE | The total amount of memory that IoTDB ConfigNode nodes can use | empty | Can be filled in as needed, and the system will allocate memory based on the filled in values | Restarting the service takes effect |
+
+- conf/datanode-env.sh(or .bat)
+
+ | **Configuration** | **Description** | **Default** | **Recommended value** | **Note** |
+ | :---------------: | :----------------------------------------------------------: | :---------: | :----------------------------------------------------------: | :---------------------------------: |
+ | MEMORY_SIZE | The total amount of memory that IoTDB DataNode nodes can use | empty | Can be filled in as needed, and the system will allocate memory based on the filled in values | Restarting the service takes effect |
+
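+For example, to cap the DataNode memory at 8 GB you could set the following in conf/datanode-env.sh and restart the service (the 8G value is purely illustrative; size it to your machine):
+
+```shell
+MEMORY_SIZE=8G
+```
+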
+#### Function Configuration
+
+The parameters that actually take effect in the system are in the file conf/iotdb-system.properties. To start, the following parameters need to be set; you can view all of the parameters in the conf/iotdb-system.properties file.
+
+Cluster function configuration
+
+| **Configuration** | **Description** | **Default** | **Recommended value** | Note |
+| :-----------------------: | :----------------------------------------------------------: | :------------: | :----------------------------------------------------------: | :---------------------------------------------------: |
+| cluster_name | Cluster Name | defaultCluster | The cluster name can be set as needed, and if there are no special needs, the default can be kept | Cannot be modified after initial startup |
+| schema_replication_factor | Number of metadata replicas, set to 1 for the standalone version here | 1 | 1 | Default 1, cannot be modified after the first startup |
+| data_replication_factor | Number of data replicas, set to 1 for the standalone version here | 1 | 1 | Default 1, cannot be modified after the first startup |
+
+ConfigNode Configuration
+
+| **Configuration** | **Description** | **Default** | **Recommended value** | Note |
+| :-----------------: | :----------------------------------------------------------: | :-------------: | :----------------------------------------------------------: | :--------------------------------------: |
+| cn_internal_address | The address used by ConfigNode for communication within the cluster | 127.0.0.1 | The IPV4 address or host name of the server where it is located, and it is recommended to use host name | Cannot be modified after initial startup |
+| cn_internal_port | The port used by ConfigNode for communication within the cluster | 10710 | 10710 | Cannot be modified after initial startup |
+| cn_consensus_port | The port used for ConfigNode replica group consensus protocol communication | 10720 | 10720 | Cannot be modified after initial startup |
+| cn_seed_config_node | The address of the ConfigNode that the node connects to when registering to join the cluster, cn_internal_address:cn_internal_port | 127.0.0.1:10710 | cn_internal_address:cn_internal_port | Cannot be modified after initial startup |
+
+DataNode Configuration
+
+| **Configuration** | **Description** | **Default** | **Recommended value** | **Note** |
+| :------------------------------ | :----------------------------------------------------------- | :-------------- | :----------------------------------------------------------- | :--------------------------------------- |
+| dn_rpc_address | The address of the client RPC service | 0.0.0.0 | The IPV4 address or host name of the server where it is located, and it is recommended to use host name | Restarting the service takes effect |
+| dn_rpc_port | The port of the client RPC service | 6667 | 6667 | Restarting the service takes effect |
+| dn_internal_address | The address used by DataNode for communication within the cluster | 127.0.0.1 | The IPV4 address or host name of the server where it is located, and it is recommended to use host name | Cannot be modified after initial startup |
+| dn_internal_port | The port used by DataNode for communication within the cluster | 10730 | 10730 | Cannot be modified after initial startup |
+| dn_mpp_data_exchange_port | The port used by DataNode to receive data streams | 10740 | 10740 | Cannot be modified after initial startup |
+| dn_data_region_consensus_port | The port used by DataNode for data replica consensus protocol communication | 10750 | 10750 | Cannot be modified after initial startup |
+| dn_schema_region_consensus_port | The port used by DataNode for metadata replica consensus protocol communication | 10760 | 10760 | Cannot be modified after initial startup |
+| dn_seed_config_node | The ConfigNode address that the node connects to when registering to join the cluster, i.e. cn_internal-address: cn_internal_port | 127.0.0.1:10710 | cn_internal_address:cn_internal_port | Cannot be modified after initial startup |
+
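+For reference, a minimal conf/iotdb-system.properties for a 1C1D instance on a host named iotdb-1 could look like the following sketch. This is illustrative only: every value is taken from the tables above, iotdb-1 is an assumed host name, and any parameter not listed keeps its default.
+
+```Plain
+# Cluster (cannot be modified after the first startup)
+cluster_name=defaultCluster
+schema_replication_factor=1
+data_replication_factor=1
+
+# ConfigNode
+cn_internal_address=iotdb-1
+cn_internal_port=10710
+cn_consensus_port=10720
+cn_seed_config_node=iotdb-1:10710
+
+# DataNode
+dn_rpc_address=0.0.0.0
+dn_rpc_port=6667
+dn_internal_address=iotdb-1
+dn_internal_port=10730
+dn_mpp_data_exchange_port=10740
+dn_data_region_consensus_port=10750
+dn_schema_region_consensus_port=10760
+dn_seed_config_node=iotdb-1:10710
+```
+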
+### 3. Start ConfigNode
+
+Enter the sbin directory of IoTDB and start the ConfigNode:
+
+```shell
+cd sbin
+./start-confignode.sh -d    # The "-d" parameter starts the service in the background
+```
+
+If the startup fails, please refer to [Common Problems](#common-problems).
+
+### 4. Start DataNode
+
+Enter the sbin directory of IoTDB and start the DataNode:
+
+```shell
+cd sbin
+./start-datanode.sh -d    # The "-d" parameter starts the service in the background
+```
+
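+Before activating, you can optionally confirm that both services are up. A quick check (process names may vary slightly by version; ConfigNode and DataNode are the expected JVM main classes):
+
+```shell
+jps | grep -E "ConfigNode|DataNode"   # expect one ConfigNode and one DataNode process
+# or, if jps is unavailable:
+ps -ef | grep iotdb | grep -v grep
+```
+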
+### 5. Activate Database
+
+#### Method 1: Activation via file copy
+
+- After starting the ConfigNode and DataNode, enter the activation folder and copy the system_info file to the Timecho staff;
+
+- Receive the license file returned by the staff;
+
+- Place the license file in the activation folder of the corresponding node;
+
+#### Method 2: Activation via script
+
+- Obtain the machine code required for activation by entering the IoTDB CLI (./start-cli.sh -sql_dialect table, or start-cli.bat -sql_dialect table on Windows) and executing the following statement:
+
+ - Note: this is temporarily not supported while sql_dialect is table
+
+```shell
+show system info
+```
+
+- The following information is displayed. Please copy the machine code (i.e. the green string) to the Timecho staff:
+
+```sql
++--------------------------------------------------------------+
+|                                                    SystemInfo|
++--------------------------------------------------------------+
+|                                          01-TE5NLES4-UDDWCMYE|
++--------------------------------------------------------------+
+Total line number = 1
+It costs 0.030s
+```
+
+- Enter the activation code returned by the staff into the CLI as follows
+
+ - Note: the activation code needs to be wrapped in `'` symbols before and after, as shown below
+
+```sql
+IoTDB> activate '01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA==='
+```
+
+### 6. Verify Activation
+
+When the "ClusterActivation Status" field is displayed as Activated, it indicates successful activation.
+
+![](https://alioss.timecho.com/docs/img/%E5%8D%95%E6%9C%BA-%E9%AA%8C%E8%AF%81.png)
+
+## Common Problems
+
+1. Activation fails repeatedly during deployment
+
+ - Check ownership with `ls -al`: use the `ls -al` command to verify that the owner of the installation package root directory is the current user.
+
+ - Check the activation directory: verify that the owner of all files in the `./activation` directory is the current user.
+
+2. ConfigNode fails to start
+
+ Step 1: Check the startup log to see whether any parameters that cannot be modified after the first startup have been changed.
+
+ Step 2: Check the startup log for any other abnormalities. If there are abnormal phenomena in the log, please contact Timecho technical support for solutions.
+
+ Step 3: If this is the first deployment or the data can be deleted, you can also clean up the environment according to the following steps, redeploy, and restart.
+
+ Step 4: Clean up the environment:
+
+ a. Terminate all ConfigNode Node and DataNode processes.
+
+ ```Bash
+ # 1. Stop the ConfigNode and DataNode services
+ sbin/stop-standalone.sh
+
+ # 2. Check for any remaining processes
+ jps
+ # Or
+ ps -ef|grep iotdb
+
+ # 3. If there are any remaining processes, manually kill them
+ kill -9 <pid>
+ # If you are sure there is only one iotdb on the machine, you can use the following command to clean up residual processes
+ ps -ef|grep iotdb|grep -v grep|tr -s ' ' ' ' |cut -d ' ' -f2|xargs kill -9
+ ```
+
+ b. Delete the data and logs directories.
+
+ Explanation: Deleting the data directory is necessary; deleting the logs directory only keeps the logs clean and is not mandatory.
+
+ ```Bash
+ cd /data/iotdb
+ rm -rf data logs
+ ```
+
+## Appendix
+
+### Introduction to Configuration Node Parameters
+
+| Parameter | Description | Is it required |
+| :-------- | :---------------------------------------------- | :----------------- |
+| -d | Start in daemon mode, running in the background | No |
+
+### Introduction to Datanode Node Parameters
+
+| Abbreviation | Description | Is it required |
+| :----------- | :----------------------------------------------------------- | :------------- |
+| -v | Show version information | No |
+| -f | Run the script in the foreground, do not put it in the background | No |
+| -d | Start in daemon mode, i.e. run in the background | No |
+| -p | Specify a file to store the process ID for process management | No |
+| -c | Specify the path to the configuration file folder, the script will load the configuration file from here | No |
+| -g | Print detailed garbage collection (GC) information | No |
+| -H | Specify the path of the Java heap dump file, used when JVM memory overflows | No |
+| -E | Specify the path of the JVM error log file | No |
+| -D | Define system properties, in the format key=value | No |
+| -X | Pass -XX parameters directly to the JVM | No |
+| -h | Help instruction | No |
+
diff --git a/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
new file mode 100644
index 000000000..f44f729b9
--- /dev/null
+++ b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
@@ -0,0 +1,362 @@
+
+# Cluster Deployment
+
+This section describes how to manually deploy an instance that includes 3 ConfigNodes and 3 DataNodes, commonly known as a 3C3D cluster.
+
+
+
+
+
+## Note
+
+1. Before installation, ensure that the system has been prepared by referring to [System Requirements](../Deployment-and-Maintenance/Environment-Requirements.md).
+
+2. It is recommended to use `hostname` for IP configuration during deployment, which can avoid the problem of the database failing to start after the host IP is changed later. To set the host name, configure `/etc/hosts` on the target server. For example, if the local IP is 11.101.17.224 and the host name is iotdb-1, you can use the following command to set the server's host name, and configure the `cn_internal_address` and `dn_internal_address` of IoTDB using the host name.
+
+ ```shell
+ echo "11.101.17.224 iotdb-1" >> /etc/hosts
+ ```
+
+3. Some parameters cannot be modified after the first startup. Please refer to the [Parameter Configuration](#parameter-configuration) section below for settings.
+
+4. Whether in Linux or Windows, ensure that the IoTDB installation path does not contain spaces or Chinese characters to avoid software exceptions.
+
+5. Please note that when installing and deploying IoTDB (including activating and using the software), you can:
+
+- Use the root user (recommended): this avoids permission-related issues.
+
+- Use a fixed non-root user:
+
+ - Use the same user for operations: ensure that the same user is used for start, activation, stop, and other operations; do not switch users.
+
+ - Avoid using sudo: sudo executes commands with root privileges, which may cause permission confusion or security issues.
+
+6. It is recommended to deploy a monitoring panel, which can monitor important operational indicators and keep track of the database operation status at any time. The monitoring panel can be obtained by contacting the business department, and the steps for deploying it can be found in [Monitoring Panel Deployment](./Monitoring-panel-deployment.md)
+
+## Preparation
+
+1. Prepare the IoTDB database installation package: timechodb-{version}-bin.zip (for how to obtain the package, see: [IoTDB Package](./IoTDB-Package_timecho.md))
+2. Configure the operating system environment according to the requirements (see: [Environment Requirements](./Environment-Requirements.md))
+
+## Installation Steps
+
+Assume there are 3 Linux servers, with IP addresses and service roles assigned as follows:
+
+| Node IP | Host Name | Services |
+| ------------- | ------- | -------------------- |
+| 11.101.17.224 | iotdb-1 | ConfigNode, DataNode |
+| 11.101.17.225 | iotdb-2 | ConfigNode, DataNode |
+| 11.101.17.226 | iotdb-3 | ConfigNode, DataNode |
+
+### Set the Host Names
+
+Configure the host name on each of the 3 machines. To set the host names, configure /etc/hosts on the target servers using the following commands:
+
+```shell
+echo "11.101.17.224 iotdb-1" >> /etc/hosts
+echo "11.101.17.225 iotdb-2" >> /etc/hosts
+echo "11.101.17.226 iotdb-3" >> /etc/hosts
+```
+
+### Parameter Configuration
+
+Unzip the installation package and enter the installation directory:
+
+```shell
+unzip timechodb-{version}-bin.zip
+cd timechodb-{version}-bin
+```
+
+#### Environment Script Configuration
+
+- ./conf/confignode-env.sh configuration
+
+| **Configuration** | **Description** | **Default** | **Recommended value** | **Note** |
+| :---------- | :------------------------------------- | :--------- | :----------------------------------------------- | :----------- |
+| MEMORY_SIZE | The total amount of memory that the IoTDB ConfigNode can use | empty | Can be filled in as needed; the system will allocate memory based on the value | Takes effect after restarting the service |
+
+- ./conf/datanode-env.sh configuration
+
+| **Configuration** | **Description** | **Default** | **Recommended value** | **Note** |
+| :---------- | :----------------------------------- | :--------- | :----------------------------------------------- | :----------- |
+| MEMORY_SIZE | The total amount of memory that the IoTDB DataNode can use | empty | Can be filled in as needed; the system will allocate memory based on the value | Takes effect after restarting the service |
+
+#### General Configuration (./conf/iotdb-system.properties)
+
+- Cluster configuration
+
+| Configuration | Description | 11.101.17.224 | 11.101.17.225 | 11.101.17.226 |
+| ------------------------- | ---------------------------------------- | -------------- | -------------- | -------------- |
+| cluster_name | Cluster name | defaultCluster | defaultCluster | defaultCluster |
+| schema_replication_factor | Number of metadata replicas; the number of DataNodes should not be less than this value | 3 | 3 | 3 |
+| data_replication_factor | Number of data replicas; the number of DataNodes should not be less than this value | 2 | 2 | 2 |
+
+#### ConfigNode Configuration
+
+| Configuration | Description | Default | Recommended value | 11.101.17.224 | 11.101.17.225 | 11.101.17.226 | Note |
+| ------------------- | ------------------------------------------------------------ | --------------- | ------------------------------------------------------- | ------------- | ------------- | ------------- | ------------------ |
+| cn_internal_address | The address used by ConfigNode for communication within the cluster | 127.0.0.1 | The IPV4 address or host name of the server, host name recommended | iotdb-1 | iotdb-2 | iotdb-3 | Cannot be modified after initial startup |
+| cn_internal_port | The port used by ConfigNode for communication within the cluster | 10710 | 10710 | 10710 | 10710 | 10710 | Cannot be modified after initial startup |
+| cn_consensus_port | The port used for ConfigNode replica group consensus protocol communication | 10720 | 10720 | 10720 | 10720 | 10720 | Cannot be modified after initial startup |
+| cn_seed_config_node | The address of the ConfigNode that the node connects to when registering to join the cluster, cn_internal_address:cn_internal_port | 127.0.0.1:10710 | The cn_internal_address:cn_internal_port of the first ConfigNode | iotdb-1:10710 | iotdb-1:10710 | iotdb-1:10710 | Cannot be modified after initial startup |
+
+#### DataNode Configuration
+
+| Configuration | Description | Default | Recommended value | 11.101.17.224 | 11.101.17.225 | 11.101.17.226 | Note |
+| ------------------------------- | ------------------------------------------------------------ | --------------- | ------------------------------------------------------- | ------------- | ------------- | ------------- | ------------------ |
+| dn_rpc_address | The address of the client RPC service | 0.0.0.0 | 0.0.0.0 | 0.0.0.0 | 0.0.0.0 | 0.0.0.0 | Takes effect after restarting the service |
+| dn_rpc_port | The port of the client RPC service | 6667 | 6667 | 6667 | 6667 | 6667 | Takes effect after restarting the service |
+| dn_internal_address | The address used by DataNode for communication within the cluster | 127.0.0.1 | The IPV4 address or host name of the server, host name recommended | iotdb-1 | iotdb-2 | iotdb-3 | Cannot be modified after initial startup |
+| dn_internal_port | The port used by DataNode for communication within the cluster | 10730 | 10730 | 10730 | 10730 | 10730 | Cannot be modified after initial startup |
+| dn_mpp_data_exchange_port | The port used by DataNode to receive data streams | 10740 | 10740 | 10740 | 10740 | 10740 | Cannot be modified after initial startup |
+| dn_data_region_consensus_port | The port used by DataNode for data replica consensus protocol communication | 10750 | 10750 | 10750 | 10750 | 10750 | Cannot be modified after initial startup |
+| dn_schema_region_consensus_port | The port used by DataNode for metadata replica consensus protocol communication | 10760 | 10760 | 10760 | 10760 | 10760 | Cannot be modified after initial startup |
+| dn_seed_config_node | The ConfigNode address that the node connects to when registering to join the cluster, i.e. cn_internal_address:cn_internal_port | 127.0.0.1:10710 | The cn_internal_address:cn_internal_port of the first ConfigNode | iotdb-1:10710 | iotdb-1:10710 | iotdb-1:10710 | Cannot be modified after initial startup |
+
+> ❗️Note: editors such as VSCode Remote do not save files automatically; please make sure the modified files are persisted, otherwise the configuration items will not take effect
+
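+As a concrete illustration, the items to edit on iotdb-1 would end up as in the sketch below (values taken from the tables above; iotdb-2 and iotdb-3 differ only in cn_internal_address and dn_internal_address, while cn_seed_config_node and dn_seed_config_node point to the first ConfigNode on all three nodes):
+
+```Plain
+cluster_name=defaultCluster
+schema_replication_factor=3
+data_replication_factor=2
+cn_internal_address=iotdb-1
+cn_internal_port=10710
+cn_consensus_port=10720
+cn_seed_config_node=iotdb-1:10710
+dn_rpc_address=0.0.0.0
+dn_rpc_port=6667
+dn_internal_address=iotdb-1
+dn_internal_port=10730
+dn_mpp_data_exchange_port=10740
+dn_data_region_consensus_port=10750
+dn_schema_region_consensus_port=10760
+dn_seed_config_node=iotdb-1:10710
+```
+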
+### Start the ConfigNode Nodes
+
+Start the ConfigNode on iotdb-1 first, making sure the seed ConfigNode starts before the others, and then start the 2nd and 3rd ConfigNode nodes in turn:
+
+```shell
+cd sbin
+./start-confignode.sh -d    # The "-d" parameter starts the service in the background
+```
+
+If the startup fails, please refer to [Common Problems](#common-problems) below.
+
+### Start the DataNode Nodes
+
+Enter the sbin directory of IoTDB on each node and start the 3 DataNode nodes in turn:
+
+```shell
+cd sbin
+./start-datanode.sh -d    # The "-d" parameter starts the service in the background
+```
+
+### Activate the Database
+
+#### Method 1: Activation via file copy
+
+- After starting the 3 ConfigNode and DataNode nodes in turn, copy the system_info file from the activation folder of each machine to the Timecho staff;
+- The staff will return a license file for each ConfigNode and DataNode node; 3 license files will be returned here;
+- Place the 3 license files into the activation folders of the corresponding ConfigNode nodes;
+
+#### Method 2: Activation via script
+
+- Obtain the machine codes of the 3 machines in turn by entering the CLI of the IoTDB tree model (./start-cli.sh -sql_dialect table, or start-cli.bat -sql_dialect table on Windows) and executing the following statement:
+ - Note: this is temporarily not supported while sql_dialect is table
+
+```shell
+show system info
+```
+
+- The following information is displayed, showing the machine codes of the machines:
+
+```shell
++--------------------------------------------------------------+
+|                                                    SystemInfo|
++--------------------------------------------------------------+
+|01-TE5NLES4-UDDWCMYE,01-GG5NLES4-XXDWCMYE,01-FF5NLES4-WWWWCMYE|
++--------------------------------------------------------------+
+Total line number = 1
+It costs 0.030s
+```
+
+- Enter the CLI of the IoTDB tree model on the other 2 nodes in turn, execute the statement, and copy the machine codes of all 3 machines to the Timecho staff
+- The staff will return 3 activation codes, which normally correspond to the order of the 3 machine codes provided. Paste each activation code into the corresponding CLI, as prompted below:
+ - Note: the activation code needs to be wrapped in `'` symbols before and after, as shown below
+```shell
+ IoTDB> activate '01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA==='
+```
+
+### Verify Activation
+
+When the "Result" field is displayed as success, the activation is successful
+
+![](https://alioss.timecho.com/docs/img/%E9%9B%86%E7%BE%A4-%E9%AA%8C%E8%AF%81.png)
+
+## Node Maintenance
+
+### ConfigNode Maintenance
+
+ConfigNode maintenance consists of two operations, adding and removing ConfigNodes, with two common use cases:
+
+- Cluster expansion: for example, when there is only 1 ConfigNode in the cluster, you can add 2 more ConfigNodes to improve the high availability of the ConfigNodes, resulting in 3 ConfigNodes in the cluster.
+- Cluster failure recovery: when the machine hosting a ConfigNode fails and the ConfigNode cannot run normally, you can remove that ConfigNode and add a new ConfigNode to the cluster.
+
+> ❗️Note: after ConfigNode maintenance is completed, make sure that 1 or 3 ConfigNodes are running normally in the cluster. 2 ConfigNodes do not provide high availability, and more than 3 ConfigNodes cause performance loss.
+
+#### Adding a ConfigNode
+
+Script command:
+
+```shell
+# Linux / MacOS
+# First switch to the IoTDB root directory
+sbin/start-confignode.sh
+
+# Windows
+# First switch to the IoTDB root directory
+sbin/start-confignode.bat
+```
+
+#### Removing a ConfigNode
+
+First connect to the cluster through the CLI and confirm the internal address and port of the ConfigNode you want to remove using `show confignodes`:
+
+```shell
+IoTDB> show confignodes
++------+-------+---------------+------------+--------+
+|NodeID| Status|InternalAddress|InternalPort|    Role|
++------+-------+---------------+------------+--------+
+|     0|Running|      127.0.0.1|       10710|  Leader|
+|     1|Running|      127.0.0.1|       10711|Follower|
+|     2|Running|      127.0.0.1|       10712|Follower|
++------+-------+---------------+------------+--------+
+Total line number = 3
+It costs 0.030s
+```
+
+Then remove the ConfigNode using the script. Script command:
+
+```Bash
+# Linux / MacOS
+sbin/remove-confignode.sh [confignode_id]
+# or
+./sbin/remove-confignode.sh [cn_internal_address:cn_internal_port]
+
+# Windows
+sbin/remove-confignode.bat [confignode_id]
+# or
+./sbin/remove-confignode.bat [cn_internal_address:cn_internal_port]
+```
+
+### DataNode Maintenance
+
+There are two common scenarios for DataNode maintenance:
+
+- Cluster expansion: adding a new DataNode to the cluster for purposes such as expanding cluster capacity
+- Cluster failure recovery: when the machine hosting a DataNode fails and the DataNode cannot run normally, you can remove that DataNode and add a new DataNode to the cluster
+
+> ❗️Note: to keep the cluster working properly, during DataNode maintenance and after it is completed, the total number of DataNodes running normally must not be less than the number of data replicas (usually 2) or the number of metadata replicas (usually 3).
+
+#### Adding a DataNode
+
+Script command:
+
+```Bash
+# Linux / MacOS
+# First switch to the IoTDB root directory
+sbin/start-datanode.sh
+
+# Windows
+# First switch to the IoTDB root directory
+sbin/start-datanode.bat
+```
+
+Note: after a DataNode is added, the cluster load is gradually balanced toward the new DataNode as new writes arrive (and as old data expires, if a TTL is set), eventually reaching a balance of storage and computing resources across all nodes.
+
+#### Removing a DataNode
+
+First connect to the cluster through the CLI and confirm the RPC address and port of the DataNode you want to remove using `show datanodes`:
+
+```Bash
+IoTDB> show datanodes
++------+-------+----------+-------+-------------+---------------+
+|NodeID| Status|RpcAddress|RpcPort|DataRegionNum|SchemaRegionNum|
++------+-------+----------+-------+-------------+---------------+
+|     1|Running|   0.0.0.0|   6667|            0|              0|
+|     2|Running|   0.0.0.0|   6668|            1|              1|
+|     3|Running|   0.0.0.0|   6669|            1|              0|
++------+-------+----------+-------+-------------+---------------+
+Total line number = 3
+It costs 0.110s
+```
+
+Then remove the DataNode using the script. Script command:
+
+```Bash
+# Linux / MacOS
+sbin/remove-datanode.sh [dn_rpc_address:dn_rpc_port]
+
+# Windows
+sbin/remove-datanode.bat [dn_rpc_address:dn_rpc_port]
+```
+
+## Common Problems
+
+1. Activation fails repeatedly during deployment
+ - Check ownership with `ls -al`: use the `ls -al` command to verify that the owner of the installation package root directory is the current user.
+ - Check the activation directory: verify that the owner of all files in the `./activation` directory is the current user.
+2. ConfigNode fails to start
+ - Step 1: check the startup log to see whether any parameters that cannot be modified after the first startup have been changed.
+ - Step 2: check the startup log for any other abnormalities. If there are abnormal phenomena in the log, please contact Timecho technical support for solutions.
+ - Step 3: if this is the first deployment or the data can be deleted, you can also clean up the environment as follows, redeploy, and start again.
+ - Clean up the environment:
+
+ 1. Terminate all ConfigNode and DataNode processes.
+ ```Bash
+ # 1. Stop the ConfigNode and DataNode services
+ sbin/stop-standalone.sh
+
+ # 2. Check for any remaining processes
+ jps
+ # or
+ ps -ef|grep iotdb
+
+ # 3. If there are any remaining processes, manually kill them
+ kill -9 <pid>
+ # If you are sure there is only one iotdb on the machine, you can use the following command to clean up residual processes
+ ps -ef|grep iotdb|grep -v grep|tr -s ' ' ' ' |cut -d ' ' -f2|xargs kill -9
+ ```
+
+ 2. Delete the data and logs directories.
+ - Note: deleting the data directory is necessary; deleting the logs directory is only to keep the logs clean and is not mandatory.
+ ```shell
+ cd /data/iotdb
+ rm -rf data logs
+ ```
+## Appendix
+
+### ConfigNode Parameters
+
+| Parameter | Description | Required |
+| :--- | :------------------------------- | :----------- |
+| -d | Start in daemon mode, i.e. run in the background | No |
+
+### DataNode Parameters
+
+| Abbreviation | Description | Required |
+| :--- | :--------------------------------------------- | :----------- |
+| -v | Show version information | No |
+| -f | Run the script in the foreground, do not put it in the background | No |
+| -d | Start in daemon mode, i.e. run in the background | No |
+| -p | Specify a file to store the process ID, for process management | No |
+| -c | Specify the path of the configuration folder; the script loads the configuration files from there | No |
+| -g | Print detailed garbage collection (GC) information | No |
+| -H | Specify the path of the Java heap dump file, used when the JVM runs out of memory | No |
+| -E | Specify the path of the JVM error log file | No |
+| -D | Define system properties, in the format key=value | No |
+| -X | Pass -XX parameters directly to the JVM | No |
+| -h | Help | No |
+
diff --git a/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Database-Resources.md b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Database-Resources.md
new file mode 100644
index 000000000..17e09aa0f
--- /dev/null
+++ b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Database-Resources.md
@@ -0,0 +1,193 @@
+
+# Resource Planning
+## CPU
+
+| Number of series (sampling rate <= 1Hz) | CPU | Nodes: standalone | Nodes: active-active | Nodes: distributed |
+| ---- | ---- | ---- | ---- | ---- |
+| Up to 100,000 | 2-4 cores | 1 | 2 | 3 |
+| Up to 300,000 | 4-8 cores | 1 | 2 | 3 |
+| Up to 500,000 | 8-16 cores | 1 | 2 | 3 |
+| Up to 1 million | 16-32 cores | 1 | 2 | 3 |
+| Up to 2 million | 32-48 cores | 1 | 2 | 3 |
+| Up to 10 million | 48 cores | 1 | 2 | Please contact Timecho business for consultation |
+| Above 10 million | Please contact Timecho business for consultation | | | |
+
+
+
+## Memory
+
+| Number of series (sampling rate <= 1Hz) | Memory | Nodes: standalone | Nodes: active-active | Nodes: distributed |
+| ---- | ---- | ---- | ---- | ---- |
+| Up to 100,000 | 4GB-8GB | 1 | 2 | 3 |
+| Up to 300,000 | 12GB-32GB | 1 | 2 | 3 |
+| Up to 500,000 | 24GB-48GB | 1 | 2 | 3 |
+| Up to 1 million | 32GB-96GB | 1 | 2 | 3 |
+| Up to 2 million | 64GB-128GB | 1 | 2 | 3 |
+| Up to 10 million | 128GB | 1 | 2 | Please contact Timecho business for consultation |
+| Above 10 million | Please contact Timecho business for consultation | | | |
+
+
+
+## Storage (Disk)
+### Storage Space
+Calculation formula: number of measurement points * sampling rate (Hz) * size of each data point (bytes; it differs by data type, see the table below)
+
+Data point size table:
+
+| Data type | Timestamp (bytes) | Value (bytes) | Total size of a data point (bytes) |
+| ---- | ---- | ---- | ---- |
+| Boolean | 8 | 1 | 9 |
+| INT32 / FLOAT | 8 | 4 | 12 |
+| INT64 / DOUBLE | 8 | 8 | 16 |
+| TEXT | 8 | a on average | 8+a |
+
+Example: 1000 devices, 100 measurement points per device, 100,000 series in total, INT32 type, 1Hz sampling rate (once per second), 1 year of storage, 3 replicas.
+- Full formula: 1000 devices * 100 points * 12 bytes per data point * 86400 seconds per day * 365 days per year * 3 replicas / 10 (compression ratio) = 11TB
+- Simplified formula: 1000 * 100 * 12 * 86400 * 365 * 3 / 10 = 11TB (a quick shell check follows below)
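+
+As a quick sanity check of the arithmetic above, the same numbers can be evaluated in a shell (the 10x compression ratio is the assumption used in this example):
+
+```Bash
+# bytes per year for 100,000 INT32 series at 1Hz, 3 replicas, ~10x compression
+echo "1000 * 100 * 12 * 86400 * 365 * 3 / 10" | bc
+# => 11352960000000 bytes, i.e. roughly 11 TB
+```
+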
+### Storage Configuration
+For more than 10 million measurement points or heavy query loads, SSDs are recommended.
+## Network (NIC)
+When the write throughput does not exceed 10 million points/second, configure a 1Gbps NIC; when the write throughput exceeds 10 million points/second, configure a 10Gbps NIC.
+| **Write throughput (data points/second)** | **NIC speed** |
+| ------------------- | ------------- |
+| <10 million | 1Gbps |
+| >=10 million | 10Gbps |
+## Other Notes
+IoTDB can scale out a cluster in seconds, and data does not need to be migrated when nodes are added. Therefore, you do not need to worry that the cluster capacity estimated from the current data volume is limited; you can add new nodes to the cluster whenever expansion is needed in the future.
\ No newline at end of file
diff --git a/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Environment-Requirements.md b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Environment-Requirements.md
new file mode 100644
index 000000000..99c5b14cc
--- /dev/null
+++ b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Environment-Requirements.md
@@ -0,0 +1,205 @@
+
+# System Requirements
+
+## Disk Array
+
+### Configuration Suggestions
+
+IoTDB has no strict operational requirements on disk array configuration. It is recommended to use multiple disk arrays to store IoTDB data so that multiple disk arrays can be written concurrently. The configuration can refer to the following suggestions:
+
+1. Physical environment
+   System disk: it is recommended to use 2 disks in RAID1; only the space occupied by the operating system itself needs to be considered, and no system disk space needs to be reserved for IoTDB
+   Data disks:
+   RAID is recommended to protect data at the disk level
+   It is recommended to provide multiple disks (about 1-6) or disk groups for IoTDB (it is not recommended to build all disks into one disk array, as this limits the performance ceiling of IoTDB)
+2. Virtual environment
+   It is recommended to mount multiple disks (about 1-6)
+
+### Configuration Examples
+
+- Example 1: 4 3.5-inch disks
+
+Since only a few disks are installed in the server, RAID5 is sufficient and no other configuration is needed.
+
+The recommended configuration is as follows:
+
+| **Usage** | **RAID type** | **Number of disks** | **Redundancy** | **Usable capacity** |
+| ----------- | -------- | -------- | --------- | -------- |
+| System / data disk | RAID5 | 4 | Tolerates 1 failed disk | 3 |
+
+- Example 2: 12 3.5-inch disks
+
+The server is configured with 12 3.5-inch disks.
+
+The first 2 disks are recommended as the system disk in RAID1, and the data disks can be divided into 2 RAID5 groups; in each group of 5 disks, 4 are usable.
+
+The recommended configuration is as follows:
+
+| **Usage** | **RAID type** | **Number of disks** | **Redundancy** | **Usable capacity** |
+| -------- | -------- | -------- | --------- | -------- |
+| System disk | RAID1 | 2 | Tolerates 1 failed disk | 1 |
+| Data disk | RAID5 | 5 | Tolerates 1 failed disk | 4 |
+| Data disk | RAID5 | 5 | Tolerates 1 failed disk | 4 |
+
+- Example 3: 24 2.5-inch disks
+
+The server is configured with 24 2.5-inch disks.
+
+The first 2 disks are recommended as the system disk in RAID1, and the rest can be divided into 3 RAID5 groups; in each group of 7 disks, 6 are usable. The remaining disk can be left idle or used to store the write-ahead log.
+
+The recommended configuration is as follows:
+
+| **Usage** | **RAID type** | **Number of disks** | **Redundancy** | **Usable capacity** |
+| -------- | -------- | -------- | --------- | -------- |
+| System disk | RAID1 | 2 | Tolerates 1 failed disk | 1 |
+| Data disk | RAID5 | 7 | Tolerates 1 failed disk | 6 |
+| Data disk | RAID5 | 7 | Tolerates 1 failed disk | 6 |
+| Data disk | RAID5 | 7 | Tolerates 1 failed disk | 6 |
+| Data disk | No RAID | 1 | Data lost if the disk fails | 1 |
+
+## Operating System
+
+### Version Requirements
+
+IoTDB supports operating systems such as Linux, Windows, and MacOS. The enterprise edition also supports domestic CPUs such as Loongson, Phytium, and Kunpeng, and supports domestic server operating systems such as NeoKylin, Kylin (KylinSoft), UOS (UnionTech), and Linx.
+
+### Disk Partitioning
+
+- The default standard partitioning scheme is recommended; LVM extension and disk encryption are not recommended.
+- The system disk only needs to meet the space requirements of the operating system; no space needs to be reserved for IoTDB.
+- Each disk group should correspond to only one partition; the data disks (which contain multiple disk groups, corresponding to the RAIDs) need no additional partitioning, and all of their space is given to IoTDB.
+
+The recommended disk partitioning scheme is shown in the following table.
+
+| Disk | Disk group | Mount point | Size | File system type |
+| ---- | ---- | ---- | ---- | ---- |
+| System disk | Disk group 0 | /boot | 1GB | Default |
+| | | / | All remaining space of the disk group | Default |
+| Data disk | Disk group 1 | /data1 | All space of disk group 1 | Default |
+| | Disk group 2 | /data2 | All space of disk group 2 | Default |
+| | ...... | | | |
+
+### Network Configuration
+
+1. Disable the firewall
+
+```Bash
+# Check the firewall status
+systemctl status firewalld
+# Stop the firewall
+systemctl stop firewalld
+# Disable the firewall permanently
+systemctl disable firewalld
+```
+
+2. Make sure the required ports are not occupied
+
+(1) Check the ports occupied by the cluster: in the default cluster configuration, a ConfigNode occupies ports 10710 and 10720, and a DataNode occupies ports 6667, 10730, 10740, 10750, 10760, 9090, 9190, and 3000. Please make sure these ports are not occupied. The check method is as follows:
+
+```Bash
+lsof -i:6667   # or: netstat -tunp | grep 6667
+lsof -i:10710  # or: netstat -tunp | grep 10710
+lsof -i:10720  # or: netstat -tunp | grep 10720
+# If a command produces output, the port is already occupied.
+```
+
+(2) Check the port occupied by the cluster deployment tool: when using the cluster management tool opskit to install and deploy the cluster, enable the SSH remote connection service and open port 22.
+
+```Bash
+yum install openssh-server   # install the SSH service
+systemctl start sshd         # enable port 22
+```
+
+3. Make sure the servers can reach each other over the network
+
+### Other Configuration
+
+1. Disable system swap
+
+```Bash
+echo "vm.swappiness = 0" >> /etc/sysctl.conf
+# Executing swapoff -a followed by swapon -a moves the data in swap back into memory and empties the swap.
+# Do not omit the swappiness setting and only run swapoff -a; otherwise swap will be re-enabled automatically after a reboot and the operation will be ineffective.
+swapoff -a && swapon -a
+# Make the configuration take effect without rebooting.
+sysctl -p
+# Check the memory allocation; swap is expected to be 0
+free -m
+```
+
+2. Set the maximum number of open files to 65535 to avoid "too many open files" errors.
+
+```Bash
+# Check the current limit
+ulimit -n
+# Temporary modification
+ulimit -n 65535
+# Permanent modification
+echo "* soft nofile 65535" >> /etc/security/limits.conf
+echo "* hard nofile 65535" >> /etc/security/limits.conf
+# Check after exiting the current terminal session; 65535 is expected
+ulimit -n
+```
+
+## Software Dependencies
+
+Install the Java runtime environment, Java version >= 1.8, and make sure the JDK environment variables are set. (For V1.3.2.2 and above, it is recommended to deploy JDK 17 directly; with older JDKs, performance suffers in some scenarios and the DataNode may fail to stop.)
+
+```Bash
+# The following takes installing JDK-17 on CentOS 7 as an example:
+tar -zxvf jdk-17_linux-x64_bin.tar   # unpack the JDK archive
+vim ~/.bashrc                        # configure the JDK environment
+# Add the JDK environment variables:
+export JAVA_HOME=/usr/lib/jvm/jdk-17.0.9
+export PATH=$JAVA_HOME/bin:$PATH
+# Then apply the configuration and verify:
+source ~/.bashrc                     # make the configuration take effect
+java -version                        # check the JDK environment
+```
\ No newline at end of file
diff --git a/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/IoTDB-Package_timecho.md b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/IoTDB-Package_timecho.md
new file mode 100644
index 000000000..6c66c7fb9
--- /dev/null
+++ b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/IoTDB-Package_timecho.md
@@ -0,0 +1,45 @@
+
+# Obtaining the Installation Package
+## How to Obtain
+
+The enterprise edition installation package can be obtained through a product trial application, or by directly contacting the staff working with you.
+
+## Installation Package Structure
+
+The directory structure after unpacking the installation package is as follows:
+
+| **Directory** | **Type** | **Description** |
+| :--------------- | :------- | :----------------------------------------------------------- |
+| activation | Folder | Directory of activation files, including the generated machine code and the enterprise edition activation code obtained from the Timecho staff (this directory is generated only after the ConfigNode is started, after which the activation code can be obtained) |
+| conf | Folder | Configuration file directory, including configuration files such as ConfigNode, DataNode, JMX, and logback |
+| data | Folder | Default data file directory, containing the data files of ConfigNode and DataNode (this directory is generated only after the program is started) |
+| lib | Folder | Library file directory |
+| licenses | Folder | Open-source license certificate directory |
+| logs | Folder | Default log file directory, containing the log files of ConfigNode and DataNode (this directory is generated only after the program is started) |
+| sbin | Folder | Main script directory, containing scripts for starting and stopping the database |
+| tools | Folder | Tool directory |
+| ext | Folder | Files related to pipe, trigger, and udf plugins |
+| LICENSE | File | Open-source license file |
+| NOTICE | File | Open-source notice file |
+| README_ZH.md | File | Instructions (Chinese version) |
+| README.md | File | Instructions (English version) |
+| RELEASE_NOTES.md | File | Release notes |
diff --git a/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Monitoring-panel-deployment.md b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Monitoring-panel-deployment.md
new file mode 100644
index 000000000..c7fba837e
--- /dev/null
+++ b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Monitoring-panel-deployment.md
@@ -0,0 +1,682 @@
+
+# Monitoring Panel Deployment
+
+The IoTDB monitoring panel is one of the supporting tools of the IoTDB enterprise edition. It is designed to monitor IoTDB and the operating system it runs on, covering operating system resource monitoring, IoTDB performance monitoring, and hundreds of kernel monitoring metrics, helping users monitor the health of the cluster and perform cluster tuning and operations. This article takes the common 3C3D cluster (3 ConfigNodes and 3 DataNodes) as an example to introduce how to enable the system monitoring module in an IoTDB instance and use Prometheus + Grafana to visualize the system monitoring metrics.
+
+## Preparation
+
+1. Install IoTDB: install the IoTDB V1.0 enterprise edition or above first; you can contact business or technical support to obtain it
+2. Obtain the IoTDB monitoring panel installation package: the database monitoring panel is based on the enterprise edition of IoTDB; you can contact business or technical support to obtain it
+
+## Installation Steps
+
+### Step 1: Enable Metric Collection in IoTDB
+
+1. Enable the monitoring configuration items. The monitoring-related configuration items in IoTDB are disabled by default. Before deploying the monitoring panel, you need to enable the relevant configuration items (note that the services must be restarted after the monitoring configuration is enabled).
+
+| Configuration | Configuration file | Description |
+| :--------------------------------- | :------------------------------- | :----------------------------------------------------------- |
+| cn_metric_reporter_list | conf/iotdb-system.properties | Uncomment the configuration item and set the value to PROMETHEUS |
+| cn_metric_level | conf/iotdb-system.properties | Uncomment the configuration item and set the value to IMPORTANT |
+| cn_metric_prometheus_reporter_port | conf/iotdb-system.properties | Uncomment the configuration item; the default 9091 can be kept, or any other port that does not conflict with other ports |
+| dn_metric_reporter_list | conf/iotdb-system.properties | Uncomment the configuration item and set the value to PROMETHEUS |
+| dn_metric_level | conf/iotdb-system.properties | Uncomment the configuration item and set the value to IMPORTANT |
+| dn_metric_prometheus_reporter_port | conf/iotdb-system.properties | Uncomment the configuration item; the default 9092 can be kept, or any other port that does not conflict with other ports |
+
+Taking the 3C3D cluster as an example, the monitoring configurations that need to be modified are as follows:
+
+| Node IP | Host name | Cluster role | Configuration file path | Configuration |
+| ----------- | ------- | ---------- | -------------------------------- | ------------------------------------------------------------ |
+| 192.168.1.3 | iotdb-1 | confignode | conf/iotdb-system.properties | cn_metric_reporter_list=PROMETHEUS cn_metric_level=IMPORTANT cn_metric_prometheus_reporter_port=9091 |
+| 192.168.1.4 | iotdb-2 | confignode | conf/iotdb-system.properties | cn_metric_reporter_list=PROMETHEUS cn_metric_level=IMPORTANT cn_metric_prometheus_reporter_port=9091 |
+| 192.168.1.5 | iotdb-3 | confignode | conf/iotdb-system.properties | cn_metric_reporter_list=PROMETHEUS cn_metric_level=IMPORTANT cn_metric_prometheus_reporter_port=9091 |
+| 192.168.1.3 | iotdb-1 | datanode | conf/iotdb-system.properties | dn_metric_reporter_list=PROMETHEUS dn_metric_level=IMPORTANT dn_metric_prometheus_reporter_port=9092 |
+| 192.168.1.4 | iotdb-2 | datanode | conf/iotdb-system.properties | dn_metric_reporter_list=PROMETHEUS dn_metric_level=IMPORTANT dn_metric_prometheus_reporter_port=9092 |
+| 192.168.1.5 | iotdb-3 | datanode | conf/iotdb-system.properties | dn_metric_reporter_list=PROMETHEUS dn_metric_level=IMPORTANT dn_metric_prometheus_reporter_port=9092 |
+
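+以 iotdb-1 节点为例,修改完成后可用如下命令核对监控配置是否已生效(仅为示意,输出应与上表一致):
+
+```Bash
+grep -E "^(cn|dn)_metric" conf/iotdb-system.properties
+# 预期输出:
+# cn_metric_reporter_list=PROMETHEUS
+# cn_metric_level=IMPORTANT
+# cn_metric_prometheus_reporter_port=9091
+# dn_metric_reporter_list=PROMETHEUS
+# dn_metric_level=IMPORTANT
+# dn_metric_prometheus_reporter_port=9092
+```
+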
+2. 重启所有节点。修改3个节点的监控指标配置后,可重新启动所有节点的confignode和datanode服务:
+
+```shell
+./sbin/stop-standalone.sh #先停止confignode和datanode
+./sbin/start-confignode.sh -d #启动confignode
+./sbin/start-datanode.sh -d #启动datanode
+```
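+
+若三台服务器的安装目录一致且已配置 SSH 免密登录,也可参考如下脚本批量重启(仅为示意,其中 /data/iotdb 为假设的安装路径):
+
+```Bash
+# 依次在 iotdb-1、iotdb-2、iotdb-3 上重启 ConfigNode 和 DataNode
+for host in iotdb-1 iotdb-2 iotdb-3; do
+  ssh $host "cd /data/iotdb && ./sbin/stop-standalone.sh; ./sbin/start-confignode.sh -d && ./sbin/start-datanode.sh -d"
+done
+```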
+
+3. 重启后,通过客户端确认各节点的运行状态,若状态都为Running,则为配置成功:
+
+![](https://alioss.timecho.com/docs/img/%E5%90%AF%E5%8A%A8.PNG)
+
+### 步骤二:安装、配置Prometheus
+
+> 此处以prometheus安装在服务器192.168.1.3为例。
+
+1. 下载 Prometheus 安装包,要求安装 V2.30.3 版本及以上,可前往 Prometheus 官网下载(https://prometheus.io/docs/introduction/first_steps/)
+2. 解压安装包,进入解压后的文件夹:
+
+```Shell
+tar xvfz prometheus-*.tar.gz
+cd prometheus-*
+```
+
+3. 修改配置。修改配置文件prometheus.yml如下
+ 1. 新增confignode任务收集ConfigNode的监控数据
+ 2. 新增datanode任务收集DataNode的监控数据
+
+```yaml
+global:
+ scrape_interval: 15s
+ evaluation_interval: 15s
+scrape_configs:
+ - job_name: "prometheus"
+ static_configs:
+ - targets: ["localhost:9090"]
+ - job_name: "confignode"
+ static_configs:
+ - targets: ["iotdb-1:9091","iotdb-2:9091","iotdb-3:9091"]
+ honor_labels: true
+ - job_name: "datanode"
+ static_configs:
+ - targets: ["iotdb-1:9092","iotdb-2:9092","iotdb-3:9092"]
+ honor_labels: true
+```
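+
+修改完成后,可先用 Prometheus 安装包自带的 promtool 校验配置文件语法(仅为示意):
+
+```Bash
+./promtool check config prometheus.yml
+# 校验通过时会输出 SUCCESS 字样
+```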
+
+4. 启动Prometheus。Prometheus 监控数据的默认过期时间为15天,在生产环境中,建议将其调整为180天以上,以对更长时间的历史监控数据进行追踪,启动命令如下所示:
+
+```Shell
+./prometheus --config.file=prometheus.yml --storage.tsdb.retention.time=180d
+```
+
+5. 确认启动成功。在浏览器中输入 http://192.168.1.3:9090,进入Prometheus,点击进入Status下的Target界面,当看到State均为Up时表示配置成功并已经联通。
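+
+除浏览器外,也可在命令行用 curl 确认各采集端点已联通(仅为示意,端口与上文配置一致,指标端点路径以实际版本为准):
+
+```Bash
+curl -s http://iotdb-1:9091/metrics | head -n 5   # ConfigNode 指标端点
+curl -s http://iotdb-1:9092/metrics | head -n 5   # DataNode 指标端点
+```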
+
+6. 点击Targets中左侧链接可以跳转到网页监控,查看相应节点的监控信息:
+
+![](https://alioss.timecho.com/docs/img/%E8%8A%82%E7%82%B9%E7%9B%91%E6%8E%A7.png)
+
+### 步骤三:安装 Grafana 并配置数据源
+
+> 此处以Grafana安装在服务器192.168.1.3为例。
+
+1. 下载 Grafana 安装包,要求安装 V8.4.2 版本及以上,可以前往Grafana官网下载(https://grafana.com/grafana/download)
+2. 解压并进入对应文件夹
+
+```Shell
+tar -zxvf grafana-*.tar.gz
+cd grafana-*
+```
+
+3. 启动Grafana:
+
+```Shell
+./bin/grafana-server web
+```
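+
+如需让 Grafana 在退出终端后仍保持运行,也可用 nohup 在后台启动(仅为示意):
+
+```Bash
+nohup ./bin/grafana-server web > grafana.log 2>&1 &
+```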
+
+4. 登录Grafana。在浏览器中输入 http://192.168.1.3:3000(或修改后的端口),进入Grafana,默认初始用户名和密码均为 admin。
+
+5. 配置数据源。在 Connections 中找到 Data sources,新增一个 data source,类型选择 Prometheus
+
+![](https://alioss.timecho.com/docs/img/%E6%B7%BB%E5%8A%A0%E9%85%8D%E7%BD%AE.png)
+
+在配置 Data Source 时,注意填写 Prometheus 所在的 URL。配置完成后点击 Save & Test,出现 Data source is working 提示则为配置成功
+
+![](https://alioss.timecho.com/docs/img/%E9%85%8D%E7%BD%AE%E6%88%90%E5%8A%9F.png)
+
+### 步骤四:导入IoTDB Grafana看板
+
+1. 进入Grafana,选择Dashboards:
+
+ ![](https://alioss.timecho.com/docs/img/%E9%9D%A2%E6%9D%BF%E9%80%89%E6%8B%A9.png)
+
+2. 点击右侧 Import 按钮
+
+ ![](https://alioss.timecho.com/docs/img/Import%E6%8C%89%E9%92%AE.png)
+
+3. 使用upload json file的方式导入Dashboard
+
+ ![](https://alioss.timecho.com/docs/img/%E5%AF%BC%E5%85%A5Dashboard.png)
+
+4. 选择 IoTDB 监控面板中的一个面板 json 文件,这里以 Apache IoTDB ConfigNode Dashboard 为例(监控面板安装包获取参见本文【安装准备】):
+
+ ![](https://alioss.timecho.com/docs/img/%E9%80%89%E6%8B%A9%E9%9D%A2%E6%9D%BF.png)
+
+5. 选择数据源为Prometheus,然后点击Import
+
+ ![](https://alioss.timecho.com/docs/img/%E9%80%89%E6%8B%A9%E6%95%B0%E6%8D%AE%E6%BA%90.png)
+
+6. 之后就可以看到导入的Apache IoTDB ConfigNode Dashboard监控面板
+
+ ![](https://alioss.timecho.com/docs/img/%E9%9D%A2%E6%9D%BF.png)
+
+7. 同样地,我们可以导入Apache IoTDB DataNode Dashboard、Apache Performance Overview Dashboard、Apache System Overview Dashboard,可看到如下的监控面板:
+
+8. 至此,IoTDB监控面板就全部导入完成了,现在可以随时查看监控信息了。
+
+ ![](https://alioss.timecho.com/docs/img/%E9%9D%A2%E6%9D%BF%E6%B1%87%E6%80%BB.png)
+
+## 附录:监控指标详解
+
+### 系统面板(System Dashboard)
+
+该面板展示了当前系统 CPU、内存、磁盘、网络资源的使用情况以及 JVM 的部分状况。
+
+#### CPU
+
+- CPU Core:CPU 核数
+- CPU Load:
+ - System CPU Load:整个系统在采样时间内 CPU 的平均负载和繁忙程度
+ - Process CPU Load:IoTDB 进程在采样时间内占用的 CPU 比例
+- CPU Time Per Minute:系统每分钟内所有进程的 CPU 时间总和
+
+#### Memory
+
+- System Memory:当前系统内存的使用情况。
+  - Committed vm size: 操作系统分配给正在运行的进程使用的虚拟内存的大小。
+ - Total physical memory:系统可用物理内存的总量。
+ - Used physical memory:系统已经使用的内存总量。包含进程实际使用的内存量和操作系统 buffers/cache 占用的内存。
+- System Swap Memory:交换空间(Swap Space)内存用量。
+- Process Memory:IoTDB 进程使用内存的情况。
+ - Max Memory:IoTDB 进程能够从操作系统那里最大请求到的内存量。(datanode-env/confignode-env 配置文件中配置分配的内存大小)
+ - Total Memory:IoTDB 进程当前已经从操作系统中请求到的内存总量。
+ - Used Memory:IoTDB 进程当前已经使用的内存总量。
+
+#### Disk
+
+- Disk Space:
+ - Total disk space:IoTDB 可使用的最大磁盘空间。
+ - Used disk space:IoTDB 已经使用的磁盘空间。
+- Log Number Per Minute:采样时间内每分钟 IoTDB 各级别日志数量的平均值。
+- File Count:IoTDB 相关文件数量
+ - all:所有文件数量
+ - TsFile:TsFile 数量
+ - seq:顺序 TsFile 数量
+ - unseq:乱序 TsFile 数量
+ - wal:WAL 文件数量
+ - cross-temp:跨空间合并 temp 文件数量
+ - inner-seq-temp:顺序空间内合并 temp 文件数量
+  - inner-unseq-temp:乱序空间内合并 temp 文件数量
+ - mods:墓碑文件数量
+- Open File Count:系统打开的文件句柄数量
+- File Size:IoTDB 相关文件的大小。各子项分别是对应文件的大小。
+- Disk I/O Busy Rate:等价于 iostat 中的 %util 指标,一定程度上反映磁盘的繁忙程度。各子项分别是对应磁盘的指标。
+- Disk I/O Throughput:系统各磁盘在一段时间 I/O Throughput 的平均值。各子项分别是对应磁盘的指标。
+- Disk I/O Ops:等价于 iostat 中的 r/s 、w/s、rrqm/s、wrqm/s 四个指标,指的是磁盘每秒钟进行 I/O 的次数。read 和 write 指的是磁盘执行单次 I/O 的次数,由于块设备有相应的调度算法,在某些情况下可以将多个相邻的 I/O 合并为一次进行,merged-read 和 merged-write 指的是将多个 I/O 合并为一个 I/O 进行的次数。
+- Disk I/O Avg Time:等价于 iostat 的 await,即每个 I/O 请求的平均时延。读和写请求分开记录。
+- Disk I/O Avg Size:等价于 iostat 的 avgrq-sz,反映了每个 I/O 请求的大小。读和写请求分开记录。
+- Disk I/O Avg Queue Size:等价于 iostat 中的 avgqu-sz,即 I/O 请求队列的平均长度。
+- I/O System Call Rate:进程调用读写系统调用的频率,类似于 IOPS。
+- I/O Throughput:进程进行 I/O 的吞吐量,分为 actual_read/write 和 attempt_read/write 两类。actual read 和 actual write 指的是进程实际导致块设备进行 I/O 的字节数,不包含被 Page Cache 处理的部分。
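+
+上述 Disk I/O 各项指标与 iostat 的输出列一一对应,排查磁盘问题时可直接用 iostat 对照确认(仅为示意):
+
+```Bash
+iostat -x 1 3   # 扩展模式,每 1 秒采样一次,共 3 次
+# 对照列:%util(Busy Rate)、r/s、w/s、rrqm/s、wrqm/s(Ops)、await(Avg Time)、avgrq-sz(Avg Size)、avgqu-sz(Avg Queue Size)
+```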
+
+#### JVM
+
+- GC Time Percentage:节点 JVM 在过去一分钟的时间窗口内,GC 耗时所占的比例
+- GC Allocated/Promoted Size Detail: 节点 JVM 平均每分钟晋升到老年代的对象大小,新生代/老年代和非分代新申请的对象大小
+- GC Data Size Detail:节点 JVM 长期存活的对象大小和对应代际允许的最大值
+- Heap Memory:JVM 堆内存使用情况。
+ - Maximum heap memory:JVM 最大可用的堆内存大小。
+ - Committed heap memory:JVM 已提交的堆内存大小。
+ - Used heap memory:JVM 已经使用的堆内存大小。
+ - PS Eden Space:PS Young 区的大小。
+ - PS Old Space:PS Old 区的大小。
+ - PS Survivor Space:PS Survivor 区的大小。
+ - ...(CMS/G1/ZGC 等)
+- Off Heap Memory:堆外内存用量。
+ - direct memory:堆外直接内存。
+ - mapped memory:堆外映射内存。
+- GC Number Per Minute:节点 JVM 平均每分钟进行垃圾回收的次数,包括 YGC 和 FGC
+- GC Time Per Minute:节点 JVM 平均每分钟进行垃圾回收的耗时,包括 YGC 和 FGC
+- GC Number Per Minute Detail:节点 JVM 平均每分钟由于不同 cause 进行垃圾回收的次数,包括 YGC 和 FGC
+- GC Time Per Minute Detail:节点 JVM 平均每分钟由于不同 cause 进行垃圾回收的耗时,包括 YGC 和 FGC
+- Time Consumed Of Compilation Per Minute:每分钟 JVM 用于编译的总时间
+- The Number of Class:
+ - loaded:JVM 目前已经加载的类的数量
+ - unloaded:系统启动至今 JVM 卸载的类的数量
+- The Number of Java Thread:IoTDB 目前存活的线程数。各子项分别为各状态的线程数。
+
+#### Network
+
+eno 指的是到公网的网卡,lo 是虚拟网卡。
+
+- Net Speed:网卡发送和接收数据的速度
+- Receive/Transmit Data Size:网卡发送或者接收的数据包大小,自系统重启后算起
+- Packet Speed:网卡发送和接收数据包的速度,一次 RPC 请求可以对应一个或者多个数据包
+- Connection Num:当前选定进程的 socket 连接数(IoTDB只有 TCP)
+
+### 整体性能面板(Performance Overview Dashboard)
+
+#### Cluster Overview
+
+- Total CPU Core: 集群机器 CPU 总核数
+- DataNode CPU Load: 集群各DataNode 节点的 CPU 使用率
+- 磁盘
+ - Total Disk Space: 集群机器磁盘总大小
+ - DataNode Disk Usage: 集群各 DataNode 的磁盘使用率
+- Total Timeseries: 集群管理的时间序列总数(含副本),实际时间序列数需结合元数据副本数计算
+- Cluster: 集群 ConfigNode 和 DataNode 节点数量
+- Up Time: 集群启动至今的时长
+- Total Write Point Per Second: 集群每秒写入总点数(含副本),实际写入总点数需结合数据副本数分析
+- 内存
+ - Total System Memory: 集群机器系统内存总大小
+ - Total Swap Memory: 集群机器交换内存总大小
+ - DataNode Process Memory Usage: 集群各 DataNode 的内存使用率
+- Total File Number: 集群管理文件总数量
+- Cluster System Overview: 集群机器概述,包括平均 DataNode 节点内存占用率、平均机器磁盘使用率
+- Total DataBase: 集群管理的 Database 总数(含副本)
+- Total DataRegion: 集群管理的 DataRegion 总数
+- Total SchemaRegion: 集群管理的 SchemaRegion 总数
+
+#### Node Overview
+
+- CPU Core: 节点所在机器的 CPU 核数
+- Disk Space: 节点所在机器的磁盘大小
+- Timeseries: 节点所在机器管理的时间序列数量(含副本)
+- System Overview: 节点所在机器的系统概述,包括 CPU 负载、进程内存使用比率、磁盘使用比率
+- Write Point Per Second: 节点所在机器的每秒写入速度(含副本)
+- System Memory: 节点所在机器的系统内存大小
+- Swap Memory: 节点所在机器的交换内存大小
+- File Number: 节点管理的文件数
+
+#### Performance
+
+- Session Idle Time: 节点的 session 连接的总空闲时间和总忙碌时间
+- Client Connection: 节点的客户端连接情况,包括总连接数和活跃连接数
+- Time Consumed Of Operation: 节点的各类型操作耗时,包括平均值和P99
+- Average Time Consumed Of Interface: 节点的各个 thrift 接口平均耗时
+- P99 Time Consumed Of Interface: 节点的各个 thrift 接口的 P99 耗时数
+- Task Number: 节点的各项系统任务数量
+- Average Time Consumed of Task: 节点的各项系统任务的平均耗时
+- P99 Time Consumed of Task: 节点的各项系统任务的 P99 耗时
+- Operation Per Second: 节点的每秒操作数
+- 主流程
+ - Operation Per Second Of Stage: 节点主流程各阶段的每秒操作数
+ - Average Time Consumed Of Stage: 节点主流程各阶段平均耗时
+ - P99 Time Consumed Of Stage: 节点主流程各阶段 P99 耗时
+- Schedule 阶段
+ - OPS Of Schedule: 节点 schedule 阶段各子阶段每秒操作数
+ - Average Time Consumed Of Schedule Stage: 节点 schedule 阶段各子阶段平均耗时
+ - P99 Time Consumed Of Schedule Stage: 节点的 schedule 阶段各子阶段 P99 耗时
+- Local Schedule 各子阶段
+ - OPS Of Local Schedule Stage: 节点 local schedule 各子阶段每秒操作数
+ - Average Time Consumed Of Local Schedule Stage: 节点 local schedule 阶段各子阶段平均耗时
+ - P99 Time Consumed Of Local Schedule Stage: 节点的 local schedule 阶段各子阶段 P99 耗时
+- Storage 阶段
+ - OPS Of Storage Stage: 节点 storage 阶段各子阶段每秒操作数
+ - Average Time Consumed Of Storage Stage: 节点 storage 阶段各子阶段平均耗时
+ - P99 Time Consumed Of Storage Stage: 节点 storage 阶段各子阶段 P99 耗时
+- Engine 阶段
+ - OPS Of Engine Stage: 节点 engine 阶段各子阶段每秒操作数
+ - Average Time Consumed Of Engine Stage: 节点的 engine 阶段各子阶段平均耗时
+ - P99 Time Consumed Of Engine Stage: 节点 engine 阶段各子阶段的 P99 耗时
+
+#### System
+
+- CPU Load: 节点的 CPU 负载
+- CPU Time Per Minute: 节点的每分钟 CPU 时间,最大值和 CPU 核数相关
+- GC Time Per Minute: 节点的平均每分钟 GC 耗时,包括 YGC 和 FGC
+- Heap Memory: 节点的堆内存使用情况
+- Off Heap Memory: 节点的非堆内存使用情况
+- The Number Of Java Thread: 节点的 Java 线程数量情况
+- File Count: 节点管理的文件数量情况
+- File Size: 节点管理文件大小情况
+- Log Number Per Minute: 节点的每分钟不同类型日志情况
+
+### ConfigNode 面板(ConfigNode Dashboard)
+
+该面板展示了集群中所有管理节点的表现情况,包括分区、节点信息、客户端连接情况统计等。
+
+#### Node Overview
+
+- Database Count: 节点的数据库数量
+- Region
+ - DataRegion Count: 节点的 DataRegion 数量
+ - DataRegion Current Status: 节点的 DataRegion 的状态
+ - SchemaRegion Count: 节点的 SchemaRegion 数量
+ - SchemaRegion Current Status: 节点的 SchemaRegion 的状态
+- System Memory: 节点的系统内存大小
+- Swap Memory: 节点的交换区内存大小
+- ConfigNodes: 节点所在集群的 ConfigNode 的运行状态
+- DataNodes: 节点所在集群的 DataNode 情况
+- System Overview: 节点的系统概述,包括系统内存、磁盘使用、进程内存以及CPU负载
+
+#### NodeInfo
+
+- Node Count: 节点所在集群的节点数量,包括 ConfigNode 和 DataNode
+- ConfigNode Status: 节点所在集群的 ConfigNode 节点的状态
+- DataNode Status: 节点所在集群的 DataNode 节点的状态
+- SchemaRegion Distribution: 节点所在集群的 SchemaRegion 的分布情况
+- SchemaRegionGroup Leader Distribution: 节点所在集群的 SchemaRegionGroup 的 Leader 分布情况
+- DataRegion Distribution: 节点所在集群的 DataRegion 的分布情况
+- DataRegionGroup Leader Distribution: 节点所在集群的 DataRegionGroup 的 Leader 分布情况
+
+#### Protocol
+
+- 客户端数量统计
+ - Active Client Num: 节点各线程池的活跃客户端数量
+ - Idle Client Num: 节点各线程池的空闲客户端数量
+ - Borrowed Client Count: 节点各线程池的借用客户端数量
+ - Created Client Count: 节点各线程池的创建客户端数量
+ - Destroyed Client Count: 节点各线程池的销毁客户端数量
+- 客户端时间情况
+ - Client Mean Active Time: 节点各线程池客户端的平均活跃时间
+ - Client Mean Borrow Wait Time: 节点各线程池的客户端平均借用等待时间
+ - Client Mean Idle Time: 节点各线程池的客户端平均空闲时间
+
+#### Partition Table
+
+- SchemaRegionGroup Count: 节点所在集群的 Database 的 SchemaRegionGroup 的数量
+- DataRegionGroup Count: 节点所在集群的 Database 的 DataRegionGroup 的数量
+- SeriesSlot Count: 节点所在集群的 Database 的 SeriesSlot 的数量
+- TimeSlot Count: 节点所在集群的 Database 的 TimeSlot 的数量
+- DataRegion Status: 节点所在集群的 DataRegion 状态
+- SchemaRegion Status: 节点所在集群的 SchemaRegion 的状态
+
+#### Consensus
+
+- Ratis Stage Time: 节点的 Ratis 各阶段耗时
+- Write Log Entry: 节点的 Ratis 写 Log 的耗时
+- Remote / Local Write Time: 节点的 Ratis 的远程写入和本地写入的耗时
+- Remote / Local Write QPS: 节点 Ratis 的远程和本地写入的 QPS
+- RatisConsensus Memory: 节点 Ratis 共识协议的内存使用
+
+### DataNode 面板(DataNode Dashboard)
+
+该面板展示了集群中所有数据节点的监控情况,包含写入耗时、查询耗时、存储文件数等。
+
+#### Node Overview
+
+- The Number Of Entity: 节点管理的实体情况
+- Write Point Per Second: 节点的每秒写入速度
+- Memory Usage: 节点的内存使用情况,包括 IoT Consensus 各部分内存占用、SchemaRegion内存总占用和各个数据库的内存占用。
+
+#### Protocol
+
+- 节点操作耗时
+ - The Time Consumed Of Operation (avg): 节点的各项操作的平均耗时
+ - The Time Consumed Of Operation (50%): 节点的各项操作耗时的中位数
+ - The Time Consumed Of Operation (99%): 节点的各项操作耗时的P99
+- Thrift统计
+ - The QPS Of Interface: 节点各个 Thrift 接口的 QPS
+ - The Avg Time Consumed Of Interface: 节点各个 Thrift 接口的平均耗时
+  - Thrift Connection: 节点的各类型的 Thrift 连接数量
+ - Thrift Active Thread: 节点各类型的活跃 Thrift 连接数量
+- 客户端统计
+ - Active Client Num: 节点各线程池的活跃客户端数量
+ - Idle Client Num: 节点各线程池的空闲客户端数量
+ - Borrowed Client Count: 节点的各线程池借用客户端数量
+ - Created Client Count: 节点各线程池的创建客户端数量
+ - Destroyed Client Count: 节点各线程池的销毁客户端数量
+ - Client Mean Active Time: 节点各线程池的客户端平均活跃时间
+ - Client Mean Borrow Wait Time: 节点各线程池的客户端平均借用等待时间
+ - Client Mean Idle Time: 节点各线程池的客户端平均空闲时间
+
+#### Storage Engine
+
+- File Count: 节点管理的各类型文件数量
+- File Size: 节点管理的各类型文件大小
+- TsFile
+ - TsFile Total Size In Each Level: 节点管理的各级别 TsFile 文件总大小
+ - TsFile Count In Each Level: 节点管理的各级别 TsFile 文件数量
+ - Avg TsFile Size In Each Level: 节点管理的各级别 TsFile 文件的平均大小
+- Task Number: 节点的 Task 数量
+- The Time Consumed of Task: 节点的 Task 的耗时
+- Compaction
+ - Compaction Read And Write Per Second: 节点的每秒钟合并读写速度
+ - Compaction Number Per Minute: 节点的每分钟合并数量
+ - Compaction Process Chunk Status: 节点合并不同状态的 Chunk 的数量
+ - Compacted Point Num Per Minute: 节点每分钟合并的点数
+
+#### Write Performance
+
+- Write Cost(avg): 节点写入耗时平均值,包括写入 wal 和 memtable
+- Write Cost(50%): 节点写入耗时中位数,包括写入 wal 和 memtable
+- Write Cost(99%): 节点写入耗时的P99,包括写入 wal 和 memtable
+- WAL
+ - WAL File Size: 节点管理的 WAL 文件总大小
+ - WAL File Num: 节点管理的 WAL 文件数量
+ - WAL Nodes Num: 节点管理的 WAL Node 数量
+ - Make Checkpoint Costs: 节点创建各类型的 CheckPoint 的耗时
+ - WAL Serialize Total Cost: 节点 WAL 序列化总耗时
+ - Data Region Mem Cost: 节点不同的DataRegion的内存占用、当前实例的DataRegion的内存总占用、当前集群的 DataRegion 的内存总占用
+ - Serialize One WAL Info Entry Cost: 节点序列化一个WAL Info Entry 耗时
+ - Oldest MemTable Ram Cost When Cause Snapshot: 节点 WAL 触发 oldest MemTable snapshot 时 MemTable 大小
+ - Oldest MemTable Ram Cost When Cause Flush: 节点 WAL 触发 oldest MemTable flush 时 MemTable 大小
+ - Effective Info Ratio Of WALNode: 节点的不同 WALNode 的有效信息比
+ - WAL Buffer
+ - WAL Buffer Cost: 节点 WAL flush SyncBuffer 耗时,包含同步和异步两种
+ - WAL Buffer Used Ratio: 节点的 WAL Buffer 的使用率
+ - WAL Buffer Entries Count: 节点的 WAL Buffer 的条目数量
+- Flush统计
+ - Flush MemTable Cost(avg): 节点 Flush 的总耗时和各个子阶段耗时的平均值
+ - Flush MemTable Cost(50%): 节点 Flush 的总耗时和各个子阶段耗时的中位数
+ - Flush MemTable Cost(99%): 节点 Flush 的总耗时和各个子阶段耗时的 P99
+ - Flush Sub Task Cost(avg): 节点的 Flush 平均子任务耗时平均情况,包括排序、编码、IO 阶段
+ - Flush Sub Task Cost(50%): 节点的 Flush 各个子任务的耗时中位数情况,包括排序、编码、IO 阶段
+ - Flush Sub Task Cost(99%): 节点的 Flush 平均子任务耗时P99情况,包括排序、编码、IO 阶段
+- Pending Flush Task Num: 节点的处于阻塞状态的 Flush 任务数量
+- Pending Flush Sub Task Num: 节点阻塞的 Flush 子任务数量
+- Tsfile Compression Ratio Of Flushing MemTable: 节点刷盘 Memtable 时对应的 TsFile 压缩率
+- Flush TsFile Size Of DataRegions: 节点不同 DataRegion 的每次刷盘时对应的 TsFile 大小
+- Size Of Flushing MemTable: 节点刷盘的 Memtable 的大小
+- Points Num Of Flushing MemTable: 节点不同 DataRegion 刷盘时的点数
+- Series Num Of Flushing MemTable: 节点的不同 DataRegion 的 Memtable 刷盘时的时间序列数
+- Average Point Num Of Flushing MemChunk: 节点 MemChunk 刷盘的平均点数
+
+#### Schema Engine
+
+- Schema Engine Mode: 节点的元数据引擎模式
+- Schema Consensus Protocol: 节点的元数据共识协议
+- Schema Region Number: 节点管理的 SchemaRegion 数量
+- Schema Region Memory Overview: 节点 SchemaRegion 的内存总体使用情况
+- Memory Usage per SchemaRegion: 节点 SchemaRegion 的平均内存使用大小
+- Cache MNode per SchemaRegion: 节点每个 SchemaRegion 中 cache node 个数
+- MLog Length and Checkpoint: 节点每个 SchemaRegion 的当前 mlog 的总长度和检查点位置(仅 SimpleConsensus 有效)
+- Buffer MNode per SchemaRegion: 节点每个 SchemaRegion 中 buffer node 个数
+- Activated Template Count per SchemaRegion: 节点每个 SchemaRegion 中已激活的模板数
+- 时间序列统计
+ - Timeseries Count per SchemaRegion: 节点 SchemaRegion 的平均时间序列数
+ - Series Type: 节点不同类型的时间序列数量
+ - Time Series Number: 节点的时间序列总数
+ - Template Series Number: 节点的模板时间序列总数
+  - Template Series Count per SchemaRegion: 节点每个 SchemaRegion 中通过模板创建的序列数
+- IMNode统计
+ - Pinned MNode per SchemaRegion: 节点每个 SchemaRegion 中 Pinned 的 IMNode 节点数
+ - Pinned Memory per SchemaRegion: 节点每个 SchemaRegion 中 Pinned 的 IMNode 节点的内存占用大小
+ - Unpinned MNode per SchemaRegion: 节点每个 SchemaRegion 中 Unpinned 的 IMNode 节点数
+ - Unpinned Memory per SchemaRegion: 节点每个 SchemaRegion 中 Unpinned 的 IMNode 节点的内存占用大小
+ - Schema File Memory MNode Number: 节点全局 pinned 和 unpinned 的 IMNode 节点数
+ - Release and Flush MNode Rate: 节点每秒 release 和 flush 的 IMNode 数量
+- Cache Hit Rate: 节点的缓存命中率
+- Release and Flush Thread Number: 节点当前活跃的 Release 和 Flush 线程数量
+- Time Consumed of Release and Flush (avg): 节点触发 cache 释放和 buffer 刷盘耗时的平均值
+- Time Consumed of Release and Flush (99%): 节点触发 cache 释放和 buffer 刷盘耗时的 P99
+
+#### Query Engine
+
+- 各阶段耗时
+ - The time consumed of query plan stages(avg): 节点查询各阶段耗时的平均值
+ - The time consumed of query plan stages(50%): 节点查询各阶段耗时的中位数
+ - The time consumed of query plan stages(99%): 节点查询各阶段耗时的P99
+- 执行计划分发耗时
+ - The time consumed of plan dispatch stages(avg): 节点查询执行计划分发耗时的平均值
+ - The time consumed of plan dispatch stages(50%): 节点查询执行计划分发耗时的中位数
+ - The time consumed of plan dispatch stages(99%): 节点查询执行计划分发耗时的P99
+- 执行计划执行耗时
+ - The time consumed of query execution stages(avg): 节点查询执行计划执行耗时的平均值
+ - The time consumed of query execution stages(50%): 节点查询执行计划执行耗时的中位数
+ - The time consumed of query execution stages(99%): 节点查询执行计划执行耗时的P99
+- 算子执行耗时
+ - The time consumed of operator execution stages(avg): 节点查询算子执行耗时的平均值
+ - The time consumed of operator execution(50%): 节点查询算子执行耗时的中位数
+ - The time consumed of operator execution(99%): 节点查询算子执行耗时的P99
+- 聚合查询计算耗时
+ - The time consumed of query aggregation(avg): 节点聚合查询计算耗时的平均值
+ - The time consumed of query aggregation(50%): 节点聚合查询计算耗时的中位数
+ - The time consumed of query aggregation(99%): 节点聚合查询计算耗时的P99
+- 文件/内存接口耗时
+ - The time consumed of query scan(avg): 节点查询文件/内存接口耗时的平均值
+ - The time consumed of query scan(50%): 节点查询文件/内存接口耗时的中位数
+ - The time consumed of query scan(99%): 节点查询文件/内存接口耗时的P99
+- 资源访问数量
+ - The usage of query resource(avg): 节点查询资源访问数量的平均值
+ - The usage of query resource(50%): 节点查询资源访问数量的中位数
+ - The usage of query resource(99%): 节点查询资源访问数量的P99
+- 数据传输耗时
+ - The time consumed of query data exchange(avg): 节点查询数据传输耗时的平均值
+ - The time consumed of query data exchange(50%): 节点查询数据传输耗时的中位数
+ - The time consumed of query data exchange(99%): 节点查询数据传输耗时的P99
+- 数据传输数量
+ - The count of Data Exchange(avg): 节点查询的数据传输数量的平均值
+ - The count of Data Exchange: 节点查询的数据传输数量的分位数,包括中位数和P99
+- 任务调度数量与耗时
+ - The number of query queue: 节点查询任务调度数量
+ - The time consumed of query schedule time(avg): 节点查询任务调度耗时的平均值
+ - The time consumed of query schedule time(50%): 节点查询任务调度耗时的中位数
+ - The time consumed of query schedule time(99%): 节点查询任务调度耗时的P99
+
+#### Query Interface
+
+- 加载时间序列元数据
+ - The time consumed of load timeseries metadata(avg): 节点查询加载时间序列元数据耗时的平均值
+ - The time consumed of load timeseries metadata(50%): 节点查询加载时间序列元数据耗时的中位数
+ - The time consumed of load timeseries metadata(99%): 节点查询加载时间序列元数据耗时的P99
+- 读取时间序列
+ - The time consumed of read timeseries metadata(avg): 节点查询读取时间序列耗时的平均值
+ - The time consumed of read timeseries metadata(50%): 节点查询读取时间序列耗时的中位数
+ - The time consumed of read timeseries metadata(99%): 节点查询读取时间序列耗时的P99
+- 修改时间序列元数据
+ - The time consumed of timeseries metadata modification(avg): 节点查询修改时间序列元数据耗时的平均值
+ - The time consumed of timeseries metadata modification(50%): 节点查询修改时间序列元数据耗时的中位数
+ - The time consumed of timeseries metadata modification(99%): 节点查询修改时间序列元数据耗时的P99
+- 加载Chunk元数据列表
+ - The time consumed of load chunk metadata list(avg): 节点查询加载Chunk元数据列表耗时的平均值
+ - The time consumed of load chunk metadata list(50%): 节点查询加载Chunk元数据列表耗时的中位数
+ - The time consumed of load chunk metadata list(99%): 节点查询加载Chunk元数据列表耗时的P99
+- 修改Chunk元数据
+ - The time consumed of chunk metadata modification(avg): 节点查询修改Chunk元数据耗时的平均值
+  - The time consumed of chunk metadata modification(50%): 节点查询修改Chunk元数据耗时的中位数
+ - The time consumed of chunk metadata modification(99%): 节点查询修改Chunk元数据耗时的P99
+- 按照Chunk元数据过滤
+ - The time consumed of chunk metadata filter(avg): 节点查询按照Chunk元数据过滤耗时的平均值
+ - The time consumed of chunk metadata filter(50%): 节点查询按照Chunk元数据过滤耗时的中位数
+ - The time consumed of chunk metadata filter(99%): 节点查询按照Chunk元数据过滤耗时的P99
+- 构造Chunk Reader
+ - The time consumed of construct chunk reader(avg): 节点查询构造Chunk Reader耗时的平均值
+ - The time consumed of construct chunk reader(50%): 节点查询构造Chunk Reader耗时的中位数
+ - The time consumed of construct chunk reader(99%): 节点查询构造Chunk Reader耗时的P99
+- 读取Chunk
+ - The time consumed of read chunk(avg): 节点查询读取Chunk耗时的平均值
+ - The time consumed of read chunk(50%): 节点查询读取Chunk耗时的中位数
+ - The time consumed of read chunk(99%): 节点查询读取Chunk耗时的P99
+- 初始化Chunk Reader
+ - The time consumed of init chunk reader(avg): 节点查询初始化Chunk Reader耗时的平均值
+ - The time consumed of init chunk reader(50%): 节点查询初始化Chunk Reader耗时的中位数
+ - The time consumed of init chunk reader(99%): 节点查询初始化Chunk Reader耗时的P99
+- 通过 Page Reader 构造 TsBlock
+ - The time consumed of build tsblock from page reader(avg): 节点查询通过 Page Reader 构造 TsBlock 耗时的平均值
+ - The time consumed of build tsblock from page reader(50%): 节点查询通过 Page Reader 构造 TsBlock 耗时的中位数
+ - The time consumed of build tsblock from page reader(99%): 节点查询通过 Page Reader 构造 TsBlock 耗时的P99
+- 查询通过 Merge Reader 构造 TsBlock
+ - The time consumed of build tsblock from merge reader(avg): 节点查询通过 Merge Reader 构造 TsBlock 耗时的平均值
+ - The time consumed of build tsblock from merge reader(50%): 节点查询通过 Merge Reader 构造 TsBlock 耗时的中位数
+ - The time consumed of build tsblock from merge reader(99%): 节点查询通过 Merge Reader 构造 TsBlock 耗时的P99
+
+#### Query Data Exchange
+
+查询的数据交换耗时。
+
+- 通过 source handle 获取 TsBlock
+ - The time consumed of source handle get tsblock(avg): 节点查询通过 source handle 获取 TsBlock 耗时的平均值
+ - The time consumed of source handle get tsblock(50%): 节点查询通过 source handle 获取 TsBlock 耗时的中位数
+ - The time consumed of source handle get tsblock(99%): 节点查询通过 source handle 获取 TsBlock 耗时的P99
+- 通过 source handle 反序列化 TsBlock
+ - The time consumed of source handle deserialize tsblock(avg): 节点查询通过 source handle 反序列化 TsBlock 耗时的平均值
+ - The time consumed of source handle deserialize tsblock(50%): 节点查询通过 source handle 反序列化 TsBlock 耗时的中位数
+ - The time consumed of source handle deserialize tsblock(99%): 节点查询通过 source handle 反序列化 TsBlock 耗时的P99
+- 通过 sink handle 发送 TsBlock
+ - The time consumed of sink handle send tsblock(avg): 节点查询通过 sink handle 发送 TsBlock 耗时的平均值
+ - The time consumed of sink handle send tsblock(50%): 节点查询通过 sink handle 发送 TsBlock 耗时的中位数
+ - The time consumed of sink handle send tsblock(99%): 节点查询通过 sink handle 发送 TsBlock 耗时的P99
+- 回调 data block event
+ - The time consumed of on acknowledge data block event task(avg): 节点查询回调 data block event 耗时的平均值
+ - The time consumed of on acknowledge data block event task(50%): 节点查询回调 data block event 耗时的中位数
+ - The time consumed of on acknowledge data block event task(99%): 节点查询回调 data block event 耗时的P99
+- 获取 data block task
+ - The time consumed of get data block task(avg): 节点查询获取 data block task 耗时的平均值
+ - The time consumed of get data block task(50%): 节点查询获取 data block task 耗时的中位数
+ - The time consumed of get data block task(99%): 节点查询获取 data block task 耗时的 P99
+
+#### Query Related Resource
+
+- MppDataExchangeManager: 节点查询时 shuffle sink handle 和 source handle 的数量
+- LocalExecutionPlanner: 节点可分配给查询分片的剩余内存
+- FragmentInstanceManager: 节点正在运行的查询分片上下文信息和查询分片的数量
+- Coordinator: 节点上记录的查询数量
+- MemoryPool Size: 节点查询相关的内存池情况
+- MemoryPool Capacity: 节点查询相关的内存池的大小情况,包括最大值和剩余可用值
+- DriverScheduler: 节点查询相关的队列任务数量
+
+#### Consensus - IoT Consensus
+
+- 内存使用
+ - IoTConsensus Used Memory: 节点的 IoT Consensus 的内存使用情况,包括总使用内存大小、队列使用内存大小、同步使用内存大小
+- 节点间同步情况
+ - IoTConsensus Sync Index: 节点的 IoT Consensus 的 不同 DataRegion 的 SyncIndex 大小
+ - IoTConsensus Overview: 节点的 IoT Consensus 的总同步差距和缓存的请求数量
+ - IoTConsensus Search Index Rate: 节点 IoT Consensus 不同 DataRegion 的写入 SearchIndex 的增长速率
+ - IoTConsensus Safe Index Rate: 节点 IoT Consensus 不同 DataRegion 的同步 SafeIndex 的增长速率
+ - IoTConsensus LogDispatcher Request Size: 节点 IoT Consensus 不同 DataRegion 同步到其他节点的请求大小
+ - Sync Lag: 节点 IoT Consensus 不同 DataRegion 的同步差距大小
+ - Min Peer Sync Lag: 节点 IoT Consensus 不同 DataRegion 向不同副本的最小同步差距
+ - Sync Speed Diff Of Peers: 节点 IoT Consensus 不同 DataRegion 向不同副本同步的最大差距
+ - IoTConsensus LogEntriesFromWAL Rate: 节点 IoT Consensus 不同 DataRegion 从 WAL 获取日志的速率
+ - IoTConsensus LogEntriesFromQueue Rate: 节点 IoT Consensus 不同 DataRegion 从 队列获取日志的速率
+- 不同执行阶段耗时
+ - The Time Consumed Of Different Stages (avg): 节点 IoT Consensus 不同执行阶段的耗时的平均值
+ - The Time Consumed Of Different Stages (50%): 节点 IoT Consensus 不同执行阶段的耗时的中位数
+ - The Time Consumed Of Different Stages (99%): 节点 IoT Consensus 不同执行阶段的耗时的P99
+
+#### Consensus - DataRegion Ratis Consensus
+
+- Ratis Stage Time: 节点 Ratis 不同阶段的耗时
+- Write Log Entry: 节点 Ratis 写 Log 不同阶段的耗时
+- Remote / Local Write Time: 节点 Ratis 在本地或者远端写入的耗时
+- Remote / Local Write QPS: 节点 Ratis 在本地或者远端写入的 QPS
+- RatisConsensus Memory: 节点 Ratis 的内存使用情况
+
+#### Consensus - SchemaRegion Ratis Consensus
+
+- Ratis Stage Time: 节点 Ratis 不同阶段的耗时
+- Write Log Entry: 节点 Ratis 写 Log 各阶段的耗时
+- Remote / Local Write Time: 节点 Ratis 在本地或者远端写入的耗时
+- Remote / Local Write QPS: 节点 Ratis 在本地或者远端写入的QPS
+- RatisConsensus Memory: 节点 Ratis 内存使用情况
\ No newline at end of file
diff --git a/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
new file mode 100644
index 000000000..ff5411a1f
--- /dev/null
+++ b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
@@ -0,0 +1,217 @@
+
+# 单机版安装部署
+
+本章将介绍如何启动IoTDB单机实例,IoTDB单机实例包括 1 个ConfigNode 和1个DataNode(即通常所说的1C1D)。
+
+## 注意事项
+
+1. 安装前请确认系统已参照[系统配置](../Deployment-and-Maintenance/Environment-Requirements.md)准备完成。
+2. 推荐使用`hostname`进行IP配置,可避免后期修改主机ip导致数据库无法启动的问题。设置hostname需要在服务器上配置`/etc/hosts`,如本机ip是192.168.1.3,hostname是iotdb-1,则可以使用以下命令设置服务器的 hostname,并使用hostname配置IoTDB的 `cn_internal_address`、`dn_internal_address`。
+
+ ```shell
+ echo "192.168.1.3 iotdb-1" >> /etc/hosts
+ ```
+
+3. 部分参数首次启动后不能修改,请参考下方的[参数配置](#2参数配置)章节进行设置。
+4. 无论是在linux还是windows中,请确保IoTDB的安装路径中不含空格和中文,避免软件运行异常。
+5. 请注意,安装部署(包括激活和使用软件)IoTDB时,您可以:
+ - 使用 root 用户(推荐):可以避免权限等问题。
+ - 使用固定的非 root 用户:
+ - 使用同一用户操作:确保在启动、激活、停止等操作均保持使用同一用户,不要切换用户。
+ - 避免使用 sudo:使用 sudo 命令会以 root 用户权限执行命令,可能会引起权限混淆或安全问题。
+6. 推荐部署监控面板,可以对重要运行指标进行监控,随时掌握数据库运行状态,监控面板可以联系工作人员获取,部署监控面板步骤可以参考:[监控面板部署](../Deployment-and-Maintenance/Monitoring-panel-deployment.md)
+
+## 安装步骤
+
+### 1、解压安装包并进入安装目录
+
+```Plain
+unzip timechodb-{version}-bin.zip
+cd timechodb-{version}-bin
+```
+
+### 2、参数配置
+
+#### 内存配置
+
+- conf/confignode-env.sh(或 .bat)
+
+| **配置项** | **说明** | **默认值** | **推荐值** | 备注 |
+| :---------- | :------------------------------------- | :--------- | :----------------------------------------------- | :----------- |
+| MEMORY_SIZE | IoTDB ConfigNode节点可以使用的内存总量 | 空 | 可按需填写,填写后系统会根据填写的数值来分配内存 | 重启服务生效 |
+
+- conf/datanode-env.sh(或 .bat)
+
+| **配置项** | **说明** | **默认值** | **推荐值** | 备注 |
+| :---------- | :----------------------------------- | :--------- | :----------------------------------------------- | :----------- |
+| MEMORY_SIZE | IoTDB DataNode节点可以使用的内存总量 | 空 | 可按需填写,填写后系统会根据填写的数值来分配内存 | 重启服务生效 |
+
+#### 功能配置
+
+系统实际生效的参数位于 conf/iotdb-system.properties 文件中,启动前需设置以下参数;全部参数可从 conf/iotdb-system.properties.template 文件中查看
+
+集群级功能配置
+
+| **配置项** | **说明** | **默认值** | **推荐值** | 备注 |
+| :------------------------ | :------------------------------- | :------------- | :----------------------------------------------- | :------------------------ |
+| cluster_name | 集群名称 | defaultCluster | 可根据需要设置集群名称,如无特殊需要保持默认即可 | 首次启动后不可修改 |
+| schema_replication_factor | 元数据副本数,单机版此处设置为 1 | 1 | 1 | 默认1,首次启动后不可修改 |
+| data_replication_factor | 数据副本数,单机版此处设置为 1 | 1 | 1 | 默认1,首次启动后不可修改 |
+
+ConfigNode 配置
+
+| **配置项** | **说明** | **默认** | 推荐值 | **备注** |
+| :------------------ | :----------------------------------------------------------- | :-------------- | :----------------------------------------------- | :----------------- |
+| cn_internal_address | ConfigNode在集群内部通讯使用的地址 | 127.0.0.1 | 所在服务器的IPV4地址或hostname,推荐使用hostname | 首次启动后不能修改 |
+| cn_internal_port | ConfigNode在集群内部通讯使用的端口 | 10710 | 10710 | 首次启动后不能修改 |
+| cn_consensus_port | ConfigNode副本组共识协议通信使用的端口 | 10720 | 10720 | 首次启动后不能修改 |
+| cn_seed_config_node | 节点注册加入集群时连接的ConfigNode 的地址,cn_internal_address:cn_internal_port | 127.0.0.1:10710 | cn_internal_address:cn_internal_port | 首次启动后不能修改 |
+
+DataNode 配置
+
+| **配置项** | **说明** | **默认** | 推荐值 | **备注** |
+| :------------------------------ | :----------------------------------------------------------- | :-------------- | :----------------------------------------------- | :----------------- |
+| dn_rpc_address | 客户端 RPC 服务的地址 | 0.0.0.0 | 0.0.0.0 | 重启服务生效 |
+| dn_rpc_port | 客户端 RPC 服务的端口 | 6667 | 6667 | 重启服务生效 |
+| dn_internal_address | DataNode在集群内部通讯使用的地址 | 127.0.0.1 | 所在服务器的IPV4地址或hostname,推荐使用hostname | 首次启动后不能修改 |
+| dn_internal_port | DataNode在集群内部通信使用的端口 | 10730 | 10730 | 首次启动后不能修改 |
+| dn_mpp_data_exchange_port | DataNode用于接收数据流使用的端口 | 10740 | 10740 | 首次启动后不能修改 |
+| dn_data_region_consensus_port | DataNode用于数据副本共识协议通信使用的端口 | 10750 | 10750 | 首次启动后不能修改 |
+| dn_schema_region_consensus_port | DataNode用于元数据副本共识协议通信使用的端口 | 10760 | 10760 | 首次启动后不能修改 |
+| dn_seed_config_node | 节点注册加入集群时连接的ConfigNode地址,即cn_internal_address:cn_internal_port | 127.0.0.1:10710 | cn_internal_address:cn_internal_port | 首次启动后不能修改 |
+
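+以 hostname 为 iotdb-1 的单机部署为例,配置完成后可用如下命令核对实际生效的关键参数(仅为示意,取值与上表推荐值一致):
+
+```Bash
+grep -E "^(cluster_name|schema_replication_factor|data_replication_factor|cn_|dn_)" conf/iotdb-system.properties
+# 预期类似:
+# cluster_name=defaultCluster
+# schema_replication_factor=1
+# data_replication_factor=1
+# cn_internal_address=iotdb-1
+# cn_seed_config_node=iotdb-1:10710
+# dn_internal_address=iotdb-1
+# dn_seed_config_node=iotdb-1:10710
+```
+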
+### 3、启动 ConfigNode 节点
+
+进入 IoTDB 的 sbin 目录下,启动 ConfigNode:
+
+```shell
+./sbin/start-confignode.sh -d #“-d”参数将在后台进行启动
+```
+
+如果启动失败,请参考下方[常见问题](#常见问题)。
+
+### 4、启动 DataNode 节点
+
+进入 IoTDB 的 sbin 目录下,启动 DataNode:
+
+```shell
+./sbin/start-datanode.sh -d #“-d”参数将在后台进行启动
+```
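+
+ConfigNode 和 DataNode 都启动后,可用 jps 确认两个进程均已拉起(仅为示意,进程名以实际环境为准):
+
+```Bash
+jps
+# 预期输出中应包含 ConfigNode 和 DataNode 两个进程
+```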
+
+### 5、激活数据库
+
+#### 方式一:文件激活
+
+- 启动 ConfigNode、DataNode 节点后,进入 activation 文件夹,将 system_info 文件复制给天谋工作人员
+- 收到工作人员返回的 license 文件
+- 将 license 文件放入对应节点的 activation 文件夹下
+
+#### 方式二:命令激活
+
+- 获取激活所需机器码:进入 IoTDB CLI(Linux/MacOS:./start-cli.sh -sql_dialect table;Windows:start-cli.bat -sql_dialect table),执行以下内容:
+  - 注:当 sql_dialect 为 table 时,该操作暂不支持使用,此时请改用树模型 CLI(不带 -sql_dialect 参数)
+
+```shell
+show system info
+```
+
+- 显示如下信息,请将机器码(即绿色字符串)复制给天谋工作人员:
+
+```sql
++--------------------------------------------------------------+
+| SystemInfo|
++--------------------------------------------------------------+
+| 01-TE5NLES4-UDDWCMYE|
++--------------------------------------------------------------+
+Total line number = 1
+It costs 0.030s
+```
+
+- 将工作人员返回的激活码输入到CLI中,输入以下内容
+  - 注:激活码前后需要用`'`符号进行标注,如下所示
+
+```sql
+IoTDB> activate '01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA==='
+```
+
+### 6、验证激活
+
+当看到“ClusterActivationStatus”字段状态显示为 ACTIVATED 时,表示激活成功
+
+![](https://alioss.timecho.com/docs/img/%E5%8D%95%E6%9C%BA-%E9%AA%8C%E8%AF%81.png)
+
+## 常见问题
+
+1. 部署过程中多次提示激活失败
+ - 使用 `ls -al` 命令:使用 `ls -al` 命令检查安装包根目录的所有者信息是否为当前用户。
+ - 检查激活目录:检查 `./activation` 目录下的所有文件,所有者信息是否为当前用户。
+2. Confignode节点启动失败
+ - 步骤 1: 请查看启动日志,检查是否修改了某些首次启动后不可改的参数。
+ - 步骤 2: 请查看启动日志,检查是否出现其他异常。日志中若存在异常现象,请联系天谋技术支持人员咨询解决方案。
+ - 步骤 3: 如果是首次部署或者数据可删除,也可按下述步骤清理环境,重新部署后,再次启动。
+ - 清理环境:
+ 1. 结束所有 ConfigNode 和 DataNode 进程。
+ ```Bash
+ # 1. 停止 ConfigNode 和 DataNode 服务
+ sbin/stop-standalone.sh
+
+      # 2. 检查是否还有进程残留
+      jps
+      # 或者
+      ps -ef | grep iotdb
+
+      # 3. 如果有进程残留,则手动 kill(<pid> 为残留进程号)
+      kill -9 <pid>
+      # 如果确定机器上仅有 1 个 iotdb,可以使用下面命令清理残留进程
+      ps -ef | grep iotdb | grep -v grep | tr -s ' ' ' ' | cut -d ' ' -f2 | xargs kill -9
+ ```
+
+ 2. 删除 data 和 logs 目录。
+ - 说明:删除 data 目录是必要的,删除 logs 目录是为了纯净日志,非必需。
+ ```shell
+    cd /data/iotdb
+    rm -rf data logs
+ ```
+
+## 附录
+
+### Confignode节点参数介绍
+
+| 参数 | 描述 | 是否为必填项 |
+| :--- | :------------------------------- | :----------- |
+| -d | 以守护进程模式启动,即在后台运行 | 否 |
+
+### Datanode节点参数介绍
+
+| 缩写 | 描述 | 是否为必填项 |
+| :--- | :--------------------------------------------- | :----------- |
+| -v | 显示版本信息 | 否 |
+| -f | 在前台运行脚本,不将其放到后台 | 否 |
+| -d | 以守护进程模式启动,即在后台运行 | 否 |
+| -p | 指定一个文件来存放进程ID,用于进程管理 | 否 |
+| -c | 指定配置文件夹的路径,脚本会从这里加载配置文件 | 否 |
+| -g | 打印垃圾回收(GC)的详细信息 | 否 |
+| -H | 指定Java堆转储文件的路径,当JVM内存溢出时使用 | 否 |
+| -E | 指定JVM错误日志文件的路径 | 否 |
+| -D | 定义系统属性,格式为 key=value | 否 |
+| -X | 直接传递 -XX 参数给 JVM | 否 |
+| -h | 帮助指令 | 否 |
+
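+例如,在前台启动 DataNode 并打印 GC 详细信息,便于排查问题(仅为示意,参数含义见上表):
+
+```Bash
+./sbin/start-datanode.sh -f -g
+```
+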
From 9460ff299e7d0670b46a52be0c50c604afdcf645 Mon Sep 17 00:00:00 2001
From: Jialin Ma <281648921@qq.com>
Date: Tue, 10 Dec 2024 18:21:56 +0800
Subject: [PATCH 2/6] Modify the activation method for both standalone and
cluster versions
---
.../Cluster-Deployment_timecho.md | 59 +++++++++++------
.../Stand-Alone-Deployment_timecho.md | 39 ++++++++---
.../Cluster-Deployment_timecho.md | 65 +++++++++++++------
.../Stand-Alone-Deployment_timecho.md | 28 ++++++--
4 files changed, 136 insertions(+), 55 deletions(-)
diff --git a/src/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
index 19e2f6f63..c4a2c6aeb 100644
--- a/src/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
+++ b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
@@ -166,30 +166,51 @@ cd sbin
#### Method 2: Activate Script Activation
-- Retrieve the machine codes of three machines in sequence, enter them into the CLI of the IoTDB tree model (./start-cli.sh-sql-dialect table/start-cli.bat - sql-dialect table), and execute the following tasks:
- - Note: When sql-dialect is a table, it is temporarily not supported to use
-
-```Bash
-show system info
-```
-
-- The following information is displayed, where the machine code of one machine is displayed:
-
-```Bash
-+--------------------------------------------------------------+
-| SystemInfo|
-+--------------------------------------------------------------+
-|01-TE5NLES4-UDDWCMYE,01-GG5NLES4-XXDWCMYE,01-FF5NLES4-WWWWCMYE|
-+--------------------------------------------------------------+
-Total line number = 1
-It costs 0.030s
-```
+- Retrieve the machine codes of 3 machines in sequence and enter IoTDB CLI
+
+ - Table Model CLI Enter Command:
+
+ ```SQL
+ # Linux or MACOS
+ ./start-cli.sh -sql_dialect table
+
+ # windows
+ ./start-cli.bat -sql_dialect table
+ ```
+
+ - Enter the tree model CLI command:
+
+ ```SQL
+ # Linux or MACOS
+ ./start-cli.sh
+
+ # windows
+ ./start-cli.bat
+ ```
+
+ - Execute the following to obtain the machine code required for activation:
+
+ ```Bash
+ show system info
+ ```
+
+ - The following information is displayed, which shows the machine code of one machine:
+
+ ```Bash
+ +--------------------------------------------------------------+
+ | SystemInfo|
+ +--------------------------------------------------------------+
+ |01-TE5NLES4-UDDWCMYE,01-GG5NLES4-XXDWCMYE,01-FF5NLES4-WWWWCMYE|
+ +--------------------------------------------------------------+
+ Total line number = 1
+ It costs 0.030s
+ ```
- The other two nodes enter the CLI of the IoTDB tree model in sequence, execute the statement, and copy the machine codes of the three machines obtained to the Timecho staff
- The staff will return three activation codes, which normally correspond to the order of the three machine codes provided. Please paste each activation code into the CLI separately, as prompted below:
- - Note: The activation code needs to be marked with a 'symbol before and after, as shown in
+  - Note: The activation code needs to be marked with a `'` symbol before and after, as shown below:
```Bash
IoTDB> activate '01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA==='
diff --git a/src/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
index 571f67246..e3c4718a7 100644
--- a/src/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
+++ b/src/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
@@ -140,17 +140,39 @@ cd sbin
#### Method 2: Activate Script Activation
-- Obtain the required machine code for activation, enter the IoTDB CLI (./start-cli.sh-sql-dialect table/start-cli.bat - sql-dialect table), and perform the following:
+- Obtain the machine code required for activation and enter the IoTDB CLI
+
+ - Table Model CLI Enter Command:
+
+ ```SQL
+ # Linux or MACOS
+ ./start-cli.sh -sql_dialect table
- - Note: When sql-dialect is a table, it is temporarily not supported to use
+ # windows
+ ./start-cli.bat -sql_dialect table
+ ```
-```shell
-show system info
-```
+ - Enter the tree model CLI command:
+
+ ```SQL
+ # Linux or MACOS
+ ./start-cli.sh
+
+ # windows
+ ./start-cli.bat
+ ```
-- Display the following information, please copy the machine code (i.e. green string) to the Timecho staff:
+- Execute the following to obtain the machine code required for activation:
-```sql
+  ```Bash
+  show system info
+  ```
+
+- The following information is displayed, which shows the machine code of one machine:
+
+```Bash
+--------------------------------------------------------------+
| SystemInfo|
+--------------------------------------------------------------+
@@ -161,10 +183,9 @@ It costs 0.030s
```
- Enter the activation code returned by the staff into the CLI and enter the following content
-
- Note: The activation code needs to be marked with a `'` symbol before and after, as shown below:
-```sql
+```Bash
IoTDB> activate '01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA==='
```
diff --git a/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
index f44f729b9..2925ab0f6 100644
--- a/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
+++ b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
@@ -163,32 +163,55 @@ cd sbin
- 将3个license文件分别放入对应的ConfigNode节点的activation文件夹下;
#### 方式二:激活脚本激活
+- 依次获取3台机器的机器码,进入 IoTDB CLI
+
+ - 表模型 CLI 进入命令:
+
+ ```SQL
+ # Linux或MACOS系统
+ ./start-cli.sh -sql_dialect table
+
+ # windows系统
+ ./start-cli.bat -sql_dialect table
+ ```
+
+ - 树模型 CLI 进入命令:
+
+ ```SQL
+ # Linux或MACOS系统
+ ./start-cli.sh
+
+ # windows系统
+ ./start-cli.bat
+ ```
+
+ - 执行以下内容获取激活所需机器码:
+
+ ```Bash
+ show system info
+ ```
+
+ - 显示如下信息,这里显示的是1台机器的机器码 :
+
+ ```Bash
+ +--------------------------------------------------------------+
+ | SystemInfo|
+ +--------------------------------------------------------------+
+ |01-TE5NLES4-UDDWCMYE,01-GG5NLES4-XXDWCMYE,01-FF5NLES4-WWWWCMYE|
+ +--------------------------------------------------------------+
+ Total line number = 1
+ It costs 0.030s
+ ```
-- 依次获取3台机器的机器码,进入到IoTDB树模型的CLI中(./start-cli.sh -sql_dialect table/start-cli.bat -sql_dialect table),执行以下内容:
- - 注:当 sql_dialect 为 table 时,暂时不支持使用
-
-```shell
-show system info
-```
+- 其他2个节点依次进入到IoTDB树模型的CLI中,执行语句后将获取的3台机器的机器码都复制给天谋工作人员
-- 显示如下信息,这里显示的是1台机器的机器码 :
+- 工作人员会返回3段激活码,正常是与提供的3个机器码的顺序对应的,请分别将各自的激活码粘贴到CLI中,如下提示:
-```shell
-+--------------------------------------------------------------+
-| SystemInfo|
-+--------------------------------------------------------------+
-|01-TE5NLES4-UDDWCMYE,01-GG5NLES4-XXDWCMYE,01-FF5NLES4-WWWWCMYE|
-+--------------------------------------------------------------+
-Total line number = 1
-It costs 0.030s
-```
+  - 注:激活码前后需要用`'`符号进行标注,如下所示
-- 其他2个节点依次进入到IoTDB树模型的CLI中,执行语句后将获取的3台机器的机器码都复制给天谋工作人员
-- 工作人员会返回3段激活码,正常是与提供的3个机器码的顺序对应的,请分别将各自的激活码粘贴到CLI中,如下提示:
- - 注:激活码前后需要用`'`符号进行标注,如下所示
-```shell
+ ```Bash
IoTDB> activate '01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA==='
-```
+ ```
### 验证激活
diff --git a/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
index ff5411a1f..5d4ee21cd 100644
--- a/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
+++ b/src/zh/UserGuide/Master/Table/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
@@ -126,17 +126,33 @@ DataNode 配置
- 将license文件放入对应节点的activation文件夹下;
#### 方式二:命令激活
+- 进入 IoTDB CLI
+ - 表模型 CLI 进入命令:
+ ```SQL
+ # Linux或MACOS系统
+ ./start-cli.sh -sql_dialect table
+
+ # windows系统
+ ./start-cli.bat -sql_dialect table
+ ```
-- 获取激活所需机器码,进入到 IoTDB CLI 中(./start-cli.sh -sql_dialect table/start-cli.bat -sql_dialect table),执行以下内容:
- - 注:当 sql_dialect 为 table 时,暂时不支持使用
+ - 树模型 CLI 进入命令:
+ ```SQL
+ # Linux或MACOS系统
+ ./start-cli.sh
+
+ # windows系统
+ ./start-cli.bat
+ ```
+- 执行以下内容获取激活所需机器码:
-```shell
+```Bash
show system info
```
-- 显示如下信息,请将机器码(即绿色字符串)复制给天谋工作人员:
+- 将返回的机器码(即绿色字符串)复制给天谋工作人员:
-```sql
+```Bash
+--------------------------------------------------------------+
| SystemInfo|
+--------------------------------------------------------------+
@@ -149,7 +165,7 @@ It costs 0.030s
- 将工作人员返回的激活码输入到CLI中,输入以下内容
- 注:激活码前后需要用`'`符号进行标注,如下所示
-```sql
+```Bash
IoTDB> activate '01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA==='
```
From 02e99797121e5155d31d4a2ba0e4b24a156b4418 Mon Sep 17 00:00:00 2001
From: Jialin Ma <281648921@qq.com>
Date: Tue, 10 Dec 2024 18:46:41 +0800
Subject: [PATCH 3/6] Add V2.0.1 file
---
.../Tree/API/Programming-CSharp-Native-API.md | 213 +
.../Tree/API/Programming-Cpp-Native-API.md | 428 ++
.../Tree/API/Programming-Go-Native-API.md | 64 +
.../V2.0.1/Tree/API/Programming-JDBC.md | 296 +
.../Tree/API/Programming-Java-Native-API.md | 842 +++
.../V2.0.1/Tree/API/Programming-Kafka.md | 118 +
.../V2.0.1/Tree/API/Programming-MQTT.md | 183 +
.../Tree/API/Programming-NodeJS-Native-API.md | 181 +
.../V2.0.1/Tree/API/Programming-ODBC.md | 146 +
.../Tree/API/Programming-OPC-UA_timecho.md | 262 +
.../Tree/API/Programming-Python-Native-API.md | 732 +++
.../Tree/API/Programming-Rust-Native-API.md | 188 +
.../V2.0.1/Tree/API/RestServiceV1.md | 930 +++
.../V2.0.1/Tree/API/RestServiceV2.md | 970 +++
.../Data-Model-and-Terminology.md | 150 +
.../Navigating_Time_Series_Data.md | 64 +
.../Basic-Concept/Operate-Metadata_apache.md | 1253 ++++
.../Basic-Concept/Operate-Metadata_timecho.md | 1324 ++++
.../V2.0.1/Tree/Basic-Concept/Query-Data.md | 3009 ++++++++++
.../Tree/Basic-Concept/Write-Delete-Data.md | 278 +
.../Tree/Ecosystem-Integration/DBeaver.md | 92 +
.../Tree/Ecosystem-Integration/DataEase.md | 228 +
.../Tree/Ecosystem-Integration/Flink-IoTDB.md | 215 +
.../Ecosystem-Integration/Flink-TsFile.md | 180 +
.../Grafana-Connector.md | 180 +
.../Ecosystem-Integration/Grafana-Plugin.md | 298 +
.../Tree/Ecosystem-Integration/Hive-TsFile.md | 170 +
.../Ignition-IoTDB-plugin_timecho.md | 273 +
.../Tree/Ecosystem-Integration/NiFi-IoTDB.md | 141 +
.../Tree/Ecosystem-Integration/Spark-IoTDB.md | 232 +
.../Ecosystem-Integration/Spark-TsFile.md | 315 +
.../Ecosystem-Integration/Telegraf-IoTDB.md | 110 +
.../Tree/Ecosystem-Integration/Thingsboard.md | 99 +
.../Ecosystem-Integration/Zeppelin-IoTDB.md | 185 +
.../Tree/FAQ/Frequently-asked-questions.md | 263 +
.../V2.0.1/Tree/QuickStart/QuickStart.md | 23 +
.../Tree/QuickStart/QuickStart_apache.md | 91 +
.../Tree/QuickStart/QuickStart_timecho.md | 106 +
.../Tree/Reference/Common-Config-Manual.md | 2213 +++++++
.../Reference/ConfigNode-Config-Manual.md | 223 +
.../Tree/Reference/DataNode-Config-Manual.md | 584 ++
.../V2.0.1/Tree/Reference/Keywords.md | 227 +
.../Tree/Reference/Modify-Config-Manual.md | 71 +
.../V2.0.1/Tree/Reference/Status-Codes.md | 178 +
.../V2.0.1/Tree/Reference/Syntax-Rule.md | 281 +
.../Tree/Reference/UDF-Libraries_apache.md | 5244 ++++++++++++++++
.../SQL-Manual/Function-and-Expression.md | 3014 ++++++++++
.../SQL-Manual/Operator-and-Expression.md | 573 ++
.../V2.0.1/Tree/SQL-Manual/SQL-Manual.md | 1759 ++++++
.../Tree/SQL-Manual/UDF-Libraries_apache.md | 5244 ++++++++++++++++
.../Tree/SQL-Manual/UDF-Libraries_timecho.md | 5304 ++++++++++++++++
.../V2.0.1/Tree/Tools-System/Benchmark.md | 344 ++
src/UserGuide/V2.0.1/Tree/Tools-System/CLI.md | 295 +
.../Tree/Tools-System/Data-Export-Tool.md | 213 +
.../Tree/Tools-System/Data-Import-Tool.md | 217 +
.../Tools-System/Maintenance-Tool_apache.md | 228 +
.../Tools-System/Maintenance-Tool_timecho.md | 960 +++
.../Tree/Tools-System/Monitor-Tool_apache.md | 180 +
.../Tree/Tools-System/Monitor-Tool_timecho.md | 180 +
.../Tree/Tools-System/Workbench_timecho.md | 30 +
.../V2.0.1/Tree/User-Manual/AINode_timecho.md | 654 ++
.../Tree/User-Manual/Audit-Log_timecho.md | 93 +
.../Tree/User-Manual/Authority-Management.md | 519 ++
.../Tree/User-Manual/Data-Sync_apache.md | 530 ++
.../Tree/User-Manual/Data-Sync_timecho.md | 610 ++
.../Tree/User-Manual/Data-subscription.md | 150 +
.../Tree/User-Manual/Database-Programming.md | 592 ++
.../Tree/User-Manual/IoTDB-View_timecho.md | 549 ++
.../V2.0.1/Tree/User-Manual/Maintennance.md | 372 ++
.../Tree/User-Manual/Streaming_apache.md | 804 +++
.../Tree/User-Manual/Streaming_timecho.md | 857 +++
.../User-Manual/Tiered-Storage_timecho.md | 96 +
.../V2.0.1/Tree/User-Manual/Trigger.md | 466 ++
.../Tree/User-Manual/UDF-development.md | 743 +++
.../User-defined-function_apache.md | 213 +
.../User-defined-function_timecho.md | 213 +
.../Tree/User-Manual/White-List_timecho.md | 70 +
src/UserGuide/V2.0.1/Tree/UserGuideReadme.md | 31 +
.../V2.0.1/Tree/stage/AINode_Deployment.md | 329 +
.../Administration.md | 541 ++
.../V2.0.1/Tree/stage/Architecture.md | 44 +
.../V2.0.1/Tree/stage/Cluster-Deployment.md | 615 ++
.../Tree/stage/Cluster-Deployment_timecho.md | 1109 ++++
.../Tree/stage/Cluster/Cluster-Concept.md | 117 +
.../Tree/stage/Cluster/Cluster-Maintenance.md | 718 +++
.../Tree/stage/Cluster/Cluster-Setup.md | 447 ++
.../stage/Cluster/Get-Installation-Package.md | 223 +
.../V2.0.1/Tree/stage/ClusterQuickStart.md | 260 +
.../Tree/stage/Command-Line-Interface.md | 285 +
.../Tree/stage/Data-Import-Export-Tool.md | 278 +
.../Tree/stage/Data-Modeling/DataRegion.md | 57 +
.../Data-Modeling/SchemaRegion-rocksdb.md | 110 +
.../V2.0.1/Tree/stage/Deadband-Process.md | 113 +
.../Tree/stage/Delete-Data/Delete-Data.md | 98 +
.../V2.0.1/Tree/stage/Delete-Data/TTL.md | 132 +
.../Tree/stage/Deployment-Recommendation.md | 182 +
.../V2.0.1/Tree/stage/Docker-Install.md | 187 +
.../Edge-Cloud-Collaboration/Sync-Tool.md | 374 ++
.../Tree/stage/Environmental-Requirement.md | 33 +
src/UserGuide/V2.0.1/Tree/stage/Features.md | 58 +
src/UserGuide/V2.0.1/Tree/stage/Files.md | 128 +
.../V2.0.1/Tree/stage/Flink-SQL-IoTDB.md | 527 ++
.../Tree/stage/General-SQL-Statements.md | 160 +
.../Integration-Test-refactoring-tutorial.md | 240 +
.../V2.0.1/Tree/stage/Interface-Comparison.md | 50 +
.../Tree/stage/IoTDB-Data-Pipe_timecho.md | 24 +
.../Tree/stage/Maintenance-Tools/CSV-Tool.md | 263 +
.../IoTDB-Data-Dir-Overview-Tool.md | 82 +
.../Tree/stage/Maintenance-Tools/JMX-Tool.md | 59 +
.../stage/Maintenance-Tools/Load-Tsfile.md | 111 +
.../Tree/stage/Maintenance-Tools/Log-Tool.md | 68 +
.../Maintenance-Tools/MLogParser-Tool.md | 40 +
.../Maintenance-Tools/Maintenance-Command.md | 227 +
.../Overlap-Validation-And-Repair-Tool.md | 42 +
.../SchemaFileSketch-Tool.md | 38 +
.../TsFile-Load-Export-Tool.md | 179 +
.../TsFile-Resource-Sketch-Tool.md | 79 +
.../Maintenance-Tools/TsFile-Settle-Tool.md | 42 +
.../Maintenance-Tools/TsFile-Sketch-Tool.md | 108 +
.../Maintenance-Tools/TsFile-Split-Tool.md | 46 +
.../Maintenance-Tools/TsFileSelfCheck-Tool.md | 42 +
.../stage/Maintenance-Tools/Watermark-Tool.md | 196 +
.../V2.0.1/Tree/stage/MapReduce-TsFile.md | 199 +
.../Tree/stage/Monitor-Alert/Alerting.md | 402 ++
.../Tree/stage/Monitor-Alert/Metric-Tool.md | 674 +++
.../Monitoring-Board-Install-and-Deploy.md | 207 +
.../Operate-Metadata/Auto-Create-MetaData.md | 112 +
.../Tree/stage/Operate-Metadata/Database.md | 227 +
.../Tree/stage/Operate-Metadata/Node.md | 288 +
.../Tree/stage/Operate-Metadata/Template.md | 241 +
.../Tree/stage/Operate-Metadata/Timeseries.md | 438 ++
.../stage/Operators-Functions/Aggregation.md | 488 ++
.../Operators-Functions/Anomaly-Detection.md | 824 +++
.../stage/Operators-Functions/Comparison.md | 305 +
.../stage/Operators-Functions/Conditional.md | 349 ++
.../stage/Operators-Functions/Constant.md | 57 +
.../Continuous-Interval.md | 73 +
.../stage/Operators-Functions/Conversion.md | 101 +
.../Operators-Functions/Data-Matching.md | 335 ++
.../Operators-Functions/Data-Profiling.md | 1887 ++++++
.../stage/Operators-Functions/Data-Quality.md | 574 ++
.../Operators-Functions/Data-Repairing.md | 520 ++
.../Operators-Functions/Frequency-Domain.md | 672 +++
.../Tree/stage/Operators-Functions/Lambda.md | 77 +
.../Tree/stage/Operators-Functions/Logical.md | 63 +
.../Operators-Functions/Machine-Learning.md | 207 +
.../stage/Operators-Functions/Mathematical.md | 134 +
.../stage/Operators-Functions/Overview.md | 287 +
.../Tree/stage/Operators-Functions/Sample.md | 399 ++
.../stage/Operators-Functions/Selection.md | 51 +
.../Operators-Functions/Series-Discovery.md | 173 +
.../Tree/stage/Operators-Functions/String.md | 911 +++
.../stage/Operators-Functions/Time-Series.md | 70 +
.../User-Defined-Function.md | 658 ++
.../Operators-Functions/Variation-Trend.md | 114 +
.../V2.0.1/Tree/stage/Performance.md | 38 +
.../V2.0.1/Tree/stage/Programming-Thrift.md | 157 +
.../V2.0.1/Tree/stage/Query-Data/Align-By.md | 62 +
.../Tree/stage/Query-Data/Continuous-Query.md | 581 ++
.../V2.0.1/Tree/stage/Query-Data/Fill.md | 333 +
.../V2.0.1/Tree/stage/Query-Data/Group-By.md | 930 +++
.../Tree/stage/Query-Data/Having-Condition.md | 115 +
.../Tree/stage/Query-Data/Last-Query.md | 101 +
.../V2.0.1/Tree/stage/Query-Data/Order-By.md | 276 +
.../V2.0.1/Tree/stage/Query-Data/Overview.md | 334 +
.../Tree/stage/Query-Data/Pagination.md | 341 ++
.../stage/Query-Data/Select-Expression.md | 324 +
.../Tree/stage/Query-Data/Select-Into.md | 340 ++
.../Tree/stage/Query-Data/Where-Condition.md | 191 +
src/UserGuide/V2.0.1/Tree/stage/QuickStart.md | 232 +
.../V2.0.1/Tree/stage/SQL-Reference.md | 1326 ++++
.../V2.0.1/Tree/stage/Schema-Template.md | 67 +
.../Tree/stage/Security-Management_apache.md | 536 ++
.../Tree/stage/Security-Management_timecho.md | 544 ++
.../V2.0.1/Tree/stage/ServerFileList.md | 117 +
.../Syntax-Conventions/Detailed-Grammar.md | 28 +
.../stage/Syntax-Conventions/Identifier.md | 141 +
.../stage/Syntax-Conventions/KeyValue-Pair.md | 119 +
.../Keywords-And-Reserved-Words.md | 26 +
.../Syntax-Conventions/Literal-Values.md | 157 +
.../Syntax-Conventions/NodeName-In-Path.md | 119 +
.../Session-And-TsFile-API.md | 119 +
.../V2.0.1/Tree/stage/TSDB-Comparison.md | 386 ++
.../V2.0.1/Tree/stage/Time-Partition.md | 53 +
src/UserGuide/V2.0.1/Tree/stage/Time-zone.md | 90 +
.../stage/Trigger/Configuration-Parameters.md | 29 +
.../Tree/stage/Trigger/Implement-Trigger.md | 294 +
.../V2.0.1/Tree/stage/Trigger/Instructions.md | 51 +
.../V2.0.1/Tree/stage/Trigger/Notes.md | 30 +
.../Tree/stage/Trigger/Trigger-Management.md | 152 +
.../Tree/stage/TsFile-Import-Export-Tool.md | 428 ++
.../V2.0.1/Tree/stage/WayToGetIoTDB.md | 211 +
.../Tree/stage/Write-Data/Batch-Load-Tool.md | 32 +
.../V2.0.1/Tree/stage/Write-Data/MQTT.md | 24 +
.../V2.0.1/Tree/stage/Write-Data/REST-API.md | 58 +
.../V2.0.1/Tree/stage/Write-Data/Session.md | 38 +
.../Tree/stage/Write-Data/Write-Data.md | 110 +
.../V2.0.1/Tree/stage/Writing-Data-on-HDFS.md | 171 +
.../Background-knowledge/Cluster-Concept.md | 59 +
.../common/Background-knowledge/Data-Type.md | 184 +
.../Cluster-Deployment_timecho.md | 398 ++
.../Database-Resources.md | 194 +
.../Environment-Requirements.md | 191 +
.../IoTDB-Package_timecho.md | 42 +
.../Monitoring-panel-deployment.md | 680 +++
.../Stand-Alone-Deployment_timecho.md | 265 +
.../IoTDB-Introduction_apache.md | 77 +
.../IoTDB-Introduction_timecho.md | 266 +
.../common/IoTDB-Introduction/Scenario.md | 94 +
.../Cluster-data-partitioning.md | 110 +
.../Encoding-and-Compression.md | 131 +
.../common/Technical-Insider/Publication.md | 42 +
.../Tree/API/Programming-CSharp-Native-API.md | 274 +
.../Tree/API/Programming-Cpp-Native-API.md | 431 ++
.../Tree/API/Programming-Go-Native-API.md | 84 +
.../V2.0.1/Tree/API/Programming-JDBC.md | 291 +
.../Tree/API/Programming-Java-Native-API.md | 793 +++
.../V2.0.1/Tree/API/Programming-Kafka.md | 118 +
.../V2.0.1/Tree/API/Programming-MQTT.md | 179 +
.../Tree/API/Programming-NodeJS-Native-API.md | 201 +
.../V2.0.1/Tree/API/Programming-ODBC.md | 146 +
.../Tree/API/Programming-OPC-UA_timecho.md | 256 +
.../Tree/API/Programming-Python-Native-API.md | 717 +++
.../Tree/API/Programming-Rust-Native-API.md | 200 +
.../V2.0.1/Tree/API/RestServiceV1.md | 946 +++
.../V2.0.1/Tree/API/RestServiceV2.md | 985 +++
.../Data-Model-and-Terminology.md | 141 +
.../Navigating_Time_Series_Data.md | 67 +
.../Basic-Concept/Operate-Metadata_apache.md | 1261 ++++
.../Basic-Concept/Operate-Metadata_timecho.md | 1333 ++++
.../V2.0.1/Tree/Basic-Concept/Query-Data.md | 3041 ++++++++++
.../Tree/Basic-Concept/Write-Delete-Data.md | 256 +
.../Tree/Ecosystem-Integration/DBeaver.md | 83 +
.../Tree/Ecosystem-Integration/DataEase.md | 229 +
.../Tree/Ecosystem-Integration/Flink-IoTDB.md | 121 +
.../Ecosystem-Integration/Flink-TsFile.md | 178 +
.../Grafana-Connector.md | 184 +
.../Ecosystem-Integration/Grafana-Plugin.md | 288 +
.../Tree/Ecosystem-Integration/Hive-TsFile.md | 167 +
.../Ignition-IoTDB-plugin_timecho.md | 272 +
.../Tree/Ecosystem-Integration/NiFi-IoTDB.md | 140 +
.../Tree/Ecosystem-Integration/Spark-IoTDB.md | 229 +
.../Ecosystem-Integration/Spark-TsFile.md | 320 +
.../Ecosystem-Integration/Telegraf-IoTDB.md | 110 +
.../Tree/Ecosystem-Integration/Thingsboard.md | 99 +
.../Zeppelin-IoTDB_apache.md | 174 +
.../Zeppelin-IoTDB_timecho.md | 174 +
.../Tree/FAQ/Frequently-asked-questions.md | 261 +
.../V2.0.1/Tree/QuickStart/QuickStart.md | 23 +
.../Tree/QuickStart/QuickStart_apache.md | 91 +
.../Tree/QuickStart/QuickStart_timecho.md | 109 +
.../Tree/Reference/Common-Config-Manual.md | 2220 +++++++
.../Reference/ConfigNode-Config-Manual.md | 210 +
.../Tree/Reference/DataNode-Config-Manual.md | 576 ++
.../V2.0.1/Tree/Reference/Keywords.md | 227 +
.../Tree/Reference/Modify-Config-Manual.md | 71 +
.../V2.0.1/Tree/Reference/Status-Codes.md | 178 +
.../V2.0.1/Tree/Reference/Syntax-Rule.md | 275 +
.../Tree/Reference/UDF-Libraries_apache.md | 5346 +++++++++++++++++
.../SQL-Manual/Function-and-Expression.md | 3203 ++++++++++
.../SQL-Manual/Operator-and-Expression.md | 529 ++
.../V2.0.1/Tree/SQL-Manual/SQL-Manual.md | 1973 ++++++
.../Tree/SQL-Manual/UDF-Libraries_apache.md | 5346 +++++++++++++++++
.../Tree/SQL-Manual/UDF-Libraries_timecho.md | 5333 ++++++++++++++++
.../V2.0.1/Tree/Tools-System/Benchmark.md | 352 ++
.../UserGuide/V2.0.1/Tree/Tools-System/CLI.md | 276 +
.../Tree/Tools-System/Data-Export-Tool.md | 199 +
.../Tree/Tools-System/Data-Import-Tool.md | 206 +
.../Tools-System/Maintenance-Tool_apache.md | 229 +
.../Tools-System/Maintenance-Tool_timecho.md | 1013 ++++
.../Tree/Tools-System/Monitor-Tool_apache.md | 168 +
.../Tree/Tools-System/Monitor-Tool_timecho.md | 168 +
.../Tree/Tools-System/Workbench_timecho.md | 31 +
.../V2.0.1/Tree/User-Manual/AINode_timecho.md | 650 ++
.../Tree/User-Manual/Audit-Log_timecho.md | 108 +
.../Tree/User-Manual/Authority-Management.md | 510 ++
.../Tree/User-Manual/Data-Sync_apache.md | 527 ++
.../Tree/User-Manual/Data-Sync_timecho.md | 607 ++
.../Tree/User-Manual/Data-subscription.md | 144 +
.../Tree/User-Manual/Database-Programming.md | 586 ++
.../Tree/User-Manual/IoTDB-View_timecho.md | 548 ++
.../V2.0.1/Tree/User-Manual/Maintennance.md | 351 ++
.../Tree/User-Manual/Streaming_apache.md | 817 +++
.../Tree/User-Manual/Streaming_timecho.md | 862 +++
.../User-Manual/Tiered-Storage_timecho.md | 97 +
.../V2.0.1/Tree/User-Manual/Trigger.md | 467 ++
.../Tree/User-Manual/UDF-development.md | 721 +++
.../User-defined-function_apache.md | 209 +
.../User-defined-function_timecho.md | 209 +
.../Tree/User-Manual/White-List_timecho.md | 70 +
.../UserGuide/V2.0.1/Tree/UserGuideReadme.md | 30 +
.../V2.0.1/Tree/stage/AINode_Deployment.md | 329 +
.../Administration.md | 536 ++
.../V2.0.1/Tree/stage/Architecture.md | 44 +
.../V2.0.1/Tree/stage/Cluster-Deployment.md | 613 ++
.../Tree/stage/Cluster-Deployment_timecho.md | 1109 ++++
.../Tree/stage/Cluster/Cluster-Concept.md | 118 +
.../Tree/stage/Cluster/Cluster-Maintenance.md | 717 +++
.../Tree/stage/Cluster/Cluster-Setup.md | 436 ++
.../stage/Cluster/Get-Installation-Package.md | 213 +
.../V2.0.1/Tree/stage/ClusterQuickStart.md | 276 +
.../Tree/stage/Command-Line-Interface.md | 275 +
.../Tree/stage/Data-Import-Export-Tool.md | 278 +
.../Tree/stage/Data-Modeling/DataRegion.md | 55 +
.../Data-Modeling/SchemaRegion-rocksdb.md | 105 +
.../V2.0.1/Tree/stage/Deadband-Process.md | 108 +
.../V2.0.1/Tree/stage/Delete-Data.md | 160 +
.../Tree/stage/Delete-Data/Delete-Data.md | 92 +
.../V2.0.1/Tree/stage/Delete-Data/TTL.md | 130 +
.../Tree/stage/Deployment-Preparation.md | 40 +
.../Tree/stage/Deployment-Recommendation.md | 178 +
.../V2.0.1/Tree/stage/Docker-Install.md | 181 +
.../Edge-Cloud-Collaboration/Sync-Tool.md | 362 ++
.../UserGuide/V2.0.1/Tree/stage/Features.md | 59 +
src/zh/UserGuide/V2.0.1/Tree/stage/Files.md | 125 +
.../V2.0.1/Tree/stage/Flink-SQL-IoTDB.md | 529 ++
.../Tree/stage/General-SQL-Statements.md | 171 +
.../V2.0.1/Tree/stage/InfluxDB-Protocol.md | 347 ++
.../Integration-Test-refactoring-tutorial.md | 225 +
.../V2.0.1/Tree/stage/Interface-Comparison.md | 50 +
.../Tree/stage/IoTDB-Data-Pipe_timecho.md | 945 +++
.../UserGuide/V2.0.1/Tree/stage/Last-Query.md | 113 +
.../Tree/stage/Maintenance-Tools/CSV-Tool.md | 261 +
.../IoTDB-Data-Dir-Overview-Tool.md | 82 +
.../Tree/stage/Maintenance-Tools/JMX-Tool.md | 59 +
.../stage/Maintenance-Tools/Load-Tsfile.md | 110 +
.../Tree/stage/Maintenance-Tools/Log-Tool.md | 68 +
.../Maintenance-Tools/MLogParser-Tool.md | 39 +
.../Maintenance-Tools/Maintenance-Command.md | 222 +
.../Overlap-Validation-And-Repair-Tool.md | 41 +
.../SchemaFileSketch-Tool.md | 35 +
.../TsFile-Load-Export-Tool.md | 181 +
.../TsFile-Resource-Sketch-Tool.md | 79 +
.../Maintenance-Tools/TsFile-Settle-Tool.md | 42 +
.../Maintenance-Tools/TsFile-Sketch-Tool.md | 108 +
.../Maintenance-Tools/TsFile-Split-Tool.md | 48 +
.../Maintenance-Tools/TsFileSelfCheck-Tool.md | 42 +
.../stage/Maintenance-Tools/Watermark-Tool.md | 196 +
.../V2.0.1/Tree/stage/MapReduce-TsFile.md | 200 +
.../Tree/stage/Monitor-Alert/Alerting.md | 370 ++
.../Tree/stage/Monitor-Alert/Metric-Tool.md | 641 ++
.../Monitoring-Board-Install-and-Deploy.md | 208 +
.../Operate-Metadata/Auto-Create-MetaData.md | 111 +
.../Tree/stage/Operate-Metadata/Database.md | 227 +
.../Tree/stage/Operate-Metadata/Node.md | 294 +
.../Tree/stage/Operate-Metadata/Template.md | 240 +
.../Tree/stage/Operate-Metadata/Timeseries.md | 438 ++
.../stage/Operators-Functions/Aggregation.md | 275 +
.../Operators-Functions/Anomaly-Detection.md | 835 +++
.../stage/Operators-Functions/Comparison.md | 309 +
.../stage/Operators-Functions/Conditional.md | 345 ++
.../stage/Operators-Functions/Constant.md | 57 +
.../Continuous-Interval.md | 75 +
.../stage/Operators-Functions/Conversion.md | 102 +
.../Operators-Functions/Data-Matching.md | 333 +
.../Operators-Functions/Data-Profiling.md | 1878 ++++++
.../stage/Operators-Functions/Data-Quality.md | 579 ++
.../Operators-Functions/Data-Repairing.md | 510 ++
.../Operators-Functions/Frequency-Domain.md | 667 ++
.../Tree/stage/Operators-Functions/Lambda.md | 83 +
.../Tree/stage/Operators-Functions/Logical.md | 63 +
.../Operators-Functions/Machine-Learning.md | 208 +
.../stage/Operators-Functions/Mathematical.md | 136 +
.../stage/Operators-Functions/Overview.md | 284 +
.../Tree/stage/Operators-Functions/Sample.md | 402 ++
.../stage/Operators-Functions/Selection.md | 51 +
.../Operators-Functions/Series-Discovery.md | 173 +
.../Tree/stage/Operators-Functions/String.md | 904 +++
.../stage/Operators-Functions/Time-Series.md | 69 +
.../User-Defined-Function.md | 592 ++
.../Operators-Functions/Variation-Trend.md | 114 +
.../V2.0.1/Tree/stage/Performance.md | 36 +
.../V2.0.1/Tree/stage/Programming-Thrift.md | 155 +
.../Tree/stage/Programming-TsFile-API.md | 561 ++
.../V2.0.1/Tree/stage/Query-Data/Align-By.md | 65 +
.../Tree/stage/Query-Data/Continuous-Query.md | 585 ++
.../V2.0.1/Tree/stage/Query-Data/Fill.md | 331 +
.../V2.0.1/Tree/stage/Query-Data/Group-By.md | 913 +++
.../Tree/stage/Query-Data/Having-Condition.md | 115 +
.../V2.0.1/Tree/stage/Query-Data/Order-By.md | 277 +
.../V2.0.1/Tree/stage/Query-Data/Overview.md | 342 ++
.../Tree/stage/Query-Data/Pagination.md | 283 +
.../stage/Query-Data/Select-Expression.md | 286 +
.../Tree/stage/Query-Data/Select-Into.md | 350 ++
.../Tree/stage/Query-Data/Where-Condition.md | 185 +
.../UserGuide/V2.0.1/Tree/stage/QuickStart.md | 273 +
.../V2.0.1/Tree/stage/SQL-Reference.md | 1290 ++++
.../V2.0.1/Tree/stage/Schema-Template.md | 125 +
.../V2.0.1/Tree/stage/ServerFileList.md | 110 +
.../Syntax-Conventions/Detailed-Grammar.md | 28 +
.../stage/Syntax-Conventions/Identifier.md | 142 +
.../stage/Syntax-Conventions/KeyValue-Pair.md | 119 +
.../Keywords-And-Reserved-Words.md | 26 +
.../Syntax-Conventions/Literal-Values.md | 151 +
.../Syntax-Conventions/NodeName-In-Path.md | 120 +
.../Session-And-TsFile-API.md | 119 +
.../V2.0.1/Tree/stage/TSDB-Comparison.md | 359 ++
.../V2.0.1/Tree/stage/Time-Partition.md | 53 +
.../UserGuide/V2.0.1/Tree/stage/Time-zone.md | 90 +
.../stage/Trigger/Configuration-Parameters.md | 29 +
.../Tree/stage/Trigger/Implement-Trigger.md | 297 +
.../V2.0.1/Tree/stage/Trigger/Instructions.md | 44 +
.../V2.0.1/Tree/stage/Trigger/Notes.md | 33 +
.../Tree/stage/Trigger/Trigger-Management.md | 152 +
.../Tree/stage/TsFile-Import-Export-Tool.md | 427 ++
.../V2.0.1/Tree/stage/WayToGetIoTDB.md | 212 +
.../Tree/stage/Write-Data/Batch-Load-Tool.md | 32 +
.../V2.0.1/Tree/stage/Write-Data/MQTT.md | 24 +
.../V2.0.1/Tree/stage/Write-Data/REST-API.md | 57 +
.../V2.0.1/Tree/stage/Write-Data/Session.md | 37 +
.../Tree/stage/Write-Data/Write-Data.md | 112 +
.../V2.0.1/Tree/stage/Writing-Data-on-HDFS.md | 171 +
.../Background-knowledge/Cluster-Concept.md | 55 +
.../common/Background-knowledge/Data-Type.md | 184 +
.../Cluster-Deployment_timecho.md | 385 ++
.../Database-Resources.md | 193 +
.../Environment-Requirements.md | 205 +
.../IoTDB-Package_timecho.md | 45 +
.../Monitoring-panel-deployment.md | 682 +++
.../Stand-Alone-Deployment_timecho.md | 233 +
.../IoTDB-Introduction_apache.md | 76 +
.../IoTDB-Introduction_timecho.md | 265 +
.../common/IoTDB-Introduction/Scenario.md | 95 +
.../Cluster-data-partitioning.md | 110 +
.../Encoding-and-Compression.md | 124 +
.../common/Technical-Insider/Publication.md | 41 +
426 files changed, 168406 insertions(+)
create mode 100644 src/UserGuide/V2.0.1/Tree/API/Programming-CSharp-Native-API.md
create mode 100644 src/UserGuide/V2.0.1/Tree/API/Programming-Cpp-Native-API.md
create mode 100644 src/UserGuide/V2.0.1/Tree/API/Programming-Go-Native-API.md
create mode 100644 src/UserGuide/V2.0.1/Tree/API/Programming-JDBC.md
create mode 100644 src/UserGuide/V2.0.1/Tree/API/Programming-Java-Native-API.md
create mode 100644 src/UserGuide/V2.0.1/Tree/API/Programming-Kafka.md
create mode 100644 src/UserGuide/V2.0.1/Tree/API/Programming-MQTT.md
create mode 100644 src/UserGuide/V2.0.1/Tree/API/Programming-NodeJS-Native-API.md
create mode 100644 src/UserGuide/V2.0.1/Tree/API/Programming-ODBC.md
create mode 100644 src/UserGuide/V2.0.1/Tree/API/Programming-OPC-UA_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/API/Programming-Python-Native-API.md
create mode 100644 src/UserGuide/V2.0.1/Tree/API/Programming-Rust-Native-API.md
create mode 100644 src/UserGuide/V2.0.1/Tree/API/RestServiceV1.md
create mode 100644 src/UserGuide/V2.0.1/Tree/API/RestServiceV2.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Basic-Concept/Data-Model-and-Terminology.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Basic-Concept/Navigating_Time_Series_Data.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Basic-Concept/Operate-Metadata_apache.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Basic-Concept/Operate-Metadata_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Basic-Concept/Query-Data.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Basic-Concept/Write-Delete-Data.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/DBeaver.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/DataEase.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Flink-IoTDB.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Flink-TsFile.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Grafana-Connector.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Grafana-Plugin.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Hive-TsFile.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Ignition-IoTDB-plugin_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/NiFi-IoTDB.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Spark-IoTDB.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Spark-TsFile.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Telegraf-IoTDB.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Thingsboard.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Zeppelin-IoTDB.md
create mode 100644 src/UserGuide/V2.0.1/Tree/FAQ/Frequently-asked-questions.md
create mode 100644 src/UserGuide/V2.0.1/Tree/QuickStart/QuickStart.md
create mode 100644 src/UserGuide/V2.0.1/Tree/QuickStart/QuickStart_apache.md
create mode 100644 src/UserGuide/V2.0.1/Tree/QuickStart/QuickStart_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Reference/Common-Config-Manual.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Reference/ConfigNode-Config-Manual.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Reference/DataNode-Config-Manual.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Reference/Keywords.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Reference/Modify-Config-Manual.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Reference/Status-Codes.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Reference/Syntax-Rule.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Reference/UDF-Libraries_apache.md
create mode 100644 src/UserGuide/V2.0.1/Tree/SQL-Manual/Function-and-Expression.md
create mode 100644 src/UserGuide/V2.0.1/Tree/SQL-Manual/Operator-and-Expression.md
create mode 100644 src/UserGuide/V2.0.1/Tree/SQL-Manual/SQL-Manual.md
create mode 100644 src/UserGuide/V2.0.1/Tree/SQL-Manual/UDF-Libraries_apache.md
create mode 100644 src/UserGuide/V2.0.1/Tree/SQL-Manual/UDF-Libraries_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Tools-System/Benchmark.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Tools-System/CLI.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Tools-System/Data-Export-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Tools-System/Data-Import-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Tools-System/Maintenance-Tool_apache.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Tools-System/Maintenance-Tool_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Tools-System/Monitor-Tool_apache.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Tools-System/Monitor-Tool_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/Tools-System/Workbench_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/AINode_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/Audit-Log_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/Authority-Management.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/Data-Sync_apache.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/Data-Sync_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/Data-subscription.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/Database-Programming.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/IoTDB-View_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/Maintennance.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/Streaming_apache.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/Streaming_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/Tiered-Storage_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/Trigger.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/UDF-development.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/User-defined-function_apache.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/User-defined-function_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/User-Manual/White-List_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/UserGuideReadme.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/AINode_Deployment.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Administration-Management/Administration.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Architecture.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Cluster-Deployment.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Cluster-Deployment_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Cluster/Cluster-Concept.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Cluster/Cluster-Maintenance.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Cluster/Cluster-Setup.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Cluster/Get-Installation-Package.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/ClusterQuickStart.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Command-Line-Interface.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Data-Import-Export-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Data-Modeling/DataRegion.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Data-Modeling/SchemaRegion-rocksdb.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Deadband-Process.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Delete-Data/Delete-Data.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Delete-Data/TTL.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Deployment-Recommendation.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Docker-Install.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Edge-Cloud-Collaboration/Sync-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Environmental-Requirement.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Features.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Files.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Flink-SQL-IoTDB.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/General-SQL-Statements.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Integration-Test/Integration-Test-refactoring-tutorial.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Interface-Comparison.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/IoTDB-Data-Pipe_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/CSV-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/IoTDB-Data-Dir-Overview-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/JMX-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/Load-Tsfile.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/Log-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/MLogParser-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/Maintenance-Command.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/Overlap-Validation-And-Repair-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/SchemaFileSketch-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/TsFile-Load-Export-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/TsFile-Resource-Sketch-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/TsFile-Settle-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/TsFile-Sketch-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/TsFile-Split-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/TsFileSelfCheck-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/Watermark-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/MapReduce-TsFile.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Monitor-Alert/Alerting.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Monitor-Alert/Metric-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Monitoring-Board-Install-and-Deploy.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operate-Metadata/Auto-Create-MetaData.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operate-Metadata/Database.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operate-Metadata/Node.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operate-Metadata/Template.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operate-Metadata/Timeseries.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Aggregation.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Anomaly-Detection.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Comparison.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Conditional.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Constant.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Continuous-Interval.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Conversion.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Data-Matching.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Data-Profiling.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Data-Quality.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Data-Repairing.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Frequency-Domain.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Lambda.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Logical.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Machine-Learning.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Mathematical.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Overview.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Sample.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Selection.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Series-Discovery.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/String.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Time-Series.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/User-Defined-Function.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Variation-Trend.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Performance.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Programming-Thrift.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Query-Data/Align-By.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Query-Data/Continuous-Query.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Query-Data/Fill.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Query-Data/Group-By.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Query-Data/Having-Condition.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Query-Data/Last-Query.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Query-Data/Order-By.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Query-Data/Overview.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Query-Data/Pagination.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Query-Data/Select-Expression.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Query-Data/Select-Into.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Query-Data/Where-Condition.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/QuickStart.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/SQL-Reference.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Schema-Template.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Security-Management_apache.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Security-Management_timecho.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/ServerFileList.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/Detailed-Grammar.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/Identifier.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/KeyValue-Pair.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/Keywords-And-Reserved-Words.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/Literal-Values.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/NodeName-In-Path.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/Session-And-TsFile-API.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/TSDB-Comparison.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Time-Partition.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Time-zone.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Trigger/Configuration-Parameters.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Trigger/Implement-Trigger.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Trigger/Instructions.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Trigger/Notes.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Trigger/Trigger-Management.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/TsFile-Import-Export-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/WayToGetIoTDB.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Write-Data/Batch-Load-Tool.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Write-Data/MQTT.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Write-Data/REST-API.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Write-Data/Session.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Write-Data/Write-Data.md
create mode 100644 src/UserGuide/V2.0.1/Tree/stage/Writing-Data-on-HDFS.md
create mode 100644 src/UserGuide/V2.0.1/common/Background-knowledge/Cluster-Concept.md
create mode 100644 src/UserGuide/V2.0.1/common/Background-knowledge/Data-Type.md
create mode 100644 src/UserGuide/V2.0.1/common/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
create mode 100644 src/UserGuide/V2.0.1/common/Deployment-and-Maintenance/Database-Resources.md
create mode 100644 src/UserGuide/V2.0.1/common/Deployment-and-Maintenance/Environment-Requirements.md
create mode 100644 src/UserGuide/V2.0.1/common/Deployment-and-Maintenance/IoTDB-Package_timecho.md
create mode 100644 src/UserGuide/V2.0.1/common/Deployment-and-Maintenance/Monitoring-panel-deployment.md
create mode 100644 src/UserGuide/V2.0.1/common/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
create mode 100644 src/UserGuide/V2.0.1/common/IoTDB-Introduction/IoTDB-Introduction_apache.md
create mode 100644 src/UserGuide/V2.0.1/common/IoTDB-Introduction/IoTDB-Introduction_timecho.md
create mode 100644 src/UserGuide/V2.0.1/common/IoTDB-Introduction/Scenario.md
create mode 100644 src/UserGuide/V2.0.1/common/Technical-Insider/Cluster-data-partitioning.md
create mode 100644 src/UserGuide/V2.0.1/common/Technical-Insider/Encoding-and-Compression.md
create mode 100644 src/UserGuide/V2.0.1/common/Technical-Insider/Publication.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/Programming-CSharp-Native-API.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/Programming-Cpp-Native-API.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/Programming-Go-Native-API.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/Programming-JDBC.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/Programming-Java-Native-API.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/Programming-Kafka.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/Programming-MQTT.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/Programming-NodeJS-Native-API.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/Programming-ODBC.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/Programming-OPC-UA_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/Programming-Python-Native-API.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/Programming-Rust-Native-API.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/RestServiceV1.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/API/RestServiceV2.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Basic-Concept/Data-Model-and-Terminology.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Basic-Concept/Navigating_Time_Series_Data.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Basic-Concept/Operate-Metadata_apache.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Basic-Concept/Operate-Metadata_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Basic-Concept/Query-Data.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Basic-Concept/Write-Delete-Data.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/DBeaver.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/DataEase.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Flink-IoTDB.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Flink-TsFile.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Grafana-Connector.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Grafana-Plugin.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Hive-TsFile.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Ignition-IoTDB-plugin_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/NiFi-IoTDB.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Spark-IoTDB.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Spark-TsFile.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Telegraf-IoTDB.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Thingsboard.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Zeppelin-IoTDB_apache.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Ecosystem-Integration/Zeppelin-IoTDB_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/FAQ/Frequently-asked-questions.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/QuickStart/QuickStart.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/QuickStart/QuickStart_apache.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/QuickStart/QuickStart_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Reference/Common-Config-Manual.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Reference/ConfigNode-Config-Manual.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Reference/DataNode-Config-Manual.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Reference/Keywords.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Reference/Modify-Config-Manual.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Reference/Status-Codes.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Reference/Syntax-Rule.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Reference/UDF-Libraries_apache.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/SQL-Manual/Function-and-Expression.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/SQL-Manual/Operator-and-Expression.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/SQL-Manual/SQL-Manual.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/SQL-Manual/UDF-Libraries_apache.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/SQL-Manual/UDF-Libraries_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Tools-System/Benchmark.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Tools-System/CLI.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Tools-System/Data-Export-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Tools-System/Data-Import-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Tools-System/Maintenance-Tool_apache.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Tools-System/Maintenance-Tool_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Tools-System/Monitor-Tool_apache.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Tools-System/Monitor-Tool_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/Tools-System/Workbench_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/AINode_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/Audit-Log_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/Authority-Management.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/Data-Sync_apache.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/Data-Sync_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/Data-subscription.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/Database-Programming.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/IoTDB-View_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/Maintennance.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/Streaming_apache.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/Streaming_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/Tiered-Storage_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/Trigger.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/UDF-development.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/User-defined-function_apache.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/User-defined-function_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/User-Manual/White-List_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/UserGuideReadme.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/AINode_Deployment.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Administration-Management/Administration.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Architecture.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Cluster-Deployment.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Cluster-Deployment_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Cluster/Cluster-Concept.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Cluster/Cluster-Maintenance.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Cluster/Cluster-Setup.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Cluster/Get-Installation-Package.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/ClusterQuickStart.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Command-Line-Interface.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Data-Import-Export-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Data-Modeling/DataRegion.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Data-Modeling/SchemaRegion-rocksdb.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Deadband-Process.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Delete-Data.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Delete-Data/Delete-Data.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Delete-Data/TTL.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Deployment-Preparation.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Deployment-Recommendation.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Docker-Install.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Edge-Cloud-Collaboration/Sync-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Features.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Files.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Flink-SQL-IoTDB.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/General-SQL-Statements.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/InfluxDB-Protocol.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Integration-Test/Integration-Test-refactoring-tutorial.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Interface-Comparison.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/IoTDB-Data-Pipe_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Last-Query.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/CSV-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/IoTDB-Data-Dir-Overview-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/JMX-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/Load-Tsfile.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/Log-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/MLogParser-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/Maintenance-Command.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/Overlap-Validation-And-Repair-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/SchemaFileSketch-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/TsFile-Load-Export-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/TsFile-Resource-Sketch-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/TsFile-Settle-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/TsFile-Sketch-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/TsFile-Split-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/TsFileSelfCheck-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Maintenance-Tools/Watermark-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/MapReduce-TsFile.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Monitor-Alert/Alerting.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Monitor-Alert/Metric-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Monitoring-Board-Install-and-Deploy.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operate-Metadata/Auto-Create-MetaData.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operate-Metadata/Database.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operate-Metadata/Node.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operate-Metadata/Template.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operate-Metadata/Timeseries.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Aggregation.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Anomaly-Detection.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Comparison.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Conditional.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Constant.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Continuous-Interval.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Conversion.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Data-Matching.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Data-Profiling.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Data-Quality.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Data-Repairing.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Frequency-Domain.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Lambda.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Logical.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Machine-Learning.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Mathematical.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Overview.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Sample.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Selection.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Series-Discovery.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/String.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Time-Series.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/User-Defined-Function.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Operators-Functions/Variation-Trend.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Performance.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Programming-Thrift.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Programming-TsFile-API.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Query-Data/Align-By.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Query-Data/Continuous-Query.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Query-Data/Fill.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Query-Data/Group-By.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Query-Data/Having-Condition.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Query-Data/Order-By.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Query-Data/Overview.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Query-Data/Pagination.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Query-Data/Select-Expression.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Query-Data/Select-Into.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Query-Data/Where-Condition.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/QuickStart.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/SQL-Reference.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Schema-Template.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/ServerFileList.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/Detailed-Grammar.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/Identifier.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/KeyValue-Pair.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/Keywords-And-Reserved-Words.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/Literal-Values.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/NodeName-In-Path.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Syntax-Conventions/Session-And-TsFile-API.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/TSDB-Comparison.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Time-Partition.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Time-zone.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Trigger/Configuration-Parameters.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Trigger/Implement-Trigger.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Trigger/Instructions.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Trigger/Notes.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Trigger/Trigger-Management.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/TsFile-Import-Export-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/WayToGetIoTDB.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Write-Data/Batch-Load-Tool.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Write-Data/MQTT.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Write-Data/REST-API.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Write-Data/Session.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Write-Data/Write-Data.md
create mode 100644 src/zh/UserGuide/V2.0.1/Tree/stage/Writing-Data-on-HDFS.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/Background-knowledge/Cluster-Concept.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/Background-knowledge/Data-Type.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/Deployment-and-Maintenance/Cluster-Deployment_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/Deployment-and-Maintenance/Database-Resources.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/Deployment-and-Maintenance/Environment-Requirements.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/Deployment-and-Maintenance/IoTDB-Package_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/Deployment-and-Maintenance/Monitoring-panel-deployment.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/IoTDB-Introduction/IoTDB-Introduction_apache.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/IoTDB-Introduction/IoTDB-Introduction_timecho.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/IoTDB-Introduction/Scenario.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/Technical-Insider/Cluster-data-partitioning.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/Technical-Insider/Encoding-and-Compression.md
create mode 100644 src/zh/UserGuide/V2.0.1/common/Technical-Insider/Publication.md
diff --git a/src/UserGuide/V2.0.1/Tree/API/Programming-CSharp-Native-API.md b/src/UserGuide/V2.0.1/Tree/API/Programming-CSharp-Native-API.md
new file mode 100644
index 000000000..12d431a3a
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/Programming-CSharp-Native-API.md
@@ -0,0 +1,213 @@
+
+
+# C# Native API
+
+## Installation
+
+### Install from NuGet Package
+
+We provide a NuGet package for C# users, which can be installed directly through the .NET CLI. [Our NuGet package is available here](https://www.nuget.org/packages/Apache.IoTDB/). Run the following command to complete the installation:
+
+```sh
+dotnet add package Apache.IoTDB
+```
+
+Note that the `Apache.IoTDB` package requires `.NET Framework 4.6.1` or higher.
+
+## Prerequisites
+
+ .NET SDK Version >= 5.0
+ .NET Framework >= 4.6.1
+
+## How to Use the Client (Quick Start)
+
+Users can quickly get started by referring to the use cases under the Apache-IoTDB-Client-CSharp-UserCase directory. These use cases serve as a useful resource for getting familiar with the client's functionality and capabilities.
+
+For those who wish to delve deeper into the client's usage and explore more advanced features, the samples directory contains additional code samples.
+
+## Developer environment requirements for iotdb-client-csharp
+
+```
+.NET SDK Version >= 5.0
+.NET Framework >= 4.6.1
+ApacheThrift >= 0.14.1
+NLog >= 4.7.9
+```
+
+### OS
+
+* Linux, macOS, or other Unix-like OS
+* Windows with Bash (WSL, Cygwin, Git Bash)
+
+### Command Line Tools
+
+* dotnet CLI
+* Thrift
+
+## Basic interface description
+
+The Session interface is semantically identical to that of the other language clients:
+
+```csharp
+// Parameters
+string host = "localhost";
+int port = 6667;
+int pool_size = 2;
+
+// Init Session
+var session_pool = new SessionPool(host, port, pool_size);
+
+// Open Session
+await session_pool.Open(false);
+
+// Create TimeSeries
+await session_pool.CreateTimeSeries("root.test_group.test_device.ts1", TSDataType.TEXT, TSEncoding.PLAIN, Compressor.UNCOMPRESSED);
+await session_pool.CreateTimeSeries("root.test_group.test_device.ts2", TSDataType.BOOLEAN, TSEncoding.PLAIN, Compressor.UNCOMPRESSED);
+await session_pool.CreateTimeSeries("root.test_group.test_device.ts3", TSDataType.INT32, TSEncoding.PLAIN, Compressor.UNCOMPRESSED);
+
+// Insert Record
+var measures = new List<string> { "ts1", "ts2", "ts3" };
+var values = new List<object> { "test_text", true, (int)123 };
+var timestamp = 1;
+var rowRecord = new RowRecord(timestamp, values, measures);
+await session_pool.InsertRecordAsync("root.test_group.test_device", rowRecord);
+
+// Insert Tablet
+var timestamp_lst = new List<long> { timestamp + 1 };
+var value_lst = new List<List<object>> { new List<object> { "iotdb", true, (int)12 } };
+var tablet = new Tablet("root.test_group.test_device", measures, value_lst, timestamp_lst);
+await session_pool.InsertTabletAsync(tablet);
+
+// Close Session
+await session_pool.Close();
+```
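+
+To read the written data back, a query can be issued through the same session pool. Below is a minimal sketch; the result-set iteration calls shown (`ShowTableNames`, `HasNext`, `Next`) are assumed from the client samples and may differ slightly between client versions:
+
+```csharp
+// Query the rows written above and print them one by one.
+var res = await session_pool.ExecuteQueryStatementAsync(
+    "select * from root.test_group.test_device where time < 15");
+res.ShowTableNames();
+while (res.HasNext()) Console.WriteLine(res.Next());
+await res.Close();
+```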
+
+## **Row Record**
+
+- Encapsulate and abstract the `record` data in **IoTDB**
+- e.g.
+
+ | timestamp | status | temperature |
+ | --------- | ------ | ----------- |
+ | 1 | 0 | 20 |
+
+- Construction:
+
+```csharp
+var rowRecord =
+    new RowRecord(long timestamps, List<object> values, List<string> measurements);
+```
+
+## **Tablet**
+
+- A table-like data structure containing several non-empty data rows of one device.
+- e.g.
+
+ | time | status | temperature |
+ | ---- | ------ | ----------- |
+ | 1 | 0 | 20 |
+ | 2 | 0 | 20 |
+ | 3 | 3 | 21 |
+
+- Construction:
+
+```csharp
+var tablet =
+    new Tablet(string deviceId, List<string> measurements, List<List<object>> values, List<long> timestamps);
+```
+
+
+
+## **API**
+
+### **Basic API**
+
+| api name | parameters | notes | use example |
+| -------------- | ------------------------- | ------------------------ | ----------------------------- |
+| Open | bool | open session | session_pool.Open(false) |
+| Close | null | close session | session_pool.Close() |
+| IsOpen | null | check if session is open | session_pool.IsOpen() |
+| OpenDebugMode | LoggingConfiguration=null | open debug mode | session_pool.OpenDebugMode() |
+| CloseDebugMode | null | close debug mode | session_pool.CloseDebugMode() |
+| SetTimeZone    | string                    | set time zone            | session_pool.SetTimeZone("UTC+08:00") |
+| GetTimeZone | null | get time zone | session_pool.GetTimeZone() |
+
+### **Record API**
+
+| api name | parameters | notes | use example |
+| ----------------------------------- | ----------------------------- | ----------------------------------- | ------------------------------------------------------------ |
+| InsertRecordAsync | string, RowRecord | insert single record | session_pool.InsertRecordAsync("root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE", new RowRecord(1, values, measures)); |
+| InsertRecordsAsync                  | List\<string\>, List\<RowRecord\> | insert records                      | session_pool.InsertRecordsAsync(device_ids, rowRecords)      |
+| InsertRecordsOfOneDeviceAsync       | string, List\<RowRecord\>     | insert records of one device        | session_pool.InsertRecordsOfOneDeviceAsync(device_id, rowRecords) |
+| InsertRecordsOfOneDeviceSortedAsync | string, List\<RowRecord\>     | insert sorted records of one device | session_pool.InsertRecordsOfOneDeviceSortedAsync(deviceId, sortedRowRecords) |
+| TestInsertRecordAsync               | string, RowRecord             | test insert record                  | session_pool.TestInsertRecordAsync("root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE", rowRecord) |
+| TestInsertRecordsAsync              | List\<string\>, List\<RowRecord\> | test insert records                 | session_pool.TestInsertRecordsAsync(device_ids, rowRecords)  |
+
+### **Tablet API**
+
+| api name | parameters | notes | use example |
+| ---------------------- | ------------ | -------------------- | -------------------------------------------- |
+| InsertTabletAsync      | Tablet         | insert single tablet | session_pool.InsertTabletAsync(tablet)       |
+| InsertTabletsAsync     | List\<Tablet\> | insert tablets       | session_pool.InsertTabletsAsync(tablets)     |
+| TestInsertTabletAsync  | Tablet         | test insert tablet   | session_pool.TestInsertTabletAsync(tablet)   |
+| TestInsertTabletsAsync | List\<Tablet\> | test insert tablets  | session_pool.TestInsertTabletsAsync(tablets) |
+
+### **SQL API**
+
+| api name | parameters | notes | use example |
+| ----------------------------- | ---------- | ------------------------------ | ------------------------------------------------------------ |
+| ExecuteQueryStatementAsync | string | execute sql query statement | session_pool.ExecuteQueryStatementAsync("select * from root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE where time<15"); |
+| ExecuteNonQueryStatementAsync | string | execute sql nonquery statement | session_pool.ExecuteNonQueryStatementAsync( "create timeseries root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE.status with datatype=BOOLEAN,encoding=PLAIN") |
+
+### **Schema API**
+
+| api name | parameters | notes | use example |
+| -------------------------- | ------------------------------------------------------------ | --------------------------- | ------------------------------------------------------------ |
+| SetStorageGroup            | string                                                       | set storage group           | session_pool.SetStorageGroup("root.97209_TEST_CSHARP_CLIENT_GROUP_01") |
+| CreateTimeSeries           | string, TSDataType, TSEncoding, Compressor                   | create time series          | session_pool.CreateTimeSeries("root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE.status", TSDataType.BOOLEAN, TSEncoding.PLAIN, Compressor.UNCOMPRESSED) |
+| DeleteStorageGroupAsync    | string                                                       | delete single storage group | session_pool.DeleteStorageGroupAsync("root.97209_TEST_CSHARP_CLIENT_GROUP_01") |
+| DeleteStorageGroupsAsync   | List\<string\>                                               | delete storage groups       | session_pool.DeleteStorageGroupsAsync(new List\<string\>{"root.97209_TEST_CSHARP_CLIENT_GROUP"}) |
+| CreateMultiTimeSeriesAsync | List\<string\>, List\<TSDataType\>, List\<TSEncoding\>, List\<Compressor\> | create multi time series    | session_pool.CreateMultiTimeSeriesAsync(ts_path_lst, data_type_lst, encoding_lst, compressor_lst) |
+| DeleteTimeSeriesAsync      | List\<string\>                                               | delete time series          | session_pool.DeleteTimeSeriesAsync(ts_path_lst)              |
+| DeleteTimeSeriesAsync      | string                                                       | delete time series          | session_pool.DeleteTimeSeriesAsync(ts_path)                  |
+| DeleteDataAsync            | List\<string\>, long, long                                   | delete data                 | session_pool.DeleteDataAsync(ts_path_lst, 2, 3)              |
+
+### **Other API**
+
+| api name | parameters | notes | use example |
+| -------------------------- | ---------- | --------------------------- | ---------------------------------------------------- |
+| CheckTimeSeriesExistsAsync | string     | check if time series exists | session_pool.CheckTimeSeriesExistsAsync(ts_path)     |
+
+
+
+For more examples, see [the samples directory](https://github.com/apache/iotdb-client-csharp/tree/main/samples/Apache.IoTDB.Samples).
+
+## SessionPool
+
+To support concurrent client requests, we provide a `SessionPool` for the native interface. Since `SessionPool` is a superset of `Session`, setting its `pool_size` parameter to 1 makes it behave like the original `Session`.
+
+We use the `ConcurrentQueue` data structure to encapsulate a client queue to maintain multiple connections with the server. When the `Open()` interface is called, a specified number of clients are created in the queue, and synchronous access to the queue is achieved through the `System.Threading.Monitor` class.
+
+When a request arrives, the pool tries to find an idle client connection. If no idle connection is available, the program waits until one becomes free.
+
+When a connection has been used, it automatically returns to the pool and waits to be borrowed again.
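+
+As an illustration of the pooled concurrency described above, here is a minimal sketch: several insert tasks share one `SessionPool`, so each task borrows an idle client from the queue and returns it when the call completes. The storage path is illustrative, and the `Apache.IoTDB` namespaces shown are assumptions that may differ across client versions.
+
+```csharp
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using Apache.IoTDB;
+using Apache.IoTDB.DataStructure;
+
+// Pool of 3 clients; requests beyond 3 simply wait for an idle connection.
+var session_pool = new SessionPool("localhost", 6667, 3);
+await session_pool.Open(false);
+
+var tasks = new List<Task>();
+for (int i = 0; i < 8; i++)
+{
+    long ts = i + 1;
+    // Each InsertRecordAsync call runs on whichever pooled client is free.
+    tasks.Add(session_pool.InsertRecordAsync(
+        "root.test_group.test_device",
+        new RowRecord(ts, new List<object> { i }, new List<string> { "ts3" })));
+}
+await Task.WhenAll(tasks);
+
+await session_pool.Close();
+```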
+
diff --git a/src/UserGuide/V2.0.1/Tree/API/Programming-Cpp-Native-API.md b/src/UserGuide/V2.0.1/Tree/API/Programming-Cpp-Native-API.md
new file mode 100644
index 000000000..83f024d8a
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/Programming-Cpp-Native-API.md
@@ -0,0 +1,428 @@
+
+
+# C++ Native API
+
+## Dependencies
+
+- Java 8+
+- Flex
+- Bison 2.7+
+- Boost 1.56+
+- OpenSSL 1.0+
+- GCC 5.5.0+
+
+## Installation
+
+### Install Required Dependencies
+
+- **MAC**
+ 1. Install Bison:
+
+ Use the following brew command to install the Bison version:
+ ```shell
+ brew install bison
+ ```
+
+ 2. Install Boost: Make sure to install the latest version of Boost.
+
+ ```shell
+ brew install boost
+ ```
+
+ 3. Check OpenSSL: Make sure the OpenSSL library is installed. The default OpenSSL header file path is "/usr/local/opt/openssl/include".
+
+ If you encounter errors related to OpenSSL not being found during compilation, try adding `-Dopenssl.include.dir=""`.
+
+- **Ubuntu 16.04+ or Other Debian-based Systems**
+
+ Use the following commands to install dependencies:
+
+ ```shell
+ sudo apt-get update
+ sudo apt-get install gcc g++ bison flex libboost-all-dev libssl-dev
+ ```
+
+- **CentOS 7.7+/Fedora/Rocky Linux or Other Red Hat-based Systems**
+
+ Use the yum command to install dependencies:
+
+ ```shell
+ sudo yum update
+ sudo yum install gcc gcc-c++ boost-devel bison flex openssl-devel
+ ```
+
+- **Windows**
+
+ 1. Set Up the Build Environment
+ - Install MS Visual Studio (version 2019+ recommended): Make sure to select Visual Studio C/C++ IDE and compiler (supporting CMake, Clang, MinGW) during installation.
+ - Download and install [CMake](https://cmake.org/download/).
+
+ 2. Download and Install Flex, Bison
+ - Download [Win_Flex_Bison](https://sourceforge.net/projects/winflexbison/).
+ - After downloading, rename the executables to flex.exe and bison.exe to ensure they can be found during compilation, and add the directory of these executables to the PATH environment variable.
+
+ 3. Install Boost Library
+ - Download [Boost](https://www.boost.org/users/download/).
+ - Compile Boost locally: Run `bootstrap.bat` and `b2.exe` in sequence.
+ - Add the Boost installation directory to the PATH environment variable, e.g., `C:\Program Files (x86)\boost_1_78_0`.
+
+ 4. Install OpenSSL
+ - Download and install [OpenSSL](http://slproweb.com/products/Win32OpenSSL.html).
+ - Add the include directory under the installation directory to the PATH environment variable.
+
+### Compilation
+
+Clone the source code from git:
+```shell
+git clone https://github.com/apache/iotdb.git
+```
+
+The default branch is master. If you want to use a specific release version, switch to that release branch (e.g., for version 1.3.2):
+```shell
+git checkout rc/1.3.2
+```
+
+Run Maven to compile in the IoTDB root directory:
+
+- Mac or Linux with glibc version >= 2.32
+ ```shell
+ ./mvnw clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp
+ ```
+
+- Linux with glibc version >= 2.31
+ ```shell
+ ./mvnw clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Diotdb-tools-thrift.version=0.14.1.1-old-glibc-SNAPSHOT
+ ```
+
+- Linux with glibc version >= 2.17
+ ```shell
+ ./mvnw clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Diotdb-tools-thrift.version=0.14.1.1-glibc223-SNAPSHOT
+ ```
+
+- Windows using Visual Studio 2022
+ ```Batchfile
+ .\mvnw.cmd clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp
+ ```
+
+- Windows using Visual Studio 2019
+ ```Batchfile
+ .\mvnw.cmd clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Dcmake.generator="Visual Studio 16 2019" -Diotdb-tools-thrift.version=0.14.1.1-msvc142-SNAPSHOT
+ ```
+ - If you haven't added the Boost library path to the PATH environment variable, you need to add the relevant parameters to the compile command, e.g., `-DboostIncludeDir="C:\Program Files (x86)\boost_1_78_0" -DboostLibraryDir="C:\Program Files (x86)\boost_1_78_0\stage\lib"`.
+
+After successful compilation, the packaged library files will be located in `iotdb-client/client-cpp/target`, and you can find the compiled example program under `example/client-cpp-example/target`.
+
+### Compilation Q&A
+
+Q: What are the requirements for the environment on Linux?
+
+A:
+- The known minimum version requirement for glibc (x86_64 version) is 2.17, and the minimum version for GCC is 5.5.
+- The known minimum version requirement for glibc (ARM version) is 2.31, and the minimum version for GCC is 10.2.
+- If the above requirements are not met, you can try compiling Thrift locally:
+ - Download the code from https://github.com/apache/iotdb-bin-resources/tree/iotdb-tools-thrift-v0.14.1.0/iotdb-tools-thrift.
+ - Run `./mvnw clean install`.
+ - Go back to the IoTDB code directory and run `./mvnw clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp`.
+
+Q: How to resolve the `undefined reference to '__libc_single_threaded'` error during Linux compilation?
+
+A:
+- This issue is caused by the precompiled Thrift dependencies requiring a higher version of glibc.
+- You can try adding `-Diotdb-tools-thrift.version=0.14.1.1-glibc223-SNAPSHOT` or `-Diotdb-tools-thrift.version=0.14.1.1-old-glibc-SNAPSHOT` to the Maven compile command.
+
+Q: What if I need to compile using Visual Studio 2017 or earlier on Windows?
+
+A:
+- You can try compiling Thrift locally before compiling the client:
+ - Download the code from https://github.com/apache/iotdb-bin-resources/tree/iotdb-tools-thrift-v0.14.1.0/iotdb-tools-thrift.
+ - Run `.\mvnw.cmd clean install`.
+ - Go back to the IoTDB code directory and run `.\mvnw.cmd clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Dcmake.generator="Visual Studio 15 2017"`.
+
+
+## Native APIs
+
+Here we show the commonly used interfaces and their parameters in the Native API:
+
+### Initialization
+
+- Open a Session
+```cpp
+void open();
+```
+
+- Open a session, with a parameter to specify whether to enable RPC compression
+```cpp
+void open(bool enableRPCCompression);
+```
+Notice: the client's RPC compression setting must match that of the IoTDB server
+
+- Close a Session
+```cpp
+void close();
+```
+
+### Data Definition Interface (DDL)
+
+#### Database Management
+
+- CREATE DATABASE
+```cpp
+void setStorageGroup(const std::string &storageGroupId);
+```
+
+- Delete one or several databases
+```cpp
+void deleteStorageGroup(const std::string &storageGroup);
+void deleteStorageGroups(const std::vector<std::string> &storageGroups);
+```
+
+#### Timeseries Management
+
+- Create one or multiple timeseries
+```cpp
+void createTimeseries(const std::string &path, TSDataType::TSDataType dataType, TSEncoding::TSEncoding encoding,
+ CompressionType::CompressionType compressor);
+
+void createMultiTimeseries(const std::vector<std::string> &paths,
+                           const std::vector<TSDataType::TSDataType> &dataTypes,
+                           const std::vector<TSEncoding::TSEncoding> &encodings,
+                           const std::vector<CompressionType::CompressionType> &compressors,
+                           std::vector<std::map<std::string, std::string>> *propsList,
+                           std::vector<std::map<std::string, std::string>> *tagsList,
+                           std::vector<std::map<std::string, std::string>> *attributesList,
+                           std::vector<std::string> *measurementAliasList);
+```
+
+- Create aligned timeseries
+```cpp
+void createAlignedTimeseries(const std::string &deviceId,
+                             const std::vector<std::string> &measurements,
+                             const std::vector<TSDataType::TSDataType> &dataTypes,
+                             const std::vector<TSEncoding::TSEncoding> &encodings,
+                             const std::vector<CompressionType::CompressionType> &compressors);
+```
+
+- Delete one or several timeseries
+```cpp
+void deleteTimeseries(const std::string &path);
+void deleteTimeseries(const std::vector<std::string> &paths);
+```
+
+- Check whether the specific timeseries exists.
+```cpp
+bool checkTimeseriesExists(const std::string &path);
+```
+
+#### Schema Template
+
+- Create a schema template
+```cpp
+void createSchemaTemplate(const Template &templ);
+```
+
+- Set the schema template named `templateName` at path `prefixPath`.
+```cpp
+void setSchemaTemplate(const std::string &template_name, const std::string &prefix_path);
+```
+
+- Unset the schema template
+```cpp
+void unsetSchemaTemplate(const std::string &prefix_path, const std::string &template_name);
+```
+
+- After a measurement template is created, you can edit it with the APIs below.
+```cpp
+// Add aligned measurements to a template
+void addAlignedMeasurementsInTemplate(const std::string &template_name,
+                                      const std::vector<std::string> &measurements,
+                                      const std::vector<TSDataType::TSDataType> &dataTypes,
+                                      const std::vector<TSEncoding::TSEncoding> &encodings,
+                                      const std::vector<CompressionType::CompressionType> &compressors);
+
+// Add one aligned measurement to a template
+void addAlignedMeasurementsInTemplate(const std::string &template_name,
+ const std::string &measurement,
+ TSDataType::TSDataType dataType,
+ TSEncoding::TSEncoding encoding,
+ CompressionType::CompressionType compressor);
+
+// Add unaligned measurements to a template
+void addUnalignedMeasurementsInTemplate(const std::string &template_name,
+                                        const std::vector<std::string> &measurements,
+                                        const std::vector<TSDataType::TSDataType> &dataTypes,
+                                        const std::vector<TSEncoding::TSEncoding> &encodings,
+                                        const std::vector<CompressionType::CompressionType> &compressors);
+
+// Add one unaligned measurement to a template
+void addUnalignedMeasurementsInTemplate(const std::string &template_name,
+ const std::string &measurement,
+ TSDataType::TSDataType dataType,
+ TSEncoding::TSEncoding encoding,
+ CompressionType::CompressionType compressor);
+
+// Delete a node in template and its children
+void deleteNodeInTemplate(const std::string &template_name, const std::string &path);
+```
+
+- You can query measurement templates with these APIs:
+```cpp
+// Return the number of measurements inside a template
+int countMeasurementsInTemplate(const std::string &template_name);
+
+// Return true if path points to a measurement, otherwise return false
+bool isMeasurementInTemplate(const std::string &template_name, const std::string &path);
+
+// Return true if path exists in template, otherwise return false
+bool isPathExistInTemplate(const std::string &template_name, const std::string &path);
+
+// Return all measurement paths inside template
+std::vector<std::string> showMeasurementsInTemplate(const std::string &template_name);
+
+// Return all measurement paths under the designated pattern inside template
+std::vector<std::string> showMeasurementsInTemplate(const std::string &template_name, const std::string &pattern);
+```
+
+
+### Data Manipulation Interface (DML)
+
+#### Insert
+
+> It is recommended to use insertTablet to help improve write efficiency.
+
+- Insert a Tablet, which is multiple rows of a device; each row has the same measurements
+  - Better write performance
+  - Support null values: fill null positions with any value, and then mark them via the BitMap
+```cpp
+void insertTablet(Tablet &tablet);
+```
+
+- Insert multiple Tablets
+```cpp
+void insertTablets(std::unordered_map<std::string, Tablet *> &tablets);
+```
+
+- Insert a Record, which contains multiple measurement values of a device at a timestamp
+```cpp
+void insertRecord(const std::string &deviceId, int64_t time, const std::vector<std::string> &measurements,
+                  const std::vector<TSDataType::TSDataType> &types, const std::vector<char *> &values);
+```
+
+- Insert multiple Records
+```cpp
+void insertRecords(const std::vector<std::string> &deviceIds,
+                   const std::vector<int64_t> &times,
+                   const std::vector<std::vector<std::string>> &measurementsList,
+                   const std::vector<std::vector<TSDataType::TSDataType>> &typesList,
+                   const std::vector<std::vector<char *>> &valuesList);
+```
+
+- Insert multiple Records that belong to the same device. With type info, the server has no need to do type inference, which leads to better performance
+```cpp
+void insertRecordsOfOneDevice(const std::string &deviceId,
+                              std::vector<int64_t> &times,
+                              std::vector<std::vector<std::string>> &measurementsList,
+                              std::vector<std::vector<TSDataType::TSDataType>> &typesList,
+                              std::vector<std::vector<char *>> &valuesList);
+```
+
+#### Insert with type inference
+
+Without type information, the server has to infer the types, which may take extra time.
+
+```cpp
+void insertRecord(const std::string &deviceId, int64_t time, const std::vector<std::string> &measurements,
+                  const std::vector<std::string> &values);
+
+
+void insertRecords(const std::vector<std::string> &deviceIds,
+                   const std::vector<int64_t> &times,
+                   const std::vector<std::vector<std::string>> &measurementsList,
+                   const std::vector<std::vector<std::string>> &valuesList);
+
+
+void insertRecordsOfOneDevice(const std::string &deviceId,
+                              std::vector<int64_t> &times,
+                              std::vector<std::vector<std::string>> &measurementsList,
+                              const std::vector<std::vector<std::string>> &valuesList);
+```
+
+#### Insert data into Aligned Timeseries
+
+Inserting into aligned timeseries uses the `insertAlignedXXX` interfaces; they are otherwise similar to the interfaces above:
+
+- insertAlignedRecord
+- insertAlignedRecords
+- insertAlignedRecordsOfOneDevice
+- insertAlignedTablet
+- insertAlignedTablets
+
+#### Delete
+
+- Delete data in a time range of one or several timeseries
+```cpp
+void deleteData(const std::string &path, int64_t endTime);
+void deleteData(const std::vector<std::string> &paths, int64_t endTime);
+void deleteData(const std::vector<std::string> &paths, int64_t startTime, int64_t endTime);
+```
+
+### IoTDB-SQL Interface
+
+- Execute query statement
+```cpp
+std::unique_ptr<SessionDataSet> executeQueryStatement(const std::string &sql);
+```
+
+- Execute non query statement
+```cpp
+void executeNonQueryStatement(const std::string &sql);
+```
+
+
+## Examples
+
+The sample code of using these interfaces is in:
+
+- `example/client-cpp-example/src/SessionExample.cpp`
+- `example/client-cpp-example/src/AlignedTimeseriesSessionExample.cpp` (Aligned Timeseries)
+
+If the compilation finishes successfully, the example project will be placed under `example/client-cpp-example/target`.
+
+## FAQ
+
+### on Mac
+
+If errors occur when compiling the Thrift source code, try downgrading your Xcode command-line tools from 12 to 11.5.
+
+See https://stackoverflow.com/questions/63592445/ld-unsupported-tapi-file-type-tapi-tbd-in-yaml-file/65518087#65518087
+
+
+### on Windows
+
+When building Thrift and downloading packages via `wget`, you may run into an
+error message that looks like:
+```shell
+Failed to delete cached file C:\Users\Administrator\.m2\repository\.cache\download-maven-plugin\index.ser
+```
+Possible fixes:
+- Try to delete the `.m2\repository\.cache\` directory and try again.
+- Add the `<skipCache>true</skipCache>` configuration to the download-maven-plugin execution that reports this error.
+
diff --git a/src/UserGuide/V2.0.1/Tree/API/Programming-Go-Native-API.md b/src/UserGuide/V2.0.1/Tree/API/Programming-Go-Native-API.md
new file mode 100644
index 000000000..b227ed672
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/Programming-Go-Native-API.md
@@ -0,0 +1,64 @@
+
+
+# Go Native API
+
+The Git repository for the Go Native API client is located [here](https://github.com/apache/iotdb-client-go/)
+
+## Dependencies
+
+ * golang >= 1.13
+ * make >= 3.0
+ * curl >= 7.1.1
+ * thrift 0.15.0
+ * Linux, macOS, or other Unix-like systems
+ * Windows with bash (WSL, Cygwin, or Git Bash)
+
+## Installation
+
+ * go mod
+
+```sh
+export GO111MODULE=on
+export GOPROXY=https://goproxy.io
+
+mkdir session_example && cd session_example
+
+curl -o session_example.go -L https://github.com/apache/iotdb-client-go/raw/main/example/session_example.go
+
+go mod init session_example
+go run session_example.go
+```
+
+* GOPATH
+
+```sh
+# get thrift 0.15.0
+go get github.com/apache/thrift
+cd $GOPATH/src/github.com/apache/thrift
+git checkout 0.15.0
+
+mkdir -p $GOPATH/src/iotdb-client-go-example/session_example
+cd $GOPATH/src/iotdb-client-go-example/session_example
+curl -o session_example.go -L https://github.com/apache/iotdb-client-go/raw/main/example/session_example.go
+go run session_example.go
+```
+
diff --git a/src/UserGuide/V2.0.1/Tree/API/Programming-JDBC.md b/src/UserGuide/V2.0.1/Tree/API/Programming-JDBC.md
new file mode 100644
index 000000000..0251e469c
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/Programming-JDBC.md
@@ -0,0 +1,296 @@
+
+
+# JDBC (Not Recommend)
+
+*NOTICE: CURRENTLY, JDBC IS USED FOR CONNECTING SOME THIRD-PARTY TOOLS.
+IT CANNOT PROVIDE HIGH THROUGHPUT FOR WRITE OPERATIONS.
+PLEASE USE THE [Java Native API](./Programming-Java-Native-API.md) INSTEAD.*
+
+## Dependencies
+
+* JDK >= 1.8
+* Maven >= 3.9
+
+## Installation
+
+In root directory:
+
+```shell
+mvn clean install -pl iotdb-client/jdbc -am -DskipTests
+```
+
+## Use IoTDB JDBC with Maven
+
+```xml
+<dependencies>
+    <dependency>
+        <groupId>org.apache.iotdb</groupId>
+        <artifactId>iotdb-jdbc</artifactId>
+        <version>1.3.1</version>
+    </dependency>
+</dependencies>
+```
+
+## Coding Examples
+
+This chapter provides an example of how to open a database connection, execute an SQL query, and display the results.
+
+It requires including the packages containing the JDBC classes needed for database programming.
+
+**NOTE: For faster insertion, the insertTablet() in Session is recommended.**
+
+```java
+import java.sql.*;
+import org.apache.iotdb.jdbc.IoTDBSQLException;
+
+public class JDBCExample {
+ /**
+ * Before executing a SQL statement with a Statement object, you need to create a Statement object using the createStatement() method of the Connection object.
+ * After creating a Statement object, you can use its execute() method to execute a SQL statement
+ * Finally, remember to close the 'statement' and 'connection' objects by using their close() method
+ * For statements with query results, we can use the getResultSet() method of the Statement object to get the result set.
+ */
+ public static void main(String[] args) throws SQLException {
+ Connection connection = getConnection();
+ if (connection == null) {
+ System.out.println("Failed to get connection");
+ return;
+ }
+ Statement statement = connection.createStatement();
+ //Create database
+ try {
+ statement.execute("CREATE DATABASE root.demo");
+ }catch (IoTDBSQLException e){
+ System.out.println(e.getMessage());
+ }
+
+
+ //SHOW DATABASES
+ statement.execute("SHOW DATABASES");
+ outputResult(statement.getResultSet());
+
+ //Create time series
+ //Different data type has different encoding methods. Here use INT32 as an example
+ try {
+ statement.execute("CREATE TIMESERIES root.demo.s0 WITH DATATYPE=INT32,ENCODING=RLE;");
+ }catch (IoTDBSQLException e){
+ System.out.println(e.getMessage());
+ }
+ //Show time series
+ statement.execute("SHOW TIMESERIES root.demo");
+ outputResult(statement.getResultSet());
+ //Show devices
+ statement.execute("SHOW DEVICES");
+ outputResult(statement.getResultSet());
+ //Count time series
+ statement.execute("COUNT TIMESERIES root");
+ outputResult(statement.getResultSet());
+ //Count nodes at the given level
+ statement.execute("COUNT NODES root LEVEL=3");
+ outputResult(statement.getResultSet());
+ //Count timeseries group by each node at the given level
+ statement.execute("COUNT TIMESERIES root GROUP BY LEVEL=3");
+ outputResult(statement.getResultSet());
+
+
+ //Execute insert statements in batch
+ statement.addBatch("INSERT INTO root.demo(timestamp,s0) VALUES(1,1);");
+ statement.addBatch("INSERT INTO root.demo(timestamp,s0) VALUES(1,1);");
+ statement.addBatch("INSERT INTO root.demo(timestamp,s0) VALUES(2,15);");
+ statement.addBatch("INSERT INTO root.demo(timestamp,s0) VALUES(2,17);");
+ statement.addBatch("INSERT INTO root.demo(timestamp,s0) values(4,12);");
+ statement.executeBatch();
+ statement.clearBatch();
+
+ //Full query statement
+ String sql = "SELECT * FROM root.demo";
+ ResultSet resultSet = statement.executeQuery(sql);
+ System.out.println("sql: " + sql);
+ outputResult(resultSet);
+
+ //Exact query statement
+ sql = "SELECT s0 FROM root.demo WHERE time = 4;";
+ resultSet= statement.executeQuery(sql);
+ System.out.println("sql: " + sql);
+ outputResult(resultSet);
+
+ //Time range query
+ sql = "SELECT s0 FROM root.demo WHERE time >= 2 AND time < 5;";
+ resultSet = statement.executeQuery(sql);
+ System.out.println("sql: " + sql);
+ outputResult(resultSet);
+
+ //Aggregate query
+ sql = "SELECT COUNT(s0) FROM root.demo;";
+ resultSet = statement.executeQuery(sql);
+ System.out.println("sql: " + sql);
+ outputResult(resultSet);
+
+ //Delete time series
+ statement.execute("DELETE timeseries root.demo.s0");
+
+ //close connection
+ statement.close();
+ connection.close();
+ }
+
+ public static Connection getConnection() {
+ // JDBC driver name and database URL
+ String driver = "org.apache.iotdb.jdbc.IoTDBDriver";
+ String url = "jdbc:iotdb://127.0.0.1:6667/";
+ // set rpc compress mode
+ // String url = "jdbc:iotdb://127.0.0.1:6667?rpc_compress=true";
+
+ // Database credentials
+ String username = "root";
+ String password = "root";
+
+ Connection connection = null;
+ try {
+ Class.forName(driver);
+ connection = DriverManager.getConnection(url, username, password);
+ } catch (ClassNotFoundException e) {
+ e.printStackTrace();
+ } catch (SQLException e) {
+ e.printStackTrace();
+ }
+ return connection;
+ }
+
+ /**
+ * This is an example of outputting the results in the ResultSet
+ */
+ private static void outputResult(ResultSet resultSet) throws SQLException {
+ if (resultSet != null) {
+ System.out.println("--------------------------");
+ final ResultSetMetaData metaData = resultSet.getMetaData();
+ final int columnCount = metaData.getColumnCount();
+ for (int i = 0; i < columnCount; i++) {
+ System.out.print(metaData.getColumnLabel(i + 1) + " ");
+ }
+ System.out.println();
+ while (resultSet.next()) {
+ for (int i = 1; ; i++) {
+ System.out.print(resultSet.getString(i));
+ if (i < columnCount) {
+ System.out.print(", ");
+ } else {
+ System.out.println();
+ break;
+ }
+ }
+ }
+ System.out.println("--------------------------\n");
+ }
+ }
+}
+```
+
+The parameter `version` can be used in the URL:
+````java
+String url = "jdbc:iotdb://127.0.0.1:6667?version=V_1_0";
+````
+The parameter `version` represents the SQL semantic version used by the client, which keeps the client compatible with the SQL semantics of `0.12` when upgrading to `0.13`.
+The possible values are: `V_0_12`, `V_0_13`, `V_1_0`.
+
+In addition, IoTDB provides additional interfaces in JDBC for users to read and write the database using different character sets (e.g., GB18030) in the connection.
+The default character set for IoTDB is UTF-8. When users want to use a character set other than UTF-8, they need to specify the charset property in the JDBC connection. For example:
+1. Create a connection using the GB18030 charset:
+```java
+DriverManager.getConnection("jdbc:iotdb://127.0.0.1:6667?charset=GB18030", "root", "root");
+```
+2. When executing SQL with the `IoTDBStatement` interface, the SQL can be provided as a `byte[]` array, and it will be parsed into a string according to the specified charset.
+```java
+public boolean execute(byte[] sql) throws SQLException;
+```
+3. When outputting query results, the `getBytes` method of `ResultSet` can be used to get `byte[]`, which will be encoded using the charset specified in the connection.
+```java
+System.out.print(resultSet.getString(i) + " (" + new String(resultSet.getBytes(i), charset) + ")");
+```
+Here is a complete example:
+```java
+public class JDBCCharsetExample {
+
+ private static final Logger LOGGER = LoggerFactory.getLogger(JDBCCharsetExample.class);
+
+ public static void main(String[] args) throws Exception {
+ Class.forName("org.apache.iotdb.jdbc.IoTDBDriver");
+
+ try (final Connection connection =
+ DriverManager.getConnection(
+ "jdbc:iotdb://127.0.0.1:6667?charset=GB18030", "root", "root");
+ final IoTDBStatement statement = (IoTDBStatement) connection.createStatement()) {
+
+ final String insertSQLWithGB18030 =
+ "insert into root.测试(timestamp, 维语, 彝语, 繁体, 蒙文, 简体, 标点符号, 藏语) values(1, 'ئۇيغۇر تىلى', 'ꆈꌠꉙ', \"繁體\", 'ᠮᠣᠩᠭᠣᠯ ᠬᠡᠯᠡ', '简体', '——?!', \"བོད་སྐད།\");";
+ final byte[] insertSQLWithGB18030Bytes = insertSQLWithGB18030.getBytes("GB18030");
+ statement.execute(insertSQLWithGB18030Bytes);
+ } catch (IoTDBSQLException e) {
+ LOGGER.error("IoTDB Jdbc example error", e);
+ }
+
+ outputResult("GB18030");
+ outputResult("UTF-8");
+ outputResult("UTF-16");
+ outputResult("GBK");
+ outputResult("ISO-8859-1");
+ }
+
+ private static void outputResult(String charset) throws SQLException {
+ System.out.println("[Charset: " + charset + "]");
+ try (final Connection connection =
+ DriverManager.getConnection(
+ "jdbc:iotdb://127.0.0.1:6667?charset=" + charset, "root", "root");
+ final IoTDBStatement statement = (IoTDBStatement) connection.createStatement()) {
+ outputResult(statement.executeQuery("select ** from root"), Charset.forName(charset));
+ } catch (IoTDBSQLException e) {
+ LOGGER.error("IoTDB Jdbc example error", e);
+ }
+ }
+
+ private static void outputResult(ResultSet resultSet, Charset charset) throws SQLException {
+ if (resultSet != null) {
+ System.out.println("--------------------------");
+ final ResultSetMetaData metaData = resultSet.getMetaData();
+ final int columnCount = metaData.getColumnCount();
+ for (int i = 0; i < columnCount; i++) {
+ System.out.print(metaData.getColumnLabel(i + 1) + " ");
+ }
+ System.out.println();
+
+ while (resultSet.next()) {
+ for (int i = 1; ; i++) {
+ System.out.print(
+ resultSet.getString(i) + " (" + new String(resultSet.getBytes(i), charset) + ")");
+ if (i < columnCount) {
+ System.out.print(", ");
+ } else {
+ System.out.println();
+ break;
+ }
+ }
+ }
+ System.out.println("--------------------------\n");
+ }
+ }
+}
+```
\ No newline at end of file
diff --git a/src/UserGuide/V2.0.1/Tree/API/Programming-Java-Native-API.md b/src/UserGuide/V2.0.1/Tree/API/Programming-Java-Native-API.md
new file mode 100644
index 000000000..387a9e075
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/Programming-Java-Native-API.md
@@ -0,0 +1,842 @@
+
+
+# Java Native API
+
+## Installation
+
+### Dependencies
+
+* JDK >= 1.8
+* Maven >= 3.6
+
+
+### Using IoTDB Java Native API with Maven
+
+```xml
+<dependencies>
+    <dependency>
+        <groupId>org.apache.iotdb</groupId>
+        <artifactId>iotdb-session</artifactId>
+        <version>1.0.0</version>
+    </dependency>
+</dependencies>
+```
+
+## Syntax Convention
+
+- **IoTDB-SQL interface:** The input SQL parameter needs to conform to the [syntax conventions](../User-Manual/Syntax-Rule.md#Literal-Values) and be escaped for Java strings. For example, you need to add a backslash before each double quote. (That is, after Java escaping, it is consistent with the SQL statement executed on the command line.)
+- **Other interfaces:**
+  - Node names in a path or path prefix passed as a parameter: node names that would be escaped with backticks (`) in an SQL statement also require escaping here.
+  - Identifiers (such as template names) passed as parameters: identifiers that would be escaped with backticks (`) in an SQL statement do not require escaping here.
+- **Code example for syntax convention could be found at:** `example/session/src/main/java/org/apache/iotdb/SyntaxConventionRelatedExample.java`
+
+## Native APIs
+
+Here we show the commonly used interfaces and their parameters in the Native API:
+
+### Session Management
+
+* Initialize a Session
+
+``` java
+// use default configuration
+session = new Session.Builder().build();
+
+// initialize with a single node
+session =
+ new Session.Builder()
+ .host(String host)
+ .port(int port)
+ .build();
+
+// initialize with multiple nodes
+session =
+ new Session.Builder()
+ .nodeUrls(List<String> nodeUrls)
+ .build();
+
+// other configurations
+session =
+ new Session.Builder()
+ .fetchSize(int fetchSize)
+ .username(String username)
+ .password(String password)
+ .thriftDefaultBufferSize(int thriftDefaultBufferSize)
+ .thriftMaxFrameSize(int thriftMaxFrameSize)
+ .enableRedirection(boolean enableRedirection)
+ .version(Version version)
+ .build();
+```
+
+Version represents the SQL semantic version used by the client, which keeps the client compatible with the SQL semantics of 0.12 when upgrading to 0.13. The possible values are: `V_0_12`, `V_0_13`, `V_1_0`, etc.
+
+
+* Open a Session
+
+``` java
+void open()
+```
+
+* Open a session, with a parameter to specify whether to enable RPC compression
+
+``` java
+void open(boolean enableRPCCompression)
+```
+
+Notice: the client's RPC compression setting must match that of the IoTDB server
+
+* Close a Session
+
+``` java
+void close()
+```
+
+* SessionPool
+
+We provide a connection pool (`SessionPool`) for the Native API.
+Using the interface, you need to define the pool size.
+
+If you cannot get a session connection within 60 seconds, a warning is logged, but the program will hang.
+
+If a session has finished an operation, it will be put back into the pool automatically.
+If a session connection is broken, the session will be removed automatically, and the pool will try
+to create a new session and redo the operation.
+You can also specify a URL list of multiple reachable nodes when creating a SessionPool, just as you would when creating a Session, to ensure high availability of clients in a distributed cluster.
+
+For query operations:
+
+1. When using SessionPool to query data, the result set is a `SessionDataSetWrapper`.
+2. Given a `SessionDataSetWrapper`, if you have not scanned all the data in it and want to stop using it,
+you have to call `SessionPool.closeResultSet(wrapper)` manually.
+3. If a call to `hasNext()` or `next()` of a `SessionDataSetWrapper` throws an exception,
+you also have to call `SessionPool.closeResultSet(wrapper)` manually.
+4. You can call `getColumnNames()` of `SessionDataSetWrapper` to get the column names of the query result.
+
+Examples: ```session/src/test/java/org/apache/iotdb/session/pool/SessionPoolTest.java```
+
+Or `example/session/src/main/java/org/apache/iotdb/SessionPoolExample.java`
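+
+As a minimal sketch of the pool lifecycle (host, port, credentials, and the query path are example values; the builder methods mirror the ones used in the Kafka example later in this document):
+
+``` java
+SessionPool pool =
+    new SessionPool.Builder()
+        .host("127.0.0.1")
+        .port(6667)
+        .user("root")
+        .password("root")
+        .maxSize(3)
+        .build();
+try {
+  SessionDataSetWrapper wrapper = pool.executeQueryStatement("select s1 from root.sg1.d1");
+  try {
+    System.out.println(wrapper.getColumnNames());
+    while (wrapper.hasNext()) {
+      System.out.println(wrapper.next());
+    }
+  } finally {
+    // return the underlying session to the pool, as required for SessionDataSetWrapper
+    pool.closeResultSet(wrapper);
+  }
+} catch (IoTDBConnectionException | StatementExecutionException e) {
+  e.printStackTrace();
+}
+pool.close();
+```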
+
+
+### Database & Timeseries Management API
+
+#### Database Management
+
+* CREATE DATABASE
+
+``` java
+void setStorageGroup(String storageGroupId)
+```
+
+* Delete one or several databases
+
+``` java
+void deleteStorageGroup(String storageGroup)
+void deleteStorageGroups(List<String> storageGroups)
+```
+
+#### Timeseries Management
+
+* Create one or multiple timeseries
+
+``` java
+void createTimeseries(String path, TSDataType dataType,
+    TSEncoding encoding, CompressionType compressor, Map<String, String> props,
+    Map<String, String> tags, Map<String, String> attributes, String measurementAlias)
+
+void createMultiTimeseries(List<String> paths, List<TSDataType> dataTypes,
+    List<TSEncoding> encodings, List<CompressionType> compressors,
+    List<Map<String, String>> propsList, List<Map<String, String>> tagsList,
+    List<Map<String, String>> attributesList, List<String> measurementAliasList)
+```
+
+* Create aligned timeseries
+``` java
+void createAlignedTimeseries(String prefixPath, List<String> measurements,
+    List<TSDataType> dataTypes, List<TSEncoding> encodings,
+    List<CompressionType> compressors, List<String> measurementAliasList);
+```
+
+Attention: Measurement aliases are currently **not supported**.
+
+* Delete one or several timeseries
+
+``` java
+void deleteTimeseries(String path)
+void deleteTimeseries(List<String> paths)
+```
+
+* Check whether the specific timeseries exists.
+
+``` java
+boolean checkTimeseriesExists(String path)
+```
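+
+As a minimal sketch combining the calls above (given an open `session`; the path and schema are example values, and `null` is passed for the optional props/tags/attributes/alias parameters):
+
+``` java
+if (!session.checkTimeseriesExists("root.sg1.d1.s1")) {
+  session.createTimeseries(
+      "root.sg1.d1.s1",
+      TSDataType.INT64,
+      TSEncoding.RLE,
+      CompressionType.SNAPPY,
+      null, null, null, null);
+}
+```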
+
+#### Schema Template
+
+
+Creating a schema template for a large number of identical devices helps improve memory performance. You can use Template, InternalNode and MeasurementNode to describe the structure of the template, and use the interface below to create it inside a session.
+
+``` java
+public void createSchemaTemplate(Template template);
+
+Class Template {
+ private String name;
+ private boolean directShareTime;
+ Map<String, Node> children;
+ public Template(String name, boolean isShareTime);
+
+ public void addToTemplate(Node node);
+ public void deleteFromTemplate(String name);
+ public void setShareTime(boolean shareTime);
+}
+
+Abstract Class Node {
+ private String name;
+ public void addChild(Node node);
+ public void deleteChild(Node node);
+}
+
+Class MeasurementNode extends Node {
+ TSDataType dataType;
+ TSEncoding encoding;
+ CompressionType compressor;
+ public MeasurementNode(String name,
+ TSDataType dataType,
+ TSEncoding encoding,
+ CompressionType compressor);
+}
+```
+
+We strongly suggest you implement templates only with flat measurements (like the object 'flatTemplate' in the snippet below), since tree-structured templates may not be a long-term supported feature in future versions of IoTDB.
+
+A snippet using the above method and classes:
+
+``` java
+MeasurementNode nodeX = new MeasurementNode("x", TSDataType.FLOAT, TSEncoding.RLE, CompressionType.SNAPPY);
+MeasurementNode nodeY = new MeasurementNode("y", TSDataType.FLOAT, TSEncoding.RLE, CompressionType.SNAPPY);
+MeasurementNode nodeSpeed = new MeasurementNode("speed", TSDataType.DOUBLE, TSEncoding.GORILLA, CompressionType.SNAPPY);
+
+// This is the template we suggest to implement
+Template flatTemplate = new Template("flatTemplate");
+flatTemplate.addToTemplate(nodeX);
+flatTemplate.addToTemplate(nodeY);
+flatTemplate.addToTemplate(nodeSpeed);
+
+createSchemaTemplate(flatTemplate);
+```
+
+You can query measurements inside templates with these APIs:
+
+```java
+// Return the number of measurements inside a template
+public int countMeasurementsInTemplate(String templateName);
+
+// Return true if path points to a measurement, otherwise return false
+public boolean isMeasurementInTemplate(String templateName, String path);
+
+// Return true if path exists in template, otherwise return false
+public boolean isPathExistInTemplate(String templateName, String path);
+
+// Return all measurement paths inside template
+public List<String> showMeasurementsInTemplate(String templateName);
+
+// Return all measurement paths under the designated pattern inside template
+public List<String> showMeasurementsInTemplate(String templateName, String pattern);
+```
+
+To implement schema template, you can set the measurement template named 'templateName' at path 'prefixPath'.
+
+**Please notice that, we strongly recommend not setting templates on the nodes above the database to accommodate future updates and collaboration between modules.**
+
+``` java
+void setSchemaTemplate(String templateName, String prefixPath)
+```
+
+Before setting a template, you should first create it using
+
+``` java
+void createSchemaTemplate(Template template)
+```
+
+After setting the template to a certain path, you can use it to create timeseries on given device paths through the following interface, or you can write data directly to trigger auto-creation of timeseries using the schema template under the target devices.
+
+``` java
+void createTimeseriesUsingSchemaTemplate(List<String> devicePathList)
+```
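+
+Putting these steps together, a minimal sketch (reusing the 'flatTemplate' object from the snippet above; the paths are example values) could look like:
+
+``` java
+// create the template first, then set it, then activate it on concrete devices
+session.createSchemaTemplate(flatTemplate);
+session.setSchemaTemplate("flatTemplate", "root.sg1");
+session.createTimeseriesUsingSchemaTemplate(Collections.singletonList("root.sg1.d1"));
+```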
+
+After setting the template to a certain path, you can query template info using the interfaces below in a session:
+
+``` java
+/** @return All template names. */
+public List<String> showAllTemplates();
+
+/** @return All paths that have been set to the designated template. */
+public List<String> showPathsTemplateSetOn(String templateName);
+
+/** @return All paths that are using the designated template. */
+public List<String> showPathsTemplateUsingOn(String templateName)
+```
+
+If you are ready to get rid of a schema template, you can drop it with the interface below. Make sure the template has been unset from the MTree before dropping it.
+
+``` java
+void unsetSchemaTemplate(String prefixPath, String templateName);
+public void dropSchemaTemplate(String templateName);
+```
+
+Unset the measurement template named 'templateName' from path 'prefixPath'. When you issue this interface, you should ensure that there is a template named 'templateName' set at the path 'prefixPath'.
+
+Attention: Unsetting the template from a node at path 'prefixPath', or from descendant nodes that have already inserted records using the template, is **not supported**.
+
+
+### Data Manipulation Interface (DML Interface)
+
+### Data Insert API
+
+It is recommended to use insertTablet to help improve write efficiency.
+
+* Insert a Tablet, which is multiple rows of a device; each row has the same measurements
+  * **Better write performance**
+  * **Support batch write**
+  * **Support null values**: fill null positions with any value, and then mark them via the BitMap
+
+``` java
+void insertTablet(Tablet tablet)
+
+public class Tablet {
+ /** deviceId of this tablet */
+ public String prefixPath;
+ /** the list of measurement schemas for creating the tablet */
+ private List<MeasurementSchema> schemas;
+ /** timestamps in this tablet */
+ public long[] timestamps;
+ /** each object is a primitive type array, which represents values of one measurement */
+ public Object[] values;
+ /** each bitmap represents the existence of each value in the current column. */
+ public BitMap[] bitMaps;
+ /** the number of rows to include in this tablet */
+ public int rowSize;
+ /** the maximum number of rows for this tablet */
+ private int maxRowNumber;
+ /** whether this tablet store data of aligned timeseries or not */
+ private boolean isAligned;
+}
+```
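+
+A sketch of filling and inserting one Tablet (device and measurement names are example values, assuming the common `MeasurementSchema` type and the `addTimestamp`/`addValue` helpers):
+
+``` java
+List<MeasurementSchema> schemas = new ArrayList<>();
+schemas.add(new MeasurementSchema("s1", TSDataType.INT64));
+schemas.add(new MeasurementSchema("s2", TSDataType.DOUBLE));
+Tablet tablet = new Tablet("root.sg1.d1", schemas, 100);
+for (long time = 0; time < 100; time++) {
+  int row = tablet.rowSize++;
+  tablet.addTimestamp(row, time);
+  tablet.addValue("s1", row, time);
+  tablet.addValue("s2", row, 0.1 * time);
+}
+session.insertTablet(tablet);
+tablet.reset(); // clear the tablet for reuse
+```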
+
+* Insert multiple Tablets
+
+``` java
+void insertTablets(Map<String, Tablet> tablets)
+```
+
+* Insert a Record, which contains multiple measurement values of a device at a timestamp. This method is equivalent to providing a common interface for multiple data types of values. Later, the value can be cast to the original type through TSDataType.
+
+ The correspondence between the Object type and the TSDataType type is shown in the following table.
+
+ | TSDataType | Object |
+ |------------|--------------|
+ | BOOLEAN | Boolean |
+ | INT32 | Integer |
+ | DATE | LocalDate |
+ | INT64 | Long |
+ | TIMESTAMP | Long |
+ | FLOAT | Float |
+ | DOUBLE | Double |
+ | TEXT | String, Binary |
+ | STRING | String, Binary |
+ | BLOB | Binary |
+``` java
+void insertRecord(String deviceId, long time, List<String> measurements,
+    List<TSDataType> types, List<Object> values)
+```
+
+* Insert multiple Records
+
+``` java
+void insertRecords(List<String> deviceIds, List<Long> times,
+    List<List<String>> measurementsList, List<List<TSDataType>> typesList,
+    List<List<Object>> valuesList)
+```
+* Insert multiple Records that belong to the same device.
+  With type info, the server has no need to do type inference, which leads to better performance.
+
+``` java
+void insertRecordsOfOneDevice(String deviceId, List<Long> times,
+    List<List<String>> measurementsList, List<List<TSDataType>> typesList,
+    List<List<Object>> valuesList)
+```
+
+#### Insert with type inference
+
+When the data is of String type, we can use the following interfaces to perform type inference based on the value itself. For example, if the value is "true", it can be automatically inferred as a boolean; if the value is "3.2", it can be automatically inferred as a float. Without type information, the server has to infer the types, which may take extra time.
+
+* Insert a Record, which contains multiple measurement value of a device at a timestamp
+
+``` java
+void insertRecord(String prefixPath, long time, List<String> measurements, List<String> values)
+```
+
+* Insert multiple Records
+
+``` java
+void insertRecords(List<String> deviceIds, List<Long> times,
+    List<List<String>> measurementsList, List<List<String>> valuesList)
+```
+
+* Insert multiple Records that belong to the same device.
+
+``` java
+void insertStringRecordsOfOneDevice(String deviceId, List<Long> times,
+    List<List<String>> measurementsList, List<List<String>> valuesList)
+```
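+
+For instance, the single-record variant can be called with plain string values and the server infers the types (a sketch; the path and values are example values):
+
+``` java
+// "true" is inferred as a boolean, "3.2" as a float
+session.insertRecord(
+    "root.sg1.d1",
+    1L,
+    Arrays.asList("s1", "s2"),
+    Arrays.asList("true", "3.2"));
+```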
+
+#### Insert of Aligned Timeseries
+
+Inserting into aligned timeseries uses the insertAlignedXXX interfaces; they are otherwise similar to the interfaces above:
+
+* insertAlignedRecord
+* insertAlignedRecords
+* insertAlignedRecordsOfOneDevice
+* insertAlignedStringRecordsOfOneDevice
+* insertAlignedTablet
+* insertAlignedTablets
+
+### Data Delete API
+
+* Delete data before or equal to a timestamp of one or several timeseries
+
+``` java
+void deleteData(String path, long time)
+void deleteData(List<String> paths, long time)
+```
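+
+For example (a sketch; the paths and timestamp are example values):
+
+``` java
+// remove all points of the two series whose timestamps are <= 99
+session.deleteData(Arrays.asList("root.sg1.d1.s1", "root.sg1.d1.s2"), 99L);
+```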
+
+### Data Query API
+
+* Time-series raw data query with time range:
+ - The specified query time range is a left-closed right-open interval, including the start time but excluding the end time.
+
+``` java
+SessionDataSet executeRawDataQuery(List<String> paths, long startTime, long endTime);
+```
+
+* Last query:
+  - Query the last data whose timestamp is greater than or equal to lastTime.
+  ``` java
+  SessionDataSet executeLastDataQuery(List<String> paths, long lastTime);
+  ```
+  - Quickly query the latest point of the specified series of a single device, with support for redirection.
+    If you are sure that the query path is valid, set 'isLegalPathNodes' to true to avoid performance penalties from path verification.
+  ``` java
+  SessionDataSet executeLastDataQueryForOneDevice(
+      String db, String device, List<String> sensors, boolean isLegalPathNodes);
+  ```
+
+* Aggregation query:
+ - Support specified query time range: The specified query time range is a left-closed right-open interval, including the start time but not the end time.
+ - Support GROUP BY TIME.
+
+``` java
+SessionDataSet executeAggregationQuery(List<String> paths, List<TAggregationType> aggregations);
+
+SessionDataSet executeAggregationQuery(
+    List<String> paths, List<TAggregationType> aggregations, long startTime, long endTime);
+
+SessionDataSet executeAggregationQuery(
+    List<String> paths,
+    List<TAggregationType> aggregations,
+    long startTime,
+    long endTime,
+    long interval);
+
+SessionDataSet executeAggregationQuery(
+    List<String> paths,
+    List<TAggregationType> aggregations,
+    long startTime,
+    long endTime,
+    long interval,
+    long slidingStep);
+```
+
+* Execute query statement
+
+``` java
+SessionDataSet executeQueryStatement(String sql)
+```
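+
+A typical pattern for consuming the returned `SessionDataSet` (a sketch; the SQL is an example value):
+
+``` java
+SessionDataSet dataSet = session.executeQueryStatement("select s1 from root.sg1.d1");
+dataSet.setFetchSize(1024); // optional: tune how many rows are fetched per RPC
+while (dataSet.hasNext()) {
+  System.out.println(dataSet.next());
+}
+dataSet.closeOperationHandle();
+```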
+
+### Data Subscription
+
+#### 1 Topic Management
+
+The `SubscriptionSession` class in the IoTDB subscription client provides interfaces for topic management. The status changes of topics are illustrated in the diagram below:
+
+
+
+
+
+##### 1.1 Create Topic
+
+```Java
+ void createTopicIfNotExists(String topicName, Properties properties) throws Exception;
+```
+
+Example:
+
+```Java
+try (final SubscriptionSession session = new SubscriptionSession(host, port)) {
+ session.open();
+ final Properties config = new Properties();
+ config.put(TopicConstant.PATH_KEY, "root.db.**");
+ session.createTopic(topicName, config);
+}
+```
+
+##### 1.2 Delete Topic
+
+```Java
+void dropTopicIfExists(String topicName) throws Exception;
+```
+
+##### 1.3 View Topic
+
+```Java
+// Get all topics
+Set<Topic> getTopics() throws Exception;
+
+// Get a specific topic
+Optional<Topic> getTopic(String topicName) throws Exception;
+```
+
+#### 2 Check Subscription Status
+The `SubscriptionSession` class in the IoTDB subscription client provides interfaces to check the subscription status:
+
+```Java
+Set<Subscription> getSubscriptions() throws Exception;
+Set<Subscription> getSubscriptions(final String topicName) throws Exception;
+```
+
+#### 3 Create Consumer
+
+When creating a consumer using the JAVA native interface, you need to specify the parameters applied to the consumer.
+
+For both `SubscriptionPullConsumer` and `SubscriptionPushConsumer`, the following common configurations are available:
+
+
+| key | **required or optional with default** | description |
+| :---------------------- | :----------------------------------------------------------- | :----------------------------------------------------------- |
+| host | optional: 127.0.0.1 | `String`: The RPC host of a certain DataNode in IoTDB |
+| port | optional: 6667 | `Integer`: The RPC port of a certain DataNode in IoTDB |
+| node-urls | optional: 127.0.0.1:6667 | `List<String>`: The RPC addresses of all DataNodes in IoTDB; multiple addresses are allowed. Either host:port or node-urls can be filled in. If both are filled in, their union will be used as the final node-urls |
+| username | optional: root | `String`: The username of a DataNode in IoTDB |
+| password | optional: root | `String`: The password of a DataNode in IoTDB |
+| groupId | optional | `String`: consumer group id, if not specified, a new consumer group will be randomly assigned, ensuring that different consumer groups have different consumer group ids |
+| consumerId | optional | `String`: consumer client id, if not specified, it will be randomly assigned, ensuring that each consumer client id in the same consumer group is unique |
+| heartbeatIntervalMs | optional: 30000 (min: 1000) | `Long`: The interval at which the consumer sends heartbeat requests to the IoTDB DataNode |
+| endpointsSyncIntervalMs | optional: 120000 (min: 5000) | `Long`: The interval at which the consumer detects the expansion and contraction of IoTDB cluster nodes and adjusts the subscription connection |
+| fileSaveDir | optional: Paths.get(System.getProperty("user.dir"), "iotdb-subscription").toString() | `String`: The temporary directory path where the TsFile files subscribed by the consumer are stored |
+| fileSaveFsync | optional: false | `Boolean`: Whether the consumer actively calls fsync during the subscription of TsFile |
+
+
+##### 3.1 SubscriptionPushConsumer
+
+The following are special configurations for `SubscriptionPushConsumer`:
+
+
+| key | **required or optional with default** | description |
+| :----------------- | :------------------------------------ | :----------------------------------------------------------- |
+| ackStrategy | optional: `ACKStrategy.AFTER_CONSUME` | Consumption progress confirmation mechanism includes the following options: `ACKStrategy.BEFORE_CONSUME` (submit consumption progress immediately when the consumer receives data, before `onReceive`) `ACKStrategy.AFTER_CONSUME` (submit consumption progress after the consumer has consumed the data, after `onReceive`) |
+| consumeListener | optional | Consumption data callback function; you need to implement the `ConsumeListener` interface and define the consumption logic for data in `SessionDataSetsHandler` and `TsFileHandler` form |
+| autoPollIntervalMs | optional: 5000 (min: 500) | Long: The interval at which the consumer automatically pulls data, in ms |
+| autoPollTimeoutMs | optional: 10000 (min: 1000) | Long: The timeout time for the consumer to pull data each time, in ms |
+
+Among them, the `ConsumeListener` interface is defined as follows:
+
+
+```Java
+@FunctionalInterface
+interface ConsumeListener {
+ default ConsumeResult onReceive(Message message) {
+ return ConsumeResult.SUCCESS;
+ }
+}
+
+enum ConsumeResult {
+ SUCCESS,
+ FAILURE,
+}
+```
+
+##### 3.2 SubscriptionPullConsumer
+
+The following are special configurations for `SubscriptionPullConsumer` :
+
+| key | **required or optional with default** | description |
+| :----------------- | :------------------------------------ | :----------------------------------------------------------- |
+| autoCommit | optional: true | Boolean: Whether to automatically commit consumption progress. If this parameter is set to false, the commit method must be called to manually `commit` consumption progress. |
+| autoCommitInterval | optional: 5000 (min: 500) | Long: The interval at which consumption progress is automatically committed, in milliseconds. This only takes effect when the autoCommit parameter is true. |
+
+After creating a consumer, you need to manually call the consumer's open method:
+
+
+```Java
+void open() throws Exception;
+```
+
+At this point, the IoTDB subscription client will verify the correctness of the consumer's configuration. After a successful verification, the consumer will join the corresponding consumer group. That is, only after opening the consumer can you use the returned consumer object to subscribe to topics, consume data, and perform other operations.
+
+#### 4 Subscribe to Topics
+
+Both `SubscriptionPushConsumer` and `SubscriptionPullConsumer` provide the following JAVA native interfaces for subscribing to topics:
+
+```Java
+// Subscribe to topics
+void subscribe(String topic) throws Exception;
+void subscribe(List<String> topics) throws Exception;
+```
+
+- Before a consumer subscribes to a topic, the topic must have been created, otherwise, the subscription will fail.
+
+- If a consumer subscribes to a topic that it has already subscribed to, no error will occur.
+
+- If there are other consumers in the same consumer group that have subscribed to the same topics, the consumer will reuse the corresponding consumption progress.
+
+
+#### 5 Consume Data
+
+For both push and pull mode consumers:
+
+
+- Only after explicitly subscribing to a topic will the consumer receive data for that topic.
+
+- If no topics are subscribed to after creation, the consumer will not be able to consume any data, even if other consumers in the same consumer group have subscribed to some topics.
+
+##### 5.1 SubscriptionPushConsumer
+
+After `SubscriptionPushConsumer` subscribes to topics, there is no need to manually pull data.
+
+The data consumption logic is within the `consumeListener` configuration specified when creating `SubscriptionPushConsumer`.
+
+##### 5.2 SubscriptionPullConsumer
+
+After `SubscriptionPullConsumer` subscribes to topics, it needs to actively call the poll method to pull data:
+
+```Java
+List<SubscriptionMessage> poll(final Duration timeout) throws Exception;
+List<SubscriptionMessage> poll(final long timeoutMs) throws Exception;
+List<SubscriptionMessage> poll(final Set<String> topicNames, final Duration timeout) throws Exception;
+List<SubscriptionMessage> poll(final Set<String> topicNames, final long timeoutMs) throws Exception;
+```
+
+In the poll method, you can specify the topic names to be pulled (if not specified, it defaults to pulling all topics that the consumer has subscribed to) and the timeout period.
+
+
+When the SubscriptionPullConsumer is configured with the autoCommit parameter set to false, it is necessary to manually call the `commitSync` or `commitAsync` methods to synchronously or asynchronously commit the consumption progress of a batch of data:
+
+
+```Java
+void commitSync(final SubscriptionMessage message) throws Exception;
+void commitSync(final Iterable<SubscriptionMessage> messages) throws Exception;
+
+CompletableFuture<Void> commitAsync(final SubscriptionMessage message);
+CompletableFuture<Void> commitAsync(final Iterable<SubscriptionMessage> messages);
+void commitAsync(final SubscriptionMessage message, final AsyncCommitCallback callback);
+void commitAsync(final Iterable<SubscriptionMessage> messages, final AsyncCommitCallback callback);
+```
+
+The AsyncCommitCallback class is defined as follows:
+
+```Java
+public interface AsyncCommitCallback {
+ default void onComplete() {
+ // Do nothing
+ }
+
+ default void onFailure(final Throwable e) {
+ // Do nothing
+ }
+}
+```
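+
+For example, an asynchronous commit with a callback might look like this (a sketch; `consumer` and `messages` are assumed to come from the polling code described above):
+
+```Java
+consumer.commitAsync(
+    messages,
+    new AsyncCommitCallback() {
+      @Override
+      public void onComplete() {
+        System.out.println("commit succeeded");
+      }
+
+      @Override
+      public void onFailure(final Throwable e) {
+        System.err.println("commit failed: " + e.getMessage());
+      }
+    });
+```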
+
+#### 6 Unsubscribe
+
+The `SubscriptionPushConsumer` and `SubscriptionPullConsumer` provide the following JAVA native interfaces for unsubscribing and closing the consumer:
+
+```Java
+// Unsubscribe from topics
+void unsubscribe(String topic) throws Exception;
+void unsubscribe(List<String> topics) throws Exception;
+
+// Close consumer
+void close();
+```
+
+- If a consumer unsubscribes from a topic that it has not subscribed to, no error will occur.
+- When a consumer is closed, it will exit the corresponding consumer group and automatically unsubscribe from all topics it is currently subscribed to.
+- Once a consumer is closed, its lifecycle ends, and it cannot be reopened to subscribe to and consume data again.
+
+
+#### 7 Code Examples
+
+##### 7.1 Single Pull Consumer Consuming SessionDataSetsHandler Format Data
+
+```Java
+// Create topics
+try (final SubscriptionSession session = new SubscriptionSession(HOST, PORT)) {
+ session.open();
+ final Properties config = new Properties();
+ config.put(TopicConstant.PATH_KEY, "root.db.**");
+ session.createTopic(TOPIC_1, config);
+}
+
+// Subscription: property-style ctor
+final Properties config = new Properties();
+config.put(ConsumerConstant.CONSUMER_ID_KEY, "c1");
+config.put(ConsumerConstant.CONSUMER_GROUP_ID_KEY, "cg1");
+
+final SubscriptionPullConsumer consumer1 = new SubscriptionPullConsumer(config);
+consumer1.open();
+consumer1.subscribe(TOPIC_1);
+while (true) {
+ LockSupport.parkNanos(SLEEP_NS); // wait some time
+ final List<SubscriptionMessage> messages = consumer1.poll(POLL_TIMEOUT_MS);
+ for (final SubscriptionMessage message : messages) {
+ for (final SubscriptionSessionDataSet dataSet : message.getSessionDataSetsHandler()) {
+ System.out.println(dataSet.getColumnNames());
+ System.out.println(dataSet.getColumnTypes());
+ while (dataSet.hasNext()) {
+ System.out.println(dataSet.next());
+ }
+ }
+ }
+ // Auto commit
+}
+
+// Show topics and subscriptions
+try (final SubscriptionSession session = new SubscriptionSession(HOST, PORT)) {
+ session.open();
+ session.getTopics().forEach((System.out::println));
+ session.getSubscriptions().forEach((System.out::println));
+}
+
+consumer1.unsubscribe(TOPIC_1);
+consumer1.close();
+```
+
+##### 7.2 Multiple Push Consumers Consuming TsFileHandler Format Data
+
+```Java
+// Create topics
+try (final SubscriptionSession subscriptionSession = new SubscriptionSession(HOST, PORT)) {
+ subscriptionSession.open();
+ final Properties config = new Properties();
+ config.put(TopicConstant.FORMAT_KEY, TopicConstant.FORMAT_TS_FILE_HANDLER_VALUE);
+ subscriptionSession.createTopic(TOPIC_2, config);
+}
+
+final List<Thread> threads = new ArrayList<>();
+for (int i = 0; i < 8; ++i) {
+ final int idx = i;
+ final Thread thread =
+ new Thread(
+ () -> {
+ // Subscription: builder-style ctor
+ try (final SubscriptionPushConsumer consumer2 =
+ new SubscriptionPushConsumer.Builder()
+ .consumerId("c" + idx)
+ .consumerGroupId("cg2")
+ .fileSaveDir(System.getProperty("java.io.tmpdir"))
+ .ackStrategy(AckStrategy.AFTER_CONSUME)
+ .consumeListener(
+ message -> {
+ doSomething(message.getTsFileHandler());
+ return ConsumeResult.SUCCESS;
+ })
+ .buildPushConsumer()) {
+ consumer2.open();
+ consumer2.subscribe(TOPIC_2);
+ // block the consumer main thread
+ Thread.sleep(Long.MAX_VALUE);
+ } catch (final IOException | InterruptedException e) {
+ throw new RuntimeException(e);
+ }
+ });
+ thread.start();
+ threads.add(thread);
+}
+
+for (final Thread thread : threads) {
+ thread.join();
+}
+```
+
+### Other Modules (Execute SQL Directly)
+
+* Execute non query statement
+
+``` java
+void executeNonQueryStatement(String sql)
+```
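+
+For example (a sketch; the statement is an example value):
+
+``` java
+session.executeNonQueryStatement("CREATE DATABASE root.sg1");
+```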
+
+
+### Write Test Interface (to profile network cost)
+
+These methods **don't** insert data into the database; the server returns immediately after accepting the request.
+
+* Test the network and client cost of insertRecord
+
+``` java
+void testInsertRecord(String deviceId, long time, List<String> measurements, List<String> values)
+
+void testInsertRecord(String deviceId, long time, List<String> measurements,
+    List<TSDataType> types, List<Object> values)
+```
+
+* Test the network and client cost of insertRecords
+
+``` java
+void testInsertRecords(List<String> deviceIds, List<Long> times,
+    List<List<String>> measurementsList, List<List<String>> valuesList)
+
+void testInsertRecords(List<String> deviceIds, List<Long> times,
+    List<List<String>> measurementsList, List<List<TSDataType>> typesList,
+    List<List<Object>> valuesList)
+```
+
+* Test the network and client cost of insertTablet
+
+``` java
+void testInsertTablet(Tablet tablet)
+```
+
+* Test the network and client cost of insertTablets
+
+``` java
+void testInsertTablets(Map<String, Tablet> tablets)
+```
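+
+A sketch of using one of these test variants to profile the request path (reusing a filled `tablet`; nothing is persisted):
+
+``` java
+// exercises serialization and the network round trip only
+session.testInsertTablet(tablet);
+```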
+
+### Coding Examples
+
+To get more information about the following interfaces, please see `session/src/main/java/org/apache/iotdb/session/Session.java`.
+
+The sample code using these interfaces is in `example/session/src/main/java/org/apache/iotdb/SessionExample.java`, which provides an example of how to open an IoTDB session and execute a batch insertion.
+
+For examples of aligned timeseries and measurement template, you can refer to `example/session/src/main/java/org/apache/iotdb/AlignedTimeseriesSessionExample.java`
\ No newline at end of file
diff --git a/src/UserGuide/V2.0.1/Tree/API/Programming-Kafka.md b/src/UserGuide/V2.0.1/Tree/API/Programming-Kafka.md
new file mode 100644
index 000000000..0a041448f
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/Programming-Kafka.md
@@ -0,0 +1,118 @@
+
+
+# Kafka
+
+[Apache Kafka](https://kafka.apache.org/) is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.
+
+## Coding Example
+
+### Kafka Producer Producing Data: Java Code Example
+
+```java
+ Properties props = new Properties();
+ props.put("bootstrap.servers", "127.0.0.1:9092");
+ props.put("key.serializer", StringSerializer.class);
+ props.put("value.serializer", StringSerializer.class);
+ KafkaProducer<String, String> producer = new KafkaProducer<>(props);
+ producer.send(
+ new ProducerRecord<>(
+ "Kafka-Test", "key", "root.kafka," + System.currentTimeMillis() + ",value,INT32,100"));
+ producer.close();
+```
+
+### Kafka Consumer Receiving Data: Java Code Example
+
+```java
+ Properties props = new Properties();
+ props.put("bootstrap.servers", "127.0.0.1:9092");
+ props.put("key.deserializer", StringDeserializer.class);
+ props.put("value.deserializer", StringDeserializer.class);
+ props.put("auto.offset.reset", "earliest");
+ props.put("group.id", "Kafka-Test");
+ KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(props);
+ kafkaConsumer.subscribe(Collections.singleton("Kafka-Test"));
+ ConsumerRecords<String, String> records = kafkaConsumer.poll(Duration.ofSeconds(1));
+ ```
+
+### Java Code Example for Storing Data into the IoTDB Server
+
+```java
+ SessionPool pool =
+ new SessionPool.Builder()
+ .host("127.0.0.1")
+ .port(6667)
+ .user("root")
+ .password("root")
+ .maxSize(3)
+ .build();
+ List<String> datas = new ArrayList<>(records.count());
+ for (ConsumerRecord<String, String> record : records) {
+ datas.add(record.value());
+ }
+ int size = datas.size();
+ List<String> deviceIds = new ArrayList<>(size);
+ List<Long> times = new ArrayList<>(size);
+ List<List<String>> measurementsList = new ArrayList<>(size);
+ List<List<TSDataType>> typesList = new ArrayList<>(size);
+ List<List<Object>> valuesList = new ArrayList<>(size);
+ for (String data : datas) {
+ String[] dataArray = data.split(",");
+ String device = dataArray[0];
+ long time = Long.parseLong(dataArray[1]);
+ List<String> measurements = Arrays.asList(dataArray[2].split(":"));
+ List<TSDataType> types = new ArrayList<>();
+ for (String type : dataArray[3].split(":")) {
+ types.add(TSDataType.valueOf(type));
+ }
+ List<Object> values = new ArrayList<>();
+ String[] valuesStr = dataArray[4].split(":");
+ for (int i = 0; i < valuesStr.length; i++) {
+ switch (types.get(i)) {
+ case INT64:
+ values.add(Long.parseLong(valuesStr[i]));
+ break;
+ case DOUBLE:
+ values.add(Double.parseDouble(valuesStr[i]));
+ break;
+ case INT32:
+ values.add(Integer.parseInt(valuesStr[i]));
+ break;
+ case TEXT:
+ values.add(valuesStr[i]);
+ break;
+ case FLOAT:
+ values.add(Float.parseFloat(valuesStr[i]));
+ break;
+ case BOOLEAN:
+ values.add(Boolean.parseBoolean(valuesStr[i]));
+ break;
+ }
+ }
+ deviceIds.add(device);
+ times.add(time);
+ measurementsList.add(measurements);
+ typesList.add(types);
+ valuesList.add(values);
+ }
+ pool.insertRecords(deviceIds, times, measurementsList, typesList, valuesList);
+ ```
+
diff --git a/src/UserGuide/V2.0.1/Tree/API/Programming-MQTT.md b/src/UserGuide/V2.0.1/Tree/API/Programming-MQTT.md
new file mode 100644
index 000000000..5bbb610cf
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/Programming-MQTT.md
@@ -0,0 +1,183 @@
+
+# MQTT Protocol
+
+[MQTT](http://mqtt.org/) is a machine-to-machine (M2M)/"Internet of Things" connectivity protocol.
+It was designed as an extremely lightweight publish/subscribe messaging transport.
+It is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium.
+
+IoTDB supports the MQTT v3.1 (an OASIS standard) protocol.
+The IoTDB server includes a built-in MQTT service that allows remote devices to send messages directly into the IoTDB server.
+
+
+
+
+## Built-in MQTT Service
+The built-in MQTT service provides the ability to connect to IoTDB directly through MQTT. It listens for publish messages from MQTT clients
+ and then writes the data into storage immediately.
+The MQTT topic corresponds to an IoTDB timeseries.
+The message payload can be formatted into events by a `PayloadFormatter`, which is loaded via Java SPI; the default implementation is `JSONPayloadFormatter`.
+The default `json` formatter supports two JSON formats and their JSON arrays. The following is an MQTT message payload example:
+
+```json
+ {
+ "device":"root.sg.d1",
+ "timestamp":1586076045524,
+ "measurements":["s1","s2"],
+ "values":[0.530635,0.530635]
+ }
+```
+or
+```json
+ {
+ "device":"root.sg.d1",
+ "timestamps":[1586076045524,1586076065526],
+ "measurements":["s1","s2"],
+ "values":[[0.530635,0.530635], [0.530655,0.530695]]
+ }
+```
+or a JSON array of the above two.
+
+
+
+## MQTT Configurations
+The IoTDB MQTT service loads its configuration from `${IOTDB_HOME}/${IOTDB_CONF}/iotdb-system.properties` by default.
+
+Configurations are as follows:
+
+| NAME | DESCRIPTION | DEFAULT |
+| ------------- |:-------------:|:------:|
+| enable_mqtt_service | whether to enable the mqtt service | false |
+| mqtt_host | the mqtt service binding host | 127.0.0.1 |
+| mqtt_port | the mqtt service binding port | 1883 |
+| mqtt_handler_pool_size | the handler pool size for handling the mqtt messages | 1 |
+| mqtt_payload_formatter | the mqtt message payload formatter | json |
+| mqtt_max_message_size | the max mqtt message size in byte| 1048576 |
+
+
+## Coding Examples
+The following is an example in which an MQTT client sends messages to the IoTDB server.
+
+```java
+MQTT mqtt = new MQTT();
+mqtt.setHost("127.0.0.1", 1883);
+mqtt.setUserName("root");
+mqtt.setPassword("root");
+
+BlockingConnection connection = mqtt.blockingConnection();
+connection.connect();
+
+Random random = new Random();
+for (int i = 0; i < 10; i++) {
+ String payload = String.format("{\n" +
+ "\"device\":\"root.sg.d1\",\n" +
+ "\"timestamp\":%d,\n" +
+ "\"measurements\":[\"s1\"],\n" +
+ "\"values\":[%f]\n" +
+ "}", System.currentTimeMillis(), random.nextDouble());
+
+ connection.publish("root.sg.d1.s1", payload.getBytes(), QoS.AT_LEAST_ONCE, false);
+}
+
+connection.disconnect();
+
+```
+
+## Customize your MQTT Message Format
+
+If you do not like the above JSON format, you can customize your MQTT message format by writing just a few lines
+of code. An example can be found in the `example/mqtt-customize` project.
+
+Steps:
+1. Create a Java project and add the dependency:
+```xml
+<dependency>
+    <groupId>org.apache.iotdb</groupId>
+    <artifactId>iotdb-server</artifactId>
+    <version>1.1.0-SNAPSHOT</version>
+</dependency>
+```
+2. Define your implementation, which implements `org.apache.iotdb.db.protocol.mqtt.PayloadFormatter`,
+e.g.:
+
+```java
+package org.apache.iotdb.mqtt.server;
+
+import io.netty.buffer.ByteBuf;
+import org.apache.iotdb.db.protocol.mqtt.Message;
+import org.apache.iotdb.db.protocol.mqtt.PayloadFormatter;
+
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+public class CustomizedJsonPayloadFormatter implements PayloadFormatter {
+
+ @Override
+ public List<Message> format(ByteBuf payload) {
+ // Suppose the payload is a json format
+ if (payload == null) {
+ return null;
+ }
+
+ String json = payload.toString(StandardCharsets.UTF_8);
+ // parse data from the json and generate Messages and put them into List ret
+ List<Message> ret = new ArrayList<>();
+ // this is just an example, so we just generate some Messages directly
+ for (int i = 0; i < 2; i++) {
+ long ts = i;
+ Message message = new Message();
+ message.setDevice("d" + i);
+ message.setTimestamp(ts);
+ message.setMeasurements(Arrays.asList("s1", "s2"));
+ message.setValues(Arrays.asList("4.0" + i, "5.0" + i));
+ ret.add(message);
+ }
+ return ret;
+ }
+
+ @Override
+ public String getName() {
+ // set the value of mqtt_payload_formatter in iotdb-system.properties as the following string:
+ return "CustomizedJson";
+ }
+}
+```
+3. Modify the file `src/main/resources/META-INF/services/org.apache.iotdb.db.protocol.mqtt.PayloadFormatter`:
+  clear the file and put your implementation class name into it.
+  In this example, the content is: `org.apache.iotdb.mqtt.server.CustomizedJsonPayloadFormatter`
+4. compile your implementation as a jar file: `mvn package -DskipTests`
+
+
+Then, in your server:
+1. Create the `${IOTDB_HOME}/ext/mqtt/` folder, and put the jar into this folder.
+2. Update the configuration to enable the MQTT service (`enable_mqtt_service=true` in `conf/iotdb-system.properties`).
+3. Set the value of `mqtt_payload_formatter` in `conf/iotdb-system.properties` to the value returned by `getName()` in your implementation;
+   in this example, the value is `CustomizedJson` (see the snippet after this list).
+4. Launch the IoTDB server.
+5. Now IoTDB will use your implementation to parse the MQTT messages.
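+
+For reference, the property entries from steps 2 and 3 would look like this in `conf/iotdb-system.properties`:
+
+```properties
+enable_mqtt_service=true
+mqtt_payload_formatter=CustomizedJson
+```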
+
+More: the message format can be anything you want. For example, if it is a binary format,
+just use `payload.forEachByte()` or `payload.array()` to get the byte content.
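+
+For instance, a minimal sketch of handling a binary payload inside `format(ByteBuf payload)`, assuming a hypothetical layout of an 8-byte timestamp followed by an 8-byte double value:
+
+```java
+// copy all readable bytes without moving the reader index
+byte[] bytes = new byte[payload.readableBytes()];
+payload.getBytes(payload.readerIndex(), bytes);
+
+// or consume the fields directly from the buffer
+long timestamp = payload.readLong();  // assumed: first 8 bytes are the timestamp
+double value = payload.readDouble();  // assumed: next 8 bytes are the value
+```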
+
+
+
diff --git a/src/UserGuide/V2.0.1/Tree/API/Programming-NodeJS-Native-API.md b/src/UserGuide/V2.0.1/Tree/API/Programming-NodeJS-Native-API.md
new file mode 100644
index 000000000..35c7964cd
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/Programming-NodeJS-Native-API.md
@@ -0,0 +1,181 @@
+
+
+# Node.js Native API
+
+Apache IoTDB uses Thrift as a cross-language RPC-framework so access to IoTDB can be achieved through the interfaces provided by Thrift.
+This document will introduce how to generate a native Node.js interface that can be used to access IoTDB.
+
+## Dependencies
+
+ * JDK >= 1.8
+ * Node.js >= 16.0.0
+ * Linux, macOS, or another Unix-like system
+ * Windows with bash
+
+## Generate the Node.js native interface
+
+1. Find the `pom.xml` file in the root directory of the IoTDB source code folder.
+2. Open the `pom.xml` file and find the following content:
+ ```xml
+   <execution>
+     <id>generate-thrift-sources-python</id>
+     <phase>generate-sources</phase>
+     <goals>
+       <goal>compile</goal>
+     </goals>
+     <configuration>
+       <generator>py</generator>
+       <outputDirectory>${project.build.directory}/generated-sources-python/</outputDirectory>
+     </configuration>
+   </execution>
+ ```
+3. Duplicate this block and change the `id`, `generator` and `outputDirectory` to this:
+ ```xml
+   <execution>
+     <id>generate-thrift-sources-nodejs</id>
+     <phase>generate-sources</phase>
+     <goals>
+       <goal>compile</goal>
+     </goals>
+     <configuration>
+       <generator>js:node</generator>
+       <outputDirectory>${project.build.directory}/generated-sources-nodejs/</outputDirectory>
+     </configuration>
+   </execution>
+ ```
+4. In the root directory of the IoTDB source code folder, run `mvn clean generate-sources`.
+
+This command will automatically delete the files in `iotdb/iotdb-protocol/thrift/target` and `iotdb/iotdb-protocol/thrift-commons/target`, and repopulate those folders with the newly generated files.
+The newly generated JavaScript sources will be located under `iotdb/iotdb-protocol/thrift/target/generated-sources-nodejs` in the various modules of `iotdb-protocol`.
+
+## Using the Node.js native interface
+
+Simply copy the files in `iotdb/iotdb-protocol/thrift/target/generated-sources-nodejs/` and `iotdb/iotdb-protocol/thrift-commons/target/generated-sources-nodejs/` into your project.
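+
+As a minimal sketch of what using the generated sources might look like (the generated file names, the service name `IClientRPCService`, and the request type used below are assumptions; adjust them to whatever your generated sources actually expose):
+
+```javascript
+const thrift = require('thrift');
+// assumed names of the generated modules copied into your project
+const IClientRPCService = require('./IClientRPCService');
+const clientTypes = require('./client_types');
+
+const connection = thrift.createConnection('127.0.0.1', 6667, {
+  transport: thrift.TFramedTransport,
+  protocol: thrift.TBinaryProtocol,
+});
+const client = thrift.createClient(IClientRPCService, connection);
+
+// open a session using the generated request type
+const req = new clientTypes.TSOpenSessionReq({ username: 'root', password: 'root' });
+client.openSession(req, (err, resp) => {
+  if (err) throw err;
+  console.log('session opened, id =', resp.sessionId);
+  connection.end();
+});
+```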
+
+## RPC interface
+
+```
+// open a session
+TSOpenSessionResp openSession(1:TSOpenSessionReq req);
+
+// close a session
+TSStatus closeSession(1:TSCloseSessionReq req);
+
+// run an SQL statement in batch
+TSExecuteStatementResp executeStatement(1:TSExecuteStatementReq req);
+
+// execute SQL statement in batch
+TSStatus executeBatchStatement(1:TSExecuteBatchStatementReq req);
+
+// execute query SQL statement
+TSExecuteStatementResp executeQueryStatement(1:TSExecuteStatementReq req);
+
+// execute insert, delete and update SQL statement
+TSExecuteStatementResp executeUpdateStatement(1:TSExecuteStatementReq req);
+
+// fetch next query result
+TSFetchResultsResp fetchResults(1:TSFetchResultsReq req)
+
+// fetch meta data
+TSFetchMetadataResp fetchMetadata(1:TSFetchMetadataReq req)
+
+// cancel a query
+TSStatus cancelOperation(1:TSCancelOperationReq req);
+
+// close a query dataset
+TSStatus closeOperation(1:TSCloseOperationReq req);
+
+// get time zone
+TSGetTimeZoneResp getTimeZone(1:i64 sessionId);
+
+// set time zone
+TSStatus setTimeZone(1:TSSetTimeZoneReq req);
+
+// get server's properties
+ServerProperties getProperties();
+
+// CREATE DATABASE
+TSStatus setStorageGroup(1:i64 sessionId, 2:string storageGroup);
+
+// create timeseries
+TSStatus createTimeseries(1:TSCreateTimeseriesReq req);
+
+// create multi timeseries
+TSStatus createMultiTimeseries(1:TSCreateMultiTimeseriesReq req);
+
+// delete timeseries
+TSStatus deleteTimeseries(1:i64 sessionId, 2:list<string> path)
+
+// delete storage groups
+TSStatus deleteStorageGroups(1:i64 sessionId, 2:list<string> storageGroup);
+
+// insert record
+TSStatus insertRecord(1:TSInsertRecordReq req);
+
+// insert record in string format
+TSStatus insertStringRecord(1:TSInsertStringRecordReq req);
+
+// insert tablet
+TSStatus insertTablet(1:TSInsertTabletReq req);
+
+// insert tablets in batch
+TSStatus insertTablets(1:TSInsertTabletsReq req);
+
+// insert records in batch
+TSStatus insertRecords(1:TSInsertRecordsReq req);
+
+// insert records of one device
+TSStatus insertRecordsOfOneDevice(1:TSInsertRecordsOfOneDeviceReq req);
+
+// insert records in batch as string format
+TSStatus insertStringRecords(1:TSInsertStringRecordsReq req);
+
+// test the latency of insert tablet; caution: no data will be inserted, only for testing latency
+TSStatus testInsertTablet(1:TSInsertTabletReq req);
+
+// test the latency of insert tablets; caution: no data will be inserted, only for testing latency
+TSStatus testInsertTablets(1:TSInsertTabletsReq req);
+
+// test the latency of insert record; caution: no data will be inserted, only for testing latency
+TSStatus testInsertRecord(1:TSInsertRecordReq req);
+
+// test the latency of insert record in string format; caution: no data will be inserted, only for testing latency
+TSStatus testInsertStringRecord(1:TSInsertStringRecordReq req);
+
+// test the latency of insert records; caution: no data will be inserted, only for testing latency
+TSStatus testInsertRecords(1:TSInsertRecordsReq req);
+
+// test the latency of insert records of one device; caution: no data will be inserted, only for testing latency
+TSStatus testInsertRecordsOfOneDevice(1:TSInsertRecordsOfOneDeviceReq req);
+
+// test the latency of insert records in string format; caution: no data will be inserted, only for testing latency
+TSStatus testInsertStringRecords(1:TSInsertStringRecordsReq req);
+
+// delete data
+TSStatus deleteData(1:TSDeleteDataReq req);
+
+// execute raw data query
+TSExecuteStatementResp executeRawDataQuery(1:TSRawDataQueryReq req);
+
+// request a statement id from server
+i64 requestStatementId(1:i64 sessionId);
+```
\ No newline at end of file
diff --git a/src/UserGuide/V2.0.1/Tree/API/Programming-ODBC.md b/src/UserGuide/V2.0.1/Tree/API/Programming-ODBC.md
new file mode 100644
index 000000000..8e0d74852
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/Programming-ODBC.md
@@ -0,0 +1,146 @@
+
+
+# ODBC
+With IoTDB JDBC, IoTDB can be accessed using the ODBC-JDBC bridge.
+
+## Dependencies
+* IoTDB-JDBC's jar-with-dependency package
+* ODBC-JDBC bridge (e.g. ZappySys JDBC Bridge)
+
+## Deployment
+### Preparing JDBC package
+Download the source code of IoTDB, and execute the following command in root directory:
+```shell
+mvn clean package -pl iotdb-client/jdbc -am -DskipTests -P get-jar-with-dependencies
+```
+Then, you can see the output `iotdb-jdbc-1.3.2-SNAPSHOT-jar-with-dependencies.jar` under `iotdb-client/jdbc/target` directory.
+
+### Preparing ODBC-JDBC Bridge
+*Note: Here we only present one ODBC-JDBC bridge as an example. Readers can use other ODBC-JDBC bridges to access IoTDB with the IoTDB-JDBC.*
+1. **Download the ZappySys ODBC-JDBC Bridge**:
+   Visit https://zappysys.com/products/odbc-powerpack/odbc-jdbc-bridge-driver/ and click "download".
+
+ ![ZappySys_website.jpg](https://alioss.timecho.com/upload/ZappySys_website.jpg)
+
+2. **Prepare IoTDB**: Set up an IoTDB cluster and write an arbitrary row of data.
+ ```sql
+ IoTDB > insert into root.ln.wf02.wt02(timestamp,status) values(1,true)
+ ```
+
+3. **Deploy and Test the Bridge**:
+ 1. Open ODBC Data Sources (32-bit or 64-bit), matching your Windows architecture. One possible location is `C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Administrative Tools`.
+
+ ![ODBC_ADD_EN.jpg](https://alioss.timecho.com/upload/ODBC_ADD_EN.jpg)
+
+ 2. Click on "add" and select ZappySys JDBC Bridge.
+
+ ![ODBC_CREATE_EN.jpg](https://alioss.timecho.com/upload/ODBC_CREATE_EN.jpg)
+
+ 3. Fill in the following settings:
+
+ | Property | Content | Example |
+ |---------------------|-----------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------|
+ | Connection String | jdbc:iotdb://\:\/ | jdbc:iotdb://127.0.0.1:6667/ |
+ | Driver Class | org.apache.iotdb.jdbc.IoTDBDriver | org.apache.iotdb.jdbc.IoTDBDriver |
+ | JDBC driver file(s) | The path of IoTDB JDBC jar-with-dependencies | C:\Users\13361\Documents\GitHub\iotdb\iotdb-client\jdbc\target\iotdb-jdbc-1.3.2-SNAPSHOT-jar-with-dependencies.jar |
+ | User name | IoTDB's user name | root |
+ | User password | IoTDB's password | root |
+
+ ![ODBC_CONNECTION.png](https://alioss.timecho.com/upload/ODBC_CONNECTION.png)
+
+ 4. Click the "Test Connection" button, and a "Test Connection: SUCCESSFUL" message should appear.
+
+ ![ODBC_CONFIG_EN.jpg](https://alioss.timecho.com/upload/ODBC_CONFIG_EN.jpg)
+
+ 5. Click the "Preview" button above, replace the original query text with `select * from root.**`, then click "Preview Data"; the query result should display correctly.
+
+ ![ODBC_TEST.jpg](https://alioss.timecho.com/upload/ODBC_TEST.jpg)
+
+4. **Operate IoTDB's data with ODBC**: After correct deployment, you can use Microsoft's ODBC library to operate IoTDB's data. Here's an example written in C#:
+ ```C#
+ using System.Data.Odbc;
+
+ // Get a connection
+ var dbConnection = new OdbcConnection("DSN=ZappySys JDBC Bridge");
+ dbConnection.Open();
+
+ // Execute the write commands to prepare data
+ var dbCommand = dbConnection.CreateCommand();
+ dbCommand.CommandText = "insert into root.Keller.Flur.Energieversorgung(time, s1) values(1715670861634, 1)";
+ dbCommand.ExecuteNonQuery();
+ dbCommand.CommandText = "insert into root.Keller.Flur.Energieversorgung(time, s2) values(1715670861634, true)";
+ dbCommand.ExecuteNonQuery();
+ dbCommand.CommandText = "insert into root.Keller.Flur.Energieversorgung(time, s3) values(1715670861634, 3.1)";
+ dbCommand.ExecuteNonQuery();
+
+ // Execute the read command
+ dbCommand.CommandText = "SELECT * FROM root.Keller.Flur.Energieversorgung";
+ var dbReader = dbCommand.ExecuteReader();
+
+ // Write the output header
+ var fCount = dbReader.FieldCount;
+ Console.Write(":");
+ for(var i = 0; i < fCount; i++)
+ {
+ var fName = dbReader.GetName(i);
+ Console.Write(fName + ":");
+ }
+ Console.WriteLine();
+
+ // Output the content
+ while (dbReader.Read())
+ {
+ Console.Write(":");
+ for(var i = 0; i < fCount; i++)
+ {
+ var fieldType = dbReader.GetFieldType(i);
+ switch (fieldType.Name)
+ {
+ case "DateTime":
+ var dateTime = dbReader.GetInt64(i);
+ Console.Write(dateTime + ":");
+ break;
+ case "Double":
+ if (dbReader.IsDBNull(i))
+ {
+ Console.Write("null:");
+ }
+ else
+ {
+ var fValue = dbReader.GetDouble(i);
+ Console.Write(fValue + ":");
+ }
+ break;
+ default:
+ Console.Write(fieldType.Name + ":");
+ break;
+ }
+ }
+ Console.WriteLine();
+ }
+
+ // Shut down gracefully
+ dbReader.Close();
+ dbCommand.Dispose();
+ dbConnection.Close();
+ ```
+ This program can write data into IoTDB, and query the data we have just written.
diff --git a/src/UserGuide/V2.0.1/Tree/API/Programming-OPC-UA_timecho.md b/src/UserGuide/V2.0.1/Tree/API/Programming-OPC-UA_timecho.md
new file mode 100644
index 000000000..703b47c68
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/Programming-OPC-UA_timecho.md
@@ -0,0 +1,262 @@
+
+
+# OPC UA Protocol
+
+## OPC UA
+
+OPC UA is a technical specification used in the automation field for communication between different devices and systems. It enables cross-platform, cross-language, and cross-network operation, providing a reliable and secure data-exchange foundation for the Industrial Internet of Things. IoTDB supports the OPC UA protocol, and the IoTDB OPC Server supports both Client/Server and Pub/Sub communication modes.
+
+### OPC UA Client/Server Mode
+
+- **Client/Server Mode**: In this mode, IoTDB's stream processing engine establishes a connection with the OPC UA Server via an OPC UA Sink. The OPC UA Server maintains data within its Address Space, from which IoTDB can request and retrieve data. Other OPC UA Clients can also access the data on the server.
+
+
+
+
+
+
+- Features:
+
+ - OPC UA will organize the device information received from Sink into folders under the Objects folder according to a tree model.
+
+ - Each measurement point is recorded as a variable node and the latest value in the current database is recorded.
+
+### OPC UA Pub/Sub Mode
+
+- **Pub/Sub Mode**: In this mode, IoTDB's stream processing engine sends data change events to the OPC UA Server through an OPC UA Sink. These events are published to the server's message queue and managed through Event Nodes. Other OPC UA Clients can subscribe to these Event Nodes to receive notifications upon data changes.
+
+
+
+
+
+- Features:
+
+ - Each measurement point is wrapped as an Event Node in OPC UA.
+
+
+ - The relevant fields and their meanings are as follows:
+
+ | Field | Meaning | Type (Milo) | Example |
+ | :--------- | :--------------- | :------------ | :-------------------- |
+ | Time | Timestamp | DateTime | 1698907326198 |
+ | SourceName | Full path of the measurement point | String | root.test.opc.sensor0 |
+ | SourceNode | Data type of the measurement point | NodeId | Int32 |
+ | Message | Data | LocalizedText | 3.0 |
+
+ - Events are only sent to clients that are already listening; if a client is not connected, the Event will be ignored.
+
+
+## IoTDB OPC Server Startup method
+
+### Syntax
+
+The syntax for creating the Sink is as follows:
+
+
+```SQL
+create pipe p1
+ with source (...)
+ with processor (...)
+ with sink ('sink' = 'opc-ua-sink',
+ 'sink.opcua.tcp.port' = '12686',
+ 'sink.opcua.https.port' = '8443',
+ 'sink.user' = 'root',
+ 'sink.password' = 'root',
+ 'sink.opcua.security.dir' = '...'
+ )
+```
+
+### Parameters
+
+| key | value | value range | required or not | default value |
+| :------------------------------ | :----------------------------------------------------------- | :------------------------------------- | :------- | :------------- |
+| sink | OPC UA SINK | String: opc-ua-sink | Required | |
+| sink.opcua.model | OPC UA model used | String: client-server / pub-sub | Optional | client-server |
+| sink.opcua.tcp.port | OPC UA's TCP port | Integer: [0, 65536] | Optional | 12686 |
+| sink.opcua.https.port | OPC UA's HTTPS port | Integer: [0, 65536] | Optional | 8443 |
+| sink.opcua.security.dir | Directory for OPC UA's keys and certificates | String: path; absolute and relative paths are supported | Optional | The `opc_security` folder under the `conf` directory of the DataNode; if IoTDB has no `conf` directory (e.g., when launching a DataNode from an IDE such as IDEA), the `iotdb_opc_security` folder under the user's home directory |
+| sink.opcua.enable-anonymous-access | Whether OPC UA allows anonymous access | Boolean | Optional | true |
+| sink.user | User for OPC UA, specified in the configuration | String | Optional | root |
+| sink.password | Password for OPC UA, specified in the configuration | String | Optional | root |
+
+### Example
+
+```Bash
+create pipe p1
+ with sink ('sink' = 'opc-ua-sink',
+ 'sink.user' = 'root',
+ 'sink.password' = 'root');
+start pipe p1;
+```
+
+### Usage Limitations
+
+1. **DataRegion Requirement**: The OPC UA server will only start if there is a DataRegion in IoTDB. For an empty IoTDB, a data entry is necessary for the OPC UA server to become effective.
+
+2. **Data Availability**: Clients subscribing to the server will not receive data written to IoTDB before their connection.
+
+3. **Multiple DataNodes may cause scattered sending or conflict issues**:
+
+   - For IoTDB clusters whose DataRegions are scattered across different DataNodes, data is sent separately from the leader of each DataRegion. A client needs to listen on the configured port of each DataNode IP separately.
+
+   - We suggest using this OPC UA server with a 1C1D deployment.
+
+4. **Deleting data and modifying measurement point types are not supported:** In Client/Server mode, OPC UA cannot delete data or change data type settings. In Pub/Sub mode, if data is deleted, the information cannot be pushed to the client.
+
+## IoTDB OPC Server Example
+
+### Client / Server Mode
+
+#### Preparation Work
+
+1. Take UAExpert client as an example, download the UAExpert client: https://www.unified-automation.com/downloads/opc-ua-clients.html
+
+2. Install UAExpert and fill in your own certificate information.
+
+#### Quick Start
+
+1. Use the following SQL to create and start the OPC UA Sink in client-server mode. For detailed syntax, please refer to: [IoTDB OPC Server Syntax](#syntax)
+
+```SQL
+create pipe p1 with sink ('sink'='opc-ua-sink');
+```
+
+2. Write some data.
+
+```SQL
+insert into root.test.db(time, s2) values(now(), 2)
+```
+
+ The metadata is automatically created and enabled here.
+
+3. Configure the connection to IoTDB in UAExpert, where the password should be set to the one defined in the sink.password parameter (using the default password "root" as an example):
+
+
+
+
+
+
+
+
+
+4. After trusting the server's certificate, you can see the written data in the Objects folder on the left.
+
+
+
+
+
+
+
+
+
+5. You can drag the node on the left to the center and display the latest value of that node:
+
+
+
+
+
+### Pub / Sub Mode
+
+#### Preparation Work
+
+The code is located in the [opc-ua-sink folder](https://github.com/apache/iotdb/tree/master/example/pipe-opc-ua-sink/src/main/java/org/apache/iotdb/opcua) under the iotdb-example package.
+
+The code includes:
+
+- The main class (ClientTest)
+- Client certificate-related logic (IoTDBKeyStoreLoaderClient)
+- Client configuration and startup logic (ClientExampleRunner)
+- The parent class of ClientTest (ClientExample)
+
+#### Quick Start
+
+The steps are as follows:
+
+1. Start IoTDB and write some data.
+
+```SQL
+insert into root.a.b(time, c, d) values(now(), 1, 2);
+```
+
+ The metadata is automatically created and enabled here.
+
+2. Use the following SQL to create and start the OPC UA Sink in Pub-Sub mode. For detailed syntax, please refer to: [IoTDB OPC Server Syntax](#syntax)
+
+```SQL
+create pipe p1 with sink ('sink'='opc-ua-sink',
+ 'sink.opcua.model'='pub-sub');
+start pipe p1;
+```
+
+ At this point, you can see that the opc certificate-related directory has been created under the server's conf directory.
+
+
+
+
+
+3. Run the Client connection directly; the Client's certificate will be rejected by the server.
+
+
+
+
+
+4. Go to the server's sink.opcua.security.dir directory, then to the pki's rejected directory, where the Client's certificate should have been generated.
+
+
+
+
+
+5. Move (not copy) the client's certificate into the `certs` folder of the `trusted` directory at the same level (do not put it into any other subdirectory).
+
+
+
+
+
+6. Open the Client connection again; the server's certificate should now be rejected by the Client.
+
+
+
+
+
+7. Go to the client's /client/security directory, then to the pki's rejected directory, and move the server's certificate into the `trusted` directory (not into a subdirectory of it).
+
+
+
+
+
+8. Open the Client, and now the two-way trust is successful, and the Client can connect to the server.
+
+9. Write data to the server, and the Client will print out the received data.
+
+
+
+
+
+
+### Notes
+
+1. **Standalone and cluster:** It is recommended to use a 1C1D (one ConfigNode and one DataNode) standalone deployment. If there are multiple DataNodes in the cluster, data may be sent in a scattered manner across the DataNodes, and it may not be possible to listen to all the data.
+
+2. **No Need to Touch the Root Certificates:** During the certificate operations, there is no need to touch the `iotdb-server.pfx` certificate under the IoTDB security root directory or the `example-client.pfx` file under the client security directory. When the Client and Server connect bidirectionally, they send their root certificates to each other. If one side sees a certificate for the first time, the certificate is placed in the rejected directory; once it is moved into `trusted/certs`, the other party is trusted.
+
+3. **Java 17+ is Recommended:**
+On JVM 8, there may be a key-length restriction resulting in an "Illegal key size" error. For specific versions (such as JDK 1.8u151+), you can add `Security.setProperty("crypto.policy", "unlimited");` where the client is created in `ClientExampleRunner` to solve this, or download the unlimited-strength policy files `local_policy.jar` and `US_export_policy.jar` and replace the ones under `JDK/jre/lib/security`. Download link: https://www.oracle.com/java/technologies/javase-jce8-downloads.html
diff --git a/src/UserGuide/V2.0.1/Tree/API/Programming-Python-Native-API.md b/src/UserGuide/V2.0.1/Tree/API/Programming-Python-Native-API.md
new file mode 100644
index 000000000..b17d73ea8
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/Programming-Python-Native-API.md
@@ -0,0 +1,732 @@
+
+
+# Python Native API
+
+## Requirements
+
+You have to install thrift (>=0.13) before using the package.
+
+
+
+## How to use (Example)
+
+First, download the package: `pip3 install apache-iotdb`
+
+You can find an example of using the package to read and write data here: [Example](https://github.com/apache/iotdb/blob/master/iotdb-client/client-py/SessionExample.py)
+
+An example of aligned timeseries: [Aligned Timeseries Session Example](https://github.com/apache/iotdb/blob/master/iotdb-client/client-py/SessionAlignedTimeseriesExample.py)
+
+(you need to add `import iotdb` at the top of the file)
+
+Or:
+
+```python
+from iotdb.Session import Session
+
+ip = "127.0.0.1"
+port_ = "6667"
+username_ = "root"
+password_ = "root"
+session = Session(ip, port_, username_, password_)
+session.open(False)
+zone = session.get_time_zone()
+session.close()
+```
+
+## Initialization
+
+* Initialize a Session
+
+```python
+session = Session(
+ ip="127.0.0.1",
+ port="6667",
+ user="root",
+ password="root",
+ fetch_size=1024,
+ zone_id="UTC+8",
+ enable_redirection=True
+)
+```
+
+* Initialize a Session to connect multiple nodes
+
+```python
+session = Session.init_from_node_urls(
+ node_urls=["127.0.0.1:6667", "127.0.0.1:6668", "127.0.0.1:6669"],
+ user="root",
+ password="root",
+ fetch_size=1024,
+ zone_id="UTC+8",
+ enable_redirection=True
+)
+```
+
+* Open a session, with a parameter to specify whether to enable RPC compression
+
+```python
+session.open(enable_rpc_compression=False)
+```
+
+Notice: the client's RPC compression setting must match that of the IoTDB server.
+
+* Close a Session
+
+```python
+session.close()
+```
+## Managing Session through SessionPool
+
+Utilizing a SessionPool to manage sessions eliminates the need to worry about session reuse. When the number of session connections reaches the maximum capacity of the pool, requests to acquire a session will block, and you can set the blocking wait time through a parameter. After a session is used, it should be returned to the SessionPool via the `put_back` method for proper management.
+
+### Create SessionPool
+
+```python
+from iotdb.SessionPool import PoolConfig, SessionPool
+
+pool_config = PoolConfig(host=ip, port=port, user_name=username,
+                         password=password, fetch_size=1024,
+                         time_zone="UTC+8", max_retry=3)
+max_pool_size = 5
+wait_timeout_in_ms = 3000
+
+# Create the connection pool
+session_pool = SessionPool(pool_config, max_pool_size, wait_timeout_in_ms)
+```
+### Create a SessionPool using distributed nodes
+```python
+pool_config = PoolConfig(node_urls=["127.0.0.1:6667", "127.0.0.1:6668", "127.0.0.1:6669"], user_name=username,
+                         password=password, fetch_size=1024,
+                         time_zone="UTC+8", max_retry=3)
+max_pool_size = 5
+wait_timeout_in_ms = 3000
+session_pool = SessionPool(pool_config, max_pool_size, wait_timeout_in_ms)
+```
+### Acquiring a session through SessionPool and manually calling PutBack after use
+
+```python
+session = session_pool.get_session()
+session.set_storage_group(STORAGE_GROUP_NAME)
+session.create_time_series(
+ TIMESERIES_PATH, TSDataType.BOOLEAN, TSEncoding.PLAIN, Compressor.SNAPPY
+)
+# After usage, return the session using putBack
+session_pool.put_back(session)
+# When closing the sessionPool, all managed sessions will be closed as well
+session_pool.close()
+```
+
+## Data Definition Interface (DDL Interface)
+
+### Database Management
+
+* CREATE DATABASE
+
+```python
+session.set_storage_group(group_name)
+```
+
+* Delete one or several databases
+
+```python
+session.delete_storage_group(group_name)
+session.delete_storage_groups(group_name_lst)
+```
+### Timeseries Management
+
+* Create one or multiple timeseries
+
+```python
+session.create_time_series(ts_path, data_type, encoding, compressor,
+ props=None, tags=None, attributes=None, alias=None)
+
+session.create_multi_time_series(
+ ts_path_lst, data_type_lst, encoding_lst, compressor_lst,
+ props_lst=None, tags_lst=None, attributes_lst=None, alias_lst=None
+)
+```
+
+* Create aligned timeseries
+
+```python
+session.create_aligned_time_series(
+ device_id, measurements_lst, data_type_lst, encoding_lst, compressor_lst
+)
+```
+
+Attention: Aliases of measurements are **not supported** currently.
+
+* Delete one or several timeseries
+
+```python
+session.delete_time_series(paths_list)
+```
+
+* Check whether the specific timeseries exists
+
+```python
+session.check_time_series_exists(path)
+```
+
+## Data Manipulation Interface (DML Interface)
+
+### Insert
+
+It is recommended to use `insert_tablet` to help improve write efficiency.
+
+* Insert a Tablet, which contains multiple rows of a device; each row has the same measurements
+ * **Better write performance**
+ * **Support null values**: fill the null value with any value, and then mark the null values via a BitMap (from v0.13)
+
+
+We have two implementations of Tablet in Python API.
+
+* Normal Tablet
+
+```python
+values_ = [
+ [False, 10, 11, 1.1, 10011.1, "test01"],
+ [True, 100, 11111, 1.25, 101.0, "test02"],
+ [False, 100, 1, 188.1, 688.25, "test03"],
+ [True, 0, 0, 0, 6.25, "test04"],
+]
+timestamps_ = [1, 2, 3, 4]
+tablet_ = Tablet(
+ device_id, measurements_, data_types_, values_, timestamps_
+)
+session.insert_tablet(tablet_)
+
+values_ = [
+ [None, 10, 11, 1.1, 10011.1, "test01"],
+ [True, None, 11111, 1.25, 101.0, "test02"],
+ [False, 100, None, 188.1, 688.25, "test03"],
+ [True, 0, 0, 0, None, None],
+]
+timestamps_ = [16, 17, 18, 19]
+tablet_ = Tablet(
+ device_id, measurements_, data_types_, values_, timestamps_
+)
+session.insert_tablet(tablet_)
+```
+* Numpy Tablet
+
+Compared with the normal Tablet, the Numpy Tablet uses [numpy.ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) to record data.
+With a smaller memory footprint and lower serialization cost, insert performance is better.
+
+**Notice**
+1. The time and numerical value columns in a Numpy Tablet are ndarrays.
+2. It is recommended to use the specific dtype for each ndarray, as in the example below
+ (the default dtypes also work).
+
+```python
+import numpy as np
+data_types_ = [
+ TSDataType.BOOLEAN,
+ TSDataType.INT32,
+ TSDataType.INT64,
+ TSDataType.FLOAT,
+ TSDataType.DOUBLE,
+ TSDataType.TEXT,
+]
+np_values_ = [
+ np.array([False, True, False, True], TSDataType.BOOLEAN.np_dtype()),
+ np.array([10, 100, 100, 0], TSDataType.INT32.np_dtype()),
+ np.array([11, 11111, 1, 0], TSDataType.INT64.np_dtype()),
+ np.array([1.1, 1.25, 188.1, 0], TSDataType.FLOAT.np_dtype()),
+ np.array([10011.1, 101.0, 688.25, 6.25], TSDataType.DOUBLE.np_dtype()),
+ np.array(["test01", "test02", "test03", "test04"], TSDataType.TEXT.np_dtype()),
+]
+np_timestamps_ = np.array([1, 2, 3, 4], TSDataType.INT64.np_dtype())
+np_tablet_ = NumpyTablet(
+ device_id, measurements_, data_types_, np_values_, np_timestamps_
+)
+session.insert_tablet(np_tablet_)
+
+# insert one numpy tablet with None into the database.
+np_values_ = [
+ np.array([False, True, False, True], TSDataType.BOOLEAN.np_dtype()),
+ np.array([10, 100, 100, 0], TSDataType.INT32.np_dtype()),
+ np.array([11, 11111, 1, 0], TSDataType.INT64.np_dtype()),
+ np.array([1.1, 1.25, 188.1, 0], TSDataType.FLOAT.np_dtype()),
+ np.array([10011.1, 101.0, 688.25, 6.25], TSDataType.DOUBLE.np_dtype()),
+ np.array(["test01", "test02", "test03", "test04"], TSDataType.TEXT.np_dtype()),
+]
+np_timestamps_ = np.array([98, 99, 100, 101], TSDataType.INT64.np_dtype())
+np_bitmaps_ = []
+for i in range(len(measurements_)):
+ np_bitmaps_.append(BitMap(len(np_timestamps_)))
+np_bitmaps_[0].mark(0)
+np_bitmaps_[1].mark(1)
+np_bitmaps_[2].mark(2)
+np_bitmaps_[4].mark(3)
+np_bitmaps_[5].mark(3)
+np_tablet_with_none = NumpyTablet(
+ device_id, measurements_, data_types_, np_values_, np_timestamps_, np_bitmaps_
+)
+session.insert_tablet(np_tablet_with_none)
+```
+
+* Insert multiple Tablets
+
+```python
+session.insert_tablets(tablet_lst)
+```
+
+* Insert a Record
+
+```python
+session.insert_record(device_id, timestamp, measurements_, data_types_, values_)
+```
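+
+A minimal concrete sketch (assuming the timeseries under `root.sg.d1` already exist, or automatic schema creation is enabled):
+
+```python
+from iotdb.utils.IoTDBConstants import TSDataType
+
+session.insert_record(
+    "root.sg.d1",  # device_id
+    1,             # timestamp
+    ["s1", "s2"],  # measurements
+    [TSDataType.INT64, TSDataType.FLOAT],
+    [10, 1.25],
+)
+```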
+
+* Insert multiple Records
+
+```python
+session.insert_records(
+ device_ids_, time_list_, measurements_list_, data_type_list_, values_list_
+)
+```
+
+* Insert multiple Records that belong to the same device.
+ With type info, the server does not need to do type inference, which leads to better performance.
+
+
+```python
+session.insert_records_of_one_device(device_id, time_list, measurements_list, data_types_list, values_list)
+```
+
+### Insert with type inference
+
+When the data is of String type, we can use the following interface to perform type inference based on the value itself. For example, if a value is "true", it can be automatically inferred to be a boolean type; if a value is "3.2", it can be automatically inferred to be a float type. Without type information, the server has to do the type inference, which may take some time.
+
+* Insert a Record, which contains multiple measurement value of a device at a timestamp
+
+```python
+session.insert_str_record(device_id, timestamp, measurements, string_values)
+```
+
+### Insert of Aligned Timeseries
+
+The Insert of aligned timeseries uses interfaces like insert_aligned_XXX, and others are similar to the above interfaces:
+
+* insert_aligned_record
+* insert_aligned_records
+* insert_aligned_records_of_one_device
+* insert_aligned_tablet
+* insert_aligned_tablets
+
+
+## IoTDB-SQL Interface
+
+* Execute query statement
+
+```python
+session.execute_query_statement(sql)
+```
+
+* Execute non query statement
+
+```python
+session.execute_non_query_statement(sql)
+```
+
+* Execute statement
+
+```python
+session.execute_statement(sql)
+```
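+
+For example, a query result can be consumed row by row through the returned `SessionDataSet` (this pattern follows the official `SessionExample.py`):
+
+```python
+session_data_set = session.execute_query_statement("select * from root.sg.d1")
+session_data_set.set_fetch_size(1024)
+while session_data_set.has_next():
+    print(session_data_set.next())
+session_data_set.close_operation_handle()
+```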
+
+## Schema Template
+### Create Schema Template
+The steps for creating a schema template are as follows:
+1. Create the template class
+2. Add MeasurementNodes
+3. Execute the create schema template function
+
+```python
+template = Template(name=template_name, share_time=True)
+
+m_node_x = MeasurementNode("x", TSDataType.FLOAT, TSEncoding.RLE, Compressor.SNAPPY)
+m_node_y = MeasurementNode("y", TSDataType.FLOAT, TSEncoding.RLE, Compressor.SNAPPY)
+m_node_z = MeasurementNode("z", TSDataType.FLOAT, TSEncoding.RLE, Compressor.SNAPPY)
+
+template.add_template(m_node_x)
+template.add_template(m_node_y)
+template.add_template(m_node_z)
+
+session.create_schema_template(template)
+```
+### Modify Schema Template measurements
+Modify measurements in a template; the template must already be created. The following functions add or delete measurement nodes.
+* add node in template
+```python
+session.add_measurements_in_template(template_name, measurements_path, data_types, encodings, compressors, is_aligned)
+```
+
+* delete node in template
+```python
+session.delete_node_in_template(template_name, path)
+```
+
+### Set Schema Template
+```python
+session.set_schema_template(template_name, prefix_path)
+```
+
+### Unset Schema Template
+```python
+session.unset_schema_template(template_name, prefix_path)
+```
+
+### Show Schema Template
+* Show all schema templates
+```python
+session.show_all_templates()
+```
+* Count all measurements in templates
+```python
+session.count_measurements_in_template(template_name)
+```
+
+* Judge whether a path is a measurement in the template (the measurement must be in the template)
+```python
+session.is_measurement_in_template(template_name, path)
+```
+
+* Judge whether a path exists in the template (the path does not have to belong to the template)
+```python
+session.is_path_exist_in_template(template_name, path)
+```
+
+* Show the measurement nodes in a schema template
+```python
+session.show_measurements_in_template(template_name)
+```
+
+* Show the path prefix where a schema template is set
+```python
+session.show_paths_template_set_on(template_name)
+```
+
+* Show the path prefix where a schema template is used (i.e. the time series has been created)
+```python
+session.show_paths_template_using_on(template_name)
+```
+
+### Drop Schema Template
+Delete an existing schema template; dropping a template that has already been set on a path is not supported.
+```python
+session.drop_schema_template("template_python")
+```
+
+
+## Pandas Support
+
+To easily transform a query result into a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html),
+the SessionDataSet has a method `.todf()` which consumes the dataset and transforms it into a pandas dataframe.
+
+Example:
+
+```python
+from iotdb.Session import Session
+
+ip = "127.0.0.1"
+port_ = "6667"
+username_ = "root"
+password_ = "root"
+session = Session(ip, port_, username_, password_)
+session.open(False)
+result = session.execute_query_statement("SELECT * FROM root.*")
+
+# Transform to Pandas Dataset
+df = result.todf()
+
+session.close()
+
+# Now you can work with the dataframe
+df = ...
+```
+
+
+## IoTDB Testcontainer
+
+The Test Support is based on the lib `testcontainers` (https://testcontainers-python.readthedocs.io/en/latest/index.html) which you need to install in your project if you want to use the feature.
+
+To start (and stop) an IoTDB Database in a Docker container simply do:
+```python
+import unittest
+
+from iotdb.IoTDBContainer import IoTDBContainer
+from iotdb.Session import Session
+
+
+class MyTestCase(unittest.TestCase):
+
+ def test_something(self):
+ with IoTDBContainer() as c:
+ session = Session("localhost", c.get_exposed_port(6667), "root", "root")
+ session.open(False)
+ result = session.execute_query_statement("SHOW TIMESERIES")
+ print(result)
+ session.close()
+```
+
+By default it loads the image `apache/iotdb:latest`; if you want a specific version, just pass it, e.g. `IoTDBContainer("apache/iotdb:0.12.0")`, to get version `0.12.0` running.
+
+## IoTDB DBAPI
+
+IoTDB DBAPI implements the Python DB API 2.0 specification (https://peps.python.org/pep-0249/), which defines a common
+interface for accessing databases in Python.
+
+### Examples
++ Initialization
+
+The initialization parameters are consistent with those of the Session (except for `sqlalchemy_mode`).
+```python
+from iotdb.dbapi import connect
+
+ip = "127.0.0.1"
+port_ = "6667"
+username_ = "root"
+password_ = "root"
+conn = connect(ip, port_, username_, password_, fetch_size=1024, zone_id="UTC+8", sqlalchemy_mode=False)
+cursor = conn.cursor()
+```
++ simple SQL statement execution
+```python
+cursor.execute("SELECT ** FROM root")
+for row in cursor.fetchall():
+ print(row)
+```
+
++ execute SQL with parameter
+
+IoTDB DBAPI supports pyformat style parameters
+```python
+cursor.execute("SELECT ** FROM root WHERE time < %(time)s",{"time":"2017-11-01T00:08:00.000"})
+for row in cursor.fetchall():
+ print(row)
+```
+
++ execute SQL with parameter sequences
+```python
+seq_of_parameters = [
+ {"timestamp": 1, "temperature": 1},
+ {"timestamp": 2, "temperature": 2},
+ {"timestamp": 3, "temperature": 3},
+ {"timestamp": 4, "temperature": 4},
+ {"timestamp": 5, "temperature": 5},
+]
+sql = "insert into root.cursor(timestamp,temperature) values(%(timestamp)s,%(temperature)s)"
+cursor.executemany(sql, seq_of_parameters)
+```
+
++ close the connection and cursor
+```python
+cursor.close()
+conn.close()
+```
+
+## IoTDB SQLAlchemy Dialect (Experimental)
+The SQLAlchemy dialect of IoTDB is written to adapt to Apache Superset.
+This part is still being improved.
+Please do not use it in the production environment!
+### Mapping of the metadata
+The data model used by SQLAlchemy is a relational model, which describes the relationships between different entities through tables,
+while the data model of IoTDB is a hierarchical model, which organizes data through a tree structure.
+To adapt IoTDB to the SQLAlchemy dialect, the original data model in IoTDB needs to be reorganized
+and converted into the data model of SQLAlchemy.
+
+The metadata in the IoTDB are:
+
+1. Database
+2. Path
+3. Entity
+4. Measurement
+
+The metadata in the SQLAlchemy are:
+1. Schema
+2. Table
+3. Column
+
+The mapping relationship between them is:
+
+| The metadata in the SQLAlchemy | The metadata in the IoTDB |
+| -------------------- | -------------------------------------------- |
+| Schema | Database |
+| Table | Path ( from database to entity ) + Entity |
+| Column | Measurement |
+
+The following figure shows the relationship between the two more intuitively:
+
+![sqlalchemy-to-iotdb](https://alioss.timecho.com/docs/img/UserGuide/API/IoTDB-SQLAlchemy/sqlalchemy-to-iotdb.png?raw=true)
+
+### Data type mapping
+| data type in IoTDB | data type in SQLAlchemy |
+|--------------------|-------------------------|
+| BOOLEAN | Boolean |
+| INT32 | Integer |
+| INT64 | BigInteger |
+| FLOAT | Float |
+| DOUBLE | Float |
+| TEXT | Text |
+| LONG | BigInteger |
+
+### Example
+
++ execute statement
+
+```python
+from sqlalchemy import create_engine
+
+engine = create_engine("iotdb://root:root@127.0.0.1:6667")
+connect = engine.connect()
+result = connect.execute("SELECT ** FROM root")
+for row in result.fetchall():
+ print(row)
+```
+
++ ORM (now only simple queries are supported)
+
+```python
+from sqlalchemy import create_engine, Column, Float, BigInteger, MetaData
+from sqlalchemy.ext.declarative import declarative_base
+from sqlalchemy.orm import sessionmaker
+
+metadata = MetaData(
+ schema='root.factory'
+)
+Base = declarative_base(metadata=metadata)
+
+
+class Device(Base):
+ __tablename__ = "room2.device1"
+ Time = Column(BigInteger, primary_key=True)
+ temperature = Column(Float)
+ status = Column(Float)
+
+
+engine = create_engine("iotdb://root:root@127.0.0.1:6667")
+
+DbSession = sessionmaker(bind=engine)
+session = DbSession()
+
+res = session.query(Device.status).filter(Device.temperature > 1)
+
+for row in res:
+ print(row)
+```
+
+
+## Developers
+
+### Introduction
+
+This is an example of how to connect to IoTDB with Python, using the Thrift RPC interfaces. Things are almost the same on Windows and Linux, but pay attention to differences such as the path separator.
+
+
+
+### Prerequisites
+
+Python 3.7 or later is preferred.
+
+You have to install Thrift (0.11.0 or later) to compile our thrift file into Python code. Below is the official installation tutorial; in the end, you should have a thrift executable.
+
+```
+http://thrift.apache.org/docs/install/
+```
+
+Before starting you need to install `requirements_dev.txt` in your python environment, e.g. by calling
+```shell
+pip install -r requirements_dev.txt
+```
+
+
+
+### Compile the thrift library and Debug
+
+In the root of IoTDB's source code folder, run `mvn clean generate-sources -pl iotdb-client/client-py -am`.
+
+This will automatically delete and repopulate the folder `iotdb/thrift` with the generated thrift files.
+This folder is ignored from git and should **never be pushed to git!**
+
+**Notice** Do not upload `iotdb/thrift` to the git repo.
+
+
+
+
+### Session Client & Example
+
+We packed the Thrift interface up in `client-py/src/iotdb/Session.py` (similar to its Java counterpart) and also provide an example file `client-py/src/SessionExample.py` showing how to use the Session module. Please read it carefully.
+
+
+Or, another simple example:
+
+```python
+from iotdb.Session import Session
+
+ip = "127.0.0.1"
+port_ = "6667"
+username_ = "root"
+password_ = "root"
+session = Session(ip, port_, username_, password_)
+session.open(False)
+zone = session.get_time_zone()
+session.close()
+```
+
+
+
+### Tests
+
+Please add your custom tests in `tests` folder.
+
+To run all defined tests just type `pytest .` in the root folder.
+
+**Notice** Some tests need docker to be started on your system as a test instance is started in a docker container using [testcontainers](https://testcontainers-python.readthedocs.io/en/latest/index.html).
+
+
+
+### Further Tools
+
+[black](https://pypi.org/project/black/) and [flake8](https://pypi.org/project/flake8/) are installed for autoformatting and linting.
+Both can be run by `black .` or `flake8 .` respectively.
+
+
+
+## Releasing
+
+To do a release, first ensure that you have the right set of generated thrift files,
+then run linting and auto-formatting,
+and then ensure that all tests pass (via `pytest .`).
+After that you are good to go!
+
+
+
+### Preparing your environment
+
+First, install all necessary dev dependencies via `pip install -r requirements_dev.txt`.
+
+
+
+### Doing the Release
+
+There is a convenient script `release.sh` to do all steps for a release.
+Namely, these are
+
+* Remove all transient directories from the last release (if any)
+* (Re-)generate all generated sources via mvn
+* Run Linting (flake8)
+* Run Tests via pytest
+* Build
+* Release to pypi
+
diff --git a/src/UserGuide/V2.0.1/Tree/API/Programming-Rust-Native-API.md b/src/UserGuide/V2.0.1/Tree/API/Programming-Rust-Native-API.md
new file mode 100644
index 000000000..f58df68fc
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/Programming-Rust-Native-API.md
@@ -0,0 +1,188 @@
+
+
+# Rust Native API
+
+IoTDB uses Thrift as a cross-language RPC framework, so access to IoTDB can be achieved through the interfaces provided by Thrift.
+This document will introduce how to generate a native Rust interface that can access IoTDB.
+
+## Dependencies
+
+ * JDK >= 1.8
+ * Rust >= 1.0.0
+ * thrift 0.14.1
+ * Linux, macOS, or another Unix-like system
+ * Windows with bash
+
+Thrift (0.14.1 or higher) must be installed to compile Thrift files into Rust code. The following is the official installation tutorial; in the end, you should have a thrift executable.
+
+```
+http://thrift.apache.org/docs/install/
+```
+
+## Compile the Thrift library and generate the Rust native interface
+
+1. Find the `pom.xml` file in the root directory of the IoTDB source code folder.
+2. Open the `pom.xml` file and find the following content:
+ ```xml
+   <execution>
+     <id>generate-thrift-sources-python</id>
+     <phase>generate-sources</phase>
+     <goals>
+       <goal>compile</goal>
+     </goals>
+     <configuration>
+       <generator>py</generator>
+       <outputDirectory>${project.build.directory}/generated-sources-python/</outputDirectory>
+     </configuration>
+   </execution>
+ ```
+3. Duplicate this block and change the `id`, `generator` and `outputDirectory` to this:
+ ```xml
+   <execution>
+     <id>generate-thrift-sources-rust</id>
+     <phase>generate-sources</phase>
+     <goals>
+       <goal>compile</goal>
+     </goals>
+     <configuration>
+       <generator>rs</generator>
+       <outputDirectory>${project.build.directory}/generated-sources-rust/</outputDirectory>
+     </configuration>
+   </execution>
+ ```
+4. In the root directory of the IoTDB source code folder, run `mvn clean generate-sources`.
+
+This command will automatically delete the files in `iotdb/iotdb-protocol/thrift/target` and `iotdb/iotdb-protocol/thrift-commons/target`, and repopulate the folder with the newly generated files.
+The newly generated Rust sources will be located in `iotdb/iotdb-protocol/thrift/target/generated-sources-rust` in the various modules of the `iotdb-protocol` module.
+
+## Using the Rust native interface
+
+Simply copy the files in `iotdb/iotdb-protocol/thrift/target/generated-sources-rust/` and `iotdb/iotdb-protocol/thrift-commons/target/generated-sources-rust/` into your project.
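+
+As a minimal sketch of what calling the generated client might look like (the generated module name `client` and the client type names below are assumptions; adjust them to your generated sources, with the `thrift` crate as a dependency):
+
+```rust
+use thrift::protocol::{TBinaryInputProtocol, TBinaryOutputProtocol};
+use thrift::transport::{TFramedReadTransport, TFramedWriteTransport, TIoChannel, TTcpChannel};
+
+// assumed: module generated from the IoTDB thrift definitions
+use client::{IClientRPCServiceSyncClient, TIClientRPCServiceSyncClient};
+
+fn main() -> thrift::Result<()> {
+    // connect to the DataNode RPC port
+    let mut channel = TTcpChannel::new();
+    channel.open("127.0.0.1:6667")?;
+    let (read_half, write_half) = channel.split()?;
+
+    let in_proto = TBinaryInputProtocol::new(TFramedReadTransport::new(read_half), true);
+    let out_proto = TBinaryOutputProtocol::new(TFramedWriteTransport::new(write_half), true);
+    let mut client = IClientRPCServiceSyncClient::new(in_proto, out_proto);
+
+    // a parameterless RPC from the interface listed below
+    let props = client.get_properties()?;
+    println!("server properties: {:?}", props);
+    Ok(())
+}
+```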
+
+## RPC interface
+
+```
+// open a session
+TSOpenSessionResp openSession(1:TSOpenSessionReq req);
+
+// close a session
+TSStatus closeSession(1:TSCloseSessionReq req);
+
+// run an SQL statement in batch
+TSExecuteStatementResp executeStatement(1:TSExecuteStatementReq req);
+
+// execute SQL statement in batch
+TSStatus executeBatchStatement(1:TSExecuteBatchStatementReq req);
+
+// execute query SQL statement
+TSExecuteStatementResp executeQueryStatement(1:TSExecuteStatementReq req);
+
+// execute insert, delete and update SQL statement
+TSExecuteStatementResp executeUpdateStatement(1:TSExecuteStatementReq req);
+
+// fetch next query result
+TSFetchResultsResp fetchResults(1:TSFetchResultsReq req)
+
+// fetch meta data
+TSFetchMetadataResp fetchMetadata(1:TSFetchMetadataReq req)
+
+// cancel a query
+TSStatus cancelOperation(1:TSCancelOperationReq req);
+
+// close a query dataset
+TSStatus closeOperation(1:TSCloseOperationReq req);
+
+// get time zone
+TSGetTimeZoneResp getTimeZone(1:i64 sessionId);
+
+// set time zone
+TSStatus setTimeZone(1:TSSetTimeZoneReq req);
+
+// get server's properties
+ServerProperties getProperties();
+
+// CREATE DATABASE
+TSStatus setStorageGroup(1:i64 sessionId, 2:string storageGroup);
+
+// create timeseries
+TSStatus createTimeseries(1:TSCreateTimeseriesReq req);
+
+// create multi timeseries
+TSStatus createMultiTimeseries(1:TSCreateMultiTimeseriesReq req);
+
+// delete timeseries
+TSStatus deleteTimeseries(1:i64 sessionId, 2:list<string> path)
+
+// delete storage groups
+TSStatus deleteStorageGroups(1:i64 sessionId, 2:list<string> storageGroup);
+
+// insert record
+TSStatus insertRecord(1:TSInsertRecordReq req);
+
+// insert record in string format
+TSStatus insertStringRecord(1:TSInsertStringRecordReq req);
+
+// insert tablet
+TSStatus insertTablet(1:TSInsertTabletReq req);
+
+// insert tablets in batch
+TSStatus insertTablets(1:TSInsertTabletsReq req);
+
+// insert records in batch
+TSStatus insertRecords(1:TSInsertRecordsReq req);
+
+// insert records of one device
+TSStatus insertRecordsOfOneDevice(1:TSInsertRecordsOfOneDeviceReq req);
+
+// insert records in batch as string format
+TSStatus insertStringRecords(1:TSInsertStringRecordsReq req);
+
+// test the latency of insert tablet; caution: no data will be inserted, only for testing latency
+TSStatus testInsertTablet(1:TSInsertTabletReq req);
+
+// test the latency of insert tablets; caution: no data will be inserted, only for testing latency
+TSStatus testInsertTablets(1:TSInsertTabletsReq req);
+
+// test the latency of insert record; caution: no data will be inserted, only for testing latency
+TSStatus testInsertRecord(1:TSInsertRecordReq req);
+
+// test the latency of insert record in string format; caution: no data will be inserted, only for testing latency
+TSStatus testInsertStringRecord(1:TSInsertStringRecordReq req);
+
+// test the latency of insert records; caution: no data will be inserted, only for testing latency
+TSStatus testInsertRecords(1:TSInsertRecordsReq req);
+
+// test the latency of insert records of one device; caution: no data will be inserted, only for testing latency
+TSStatus testInsertRecordsOfOneDevice(1:TSInsertRecordsOfOneDeviceReq req);
+
+// test the latency of insert records in string format; caution: no data will be inserted, only for testing latency
+TSStatus testInsertStringRecords(1:TSInsertStringRecordsReq req);
+
+// delete data
+TSStatus deleteData(1:TSDeleteDataReq req);
+
+// execute raw data query
+TSExecuteStatementResp executeRawDataQuery(1:TSRawDataQueryReq req);
+
+// request a statement id from server
+i64 requestStatementId(1:i64 sessionId);
+```
diff --git a/src/UserGuide/V2.0.1/Tree/API/RestServiceV1.md b/src/UserGuide/V2.0.1/Tree/API/RestServiceV1.md
new file mode 100644
index 000000000..738448e87
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/RestServiceV1.md
@@ -0,0 +1,930 @@
+
+
+# RESTful API V1 (Not Recommended)
+IoTDB's RESTful services can be used for query, write, and management operations, using the OpenAPI standard to define interfaces and generate frameworks.
+
+## Enable RESTful Services
+
+RESTful services are disabled by default.
+
+* Developer
+
+ Find the `IoTDBrestServiceConfig` class under `org.apache.iotdb.db.conf.rest` in the server module, and modify `enableRestService=true`.
+
+* User
+
+ Find the `conf/iotdb-system.properties` file under the IoTDB installation directory and set `enable_rest_service` to `true` to enable the module.
+
+ ```properties
+ enable_rest_service=true
+ ```
+
+## Authentication
+Except for the liveness probe API `/ping`, the RESTful services use basic authentication. Each URL request needs to carry `'Authorization': 'Basic ' + base64.encode(username + ':' + password)`.
+
+The username used in the following examples is `root`, and the password is `root`.
+
+And the authorization header is
+
+```
+Authorization: Basic cm9vdDpyb290
+```
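+
+For example, the header value can be computed like this (a Python sketch):
+
+```python
+import base64
+
+token = base64.b64encode(b"root:root").decode()
+print("Authorization: Basic " + token)  # -> Authorization: Basic cm9vdDpyb290
+```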
+
+- If a user authenticates with an incorrect username or password, the following error is returned:
+
+ HTTP Status Code: `401`
+
+ HTTP response body:
+ ```json
+ {
+ "code": 600,
+ "message": "WRONG_LOGIN_PASSWORD_ERROR"
+ }
+ ```
+
+- If the `Authorization` header is missing, the following error is returned:
+
+ HTTP Status Code: `401`
+
+ HTTP response body:
+ ```json
+ {
+ "code": 603,
+ "message": "UNINITIALIZED_AUTH_ERROR"
+ }
+ ```
+
+## Interface
+
+### ping
+
+The `/ping` API can be used for service liveness probing.
+
+Request method: `GET`
+
+Request path: `http://ip:port/ping`
+
+The username used in the example is `root`, and the password is `root`.
+
+Example request:
+
+```shell
+$ curl http://127.0.0.1:18080/ping
+```
+
+Response status codes:
+
+- `200`: The service is alive.
+- `503`: The service cannot accept any requests now.
+
+Response parameters:
+
+|parameter name |parameter type |parameter description|
+|:--- | :--- | :---|
+|code | integer | status code |
+| message | string | message |
+
+Sample response:
+
+- With HTTP status code `200`:
+
+ ```json
+ {
+ "code": 200,
+ "message": "SUCCESS_STATUS"
+ }
+ ```
+
+- With HTTP status code `503`:
+
+ ```json
+ {
+ "code": 500,
+ "message": "thrift service is unavailable"
+ }
+ ```
+
+> `/ping` can be accessed without authorization.
+
+### query
+
+The query interface can be used to handle data queries and metadata queries.
+
+Request method: `POST`
+
+Request header: `application/json`
+
+Request path: `http://ip:port/rest/v1/query`
+
+Parameter Description:
+
+| parameter name | parameter type | required | parameter description |
+|----------------| -------------- | -------- | ------------------------------------------------------------ |
+| sql | string | yes | The SQL statement to execute |
+| rowLimit | integer | no | The maximum number of rows in the result set that can be returned by a query. If this parameter is not set, the `rest_query_default_row_size_limit` of the configuration file will be used as the default value. When the number of rows in the returned result set exceeds the limit, the status code `411` will be returned. |
+
+Response parameters:
+
+| parameter name | parameter type | parameter description |
+|----------------| -------------- | ------------------------------------------------------------ |
+| expressions | array | Array of result set column names for data query, `null` for metadata query |
+| columnNames | array | Array of column names for metadata query result set, `null` for data query |
+| timestamps | array | Timestamp column, `null` for metadata query |
+| values | array | A two-dimensional array, the first dimension has the same length as the result set column name array, and the second dimension array represents a column of the result set |
+
+**Examples:**
+
+Tip: Statements like `select * from root.xx.**` are not recommended because those statements may cause OOM.
+
+**Expression query**
+
+ ```shell
+ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select s3, s4, s3 + 1 from root.sg27 limit 2"}' http://127.0.0.1:18080/rest/v1/query
+ ```
+Response instance
+ ```json
+ {
+ "expressions": [
+ "root.sg27.s3",
+ "root.sg27.s4",
+ "root.sg27.s3 + 1"
+ ],
+ "columnNames": null,
+ "timestamps": [
+ 1635232143960,
+ 1635232153960
+ ],
+ "values": [
+ [
+ 11,
+ null
+ ],
+ [
+ false,
+ true
+ ],
+ [
+ 12.0,
+ null
+ ]
+ ]
+ }
+ ```
+
+**Show child paths**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show child paths root"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "child paths"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg27",
+ "root.sg28"
+ ]
+ ]
+}
+```
+
+**Show child nodes**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show child nodes root"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "child nodes"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "sg27",
+ "sg28"
+ ]
+ ]
+}
+```
+
+**Show all ttl**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show all ttl"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "database",
+ "ttl"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg27",
+ "root.sg28"
+ ],
+ [
+ null,
+ null
+ ]
+ ]
+}
+```
+
+**Show ttl**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show ttl on root.sg27"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "database",
+ "ttl"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg27"
+ ],
+ [
+ null
+ ]
+ ]
+}
+```
+
+**Show functions**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show functions"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "function name",
+ "function type",
+ "class name (UDF)"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "ABS",
+ "ACOS",
+ "ASIN",
+ ...
+ ],
+ [
+ "built-in UDTF",
+ "built-in UDTF",
+ "built-in UDTF",
+ ...
+ ],
+ [
+ "org.apache.iotdb.db.query.udf.builtin.UDTFAbs",
+ "org.apache.iotdb.db.query.udf.builtin.UDTFAcos",
+ "org.apache.iotdb.db.query.udf.builtin.UDTFAsin",
+ ...
+ ]
+ ]
+}
+```
+
+**Show timeseries**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show timeseries"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "timeseries",
+ "alias",
+ "database",
+ "dataType",
+ "encoding",
+ "compression",
+ "tags",
+ "attributes"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg27.s3",
+ "root.sg27.s4",
+ "root.sg28.s3",
+ "root.sg28.s4"
+ ],
+ [
+ null,
+ null,
+ null,
+ null
+ ],
+ [
+ "root.sg27",
+ "root.sg27",
+ "root.sg28",
+ "root.sg28"
+ ],
+ [
+ "INT32",
+ "BOOLEAN",
+ "INT32",
+ "BOOLEAN"
+ ],
+ [
+ "RLE",
+ "RLE",
+ "RLE",
+ "RLE"
+ ],
+ [
+ "SNAPPY",
+ "SNAPPY",
+ "SNAPPY",
+ "SNAPPY"
+ ],
+ [
+ null,
+ null,
+ null,
+ null
+ ],
+ [
+ null,
+ null,
+ null,
+ null
+ ]
+ ]
+}
+```
+
+**Show latest timeseries**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show latest timeseries"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "timeseries",
+ "alias",
+ "database",
+ "dataType",
+ "encoding",
+ "compression",
+ "tags",
+ "attributes"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg28.s4",
+ "root.sg27.s4",
+ "root.sg28.s3",
+ "root.sg27.s3"
+ ],
+ [
+ null,
+ null,
+ null,
+ null
+ ],
+ [
+ "root.sg28",
+ "root.sg27",
+ "root.sg28",
+ "root.sg27"
+ ],
+ [
+ "BOOLEAN",
+ "BOOLEAN",
+ "INT32",
+ "INT32"
+ ],
+ [
+ "RLE",
+ "RLE",
+ "RLE",
+ "RLE"
+ ],
+ [
+ "SNAPPY",
+ "SNAPPY",
+ "SNAPPY",
+ "SNAPPY"
+ ],
+ [
+ null,
+ null,
+ null,
+ null
+ ],
+ [
+ null,
+ null,
+ null,
+ null
+ ]
+ ]
+}
+```
+
+**Count timeseries**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"count timeseries root.**"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "count"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ 4
+ ]
+ ]
+}
+```
+
+**Count nodes**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"count nodes root.** level=2"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "count"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ 4
+ ]
+ ]
+}
+```
+
+**Show devices**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show devices"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "devices",
+ "isAligned"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg27",
+ "root.sg28"
+ ],
+ [
+ "false",
+ "false"
+ ]
+ ]
+}
+```
+
+**Show devices with database**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show devices with database"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "devices",
+ "database",
+ "isAligned"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg27",
+ "root.sg28"
+ ],
+ [
+ "root.sg27",
+ "root.sg28"
+ ],
+ [
+ "false",
+ "false"
+ ]
+ ]
+}
+```
+
+**List user**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"list user"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "user"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root"
+ ]
+ ]
+}
+```
+
+**Aggregation**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select count(*) from root.sg27"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": [
+ "count(root.sg27.s3)",
+ "count(root.sg27.s4)"
+ ],
+ "columnNames": null,
+ "timestamps": [
+ 0
+ ],
+ "values": [
+ [
+ 1
+ ],
+ [
+ 2
+ ]
+ ]
+}
+```
+
+**Group by level**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select count(*) from root.** group by level = 1"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "count(root.sg27.*)",
+ "count(root.sg28.*)"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ 3
+ ],
+ [
+ 3
+ ]
+ ]
+}
+```
+
+**Group by**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select count(*) from root.sg27 group by([1635232143960,1635232153960),1s)"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": [
+ "count(root.sg27.s3)",
+ "count(root.sg27.s4)"
+ ],
+ "columnNames": null,
+ "timestamps": [
+ 1635232143960,
+ 1635232144960,
+ 1635232145960,
+ 1635232146960,
+ 1635232147960,
+ 1635232148960,
+ 1635232149960,
+ 1635232150960,
+ 1635232151960,
+ 1635232152960
+ ],
+ "values": [
+ [
+ 1,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0
+ ],
+ [
+ 1,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0
+ ]
+ ]
+}
+```
+
+**Last**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select last s3 from root.sg27"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "expressions": null,
+ "columnNames": [
+ "timeseries",
+ "value",
+ "dataType"
+ ],
+ "timestamps": [
+ 1635232143960
+ ],
+ "values": [
+ [
+ "root.sg27.s3"
+ ],
+ [
+ "11"
+ ],
+ [
+ "INT32"
+ ]
+ ]
+}
+```
+
+**Disable align**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select * from root.sg27 disable align"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "code": 407,
+ "message": "disable align clauses are not supported."
+}
+```
+
+**Align by device**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select count(s3) from root.sg27 align by device"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "code": 407,
+ "message": "align by device clauses are not supported."
+}
+```
+
+**Select into**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select s3, s4 into root.sg29.s1, root.sg29.s2 from root.sg27"}' http://127.0.0.1:18080/rest/v1/query
+```
+
+```json
+{
+ "code": 407,
+ "message": "select into clauses are not supported."
+}
+```
+
+### nonQuery
+
+Request method: `POST`
+
+Request header: `application/json`
+
+Request path: `http://ip:port/rest/v1/nonQuery`
+
+Parameter Description:
+
+|parameter name |parameter type |parameter describe|
+|:--- | :--- | :---|
+| sql | string | The SQL statement to execute |
+
+Example request:
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"CREATE DATABASE root.ln"}' http://127.0.0.1:18080/rest/v1/nonQuery
+```
+
+Response parameters:
+
+|parameter name |parameter type |parameter describe|
+|:--- | :--- | :---|
+| code | integer | status code |
+| message | string | message |
+
+Sample response:
+```json
+{
+ "code": 200,
+ "message": "SUCCESS_STATUS"
+}
+```
+
+
+
+### insertTablet
+
+Request method: `POST`
+
+Request header: `application/json`
+
+Request path: `http://ip:port/rest/v1/insertTablet`
+
+Parameter Description:
+
+| parameter name |parameter type |is required|parameter describe|
+|:---------------| :--- | :---| :---|
+| timestamps | array | yes | Time column |
+| measurements | array | yes | The name of the measuring point |
+| dataTypes | array | yes | The data type |
+| values | array | yes | Value columns, the values in each column can be `null` |
+| isAligned | boolean | yes | Whether to align the timeseries |
+| deviceId | string | yes | Device name |
+
+Example request:
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"timestamps":[1635232143960,1635232153960],"measurements":["s3","s4"],"dataTypes":["INT32","BOOLEAN"],"values":[[11,null],[false,true]],"isAligned":false,"deviceId":"root.sg27"}' http://127.0.0.1:18080/rest/v1/insertTablet
+```
+
+Response parameters:
+
+|parameter name |parameter type |parameter describe|
+|:--- | :--- | :---|
+| code | integer | status code |
+| message | string | message |
+
+Sample response:
+```json
+{
+ "code": 200,
+ "message": "SUCCESS_STATUS"
+}
+```
+
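+To write to an aligned device, the same request shape applies with `isAligned` set to `true`. The following is a hedged sketch; the device `root.sg30` and its measurements are hypothetical names chosen for illustration:
+
+```shell
+# Hypothetical aligned device root.sg30; isAligned must be true for aligned timeseries
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"timestamps":[1635232143960],"measurements":["s1","s2"],"dataTypes":["INT32","BOOLEAN"],"values":[[7],[true]],"isAligned":true,"deviceId":"root.sg30"}' http://127.0.0.1:18080/rest/v1/insertTablet
+```
+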
+## Configuration
+
+The configuration is located in `conf/iotdb-system.properties`.
+
+* Set `enable_rest_service` to `true` to enable the module, and to `false` to disable it. By default, this value is `false`.
+
+```properties
+enable_rest_service=true
+```
+
+* This parameter is valid only when `enable_rest_service=true`. Set `rest_service_port` to a number (1025 to 65535) to customize the REST service socket port. By default, the value is 18080.
+
+```properties
+rest_service_port=18080
+```
+
+* Set `enable_swagger` to `true` to display the REST service interface information through Swagger, and to `false` to hide it. By default, this value is `false`.
+
+```properties
+enable_swagger=false
+```
+
+* The maximum number of rows in the result set that can be returned by a query. When the number of rows in the returned result set exceeds the limit, the status code `411` is returned.
+
+```properties
+rest_query_default_row_size_limit=10000
+```
+
+* Expiration time of the cached user login information (used to speed up user authentication; in seconds, 8 hours by default)
+
+```properties
+cache_expire=28800
+```
+
+
+* Maximum number of users stored in the cache (default: 100)
+
+```properties
+cache_max_num=100
+```
+
+* Initial cache size (default: 10)
+
+```properties
+cache_init_num=10
+```
+
+* Whether the REST service enables SSL. Set `enable_https` to `true` to enable it, and to `false` to disable it. By default, this value is `false`.
+
+```properties
+enable_https=false
+```
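+
+Once HTTPS is enabled and a keystore is configured, clients must switch to the `https` scheme. As a minimal sketch (assuming a self-signed certificate, which `-k` tells curl to accept):
+
+```shell
+# -k skips certificate verification; only use with self-signed test certificates
+curl -k https://127.0.0.1:18080/ping
+```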
+
+* keyStore location path (optional)
+
+```properties
+key_store_path=
+```
+
+
+* keyStore password (optional)
+
+```properties
+key_store_pwd=
+```
+
+
+* trustStore location path (optional)
+
+```properties
+trust_store_path=
+```
+
+* trustStore password (optional)
+
+```properties
+trust_store_pwd=
+```
+
+
+* SSL timeout period, in seconds
+
+```properties
+idle_timeout=5000
+```
diff --git a/src/UserGuide/V2.0.1/Tree/API/RestServiceV2.md b/src/UserGuide/V2.0.1/Tree/API/RestServiceV2.md
new file mode 100644
index 000000000..b4c733fb6
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/API/RestServiceV2.md
@@ -0,0 +1,970 @@
+
+
+# RESTful API V2
+IoTDB's RESTful services can be used for query, write, and management operations, using the OpenAPI standard to define interfaces and generate frameworks.
+
+## Enable RESTful Services
+
+RESTful services are disabled by default.
+
+* Developer
+
+    Find the `IoTDBRestServiceConfig` class under `org.apache.iotdb.db.conf.rest` in the server module, and modify `enableRestService=true`.
+
+* User
+
+ Find the `conf/iotdb-system.properties` file under the IoTDB installation directory and set `enable_rest_service` to `true` to enable the module.
+
+ ```properties
+ enable_rest_service=true
+ ```
+
+## Authentication
+Except for the liveness probe API `/ping`, the RESTful services use basic authentication. Each URL request needs to carry `'Authorization': 'Basic ' + base64.encode(username + ':' + password)`.
+
+The username used in the following examples is `root`, and the password is `root`.
+
+And the authorization header is
+
+```
+Authorization: Basic cm9vdDpyb290
+```
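+
+The token is just the Base64 encoding of `username:password`; as a sketch, it can be reproduced with standard shell tools:
+
+```shell
+# Prints cm9vdDpyb290, the token used throughout this document
+echo -n 'root:root' | base64
+```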
+
+- If a user authenticates with an incorrect username or password, the following error is returned:
+
+ HTTP Status Code:`401`
+
+ HTTP response body:
+ ```json
+ {
+ "code": 600,
+ "message": "WRONG_LOGIN_PASSWORD_ERROR"
+ }
+ ```
+
+- If the `Authorization` header is missing, the following error is returned:
+
+ HTTP Status Code:`401`
+
+ HTTP response body:
+ ```json
+ {
+ "code": 603,
+ "message": "UNINITIALIZED_AUTH_ERROR"
+ }
+ ```
+
+## Interface
+
+### ping
+
+The `/ping` API can be used for service liveness probing.
+
+Request method: `GET`
+
+Request path: `http://ip:port/ping`
+
+No authentication is required for this request.
+
+Example request:
+
+```shell
+$ curl http://127.0.0.1:18080/ping
+```
+
+Response status codes:
+
+- `200`: The service is alive.
+- `503`: The service cannot accept any requests now.
+
+Response parameters:
+
+|parameter name |parameter type |parameter describe|
+|:--- | :--- | :---|
+|code | integer | status code |
+| message | string | message |
+
+Sample response:
+
+- With HTTP status code `200`:
+
+ ```json
+ {
+ "code": 200,
+ "message": "SUCCESS_STATUS"
+ }
+ ```
+
+- With HTTP status code `503`:
+
+ ```json
+ {
+ "code": 500,
+ "message": "thrift service is unavailable"
+ }
+ ```
+
+> `/ping` can be accessed without authorization.
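+
+For scripted health checks, only the HTTP status code is needed; a small sketch:
+
+```shell
+# Prints 200 when the service is alive, 503 when it cannot accept requests
+curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:18080/ping
+```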
+
+### query
+
+The query interface can be used to handle data queries and metadata queries.
+
+Request method: `POST`
+
+Request header: `application/json`
+
+Request path: `http://ip:port/rest/v2/query`
+
+Parameter Description:
+
+| parameter name | parameter type | required | parameter description |
+|----------------| -------------- | -------- | ------------------------------------------------------------ |
+| sql            | string         | yes      | The SQL statement to execute |
+| row_limit      | integer        | no       | The maximum number of rows in the result set that can be returned by a query. If this parameter is not set, the `rest_query_default_row_size_limit` of the configuration file will be used as the default value. When the number of rows in the returned result set exceeds the limit, the status code `411` will be returned. See the example after this table. |
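+
+For example, `row_limit` can be passed alongside `sql`; a sketch reusing the sample data of this document:
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select s3 from root.sg27","row_limit":1}' http://127.0.0.1:18080/rest/v2/query
+```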
+
+Response parameters:
+
+| parameter name | parameter type | parameter description |
+|----------------| -------------- | ------------------------------------------------------------ |
+| expressions | array | Array of result set column names for data query, `null` for metadata query |
+| column_names | array | Array of column names for metadata query result set, `null` for data query |
+| timestamps | array | Timestamp column, `null` for metadata query |
+| values | array | A two-dimensional array, the first dimension has the same length as the result set column name array, and the second dimension array represents a column of the result set |
+
+**Examples:**
+
+Tip: Statements like `select * from root.xx.**` are not recommended because those statements may cause OOM.
+
+**Expression query**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select s3, s4, s3 + 1 from root.sg27 limit 2"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": [
+ "root.sg27.s3",
+ "root.sg27.s4",
+ "root.sg27.s3 + 1"
+ ],
+ "column_names": null,
+ "timestamps": [
+ 1635232143960,
+ 1635232153960
+ ],
+ "values": [
+ [
+ 11,
+ null
+ ],
+ [
+ false,
+ true
+ ],
+ [
+ 12.0,
+ null
+ ]
+ ]
+}
+```
+
+**Show child paths**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show child paths root"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "child paths"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg27",
+ "root.sg28"
+ ]
+ ]
+}
+```
+
+**Show child nodes**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show child nodes root"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "child nodes"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "sg27",
+ "sg28"
+ ]
+ ]
+}
+```
+
+**Show all ttl**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show all ttl"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "database",
+ "ttl"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg27",
+ "root.sg28"
+ ],
+ [
+ null,
+ null
+ ]
+ ]
+}
+```
+
+**Show ttl**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show ttl on root.sg27"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "database",
+ "ttl"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg27"
+ ],
+ [
+ null
+ ]
+ ]
+}
+```
+
+**Show functions**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show functions"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "function name",
+ "function type",
+ "class name (UDF)"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "ABS",
+ "ACOS",
+ "ASIN",
+ ...
+ ],
+ [
+ "built-in UDTF",
+ "built-in UDTF",
+ "built-in UDTF",
+ ...
+ ],
+ [
+ "org.apache.iotdb.db.query.udf.builtin.UDTFAbs",
+ "org.apache.iotdb.db.query.udf.builtin.UDTFAcos",
+ "org.apache.iotdb.db.query.udf.builtin.UDTFAsin",
+ ...
+ ]
+ ]
+}
+```
+
+**Show timeseries**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show timeseries"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "timeseries",
+ "alias",
+ "database",
+ "dataType",
+ "encoding",
+ "compression",
+ "tags",
+ "attributes"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg27.s3",
+ "root.sg27.s4",
+ "root.sg28.s3",
+ "root.sg28.s4"
+ ],
+ [
+ null,
+ null,
+ null,
+ null
+ ],
+ [
+ "root.sg27",
+ "root.sg27",
+ "root.sg28",
+ "root.sg28"
+ ],
+ [
+ "INT32",
+ "BOOLEAN",
+ "INT32",
+ "BOOLEAN"
+ ],
+ [
+ "RLE",
+ "RLE",
+ "RLE",
+ "RLE"
+ ],
+ [
+ "SNAPPY",
+ "SNAPPY",
+ "SNAPPY",
+ "SNAPPY"
+ ],
+ [
+ null,
+ null,
+ null,
+ null
+ ],
+ [
+ null,
+ null,
+ null,
+ null
+ ]
+ ]
+}
+```
+
+**Show latest timeseries**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show latest timeseries"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "timeseries",
+ "alias",
+ "database",
+ "dataType",
+ "encoding",
+ "compression",
+ "tags",
+ "attributes"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg28.s4",
+ "root.sg27.s4",
+ "root.sg28.s3",
+ "root.sg27.s3"
+ ],
+ [
+ null,
+ null,
+ null,
+ null
+ ],
+ [
+ "root.sg28",
+ "root.sg27",
+ "root.sg28",
+ "root.sg27"
+ ],
+ [
+ "BOOLEAN",
+ "BOOLEAN",
+ "INT32",
+ "INT32"
+ ],
+ [
+ "RLE",
+ "RLE",
+ "RLE",
+ "RLE"
+ ],
+ [
+ "SNAPPY",
+ "SNAPPY",
+ "SNAPPY",
+ "SNAPPY"
+ ],
+ [
+ null,
+ null,
+ null,
+ null
+ ],
+ [
+ null,
+ null,
+ null,
+ null
+ ]
+ ]
+}
+```
+
+**Count timeseries**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"count timeseries root.**"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "count"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ 4
+ ]
+ ]
+}
+```
+
+**Count nodes**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"count nodes root.** level=2"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "count"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ 4
+ ]
+ ]
+}
+```
+
+**Show devices**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show devices"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "devices",
+ "isAligned"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg27",
+ "root.sg28"
+ ],
+ [
+ "false",
+ "false"
+ ]
+ ]
+}
+```
+
+**Show devices with database**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"show devices with database"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "devices",
+ "database",
+ "isAligned"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root.sg27",
+ "root.sg28"
+ ],
+ [
+ "root.sg27",
+ "root.sg28"
+ ],
+ [
+ "false",
+ "false"
+ ]
+ ]
+}
+```
+
+**List user**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"list user"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "user"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ "root"
+ ]
+ ]
+}
+```
+
+**Aggregation**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select count(*) from root.sg27"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": [
+ "count(root.sg27.s3)",
+ "count(root.sg27.s4)"
+ ],
+ "column_names": null,
+ "timestamps": [
+ 0
+ ],
+ "values": [
+ [
+ 1
+ ],
+ [
+ 2
+ ]
+ ]
+}
+```
+
+**Group by level**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select count(*) from root.** group by level = 1"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "count(root.sg27.*)",
+ "count(root.sg28.*)"
+ ],
+ "timestamps": null,
+ "values": [
+ [
+ 3
+ ],
+ [
+ 3
+ ]
+ ]
+}
+```
+
+**Group by**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select count(*) from root.sg27 group by([1635232143960,1635232153960),1s)"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": [
+ "count(root.sg27.s3)",
+ "count(root.sg27.s4)"
+ ],
+ "column_names": null,
+ "timestamps": [
+ 1635232143960,
+ 1635232144960,
+ 1635232145960,
+ 1635232146960,
+ 1635232147960,
+ 1635232148960,
+ 1635232149960,
+ 1635232150960,
+ 1635232151960,
+ 1635232152960
+ ],
+ "values": [
+ [
+ 1,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0
+ ],
+ [
+ 1,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0
+ ]
+ ]
+}
+```
+
+**Last**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select last s3 from root.sg27"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "expressions": null,
+ "column_names": [
+ "timeseries",
+ "value",
+ "dataType"
+ ],
+ "timestamps": [
+ 1635232143960
+ ],
+ "values": [
+ [
+ "root.sg27.s3"
+ ],
+ [
+ "11"
+ ],
+ [
+ "INT32"
+ ]
+ ]
+}
+```
+
+**Disable align**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select * from root.sg27 disable align"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "code": 407,
+ "message": "disable align clauses are not supported."
+}
+```
+
+**Align by device**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select count(s3) from root.sg27 align by device"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "code": 407,
+ "message": "align by device clauses are not supported."
+}
+```
+
+**Select into**
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select s3, s4 into root.sg29.s1, root.sg29.s2 from root.sg27"}' http://127.0.0.1:18080/rest/v2/query
+```
+
+```json
+{
+ "code": 407,
+ "message": "select into clauses are not supported."
+}
+```
+
+### nonQuery
+
+Request method: `POST`
+
+Request header: `application/json`
+
+Request path: `http://ip:port/rest/v2/nonQuery`
+
+Parameter Description:
+
+|parameter name |parameter type |parameter describe|
+|:--- | :--- | :---|
+| sql | string | The SQL statement to execute |
+
+Example request:
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"CREATE DATABASE root.ln"}' http://127.0.0.1:18080/rest/v2/nonQuery
+```
+
+Response parameters:
+
+|parameter name |parameter type |parameter describe|
+|:--- | :--- | :---|
+| code | integer | status code |
+| message | string | message |
+
+Sample response:
+```json
+{
+ "code": 200,
+ "message": "SUCCESS_STATUS"
+}
+```
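+
+The endpoint accepts any non-query statement; for instance, a timeseries can be created the same way (a sketch assuming the path does not exist yet):
+
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"CREATE TIMESERIES root.ln.wf02.wt02.status WITH DATATYPE=BOOLEAN, ENCODING=PLAIN"}' http://127.0.0.1:18080/rest/v2/nonQuery
+```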
+
+
+
+### insertTablet
+
+Request method: `POST`
+
+Request header: `application/json`
+
+Request path: `http://ip:port/rest/v2/insertTablet`
+
+Parameter Description:
+
+| parameter name |parameter type |is required|parameter describe|
+|:---------------| :--- | :---| :---|
+| timestamps | array | yes | Time column |
+| measurements | array | yes | The name of the measuring point |
+| data_types | array | yes | The data type |
+| values | array | yes | Value columns, the values in each column can be `null` |
+| is_aligned | boolean | yes | Whether to align the timeseries |
+| device | string | yes | Device name |
+
+Example request:
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"timestamps":[1635232143960,1635232153960],"measurements":["s3","s4"],"data_types":["INT32","BOOLEAN"],"values":[[11,null],[false,true]],"is_aligned":false,"device":"root.sg27"}' http://127.0.0.1:18080/rest/v2/insertTablet
+```
+
+Response parameters:
+
+|parameter name |parameter type |parameter describe|
+|:--- | :--- | :---|
+| code | integer | status code |
+| message | string | message |
+
+Sample response:
+```json
+{
+ "code": 200,
+ "message": "SUCCESS_STATUS"
+}
+```
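+
+An aligned device can be written by setting `is_aligned` to `true`. The following is a hedged sketch with a hypothetical device `root.sg30`:
+
+```shell
+# Hypothetical aligned device root.sg30; is_aligned must be true for aligned timeseries
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"timestamps":[1635232143960],"measurements":["s1","s2"],"data_types":["INT32","BOOLEAN"],"values":[[7],[true]],"is_aligned":true,"device":"root.sg30"}' http://127.0.0.1:18080/rest/v2/insertTablet
+```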
+
+### insertRecords
+
+Request method: `POST`
+
+Request header: `application/json`
+
+Request path: `http://ip:port/rest/v2/insertRecords`
+
+Parameter Description:
+
+| parameter name |parameter type |is required|parameter describe|
+|:------------------| :--- | :---| :---|
+| timestamps | array | yes | Time column |
+| measurements_list | array | yes | The name of the measuring point |
+| data_types_list | array | yes | The data type |
+| values_list | array | yes | Value columns, the values in each column can be `null` |
+| devices           | array | yes | Device names, one per record |
+| is_aligned | boolean | yes | Whether to align the timeseries |
+
+Example request:
+```shell
+curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"timestamps":[1635232113960,1635232151960,1635232143960,1635232143960],"measurements_list":[["s33","s44"],["s55","s66"],["s77","s88"],["s771","s881"]],"data_types_list":[["INT32","INT64"],["FLOAT","DOUBLE"],["FLOAT","DOUBLE"],["BOOLEAN","TEXT"]],"values_list":[[1,11],[2.1,2],[4,6],[false,"cccccc"]],"is_aligned":false,"devices":["root.s1","root.s1","root.s1","root.s3"]}' http://127.0.0.1:18080/rest/v2/insertRecords
+```
+
+Response parameters:
+
+|parameter name |parameter type |parameter describe|
+|:--- | :--- | :---|
+| code | integer | status code |
+| message | string | message |
+
+Sample response:
+```json
+{
+ "code": 200,
+ "message": "SUCCESS_STATUS"
+}
+```
+
+
+## Configuration
+
+The configuration is located in `conf/iotdb-system.properties`.
+
+* Set `enable_rest_service` to `true` to enable the module, and to `false` to disable it. By default, this value is `false`.
+
+```properties
+enable_rest_service=true
+```
+
+* This parameter is valid only when `enable_rest_service=true`. Set `rest_service_port` to a number (1025 to 65535) to customize the REST service socket port. By default, the value is 18080.
+
+```properties
+rest_service_port=18080
+```
+
+* Set `enable_swagger` to `true` to display the REST service interface information through Swagger, and to `false` to hide it. By default, this value is `false`.
+
+```properties
+enable_swagger=false
+```
+
+* The maximum number of rows in the result set that can be returned by a query. When the number of rows in the returned result set exceeds the limit, the status code `411` is returned.
+
+```properties
+rest_query_default_row_size_limit=10000
+```
+
+* Expiration time of the cached user login information (used to speed up user authentication; in seconds, 8 hours by default)
+
+```properties
+cache_expire=28800
+```
+
+
+* Maximum number of users stored in the cache (default: 100)
+
+```properties
+cache_max_num=100
+```
+
+* Initial cache size (default: 10)
+
+```properties
+cache_init_num=10
+```
+
+* Whether the REST service enables SSL. Set `enable_https` to `true` to enable it, and to `false` to disable it. By default, this value is `false`.
+
+```properties
+enable_https=false
+```
+
+* keyStore location path (optional)
+
+```properties
+key_store_path=
+```
+
+
+* keyStore password (optional)
+
+```properties
+key_store_pwd=
+```
+
+
+* trustStore location path (optional)
+
+```properties
+trust_store_path=
+```
+
+* trustStore password (optional)
+
+```properties
+trust_store_pwd=
+```
+
+
+* SSL timeout period, in seconds
+
+```properties
+idle_timeout=5000
+```
diff --git a/src/UserGuide/V2.0.1/Tree/Basic-Concept/Data-Model-and-Terminology.md b/src/UserGuide/V2.0.1/Tree/Basic-Concept/Data-Model-and-Terminology.md
new file mode 100644
index 000000000..015a4035a
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/Basic-Concept/Data-Model-and-Terminology.md
@@ -0,0 +1,150 @@
+
+
+# Data Model
+
+A wind power IoT scenario is taken as an example to illustrate how to create a correct data model in IoTDB.
+
+According to the enterprise organization structure and equipment entity hierarchy, the scenario can be expressed as an attribute hierarchy, as shown below. The hierarchy from top to bottom is: power group layer - power plant layer - entity layer - measurement layer. ROOT is the root node, and each node of the measurement layer is a leaf node. When using IoTDB, the attributes on the path from the ROOT node to a leaf node are connected with ".", forming the name of a timeseries in IoTDB. For example, the left-most path in Figure 2.1 generates a timeseries named `root.ln.wf01.wt01.status`.
+
+
+
+Here are the basic concepts of the model involved in IoTDB.
+
+## Measurement, Entity, Database, Path
+
+### Measurement (Also called field)
+
+A measurement is information measured by detection equipment in an actual scenario; the equipment transforms the sensed information into an electrical signal or another desired form and sends it to IoTDB. In IoTDB, all data and paths stored are organized in units of measurements.
+
+### Entity (Also called device)
+
+**An entity** is a piece of equipment with measurements in real scenarios. In IoTDB, all measurements should have their corresponding entities. Entities do not need to be created manually; by default, the entity is the second-to-last layer of a path.
+
+### Database
+
+**A group of entities.** Users can create any prefix path as a database. Provided that there are four timeseries `root.ln.wf01.wt01.status`, `root.ln.wf01.wt01.temperature`, `root.ln.wf02.wt02.hardware`, `root.ln.wf02.wt02.status`, the two devices `wf01` and `wf02` under the path `root.ln` may belong to the same owner or the same manufacturer, so they are closely related. At this point, the prefix path `root.ln` can be designated as a database, which will enable IoTDB to store all devices under it in the same folder. Newly added devices under `root.ln` will also belong to this database.
+
+In general, it is recommended to create 1 database.
+
+> Note1: A full path (`root.ln.wf01.wt01.status` as in the above example) is not allowed to be set as a database.
+>
+> Note2: The prefix of a timeseries must belong to a database. Before creating a timeseries, users must set which database the series belongs to. Only timeseries whose database is set can be persisted to disk.
+>
+> Note3: The number of characters in a database path, including `root.`, shall not exceed 64.
+
+Once a prefix path is set as a database, the database settings cannot be changed.
+
+After a database is set, the ancestral layers, children and descendant layers of the corresponding prefix path are not allowed to be set up again (for example, after `root.ln` is set as the database, the root layer and `root.ln.wf01` are not allowed to be created as database).
+
+The layer name of a database can only consist of letters, numbers, and underscores, like `root.storagegroup_1`.
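+
+A minimal sketch of designating the database from the example above, using the CLI:
+
+```shell
+IoTDB> create database root.ln
+```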
+
+### Path
+
+A `path` is an expression that conforms to the following constraints:
+
+```sql
+path
+ : nodeName ('.' nodeName)*
+ ;
+
+nodeName
+ : wildcard? identifier wildcard?
+ | wildcard
+ ;
+
+wildcard
+ : '*'
+ | '**'
+ ;
+```
+
+We call each part of a path separated by `'.'` a `node` or `nodeName`. For example: `root.a.b.c` is a path with 4 nodes.
+
+The following are the constraints on the `nodeName`:
+
+* `root` is a reserved character, and it is only allowed to appear at the beginning layer of the time series mentioned below. If `root` appears in other layers, it cannot be parsed and an error will be reported.
+* Except for the beginning layer (`root`) of the time series, the characters supported in other layers are as follows:
+
+ * [ 0-9 a-z A-Z _ ] (letters, numbers, underscore)
+ * ['\u2E80'..'\u9FFF'] (Chinese characters)
+* In particular, if the system is deployed on a Windows machine, the database layer name will be case-insensitive. For example, creating both `root.ln` and `root.LN` at the same time is not allowed.
+
+### Special characters (Reverse quotation marks)
+
+If you need to use special characters in the path node name, you can use reverse quotation marks to reference the path node name. For specific usage, please refer to [Reverse Quotation Marks](../Reference/Syntax-Rule.md#reverse-quotation-marks).
+
+### Path Pattern
+
+In order to make it easier and faster to express multiple timeseries paths, IoTDB provides users with the path pattern. Users can construct a path pattern by using wildcard `*` and `**`. Wildcard can appear in any node of the path.
+
+`*` represents one node. For example, `root.vehicle.*.sensor1` represents a 4-node path which is prefixed with `root.vehicle` and suffixed with `sensor1`.
+
+`**` represents (`*`)+, which is one or more nodes of `*`. For example, `root.vehicle.device1.**` represents all paths prefixed by `root.vehicle.device1` with nodes num greater than or equal to 4, like `root.vehicle.device1.*`, `root.vehicle.device1.*.*`, `root.vehicle.device1.*.*.*`, etc; `root.vehicle.**.sensor1` represents a path which is prefixed with `root.vehicle` and suffixed with `sensor1` and has at least 4 nodes.
+
+> Note1: Wildcard `*` and `**` cannot be placed at the beginning of the path.
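+
+A couple of hedged examples against the wind power model above (assuming its timeseries exist):
+
+```shell
+# `*` matches exactly one node: the status of every entity directly under wf01
+IoTDB> select status from root.ln.wf01.*
+# `**` matches one or more nodes: every status measurement under root.ln
+IoTDB> show timeseries root.ln.**.status
+```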
+
+
+## Timeseries
+
+### Timestamp
+
+The timestamp is the time point at which data is produced. It includes absolute timestamps and relative timestamps. For detailed description, please go to [Data Type doc](./Data-Type.md).
+
+### Data point
+
+**A "time-value" pair**.
+
+### Timeseries
+
+**The record of a measurement of an entity on the time axis.** Timeseries is a series of data points.
+
+A measurement of an entity corresponds to a timeseries.
+
+Also called meter, timeline, tag, or parameter in real-time databases.
+
+The number of measurements managed by IoTDB can reach billions.
+
+For example, if entity wt01 in power plant wf01 of power group ln has a measurement named status, its timeseries can be expressed as: `root.ln.wf01.wt01.status`.
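+
+A sketch of how this timeseries could be declared (the data type and encoding are illustrative choices):
+
+```shell
+IoTDB> create timeseries root.ln.wf01.wt01.status with datatype=BOOLEAN, encoding=PLAIN
+```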
+
+### Aligned timeseries
+
+In practical applications, multiple measurements of an entity are often sampled simultaneously, forming multiple timeseries that share the same time column. Such a group of timeseries can be modeled as aligned timeseries in Apache IoTDB.
+
+The timestamp columns of a group of aligned timeseries need to be stored only once in memory and disk when inserting data, instead of once per timeseries.
+
+It would be best if you created a group of aligned timeseries at the same time.
+
+You cannot create non-aligned timeseries under the entity to which the aligned timeseries belong, nor can you create aligned timeseries under the entity to which the non-aligned timeseries belong.
+
+When querying, you can query each timeseries separately.
+
+When inserting data, it is allowed to insert null value in the aligned timeseries.
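+
+A minimal sketch of creating a group of aligned timeseries in one statement (the entity `root.ln.wf01.GPS` is a hypothetical example):
+
+```shell
+IoTDB> create aligned timeseries root.ln.wf01.GPS(latitude FLOAT encoding=PLAIN, longitude FLOAT encoding=PLAIN)
+```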
+
+
+
+In the following chapters of data definition language, data operation language and Java Native Interface, various operations related to aligned timeseries will be introduced one by one.
+
+## Schema Template
+
+In the actual scenario, many entities collect the same measurements, that is, they have the same measurement names and types. A **schema template** can be declared to define the collectable measurement set. A schema template helps save memory by enabling schema sharing. For detailed description, please refer to [Schema Template doc](../User-Manual/Operate-Metadata_timecho.md#Device-Template).
+
+In the following chapters on data definition language, data operation language, and the Java Native Interface, operations related to schema templates will be introduced one by one.
diff --git a/src/UserGuide/V2.0.1/Tree/Basic-Concept/Navigating_Time_Series_Data.md b/src/UserGuide/V2.0.1/Tree/Basic-Concept/Navigating_Time_Series_Data.md
new file mode 100644
index 000000000..20aaef327
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/Basic-Concept/Navigating_Time_Series_Data.md
@@ -0,0 +1,64 @@
+
+# Entering Time Series Data
+
+## What Is Time Series Data?
+
+In today's era of the Internet of Things, IoT and industrial scenarios are undergoing digital transformation. People collect the various states of devices by installing sensors on them: a motor collects voltage and current; a wind turbine collects blade speed, angular velocity, and power generation; a vehicle collects latitude, longitude, speed, and fuel consumption; a bridge collects vibration frequency, deflection, displacement, and so on. Sensor data collection has penetrated various industries.
+
+![](https://alioss.timecho.com/docs/img/20240505154735.png)
+
+Generally speaking, we refer to each collection point as a measurement point (also known as a physical quantity, time series, timeline, signal, indicator, measurement value, etc.). Each measurement point continuously collects new data over time, forming a time series. In tabular form, each time series is a table with two columns: time and value; in graphical form, each time series is a trend chart over time, which can also be vividly described as the device's electrocardiogram.
+
+![](https://alioss.timecho.com/docs/img/20240505154843.png)
+
+The massive time series data generated by sensors is the foundation of digital transformation in various industries, so our modeling of time series data mainly focuses on equipment and sensors.
+
+## Key Concepts of Time Series Data
+The main concepts involved in time series data, from bottom to top, are: data points, measurement points, and devices.
+
+![](https://alioss.timecho.com/docs/img/20240505154513.png)
+
+### Data Point
+
+- Definition: Consists of a timestamp and a value, where the timestamp is of type long and the value can be of various types such as BOOLEAN, FLOAT, INT32, etc.
+- Example: A row of a time series in the form of a table in the above figure, or a point of a time series in the form of a graph, is a data point.
+
+![](https://alioss.timecho.com/docs/img/20240505154432.png)
+
+### Measurement Points
+
+- Definition: A measurement point is a time series formed by multiple data points arranged in increasing order of timestamp. A measurement point usually represents a collection point and can periodically collect the physical quantities of its environment.
+- Also known as: physical quantity, time series, timeline, signal, indicator, measurement value, etc.
+- Example:
+ - Electricity scenario: current, voltage
+ - Energy scenario: wind speed, rotational speed
+  - Vehicle networking scenarios: fuel consumption, vehicle speed, longitude, latitude
+ - Factory scenario: temperature, humidity
+
+### Device
+
+- Definition: A device corresponds to a physical device in an actual scenario; it is usually a collection of measurement points, identified by one or more labels.
+- Example:
+ - Vehicle networking scenario: Vehicles identified by vehicle identification code (VIN)
+ - Factory scenario: robotic arm, unique ID identification generated by IoT platform
+  - Energy scenario: wind turbines, identified by region, station, line, model, instance, etc.
+  - Monitoring scenario: CPU, identified by machine room, rack, hostname, device type, etc.
\ No newline at end of file
diff --git a/src/UserGuide/V2.0.1/Tree/Basic-Concept/Operate-Metadata_apache.md b/src/UserGuide/V2.0.1/Tree/Basic-Concept/Operate-Metadata_apache.md
new file mode 100644
index 000000000..58c01a886
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/Basic-Concept/Operate-Metadata_apache.md
@@ -0,0 +1,1253 @@
+
+
+# Timeseries Management
+
+## Database Management
+
+### Create Database
+
+According to the storage model we can set up the corresponding database. Two SQL statements are supported for creating databases, as follows:
+
+```
+IoTDB > create database root.ln
+IoTDB > create database root.sgcc
+```
+
+We can thus create two databases using the above two SQL statements.
+
+It is worth noting that creating a single database is recommended.
+
+When the path itself or the parent/child layer of the path is already created as database, the path is then not allowed to be created as database. For example, it is not feasible to create `root.ln.wf01` as database when two databases `root.ln` and `root.sgcc` exist. The system gives the corresponding error prompt as shown below:
+
+```
+IoTDB> CREATE DATABASE root.ln.wf01
+Msg: 300: root.ln has already been created as database.
+IoTDB> create database root.ln.wf01
+Msg: 300: root.ln has already been created as database.
+```
+
+The layer name of a database can only contain Chinese or English characters, numbers, underscores, dots, and backticks. If you want a name that is purely numeric or that contains backticks or dots, you need to enclose the database name in backticks (` `` `). Within backticks, two backticks represent one, i.e. ` ```` ` represents `` ` ``.
+
+Besides, if deployed on a Windows system, the layer name is case-insensitive, which means it is not allowed to create databases `root.ln` and `root.LN` at the same time.
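+
+For example, the backtick rule above allows names that would otherwise be illegal (the names here are hypothetical):
+
+```shell
+IoTDB> create database root.`110`
+IoTDB> create database root.`sg``01`
+```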
+
+### Show Databases
+
+After creating the database, we can use the [SHOW DATABASES](../SQL-Manual/SQL-Manual.md) statement and [SHOW DATABASES \<PathPattern\>](../SQL-Manual/SQL-Manual.md) to view the databases. The SQL statements are as follows:
+
+```
+IoTDB> SHOW DATABASES
+IoTDB> SHOW DATABASES root.**
+```
+
+The result is as follows:
+
+```
++-------------+----+-------------------------+-----------------------+-----------------------+
+|database| ttl|schema_replication_factor|data_replication_factor|time_partition_interval|
++-------------+----+-------------------------+-----------------------+-----------------------+
+| root.sgcc|null| 2| 2| 604800|
+| root.ln|null| 2| 2| 604800|
++-------------+----+-------------------------+-----------------------+-----------------------+
+Total line number = 2
+It costs 0.060s
+```
+
+### Delete Database
+
+Users can use the `DELETE DATABASE <PathPattern>` statement to delete all databases matching the pathPattern. Please note that the data in the databases will also be deleted.
+
+```
+IoTDB > DELETE DATABASE root.ln
+IoTDB > DELETE DATABASE root.sgcc
+// delete all data, all timeseries and all databases
+IoTDB > DELETE DATABASE root.**
+```
+
+### Count Databases
+
+Users can use the `COUNT DATABASES <PathPattern>` statement to count the number of databases. It is allowed to specify `PathPattern` to count the number of databases matching the `PathPattern`.
+
+SQL statement is as follows:
+
+```
+IoTDB> count databases
+IoTDB> count databases root.*
+IoTDB> count databases root.sgcc.*
+IoTDB> count databases root.sgcc
+```
+
+The result is as follows:
+
+```
++-------------+
+|        count|
++-------------+
+| 3|
++-------------+
+Total line number = 1
+It costs 0.003s
+
++-------------+
+|        count|
++-------------+
+| 3|
++-------------+
+Total line number = 1
+It costs 0.002s
+
++-------------+
+|        count|
++-------------+
+| 0|
++-------------+
+Total line number = 1
+It costs 0.002s
+
++-------------+
+|        count|
++-------------+
+| 1|
++-------------+
+Total line number = 1
+It costs 0.002s
+```
+
+### Setting up heterogeneous databases (Advanced operations)
+
+Under the premise of being familiar with IoTDB metadata modeling,
+users can set up heterogeneous databases in IoTDB to cope with different production needs.
+
+Currently, the following database heterogeneous parameters are supported:
+
+| Parameter | Type | Description |
+| ------------------------- | ------- | --------------------------------------------- |
+| TTL | Long | TTL of the Database |
+| SCHEMA_REPLICATION_FACTOR | Integer | The schema replication number of the Database |
+| DATA_REPLICATION_FACTOR | Integer | The data replication number of the Database |
+| SCHEMA_REGION_GROUP_NUM | Integer | The SchemaRegionGroup number of the Database |
+| DATA_REGION_GROUP_NUM | Integer | The DataRegionGroup number of the Database |
+
+Note the following when configuring heterogeneous parameters:
+
++ TTL and TIME_PARTITION_INTERVAL must be positive integers.
++ SCHEMA_REPLICATION_FACTOR and DATA_REPLICATION_FACTOR must be smaller than or equal to the number of deployed DataNodes.
++ The function of SCHEMA_REGION_GROUP_NUM and DATA_REGION_GROUP_NUM are related to the parameter `schema_region_group_extension_policy` and `data_region_group_extension_policy` in iotdb-system.properties configuration file. Take DATA_REGION_GROUP_NUM as an example:
+ If `data_region_group_extension_policy=CUSTOM` is set, DATA_REGION_GROUP_NUM serves as the number of DataRegionGroups owned by the Database.
+ If `data_region_group_extension_policy=AUTO`, DATA_REGION_GROUP_NUM is used as the lower bound of the DataRegionGroup quota owned by the Database. That is, when the Database starts writing data, it will have at least this number of DataRegionGroups.
+
+Users can set any heterogeneous parameters when creating a Database, or adjust some heterogeneous parameters during a stand-alone/distributed IoTDB run.
+
+#### Set heterogeneous parameters when creating a Database
+
+The user can set any of the above heterogeneous parameters when creating a Database. The SQL statement is as follows:
+
+```
+CREATE DATABASE prefixPath (WITH databaseAttributeClause (COMMA? databaseAttributeClause)*)?
+```
+
+For example:
+
+```
+CREATE DATABASE root.db WITH SCHEMA_REPLICATION_FACTOR=1, DATA_REPLICATION_FACTOR=3, SCHEMA_REGION_GROUP_NUM=1, DATA_REGION_GROUP_NUM=2;
+```
+
+#### Adjust heterogeneous parameters at run time
+
+Users can adjust some heterogeneous parameters during the IoTDB runtime, as shown in the following SQL statement:
+
+```
+ALTER DATABASE prefixPath WITH databaseAttributeClause (COMMA? databaseAttributeClause)*
+```
+
+For example:
+
+```
+ALTER DATABASE root.db WITH SCHEMA_REGION_GROUP_NUM=1, DATA_REGION_GROUP_NUM=2;
+```
+
+Note that only the following heterogeneous parameters can be adjusted at runtime:
+
++ SCHEMA_REGION_GROUP_NUM
++ DATA_REGION_GROUP_NUM
+
+#### Show heterogeneous databases
+
+The user can query the specific heterogeneous configuration of each Database, and the SQL statement is as follows:
+
+```
+SHOW DATABASES DETAILS prefixPath?
+```
+
+For example:
+
+```
+IoTDB> SHOW DATABASES DETAILS
++--------+--------+-----------------------+---------------------+---------------------+--------------------+-----------------------+-----------------------+------------------+---------------------+---------------------+
+|Database| TTL|SchemaReplicationFactor|DataReplicationFactor|TimePartitionInterval|SchemaRegionGroupNum|MinSchemaRegionGroupNum|MaxSchemaRegionGroupNum|DataRegionGroupNum|MinDataRegionGroupNum|MaxDataRegionGroupNum|
++--------+--------+-----------------------+---------------------+---------------------+--------------------+-----------------------+-----------------------+------------------+---------------------+---------------------+
+|root.db1| null| 1| 3| 604800000| 0| 1| 1| 0| 2| 2|
+|root.db2|86400000| 1| 1| 604800000| 0| 1| 1| 0| 2| 2|
+|root.db3| null| 1| 1| 604800000| 0| 1| 1| 0| 2| 2|
++--------+--------+-----------------------+---------------------+---------------------+--------------------+-----------------------+-----------------------+------------------+---------------------+---------------------+
+Total line number = 3
+It costs 0.058s
+```
+
+The query results in each column are as follows:
+
++ The name of the Database
++ The TTL of the Database
++ The schema replication number of the Database
++ The data replication number of the Database
++ The time partition interval of the Database
++ The current SchemaRegionGroup number of the Database
++ The required minimum SchemaRegionGroup number of the Database
++ The permitted maximum SchemaRegionGroup number of the Database
++ The current DataRegionGroup number of the Database
++ The required minimum DataRegionGroup number of the Database
++ The permitted maximum DataRegionGroup number of the Database
+
+### TTL
+
+IoTDB supports device-level TTL settings, which means it is able to delete old data automatically and periodically. The benefit of using TTL is that you can control the total disk space usage and prevent the machine from running out of disk. Moreover, query performance may degrade as the total number of files grows, and memory usage increases as there are more files. Timely removing such files helps maintain high query performance and reduce memory usage.
+
+The default unit of TTL is milliseconds. If the time precision in the configuration file changes to another, the TTL is still set to milliseconds.
+
+When setting TTL, the system will look for all devices included in the set path and set TTL for these devices. The system will delete expired data at the device granularity.
+After the device data expires, it will not be queryable. The data in the disk file cannot be guaranteed to be deleted immediately, but it can be guaranteed to be deleted eventually.
+However, due to operational costs, the expired data will not be physically deleted right after expiring. The physical deletion is delayed until compaction.
+Therefore, before the data is physically deleted, if the TTL is reduced or lifted, it may cause data that was previously invisible due to TTL to reappear.
+The system can only set up to 1000 TTL rules, and when this limit is reached, some TTL rules need to be deleted before new rules can be set.
+
+#### TTL Path Rule
+The path can only be a prefix path (i.e., the path cannot contain \*, except \*\* at the last level).
+This path will match devices and also allows users to specify paths without asterisks as specific databases or devices.
+When the path does not contain asterisks, the system will check if it matches a database; if it matches a database, both the path and path.\*\* will be set at the same time. Note: Device TTL settings do not verify the existence of metadata, i.e., it is allowed to set TTL for a non-existent device.
+```
+qualified paths:
+root.**
+root.db.**
+root.db.group1.**
+root.db
+root.db.group1.d1
+
+unqualified paths:
+root.*.db
+root.**.db.*
+root.db.*
+```
+#### TTL Applicable Rules
+When a device is subject to multiple TTL rules, the more precise and longer rules are prioritized. For example, for the device "root.bj.hd.dist001.turbine001", the rule "root.bj.hd.dist001.turbine001" takes precedence over "root.bj.hd.dist001.\*\*", and the rule "root.bj.hd.dist001.\*\*" takes precedence over "root.bj.hd.**".
+#### Set TTL
+The set ttl operation can be understood as setting a TTL rule, for example, setting ttl to root.sg.group1.** is equivalent to mounting ttl for all devices that can match this path pattern.
+The unset ttl operation indicates unmounting TTL for the corresponding path pattern; if there is no corresponding TTL, nothing will be done.
+If you want to set TTL to be infinitely large, you can use the INF keyword.
+The SQL statement for setting TTL is as follows:
+```
+set ttl to pathPattern 360000;
+```
+This sets a Time to Live (TTL) of 360,000 milliseconds for all devices matching the pathPattern; the pathPattern must not contain a wildcard (\*) in the middle and must end with a double asterisk (\*\*).
+To maintain compatibility with older SQL syntax, if the user-provided pathPattern matches a database (db), the path pattern is automatically expanded to include all sub-paths denoted by path.\*\*.
+For instance, writing "set ttl to root.sg 360000" will automatically be transformed into "set ttl to root.sg.\*\* 360000", which sets the TTL for all devices under root.sg. However, if the specified pathPattern does not match a database, the aforementioned logic will not apply. For example, writing "set ttl to root.sg.group 360000" will not be expanded to "root.sg.group.\*\*" since root.sg.group does not match a database.
+It is also permissible to specify a particular device without a wildcard (*).
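+
+A few hedged examples of these rules (the paths are illustrative):
+
+```shell
+# All devices matching root.ln.** expire after one hour (3,600,000 ms)
+IoTDB> set ttl to root.ln.** 3600000
+# A specific device, no wildcard needed
+IoTDB> set ttl to root.ln.wf01.wt01 3600000
+# INF removes the age limit
+IoTDB> set ttl to root.ln.** INF
+```
+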
+#### Unset TTL
+
+To unset TTL, we can use the following SQL statement:
+
+```
+IoTDB> unset ttl from root.ln
+```
+
+After unsetting the TTL, all data will be accepted in `root.ln`.
+```
+IoTDB> unset ttl from root.sgcc.**
+```
+
+Unset the TTL in the `root.sgcc` path.
+
+Both of the following statements unset the TTL setting for all path patterns.
+
+New syntax
+```
+IoTDB> unset ttl from root.**
+```
+
+Old syntax
+```
+IoTDB> unset ttl to root.**
+```
+
+There is no functional difference between the old and new syntax, and they are compatible with each other.
+The new syntax is just more conventional in terms of wording.
+
+#### Show TTL
+
+To show TTL, we can use the following SQL statements:
+
+show all ttl
+
+```
+IoTDB> SHOW ALL TTL
++--------------+--------+
+|          path|     TTL|
++--------------+--------+
+| root.**|55555555|
+| root.sg2.a.**|44440000|
++--------------+--------+
+```
+
+show ttl on pathPattern
+```
+IoTDB> SHOW TTL ON root.db.**;
++--------------+--------+
+|          path|     TTL|
++--------------+--------+
+| root.db.**|55555555|
+| root.db.a.**|44440000|
++--------------+--------+
+```
+
+The SHOW ALL TTL example gives the TTL for all path patterns.
+The SHOW TTL ON pathPattern shows the TTL for the path pattern specified.
+
+Display devices' ttl
+```
+IoTDB> show devices
++---------------+---------+---------+
+| Device|IsAligned| TTL|
++---------------+---------+---------+
+|root.sg.device1| false| 36000000|
+|root.sg.device2| true| INF|
++---------------+---------+---------+
+```
+All devices will definitely have a TTL, meaning it cannot be null. INF represents infinity.
+
+## Device Template
+
+IoTDB supports the device template function, enabling different entities of the same type to share metadata, reduce the memory usage of metadata, and simplify the management of numerous entities and measurements.
+
+![img](https://alioss.timecho.com/docs/img/%E6%A8%A1%E6%9D%BF.png)
+
+![img](https://alioss.timecho.com/docs/img/templateEN.jpg)
+
+### Create Device Template
+
+The SQL syntax for creating a metadata template is as follows:
+
+```sql
+CREATE DEVICE TEMPLATE <templateName> ALIGNED? '(' <measurementId> <attributeClauses> [',' <measurementId> <attributeClauses>]+ ')'
+```
+
+**Example 1:** Create a template containing two non-aligned timeseries
+
+```shell
+IoTDB> create device template t1 (temperature FLOAT encoding=RLE, status BOOLEAN encoding=PLAIN compression=SNAPPY)
+```
+
+**Example 2:** Create a template containing a group of aligned timeseries
+
+```shell
+IoTDB> create device template t2 aligned (lat FLOAT encoding=Gorilla, lon FLOAT encoding=Gorilla)
+```
+
+The `lat` and `lon` measurements are aligned.
+
+
+### Set Device Template
+
+After a device template is created, it should be set to a specific path before creating related timeseries or inserting data.
+
+**It should be ensured that the related database has been set before setting template.**
+
+**It is recommended to set the device template at the database path. It is not suggested to set the device template to a path above the database level.**
+
+**It is forbidden to create timeseries under a path where a device template has been set. A device template shall not be set on a prefix path of an existing timeseries.**
+
+The SQL statement for setting a device template is as follows:
+
+```shell
+IoTDB> set device template t1 to root.sg1.d1
+```
+
+### Activate Device Template
+
+After setting the device template, with the system enabled to auto create schema, you can insert data into the timeseries. For example, suppose there's a database root.sg1 and t1 has been set to root.sg1.d1, then timeseries like root.sg1.d1.temperature and root.sg1.d1.status are available and data points can be inserted.
+
+
+**Attention**: Before data is inserted, or if the system is not enabled to auto create schema, the timeseries defined by the device template will not be created. You can use the following SQL statement to create the timeseries, i.e., activate the device template, before inserting data:
+
+```shell
+IoTDB> create timeseries using device template on root.sg1.d1
+```
+
+**Example:** Execute the following statement
+
+```shell
+IoTDB> set device template t1 to root.sg1.d1
+IoTDB> set device template t2 to root.sg1.d2
+IoTDB> create timeseries using device template on root.sg1.d1
+IoTDB> create timeseries using device template on root.sg1.d2
+```
+
+Show the time series:
+
+```sql
+show timeseries root.sg1.**
+```
+
+```shell
++-----------------------+-----+-------------+--------+--------+-----------+----+----------+--------+-------------------+
+| timeseries|alias| database|dataType|encoding|compression|tags|attributes|deadband|deadband parameters|
++-----------------------+-----+-------------+--------+--------+-----------+----+----------+--------+-------------------+
+|root.sg1.d1.temperature| null| root.sg1| FLOAT| RLE| SNAPPY|null| null| null| null|
+| root.sg1.d1.status| null| root.sg1| BOOLEAN| PLAIN| SNAPPY|null| null| null| null|
+| root.sg1.d2.lon| null| root.sg1| FLOAT| GORILLA| SNAPPY|null| null| null| null|
+| root.sg1.d2.lat| null| root.sg1| FLOAT| GORILLA| SNAPPY|null| null| null| null|
++-----------------------+-----+-------------+--------+--------+-----------+----+----------+--------+-------------------+
+```
+
+Show the devices:
+
+```sql
+show devices root.sg1.**
+```
+
+```shell
++---------------+---------+
+| devices|isAligned|
++---------------+---------+
+| root.sg1.d1| false|
+| root.sg1.d2| true|
++---------------+---------+
+```
+
+### Show Device Template
+
+- Show all device templates
+
+The SQL statement looks like this:
+
+```shell
+IoTDB> show device templates
+```
+
+The execution result is as follows:
+
+```shell
++-------------+
+|template name|
++-------------+
+| t2|
+| t1|
++-------------+
+```
+
+- Show nodes in a device template
+
+The SQL statement looks like this:
+
+```shell
+IoTDB> show nodes in device template t1
+```
+
+The execution result is as follows:
+
+```shell
++-----------+--------+--------+-----------+
+|child nodes|dataType|encoding|compression|
++-----------+--------+--------+-----------+
+|temperature| FLOAT| RLE| SNAPPY|
+| status| BOOLEAN| PLAIN| SNAPPY|
++-----------+--------+--------+-----------+
+```
+
+- Show the path prefix where a device template is set
+
+```shell
+IoTDB> show paths set device template t1
+```
+
+The execution result is as follows:
+
+```shell
++-----------+
+|child paths|
++-----------+
+|root.sg1.d1|
++-----------+
+```
+
+- Show the path prefix where a device template is used (i.e. the time series has been created)
+
+```shell
+IoTDB> show paths using device template t1
+```
+
+The execution result is as follows:
+
+```shell
++-----------+
+|child paths|
++-----------+
+|root.sg1.d1|
++-----------+
+```
+
+### Deactivate Device Template
+
+To delete a group of timeseries represented by a device template, i.e. to deactivate the device template, use the following SQL statement:
+
+```shell
+IoTDB> delete timeseries of device template t1 from root.sg1.d1
+```
+
+or
+
+```shell
+IoTDB> deactivate device template t1 from root.sg1.d1
+```
+
+Deactivation also supports batch processing:
+
+```shell
+IoTDB> delete timeseries of device template t1 from root.sg1.*, root.sg2.*
+```
+
+or
+
+```shell
+IoTDB> deactivate device template t1 from root.sg1.*, root.sg2.*
+```
+
+If the template name is not provided in the SQL statement, all template activations on paths matched by the given path pattern will be removed.
+
+### Unset Device Template
+
+The SQL statement for unsetting a device template is as follows:
+
+```shell
+IoTDB> unset device template t1 from root.sg1.d1
+```
+
+**Attention**: Before unsetting a device template, it should be guaranteed that none of the timeseries represented by the target device template still exist. This can be achieved with the deactivation operation.
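+
+For example, a hedged sketch of the full sequence, deactivating before unsetting:
+
+```shell
+IoTDB> deactivate device template t1 from root.sg1.d1
+IoTDB> unset device template t1 from root.sg1.d1
+```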
+
+### Drop Device Template
+
+The SQL statement for dropping a device template is as follows:
+
+```shell
+IoTDB> drop device template t1
+```
+
+**Attention**: Dropping a template that is still set to some path is not supported; unset it first.
+
+### Alter Device Template
+
+In a scenario where measurements need to be added, you can modify the template to add measurements to all devices using the device template.
+
+The SQL statement for altering a device template is as follows (the measurement name `remark` is illustrative):
+
+```shell
+IoTDB> alter device template t1 add (speed FLOAT encoding=RLE, remark TEXT encoding=PLAIN compression=SNAPPY)
+```
+
+**When data is inserted into a device that has a device template set on a related prefix path, any measurements in the insertion that are not present in the template will be automatically added to it.**
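+
+For instance, a hypothetical sketch: `humidity` is not defined in template t1, so the following insert would automatically add it to the template (assuming auto create schema is enabled):
+
+```shell
+IoTDB> insert into root.sg1.d1(timestamp, humidity) values (1, 45.0)
+```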
+
+## Timeseries Management
+
+### Create Timeseries
+
+According to the storage model selected before, we can create corresponding timeseries in the two databases respectively. The SQL statements for creating timeseries are as follows:
+
+```
+IoTDB > create timeseries root.ln.wf01.wt01.status with datatype=BOOLEAN,encoding=PLAIN
+IoTDB > create timeseries root.ln.wf01.wt01.temperature with datatype=FLOAT,encoding=RLE
+IoTDB > create timeseries root.ln.wf02.wt02.hardware with datatype=TEXT,encoding=PLAIN
+IoTDB > create timeseries root.ln.wf02.wt02.status with datatype=BOOLEAN,encoding=PLAIN
+IoTDB > create timeseries root.sgcc.wf03.wt01.status with datatype=BOOLEAN,encoding=PLAIN
+IoTDB > create timeseries root.sgcc.wf03.wt01.temperature with datatype=FLOAT,encoding=RLE
+```
+
+From v0.13, you can use a simplified version of the SQL statements to create timeseries:
+
+```
+IoTDB > create timeseries root.ln.wf01.wt01.status BOOLEAN encoding=PLAIN
+IoTDB > create timeseries root.ln.wf01.wt01.temperature FLOAT encoding=RLE
+IoTDB > create timeseries root.ln.wf02.wt02.hardware TEXT encoding=PLAIN
+IoTDB > create timeseries root.ln.wf02.wt02.status BOOLEAN encoding=PLAIN
+IoTDB > create timeseries root.sgcc.wf03.wt01.status BOOLEAN encoding=PLAIN
+IoTDB > create timeseries root.sgcc.wf03.wt01.temperature FLOAT encoding=RLE
+```
+
+Notice that when in the CREATE TIMESERIES statement the encoding method conflicts with the data type, the system gives the corresponding error prompt as shown below:
+
+```
+IoTDB > create timeseries root.ln.wf02.wt02.status WITH DATATYPE=BOOLEAN, ENCODING=TS_2DIFF
+error: encoding TS_2DIFF does not support BOOLEAN
+```
+
+Please refer to [Encoding](../Basic-Concept/Encoding-and-Compression.md) for correspondence between data type and encoding.
+
+### Create Aligned Timeseries
+
+The SQL statement for creating a group of aligned timeseries is as follows:
+
+```
+IoTDB> CREATE ALIGNED TIMESERIES root.ln.wf01.GPS(latitude FLOAT encoding=PLAIN compressor=SNAPPY, longitude FLOAT encoding=PLAIN compressor=SNAPPY)
+```
+
+You can set different datatype, encoding, and compression for the timeseries in a group of aligned timeseries.
+
+It is also supported to set an alias, tag, and attribute for aligned timeseries.
+
+### Delete Timeseries
+
+To delete the timeseries we created before, we can use the `(DELETE | DROP) TIMESERIES` statement.
+
+The usage is as follows:
+
+```
+IoTDB> delete timeseries root.ln.wf01.wt01.status
+IoTDB> delete timeseries root.ln.wf01.wt01.temperature, root.ln.wf02.wt02.hardware
+IoTDB> delete timeseries root.ln.wf02.*
+IoTDB> drop timeseries root.ln.wf02.*
+```
+
+### Show Timeseries
+
+* SHOW LATEST? TIMESERIES pathPattern? whereClause? limitClause?
+
+  Four optional clauses can be added to SHOW TIMESERIES; the statement returns information about the matching time series.
+
+Timeseries information includes: timeseries path, alias of measurement, database it belongs to, data type, encoding type, compression type, tags and attributes.
+
+Examples:
+
+* SHOW TIMESERIES
+
+ presents all timeseries information in JSON form
+
+* SHOW TIMESERIES <`PathPattern`>
+
+ returns all timeseries information matching the given <`PathPattern`>. SQL statements are as follows:
+
+```
+IoTDB> show timeseries root.**
+IoTDB> show timeseries root.ln.**
+```
+
+The results are shown below respectively:
+
+```
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+| timeseries| alias| database|dataType|encoding|compression| tags| attributes|deadband|deadband parameters|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+|root.sgcc.wf03.wt01.temperature| null| root.sgcc| FLOAT| RLE| SNAPPY| null| null| null| null|
+| root.sgcc.wf03.wt01.status| null| root.sgcc| BOOLEAN| PLAIN| SNAPPY| null| null| null| null|
+| root.turbine.d1.s1|newAlias| root.turbine| FLOAT| RLE| SNAPPY|{"newTag1":"newV1","tag4":"v4","tag3":"v3"}|{"attr2":"v2","attr1":"newV1","attr4":"v4","attr3":"v3"}| null| null|
+| root.ln.wf02.wt02.hardware| null| root.ln| TEXT| PLAIN| SNAPPY| null| null| null| null|
+| root.ln.wf02.wt02.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY| null| null| null| null|
+| root.ln.wf01.wt01.temperature| null| root.ln| FLOAT| RLE| SNAPPY| null| null| null| null|
+| root.ln.wf01.wt01.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY| null| null| null| null|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+Total line number = 7
+It costs 0.016s
+
++-----------------------------+-----+-------------+--------+--------+-----------+----+----------+--------+-------------------+
+| timeseries|alias| database|dataType|encoding|compression|tags|attributes|deadband|deadband parameters|
++-----------------------------+-----+-------------+--------+--------+-----------+----+----------+--------+-------------------+
+| root.ln.wf02.wt02.hardware| null| root.ln| TEXT| PLAIN| SNAPPY|null| null| null| null|
+| root.ln.wf02.wt02.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY|null| null| null| null|
+|root.ln.wf01.wt01.temperature| null| root.ln| FLOAT| RLE| SNAPPY|null| null| null| null|
+| root.ln.wf01.wt01.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY|null| null| null| null|
++-----------------------------+-----+-------------+--------+--------+-----------+----+----------+--------+-------------------+
+Total line number = 4
+It costs 0.004s
+```
+
+* SHOW TIMESERIES LIMIT INT OFFSET INT
+
+  returns the timeseries information starting from the offset, limiting the number of series returned. For example,
+
+```
+show timeseries root.ln.** limit 10 offset 10
+```
+
+* SHOW TIMESERIES WHERE TIMESERIES contains 'containStr'
+
+ The query result set is filtered by string fuzzy matching based on the names of the timeseries. For example:
+
+```
+show timeseries root.ln.** where timeseries contains 'wf01.wt'
+```
+
+The result is shown below:
+
+```
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+| timeseries| alias| database|dataType|encoding|compression| tags| attributes|deadband|deadband parameters|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+| root.ln.wf01.wt01.temperature| null| root.ln| FLOAT| RLE| SNAPPY| null| null| null| null|
+| root.ln.wf01.wt01.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY| null| null| null| null|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+Total line number = 2
+It costs 0.016s
+```
+
+* SHOW TIMESERIES WHERE DataType=type
+
+ The query result set is filtered by data type. For example:
+
+```
+show timeseries root.ln.** where dataType=FLOAT
+```
+
+The result is shown below:
+
+```
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+| timeseries| alias| database|dataType|encoding|compression| tags| attributes|deadband|deadband parameters|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+|root.sgcc.wf03.wt01.temperature| null| root.sgcc| FLOAT| RLE| SNAPPY| null| null| null| null|
+| root.turbine.d1.s1|newAlias| root.turbine| FLOAT| RLE| SNAPPY|{"newTag1":"newV1","tag4":"v4","tag3":"v3"}|{"attr2":"v2","attr1":"newV1","attr4":"v4","attr3":"v3"}| null| null|
+| root.ln.wf01.wt01.temperature| null| root.ln| FLOAT| RLE| SNAPPY| null| null| null| null|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+Total line number = 3
+It costs 0.016s
+
+```
+
+
+* SHOW LATEST TIMESERIES
+
+  the returned timeseries information is sorted in descending order of the last timestamp of each timeseries
+
+It is worth noting that when the queried path does not exist, the system will return no timeseries.
+
+
+### Count Timeseries
+
+IoTDB is able to use the `COUNT TIMESERIES` statement to count the number of timeseries matching the path. SQL statements are as follows:
+
+* `WHERE` condition could be used to fuzzy match a time series name with the following syntax: `COUNT TIMESERIES WHERE TIMESERIES contains 'containStr'`.
+* `WHERE` condition could be used to filter result by data type with the syntax: `COUNT TIMESERIES WHERE DataType='`.
+* `WHERE` condition could be used to filter result by tags with the syntax: `COUNT TIMESERIES WHERE TAGS(key)='value'` or `COUNT TIMESERIES WHERE TAGS(key) contains 'value'`.
+* `LEVEL` could be defined to count the number of timeseries of each node at the given level in the current Metadata Tree. This could be used to query the number of sensors under each device. The grammar is: `COUNT TIMESERIES GROUP BY LEVEL=`.
+
+
+```
+IoTDB > COUNT TIMESERIES root.**
+IoTDB > COUNT TIMESERIES root.ln.**
+IoTDB > COUNT TIMESERIES root.ln.*.*.status
+IoTDB > COUNT TIMESERIES root.ln.wf01.wt01.status
+IoTDB > COUNT TIMESERIES root.** WHERE TIMESERIES contains 'sgcc'
+IoTDB > COUNT TIMESERIES root.** WHERE DATATYPE = INT64
+IoTDB > COUNT TIMESERIES root.** WHERE TAGS(unit) contains 'c'
+IoTDB > COUNT TIMESERIES root.** WHERE TAGS(unit) = 'c'
+IoTDB > COUNT TIMESERIES root.** WHERE TIMESERIES contains 'sgcc' group by level = 1
+```
+
+For example, if there are several timeseries (use `show timeseries` to show all timeseries):
+
+```
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+| timeseries| alias| database|dataType|encoding|compression| tags| attributes|deadband|deadband parameters|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+|root.sgcc.wf03.wt01.temperature| null| root.sgcc| FLOAT| RLE| SNAPPY| null| null| null| null|
+| root.sgcc.wf03.wt01.status| null| root.sgcc| BOOLEAN| PLAIN| SNAPPY| null| null| null| null|
+| root.turbine.d1.s1|newAlias| root.turbine| FLOAT| RLE| SNAPPY|{"newTag1":"newV1","tag4":"v4","tag3":"v3"}|{"attr2":"v2","attr1":"newV1","attr4":"v4","attr3":"v3"}| null| null|
+| root.ln.wf02.wt02.hardware| null| root.ln| TEXT| PLAIN| SNAPPY| {"unit":"c"}| null| null| null|
+| root.ln.wf02.wt02.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY| {"description":"test1"}| null| null| null|
+| root.ln.wf01.wt01.temperature| null| root.ln| FLOAT| RLE| SNAPPY| null| null| null| null|
+| root.ln.wf01.wt01.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY| null| null| null| null|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+Total line number = 7
+It costs 0.004s
+```
+
+Then the Metadata Tree will be as below:
+
+
+
+As can be seen, `root` is considered as `LEVEL=0`. So when you enter statements such as:
+
+```
+IoTDB > COUNT TIMESERIES root.** GROUP BY LEVEL=1
+IoTDB > COUNT TIMESERIES root.ln.** GROUP BY LEVEL=2
+IoTDB > COUNT TIMESERIES root.ln.wf01.* GROUP BY LEVEL=2
+```
+
+You will get following results:
+
+```
++------------+-----------------+
+| column|count(timeseries)|
++------------+-----------------+
+| root.sgcc| 2|
+|root.turbine| 1|
+| root.ln| 4|
++------------+-----------------+
+Total line number = 3
+It costs 0.002s
+
++------------+-----------------+
+| column|count(timeseries)|
++------------+-----------------+
+|root.ln.wf02| 2|
+|root.ln.wf01| 2|
++------------+-----------------+
+Total line number = 2
+It costs 0.002s
+
++------------+-----------------+
+| column|count(timeseries)|
++------------+-----------------+
+|root.ln.wf01| 2|
++------------+-----------------+
+Total line number = 1
+It costs 0.002s
+```
+
+> Note: The path of timeseries is just a filter condition, which has no relationship with the definition of level.
+
+### Tag and Attribute Management
+
+We can also add an alias, extra tag and attribute information while creating one timeseries.
+
+The differences between tag and attribute are:
+
+* A tag can be used to query the path of a timeseries; an inverted index is maintained in memory on each tag: Tag -> Timeseries
+* An attribute can only be queried via the timeseries path: Timeseries -> Attribute
+
+The SQL statements for creating timeseries with extra tag and attribute information are extended as follows:
+
+```
+create timeseries root.turbine.d1.s1(temperature) with datatype=FLOAT, encoding=RLE, compression=SNAPPY tags(tag1=v1, tag2=v2) attributes(attr1=v1, attr2=v2)
+```
+
+The `temperature` in the brackets is an alias for the sensor `s1`, so `temperature` can be used in place of `s1` anywhere.
+
+> IoTDB also supports using the AS function to set an alias. The difference between the two is: an alias set by the AS function replaces the whole time series name, is temporary, and is not bound to the time series; the alias mentioned above serves only as an alias of the sensor, is bound to it, and can be used interchangeably with the original sensor name.
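+
+As an illustrative sketch of the difference, assuming the timeseries created above exists:
+
+```
+IoTDB> select temperature from root.turbine.d1
+IoTDB> select s1 as temp from root.turbine.d1
+```
+
+The first query uses the bound sensor alias in place of s1; the second uses the AS function to set a temporary alias, `temp`, for that query only.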
+
+> Notice that the total size of the extra tag and attribute information must not exceed `tag_attribute_total_size`.
+
+We can update the tag information after creating it as follows:
+
+* Rename the tag/attribute key
+
+```
+ALTER timeseries root.turbine.d1.s1 RENAME tag1 TO newTag1
+```
+
+* Reset the tag/attribute value
+
+```
+ALTER timeseries root.turbine.d1.s1 SET newTag1=newV1, attr1=newV1
+```
+
+* Delete the existing tag/attribute
+
+```
+ALTER timeseries root.turbine.d1.s1 DROP tag1, tag2
+```
+
+* Add new tags
+
+```
+ALTER timeseries root.turbine.d1.s1 ADD TAGS tag3=v3, tag4=v4
+```
+
+* Add new attributes
+
+```
+ALTER timeseries root.turbine.d1.s1 ADD ATTRIBUTES attr3=v3, attr4=v4
+```
+
+* Upsert alias, tags and attributes
+
+> adds the alias or a new key-value pair if the alias or key doesn't exist; otherwise, updates the old value with the new one.
+
+```
+ALTER timeseries root.turbine.d1.s1 UPSERT ALIAS=newAlias TAGS(tag3=v3, tag4=v4) ATTRIBUTES(attr3=v3, attr4=v4)
+```
+
+* Show timeseries using tags. Use TAGS(tagKey) to identify the tag used as the filter key.
+
+```
+SHOW TIMESERIES (<`PathPattern`>)? timeseriesWhereClause
+```
+
+returns all the timeseries that satisfy the where condition and match the pathPattern. SQL statements are as follows:
+
+```
+ALTER timeseries root.ln.wf02.wt02.hardware ADD TAGS unit=c
+ALTER timeseries root.ln.wf02.wt02.status ADD TAGS description=test1
+show timeseries root.ln.** where TAGS(unit)='c'
+show timeseries root.ln.** where TAGS(description) contains 'test1'
+```
+
+The results are shown below respectively:
+
+```
++--------------------------+-----+-------------+--------+--------+-----------+------------+----------+--------+-------------------+
+| timeseries|alias| database|dataType|encoding|compression| tags|attributes|deadband|deadband parameters|
++--------------------------+-----+-------------+--------+--------+-----------+------------+----------+--------+-------------------+
+|root.ln.wf02.wt02.hardware| null| root.ln| TEXT| PLAIN| SNAPPY|{"unit":"c"}| null| null| null|
++--------------------------+-----+-------------+--------+--------+-----------+------------+----------+--------+-------------------+
+Total line number = 1
+It costs 0.005s
+
++------------------------+-----+-------------+--------+--------+-----------+-----------------------+----------+--------+-------------------+
+| timeseries|alias| database|dataType|encoding|compression| tags|attributes|deadband|deadband parameters|
++------------------------+-----+-------------+--------+--------+-----------+-----------------------+----------+--------+-------------------+
+|root.ln.wf02.wt02.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY|{"description":"test1"}| null| null| null|
++------------------------+-----+-------------+--------+--------+-----------+-----------------------+----------+--------+-------------------+
+Total line number = 1
+It costs 0.004s
+```
+
+- count timeseries using tags
+
+```
+COUNT TIMESERIES (<`PathPattern`>)? timeseriesWhereClause
+COUNT TIMESERIES (<`PathPattern`>)? timeseriesWhereClause GROUP BY LEVEL=
+```
+
+returns the number of timeseries that satisfy the where condition and match the pathPattern. SQL statements are as follows:
+
+```
+count timeseries
+count timeseries root.** where TAGS(unit)='c'
+count timeseries root.** where TAGS(unit)='c' group by level = 2
+```
+
+The results are shown below respectively:
+
+```
+IoTDB> count timeseries
++-----------------+
+|count(timeseries)|
++-----------------+
+| 6|
++-----------------+
+Total line number = 1
+It costs 0.019s
+IoTDB> count timeseries root.** where TAGS(unit)='c'
++-----------------+
+|count(timeseries)|
++-----------------+
+| 2|
++-----------------+
+Total line number = 1
+It costs 0.020s
+IoTDB> count timeseries root.** where TAGS(unit)='c' group by level = 2
++--------------+-----------------+
+| column|count(timeseries)|
++--------------+-----------------+
+| root.ln.wf02| 2|
+| root.ln.wf01| 0|
+|root.sgcc.wf03| 0|
++--------------+-----------------+
+Total line number = 3
+It costs 0.011s
+```
+
+> Notice that only one condition is supported in the where clause: either an equality filter or a `contains` filter. In both cases, the property in the where condition must be a tag.
+
+Create aligned timeseries with tags and attributes:
+
+```
+create aligned timeseries root.sg1.d1(s1 INT32 tags(tag1=v1, tag2=v2) attributes(attr1=v1, attr2=v2), s2 DOUBLE tags(tag3=v3, tag4=v4) attributes(attr3=v3, attr4=v4))
+```
+
+The execution result is as follows:
+
+```
+IoTDB> show timeseries
++--------------+-----+-------------+--------+--------+-----------+-------------------------+---------------------------+--------+-------------------+
+| timeseries|alias| database|dataType|encoding|compression| tags| attributes|deadband|deadband parameters|
++--------------+-----+-------------+--------+--------+-----------+-------------------------+---------------------------+--------+-------------------+
+|root.sg1.d1.s1| null| root.sg1| INT32| RLE| SNAPPY|{"tag1":"v1","tag2":"v2"}|{"attr2":"v2","attr1":"v1"}| null| null|
+|root.sg1.d1.s2| null| root.sg1| DOUBLE| GORILLA| SNAPPY|{"tag4":"v4","tag3":"v3"}|{"attr4":"v4","attr3":"v3"}| null| null|
++--------------+-----+-------------+--------+--------+-----------+-------------------------+---------------------------+--------+-------------------+
+```
+
+Support query:
+
+```
+IoTDB> show timeseries where TAGS(tag1)='v1'
++--------------+-----+-------------+--------+--------+-----------+-------------------------+---------------------------+--------+-------------------+
+| timeseries|alias| database|dataType|encoding|compression| tags| attributes|deadband|deadband parameters|
++--------------+-----+-------------+--------+--------+-----------+-------------------------+---------------------------+--------+-------------------+
+|root.sg1.d1.s1| null| root.sg1| INT32| RLE| SNAPPY|{"tag1":"v1","tag2":"v2"}|{"attr2":"v2","attr1":"v1"}| null| null|
++--------------+-----+-------------+--------+--------+-----------+-------------------------+---------------------------+--------+-------------------+
+```
+
+The tag and attribute operations above (updates, etc.) are likewise supported for aligned timeseries.
+
+## Node Management
+
+### Show Child Paths
+
+```
+SHOW CHILD PATHS pathPattern
+```
+
+Returns all child paths of paths matching the pathPattern, together with their node types.
+
+Node types: ROOT -> DB INTERNAL -> DATABASE -> INTERNAL -> DEVICE -> TIMESERIES
+
+
+Example:
+
+* return the child paths of root.ln: `show child paths root.ln`
+
+```
++------------+----------+
+| child paths|node types|
++------------+----------+
+|root.ln.wf01| INTERNAL|
+|root.ln.wf02| INTERNAL|
++------------+----------+
+Total line number = 2
+It costs 0.002s
+```
+
+> To get all paths in the form of root.xx.xx.xx, use: `show child paths root.xx.xx`
+
+### Show Child Nodes
+
+```
+SHOW CHILD NODES pathPattern
+```
+
+Return all child nodes of the pathPattern.
+
+Example:
+
+* return the child nodes of root: `show child nodes root`
+
+```
++------------+
+| child nodes|
++------------+
+| ln|
++------------+
+```
+
+* return the child nodes of root.ln: `show child nodes root.ln`
+
+```
++------------+
+| child nodes|
++------------+
+| wf01|
+| wf02|
++------------+
+```
+
+### Count Nodes
+
+IoTDB is able to use `COUNT NODES LEVEL=` to count the number of nodes at the given level in the current Metadata Tree, given a path pattern. IoTDB will find paths that match the pattern and count the distinct nodes at the specified level among the matched paths. This can be used to query the number of devices with specified measurements. The usage is as follows:
+
+```
+IoTDB > COUNT NODES root.** LEVEL=2
+IoTDB > COUNT NODES root.ln.** LEVEL=2
+IoTDB > COUNT NODES root.ln.wf01.** LEVEL=3
+IoTDB > COUNT NODES root.**.temperature LEVEL=3
+```
+
+As for the above mentioned example and Metadata tree, you can get following results:
+
+```
++------------+
+|count(nodes)|
++------------+
+| 4|
++------------+
+Total line number = 1
+It costs 0.003s
+
++------------+
+|count(nodes)|
++------------+
+| 2|
++------------+
+Total line number = 1
+It costs 0.002s
+
++------------+
+|count(nodes)|
++------------+
+| 1|
++------------+
+Total line number = 1
+It costs 0.002s
+
++------------+
+|count(nodes)|
++------------+
+| 2|
++------------+
+Total line number = 1
+It costs 0.002s
+```
+
+> Note: The path of timeseries is just a filter condition, which has no relationship with the definition of level.
+
+### Show Devices
+
+* SHOW DEVICES pathPattern? (WITH DATABASE)? devicesWhereClause? limitClause?
+
+Similar to `Show Timeseries`, IoTDB also supports two ways of viewing devices:
+
+* `SHOW DEVICES` statement presents all devices' information, which is equal to `SHOW DEVICES root.**`.
+* `SHOW DEVICES ` statement specifies the `PathPattern` and returns the information of devices matching the pathPattern under the given level.
+* `WHERE` condition supports `DEVICE contains 'xxx'` to do a fuzzy query based on the device name.
+
+SQL statement is as follows:
+
+```
+IoTDB> show devices
+IoTDB> show devices root.ln.**
+IoTDB> show devices root.ln.** where device contains 't'
+```
+
+You can get results below:
+
+```
++-------------------+---------+
+| devices|isAligned|
++-------------------+---------+
+| root.ln.wf01.wt01| false|
+| root.ln.wf02.wt02| false|
+|root.sgcc.wf03.wt01| false|
+| root.turbine.d1| false|
++-------------------+---------+
+Total line number = 4
+It costs 0.002s
+
++-----------------+---------+
+| devices|isAligned|
++-----------------+---------+
+|root.ln.wf01.wt01| false|
+|root.ln.wf02.wt02| false|
++-----------------+---------+
+Total line number = 2
+It costs 0.001s
+```
+
+`isAligned` indicates whether the timeseries under the device are aligned.
+
+To view devices' information with database, we can use `SHOW DEVICES WITH DATABASE` statement.
+
+* `SHOW DEVICES WITH DATABASE` statement presents all devices' information with their database.
+* `SHOW DEVICES WITH DATABASE` statement specifies the `PathPattern` and returns the
+ devices' information under the given level with their database information.
+
+SQL statement is as follows:
+
+```
+IoTDB> show devices with database
+IoTDB> show devices root.ln.** with database
+```
+
+You can get results below:
+
+```
++-------------------+-------------+---------+
+| devices| database|isAligned|
++-------------------+-------------+---------+
+| root.ln.wf01.wt01| root.ln| false|
+| root.ln.wf02.wt02| root.ln| false|
+|root.sgcc.wf03.wt01| root.sgcc| false|
+| root.turbine.d1| root.turbine| false|
++-------------------+-------------+---------+
+Total line number = 4
+It costs 0.003s
+
++-----------------+-------------+---------+
+| devices| database|isAligned|
++-----------------+-------------+---------+
+|root.ln.wf01.wt01| root.ln| false|
+|root.ln.wf02.wt02| root.ln| false|
++-----------------+-------------+---------+
+Total line number = 2
+It costs 0.001s
+```
+
+### Count Devices
+
+* `COUNT DEVICES` / `COUNT DEVICES <PathPattern>`
+
+The above statement is used to count the number of devices. At the same time, it is allowed to specify `PathPattern` to count the number of devices matching the `PathPattern`.
+
+SQL statement is as follows:
+
+```
+IoTDB> show devices
+IoTDB> count devices
+IoTDB> count devices root.ln.**
+```
+
+You can get results below:
+
+```
++-------------------+---------+
+| devices|isAligned|
++-------------------+---------+
+|root.sgcc.wf03.wt03| false|
+| root.turbine.d1| false|
+| root.ln.wf02.wt02| false|
+| root.ln.wf01.wt01| false|
++-------------------+---------+
+Total line number = 4
+It costs 0.024s
+
++--------------+
+|count(devices)|
++--------------+
+| 4|
++--------------+
+Total line number = 1
+It costs 0.004s
+
++--------------+
+|count(devices)|
++--------------+
+| 2|
++--------------+
+Total line number = 1
+It costs 0.004s
+```
+
diff --git a/src/UserGuide/V2.0.1/Tree/Basic-Concept/Operate-Metadata_timecho.md b/src/UserGuide/V2.0.1/Tree/Basic-Concept/Operate-Metadata_timecho.md
new file mode 100644
index 000000000..8d57facb1
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/Basic-Concept/Operate-Metadata_timecho.md
@@ -0,0 +1,1324 @@
+
+
+# Timeseries Management
+
+## Database Management
+
+### Create Database
+
+According to the storage model we can set up the corresponding database. Two SQL statements are supported for creating databases, as follows:
+
+```
+IoTDB > create database root.ln
+IoTDB > create database root.sgcc
+```
+
+We can thus create two databases using the above two SQL statements.
+
+It is worth noting that creating only one database is recommended.
+
+If the path itself, or a parent or child layer of the path, has already been created as a database, then that path is not allowed to be created as a database. For example, it is not feasible to create `root.ln.wf01` as a database when the two databases `root.ln` and `root.sgcc` exist. The system gives the corresponding error prompt as shown below:
+
+```
+IoTDB> CREATE DATABASE root.ln.wf01
+Msg: 300: root.ln has already been created as database.
+IoTDB> create database root.ln.wf01
+Msg: 300: root.ln has already been created as database.
+```
+
+The LayerName of a database can only contain Chinese or English characters, numbers, underscores, dots and backticks. To name it with pure numbers, or to include backticks or dots, you need to enclose the database name in backticks (` `` `). Within ` `` `, two backticks represent one, i.e. ` ```` ` represents `` ` ``.
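+
+For example, a hedged sketch: since `111` consists purely of digits, the name must be enclosed in backticks:
+
+```
+IoTDB> create database root.`111`
+```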
+
+Besides, when deployed on a Windows system, the LayerName is case-insensitive, which means it is not allowed to create databases `root.ln` and `root.LN` at the same time.
+
+### Show Databases
+
+After creating the databases, we can use the [SHOW DATABASES](../SQL-Manual/SQL-Manual.md) statement and [SHOW DATABASES \<PathPattern\>](../SQL-Manual/SQL-Manual.md) to view them. The SQL statements are as follows:
+
+```
+IoTDB> SHOW DATABASES
+IoTDB> SHOW DATABASES root.**
+```
+
+The result is as follows:
+
+```
++-------------+----+-------------------------+-----------------------+-----------------------+
+|     database| ttl|schema_replication_factor|data_replication_factor|time_partition_interval|
++-------------+----+-------------------------+-----------------------+-----------------------+
+|    root.sgcc|null|                        2|                      2|                 604800|
+|      root.ln|null|                        2|                      2|                 604800|
++-------------+----+-------------------------+-----------------------+-----------------------+
+Total line number = 2
+It costs 0.060s
+```
+
+### Delete Database
+
+Users can use the `DELETE DATABASE` statement to delete all databases matching the pathPattern. Please note that the data in the databases will also be deleted.
+
+```
+IoTDB > DELETE DATABASE root.ln
+IoTDB > DELETE DATABASE root.sgcc
+// delete all data, all timeseries and all databases
+IoTDB > DELETE DATABASE root.**
+```
+
+### Count Databases
+
+Users can use the `COUNT DATABASES` statement to count the number of databases. It is allowed to specify a `PathPattern` to count the number of databases matching it.
+
+SQL statement is as follows:
+
+```
+IoTDB> count databases
+IoTDB> count databases root.*
+IoTDB> count databases root.sgcc.*
+IoTDB> count databases root.sgcc
+```
+
+The result is as follows:
+
+```
++-------------+
+| database|
++-------------+
+| root.sgcc|
+| root.turbine|
+| root.ln|
++-------------+
+Total line number = 3
+It costs 0.003s
+
++-------------+
+| database|
++-------------+
+| 3|
++-------------+
+Total line number = 1
+It costs 0.003s
+
++-------------+
+| database|
++-------------+
+| 3|
++-------------+
+Total line number = 1
+It costs 0.002s
+
++-------------+
+| database|
++-------------+
+| 0|
++-------------+
+Total line number = 1
+It costs 0.002s
+
++-------------+
+| database|
++-------------+
+| 1|
++-------------+
+Total line number = 1
+It costs 0.002s
+```
+
+### Setting up heterogeneous databases (Advanced operations)
+
+Assuming familiarity with IoTDB metadata modeling, users can set up heterogeneous databases in IoTDB to cope with different production needs.
+
+Currently, the following database heterogeneous parameters are supported:
+
+| Parameter | Type | Description |
+| ------------------------- | ------- | --------------------------------------------- |
+| TTL | Long | TTL of the Database |
+| SCHEMA_REPLICATION_FACTOR | Integer | The schema replication number of the Database |
+| DATA_REPLICATION_FACTOR | Integer | The data replication number of the Database |
+| SCHEMA_REGION_GROUP_NUM | Integer | The SchemaRegionGroup number of the Database |
+| DATA_REGION_GROUP_NUM | Integer | The DataRegionGroup number of the Database |
+
+Note the following when configuring heterogeneous parameters:
+
++ TTL and TIME_PARTITION_INTERVAL must be positive integers.
++ SCHEMA_REPLICATION_FACTOR and DATA_REPLICATION_FACTOR must be smaller than or equal to the number of deployed DataNodes.
++ The functions of SCHEMA_REGION_GROUP_NUM and DATA_REGION_GROUP_NUM are related to the parameters `schema_region_group_extension_policy` and `data_region_group_extension_policy` in the iotdb-common.properties configuration file. Take DATA_REGION_GROUP_NUM as an example:
+  If `data_region_group_extension_policy=CUSTOM` is set, DATA_REGION_GROUP_NUM serves as the exact number of DataRegionGroups owned by the Database.
+  If `data_region_group_extension_policy=AUTO`, DATA_REGION_GROUP_NUM is used as the lower bound of the DataRegionGroup quota owned by the Database; that is, once the Database starts writing data, it will have at least this number of DataRegionGroups (see the sketch below).
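+
+A minimal sketch of this setting in iotdb-common.properties (the CUSTOM value here is illustrative, not a recommended default):
+
+```
+# CUSTOM: DATA_REGION_GROUP_NUM is the exact DataRegionGroup count of a Database
+# AUTO:   DATA_REGION_GROUP_NUM is only a lower bound on that count
+data_region_group_extension_policy=CUSTOM
+```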
+
+Users can set any heterogeneous parameters when creating a Database, or adjust some heterogeneous parameters during a stand-alone/distributed IoTDB run.
+
+#### Set heterogeneous parameters when creating a Database
+
+The user can set any of the above heterogeneous parameters when creating a Database. The SQL statement is as follows:
+
+```
+CREATE DATABASE prefixPath (WITH databaseAttributeClause (COMMA? databaseAttributeClause)*)?
+```
+
+For example:
+
+```
+CREATE DATABASE root.db WITH SCHEMA_REPLICATION_FACTOR=1, DATA_REPLICATION_FACTOR=3, SCHEMA_REGION_GROUP_NUM=1, DATA_REGION_GROUP_NUM=2;
+```
+
+#### Adjust heterogeneous parameters at run time
+
+Users can adjust some heterogeneous parameters during the IoTDB runtime, as shown in the following SQL statement:
+
+```
+ALTER DATABASE prefixPath WITH databaseAttributeClause (COMMA? databaseAttributeClause)*
+```
+
+For example:
+
+```
+ALTER DATABASE root.db WITH SCHEMA_REGION_GROUP_NUM=1, DATA_REGION_GROUP_NUM=2;
+```
+
+Note that only the following heterogeneous parameters can be adjusted at runtime:
+
++ SCHEMA_REGION_GROUP_NUM
++ DATA_REGION_GROUP_NUM
+
+#### Show heterogeneous databases
+
+The user can query the specific heterogeneous configuration of each Database, and the SQL statement is as follows:
+
+```
+SHOW DATABASES DETAILS prefixPath?
+```
+
+For example:
+
+```
+IoTDB> SHOW DATABASES DETAILS
++--------+--------+-----------------------+---------------------+---------------------+--------------------+-----------------------+-----------------------+------------------+---------------------+---------------------+
+|Database| TTL|SchemaReplicationFactor|DataReplicationFactor|TimePartitionInterval|SchemaRegionGroupNum|MinSchemaRegionGroupNum|MaxSchemaRegionGroupNum|DataRegionGroupNum|MinDataRegionGroupNum|MaxDataRegionGroupNum|
++--------+--------+-----------------------+---------------------+---------------------+--------------------+-----------------------+-----------------------+------------------+---------------------+---------------------+
+|root.db1| null| 1| 3| 604800000| 0| 1| 1| 0| 2| 2|
+|root.db2|86400000| 1| 1| 604800000| 0| 1| 1| 0| 2| 2|
+|root.db3| null| 1| 1| 604800000| 0| 1| 1| 0| 2| 2|
++--------+--------+-----------------------+---------------------+---------------------+--------------------+-----------------------+-----------------------+------------------+---------------------+---------------------+
+Total line number = 3
+It costs 0.058s
+```
+
+The query results in each column are as follows:
+
++ The name of the Database
++ The TTL of the Database
++ The schema replication number of the Database
++ The data replication number of the Database
++ The time partition interval of the Database
++ The current SchemaRegionGroup number of the Database
++ The required minimum SchemaRegionGroup number of the Database
++ The permitted maximum SchemaRegionGroup number of the Database
++ The current DataRegionGroup number of the Database
++ The required minimum DataRegionGroup number of the Database
++ The permitted maximum DataRegionGroup number of the Database
+
+### TTL
+
+IoTDB supports device-level TTL settings, which means it can delete old data automatically and periodically. The benefit of using TTL is that you can control the total disk space usage and prevent the machine from running out of disk. Moreover, query performance may degrade as the total number of files grows, and memory usage also increases as there are more files. Timely removal of such files helps keep query performance high and memory usage low.
+
+The default unit of TTL is milliseconds. Even if the time precision in the configuration file is changed to another unit, the TTL is still specified in milliseconds.
+
+When setting a TTL, the system looks for all devices included in the given path and sets the TTL for those devices. The system deletes expired data at device granularity.
+After device data expires, it is no longer queryable. The data in disk files is not guaranteed to be deleted immediately, but it is guaranteed to be deleted eventually.
+Due to operational costs, expired data is not physically deleted right after expiring; physical deletion is delayed until compaction.
+Therefore, before the data is physically deleted, reducing or lifting the TTL may cause data that was previously invisible due to TTL to reappear.
+The system can only hold up to 1000 TTL rules; when this limit is reached, some TTL rules must be deleted before new rules can be set.
+
+#### TTL Path Rule
+The path can only be a prefix path (i.e., the path cannot contain \*, except \*\* at the last level).
+The path is used to match devices; users may also specify a path without asterisks to denote a specific database or device.
+When the path does not contain asterisks, the system checks whether it matches a database; if it does, both the path and path.\*\* are set at the same time. Note: device TTL settings do not verify the existence of metadata, i.e., it is allowed to set a TTL for a non-existent device.
+```
+qualified paths:
+root.**
+root.db.**
+root.db.group1.**
+root.db
+root.db.group1.d1
+
+unqualified paths:
+root.*.db
+root.**.db.*
+root.db.*
+```
+#### TTL Applicable Rules
+When a device is subject to multiple TTL rules, the more precise, longer-prefix rules take precedence. For example, for the device "root.bj.hd.dist001.turbine001", the rule "root.bj.hd.dist001.turbine001" takes precedence over "root.bj.hd.dist001.\*\*", and the rule "root.bj.hd.dist001.\*\*" takes precedence over "root.bj.hd.\*\*".
+#### Set TTL
+The set ttl operation can be understood as setting a TTL rule; for example, setting a TTL on root.sg.group1.** is equivalent to mounting a TTL for all devices that can match this path pattern.
+The unset ttl operation unmounts the TTL from the corresponding path pattern; if there is no corresponding TTL, nothing is done.
+If you want the TTL to be infinitely large, you can use the INF keyword.
+The SQL Statement for setting TTL is as follow:
+```
+set ttl to pathPattern 360000;
+```
+This sets a Time to Live (TTL) of 360,000 milliseconds on the given pathPattern; the pathPattern must not contain a wildcard (\*) in the middle and must end with a double asterisk (\*\*). The pathPattern is used to match the corresponding devices.
+To maintain compatibility with older SQL syntax, if the user-provided pathPattern matches a database (db), the path pattern is automatically expanded to path.\*\*.
+For instance, writing "set ttl to root.sg 360000" is automatically transformed into "set ttl to root.sg.\*\* 360000", which sets the TTL for all devices under root.sg. However, if the specified pathPattern does not match a database, this logic does not apply. For example, writing "set ttl to root.sg.group 360000" is not expanded to "root.sg.group.\*\*", since root.sg.group does not match a database.
+It is also permissible to specify a particular device without any wildcard (\*).
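+
+For example, hedged sketches based on the rules above:
+
+```
+IoTDB> set ttl to root.ln.** 3600000
+IoTDB> set ttl to root.sgcc.wf03.wt01 INF
+```
+
+The first statement mounts a one-hour TTL (3,600,000 ms) on all devices under root.ln; the second uses the INF keyword to give a single device an infinite TTL.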
+#### Unset TTL
+
+To unset a TTL, we can use the following SQL statement:
+
+```
+IoTDB> unset ttl from root.ln
+```
+
+After unsetting the TTL, data of any age will be accepted in `root.ln`.
+```
+IoTDB> unset ttl from root.sgcc.**
+```
+
+Unset the TTL in the `root.sgcc` path.
+
+New syntax:
+```
+IoTDB> unset ttl from root.**
+```
+
+Old syntax:
+```
+IoTDB> unset ttl to root.**
+```
+
+Both statements above unset the TTL for all path patterns. There is no functional difference between the old and new syntax, and they are compatible with each other; the new syntax is simply more conventional in its wording.
+
+#### Show TTL
+
+To show TTLs, we can use the following SQL statements:
+
+show all ttl
+
+```
+IoTDB> SHOW ALL TTL
++--------------+--------+
+|          path|     TTL|
++--------------+--------+
+|       root.**|55555555|
+| root.sg2.a.**|44440000|
++--------------+--------+
+```
+
+show ttl on pathPattern
+```
+IoTDB> SHOW TTL ON root.db.**;
++--------------+--------+
+|          path|     TTL|
++--------------+--------+
+|    root.db.**|55555555|
+|  root.db.a.**|44440000|
++--------------+--------+
+```
+
+The SHOW ALL TTL example gives the TTL for all path patterns.
+The SHOW TTL ON pathPattern shows the TTL for the path pattern specified.
+
+Display the TTL of devices:
+```
+IoTDB> show devices
++---------------+---------+---------+
+| Device|IsAligned| TTL|
++---------------+---------+---------+
+|root.sg.device1| false| 36000000|
+|root.sg.device2| true| INF|
++---------------+---------+---------+
+```
+Every device has a TTL; it can never be null. INF represents an infinite TTL.
+
+
+## Device Template
+
+IoTDB supports the device template function, enabling different entities of the same type to share metadata, reduce the memory usage of metadata, and simplify the management of numerous entities and measurements.
+
+
+### Create Device Template
+
+The SQL syntax for creating a metadata template is as follows:
+
+```sql
+CREATE DEVICE TEMPLATE <templateName> ALIGNED? '(' <measurement> <attributeClauses> [',' <measurement> <attributeClauses>]+ ')'
+```
+
+**Example 1:** Create a template containing two non-aligned timeseries
+
+```shell
+IoTDB> create device template t1 (temperature FLOAT encoding=RLE, status BOOLEAN encoding=PLAIN compression=SNAPPY)
+```
+
+**Example 2:** Create a template containing a group of aligned timeseries
+
+```shell
+IoTDB> create device template t2 aligned (lat FLOAT encoding=Gorilla, lon FLOAT encoding=Gorilla)
+```
+
+The `lat` and `lon` measurements are aligned.
+
+![img](https://alioss.timecho.com/docs/img/%E6%A8%A1%E6%9D%BF.png)
+
+![img](https://alioss.timecho.com/docs/img/templateEN.jpg)
+
+### Set Device Template
+
+After a device template is created, it should be set to a specific path before creating related timeseries or inserting data.
+
+**Ensure that the related database has been created before setting a template.**
+
+**It is recommended to set a device template at the database path. Setting a device template on a path above the database level is not suggested.**
+
+**It is forbidden to create timeseries under a path where a device template has been set. A device template shall not be set on a prefix path of an existing timeseries.**
+
+The SQL statement for setting a device template is as follows:
+
+```shell
+IoTDB> set device template t1 to root.sg1.d1
+```
+
+### Activate Device Template
+
+After setting the device template, and with auto create schema enabled in the system, you can insert data directly and the template is activated automatically. For example, suppose there's a database root.sg1 and t1 has been set to root.sg1.d1; then timeseries like root.sg1.d1.temperature and root.sg1.d1.status are available and data points can be inserted.
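+
+For example, a minimal hedged sketch, assuming auto create schema is enabled (the measurement names come from template t1 above):
+
+```shell
+IoTDB> insert into root.sg1.d1(timestamp, temperature, status) values (1, 36.5, true)
+```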
+
+
+**Attention**: Before data is inserted, or if auto create schema is disabled, the timeseries defined by the device template will not be created. You can use the following SQL statement to create the timeseries, i.e. activate the device template, before inserting data:
+
+```shell
+IoTDB> create timeseries using device template on root.sg1.d1
+```
+
+**Example:** Execute the following statement
+
+```shell
+IoTDB> set device template t1 to root.sg1.d1
+IoTDB> set device template t2 to root.sg1.d2
+IoTDB> create timeseries using device template on root.sg1.d1
+IoTDB> create timeseries using device template on root.sg1.d2
+```
+
+Show the time series:
+
+```sql
+show timeseries root.sg1.**
+```
+
+```shell
++-----------------------+-----+-------------+--------+--------+-----------+----+----------+--------+-------------------+
+| timeseries|alias| database|dataType|encoding|compression|tags|attributes|deadband|deadband parameters|
++-----------------------+-----+-------------+--------+--------+-----------+----+----------+--------+-------------------+
+|root.sg1.d1.temperature| null| root.sg1| FLOAT| RLE| SNAPPY|null| null| null| null|
+| root.sg1.d1.status| null| root.sg1| BOOLEAN| PLAIN| SNAPPY|null| null| null| null|
+| root.sg1.d2.lon| null| root.sg1| FLOAT| GORILLA| SNAPPY|null| null| null| null|
+| root.sg1.d2.lat| null| root.sg1| FLOAT| GORILLA| SNAPPY|null| null| null| null|
++-----------------------+-----+-------------+--------+--------+-----------+----+----------+--------+-------------------+
+```
+
+Show the devices:
+
+```sql
+show devices root.sg1.**
+```
+
+```shell
++---------------+---------+
+| devices|isAligned|
++---------------+---------+
+| root.sg1.d1| false|
+| root.sg1.d2| true|
++---------------+---------+
+```
+
+### Show Device Template
+
+- Show all device templates
+
+The SQL statement looks like this:
+
+```shell
+IoTDB> show device templates
+```
+
+The execution result is as follows:
+
+```shell
++-------------+
+|template name|
++-------------+
+| t2|
+| t1|
++-------------+
+```
+
+- Show nodes in a device template
+
+The SQL statement looks like this:
+
+```shell
+IoTDB> show nodes in device template t1
+```
+
+The execution result is as follows:
+
+```shell
++-----------+--------+--------+-----------+
+|child nodes|dataType|encoding|compression|
++-----------+--------+--------+-----------+
+|temperature| FLOAT| RLE| SNAPPY|
+| status| BOOLEAN| PLAIN| SNAPPY|
++-----------+--------+--------+-----------+
+```
+
+- Show the path prefix where a device template is set
+
+```shell
+IoTDB> show paths set device template t1
+```
+
+The execution result is as follows:
+
+```shell
++-----------+
+|child paths|
++-----------+
+|root.sg1.d1|
++-----------+
+```
+
+- Show the path prefix where a device template is used (i.e. the time series has been created)
+
+```shell
+IoTDB> show paths using device template t1
+```
+
+The execution result is as follows:
+
+```shell
++-----------+
+|child paths|
++-----------+
+|root.sg1.d1|
++-----------+
+```
+
+### Deactivate Device Template
+
+To delete a group of timeseries represented by a device template, i.e. to deactivate the device template, use the following SQL statement:
+
+```shell
+IoTDB> delete timeseries of device template t1 from root.sg1.d1
+```
+
+or
+
+```shell
+IoTDB> deactivate device template t1 from root.sg1.d1
+```
+
+Deactivation also supports batch processing:
+
+```shell
+IoTDB> delete timeseries of device template t1 from root.sg1.*, root.sg2.*
+```
+
+or
+
+```shell
+IoTDB> deactivate device template t1 from root.sg1.*, root.sg2.*
+```
+
+If the template name is not provided in the SQL statement, all template activations on paths matched by the given path pattern will be removed.
+
+### Unset Device Template
+
+The SQL statement for unsetting a device template is as follows:
+
+```shell
+IoTDB> unset device template t1 from root.sg1.d1
+```
+
+**Attention**: Before unsetting a device template, it should be guaranteed that none of the timeseries represented by the target device template still exist. This can be achieved with the deactivation operation.
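+
+For example, a hedged sketch of the full sequence, deactivating before unsetting:
+
+```shell
+IoTDB> deactivate device template t1 from root.sg1.d1
+IoTDB> unset device template t1 from root.sg1.d1
+```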
+
+### Drop Device Template
+
+The SQL statement for dropping a device template is as follows:
+
+```shell
+IoTDB> drop device template t1
+```
+
+**Attention**: Dropping a template that is still set to some path is not supported; unset it first.
+
+### Alter Device Template
+
+In a scenario where measurements need to be added, you can modify the template to add measurements to all devices using the device template.
+
+The SQL statement for altering a device template is as follows (the measurement name `remark` is illustrative):
+
+```shell
+IoTDB> alter device template t1 add (speed FLOAT encoding=RLE, remark TEXT encoding=PLAIN compression=SNAPPY)
+```
+
+**When data is inserted into a device that has a device template set on a related prefix path, any measurements in the insertion that are not present in the template will be automatically added to it.**
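+
+For instance, a hypothetical sketch: `humidity` is not defined in template t1, so the following insert would automatically add it to the template (assuming auto create schema is enabled):
+
+```shell
+IoTDB> insert into root.sg1.d1(timestamp, humidity) values (1, 45.0)
+```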
+
+## Timeseries Management
+
+### Create Timeseries
+
+According to the storage model selected before, we can create corresponding timeseries in the two databases respectively. The SQL statements for creating timeseries are as follows:
+
+```
+IoTDB > create timeseries root.ln.wf01.wt01.status with datatype=BOOLEAN,encoding=PLAIN
+IoTDB > create timeseries root.ln.wf01.wt01.temperature with datatype=FLOAT,encoding=RLE
+IoTDB > create timeseries root.ln.wf02.wt02.hardware with datatype=TEXT,encoding=PLAIN
+IoTDB > create timeseries root.ln.wf02.wt02.status with datatype=BOOLEAN,encoding=PLAIN
+IoTDB > create timeseries root.sgcc.wf03.wt01.status with datatype=BOOLEAN,encoding=PLAIN
+IoTDB > create timeseries root.sgcc.wf03.wt01.temperature with datatype=FLOAT,encoding=RLE
+```
+
+From v0.13, you can use a simplified version of the SQL statements to create timeseries:
+
+```
+IoTDB > create timeseries root.ln.wf01.wt01.status BOOLEAN encoding=PLAIN
+IoTDB > create timeseries root.ln.wf01.wt01.temperature FLOAT encoding=RLE
+IoTDB > create timeseries root.ln.wf02.wt02.hardware TEXT encoding=PLAIN
+IoTDB > create timeseries root.ln.wf02.wt02.status BOOLEAN encoding=PLAIN
+IoTDB > create timeseries root.sgcc.wf03.wt01.status BOOLEAN encoding=PLAIN
+IoTDB > create timeseries root.sgcc.wf03.wt01.temperature FLOAT encoding=RLE
+```
+
+Notice that when in the CREATE TIMESERIES statement the encoding method conflicts with the data type, the system gives the corresponding error prompt as shown below:
+
+```
+IoTDB > create timeseries root.ln.wf02.wt02.status WITH DATATYPE=BOOLEAN, ENCODING=TS_2DIFF
+error: encoding TS_2DIFF does not support BOOLEAN
+```
+
+Please refer to [Encoding](../Basic-Concept/Encoding-and-Compression.md) for correspondence between data type and encoding.
+
+### Create Aligned Timeseries
+
+The SQL statement for creating a group of aligned timeseries is as follows:
+
+```
+IoTDB> CREATE ALIGNED TIMESERIES root.ln.wf01.GPS(latitude FLOAT encoding=PLAIN compressor=SNAPPY, longitude FLOAT encoding=PLAIN compressor=SNAPPY)
+```
+
+You can set different datatype, encoding, and compression for the timeseries in a group of aligned timeseries.
+
+It is also supported to set an alias, tag, and attribute for aligned timeseries.
+
+### Delete Timeseries
+
+To delete the timeseries we created before, we can use the `(DELETE | DROP) TIMESERIES` statement.
+
+The usage is as follows:
+
+```
+IoTDB> delete timeseries root.ln.wf01.wt01.status
+IoTDB> delete timeseries root.ln.wf01.wt01.temperature, root.ln.wf02.wt02.hardware
+IoTDB> delete timeseries root.ln.wf02.*
+IoTDB> drop timeseries root.ln.wf02.*
+```
+
+### Show Timeseries
+
+* SHOW LATEST? TIMESERIES pathPattern? whereClause? limitClause?
+
+  Four optional clauses can be added to SHOW TIMESERIES; the statement returns information about the matching time series.
+
+Timeseries information includes: timeseries path, alias of measurement, database it belongs to, data type, encoding type, compression type, tags and attributes.
+
+Examples:
+
+* SHOW TIMESERIES
+
+ presents all timeseries information in JSON form
+
+* SHOW TIMESERIES <`PathPattern`>
+
+ returns all timeseries information matching the given <`PathPattern`>. SQL statements are as follows:
+
+```
+IoTDB> show timeseries root.**
+IoTDB> show timeseries root.ln.**
+```
+
+The results are shown below respectively:
+
+```
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+| timeseries| alias| database|dataType|encoding|compression| tags| attributes|deadband|deadband parameters|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+|root.sgcc.wf03.wt01.temperature| null| root.sgcc| FLOAT| RLE| SNAPPY| null| null| null| null|
+| root.sgcc.wf03.wt01.status| null| root.sgcc| BOOLEAN| PLAIN| SNAPPY| null| null| null| null|
+| root.turbine.d1.s1|newAlias| root.turbine| FLOAT| RLE| SNAPPY|{"newTag1":"newV1","tag4":"v4","tag3":"v3"}|{"attr2":"v2","attr1":"newV1","attr4":"v4","attr3":"v3"}| null| null|
+| root.ln.wf02.wt02.hardware| null| root.ln| TEXT| PLAIN| SNAPPY| null| null| null| null|
+| root.ln.wf02.wt02.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY| null| null| null| null|
+| root.ln.wf01.wt01.temperature| null| root.ln| FLOAT| RLE| SNAPPY| null| null| null| null|
+| root.ln.wf01.wt01.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY| null| null| null| null|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+Total line number = 7
+It costs 0.016s
+
++-----------------------------+-----+-------------+--------+--------+-----------+----+----------+--------+-------------------+
+| timeseries|alias| database|dataType|encoding|compression|tags|attributes|deadband|deadband parameters|
++-----------------------------+-----+-------------+--------+--------+-----------+----+----------+--------+-------------------+
+| root.ln.wf02.wt02.hardware| null| root.ln| TEXT| PLAIN| SNAPPY|null| null| null| null|
+| root.ln.wf02.wt02.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY|null| null| null| null|
+|root.ln.wf01.wt01.temperature| null| root.ln| FLOAT| RLE| SNAPPY|null| null| null| null|
+| root.ln.wf01.wt01.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY|null| null| null| null|
++-----------------------------+-----+-------------+--------+--------+-----------+----+----------+--------+-------------------+
+Total line number = 4
+It costs 0.004s
+```
+
+* SHOW TIMESERIES LIMIT INT OFFSET INT
+
+  returns the timeseries information starting from the offset, limiting the number of series returned. For example,
+
+```
+show timeseries root.ln.** limit 10 offset 10
+```
+
+* SHOW TIMESERIES WHERE TIMESERIES contains 'containStr'
+
+ The query result set is filtered by string fuzzy matching based on the names of the timeseries. For example:
+
+```
+show timeseries root.ln.** where timeseries contains 'wf01.wt'
+```
+
+The result is shown below:
+
+```
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+| timeseries| alias| database|dataType|encoding|compression| tags| attributes|deadband|deadband parameters|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+| root.ln.wf01.wt01.temperature| null| root.ln| FLOAT| RLE| SNAPPY| null| null| null| null|
+| root.ln.wf01.wt01.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY| null| null| null| null|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+Total line number = 2
+It costs 0.016s
+```
+
+* SHOW TIMESERIES WHERE DataType=type
+
+ The query result set is filtered by data type. For example:
+
+```
+show timeseries root.ln.** where dataType=FLOAT
+```
+
+The result is shown below:
+
+```
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+| timeseries| alias| database|dataType|encoding|compression| tags| attributes|deadband|deadband parameters|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+|root.sgcc.wf03.wt01.temperature| null| root.sgcc| FLOAT| RLE| SNAPPY| null| null| null| null|
+| root.turbine.d1.s1|newAlias| root.turbine| FLOAT| RLE| SNAPPY|{"newTag1":"newV1","tag4":"v4","tag3":"v3"}|{"attr2":"v2","attr1":"newV1","attr4":"v4","attr3":"v3"}| null| null|
+| root.ln.wf01.wt01.temperature| null| root.ln| FLOAT| RLE| SNAPPY| null| null| null| null|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+Total line number = 3
+It costs 0.016s
+
+```
+
+
+* SHOW LATEST TIMESERIES
+
+  returns all timeseries information, sorted in descending order by each series' latest timestamp
+
+It is worth noting that when the queried path does not exist, the system will return no timeseries.
+
+
+### Count Timeseries
+
+IoTDB is able to use `COUNT TIMESERIES <PathPattern>` to count the number of timeseries matching the path. SQL statements are as follows:
+
+* `WHERE` condition could be used to fuzzy match a time series name with the following syntax: `COUNT TIMESERIES WHERE TIMESERIES contains 'containStr'`.
+* `WHERE` condition could be used to filter the result by data type with the syntax: `COUNT TIMESERIES WHERE DataType='<dataType>'`.
+* `WHERE` condition could be used to filter the result by tags with the syntax: `COUNT TIMESERIES WHERE TAGS(key)='value'` or `COUNT TIMESERIES WHERE TAGS(key) contains 'value'`.
+* `LEVEL` could be defined to count the number of timeseries of each node at the given level in the current Metadata Tree. This could be used to query the number of sensors under each device. The grammar is: `COUNT TIMESERIES GROUP BY LEVEL=<INTEGER>`.
+
+
+```
+IoTDB > COUNT TIMESERIES root.**
+IoTDB > COUNT TIMESERIES root.ln.**
+IoTDB > COUNT TIMESERIES root.ln.*.*.status
+IoTDB > COUNT TIMESERIES root.ln.wf01.wt01.status
+IoTDB > COUNT TIMESERIES root.** WHERE TIMESERIES contains 'sgcc'
+IoTDB > COUNT TIMESERIES root.** WHERE DATATYPE = INT64
+IoTDB > COUNT TIMESERIES root.** WHERE TAGS(unit) contains 'c'
+IoTDB > COUNT TIMESERIES root.** WHERE TAGS(unit) = 'c'
+IoTDB > COUNT TIMESERIES root.** WHERE TIMESERIES contains 'sgcc' group by level = 1
+```
+
+For example, if there are several timeseries (use `show timeseries` to show all timeseries):
+
+```
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+| timeseries| alias| database|dataType|encoding|compression| tags| attributes|deadband|deadband parameters|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+|root.sgcc.wf03.wt01.temperature| null| root.sgcc| FLOAT| RLE| SNAPPY| null| null| null| null|
+| root.sgcc.wf03.wt01.status| null| root.sgcc| BOOLEAN| PLAIN| SNAPPY| null| null| null| null|
+| root.turbine.d1.s1|newAlias| root.turbine| FLOAT| RLE| SNAPPY|{"newTag1":"newV1","tag4":"v4","tag3":"v3"}|{"attr2":"v2","attr1":"newV1","attr4":"v4","attr3":"v3"}| null| null|
+| root.ln.wf02.wt02.hardware| null| root.ln| TEXT| PLAIN| SNAPPY| {"unit":"c"}| null| null| null|
+| root.ln.wf02.wt02.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY| {"description":"test1"}| null| null| null|
+| root.ln.wf01.wt01.temperature| null| root.ln| FLOAT| RLE| SNAPPY| null| null| null| null|
+| root.ln.wf01.wt01.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY| null| null| null| null|
++-------------------------------+--------+-------------+--------+--------+-----------+-------------------------------------------+--------------------------------------------------------+--------+-------------------+
+Total line number = 7
+It costs 0.004s
+```
+
+Then the Metadata Tree will be as below:
+
+
+
+As can be seen, `root` is considered as `LEVEL=0`. So when you enter statements such as:
+
+```
+IoTDB > COUNT TIMESERIES root.** GROUP BY LEVEL=1
+IoTDB > COUNT TIMESERIES root.ln.** GROUP BY LEVEL=2
+IoTDB > COUNT TIMESERIES root.ln.wf01.* GROUP BY LEVEL=2
+```
+
+You will get the following results:
+
+```
++------------+-----------------+
+| column|count(timeseries)|
++------------+-----------------+
+| root.sgcc| 2|
+|root.turbine| 1|
+| root.ln| 4|
++------------+-----------------+
+Total line number = 3
+It costs 0.002s
+
++------------+-----------------+
+| column|count(timeseries)|
++------------+-----------------+
+|root.ln.wf02| 2|
+|root.ln.wf01| 2|
++------------+-----------------+
+Total line number = 2
+It costs 0.002s
+
++------------+-----------------+
+| column|count(timeseries)|
++------------+-----------------+
+|root.ln.wf01| 2|
++------------+-----------------+
+Total line number = 1
+It costs 0.002s
+```
+
+> Note: The path of timeseries is just a filter condition, which has no relationship with the definition of level.
+
+### Active Timeseries Query
+
+By adding WHERE time filter conditions to the existing SHOW/COUNT TIMESERIES statements, we can obtain the time series that have data within the specified time range.
+
+It is important to note that in metadata queries with time filters, views are not considered; only the time series actually stored in the TsFile are taken into account.
+
+An example usage is as follows:
+```
+IoTDB> insert into root.sg.data(timestamp, s1,s2) values(15000, 1, 2);
+IoTDB> insert into root.sg.data2(timestamp, s1,s2) values(15002, 1, 2);
+IoTDB> insert into root.sg.data3(timestamp, s1,s2) values(16000, 1, 2);
+IoTDB> show timeseries;
++----------------+-----+--------+--------+--------+-----------+----+----------+--------+------------------+--------+
+| Timeseries|Alias|Database|DataType|Encoding|Compression|Tags|Attributes|Deadband|DeadbandParameters|ViewType|
++----------------+-----+--------+--------+--------+-----------+----+----------+--------+------------------+--------+
+| root.sg.data.s1| null| root.sg| FLOAT| GORILLA| LZ4|null| null| null| null| BASE|
+| root.sg.data.s2| null| root.sg| FLOAT| GORILLA| LZ4|null| null| null| null| BASE|
+|root.sg.data3.s1| null| root.sg| FLOAT| GORILLA| LZ4|null| null| null| null| BASE|
+|root.sg.data3.s2| null| root.sg| FLOAT| GORILLA| LZ4|null| null| null| null| BASE|
+|root.sg.data2.s1| null| root.sg| FLOAT| GORILLA| LZ4|null| null| null| null| BASE|
+|root.sg.data2.s2| null| root.sg| FLOAT| GORILLA| LZ4|null| null| null| null| BASE|
++----------------+-----+--------+--------+--------+-----------+----+----------+--------+------------------+--------+
+
+IoTDB> show timeseries where time >= 15000 and time < 16000;
++----------------+-----+--------+--------+--------+-----------+----+----------+--------+------------------+--------+
+| Timeseries|Alias|Database|DataType|Encoding|Compression|Tags|Attributes|Deadband|DeadbandParameters|ViewType|
++----------------+-----+--------+--------+--------+-----------+----+----------+--------+------------------+--------+
+| root.sg.data.s1| null| root.sg| FLOAT| GORILLA| LZ4|null| null| null| null| BASE|
+| root.sg.data.s2| null| root.sg| FLOAT| GORILLA| LZ4|null| null| null| null| BASE|
+|root.sg.data2.s1| null| root.sg| FLOAT| GORILLA| LZ4|null| null| null| null| BASE|
+|root.sg.data2.s2| null| root.sg| FLOAT| GORILLA| LZ4|null| null| null| null| BASE|
++----------------+-----+--------+--------+--------+-----------+----+----------+--------+------------------+--------+
+
+IoTDB> count timeseries where time >= 15000 and time < 16000;
++-----------------+
+|count(timeseries)|
++-----------------+
+| 4|
++-----------------+
+```
+A time series is considered active if its data can be queried normally; time series whose data has been inserted but then deleted are not counted as active.
+
+### Tag and Attribute Management
+
+We can also add an alias, extra tag and attribute information while creating one timeseries.
+
+The differences between tag and attribute are:
+
+* Tags can be used to query the paths of timeseries; an inverted index is maintained in memory for each tag: Tag -> Timeseries
+* Attributes can only be queried through the timeseries path: Timeseries -> Attribute
+
+The SQL statements for creating timeseries with extra tag and attribute information are extended as follows:
+
+```
+create timeseries root.turbine.d1.s1(temprature) with datatype=FLOAT, encoding=RLE, compression=SNAPPY tags(tag1=v1, tag2=v2) attributes(attr1=v1, attr2=v2)
+```
+
+The `temprature` in the brackets is an alias for the sensor `s1`. So we can use `temprature` to replace `s1` anywhere.
+
+> IoTDB also supports using the AS keyword to set an alias. The difference between the two is: an alias set with AS replaces the whole time series name in the result set, is temporary, and is not bound to the time series; the alias above applies only to the sensor, is bound to it, and can be used interchangeably with the original sensor name.
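+
+For example, a minimal sketch contrasting the two alias styles (assuming the series created above exists):
+
+```
+select temprature from root.turbine.d1
+select s1 as temp from root.turbine.d1
+```
+
+The first statement uses the bound sensor alias in place of `s1`; the second sets a temporary result-column alias with AS, valid for that query only.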
+
+> Notice that the size of the extra tag and attribute information shouldn't exceed the `tag_attribute_total_size`.
+
+We can update the tag information after creating it as following:
+
+* Rename the tag/attribute key
+
+```
+ALTER timeseries root.turbine.d1.s1 RENAME tag1 TO newTag1
+```
+
+* Reset the tag/attribute value
+
+```
+ALTER timeseries root.turbine.d1.s1 SET newTag1=newV1, attr1=newV1
+```
+
+* Delete the existing tag/attribute
+
+```
+ALTER timeseries root.turbine.d1.s1 DROP tag1, tag2
+```
+
+* Add new tags
+
+```
+ALTER timeseries root.turbine.d1.s1 ADD TAGS tag3=v3, tag4=v4
+```
+
+* Add new attributes
+
+```
+ALTER timeseries root.turbine.d1.s1 ADD ATTRIBUTES attr3=v3, attr4=v4
+```
+
+* Upsert alias, tags and attributes
+
+> add alias or a new key-value if the alias or key doesn't exist, otherwise, update the old one with new value.
+
+```
+ALTER timeseries root.turbine.d1.s1 UPSERT ALIAS=newAlias TAGS(tag3=v3, tag4=v4) ATTRIBUTES(attr3=v3, attr4=v4)
+```
+
+* Show timeseries using tags. Use TAGS(tagKey) to identify the tags used as filter key
+
+```
+SHOW TIMESERIES (<`PathPattern`>)? timeseriesWhereClause
+```
+
+returns all the timeseries information that satisfy the where condition and match the pathPattern. SQL statements are as follows:
+
+```
+ALTER timeseries root.ln.wf02.wt02.hardware ADD TAGS unit=c
+ALTER timeseries root.ln.wf02.wt02.status ADD TAGS description=test1
+show timeseries root.ln.** where TAGS(unit)='c'
+show timeseries root.ln.** where TAGS(description) contains 'test1'
+```
+
+The results are shown below respectively:
+
+```
++--------------------------+-----+-------------+--------+--------+-----------+------------+----------+--------+-------------------+
+| timeseries|alias| database|dataType|encoding|compression| tags|attributes|deadband|deadband parameters|
++--------------------------+-----+-------------+--------+--------+-----------+------------+----------+--------+-------------------+
+|root.ln.wf02.wt02.hardware| null| root.ln| TEXT| PLAIN| SNAPPY|{"unit":"c"}| null| null| null|
++--------------------------+-----+-------------+--------+--------+-----------+------------+----------+--------+-------------------+
+Total line number = 1
+It costs 0.005s
+
++------------------------+-----+-------------+--------+--------+-----------+-----------------------+----------+--------+-------------------+
+| timeseries|alias| database|dataType|encoding|compression| tags|attributes|deadband|deadband parameters|
++------------------------+-----+-------------+--------+--------+-----------+-----------------------+----------+--------+-------------------+
+|root.ln.wf02.wt02.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY|{"description":"test1"}| null| null| null|
++------------------------+-----+-------------+--------+--------+-----------+-----------------------+----------+--------+-------------------+
+Total line number = 1
+It costs 0.004s
+```
+
+- count timeseries using tags
+
+```
+COUNT TIMESERIES (<`PathPattern`>)? timeseriesWhereClause
+COUNT TIMESERIES (<`PathPattern`>)? timeseriesWhereClause GROUP BY LEVEL=
+```
+
+returns the number of timeseries that satisfy the where condition and match the pathPattern. SQL statements are as follows:
+
+```
+count timeseries
+count timeseries root.** where TAGS(unit)='c'
+count timeseries root.** where TAGS(unit)='c' group by level = 2
+```
+
+The results are shown below respectively:
+
+```
+IoTDB> count timeseries
++-----------------+
+|count(timeseries)|
++-----------------+
+| 6|
++-----------------+
+Total line number = 1
+It costs 0.019s
+IoTDB> count timeseries root.** where TAGS(unit)='c'
++-----------------+
+|count(timeseries)|
++-----------------+
+| 2|
++-----------------+
+Total line number = 1
+It costs 0.020s
+IoTDB> count timeseries root.** where TAGS(unit)='c' group by level = 2
++--------------+-----------------+
+| column|count(timeseries)|
++--------------+-----------------+
+| root.ln.wf02| 2|
+| root.ln.wf01| 0|
+|root.sgcc.wf03| 0|
++--------------+-----------------+
+Total line number = 3
+It costs 0.011s
+```
+
+> Notice that only one condition is supported in the where clause: either an equality filter or a `contains` filter. In both cases, the property in the where condition must be a tag.
+
+* Create aligned timeseries with tags and attributes
+
+```
+create aligned timeseries root.sg1.d1(s1 INT32 tags(tag1=v1, tag2=v2) attributes(attr1=v1, attr2=v2), s2 DOUBLE tags(tag3=v3, tag4=v4) attributes(attr3=v3, attr4=v4))
+```
+
+The execution result is as follows:
+
+```
+IoTDB> show timeseries
++--------------+-----+-------------+--------+--------+-----------+-------------------------+---------------------------+--------+-------------------+
+| timeseries|alias| database|dataType|encoding|compression| tags| attributes|deadband|deadband parameters|
++--------------+-----+-------------+--------+--------+-----------+-------------------------+---------------------------+--------+-------------------+
+|root.sg1.d1.s1| null| root.sg1| INT32| RLE| SNAPPY|{"tag1":"v1","tag2":"v2"}|{"attr2":"v2","attr1":"v1"}| null| null|
+|root.sg1.d1.s2| null| root.sg1| DOUBLE| GORILLA| SNAPPY|{"tag4":"v4","tag3":"v3"}|{"attr4":"v4","attr3":"v3"}| null| null|
++--------------+-----+-------------+--------+--------+-----------+-------------------------+---------------------------+--------+-------------------+
+```
+
+Querying by tag is supported:
+
+```
+IoTDB> show timeseries where TAGS(tag1)='v1'
++--------------+-----+-------------+--------+--------+-----------+-------------------------+---------------------------+--------+-------------------+
+| timeseries|alias| database|dataType|encoding|compression| tags| attributes|deadband|deadband parameters|
++--------------+-----+-------------+--------+--------+-----------+-------------------------+---------------------------+--------+-------------------+
+|root.sg1.d1.s1| null| root.sg1| INT32| RLE| SNAPPY|{"tag1":"v1","tag2":"v2"}|{"attr2":"v2","attr1":"v1"}| null| null|
++--------------+-----+-------------+--------+--------+-----------+-------------------------+---------------------------+--------+-------------------+
+```
+
+The tag and attribute update operations described above are also supported for aligned timeseries.
+
+## Node Management
+
+### Show Child Paths
+
+```
+SHOW CHILD PATHS pathPattern
+```
+
+Return all child paths of the paths matching pathPattern, along with their node types.
+
+node types: ROOT -> DB INTERNAL -> DATABASE -> INTERNAL -> DEVICE -> TIMESERIES
+
+
+Example:
+
+* return the child paths of root.ln: `show child paths root.ln`
+
+```
++------------+----------+
+| child paths|node types|
++------------+----------+
+|root.ln.wf01| INTERNAL|
+|root.ln.wf02| INTERNAL|
++------------+----------+
+Total line number = 2
+It costs 0.002s
+```
+
+> get all paths in the form of root.xx.xx.xx: `show child paths root.xx.xx`
+
+### Show Child Nodes
+
+```
+SHOW CHILD NODES pathPattern
+```
+
+Return all child nodes of the pathPattern.
+
+Example:
+
+* return the child nodes of root: `show child nodes root`
+
+```
++------------+
+| child nodes|
++------------+
+| ln|
++------------+
+```
+
+* return the child nodes of root.ln: `show child nodes root.ln`
+
+```
++------------+
+| child nodes|
++------------+
+| wf01|
+| wf02|
++------------+
+```
+
+### Count Nodes
+
+IoTDB is able to use `COUNT NODES <PathPattern> LEVEL=<INTEGER>` to count the number of nodes at
+ the given level in the current Metadata Tree considering a given pattern. IoTDB will find paths that
+ match the pattern and count the distinct nodes at the specified level among the matched paths.
+ This could be used to query the number of devices with specified measurements. The usage is as
+ follows:
+
+```
+IoTDB > COUNT NODES root.** LEVEL=2
+IoTDB > COUNT NODES root.ln.** LEVEL=2
+IoTDB > COUNT NODES root.ln.wf01.** LEVEL=3
+IoTDB > COUNT NODES root.**.temperature LEVEL=3
+```
+
+For the above-mentioned example and Metadata Tree, you will get the following results:
+
+```
++------------+
+|count(nodes)|
++------------+
+| 4|
++------------+
+Total line number = 1
+It costs 0.003s
+
++------------+
+|count(nodes)|
++------------+
+| 2|
++------------+
+Total line number = 1
+It costs 0.002s
+
++------------+
+|count(nodes)|
++------------+
+| 1|
++------------+
+Total line number = 1
+It costs 0.002s
+
++------------+
+|count(nodes)|
++------------+
+| 2|
++------------+
+Total line number = 1
+It costs 0.002s
+```
+
+> Note: The path of timeseries is just a filter condition, which has no relationship with the definition of level.
+
+### Show Devices
+
+* SHOW DEVICES pathPattern? (WITH DATABASE)? devicesWhereClause? limitClause?
+
+Similar to `Show Timeseries`, IoTDB also supports two ways of viewing devices:
+
+* `SHOW DEVICES` statement presents all devices' information, which is equal to `SHOW DEVICES root.**`.
+* `SHOW DEVICES <PathPattern>` statement specifies the `PathPattern` and returns the information of the devices matching it.
+* `WHERE` condition supports `DEVICE contains 'xxx'` to do a fuzzy query based on the device name.
+
+SQL statement is as follows:
+
+```
+IoTDB> show devices
+IoTDB> show devices root.ln.**
+IoTDB> show devices root.ln.** where device contains 't'
+```
+
+You can get results below:
+
+```
++-------------------+---------+
+| devices|isAligned|
++-------------------+---------+
+| root.ln.wf01.wt01| false|
+| root.ln.wf02.wt02| false|
+|root.sgcc.wf03.wt01| false|
+| root.turbine.d1| false|
++-------------------+---------+
+Total line number = 4
+It costs 0.002s
+
++-----------------+---------+
+| devices|isAligned|
++-----------------+---------+
+|root.ln.wf01.wt01| false|
+|root.ln.wf02.wt02| false|
++-----------------+---------+
+Total line number = 2
+It costs 0.001s
+```
+
+`isAligned` indicates whether the timeseries under the device are aligned.
+
+To view devices' information with database, we can use `SHOW DEVICES WITH DATABASE` statement.
+
+* `SHOW DEVICES WITH DATABASE` statement presents all devices' information with their database.
+* `SHOW DEVICES <PathPattern> WITH DATABASE` statement specifies the `PathPattern` and returns the
+  information of the matching devices together with their database.
+
+SQL statement is as follows:
+
+```
+IoTDB> show devices with database
+IoTDB> show devices root.ln.** with database
+```
+
+You can get results below:
+
+```
++-------------------+-------------+---------+
+| devices| database|isAligned|
++-------------------+-------------+---------+
+| root.ln.wf01.wt01| root.ln| false|
+| root.ln.wf02.wt02| root.ln| false|
+|root.sgcc.wf03.wt01| root.sgcc| false|
+| root.turbine.d1| root.turbine| false|
++-------------------+-------------+---------+
+Total line number = 4
+It costs 0.003s
+
++-----------------+-------------+---------+
+| devices| database|isAligned|
++-----------------+-------------+---------+
+|root.ln.wf01.wt01| root.ln| false|
+|root.ln.wf02.wt02| root.ln| false|
++-----------------+-------------+---------+
+Total line number = 2
+It costs 0.001s
+```
+
+### Count Devices
+
+* COUNT DEVICES / COUNT DEVICES <PathPattern>
+
+The above statement is used to count the number of devices. At the same time, it is allowed to specify `PathPattern` to count the number of devices matching the `PathPattern`.
+
+SQL statement is as follows:
+
+```
+IoTDB> show devices
+IoTDB> count devices
+IoTDB> count devices root.ln.**
+```
+
+You can get results below:
+
+```
++-------------------+---------+
+| devices|isAligned|
++-------------------+---------+
+|root.sgcc.wf03.wt03| false|
+| root.turbine.d1| false|
+| root.ln.wf02.wt02| false|
+| root.ln.wf01.wt01| false|
++-------------------+---------+
+Total line number = 4
+It costs 0.024s
+
++--------------+
+|count(devices)|
++--------------+
+| 4|
++--------------+
+Total line number = 1
+It costs 0.004s
+
++--------------+
+|count(devices)|
++--------------+
+| 2|
++--------------+
+Total line number = 1
+It costs 0.004s
+```
+
+### Active Device Query
+
+Similar to the active timeseries query, time filter conditions can be added to device viewing and statistics to query active devices that have data within a certain time range. The definition of "active" here is the same as for active time series. An example usage is as follows:
+```
+IoTDB> insert into root.sg.data(timestamp, s1,s2) values(15000, 1, 2);
+IoTDB> insert into root.sg.data2(timestamp, s1,s2) values(15002, 1, 2);
+IoTDB> insert into root.sg.data3(timestamp, s1,s2) values(16000, 1, 2);
+IoTDB> show devices;
++-------------------+---------+
+| devices|isAligned|
++-------------------+---------+
+| root.sg.data| false|
+| root.sg.data2| false|
+| root.sg.data3| false|
++-------------------+---------+
+
+IoTDB> show devices where time >= 15000 and time < 16000;
++-------------------+---------+
+| devices|isAligned|
++-------------------+---------+
+| root.sg.data| false|
+| root.sg.data2| false|
++-------------------+---------+
+
+IoTDB> count devices where time >= 15000 and time < 16000;
++--------------+
+|count(devices)|
++--------------+
+| 2|
++--------------+
+```
\ No newline at end of file
diff --git a/src/UserGuide/V2.0.1/Tree/Basic-Concept/Query-Data.md b/src/UserGuide/V2.0.1/Tree/Basic-Concept/Query-Data.md
new file mode 100644
index 000000000..62fc3c9f9
--- /dev/null
+++ b/src/UserGuide/V2.0.1/Tree/Basic-Concept/Query-Data.md
@@ -0,0 +1,3009 @@
+
+# Query Data
+## OVERVIEW
+
+### Syntax Definition
+
+In IoTDB, `SELECT` statement is used to retrieve data from one or more selected time series. Here is the syntax definition of `SELECT` statement:
+
+```sql
+SELECT [LAST] selectExpr [, selectExpr] ...
+ [INTO intoItem [, intoItem] ...]
+ FROM prefixPath [, prefixPath] ...
+ [WHERE whereCondition]
+ [GROUP BY {
+ ([startTime, endTime), interval [, slidingStep]) |
+ LEVEL = levelNum [, levelNum] ... |
+ TAGS(tagKey [, tagKey] ... ) |
+ VARIATION(expression[,delta][,ignoreNull=true/false]) |
+ CONDITION(expression,[keep>/>=/=/<=]threshold[,ignoreNull=true/false]) |
+ SESSION(timeInterval) |
+ COUNT(expression, size[,ignoreNull=true/false])
+ }]
+ [HAVING havingCondition]
+ [ORDER BY sortKey {ASC | DESC}]
+    [FILL ({PREVIOUS | LINEAR | constant}) (, interval=DURATION_LITERAL)?]
+ [SLIMIT seriesLimit] [SOFFSET seriesOffset]
+ [LIMIT rowLimit] [OFFSET rowOffset]
+ [ALIGN BY {TIME | DEVICE}]
+```
+
+### Syntax Description
+
+#### `SELECT` clause
+
+- The `SELECT` clause specifies the output of the query, consisting of several `selectExpr`.
+- Each `selectExpr` defines one or more columns in the query result, which is an expression consisting of time series path suffixes, constants, functions, and operators.
+- Supports using `AS` to specify aliases for columns in the query result set.
+- Use the `LAST` keyword in the `SELECT` clause to specify that the query is the last query.
+
+#### `INTO` clause
+
+- `SELECT INTO` is used to write query results into a series of specified time series. The `INTO` clause specifies the target time series to which query results are written.
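+
+For example, a minimal sketch (the target series `root.ln_bak.wf01.wt01.temperature` is hypothetical) that writes query results into another series:
+
+```sql
+select temperature into root.ln_bak.wf01.wt01(temperature) from root.ln.wf01.wt01;
+```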
+
+#### `FROM` clause
+
+- The `FROM` clause contains the path prefix of one or more time series to be queried, and wildcards are supported.
+- When executing a query, the path prefix in the `FROM` clause and the suffix in the `SELECT` clause will be concatenated to obtain a complete query target time series.
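+
+For example, in the first query below the suffix `status` is concatenated with the prefix `root.ln.wf01.wt01`, so the series actually queried is `root.ln.wf01.wt01.status`; the second statement uses a wildcard in the prefix:
+
+```sql
+select status from root.ln.wf01.wt01;
+select status from root.ln.*.wt01;
+```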
+
+#### `WHERE` clause
+
+- The `WHERE` clause specifies the filtering conditions for data rows, consisting of a `whereCondition`.
+- `whereCondition` is a logical expression that evaluates to true for each row to be selected. If there is no `WHERE` clause, all rows will be selected.
+- In `whereCondition`, any IoTDB-supported functions and operators can be used except aggregate functions.
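+
+For example, a sketch combining a time filter with a value filter (no aggregate functions) on the series used throughout this document:
+
+```sql
+select temperature from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and temperature > 21.0;
+```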
+
+#### `GROUP BY` clause
+
+- The `GROUP BY` clause specifies how the time series are aggregated by segment or group.
+- Segmented aggregation refers to segmenting data in the row direction according to the time dimension, aiming at the time relationship between different data points in the same time series, and obtaining an aggregated value for each segment. Currently only **group by time**, **group by variation**, **group by condition**, **group by session** and **group by count** are supported, and more segmentation methods will be supported in the future.
+- Group aggregation groups different time series according to their potential business attributes. Each group contains several time series, and each group gets one aggregated value. Two grouping methods are supported: **group by path level** and **group by tag** (see the sketch after this list).
+- Segment aggregation and group aggregation can be mixed.
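+
+For example, a sketch of the two styles: the first statement segments one series into 3-minute windows (segmented aggregation), the second groups counts by path level (group aggregation):
+
+```sql
+select count(status) from root.ln.wf01.wt01 group by ([2017-11-01T00:00:00, 2017-11-01T00:12:00), 3m);
+select count(status) from root.ln.** group by level = 2;
+```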
+
+#### `HAVING` clause
+
+- The `HAVING` clause specifies the filter conditions for the aggregation results, consisting of a `havingCondition`.
+- `havingCondition` is a logical expression that evaluates to true for the aggregation results to be selected. If there is no `HAVING` clause, all aggregated results will be selected.
+- `HAVING` must be used together with aggregate functions and the `GROUP BY` clause, as in the sketch below.
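+
+For example, a sketch reusing the 3-minute windows from the previous example, keeping only the windows that contain more than two points:
+
+```sql
+select count(status) from root.ln.wf01.wt01 group by ([2017-11-01T00:00:00, 2017-11-01T00:12:00), 3m) having count(status) > 2;
+```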
+
+#### `ORDER BY` clause
+
+- The `ORDER BY` clause is used to specify how the result set is sorted.
+- In ALIGN BY TIME mode: results are sorted in ascending order of timestamp by default; `ORDER BY TIME DESC` can be used to sort the result set in descending order of timestamp.
+- In ALIGN BY DEVICE mode: results are arranged by device first, and within each device sorted in ascending order of timestamp. The ordering and priority can be adjusted with the `ORDER BY` clause.
+
+#### `FILL` clause
+
+- The `FILL` clause is used to specify the filling mode in the case of missing data, allowing users to fill in empty values for the result set of any query according to a specific method.
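+
+For example, a sketch that fills missing values with the previous non-null value; nulls arise where one of the two series has no point at a timestamp:
+
+```sql
+select temperature, status from root.sgcc.wf03.wt01 where time >= 2017-11-01T16:37:00.000 and time <= 2017-11-01T16:40:00.000 fill(previous);
+```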
+
+#### `SLIMIT` and `SOFFSET` clauses
+
+- `SLIMIT` specifies the number of columns of the query result, and `SOFFSET` specifies the starting column position of the query result display. `SLIMIT` and `SOFFSET` are only used to control value columns and have no effect on time and device columns.
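+
+For example, a sketch that skips the first value column and returns only the next one:
+
+```sql
+select * from root.ln.wf01.wt01 slimit 1 soffset 1;
+```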
+
+#### `LIMIT` and `OFFSET` clauses
+
+- `LIMIT` specifies the number of rows of the query result, and `OFFSET` specifies the starting row position of the query result display.
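+
+For example, a sketch that skips the first 5 rows of the result and returns the next 10:
+
+```sql
+select status, temperature from root.ln.wf01.wt01 limit 10 offset 5;
+```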
+
+#### `ALIGN BY` clause
+
+- The query result set is **ALIGN BY TIME** by default, including a time column and several value columns, and the timestamps of each column of data in each row are the same.
+- It also supports **ALIGN BY DEVICE**. The query result set contains a time column, a device column, and several value columns.
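+
+For example, a sketch of device alignment; the result has one row per (device, timestamp) pair, with value columns named by sensor:
+
+```sql
+select * from root.ln.** align by device;
+```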
+
+### Basic Examples
+
+#### Select a Column of Data Based on a Time Interval
+
+The SQL statement is:
+
+```sql
+select temperature from root.ln.wf01.wt01 where time < 2017-11-01T00:08:00.000
+```
+
+which means:
+
+The selected device is ln group wf01 plant wt01 device; the selected timeseries is the temperature sensor (temperature). The SQL statement requires that all temperature sensor values before the time point of "2017-11-01T00:08:00.000" be selected.
+
+The execution result of this SQL statement is as follows:
+
+```
++-----------------------------+-----------------------------+
+| Time|root.ln.wf01.wt01.temperature|
++-----------------------------+-----------------------------+
+|2017-11-01T00:00:00.000+08:00| 25.96|
+|2017-11-01T00:01:00.000+08:00| 24.36|
+|2017-11-01T00:02:00.000+08:00| 20.09|
+|2017-11-01T00:03:00.000+08:00| 20.18|
+|2017-11-01T00:04:00.000+08:00| 21.13|
+|2017-11-01T00:05:00.000+08:00| 22.72|
+|2017-11-01T00:06:00.000+08:00| 20.71|
+|2017-11-01T00:07:00.000+08:00| 21.45|
++-----------------------------+-----------------------------+
+Total line number = 8
+It costs 0.026s
+```
+
+#### Select Multiple Columns of Data Based on a Time Interval
+
+The SQL statement is:
+
+```sql
+select status, temperature from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000;
+```
+
+which means:
+
+The selected device is ln group wf01 plant wt01 device; the selected timeseries is "status" and "temperature". The SQL statement requires that the status and temperature sensor values between the time point of "2017-11-01T00:05:00.000" and "2017-11-01T00:12:00.000" be selected.
+
+The execution result of this SQL statement is as follows:
+
+```
++-----------------------------+------------------------+-----------------------------+
+| Time|root.ln.wf01.wt01.status|root.ln.wf01.wt01.temperature|
++-----------------------------+------------------------+-----------------------------+
+|2017-11-01T00:06:00.000+08:00| false| 20.71|
+|2017-11-01T00:07:00.000+08:00| false| 21.45|
+|2017-11-01T00:08:00.000+08:00| false| 22.58|
+|2017-11-01T00:09:00.000+08:00| false| 20.98|
+|2017-11-01T00:10:00.000+08:00| true| 25.52|
+|2017-11-01T00:11:00.000+08:00| false| 22.91|
++-----------------------------+------------------------+-----------------------------+
+Total line number = 6
+It costs 0.018s
+```
+
+#### Select Multiple Columns of Data for the Same Device According to Multiple Time Intervals
+
+IoTDB supports specifying multiple time interval conditions in a query. Users can combine time interval conditions at will according to their needs. For example, the SQL statement is:
+
+```sql
+select status,temperature from root.ln.wf01.wt01 where (time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000) or (time >= 2017-11-01T16:35:00.000 and time <= 2017-11-01T16:37:00.000);
+```
+
+which means:
+
+The selected device is ln group wf01 plant wt01 device; the selected timeseries is "status" and "temperature"; the statement specifies two different time intervals, namely "2017-11-01T00:05:00.000 to 2017-11-01T00:12:00.000" and "2017-11-01T16:35:00.000 to 2017-11-01T16:37:00.000". The SQL statement requires that the values of selected timeseries satisfying any time interval be selected.
+
+The execution result of this SQL statement is as follows:
+
+```
++-----------------------------+------------------------+-----------------------------+
+| Time|root.ln.wf01.wt01.status|root.ln.wf01.wt01.temperature|
++-----------------------------+------------------------+-----------------------------+
+|2017-11-01T00:06:00.000+08:00| false| 20.71|
+|2017-11-01T00:07:00.000+08:00| false| 21.45|
+|2017-11-01T00:08:00.000+08:00| false| 22.58|
+|2017-11-01T00:09:00.000+08:00| false| 20.98|
+|2017-11-01T00:10:00.000+08:00| true| 25.52|
+|2017-11-01T00:11:00.000+08:00| false| 22.91|
+|2017-11-01T16:35:00.000+08:00| true| 23.44|
+|2017-11-01T16:36:00.000+08:00| false| 21.98|
+|2017-11-01T16:37:00.000+08:00| false| 21.93|
++-----------------------------+------------------------+-----------------------------+
+Total line number = 9
+It costs 0.018s
+```
+
+
+#### Choose Multiple Columns of Data for Different Devices According to Multiple Time Intervals
+
+The system supports the selection of data in any column in a query, i.e., the selected columns can come from different devices. For example, the SQL statement is:
+
+```sql
+select wf01.wt01.status,wf02.wt02.hardware from root.ln where (time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000) or (time >= 2017-11-01T16:35:00.000 and time <= 2017-11-01T16:37:00.000);
+```
+
+which means:
+
+The selected timeseries are "the power supply status of ln group wf01 plant wt01 device" and "the hardware version of ln group wf02 plant wt02 device"; the statement specifies two different time intervals, namely "2017-11-01T00:05:00.000 to 2017-11-01T00:12:00.000" and "2017-11-01T16:35:00.000 to 2017-11-01T16:37:00.000". The SQL statement requires that the values of selected timeseries satisfying any time interval be selected.
+
+The execution result of this SQL statement is as follows:
+
+```
++-----------------------------+------------------------+--------------------------+
+| Time|root.ln.wf01.wt01.status|root.ln.wf02.wt02.hardware|
++-----------------------------+------------------------+--------------------------+
+|2017-11-01T00:06:00.000+08:00| false| v1|
+|2017-11-01T00:07:00.000+08:00| false| v1|
+|2017-11-01T00:08:00.000+08:00| false| v1|
+|2017-11-01T00:09:00.000+08:00| false| v1|
+|2017-11-01T00:10:00.000+08:00| true| v2|
+|2017-11-01T00:11:00.000+08:00| false| v1|
+|2017-11-01T16:35:00.000+08:00| true| v2|
+|2017-11-01T16:36:00.000+08:00| false| v1|
+|2017-11-01T16:37:00.000+08:00| false| v1|
++-----------------------------+------------------------+--------------------------+
+Total line number = 9
+It costs 0.014s
+```
+
+#### Order By Time Query
+
+IoTDB has supported the `order by time` statement since version 0.11; it is used to display results in descending order by time.
+For example, the SQL statement is:
+
+```sql
+select * from root.ln.** where time > 1 order by time desc limit 10;
+```
+
+The execution result of this SQL statement is as follows:
+
+```
++-----------------------------+--------------------------+------------------------+-----------------------------+------------------------+
+| Time|root.ln.wf02.wt02.hardware|root.ln.wf02.wt02.status|root.ln.wf01.wt01.temperature|root.ln.wf01.wt01.status|
++-----------------------------+--------------------------+------------------------+-----------------------------+------------------------+
+|2017-11-07T23:59:00.000+08:00| v1| false| 21.07| false|
+|2017-11-07T23:58:00.000+08:00| v1| false| 22.93| false|
+|2017-11-07T23:57:00.000+08:00| v2| true| 24.39| true|
+|2017-11-07T23:56:00.000+08:00| v2| true| 24.44| true|
+|2017-11-07T23:55:00.000+08:00| v2| true| 25.9| true|
+|2017-11-07T23:54:00.000+08:00| v1| false| 22.52| false|
+|2017-11-07T23:53:00.000+08:00| v2| true| 24.58| true|
+|2017-11-07T23:52:00.000+08:00| v1| false| 20.18| false|
+|2017-11-07T23:51:00.000+08:00| v1| false| 22.24| false|
+|2017-11-07T23:50:00.000+08:00| v2| true| 23.7| true|
++-----------------------------+--------------------------+------------------------+-----------------------------+------------------------+
+Total line number = 10
+It costs 0.016s
+```
+
+### Execution Interface
+
+In IoTDB, there are two ways to execute data query:
+
+- Execute queries using IoTDB-SQL.
+- Efficient execution interfaces for common queries, including time-series raw data query, last query, and aggregation query.
+
+#### Execute queries using IoTDB-SQL
+
+Data query statements can be used in SQL command-line terminals, JDBC, JAVA / C++ / Python / Go and other native APIs, and RESTful APIs.
+
+- Execute the query statement in the SQL command line terminal: start the SQL command line terminal, and directly enter the query statement to execute, see [SQL command line terminal](../Tools-System/CLI.md).
+
+- Execute query statements in JDBC, see [JDBC](../API/Programming-JDBC.md) for details.
+
+- Execute query statements in native APIs such as JAVA / C++ / Python / Go. For details, please refer to the relevant documentation in the Application Programming Interface chapter. The interface prototype is as follows:
+
+ ````java
+ SessionDataSet executeQueryStatement(String sql)
+ ````
+
+- Used in RESTful API, see [HTTP API V1](../API/RestServiceV1.md) or [HTTP API V2](../API/RestServiceV2.md) for details.
+
+#### Efficient execution interfaces
+
+The native APIs provide efficient execution interfaces for commonly used queries, which save time-consuming operations such as SQL parsing. These include:
+
+* Time-series raw data query with time range:
+ - The specified query time range is a left-closed right-open interval, including the start time but excluding the end time.
+
+```java
+SessionDataSet executeRawDataQuery(List<String> paths, long startTime, long endTime);
+```
+
+* Last query:
+  - Query the last data point whose timestamp is greater than or equal to LastTime.
+
+```java
+SessionDataSet executeLastDataQuery(List<String> paths, long LastTime);
+```
+
+* Aggregation query:
+ - Support specified query time range: The specified query time range is a left-closed right-open interval, including the start time but not the end time.
+ - Support GROUP BY TIME.
+
+```java
+SessionDataSet executeAggregationQuery(List<String> paths, List<TAggregationType> aggregations);
+
+SessionDataSet executeAggregationQuery(
+    List<String> paths, List<TAggregationType> aggregations, long startTime, long endTime);
+
+SessionDataSet executeAggregationQuery(
+    List<String> paths,
+    List<TAggregationType> aggregations,
+    long startTime,
+    long endTime,
+    long interval);
+
+SessionDataSet executeAggregationQuery(
+    List<String> paths,
+    List<TAggregationType> aggregations,
+    long startTime,
+    long endTime,
+    long interval,
+    long slidingStep);
+```
+
+## `SELECT` CLAUSE
+The `SELECT` clause specifies the output of the query, consisting of several `selectExpr`. Each `selectExpr` defines one or more columns in the query result. For select expression details, see document [Operator-and-Expression](../SQL-Manual/Operator-and-Expression.md).
+
+- Example 1:
+
+```sql
+select temperature from root.ln.wf01.wt01
+```
+
+- Example 2:
+
+```sql
+select status, temperature from root.ln.wf01.wt01
+```
+
+### Last Query
+
+The last query is a special type of query in Apache IoTDB. It returns the data point with the largest timestamp of the specified time series. In other words, it returns the latest state of a time series. This feature is especially important in IoT data analysis scenarios. To meet the performance requirement of real-time device monitoring systems, Apache IoTDB caches the latest values of all time series to achieve microsecond read latency.
+
+The last query returns the most recent data point of the given timeseries in a three-column format.
+
+The SQL syntax is defined as:
+
+```sql
+select last [COMMA