Topology Configuration File for DM Cluster Deployment Using TiUP
To deploy or scale a TiDB Data Migration (DM) cluster, you need to provide a topology file (sample) to describe the cluster topology.
Similarly, to modify the cluster topology, you need to modify the topology file. The difference is that, after the cluster is deployed, only some of the fields in the topology file can be modified. This document introduces each section of the topology file and each field in each section.
File structure
A topology configuration file for DM cluster deployment using TiUP might contain the following sections:
- `global`: the cluster's global configuration. Some of the configuration items use the default values of the cluster, and you can configure them separately in each instance.
- `server_configs`: the components' global configuration. You can configure each component separately. If an instance has a configuration item with the same key, the instance's configuration item takes effect.
- `master_servers`: the configuration of the DM-master instances. The configuration specifies the machines to which the master service of the DM component is deployed.
- `worker_servers`: the configuration of the DM-worker instances. The configuration specifies the machines to which the worker service of the DM component is deployed.
- `monitoring_servers`: specifies the machines to which the Prometheus instances are deployed. TiUP supports deploying multiple Prometheus instances, but only the first instance is used.
- `grafana_servers`: the configuration of the Grafana instances. The configuration specifies the machines to which the Grafana instances are deployed.
- `alertmanager_servers`: the configuration of the Alertmanager instances. The configuration specifies the machines to which the Alertmanager instances are deployed.
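A topology file combines these sections in a single YAML document. The following minimal sketch shows how the sections fit together; the hosts, user, and directories are placeholders, not recommended values:

# A minimal DM topology sketch; all hosts and directories below are placeholders.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/dm-deploy"
  data_dir: "/dm-data"

server_configs:
  master:
    log-level: info
  worker:
    log-level: info

master_servers:
  - host: 10.0.1.11

worker_servers:
  - host: 10.0.1.12

monitoring_servers:
  - host: 10.0.1.13

grafana_servers:
  - host: 10.0.1.13

alertmanager_servers:
  - host: 10.0.1.13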
global
The global section corresponds to the cluster's global configuration and has the following fields:
- `user`: the user to start the deployed cluster. The default value is "tidb". If the user specified in the `<user>` field does not exist on the target machine, TiUP will automatically try to create the user.
- `group`: the user group to which a user belongs when the user is automatically created. The default value is the same as the `<user>` field. If the specified group does not exist, it is created automatically.
- `ssh_port`: the SSH port to connect to the target machine for operations. The default value is "22".
- `deploy_dir`: the deployment directory for each component. The default value is "deploy". The construction rules are as follows:
    - If the absolute path `deploy_dir` is configured at the instance level, the actual deployment directory is the `deploy_dir` configured for the instance.
    - For each instance, if you do not configure `deploy_dir`, the default value is the relative path `<component-name>-<component-port>`.
    - If `global.deploy_dir` is set to an absolute path, the component is deployed to the `<global.deploy_dir>/<instance.deploy_dir>` directory.
    - If `global.deploy_dir` is set to a relative path, the component is deployed to the `/home/<global.user>/<global.deploy_dir>/<instance.deploy_dir>` directory.
- `data_dir`: the data directory. The default value is "data". The construction rules are as follows:
    - If the absolute path `data_dir` is configured at the instance level, the actual data directory is the `data_dir` configured for the instance.
    - For each instance, if `data_dir` is not configured, the default value is `<global.data_dir>`.
    - If `data_dir` is set to a relative path, the component data is stored in `<deploy_dir>/<data_dir>`. For the construction rules of `<deploy_dir>`, see the construction rules of the `deploy_dir` field.
- `log_dir`: the log directory. The default value is "log". The construction rules are as follows:
    - If the absolute path `log_dir` is configured at the instance level, the actual log directory is the `log_dir` configured for the instance.
    - For each instance, if `log_dir` is not configured by the user, the default value is `<global.log_dir>`.
    - If `log_dir` is a relative path, the component logs are stored in `<deploy_dir>/<log_dir>`. For the construction rules of `<deploy_dir>`, see the construction rules of the `deploy_dir` field.
- `os`: the operating system of the target machine. The field controls which operating system to adapt to for the components pushed to the target machine. The default value is "linux".
- `arch`: the CPU architecture of the target machine. The field controls which platform to adapt to for the binary packages pushed to the target machine. The supported values are "amd64" and "arm64". The default value is "amd64".
- `resource_control`: runtime resource control. All configurations in this field are written to the service file of systemd. There is no limit by default. The resources that can be controlled are as follows:
    - `memory_limit`: limits the maximum memory at runtime. For example, "2G" means that a maximum of 2 GB of memory can be used.
    - `cpu_quota`: limits the maximum CPU usage at runtime. For example, "200%".
    - `io_read_bandwidth_max`: limits the maximum I/O bandwidth for disk reads. For example, "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0:0 100M".
    - `io_write_bandwidth_max`: limits the maximum I/O bandwidth for disk writes. For example, "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0:0 100M".
    - `limit_core`: controls the size of core dumps.
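For example, assuming the hypothetical settings below, the comments show how the preceding rules resolve the directories for DM-master instances on the default port:

global:
  user: "tidb"
  deploy_dir: "deploy"   # a relative path
  data_dir: "data"       # a relative path

master_servers:
  # No instance-level directories are set, so:
  #   deploy_dir resolves to /home/tidb/deploy/dm-master-8261
  #   data_dir resolves to /home/tidb/deploy/dm-master-8261/data
  - host: 10.0.1.11
  # Absolute instance-level paths are used as-is:
  - host: 10.0.1.12
    deploy_dir: "/dm-deploy/dm-master-8261"
    data_dir: "/dm-data/dm-master-8261"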
A global configuration example:
global:
user: "tidb"
resource_control:
memory_limit: "2G"
In the example, the configuration specifies that the tidb user is used to start the cluster, and that each component is limited to a maximum of 2 GB of memory when it is running.
server_configs
server_configs is used to configure services and to generate configuration files for each component. Similar to the global section, the configurations in the server_configs section can be overwritten by the configurations with the same keys in an instance. server_configs mainly contains the following fields:
- `master`: the configuration related to the DM-master service. For all the supported configuration items, see DM-master Configuration File.
- `worker`: the configuration related to the DM-worker service. For all the supported configuration items, see DM-worker Configuration File.
A server_configs configuration example is as follows:
server_configs:
master:
log-level: info
rpc-timeout: "30s"
rpc-rate-limit: 10.0
rpc-rate-burst: 40
worker:
log-level: info
master_servers
master_servers specifies the machines to which the master node of the DM component is deployed. You can also specify the service configuration on each machine. master_servers is an array. Each array element contains the following fields:
- `host`: specifies the machine to deploy to. The field value is an IP address and is mandatory.
- `ssh_port`: specifies the SSH port to connect to the target machine for operations. If the field is not specified, the `ssh_port` in the `global` section is used.
- `name`: specifies the name of the DM-master instance. The name must be unique for different instances. Otherwise, the cluster cannot be deployed.
- `port`: specifies the port on which DM-master provides services. The default value is "8261".
- `peer_port`: specifies the port for communication between DM-masters. The default value is "8291".
- `deploy_dir`: specifies the deployment directory. If the field is not specified, or specified as a relative directory, the deployment directory is generated according to the `deploy_dir` configuration in the `global` section.
- `data_dir`: specifies the data directory. If the field is not specified, or specified as a relative directory, the data directory is generated according to the `data_dir` configuration in the `global` section.
- `log_dir`: specifies the log directory. If the field is not specified, or specified as a relative directory, the log directory is generated according to the `log_dir` configuration in the `global` section.
- `numa_node`: allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is a string type. The field value is the ID of the NUMA node, such as "0,1".
- `config`: the configuration rules of this field are the same as that of `master` in the `server_configs` section. If `config` is specified, the configuration of `config` will be merged with the configuration of `master` in `server_configs` (if the two fields overlap, the configuration of this field takes effect), and then the configuration file is generated and distributed to the machine specified in the `host` field.
- `os`: the operating system of the machine specified in the `host` field. If the field is not specified, the default value is the `os` value configured in the `global` section.
- `arch`: the architecture of the machine specified in the `host` field. If the field is not specified, the default value is the `arch` value configured in the `global` section.
- `resource_control`: resource control on this service. If this field is specified, the configuration of this field will be merged with the configuration of `resource_control` in the `global` section (if the two fields overlap, the configuration of this field takes effect), and then the configuration file of systemd is generated and distributed to the machine specified in the `host` field. The configuration rules of this field are the same as that of `resource_control` in the `global` section.
- `v1_source_path`: when upgrading from v1.0.x, you can specify the directory where the configuration file of the V1 source is located in this field.
In the master_servers section, the following fields cannot be modified after the deployment is completed:
- `host`
- `name`
- `port`
- `peer_port`
- `deploy_dir`
- `data_dir`
- `log_dir`
- `arch`
- `os`
- `v1_source_path`
A master_servers configuration example is as follows:
master_servers:
- host: 10.0.1.11
name: master1
ssh_port: 22
port: 8261
peer_port: 8291
deploy_dir: "/dm-deploy/dm-master-8261"
data_dir: "/dm-data/dm-master-8261"
log_dir: "/dm-deploy/dm-master-8261/log"
numa_node: "0,1"
# The following configs are used to overwrite the `server_configs.master` values.
config:
log-level: info
rpc-timeout: "30s"
rpc-rate-limit: 10.0
rpc-rate-burst: 40
- host: 10.0.1.18
name: master2
- host: 10.0.1.19
name: master3
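In the example above, only master1 sets an instance-level `config`, which is merged with `server_configs.master`. As a hypothetical illustration of the merge rule, assume the following values; the comments describe the effective configuration generated for each instance:

server_configs:
  master:
    log-level: info
    rpc-timeout: "30s"

master_servers:
  - host: 10.0.1.11
    name: master1
    config:
      rpc-timeout: "60s"   # overlaps with server_configs.master, so the instance value wins
    # Effective configuration for master1: log-level: info, rpc-timeout: "60s"
  - host: 10.0.1.18
    name: master2
    # No instance-level config, so master2 uses server_configs.master unchanged:
    # log-level: info, rpc-timeout: "30s"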
worker_servers
worker_servers specifies the machines to which the worker node of the DM component is deployed. You can also specify the service configuration on each machine. worker_servers is an array. Each array element contains the following fields:
- `host`: specifies the machine to deploy to. The field value is an IP address and is mandatory.
- `ssh_port`: specifies the SSH port to connect to the target machine for operations. If the field is not specified, the `ssh_port` in the `global` section is used.
- `name`: specifies the name of the DM-worker instance. The name must be unique for different instances. Otherwise, the cluster cannot be deployed.
- `port`: specifies the port on which DM-worker provides services. The default value is "8262".
- `deploy_dir`: specifies the deployment directory. If the field is not specified, or specified as a relative directory, the deployment directory is generated according to the `deploy_dir` configuration in the `global` section.
- `data_dir`: specifies the data directory. If the field is not specified, or specified as a relative directory, the data directory is generated according to the `data_dir` configuration in the `global` section.
- `log_dir`: specifies the log directory. If the field is not specified, or specified as a relative directory, the log directory is generated according to the `log_dir` configuration in the `global` section.
- `numa_node`: allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is a string type. The field value is the ID of the NUMA node, such as "0,1".
- `config`: the configuration rules of this field are the same as that of `worker` in the `server_configs` section. If `config` is specified, the configuration of `config` will be merged with the configuration of `worker` in `server_configs` (if the two fields overlap, the configuration of this field takes effect), and then the configuration file is generated and distributed to the machine specified in the `host` field.
- `os`: the operating system of the machine specified in the `host` field. If the field is not specified, the default value is the `os` value configured in the `global` section.
- `arch`: the architecture of the machine specified in the `host` field. If the field is not specified, the default value is the `arch` value configured in the `global` section.
- `resource_control`: resource control on this service. If this field is specified, the configuration of this field will be merged with the configuration of `resource_control` in the `global` section (if the two fields overlap, the configuration of this field takes effect), and then the configuration file of systemd is generated and distributed to the machine specified in the `host` field. The configuration rules of this field are the same as that of `resource_control` in the `global` section.
In the worker_servers section, the following fields cannot be modified after the deployment is completed:
- `host`
- `name`
- `port`
- `deploy_dir`
- `data_dir`
- `log_dir`
- `arch`
- `os`
A worker_servers configuration example is as follows:
worker_servers:
- host: 10.0.1.12
ssh_port: 22
port: 8262
deploy_dir: "/dm-deploy/dm-worker-8262"
log_dir: "/dm-deploy/dm-worker-8262/log"
numa_node: "0,1"
# config is used to overwrite the `server_configs.worker` values
config:
log-level: info
- host: 10.0.1.19
monitoring_servers
monitoring_servers specifies the machines to which the Prometheus service is deployed. You can also specify the service configuration on the machine. monitoring_servers is an array. Each array element contains the following fields:
- `host`: specifies the machine to deploy to. The field value is an IP address and is mandatory.
- `ssh_port`: specifies the SSH port to connect to the target machine for operations. If the field is not specified, the `ssh_port` in the `global` section is used.
- `port`: specifies the port on which Prometheus provides services. The default value is "9090".
- `deploy_dir`: specifies the deployment directory. If the field is not specified, or specified as a relative directory, the deployment directory is generated according to the `deploy_dir` configuration in the `global` section.
- `data_dir`: specifies the data directory. If the field is not specified, or specified as a relative directory, the data directory is generated according to the `data_dir` configuration in the `global` section.
- `log_dir`: specifies the log directory. If the field is not specified, or specified as a relative directory, the log directory is generated according to the `log_dir` configuration in the `global` section.
- `numa_node`: allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is a string type. The field value is the ID of the NUMA node, such as "0,1".
- `storage_retention`: specifies the retention time of the Prometheus monitoring data. The default value is "15d".
- `rule_dir`: specifies a local directory where the complete `*.rules.yml` files are located. The files in the specified directory will be sent to the target machine as the Prometheus rules during the initialization phase of the cluster configuration.
- `remote_config`: supports writing Prometheus data to a remote endpoint, or reading data from a remote endpoint. This field has two configurations:
    - `remote_write`: see the Prometheus document `<remote_write>`.
    - `remote_read`: see the Prometheus document `<remote_read>`.
- `external_alertmanagers`: if the `external_alertmanagers` field is configured, Prometheus sends alerts to the Alertmanager instances outside the cluster. This field is an array, each element of which is an external Alertmanager and consists of the `host` and `web_port` fields.
- `os`: the operating system of the machine specified in the `host` field. If the field is not specified, the default value is the `os` value configured in the `global` section.
- `arch`: the architecture of the machine specified in the `host` field. If the field is not specified, the default value is the `arch` value configured in the `global` section.
- `resource_control`: resource control on this service. If this field is specified, the configuration of this field will be merged with the configuration of `resource_control` in the `global` section (if the two fields overlap, the configuration of this field takes effect), and then the configuration file of systemd is generated and distributed to the machine specified in the `host` field. The configuration rules of this field are the same as that of `resource_control` in the `global` section.
In the monitoring_servers section, the following fields cannot be modified after the deployment is completed:
- `host`
- `port`
- `deploy_dir`
- `data_dir`
- `log_dir`
- `arch`
- `os`
A monitoring_servers configuration example is as follows:
monitoring_servers:
- host: 10.0.1.11
rule_dir: /local/rule/dir
remote_config:
remote_write:
- queue_config:
batch_send_deadline: 5m
capacity: 100000
max_samples_per_send: 10000
max_shards: 300
url: http://127.0.0.1:8003/write
remote_read:
      - url: http://127.0.0.1:8003/read
external_alertmanagers:
- host: 10.1.1.1
web_port: 9093
- host: 10.1.1.2
web_port: 9094
grafana_servers
grafana_servers specifies the machines to which the Grafana service is deployed. You can also specify the service configuration on the machine. grafana_servers is an array. Each array element contains the following fields:
- `host`: specifies the machine to deploy to. The field value is an IP address and is mandatory.
- `ssh_port`: specifies the SSH port to connect to the target machine for operations. If the field is not specified, the `ssh_port` in the `global` section is used.
- `port`: specifies the port on which Grafana provides services. The default value is "3000".
- `deploy_dir`: specifies the deployment directory. If the field is not specified, or specified as a relative directory, the deployment directory is generated according to the `deploy_dir` configuration in the `global` section.
- `os`: the operating system of the machine specified in the `host` field. If the field is not specified, the default value is the `os` value configured in the `global` section.
- `arch`: the architecture of the machine specified in the `host` field. If the field is not specified, the default value is the `arch` value configured in the `global` section.
- `username`: specifies the username of the Grafana login screen.
- `password`: specifies the corresponding password of Grafana.
- `dashboard_dir`: specifies a local directory where the complete dashboard (`*.json`) files are located. The files in the specified directory will be sent to the target machine as Grafana dashboards during the initialization phase of the cluster configuration.
- `resource_control`: resource control on this service. If this field is specified, the configuration of this field will be merged with the configuration of `resource_control` in the `global` section (if the two fields overlap, the configuration of this field takes effect), and then the configuration file of systemd is generated and distributed to the machine specified in the `host` field. The configuration rules of this field are the same as that of `resource_control` in the `global` section.
If the `dashboard_dir` field of `grafana_servers` is configured, after executing the `tiup cluster rename` command to rename the cluster, you need to perform the following operations:

- In the local `dashboards` directory, update the value of the `datasource` field to the new cluster name (the `datasource` is named after the cluster name).
- Execute the `tiup cluster reload -R grafana` command.
In grafana_servers, the following fields cannot be modified after the deployment is completed:
- `host`
- `port`
- `deploy_dir`
- `arch`
- `os`
A grafana_servers configuration example is as follows:
grafana_servers:
- host: 10.0.1.11
dashboard_dir: /local/dashboard/dir
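If you also want to set the Grafana login credentials described above, the example can be extended as follows; the username and password values here are placeholders:

grafana_servers:
  - host: 10.0.1.11
    port: 3000
    username: admin          # placeholder Grafana login username
    password: admin          # placeholder Grafana login password
    dashboard_dir: /local/dashboard/dir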
alertmanager_servers
alertmanager_servers specifies the machines to which the Alertmanager service is deployed. You can also specify the service configuration on each machine. alertmanager_servers is an array. Each array element contains the following fields:
- `host`: specifies the machine to deploy to. The field value is an IP address and is mandatory.
- `ssh_port`: specifies the SSH port to connect to the target machine for operations. If the field is not specified, the `ssh_port` in the `global` section is used.
- `web_port`: specifies the port on which Alertmanager provides web services. The default value is "9093".
- `cluster_port`: specifies the port for communication between one Alertmanager and other Alertmanagers. The default value is "9094".
- `deploy_dir`: specifies the deployment directory. If the field is not specified, or specified as a relative directory, the deployment directory is generated according to the `deploy_dir` configuration in the `global` section.
- `data_dir`: specifies the data directory. If the field is not specified, or specified as a relative directory, the data directory is generated according to the `data_dir` configuration in the `global` section.
- `log_dir`: specifies the log directory. If the field is not specified, or specified as a relative directory, the log directory is generated according to the `log_dir` configuration in the `global` section.
- `numa_node`: allocates the NUMA policy to the instance. Before specifying this field, you need to make sure that the target machine has numactl installed. If this field is specified, cpubind and membind policies are allocated using numactl. This field is a string type. The field value is the ID of the NUMA node, such as "0,1".
- `config_file`: specifies a local file. The specified file will be sent to the target machine as the configuration for Alertmanager during the initialization phase of the cluster configuration.
- `os`: the operating system of the machine specified in the `host` field. If the field is not specified, the default value is the `os` value configured in the `global` section.
- `arch`: the architecture of the machine specified in the `host` field. If the field is not specified, the default value is the `arch` value configured in the `global` section.
- `resource_control`: resource control on this service. If this field is specified, the configuration of this field will be merged with the configuration of `resource_control` in the `global` section (if the two fields overlap, the configuration of this field takes effect), and then the configuration file of systemd is generated and distributed to the machine specified in the `host` field. The configuration rules of this field are the same as that of `resource_control` in the `global` section.
In alertmanager_servers, the following fields cannot be modified after the deployment is completed:
- `host`
- `web_port`
- `cluster_port`
- `deploy_dir`
- `data_dir`
- `log_dir`
- `arch`
- `os`
An alertmanager_servers configuration example is as follows:
alertmanager_servers:
- host: 10.0.1.11
config_file: /local/config/file
- host: 10.0.1.12
config_file: /local/config/file