# Troubleshoot a TiFlash Cluster
This section describes some commonly encountered issues when using TiFlash, their causes, and the corresponding solutions.
## TiFlash fails to start

The issue might occur due to different reasons. It is recommended that you troubleshoot it following the steps below:

1. Check whether your system is CentOS 8.

    CentOS 8 does not have the `libnsl.so` system library. You can manually install it via the following command:

    ```shell
    dnf install libnsl
    ```

2. Check your system's `ulimit` parameter setting:

    ```shell
    ulimit -n 1000000
    ```

3. Use the PD Control tool to check whether there is any TiFlash instance that failed to go offline on the node (with the same IP and port), and force the instance(s) to go offline. For detailed steps, refer to Scale in a TiFlash cluster.

If the above methods cannot resolve your issue, save the TiFlash log files and email info@pingcap.com for more information.
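The first two checks can be sketched as a small pre-start script. This is a hypothetical helper, not an official tool; the `1000000` threshold mirrors the `ulimit` step above, and the `libnsl` probe assumes `ldconfig` is available:

```shell
#!/bin/sh
# Sketch of pre-start sanity checks for a TiFlash node (illustrative only).

# The open-file-descriptor limit should be at least 1000000.
check_fd_limit() {
  # $1: current value of `ulimit -n`
  if [ "$1" -ge 1000000 ] 2>/dev/null; then
    echo "fd limit ok"
  else
    echo "fd limit too low: run 'ulimit -n 1000000' before starting TiFlash"
  fi
}

# CentOS 8 ships without libnsl.so.
check_libnsl() {
  if ldconfig -p 2>/dev/null | grep -q 'libnsl\.so'; then
    echo "libnsl present"
  else
    echo "libnsl missing: run 'dnf install libnsl'"
  fi
}

check_fd_limit "$(ulimit -n)"
check_libnsl
```

Running this on the TiFlash host before startup surfaces both problems at once instead of after a failed start.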
## TiFlash replica is always unavailable

This is because TiFlash is in an abnormal state caused by configuration errors or environment issues. Take the following steps to identify the faulty component:

1. Check whether PD enables the Placement Rules feature:

    ```shell
    echo 'config show replication' | /path/to/pd-ctl -u http://${pd-ip}:${pd-port}
    ```

    - If `true` is returned, go to the next step.
    - If `false` is returned, enable the Placement Rules feature and go to the next step.

2. Check whether the TiFlash process is working correctly by viewing `UpTime` on the TiFlash-Summary monitoring panel.

3. Check whether the TiFlash proxy status is normal through `pd-ctl`:

    ```shell
    echo "store" | /path/to/pd-ctl -u http://${pd-ip}:${pd-port}
    ```

    The TiFlash proxy's `store.labels` includes information such as `{"key": "engine", "value": "tiflash"}`. You can check this information to confirm a TiFlash proxy.

4. Check whether `pd buddy` can correctly print the logs (the log path is the value of `log` in the `[flash.flash_cluster]` configuration item; the default log path is under the `tmp` directory configured in the TiFlash configuration file).

5. Check whether the number of configured replicas is less than or equal to the number of TiKV nodes in the cluster. If not, PD cannot replicate data to TiFlash:

    ```shell
    echo 'config placement-rules show' | /path/to/pd-ctl -u http://${pd-ip}:${pd-port}
    ```

    Reconfirm the value of `default: count`.

    > **Note:**
    >
    > After the Placement Rules feature is enabled, the previously configured `max-replicas` and `location-labels` no longer take effect. To adjust the replica policy, use the interface related to placement rules.

6. Check whether the remaining disk space of the machine (where the `store` of the TiFlash node is located) is sufficient. By default, when the remaining disk space is less than 20% of the `store` capacity (which is controlled by the `low-space-ratio` parameter), PD cannot schedule data to this TiFlash node.
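The replica-count check in step 5 can be sketched numerically. This is a hypothetical helper; in practice the two counts would come from the `pd-ctl` output above and from your cluster topology:

```shell
# Sketch: PD can only place TiFlash replicas when the configured replica
# count does not exceed the number of TiKV nodes in the cluster.
check_replica_count() {
  # $1: `default: count` from `config placement-rules show`
  # $2: number of TiKV nodes in the cluster
  if [ "$1" -le "$2" ]; then
    echo "ok"
  else
    echo "invalid: PD cannot replicate data to TiFlash"
  fi
}

check_replica_count 3 3   # prints "ok"
check_replica_count 3 1   # prints "invalid: PD cannot replicate data to TiFlash"
```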
## TiFlash query time is unstable, and the error log prints many `Lock Exception` messages

This is because large amounts of data are written to the cluster, which causes the TiFlash query to encounter a lock and require a query retry.

You can set the query timestamp to one second earlier in TiDB. For example, if the current time is '2020-04-08 20:15:01', you can execute `set @@tidb_snapshot='2020-04-08 20:15:00';` before you execute the query. This makes fewer TiFlash queries encounter a lock and mitigates the risk of unstable query time.
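If you want to script this workaround, a snapshot timestamp one second in the past can be generated as follows. This is a sketch assuming GNU `date`; the variable name is illustrative:

```shell
# Build a tidb_snapshot value one second earlier than now (GNU date syntax).
snap=$(date -d '1 second ago' '+%Y-%m-%d %H:%M:%S')
echo "set @@tidb_snapshot='${snap}';"
# The printed statement can then be run in a TiDB session before the query.
```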
## Some queries return the `Region Unavailable` error

If the load pressure on TiFlash is too heavy, causing TiFlash data replication to fall behind, some queries might return the `Region Unavailable` error.

In this case, you can balance the load pressure by adding more TiFlash nodes.
## Data file corruption

Take the following steps to handle the data file corruption:

1. Refer to Take a TiFlash node down to take the corresponding TiFlash node down.
2. Delete the related data of the TiFlash node.
3. Redeploy the TiFlash node in the cluster.
## TiFlash analysis is slow

If a statement contains operators or functions not supported in the MPP mode, TiDB does not select the MPP mode. Therefore, the analysis of the statement is slow. In this case, you can execute the `EXPLAIN` statement to check for operators or functions not supported in the MPP mode.

For example:
```sql
create table t(a datetime);
alter table t set tiflash replica 1;
insert into t values('2022-01-13');
set @@session.tidb_enforce_mpp=1;
explain select count(*) from t where subtime(a, '12:00:00') > '2022-01-01' group by a;
show warnings;
```
In this example, the warning message shows that TiDB does not select the MPP mode because TiDB 5.4 and earlier versions do not support the `subtime` function:
```
+---------+------+---------------------------------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                                         |
+---------+------+---------------------------------------------------------------------------------------------------------------------------------+
| Warning | 1105 | Scalar function 'subtime'(signature: SubDatetimeAndString, return type: datetime) is not supported to push down to tiflash now. |
+---------+------+---------------------------------------------------------------------------------------------------------------------------------+
```
## Data is not replicated to TiFlash

After deploying a TiFlash node and starting replication (by performing the `ALTER` operation), no data is replicated to it. In this case, you can identify and address the problem by following the steps below:

1. Check whether the replication is successful by running the `ALTER table <tbl_name> set tiflash replica <num>` command and checking the output.

    - If there is output, go to the next step.
    - If there is no output, run the `SELECT * FROM information_schema.tiflash_replica` command to check whether TiFlash replicas have been created. If not, run the `ALTER table ${tbl_name} set tiflash replica ${num}` command again, check whether other statements (for example, `add index`) have been executed, or check whether DDL executions are successful.

2. Check whether the TiFlash process runs correctly.

    Check whether there is any change in `progress`, the `flash_region_count` parameter in the `tiflash_cluster_manager.log` file, and the Grafana monitoring item `Uptime`:

    - If yes, the TiFlash process runs correctly.
    - If no, the TiFlash process is abnormal. Check the `tiflash` log for further information.

3. Check whether the Placement Rules feature has been enabled by using pd-ctl:

    ```shell
    echo 'config show replication' | /path/to/pd-ctl -u http://<pd-ip>:<pd-port>
    ```

    - If `true` is returned, go to the next step.
    - If `false` is returned, enable the Placement Rules feature and go to the next step.

4. Check whether the `max-replicas` configuration is correct:

    - If the value of `max-replicas` does not exceed the number of TiKV nodes in the cluster, go to the next step.
    - If the value of `max-replicas` is greater than the number of TiKV nodes in the cluster, PD does not replicate data to the TiFlash node. To address this issue, change `max-replicas` to an integer less than or equal to the number of TiKV nodes in the cluster.

    > **Note:**
    >
    > The default value of `max-replicas` is 3. In production environments, the value is usually less than the number of TiKV nodes. In test environments, the value can be 1.

    When the Placement Rules feature is enabled, adjust the replica count through the placement rules interface instead, for example:

    ```shell
    curl -X POST -d '{
        "group_id": "pd",
        "id": "default",
        "start_key": "",
        "end_key": "",
        "role": "voter",
        "count": 3,
        "location_labels": [
          "host"
        ]
    }' http://172.16.x.xxx:2379/pd/api/v1/config/rule
    ```

5. Check whether the connection between TiDB or PD and TiFlash is normal.

    Search the `flash_cluster_manager.log` file for the `ERROR` keyword.

    - If no `ERROR` is found, the connection is normal. Go to the next step.
    - If `ERROR` is found, the connection is abnormal. Perform the following checks:

        - Check whether the log records PD keywords.

            If PD keywords are found, check whether `raft.pd_addr` in the TiFlash configuration file is valid. Specifically, run the `curl '{pd-addr}/pd/api/v1/config/rules'` command and check whether there is any output within 5 seconds.

        - Check whether the log records TiDB-related keywords.

            If TiDB keywords are found, check whether `flash.tidb_status_addr` in the TiFlash configuration file is valid. Specifically, run the `curl '{tidb-status-addr}/tiflash/replica'` command and check whether there is any output within 5 seconds.

        - Check whether the nodes can ping through each other.

    > **Note:**
    >
    > If the problem persists, collect logs of the corresponding component for troubleshooting.

6. Check whether `placement-rule` is created for tables.

    Search the `flash_cluster_manager.log` file for the `Set placement rule … table-<table_id>-r` keyword.

    - If the keyword is found, go to the next step.
    - If not, collect logs of the corresponding component for troubleshooting.

7. Check whether PD schedules properly.

    Search the `pd.log` file for the `table-<table_id>-r` keyword and scheduling behaviors like `add operator`.

    - If the keyword is found, PD schedules properly.
    - If not, PD does not schedule properly. Contact PingCAP technical support for help.
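The log searches above (the `ERROR` keyword and the `table-<table_id>-r` placement-rule keys in `flash_cluster_manager.log`) can be sketched with `grep`. The two log lines below are invented for illustration; on a real cluster you would grep the actual log file instead:

```shell
# A fabricated two-line excerpt standing in for flash_cluster_manager.log.
sample_log='2022-01-13 12:00:00 ERROR failed to connect to pd
2022-01-13 12:00:01 INFO Set placement rule default: table-45-r'

# Count ERROR lines: a non-zero count means the connection check failed.
printf '%s\n' "$sample_log" | grep -c 'ERROR'

# Extract the placement-rule keys that were set for tables.
printf '%s\n' "$sample_log" | grep -o 'table-[0-9]*-r'
```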
## Data replication gets stuck

If data replication on TiFlash starts normally but then all or some data fails to be replicated after a period of time, you can confirm or resolve the issue by performing the following steps:

1. Check the disk space.

    Check whether the disk space ratio is higher than the value of `low-space-ratio` (which defaults to 0.8; when the space usage of a node exceeds 80%, PD stops migrating data to this node to avoid exhaustion of disk space).

    - If the disk usage ratio is greater than or equal to the value of `low-space-ratio`, the disk space is insufficient. To relieve the disk space, remove unnecessary files, such as `space_placeholder_file` (if necessary, set `reserve-space` to 0MB after removing the file) under the `${data}/flash/` folder.
    - If the disk usage ratio is less than the value of `low-space-ratio`, the disk space is sufficient. Go to the next step.

2. Check the network connectivity between TiKV, TiFlash, and PD.

    In `flash_cluster_manager.log`, check whether there are any new updates to `flash_region_count` corresponding to the table that gets stuck.

    - If no, go to the next step.
    - If yes, search for `down peer` (replication gets stuck if there is a peer that is down):

        - Run `pd-ctl region check-down-peer` to search for `down peer`.
        - If `down peer` is found, run `pd-ctl operator add remove-peer <region-id> <tiflash-store-id>` to remove it.

3. Check CPU usage.

    On Grafana, choose **TiFlash-Proxy-Details** > **Thread CPU** > **Region task worker pre-handle/generate snapshot CPU**, and check the CPU usage of `<instance-ip>:<instance-port>-region-worker`.

    If the curve is a straight line, the TiFlash node is stuck. Terminate the TiFlash process and restart it, or contact PingCAP technical support for help.
## Data replication is slow

The causes may vary. You can address the problem by performing the following steps.

1. Adjust the values of the scheduling parameters:

    - Increase `store limit` to accelerate replication.
    - Decrease `patrol-region-interval` (for example, `config set patrol-region-interval 10ms`) to make the checker scan Regions more frequently in TiKV.
    - Increase `region merge` to reduce the number of Regions, which means fewer scans and higher check frequencies.

2. Adjust the load on TiFlash.

    Excessively high load on TiFlash can also result in slow replication. You can check the load of TiFlash indicators on the **TiFlash-Summary** panel on Grafana:

    - `Applying snapshots Count`: `TiFlash-summary` > `raft` > `Applying snapshots Count`
    - `Snapshot Predecode Duration`: `TiFlash-summary` > `raft` > `Snapshot Predecode Duration`
    - `Snapshot Flush Duration`: `TiFlash-summary` > `raft` > `Snapshot Flush Duration`
    - `Write Stall Duration`: `TiFlash-summary` > `Storage Write Stall` > `Write Stall Duration`
    - `generate snapshot CPU`: `TiFlash-Proxy-Details` > `Thread CPU` > `Region task worker pre-handle/generate snapshot CPU`

    Based on your service priorities, adjust the load accordingly to achieve optimal performance.