# INSPECTION_RESULT
TiDB has some built-in diagnostic rules for detecting faults and hidden issues in the system.

The `INSPECTION_RESULT` diagnostic feature helps you find problems quickly and reduces repetitive manual work. You can execute the `SELECT * FROM information_schema.inspection_result` statement to trigger the internal diagnostics.

The structure of the `information_schema.inspection_result` diagnostic result table is as follows:
```sql
USE information_schema;
DESC inspection_result;
```

```
+----------------+--------------+------+------+---------+-------+
| Field          | Type         | Null | Key  | Default | Extra |
+----------------+--------------+------+------+---------+-------+
| RULE           | varchar(64)  | YES  |      | NULL    |       |
| ITEM           | varchar(64)  | YES  |      | NULL    |       |
| TYPE           | varchar(64)  | YES  |      | NULL    |       |
| INSTANCE       | varchar(64)  | YES  |      | NULL    |       |
| STATUS_ADDRESS | varchar(64)  | YES  |      | NULL    |       |
| VALUE          | varchar(64)  | YES  |      | NULL    |       |
| REFERENCE      | varchar(64)  | YES  |      | NULL    |       |
| SEVERITY       | varchar(64)  | YES  |      | NULL    |       |
| DETAILS        | varchar(256) | YES  |      | NULL    |       |
+----------------+--------------+------+------+---------+-------+
9 rows in set (0.00 sec)
```
Field description:

- `RULE`: The name of the diagnostic rule. Currently, the following rules are available:
    - `config`: Checks whether the configuration is consistent and proper. If the same configuration item is inconsistent on different instances, a `warning` diagnostic result is generated.
    - `version`: The version consistency check. If the version of the same component is inconsistent on different instances, a `warning` diagnostic result is generated.
    - `node-load`: Checks the server load. If the current system load is too high, the corresponding `warning` diagnostic result is generated.
    - `critical-error`: Each module of the system defines critical errors. If a critical error exceeds the threshold within the corresponding time period, a `warning` diagnostic result is generated.
    - `threshold-check`: The diagnostic system checks the thresholds of key metrics. If a threshold is exceeded, the corresponding diagnostic information is generated.
- `ITEM`: Each rule diagnoses different items. This field indicates the specific diagnostic item corresponding to each rule.
- `TYPE`: The instance type of the diagnostics. The optional values are `tidb`, `pd`, and `tikv`.
- `INSTANCE`: The specific address of the diagnosed instance.
- `STATUS_ADDRESS`: The HTTP API service address of the instance.
- `VALUE`: The value of a specific diagnostic item.
- `REFERENCE`: The reference value (threshold) for this diagnostic item. If `VALUE` exceeds the threshold, the corresponding diagnostic information is generated.
- `SEVERITY`: The severity level. The optional values are `warning` and `critical`.
- `DETAILS`: Diagnostic details, which might also contain SQL statements or document links for further diagnostics.
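These fields can be used directly in filter conditions. For example, the following minimal sketch lists only the `warning`-level results reported for TiKV instances (no rows are returned if the cluster has no such issues):

```sql
SELECT rule, item, instance, value, reference, details
FROM information_schema.inspection_result
WHERE type = 'tikv' AND severity = 'warning';
```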
## Diagnostics example
Diagnose issues currently existing in the cluster:

```sql
SELECT * FROM information_schema.inspection_result\G
```
```
***************************[ 1. row ]***************************
RULE      | config
ITEM      | log.slow-threshold
TYPE      | tidb
INSTANCE  | 172.16.5.40:4000
VALUE     | 0
REFERENCE | not 0
SEVERITY  | warning
DETAILS   | slow-threshold = 0 will record every query to slow log, it may affect performance
***************************[ 2. row ]***************************
RULE      | version
ITEM      | git_hash
TYPE      | tidb
INSTANCE  |
VALUE     | inconsistent
REFERENCE | consistent
SEVERITY  | critical
DETAILS   | the cluster has 2 different tidb version, execute the sql to see more detail: select * from information_schema.cluster_info where type='tidb'
***************************[ 3. row ]***************************
RULE      | threshold-check
ITEM      | storage-write-duration
TYPE      | tikv
INSTANCE  | 172.16.5.40:23151
VALUE     | 130.417
REFERENCE | < 0.100
SEVERITY  | warning
DETAILS   | max duration of 172.16.5.40:23151 tikv storage-write-duration was too slow
***************************[ 4. row ]***************************
RULE      | threshold-check
ITEM      | rocksdb-write-duration
TYPE      | tikv
INSTANCE  | 172.16.5.40:20151
VALUE     | 108.105
REFERENCE | < 0.100
SEVERITY  | warning
DETAILS   | max duration of 172.16.5.40:20151 tikv rocksdb-write-duration was too slow
```
The following issues can be detected from the diagnostic result above:
- The first row indicates that TiDB's `log.slow-threshold` value is configured to `0`, which might affect performance.
- The second row indicates that two different TiDB versions exist in the cluster.
- The third and fourth rows indicate that the TiKV write latency is too high. The expected latency is no more than 0.1 seconds, while the actual latency is far longer than expected.
You can also diagnose issues existing within a specified time range, such as from "2020-03-26 00:03:00" to "2020-03-26 00:08:00". To specify the time range, use the `/*+ time_range() */` SQL hint. See the following query example:

```sql
SELECT /*+ time_range("2020-03-26 00:03:00", "2020-03-26 00:08:00") */ * FROM information_schema.inspection_result\G
```
```
***************************[ 1. row ]***************************
RULE      | critical-error
ITEM      | server-down
TYPE      | tidb
INSTANCE  | 172.16.5.40:4009
VALUE     |
REFERENCE |
SEVERITY  | critical
DETAILS   | tidb 172.16.5.40:4009 restarted at time '2020/03/26 00:05:45.670'
***************************[ 2. row ]***************************
RULE      | threshold-check
ITEM      | get-token-duration
TYPE      | tidb
INSTANCE  | 172.16.5.40:10089
VALUE     | 0.234
REFERENCE | < 0.001
SEVERITY  | warning
DETAILS   | max duration of 172.16.5.40:10089 tidb get-token-duration is too slow
```
The following issues can be detected from the diagnostic result above:
- The first row indicates that the `172.16.5.40:4009` TiDB instance was restarted at `2020/03/26 00:05:45.670`.
- The second row indicates that the maximum `get-token-duration` time of the `172.16.5.40:10089` TiDB instance is 0.234s, but the expected time is less than 0.001s.
You can also specify conditions. For example, to query the `critical`-level diagnostic results:

```sql
SELECT * FROM information_schema.inspection_result WHERE severity = 'critical';
```

To query only the diagnostic results of the `critical-error` rule:

```sql
SELECT * FROM information_schema.inspection_result WHERE rule = 'critical-error';
```
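The `time_range()` hint and the filter conditions can also be combined. A minimal sketch that restricts the `critical-error` diagnostics to the same illustrative time window as above:

```sql
SELECT /*+ time_range("2020-03-26 00:03:00", "2020-03-26 00:08:00") */ *
FROM information_schema.inspection_result
WHERE rule = 'critical-error' AND severity = 'critical';
```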
## Diagnostic rules
The diagnostic module contains a series of rules. These rules query the existing monitoring tables and cluster information tables, and compare the results with the thresholds. If a result exceeds the threshold, a `warning` or `critical` diagnostic result is generated, and the corresponding information is provided in the `details` column.
You can see the existing diagnostic rules by querying the `inspection_rules` system table:

```sql
SELECT * FROM information_schema.inspection_rules WHERE type = 'inspection';
```

```
+-----------------+------------+---------+
| NAME            | TYPE       | COMMENT |
+-----------------+------------+---------+
| config          | inspection |         |
| version         | inspection |         |
| node-load       | inspection |         |
| critical-error  | inspection |         |
| threshold-check | inspection |         |
+-----------------+------------+---------+
```
### `config` diagnostic rule
In the `config` diagnostic rule, the following two diagnostic checks are executed by querying the `CLUSTER_CONFIG` system table:

- Check whether the configuration values of the same component are consistent. Not all configuration items have this consistency check. The allowlist for the consistency check is as follows:

    ```
    // The allowlist of the TiDB configuration consistency check
    port
    status.status-port
    host
    path
    advertise-address
    status.status-port
    log.file.filename
    log.slow-query-file
    tmp-storage-path

    // The allowlist of the PD configuration consistency check
    advertise-client-urls
    advertise-peer-urls
    client-urls
    data-dir
    log-file
    log.file.filename
    metric.job
    name
    peer-urls

    // The allowlist of the TiKV configuration consistency check
    server.addr
    server.advertise-addr
    server.status-addr
    log-file
    raftstore.raftdb-path
    storage.data-dir
    storage.block-cache.capacity
    ```

- Check whether the values of the following configuration items are as expected:

    | Component | Configuration item | Expected value |
    |---|---|---|
    | TiDB | log.slow-threshold | larger than 0 |
    | TiKV | raftstore.sync-log | true |
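To inspect these configuration items yourself, you can query the `CLUSTER_CONFIG` system table that this rule reads from. A minimal sketch, assuming the default `TYPE`/`INSTANCE`/`KEY`/`VALUE` column layout of that table:

```sql
-- Show the current value of the checked items on every instance
SELECT * FROM information_schema.cluster_config
WHERE `key` IN ('log.slow-threshold', 'raftstore.sync-log');
```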
### `version` diagnostic rule
The `version` diagnostic rule checks whether the version hash of the same component is consistent by querying the `CLUSTER_INFO` system table. See the following example:

```sql
SELECT * FROM information_schema.inspection_result WHERE rule='version'\G
```

```
***************************[ 1. row ]***************************
RULE      | version
ITEM      | git_hash
TYPE      | tidb
INSTANCE  |
VALUE     | inconsistent
REFERENCE | consistent
SEVERITY  | critical
DETAILS   | the cluster has 2 different tidb versions, execute the sql to see more detail: SELECT * FROM information_schema.cluster_info WHERE type='tidb'
```
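The `DETAILS` column suggests the follow-up statement to run. Executing it shows which version each instance is running, so you can locate the outlier:

```sql
SELECT * FROM information_schema.cluster_info WHERE type = 'tidb';
```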
### `critical-error` diagnostic rule
In the `critical-error` diagnostic rule, the following two diagnostic checks are executed:

- Detect whether the cluster has the following errors by querying the related monitoring system tables in the metrics schema:

    | Component | Error name | Monitoring table | Error description |
    |---|---|---|---|
    | TiDB | panic-count | tidb_panic_count_total_count | Panic occurs in TiDB. |
    | TiDB | binlog-error | tidb_binlog_error_total_count | An error occurs when TiDB writes binlog. |
    | TiKV | critical-error | tikv_critical_error_total_count | The critical error of TiKV. |
    | TiKV | scheduler-is-busy | tikv_scheduler_is_busy_total_count | The TiKV scheduler is too busy, which makes TiKV temporarily unavailable. |
    | TiKV | coprocessor-is-busy | tikv_coprocessor_is_busy_total_count | The TiKV Coprocessor is too busy. |
    | TiKV | channel-is-full | tikv_channel_full_total_count | The "channel full" error occurs in TiKV. |
    | TiKV | tikv_engine_write_stall | tikv_engine_write_stall | The "stall" error occurs in TiKV. |

- Check whether any component has been restarted by querying the `metrics_schema.up` monitoring table and the `CLUSTER_LOG` system table.
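If you want to look at the restart signal directly, you can query the `metrics_schema.up` table yourself. This is a rough sketch, assuming the table exposes the Prometheus `up` metric with `time`, `instance`, and `value` columns; the exact column layout may differ between versions:

```sql
-- Scrape targets that were unreachable (value = 0 in the Prometheus up metric),
-- which often corresponds to a component that was down or restarting.
SELECT * FROM metrics_schema.up WHERE value = 0;
```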
### `threshold-check` diagnostic rule
The threshold-check diagnostic rule checks whether the following metrics in the cluster exceed the threshold by querying the related monitoring system tables in the metrics schema:
| Component | Monitoring metric | Monitoring table | Expected value | Description |
|---|---|---|---|---|
| TiDB | tso-duration | pd_tso_wait_duration | < 50ms | The wait duration of getting the TSO of a transaction. |
| TiDB | get-token-duration | tidb_get_token_duration | < 1ms | The time it takes to get the token. The related TiDB configuration item is token-limit. |
| TiDB | load-schema-duration | tidb_load_schema_duration | < 1s | The time it takes for TiDB to update the schema metadata. |
| TiKV | scheduler-cmd-duration | tikv_scheduler_command_duration | < 0.1s | The time it takes for TiKV to execute the KV cmd request. |
| TiKV | handle-snapshot-duration | tikv_handle_snapshot_duration | < 30s | The time it takes for TiKV to handle the snapshot. |
| TiKV | storage-write-duration | tikv_storage_async_request_duration | < 0.1s | The write latency of TiKV. |
| TiKV | storage-snapshot-duration | tikv_storage_async_request_duration | < 50ms | The time it takes for TiKV to get the snapshot. |
| TiKV | rocksdb-write-duration | tikv_engine_write_duration | < 100ms | The write latency of TiKV RocksDB. |
| TiKV | rocksdb-get-duration | tikv_engine_max_get_duration | < 50ms | The read latency of TiKV RocksDB. |
| TiKV | rocksdb-seek-duration | tikv_engine_max_seek_duration | < 50ms | The latency of TiKV RocksDB to execute seek. |
| TiKV | scheduler-pending-cmd-count | tikv_scheduler_pending_commands | < 1000 | The number of commands stalled in TiKV. |
| TiKV | index-block-cache-hit | tikv_block_index_cache_hit | > 0.95 | The hit rate of index block cache in TiKV. |
| TiKV | filter-block-cache-hit | tikv_block_filter_cache_hit | > 0.95 | The hit rate of filter block cache in TiKV. |
| TiKV | data-block-cache-hit | tikv_block_data_cache_hit | > 0.80 | The hit rate of data block cache in TiKV. |
| TiKV | leader-score-balance | pd_scheduler_store_status | < 0.05 | Checks whether the leader score of each TiKV instance is balanced. The expected difference between instances is less than 5%. |
| TiKV | region-score-balance | pd_scheduler_store_status | < 0.05 | Checks whether the Region score of each TiKV instance is balanced. The expected difference between instances is less than 5%. |
| TiKV | store-available-balance | pd_scheduler_store_status | < 0.2 | Checks whether the available storage of each TiKV instance is balanced. The expected difference between instances is less than 20%. |
| TiKV | region-count | pd_scheduler_store_status | < 20000 | Checks the number of Regions on each TiKV instance. The expected number of Regions in a single instance is less than 20,000. |
| PD | region-health | pd_region_health | < 100 | Detects the number of Regions that are in the process of scheduling in the cluster. The expected number is less than 100 in total. |
In addition, this rule also checks whether the CPU usage of the following threads in a TiKV instance is too high:
- scheduler-worker-cpu
- coprocessor-normal-cpu
- coprocessor-high-cpu
- coprocessor-low-cpu
- grpc-cpu
- raftstore-cpu
- apply-cpu
- storage-readpool-normal-cpu
- storage-readpool-high-cpu
- storage-readpool-low-cpu
- split-check-cpu
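To review only the results of this rule, including the per-thread CPU items listed above, you can filter on `rule` and `item`. A minimal sketch:

```sql
-- All threshold-check results
SELECT item, instance, value, reference, details
FROM information_schema.inspection_result
WHERE rule = 'threshold-check';

-- Only the per-thread CPU usage items
SELECT item, instance, value, reference
FROM information_schema.inspection_result
WHERE rule = 'threshold-check' AND item LIKE '%cpu';
```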
The built-in diagnostic rules are constantly being improved. If you need additional diagnostic rules, you are welcome to create a PR or file an issue in the tidb repository.