TiUP Mirror Reference Guide
A TiUP mirror is TiUP's component repository, which stores components and their metadata. A TiUP mirror takes one of the following two forms:
- A directory on the local disk, which serves a local TiUP client; this is called a local mirror in this document.
- An HTTP service started on top of such a directory on a remote machine, which serves remote TiUP clients; this is called a remote mirror in this document.
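A TiUP client can point to either form through the tiup mirror set command. The following lines are a minimal sketch; the directory path and URL are placeholders:
# Use a directory on the local disk as the mirror (local mirror):
tiup mirror set /data/tiup-mirror
# Use a mirror served over HTTP (remote mirror):
tiup mirror set https://tiup-mirror.example.com
# Print the mirror address that the client currently uses:
tiup mirror show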
Create and update mirror
You can create a TiUP mirror using one of the following two methods:
- Execute tiup mirror init to create a mirror from scratch.
- Execute tiup mirror clone to clone from an existing mirror.
After the mirror is created, you can add components to or delete components from the mirror using the tiup mirror commands. TiUP updates a mirror by adding files and assigning a new version number to it, rather than deleting any files from the mirror.
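For example, the following commands sketch the typical lifecycle of a self-hosted mirror. The paths, component name, and version numbers are placeholders:
# Create an empty mirror from scratch:
tiup mirror init /data/tiup-mirror
# Or clone the components of a specific version from the current mirror:
tiup mirror clone /data/tiup-mirror v6.1.0 --os linux --arch amd64
# Publish a component package to the mirror. This adds files and a new
# metadata version rather than deleting or overwriting existing files:
tiup mirror publish hello v0.0.1 hello.tar.gz hello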
Mirror structure
A typical mirror structure is as follows:
+ <mirror-dir> # Mirror's root directory
|-- root.json # Mirror's root certificate
|-- {2..N}.root.json # Mirror's root certificate
|-- {1..N}.index.json # Component/user index
|-- {1..N}.{component}.json # Component metadata
|-- {component}-{version}-{os}-{arch}.tar.gz # Component binary package
|-- snapshot.json # Mirror's latest snapshot
|-- timestamp.json # Mirror's latest timestamp
|--+ commits # Mirror's update log (deletable)
|--+ commit-{ts1..tsN}
|-- {N}.root.json
|-- {N}.{component}.json
|-- {N}.index.json
|-- {component}-{version}-{os}-{arch}.tar.gz
|-- snapshot.json
|-- timestamp.json
|--+ keys # Mirror's private key (can be moved to other locations)
|-- {hash1..hashN}-root.json # Private key of the root certificate
|-- {hash}-index.json # Private key of the indexes
|-- {hash}-snapshot.json # Private key of the snapshots
|-- {hash}-timestamp.json # Private key of the timestamps
- The commits directory stores the logs generated during mirror updates and is used to roll back the mirror. You can delete old log directories periodically when disk space runs low.
- The private keys stored in the keys directory are sensitive. It is recommended to keep them in a separate location.
Root directory
In a TiUP mirror, the root certificate stores the public keys of the other metadata files. Each time any metadata file (*.json) is obtained, the TiUP client looks up the corresponding public key in the installed root.json according to the metadata file type (root, index, snapshot, timestamp), and then uses that public key to verify whether the file's signature is valid.
The root certificate's format is as follows:
{
"signatures": [ # Each metadata file has some signatures which are signed by several private keys corresponding to the file.
{
"keyid": "{id-of-root-key-1}", # The ID of the first private key that participates in the signature. This ID is obtained by hashing the content of the public key that corresponds to the private key.
"sig": "{signature-by-root-key-1}" # The signed part of this file by this private key.
},
...
{
"keyid": "{id-of-root-key-N}", # The ID of the Nth private key that participates in the signature.
"sig": "{signature-by-root-key-N}" # The signed part of this file by this private key.
}
],
"signed": { # The signed part.
"_type": "root", # The type of this file. root.json's type is root.
"expires": "{expiration-date-of-this-file}", # The expiration time of the file. If the file expires, the client rejects the file.
"roles": { # Records the keys used to sign each metadata file.
"{role:index,root,snapshot,timestamp}": { # Each involved metadata file includes index, root, snapshot, and timestamp.
"keys": { # Only the key's signature recorded in `keys` is valid.
"{id-of-the-key-1}": { # The ID of the first key used to sign {role}.
"keytype": "rsa", # The key's type. Currently, the key type is fixed as rsa.
"keyval": { # The key's payload.
"public": "{public-key-content}" # The public key's content.
},
"scheme": "rsassa-pss-sha256" # Currently, the scheme is fixed as rsassa-pss-sha256.
},
"{id-of-the-key-N}": { # The ID of the Nth key used to sign {role}.
"keytype": "rsa",
"keyval": {
"public": "{public-key-content}"
},
"scheme": "rsassa-pss-sha256"
}
},
"threshold": {N}, # Indicates that the metadata file needs at least N key signatures.
"url": "/{role}.json" # The address from which the file can be obtained. For index files, prefix it with the version number (for example, /{N}.index.json).
}
},
"spec_version": "0.1.0", # The specified version followed by this file. If the file structure is changed in the future, the version number needs to be upgraded. The current version number is 0.1.0.
"version": {N} # The version number of this file. You need to create a new {N+1}.root.json every time you update the file, and set its version to N + 1.
}
}
Index
The index file records all the components in the mirror and the owner information of the components.
The index file's format is as follows:
{
"signatures": [ # The file's signature.
{
"keyid": "{id-of-index-key-1}", # The ID of the first private key that participates in the signature.
"sig": "{signature-by-index-key-1}", # The signed part of this file by this private key.
},
...
{
"keyid": "{id-of-root-key-N}", # The ID of the Nth private key that participates in the signature.
"sig": "{signature-by-root-key-N}" # The signed part of this file by this private key.
}
],
"signed": {
"_type": "index", # The file type.
"components": { # The component list.
"{component1}": { # The name of the first component.
"hidden": {bool}, # Whether it is a hidden component.
"owner": "{owner-id}", # The component owner's ID.
"standalone": {bool}, # Whether it is a standalone component.
"url": "/{component}.json", # The address from which the component can be obtained. You need to prefix it with the version number (for example, /{N}.{component}.json).
"yanked": {bool} # Indicates whether the component is marked as deleted.
},
...
"{componentN}": { # The name of the Nth component.
...
},
},
"default_components": ["{component1}".."{componentN}"], # The default component that a mirror must contain. Currently, this field defaults to empty (disabled).
"expires": "{expiration-date-of-this-file}", # The expiration time of the file. If the file expires, the client rejects the file.
"owners": {
"{owner1}": { # The ID of the first owner.
"keys": { # Only the key's signature recorded in `keys` is valid.
"{id-of-the-key-1}": { # The first key of the owner.
"keytype": "rsa", # The key's type. Currently, the key type is fixed as rsa.
"keyval": { # The key's payload.
"public": "{public-key-content}" # The public key's content.
},
"scheme": "rsassa-pss-sha256" # Currently, the scheme is fixed as rsassa-pss-sha256.
},
...
"{id-of-the-key-N}": { # The Nth key of the owner.
...
}
},
"name": "{owner-name}", # The name of the owner.
"threshod": {N} # Indicates that the components owned by the owner must have at least N valid signatures.
},
...
"{ownerN}": { # The ID of the Nth owner.
...
}
}
"spec_version": "0.1.0", # The specified version followed by this file. If the file structure is changed in the future, the version number needs to be upgraded. The current version number is 0.1.0.
"version": {N} # The version number of this file. You need to create a new {N+1}.index.json every time you update the file, and set its version to N + 1.
}
}
Component
The component metadata file records the platform-specific information and available versions of a component.
The component metadata file's format is as follows:
{
"signatures": [ # The file's signature.
{
"keyid": "{id-of-index-key-1}", # The ID of the first private key that participates in the signature.
"sig": "{signature-by-index-key-1}", # The signed part of this file by this private key.
},
...
{
"keyid": "{id-of-root-key-N}", # The ID of the Nth private key that participates in the signature.
"sig": "{signature-by-root-key-N}" # The signed part of this file by this private key.
}
],
"signed": {
"_type": "component", # The file type.
"description": "{description-of-the-component}", # The description of the component.
"expires": "{expiration-date-of-this-file}", # The expiration time of the file. If the file expires, the client rejects the file.
"id": "{component-id}", # The globally unique ID of the component.
"nightly": "{nightly-cursor}", # The nightly cursor, and the value is the latest nightly version number (for example, v5.0.0-nightly-20201209).
"platforms": { # The component's supported platforms (such as darwin/amd64, linux/arm64).
"{platform-pair-1}": {
"{version-1}": { # The semantic version number (for example, v1.0.0).
"dependencies": null, # Specifies the dependency relationship between components. The field is not used yet and is fixed as null.
"entry": "{entry}", # The relative path of the entry binary file in the tar package.
"hashs": { # The checksum of the tar package. sha256 and sha512 are used.
"sha256": "{sum-of-sha256}",
"sha512": "{sum-of-sha512}",
},
"length": {length-of-tar}, # The length of the tar package.
"released": "{release-time}", # The release date of the version.
"url": "{url-of-tar}", # The download address of the tar package.
"yanked": {bool} # Indicates whether this version is disabled.
}
},
...
"{platform-pair-N}": {
...
}
},
"spec_version": "0.1.0", # The specified version followed by this file. If the file structure is changed in the future, the version number needs to be upgraded. The current version number is 0.1.0.
"version": {N} # The version number of this file. You need to create a new {N+1}.{component}.json every time you update the file, and set its version to N + 1.
}
Snapshot
The snapshot file records the version number of each metadata file.
The snapshot file's structure is as follows:
{
"signatures": [ # The file's signature.
{
"keyid": "{id-of-index-key-1}", # The ID of the first private key that participates in the signature.
"sig": "{signature-by-index-key-1}", # The signed part of this file by this private key.
},
...
{
"keyid": "{id-of-root-key-N}", # The ID of the Nth private key that participates in the signature.
"sig": "{signature-by-root-key-N}" # The signed part of this file by this private key.
}
],
"signed": {
"_type": "snapshot", # The file type.
"expires": "{expiration-date-of-this-file}", # The expiration time of the file. If the file expires, the client rejects the file.
"meta": { # Other metadata files' information.
"/root.json": {
"length": {length-of-json-file}, # The length of root.json
"version": {version-of-json-file} # The version of root.json
},
"/index.json": {
"length": {length-of-json-file},
"version": {version-of-json-file}
},
"/{component-1}.json": {
"length": {length-of-json-file},
"version": {version-of-json-file}
},
...
"/{component-N}.json": {
...
}
},
"spec_version": "0.1.0", # The specified version followed by this file. If the file structure is changed in the future, the version number needs to be upgraded. The current version number is 0.1.0.
"version": 0 # The version number of this file, which is fixed as 0.
}
Timestamp
The timestamp file records the checksum of the current snapshot.
The timestamp file's format is as follows:
{
"signatures": [ # The file's signature.
{
"keyid": "{id-of-index-key-1}", # The ID of the first private key that participates in the signature.
"sig": "{signature-by-index-key-1}", # The signed part of this file by this private key.
},
...
{
"keyid": "{id-of-root-key-N}", # The ID of the Nth private key that participates in the signature.
"sig": "{signature-by-root-key-N}" # The signed part of this file by this private key.
}
],
"signed": {
"_type": "timestamp", # The file type.
"expires": "{expiration-date-of-this-file}", # The expiration time of the file. If the file expires, the client rejects the file.
"meta": { # The information of snapshot.json.
"/snapshot.json": {
"hashes": {
"sha256": "{sum-of-sha256}" # snapshot.json's sha256.
},
"length": {length-of-json-file} # The length of snapshot.json.
}
},
"spec_version": "0.1.0", # The specified version followed by this file. If the file structure is changed in the future, the version number needs to be upgraded. The current version number is 0.1.0.
"version": {N} # The version number of this file. You need to overwrite timestamp.json every time you update the file, and set its version to N + 1.
Client workflow
The client uses the following logic to ensure that the files downloaded from the mirror are safe:
- A root.json file is included with the binary when the client is installed.
- The running client performs the following tasks based on the existing root.json:
  - Obtain the version from root.json and mark it as N.
  - Request {N+1}.root.json from the mirror. If the request is successful, use the public key recorded in root.json to verify whether the file is valid.
  - Request timestamp.json from the mirror and use the public key recorded in root.json to verify whether the file is valid.
  - Check whether the checksum of snapshot.json recorded in timestamp.json matches the checksum of the local snapshot.json. If the two do not match, request the latest snapshot.json from the mirror and use the public key recorded in root.json to verify whether the file is valid (see the sketch after this list).
  - Obtain the version number N of the index.json file from snapshot.json and request {N}.index.json from the mirror. Then use the public key recorded in root.json to verify whether the file is valid.
  - For components such as tidb.json and tikv.json, the client obtains the version numbers N of the components from snapshot.json and requests {N}.{component}.json from the mirror. Then the client uses the public key recorded in index.json to verify whether the file is valid.
  - For a component's tar files, the client obtains the URLs and checksums of the files from {component}.json and requests the tar packages from those URLs. Then the client verifies whether the checksums are correct.
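As a rough illustration of the snapshot check above, the comparison can be sketched with shell commands. This assumes a remote mirror address in $MIRROR and that curl, jq, and sha256sum are available; the actual client implements this logic internally rather than by calling these tools:
# Download the latest timestamp.json from the mirror.
curl -sfO "$MIRROR/timestamp.json"
# Read the sha256 checksum that timestamp.json records for snapshot.json.
expected=$(jq -r '.signed.meta."/snapshot.json".hashes.sha256' timestamp.json)
# Compute the checksum of the local snapshot.json.
actual=$(sha256sum snapshot.json | awk '{print $1}')
if [ "$expected" != "$actual" ]; then
    # The local snapshot is outdated: fetch the latest snapshot.json and verify its
    # signature with the snapshot public key recorded in root.json (omitted here).
    curl -sfO "$MIRROR/snapshot.json"
fi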