Backup & Restore FAQs
This document lists the frequently asked questions (FAQs) about Backup & Restore (BR) and their solutions.
In TiDB v5.4.0 and later versions, why do backup tasks slow down when they are performed on a cluster under a heavy workload?
Starting from TiDB v5.4.0, BR introduces the auto-tune feature for backup tasks. For clusters in v5.4.0 or later versions, this feature is enabled by default. When the cluster workload is heavy, the feature limits the resources used by backup tasks to reduce the impact on the online cluster. For more information, refer to BR Auto-Tune.
TiKV supports dynamically configuring the auto-tune feature. You can enable or disable the feature by the following methods without restarting your cluster:
- Disable auto-tune: set the TiKV configuration item `backup.enable-auto-tune` to `false`.
- Enable auto-tune: set `backup.enable-auto-tune` to `true`. For clusters upgraded from v5.3.x to v5.4.0 or later versions, the auto-tune feature is disabled by default, and you need to enable it manually.
To use `tikv-ctl` to enable or disable auto-tune, refer to Use auto-tune.
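For reference, the following is a minimal sketch of toggling the feature on one TiKV instance with `tikv-ctl`. The address is a placeholder, and whether this item can be changed online in your version should be confirmed in the auto-tune documentation:

```shell
# Disable auto-tune on a running TiKV instance (address is an example).
tikv-ctl --host 172.16.6.118:20160 modify-tikv-config -n backup.enable-auto-tune -v false

# Enable it again later.
tikv-ctl --host 172.16.6.118:20160 modify-tikv-config -n backup.enable-auto-tune -v true
```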
In addition, auto-tune reduces the default number of threads used by backup tasks. For details, see [`backup.num-threads`](/tikv-configuration-file.md#num-threads-1). Therefore, on the Grafana Dashboard, the speed, CPU usage, and I/O resource utilization of backup tasks are lower than those of versions earlier than v5.4.0. Before v5.4.0, the default value of `backup.num-threads` was `CPU * 0.75`, that is, the number of threads used by backup tasks made up 75% of the logical CPU cores, with a maximum value of `32`. Starting from v5.4.0, the default value of this configuration item is `CPU * 0.5`, and its maximum value is `8`.
When you perform backup tasks on an offline cluster, you can speed up the backup by using `tikv-ctl` to set `backup.num-threads` to a larger value.
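The following sketch raises the thread count on one TiKV instance; the address and the value `16` are examples only:

```shell
# Use more threads for backup on an offline cluster to speed up the backup.
tikv-ctl --host 172.16.6.118:20160 modify-tikv-config -n backup.num-threads -v 16
```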
What should I do if the error message `could not read local://...:download sst failed` is returned during data restoration?
When you restore data, each node must have access to all backup files (SST files). By default, if `local` storage is used, you cannot restore data because the backup files are scattered across different nodes. Therefore, you have to copy the backup files of each TiKV node to the other TiKV nodes.
It is recommended that you mount an NFS disk as a backup disk during backup. For details, see Back up a single table to a network disk.
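As a rough sketch, the NFS server address, export path, and mount point below are placeholders; the same directory must be mounted on every TiKV node and on the node that runs BR:

```shell
# Mount the shared NFS export on every TiKV node and on the BR node.
sudo mkdir -p /mnt/backup
sudo mount -t nfs 172.16.6.200:/exports/tidb-backup /mnt/backup

# Back up to the shared directory; every node sees the same files.
br backup full --pd "172.16.6.118:2379" --storage "local:///mnt/backup"
```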
How much impact does a backup operation have on the cluster?
For TiDB v5.4.0 or later versions, BR not only reduces the default CPU utilization of backup tasks but also introduces the BR Auto-Tune feature to limit the resources used by backup tasks on clusters with heavy workloads. Therefore, when you use the default configuration for backup tasks on a v5.4.0 cluster with a heavy workload, the impact on cluster performance is significantly smaller than on clusters earlier than v5.4.0.
The following are the results of an internal test on a single node. They show that in the full-speed backup scenario, the impact of BR backup on cluster performance differs considerably between the default configurations of v5.4.0 and v5.3.0:
- When BR uses the default configuration of v5.3.0, the QPS of write-only workload is reduced by 75%.
- When BR uses the default configuration of v5.4.0, the QPS for the same workload is reduced by 25%. However, when this configuration is used, the duration of backup tasks using BR becomes correspondingly longer. The time required is 1.7 times that of the v5.3.0 configuration.
You can use either of the following solutions to manually control the impact of backup tasks on cluster performance. Note that these methods reduce the impact of backup tasks on the cluster, but they also reduce the speed of backup tasks.
- Use the `--ratelimit` parameter to limit the speed of backup tasks (see the sketch after this list). Note that this parameter limits the speed of saving backup files to external storage. When calculating the total size of backup files, use the `backup data size(after compressed)` value in the backup log as a benchmark.
- Adjust the TiKV configuration item `backup.num-threads` to limit the number of threads used by backup tasks. When BR uses no more than `8` threads for backup tasks and the total CPU utilization of the cluster does not exceed 60%, the backup tasks have little impact on the cluster, regardless of the read and write workload.
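A minimal sketch of the first option; the limit is in MiB/s per TiKV node, and the addresses and paths are examples:

```shell
# Cap the speed of writing backup files to storage at 128 MiB/s per TiKV node.
br backup full \
    --pd "172.16.6.118:2379" \
    --storage "local:///mnt/backup" \
    --ratelimit 128
```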
Does BR back up system tables? During data restoration, do they raise conflicts?
Before v5.1.0, BR filters out data from the system schema `mysql.*` during the backup. Since v5.1.0, BR backs up all data by default, including the system schema `mysql.*`.
The technical implementation of restoring the system tables in `mysql.*` is not complete yet, so the tables in the system schema `mysql` are not restored by default, which means no conflicts are raised. For more details, refer to Restore tables in the `mysql` schema (experimental).
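If you do need the system tables, a rough sketch (assuming the experimental feature described above, with `mysql.usertable` as a hypothetical table name) is to include them explicitly with a table filter during restore:

```shell
# Explicitly include a table in the mysql schema; by default it is skipped.
br restore full \
    --pd "172.16.6.118:2379" \
    --storage "local:///mnt/backup" \
    --filter "mysql.usertable"
```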
What should I do to handle the `Permission denied` or `No such file or directory` error, even if I have run BR using root without success?
You need to confirm whether TiKV has access to the backup directory. To back up data, confirm whether TiKV has the write permission. To restore data, confirm whether it has the read permission.
During the backup operation, if the storage medium is a local disk or a network file system (NFS), make sure that BR and TiKV are started by the same user (if BR and TiKV are on different machines, their users' UIDs must be consistent). Otherwise, the `Permission denied` issue might occur.
Running BR as root might still fail due to disk permissions, because the backup files (SST files) are saved by TiKV.
You might encounter the same problem during data restoration. The read permission is verified only when the SST files are read for the first time, and because DDL execution takes time, there might be a long interval between the permission check and the start of BR. As a result, you might receive the `Permission denied` error only after waiting for a long time.
Therefore, it is recommended to check the permission before restoring data, according to the following steps:
Run the Linux-native command for process query:

```shell
ps aux | grep tikv-server
```

The output of the above command:

```shell
tidb_ouo  9235 10.9  3.8 2019248 622776 ?  Ssl  08:28  1:12 bin/tikv-server --addr 0.0.0.0:20162 --advertise-addr 172.16.6.118:20162 --status-addr 0.0.0.0:20188 --advertise-status-addr 172.16.6.118:20188 --pd 172.16.6.118:2379 --data-dir /home/user1/tidb-data/tikv-20162 --config conf/tikv.toml --log-file /home/user1/tidb-deploy/tikv-20162/log/tikv.log
tidb_ouo  9236  9.8  3.8 2048940 631136 ?  Ssl  08:28  1:05 bin/tikv-server --addr 0.0.0.0:20161 --advertise-addr 172.16.6.118:20161 --status-addr 0.0.0.0:20189 --advertise-status-addr 172.16.6.118:20189 --pd 172.16.6.118:2379 --data-dir /home/user1/tidb-data/tikv-20161 --config conf/tikv.toml --log-file /home/user1/tidb-deploy/tikv-20161/log/tikv.log
```
Or you can run the following command:

```shell
ps aux | grep tikv-server | awk '{print $1}'
```

The output of the above command:

```shell
tidb_ouo
tidb_ouo
```
Query the startup information of the cluster using the TiUP command:

```shell
tiup cluster list
```

The output of the above command:

```shell
[root@Copy-of-VM-EE-CentOS76-v1 br]# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.5.2/tiup-cluster list
Name          User      Version  Path                                               PrivateKey
----          ----      -------  ----                                               ----------
tidb_cluster  tidb_ouo  v5.0.2   /root/.tiup/storage/cluster/clusters/tidb_cluster  /root/.tiup/storage/cluster/clusters/tidb_cluster/ssh/id_rsa
```
Check the permission of the backup directory. For example, `backup` is the directory for backup data storage:

```shell
ls -al backup
```

The output of the above command:

```shell
[root@Copy-of-VM-EE-CentOS76-v1 user1]# ls -al backup
total 0
drwxr-xr-x  2 root root   6 Jun 28 17:48 .
drwxr-xr-x 11 root root 310 Jul  4 10:35 ..
```
From the above output, you can find that the `tikv-server` instance is started by the user `tidb_ouo`, but the user `tidb_ouo` does not have the write permission for the `backup` directory. Therefore, the backup fails.
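In a case like this, one possible fix (a sketch based on the example above) is to give the TiKV user ownership of the backup directory before retrying the backup:

```shell
# Run this in the directory that contains the backup directory,
# so that the user running tikv-server can write to it.
chown -R tidb_ouo:tidb_ouo backup
```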
What should I do to handle the `Io(Os...)` error?
Almost all of these problems are system call errors that occur when TiKV writes data to the disk, for example, `Io(Os {code: 13, kind: PermissionDenied...})` or `Io(Os {code: 2, kind: NotFound...})`.
To address such problems, first check the mounting method and the file system of the backup directory, and try to back up data to another folder or another hard disk.
For example, you might encounter the `Code: 22(invalid argument)` error when backing up data to a network disk built with `samba`.
What should I do to handle the `rpc error: code = Unavailable desc =...` error that occurs in BR?
This error might occur when the capacity of the cluster being restored (using BR) is insufficient. You can further confirm the cause by checking the monitoring metrics of this cluster or the TiKV log.
To handle this issue, you can try to scale out the cluster resources, reduce the concurrency during restoration, and enable the `RATE_LIMIT` option.
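As a sketch, the BR command-line equivalents are the `--ratelimit` and `--concurrency` flags; the values, addresses, and paths below are examples:

```shell
# Reduce the pressure that the restore puts on a small cluster.
br restore full \
    --pd "172.16.6.118:2379" \
    --storage "local:///mnt/backup" \
    --ratelimit 128 \
    --concurrency 64
```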
Where are the backed up files stored when I use `local` storage?
When you use `local` storage, `backupmeta` is generated on the node where BR is running, and backup files are generated on the Leader node of each Region.
What is the size of the backup data? Are there replicas of the backup?
During data backup, backup files are generated on the Leader node of each Region. The size of the backup is equal to the data size, with no redundant replicas. Therefore, the total data size is approximately the total amount of TiKV data divided by the number of replicas.
However, if you want to restore data from local storage, the number of replicas is equal to the number of TiKV nodes, because each TiKV node must have access to all backup files.
What should I do when BR restores data to the upstream cluster of TiCDC/Drainer?
The data restored using BR cannot be replicated to the downstream. This is because BR directly imports SST files but the downstream cluster currently cannot obtain these files from the upstream.
Before v4.0.3, DDL jobs generated during the BR restore might cause unexpected DDL executions in TiCDC/Drainer. Therefore, if you need to perform restore on the upstream cluster of TiCDC/Drainer, add all tables restored using BR to the TiCDC/Drainer block list.
You can use `filter.rules` to configure the block list for TiCDC and use `syncer.ignore-table` to configure the block list for Drainer.
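For TiCDC, the following is a rough sketch of such a block list in a changefeed configuration file; the table name pattern is a placeholder, and the exclusion syntax assumes TiCDC's table filter rules:

```shell
# Write a changefeed configuration that excludes the tables restored by BR,
# then create the changefeed with it.
cat > changefeed.toml <<'EOF'
[filter]
rules = ['*.*', '!test.br_restored_*']
EOF

cdc cli changefeed create --pd=http://172.16.6.118:2379 --config changefeed.toml
```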
Does BR back up the `SHARD_ROW_ID_BITS` and `PRE_SPLIT_REGIONS` information of a table? Does the restored table have multiple Regions?
Yes. BR backs up the `SHARD_ROW_ID_BITS` and `PRE_SPLIT_REGIONS` information of a table. The data of the restored table is also split into multiple Regions.
What should I do if the restore fails with the error message `the entry too large, the max entry size is 6291456, the size of data is 7690800`?
You can try to reduce the number of tables to be created in a batch by setting `--ddl-batch-size` to `128` or a smaller value.
When you use BR to restore backup data with a [`--ddl-batch-size`](/br/br-batch-create-table.md#how-to-use) value greater than `1`, TiDB writes a DDL job of table creation to the DDL jobs queue that is maintained by TiKV. At this time, the total size of the table schemas sent by TiDB at one time should not exceed 6 MB, because the maximum size of job messages is `6 MB` by default (it is not recommended to modify this value; for details, see `txn-entry-size-limit` and `raft-entry-max-size`). Therefore, if you set `--ddl-batch-size` to an excessively large value, the total schema size of the tables sent by TiDB in one batch exceeds the limit, which causes BR to report the `entry too large, the max entry size is 6291456, the size of data is 7690800` error.
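A minimal sketch of setting the option during restore; the addresses and paths are examples:

```shell
# Create at most 128 tables per DDL batch during the restore.
br restore full \
    --pd "172.16.6.118:2379" \
    --storage "local:///mnt/backup" \
    --ddl-batch-size 128
```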
Why is the `region is unavailable` error reported for a SQL query after I use BR to restore the backup data?
If the cluster backed up using BR has TiFlash, `TableInfo` stores the TiFlash information when BR restores the backup data. If the cluster to be restored does not have TiFlash, the `region is unavailable` error is reported.
Does BR support in-place full restoration of some historical backup?
No. BR does not support in-place full restoration of some historical backup.
How can I use BR for incremental backup in the Kubernetes environment?
To get the `commitTs` field of the last BR backup, run the `kubectl -n ${namespace} get bk ${name}` command using kubectl. You can use the value of this field as `--lastbackupts`.
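A minimal sketch, assuming the Backup custom resource exposes the commit timestamp at `.status.commitTs` (the field path is an assumption to verify against your TiDB Operator version):

```shell
# Read the commitTs of the last backup and reuse it for the next incremental backup.
LAST_TS=$(kubectl -n ${namespace} get bk ${name} -o jsonpath='{.status.commitTs}')
echo "pass --lastbackupts=${LAST_TS} to the next incremental backup"
```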
How can I convert BR backupTS to Unix time?
BR `backupTS` defaults to the latest timestamp obtained from PD before the backup starts. You can use `pd-ctl tso timestamp` to parse the timestamp and obtain an accurate value, or use `backupTS >> 18` to quickly obtain an estimated value.
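For example, a quick shell estimate based on the `backupTS >> 18` rule (the physical part of the TSO is in milliseconds; the timestamp below is a placeholder):

```shell
BACKUPTS=434008071651262464            # placeholder backupTS
UNIX_MS=$(( BACKUPTS >> 18 ))          # strip the 18 logical bits
date -d @$(( UNIX_MS / 1000 ))         # GNU date; on macOS: date -r $(( UNIX_MS / 1000 ))
```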
After BR restores the backup data, do I need to execute the `ANALYZE` statement on the table to update the statistics of TiDB on the tables and indexes?
BR does not back up statistics (except in v4.0.9). Therefore, after restoring the backup data, you need to manually execute `ANALYZE TABLE` or wait for TiDB to execute `ANALYZE` automatically.
In v4.0.9, BR backs up statistics by default, which consumes too much memory. To ensure that the backup process goes well, the backup for statistics is disabled by default starting from v4.0.10.
If you do not execute `ANALYZE` on the table, TiDB fails to select the optimal execution plan due to inaccurate statistics. If query performance is not a key concern, you can ignore `ANALYZE`.
Can I use multiple BR processes at the same time to restore the data of a single cluster?
It is strongly recommended not to use multiple BR processes at the same time to restore the data of a single cluster, for the following reasons:
- When BR restores data, it modifies some global configurations of PD. Therefore, if you use multiple BR processes for data restore at the same time, these configurations might be mistakenly overwritten and cause abnormal cluster status.
- BR consumes a lot of cluster resources to restore data, so in fact, running BR processes in parallel improves the restore speed only to a limited extent.
- Running multiple BR processes in parallel for data restore has not been tested, so it is not guaranteed to succeed.
What should I do if the backup log reports `key locked Error`?
Error message in the log: `["backup occur kv error"][error="{\"KvError\":{\"locked\":`
If a key is locked during the backup process, BR tries to resolve the lock. If this error occurs only occasionally, the correctness of the backup is not affected.
What should I do if a backup operation fails?
Error message in the log: `Error: msg:"Io(Custom { kind: AlreadyExists, error: \"[5_5359_42_123_default.sst] is already exists in /dir/backup_local/\" })"`
If a backup operation fails and the preceding message occurs, perform one of the following operations and then start the backup again:
- Change the directory for the backup. For example, change `/dir/backup_local/` to `/dir/backup-2020-01-01/`.
- Delete the backup directories of all TiKV nodes and BR nodes.
What should I do if the disk usage shown on the monitoring node is inconsistent after BR backup or restoration?
This inconsistency is caused by the fact that the data compression rate used in backup is different from the default rate used in restoration. If the checksum succeeds, you can ignore this issue.
Why does an error occur when I restore placement rules to a cluster?
Before v6.0.0, BR does not support placement rules. Starting from v6.0.0, BR supports placement rules and introduces a command-line option `--with-tidb-placement-mode=strict/ignore` to control the backup and restore mode of placement rules. With the default value `strict`, BR imports and validates placement rules, but ignores all placement rules when the value is `ignore`.
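A minimal sketch of ignoring placement rules during restore; the addresses and paths are examples:

```shell
# Restore data and discard the placement rules recorded in the backup.
br restore full \
    --pd "172.16.6.118:2379" \
    --storage "local:///mnt/backup" \
    --with-tidb-placement-mode=ignore
```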
Why does BR report a `new_collations_enabled_on_first_bootstrap` mismatch?
Since TiDB v6.0.0, the default value of `new_collations_enabled_on_first_bootstrap` has changed from `false` to `true`. BR backs up the `new_collations_enabled_on_first_bootstrap` configuration of the upstream cluster and then checks whether the value of this configuration is consistent between the upstream and downstream clusters. If the value is consistent, BR safely restores the data backed up in the upstream cluster to the downstream cluster. If the value is inconsistent, BR does not perform the data restore and reports an error.
Suppose that you have backed up data from a TiDB cluster earlier than v6.0.0 and want to restore this data to a TiDB cluster of v6.0.0 or a later version. In this situation, you need to manually check whether the value of `new_collations_enabled_on_first_bootstrap` is consistent between the upstream and downstream clusters:
- If the value is consistent, you can add `--check-requirements=false` to the restoration command to skip this configuration check, as shown in the sketch after this list.
- If the value is inconsistent and you forcibly perform the restoration, BR reports a data validation error.
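A minimal sketch of skipping the check after you have confirmed the values are consistent; the addresses and paths are examples:

```shell
# Skip the pre-restore requirement checks, including this collation check.
br restore full \
    --pd "172.16.6.118:2379" \
    --storage "local:///mnt/backup" \
    --check-requirements=false
```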