Import Apache Parquet Files from Amazon S3 or GCS into TiDB Cloud
You can import both uncompressed and Snappy compressed Apache Parquet format data files to TiDB Cloud. This document describes how to import Parquet files from Amazon Simple Storage Service (Amazon S3) or Google Cloud Storage (GCS) into TiDB Cloud.
TiDB Cloud only supports importing Parquet files into empty tables. To import data into an existing table that already contains data, you can use TiDB Cloud to import the data into a temporary empty table by following this document, and then use the INSERT SELECT statement to copy the data to the target existing table.
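The copy step itself is a single SQL statement. The following is a minimal sketch, assuming Python with the PyMySQL package and hypothetical connection details and table names, of running that INSERT ... SELECT after the Parquet data has landed in the temporary table:

```python
# A hedged sketch: copy rows from the temporary import table into an existing
# table. Host, credentials, and table names are hypothetical placeholders.
import pymysql

conn = pymysql.connect(host="tidb.example.com", port=4000,
                       user="import_user", password="***",
                       database="mydb", ssl={"ca": "/path/to/ca.pem"})
try:
    with conn.cursor() as cur:
        # mytable_tmp is the empty table the Parquet data was imported into;
        # mytable is the existing target table that already contains data.
        cur.execute("INSERT INTO mytable SELECT * FROM mytable_tmp")
    conn.commit()
finally:
    conn.close()
```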
Step 1. Prepare the Parquet files
Currently, TiDB Cloud does not support importing Parquet files that contain any of the following data types. If Parquet files to be imported contain such data types, you need to first regenerate the Parquet files using the supported data types (for example, STRING), as shown in the sketch after this list. Alternatively, you could use a service such as AWS Glue to transform data types easily.

- LIST
- NEST STRUCT
- BOOL
- ARRAY
- MAP
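A minimal sketch of such a conversion, assuming Python with pyarrow installed and hypothetical file names: nested columns (LIST, STRUCT, MAP) are serialized to JSON strings and BOOL columns are stored as integers, which keeps the data within the supported types.

```python
# A hedged sketch: rewrite unsupported Parquet column types before import.
# "source.parquet" and "converted.parquet" are hypothetical file names.
import json

import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.parquet as pq

table = pq.read_table("source.parquet")

columns, fields = [], []
for field in table.schema:
    col = table.column(field.name)
    if pa.types.is_nested(field.type):
        # LIST/STRUCT/MAP are not supported: serialize each value to a JSON string.
        values = [None if v is None else json.dumps(v, default=str)
                  for v in col.to_pylist()]
        columns.append(pa.array(values, type=pa.string()))
        fields.append(pa.field(field.name, pa.string()))
    elif pa.types.is_boolean(field.type):
        # BOOL is not supported: store it as INT32 (0/1) instead.
        columns.append(pc.cast(col, pa.int32()))
        fields.append(pa.field(field.name, pa.int32()))
    else:
        columns.append(col)
        fields.append(field)

pq.write_table(pa.table(columns, schema=pa.schema(fields)),
               "converted.parquet", compression="snappy")
```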
If a Parquet file is larger than 256 MB, consider splitting it into smaller files, each with a size around 256 MB.
TiDB Cloud supports importing very large Parquet files but performs best with multiple input files around 256 MB in size. This is because TiDB Cloud can process multiple files in parallel, which can greatly improve the import speed.
Name the Parquet files as follows:
- If a Parquet file contains all data of an entire table, name the file in the ${db_name}.${table_name}.parquet format, which maps to the ${db_name}.${table_name} table when you import the data.
- If the data of one table is separated into multiple Parquet files, append a numeric suffix to these Parquet files. For example, ${db_name}.${table_name}.000001.parquet and ${db_name}.${table_name}.000002.parquet. The numeric suffixes can be inconsecutive but must be in ascending order. You also need to add extra zeros before the number to ensure all the suffixes are of the same length.

Note: If you cannot update the Parquet filenames according to the preceding rules in some cases (for example, the Parquet file links are also used by your other programs), you can keep the filenames unchanged and use the Custom Pattern in Step 4 to import your source data to a single target table.
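Putting the size and naming guidance together, the following is a minimal sketch, assuming Python with pyarrow and a hypothetical mydb.mytable source file, that splits one large Parquet file into suffixed parts (mydb.mytable.000001.parquet, mydb.mytable.000002.parquet, and so on). The row threshold is an assumption you would tune so that each output file lands around 256 MB.

```python
# A hedged sketch: split one large Parquet file into several smaller,
# consistently named files. File names and the row threshold are illustrative.
import pyarrow.parquet as pq

source = pq.ParquetFile("mydb.mytable.parquet")
max_rows_per_file = 1_000_000  # tune so each output file is roughly 256 MB

writer = None
part = 0
rows_in_part = 0
for batch in source.iter_batches(batch_size=100_000):
    if writer is None or rows_in_part >= max_rows_per_file:
        if writer is not None:
            writer.close()
        part += 1
        writer = pq.ParquetWriter(f"mydb.mytable.{part:06d}.parquet",
                                  source.schema_arrow, compression="snappy")
        rows_in_part = 0
    writer.write_batch(batch)
    rows_in_part += batch.num_rows

if writer is not None:
    writer.close()
```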
Step 2. Create the target table schemas
Because Parquet files do not contain schema information, before importing data from Parquet files into TiDB Cloud, you need to create the table schemas using either of the following methods:
Method 1: In TiDB Cloud, create the target databases and tables for your source data.
Method 2: In the Amazon S3 or GCS directory where the Parquet files are located, create the target table schema files for your source data as follows:
Create database schema files for your source data.
If your Parquet files follow the naming rules in Step 1, the database schema files are optional for the data import. Otherwise, the database schema files are mandatory.
Each database schema file must be in the ${db_name}-schema-create.sql format and contain a CREATE DATABASE DDL statement. With this file, TiDB Cloud will create the ${db_name} database to store your data when you import the data.

For example, if you create a mydb-schema-create.sql file that contains the following statement, TiDB Cloud will create the mydb database when you import the data.

CREATE DATABASE mydb;
Create table schema files for your source data.
If you do not include the table schema files in the Amazon S3 or GCS directory where the Parquet files are located, TiDB Cloud will not create the corresponding tables for you when you import the data.
Each table schema file must be in the ${db_name}.${table_name}-schema.sql format and contain a CREATE TABLE DDL statement. With this file, TiDB Cloud will create the ${table_name} table in the ${db_name} database when you import the data.

For example, if you create a mydb.mytable-schema.sql file that contains the following statement, TiDB Cloud will create the mytable table in the mydb database when you import the data.

CREATE TABLE mytable (
ID INT,
REGION VARCHAR(20),
COUNT INT );
Note: Each ${db_name}.${table_name}-schema.sql file should only contain a single DDL statement. If the file contains multiple DDL statements, only the first one takes effect.
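With this layout, the schema files and the Parquet files sit under the same Amazon S3 or GCS prefix. As a minimal sketch for the S3 case, assuming Python with boto3, an already configured AWS credential chain, and hypothetical bucket, prefix, and file names, you could upload them like this:

```python
# A hedged sketch: upload database/table schema files and Parquet data files
# into one S3 prefix so TiDB Cloud finds them together. Names are hypothetical.
import boto3

s3 = boto3.client("s3")
bucket = "my-import-bucket"
prefix = "tidb-import/"

for name in [
    "mydb-schema-create.sql",        # creates the mydb database
    "mydb.mytable-schema.sql",       # creates the mydb.mytable table
    "mydb.mytable.000001.parquet",   # data files
    "mydb.mytable.000002.parquet",
]:
    s3.upload_file(name, bucket, prefix + name)
```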
Step 3. Configure cross-account access
To allow TiDB Cloud to access the Parquet files in the Amazon S3 or GCS bucket, do one of the following:
If your Parquet files are located in Amazon S3, configure cross-account access to Amazon S3.
Once finished, make a note of the Role ARN value as you will need it in Step 4.
If your Parquet files are located in GCS, configure cross-account access to GCS.
Step 4. Import Parquet files to TiDB Cloud
To import the Parquet files to TiDB Cloud, take the following steps:
Navigate to the Clusters page.
Find the area of your target cluster and click Import Data in the upper-right corner of the area. The Data Import Task page is displayed.
Tip: Alternatively, you can click the name of your target cluster on the Clusters page and click Import Data in the upper-right corner.
On the Data Import Task page, provide the following information.
Data Source Type: select the type of the data source.
Bucket URL: select the bucket URL where your Parquet files are located.
Data Format: select Parquet.
Setup Credentials (This field is visible only for AWS S3): enter the Role ARN value for Role-ARN.
Target Cluster: fill in the Username and Password fields.
DB/Tables Filter: if you want to filter which tables to import, you can specify one or more table filters in this field, separated by commas (,).

For example:

- db01.*: all tables in the db01 database will be imported.
- db01.table01*,db01.table02*: all tables starting with table01 and table02 in the db01 database will be imported.
- !db02.*: all tables except those in the db02 database will be imported. ! is used to exclude tables that do not need to be imported.
- *.*: all tables will be imported.

For more information, see table filter syntax.
Custom Pattern: enable the Custom Pattern feature if you want to import Parquet files whose filenames match a certain pattern to a single target table.
Note: After enabling this feature, one import task can only import data to a single table at a time. If you want to use this feature to import data into different tables, you need to import several times, each time specifying a different target table.
When Custom Pattern is enabled, you are required to specify a custom mapping rule between Parquet files and a single target table in the following fields:
Object Name Pattern: enter a pattern that matches the names of the Parquet files to be imported. If you have only one Parquet file, you can enter the filename here directly.

For example:

- my-data?.parquet: all Parquet files starting with my-data followed by one character (such as my-data1.parquet and my-data2.parquet) will be imported into the same target table.
- my-data*.parquet: all Parquet files starting with my-data will be imported into the same target table.
Target Table Name: enter the name of the target table in TiDB Cloud, which must be in the ${db_name}.${table_name} format. For example, mydb.mytable. Note that this field only accepts one specific table name, so wildcards are not supported.
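To make the wildcard behavior in Object Name Pattern concrete, the following is a small illustration using Python's standard-library fnmatch; it only demonstrates how ? and * match filenames and is not TiDB Cloud's actual matching implementation.

```python
# Illustration only: "?" matches exactly one character, "*" matches any run
# of characters. fnmatch is a stand-in for the pattern matching behavior.
from fnmatch import fnmatch

files = ["my-data1.parquet", "my-data2.parquet", "my-data10.parquet"]

print([f for f in files if fnmatch(f, "my-data?.parquet")])
# ['my-data1.parquet', 'my-data2.parquet']

print([f for f in files if fnmatch(f, "my-data*.parquet")])
# ['my-data1.parquet', 'my-data2.parquet', 'my-data10.parquet']
```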
Click Import.
A warning message about the database resource consumption is displayed.
Click Confirm.
TiDB Cloud starts validating whether it can access your data in the specified bucket URL. After the validation is completed and successful, the import task starts automatically. If you get the AccessDenied error, see Troubleshoot Access Denied Errors during Data Import from S3.

When the import progress shows success, check the number after Total Files:.
If the number is zero, it means no data files matched the value you entered in the Object Name Pattern field. In this case, check whether there are any typos in the Object Name Pattern field and try again.
When running an import task, if any unsupported or invalid conversions are detected, TiDB Cloud terminates the import job automatically and reports an importing error.
If you get an importing error, do the following:
Drop the partially imported table.
Check the table schema file. If there are any errors, correct the table schema file.
Check the data types in the Parquet files.
If the Parquet files contain any unsupported data types (for example, NEST STRUCT, ARRAY, or MAP), you need to regenerate the Parquet files using supported data types (for example, STRING).

Try the import task again.
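To check the data types before retrying, a minimal sketch, assuming Python with pyarrow and a hypothetical file name, is to print the Parquet schema and look for nested (LIST, STRUCT, MAP) or BOOL columns:

```python
# A hedged sketch: print the Parquet schema so unsupported column types can
# be spotted before the import is retried. The file name is hypothetical.
import pyarrow.parquet as pq

schema = pq.read_schema("mydb.mytable.000001.parquet")
print(schema)  # review the listed types for nested or BOOL columns
```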
Supported data types
The following table lists the supported Parquet data types that can be imported to TiDB Cloud.
| Parquet Primitive Type | Parquet Logical Type | Types in TiDB or MySQL |
|---|---|---|
| DOUBLE | DOUBLE | DOUBLE, FLOAT |
| FIXED_LEN_BYTE_ARRAY(9) | DECIMAL(20,0) | BIGINT UNSIGNED |
| FIXED_LEN_BYTE_ARRAY(N) | DECIMAL(p,s) | DECIMAL, NUMERIC |
| INT32 | DECIMAL(p,s) | DECIMAL, NUMERIC |
| INT32 | N/A | INT, MEDIUMINT, YEAR |
| INT64 | DECIMAL(p,s) | DECIMAL, NUMERIC |
| INT64 | N/A | BIGINT, INT UNSIGNED, MEDIUMINT UNSIGNED |
| INT64 | TIMESTAMP_MICROS | DATETIME, TIMESTAMP |
| BYTE_ARRAY | N/A | BINARY, BIT, BLOB, CHAR, LINESTRING, LONGBLOB, MEDIUMBLOB, MULTILINESTRING, TINYBLOB, VARBINARY |
| BYTE_ARRAY | STRING | ENUM, DATE, DECIMAL, GEOMETRY, GEOMETRYCOLLECTION, JSON, LONGTEXT, MEDIUMTEXT, MULTIPOINT, MULTIPOLYGON, NUMERIC, POINT, POLYGON, SET, TEXT, TIME, TINYTEXT, VARCHAR |
| SMALLINT | N/A | INT32 |
| SMALLINT UNSIGNED | N/A | INT32 |
| TINYINT | N/A | INT32 |
| TINYINT UNSIGNED | N/A | INT32 |
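As a closing sketch, assuming Python with pyarrow and reusing the hypothetical mytable schema from Step 2, the following writes a Snappy-compressed Parquet file whose column types stay within the supported list above (INT32 for the INT columns, BYTE_ARRAY with the STRING logical type for the VARCHAR column):

```python
# A hedged sketch: produce a Parquet file that uses only supported Parquet
# types for the hypothetical mydb.mytable table.
import pyarrow as pa
import pyarrow.parquet as pq

schema = pa.schema([
    ("ID", pa.int32()),       # written as Parquet INT32 -> INT in TiDB
    ("REGION", pa.string()),  # written as BYTE_ARRAY / STRING -> VARCHAR in TiDB
    ("COUNT", pa.int32()),    # written as Parquet INT32 -> INT in TiDB
])

table = pa.Table.from_pydict(
    {"ID": [1, 2], "REGION": ["us-east", "eu-west"], "COUNT": [10, 20]},
    schema=schema,
)
pq.write_table(table, "mydb.mytable.parquet", compression="snappy")
```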