---
page_title: "tidbcloud_import Resource - terraform-provider-tidbcloud"
subcategory: ""
description: |-
  import resource
---

# tidbcloud_import (Resource)

Import resource. Creates an import task that loads data into a TiDB Cloud cluster, either from a local file or from Amazon S3.

## Example Usage

```terraform
terraform {
  required_providers {
    tidbcloud = {
      source = "tidbcloud/tidbcloud"
    }
  }
}

provider "tidbcloud" {
  public_key  = "fake_public_key"
  private_key = "fake_private_key"
}

resource "tidbcloud_import" "example_local" {
  project_id  = "fake_id"
  cluster_id  = "fake_id"
  type        = "LOCAL"
  data_format = "CSV"
  target_table = {
    schema = "test"
    table  = "t"
  }
  file_name = "fake_path"
}

resource "tidbcloud_import" "example_s3_csv" {
  project_id   = "fake_id"
  cluster_id   = "fake_id"
  type         = "S3"
  data_format  = "CSV"
  aws_role_arn = "fake_arn"
  source_url   = "fake_url"
}

resource "tidbcloud_import" "example_s3_parquet" {
  project_id   = "1369847559691367867"
  cluster_id   = "1373933076658240623"
  type         = "S3"
  data_format  = "Parquet"
  aws_role_arn = "fake_arn"
  source_url   = "fake_url"
}
```

## Schema

### Required

- `cluster_id` (String) The ID of your cluster.
- `data_format` (String) The format of the data to import. Enum: `"SqlFile"`, `"AuroraSnapshot"`, `"CSV"`, `"Parquet"`.
- `project_id` (String) The ID of the project. You can get the project ID from the `tidbcloud_projects` data source.
- `type` (String) The type of the data source. Enum: `"S3"`, `"LOCAL"`.

### Optional

- `aws_role_arn` (String) The ARN of the AWS IAM role used to access the source data. Used when `type` is `"S3"`.
- `csv_format` (Attributes) The CSV configuration (see below for nested schema).
- `file_name` (String) The path of the local file to import. Used when `type` is `"LOCAL"`.
- `source_url` (String) The S3 URL of the source data. Used when `type` is `"S3"`.
- `target_table` (Attributes) The target table of the import (see below for nested schema).

### Read-Only

- `all_completed_tables` (List of Object) Import task all completed tables (see below for nested schema).
- `completed_percent` (Number) Import task completed percent.
- `completed_tables` (Number) Import task completed tables.
- `created_at` (String) Import task create time.
- `elapsed_time_seconds` (Number) Import task elapsed time in seconds.
- `id` (String) The ID of the import.
- `message` (String) Import task message.
- `new_file_name` (String) The file name returned by generating the upload URL, used for importing from a local file.
- `pending_tables` (Number) Import task pending tables.
- `post_import_completed_percent` (Number) Import task post-import completed percent.
- `processed_source_data_size` (String) Import task processed source data size.
- `status` (String) Import task status.
- `total_files` (Number) Import task total files.
- `total_size` (String) Import task total size.
- `total_tables_count` (Number) Import task total tables count.
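
The read-only attributes can be referenced like any other resource attribute, for example to surface the task status. A minimal sketch, assuming the `example_s3_csv` resource from the examples above:

```terraform
# Expose the import task's status and progress after apply.
output "import_status" {
  value = tidbcloud_import.example_s3_csv.status
}

output "import_completed_percent" {
  value = tidbcloud_import.example_s3_csv.completed_percent
}
```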

### Nested Schema for `csv_format`

Optional:

- `backslash_escape` (Boolean) Whether to parse a backslash inside fields of the CSV file as an escape character (default `true`).
- `delimiter` (String) The delimiter used for quoting in the CSV file (default `"`).
- `header` (Boolean) Whether to treat the first row of the CSV file as a header (default `true`).
- `separator` (String) The field separator of the CSV file (default `","`).
- `trim_last_separator` (Boolean) Whether to treat the separator as the line terminator and trim all trailing separators in the CSV file (default `false`).
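
None of the examples above set `csv_format`, so here is a sketch of an S3 CSV import that overrides the defaults (placeholder IDs and URLs, following the style of the examples; the object-attribute syntax mirrors the `target_table` usage in the local-import example):

```terraform
resource "tidbcloud_import" "example_s3_csv_custom" {
  project_id   = "fake_id"
  cluster_id   = "fake_id"
  type         = "S3"
  data_format  = "CSV"
  aws_role_arn = "fake_arn"
  source_url   = "fake_url"
  csv_format = {
    separator           = "|"    # pipe-separated fields instead of the default ","
    delimiter           = "\""   # keep the default quoting character
    header              = true
    backslash_escape    = false  # treat backslashes literally
    trim_last_separator = false
  }
}
```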

### Nested Schema for `target_table`

Optional:

- `database` (String) The database of your cluster.
- `table` (String) The table of your cluster.

### Nested Schema for `all_completed_tables`

Read-Only:

- `message` (String)
- `result` (String)
- `table_name` (String)