
Data import guide

Revision: 3.1.0-14594-df57d806

1 Introduction

1.1 Organization of this guide

This document explains how to import data into GridDB Cloud. Each chapter covers the following topics.

  • Introduction
    This section explains the structure and terminology used in this document.

  • Data Import Tool
    Describes the data import tool.

  • Data Import Workflow
    Explains the process of importing data to GridDB Cloud.

  • Creating the Target Container
    Describes how to create the container where data will be imported.

  • Creating the Input File
    Explains the input file used for data import.

  • Registering Data to a Single Container
    Describes how to register data to a single container.

  • Registering Data to Multiple Containers
    Explains how to register data to multiple containers.

  • Supported Data Types
    Describes the data types that can be imported using the data import tool covered in this document.

1.2 Terminology

This section explains the terminology used in this document.

  • curl command
    A command-line tool used to retrieve or send data by specifying a URL.

  • jq command
    A command-line tool used to format, extract, and transform JSON data.

  • Shell script
    A script file used in Linux environments to automate the execution of multiple commands.

  • PowerShell
    A command-line shell and scripting language available in Windows environments.

2 Data import tool

2.1 Operating environment

The data import tool can be executed in the following environments:

  • CPU: x64 processor, 2.0 GHz or higher (minimum)
  • Memory: 1.0 GB or more (minimum)
  • Disk: at least twice the size of the data to be imported (minimum)
  • OS:
    Windows 11 (64-bit)
    Red Hat Enterprise Linux 8.10 (64-bit)
    Red Hat Enterprise Linux 9.4 (64-bit)
    Ubuntu Server 24.04 (64-bit)
  • Shell environment:
    PowerShell 5.1 (pre-installed on Windows 11)
    bash (pre-installed in Linux environments)

2.2 Usage

2.2.1 Preparation

2.2.1.1 Downloading the Tool

[For both Windows and Linux versions]
To use this tool, download it from GridDB Cloud.
Navigate to the Support page in the Management GUI and click File Download.
The file named griddbCloudDataImport.zip included in the downloaded package is the tool.

When you extract the contents of griddbCloudDataImport.zip, the following files are included:

File Name Description
griddbCloudDataImport.bat Data import tool for Windows.
griddbCloudDataImport.ps1 Data import tool for Windows. Used together with griddbCloudDataImport.bat. Must be placed in the same directory as the .bat file.
griddbCloudDataImport.sh Data import tool for Linux.

[Linux version only]
Install the required tools to run this tool:

  • jq command
    Must be installed in advance using dnf or yum.
    (dnf install jq or yum install jq)

2.2.1.2 Deploying the Tool

Save the downloaded tool to any folder and extract it.

2.2.1.3 Setting Parameters

Open the extracted file in a text editor:

  • Windows version: griddbCloudDataImport.bat
  • Linux version: griddbCloudDataImport.sh

In the opened file, set the values for the following parameters:

  • Windows version: Lines 29 to 40
  • Linux version: Lines 31 to 44

◎ indicates a required setting, and ○ indicates an optional setting.

  • WEBAPI_URL (◎) Default: https://xxx-cloud.griddb.com/XXXXXX/griddb/v2/gs_clusterXXXX/dbs/XXXX/
    Set the URL of the Web API that this tool connects to.

  • GRIDDB_USER (◎) Default: user
    Set the GridDB username used to connect with this tool.

  • GRIDDB_PASS (◎) Default: password
    Set the password for the GridDB user used to connect with this tool.

  • PROXY_SERVER (○) Default: none
    Set this if you need to use a proxy server to connect to GridDB Cloud. Leave blank if not using a proxy.

  • SKIP_HEADER_ROWS (○) Default: 0
    Set the number of rows to skip when specifying a CSV file as the input file.

  • SPLIT_ROWS (○) Default: 10000
    Set the number of rows sent per request when specifying a CSV file as the input file. Adjust this parameter if the CSV file has a very large number of rows.

  • TEMP_FILE_PATH (◎, Linux version only) Default: none
    Specify the output location for temporary files. Temporary files are generated in the specified location during the import process.
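On Linux, the edited parameter block in griddbCloudDataImport.sh might look like the following sketch. All values are placeholders taken from the table above; substitute your own URL, credentials, and paths.

```shell
# Parameter block in griddbCloudDataImport.sh (placeholder values)
WEBAPI_URL="https://xxx-cloud.griddb.com/XXXXXX/griddb/v2/gs_clusterXXXX/dbs/XXXX/"
GRIDDB_USER="user"
GRIDDB_PASS="password"
PROXY_SERVER=""          # leave blank if no proxy is needed
SKIP_HEADER_ROWS=0       # number of CSV header rows to skip
SPLIT_ROWS=10000         # rows per request
TEMP_FILE_PATH="/tmp"    # Linux only: where temporary files are written
```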

2.2.2 Running the tool

[Windows]

  1. Launch Command Prompt.
  2. Navigate to the directory where you extracted the files in Deploying the Tool.
> cd {tool location}
  3. Execute the following command:
> griddbCloudDataImport.bat [target container name] [input file name]

If "Processing ends." is displayed on the console screen and the command prompt returns, the process has completed successfully.

[Linux]

  1. Launch the terminal.
  2. Navigate to the directory where you extracted the files in Deploying the Tool.
$ cd /{tool location}
  3. Execute the following command:
$ bash griddbCloudDataImport.sh [target container name] [input file name]

If "Processing ends." is displayed on the console screen and the terminal returns, the process has completed successfully.

2.2.3 Verifying imported data

2.2.3.1 Executing a query

This feature is available in both SQL and TQL modes.

Step 1: After selecting the database and container (TQL mode only), enter your query in the [QUERY EDITOR].
You can either use autocomplete (②) to quickly insert SQL statements or manually enter them.
The query editor displays line numbers (①).

Executing a query in SQL mode
Executing a query in TQL mode

Step 2: Click the enabled [Execute] button (①) to run the query.

Clicking the Execute button in SQL mode
Clicking the Execute button in TQL mode

For more details, refer to the Executing a query section in the "Management GUI Reference for GridDB Cloud."

2.3 Note

[Note]

  • If the value of "SPLIT_ROWS" is small and the input file contains many rows, a large number of temporary files may be written to the path specified in "TEMP_FILE_PATH".
  • Under the same conditions, a large amount of log data may be written to the log file.
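The number of requests (and, on Linux, temporary files) grows with the row count divided by SPLIT_ROWS. A quick way to estimate it before running the tool (a sketch, not part of the tool itself):

```shell
# Sketch: estimate how many Web API requests (and, on Linux, temporary
# files) an import will produce for a given row count and SPLIT_ROWS.
estimate_requests() {
  rows=$1
  split=$2
  # ceiling division: requests = ceil(rows / split)
  echo $(( (rows + split - 1) / split ))
}

estimate_requests 1000000 10000   # 1,000,000 rows at the default SPLIT_ROWS -> 100
```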

3 Data import process

4 Creating a target container

4.1 Creating via management GUI

4.1.1 Creating a container

To create a new container, click the [CREATE CONTAINER] button.

The [Create Container] dialog will appear. Enter the container details in the input fields.

There are two types of containers. Choose the type that suits your purpose:

  • Collection: A container for general data management. For instructions on creating a collection, refer to Creating a Collection.
  • Time Series Container: A container for managing time-series data, that is, observational data collected through repeated measurements over a period of time. For instructions on creating a time series container, refer to Creating a Time Series Container.

[Note]

  • By default, the container type is set to COLLECTION.
  • Column widths can be adjusted by dragging the vertical bar (|) in the header.

4.1.1.1 Creating a collection

In the [Create Container] dialog, the default container type COLLECTION is selected.

Step 1:

  • Once COLLECTION is chosen, the dialog is displayed as the image below. Here you can provide the affinity of data in the [Data Affinity] field (①)(optional). Data affinity is a function to increase the memory hit rate by arranging highly correlated data in the same block and localizing data access.
  • To add a new column to the container, click the [+] button (②).
  • You can define the name of the column by filling in the [Column Name] field (③). Then choose the type of that column from the [Type] drop-down list (④). You can see supported types in the table below.
  • You can choose the NOT NULL constraint for the column by checking the [Not Null] checkbox (⑤). The value in the column with the NOT NULL constraint cannot be empty. To define a row identifier, check the [Row Key] checkbox (⑥).
  • You can click the [Container name conditions] tab (⑦) to show/hide the conditions for the container name. You can also click the [Data affinity conditions] tab (⑧) to show/hide the conditions for data affinity.

[Note]: For more details on data affinity, refer to the section on data affinity in the "GridDB Features Reference."

Supported types:

  • Basic data types
    Boolean type: BOOL
    Character string: STRING
    Integer type: BYTE, SHORT, INTEGER, LONG
    Floating point type: FLOAT, DOUBLE
    Time type: TIMESTAMP, TIMESTAMP(3), TIMESTAMP(6), TIMESTAMP(9)
    Spatial type: GEOMETRY
    Object data type: BLOB
  • Hybrid data types
    Array type: BOOL_ARRAY, STRING_ARRAY, BYTE_ARRAY, SHORT_ARRAY, INTEGER_ARRAY, LONG_ARRAY, FLOAT_ARRAY, DOUBLE_ARRAY, TIMESTAMP_ARRAY

Step 2:

  • To delete the unwanted column, click the [DELETE] button (②).
  • After filling in all information, click the [CREATE] button (①) to create a container.
  • If you do not want to create the container, click the [CANCEL] button to close the dialog and return to the container list screen.

4.1.1.2 Creating a time series container

Step 1: In the [Create Container] dialog, choose [TIMESERIES].

Step 2:

  • Once TIMESERIES is chosen, the dialog is displayed as the image below. Here you can provide the affinity of data in the [Data Affinity] field (①)(optional). Data affinity is a function to increase the memory hit rate by arranging highly correlated data in the same block and localizing data access.
  • You can click the [Data affinity conditions] tab (②) to show/hide the conditions for data affinity.
  • To add a new column, click the [+] button (①).
  • By default, the first column of a time series container is set to the TIMESTAMP type.
  • You can define the name of the column by filling in the [Column Name] field (②). Choose the type of that column by clicking the type from the [Type] drop-down list (③).
  • You can choose the NOT NULL constraint for the column by checking the [Not Null] checkbox (④). The value in the column with the NOT NULL constraint cannot be empty. The [Row Key] checkbox (⑤) is only available to the first column (TIMESTAMP type); other columns cannot be set as a row key.

Step 3:

  • To delete the unwanted column, click the [DELETE] button (①).
  • After filling in all information, click the [CREATE] button (②) to create a container.
  • If you do not want to create the container, click the [CANCEL] button to close the dialog and return to the container list screen.

For more details, refer to the Creating a container section in the "Management GUI Reference for GridDB Cloud."

4.2 Creating via GridDB WebAPI

Example: Container Creation (bash script)

#!/bin/bash

WEBAPI_URL="https://xxx-cloud.griddb.com/XXXXXX/griddb/v2/gs_clusterXXXX/dbs/XXXX/"
GRIDDB_USER="user"
GRIDDB_PASS="password"

basic_auth=$(echo -n "${GRIDDB_USER}:${GRIDDB_PASS}" | base64)

curl -X POST -H "Content-Type: application/json" -H "Authorization:Basic ${basic_auth}" "${WEBAPI_URL}containers" -d"{\"container_name\":\"containerName\",\"container_type\":\"TIME_SERIES\",\"rowkey\":true,\"columns\":[{\"name\":\"a\",\"type\":\"TIMESTAMP\",\"timePrecision\":\"MILLISECOND\",\"index\":[]},{\"name\":\"b\",\"type\":\"LONG\",\"index\":[]},{\"name\":\"c\",\"type\":\"FLOAT\",\"index\":[]}]}"
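The Authorization header is simply "Basic" followed by the Base64 encoding of user:password. You can verify the value the script builds locally, using the placeholder credentials above:

```shell
# Reproduce the Authorization header value built by the script above,
# using the placeholder credentials "user" and "password".
basic_auth=$(printf '%s' "user:password" | base64)
echo "Authorization: Basic ${basic_auth}"
```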

Example: Container Creation (PowerShell Script)

$WEBAPI_URL="https://xxx-cloud.griddb.com/XXXXXX/griddb/v2/gs_clusterXXXX/dbs/XXXX/"
$GRIDDB_USER="user"
$GRIDDB_PASS="password"

$basic_auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${GRIDDB_USER}:${GRIDDB_PASS}"))

$griddb_webapi_url = $WEBAPI_URL.TrimEnd('/')
$url = "${griddb_webapi_url}/containers"

$headers = @{
  "Content-Type" = "application/json; charset=utf-8"
  "Authorization" = "Basic $basic_auth"
  "Accept" = "*/*"
}

Invoke-RestMethod -Uri $url -Method Post -Body '{"container_name":"containerName","container_type":"TIME_SERIES","rowkey":true,"columns":[{"name":"a","type":"TIMESTAMP","timePrecision":"MILLISECOND","index":[]},{"name":"b","type":"LONG","index":[]},{"name":"c","type":"FLOAT","index":[]}]}' -Headers $headers

For more details, refer to the Container creation section in the "GridDB WebAPI Reference."

5 Creating an input file

Please refer to Supported Data Types for the format of data to be imported.

[Note]: Please make sure the input file is created in UTF-8 format.

5.1 Creating a JSON file

Each input file can contain only one target element (one array of rows), and each file is imported into a single container.

Example: When importing the contents of the specified JSON file as-is:

[
  ["2016-01-16T10:25:00.253Z", 100.5, "normal"],
  ["2016-01-16T10:35:00.691Z", 173.9, "normal"],
  ["2016-01-16T10:45:00.032Z", 173.9, null]
]

Example: When importing the value of the "results" key from the specified JSON file:

{
...
 "results":[
   ["2016-01-16T10:25:00.253Z", 100.5, "normal"],
   ["2016-01-16T10:35:00.691Z", 173.9, "normal"],
   ["2016-01-16T10:45:00.032Z", 173.9, null]
 ]
...
}

Example: When importing the value of the "data" key from the specified JSON file:

{
...
 "data":[
   ["2016-01-16T10:25:00.253Z", 100.5, "normal"],
   ["2016-01-16T10:35:00.691Z", 173.9, "normal"],
   ["2016-01-16T10:45:00.032Z", 173.9, null]
 ]
...
}

Example: When importing the value of the "rows" key from the specified JSON file:

{
...
 "rows":[
   ["2016-01-16T10:25:00.253Z", 100.5, "normal"],
   ["2016-01-16T10:35:00.691Z", 173.9, "normal"],
   ["2016-01-16T10:45:00.032Z", 173.9, null]
 ]
...
}

Example: When importing the value of the "row" key from the specified JSON file:

{
...
 "row":[
   ["2016-01-16T10:25:00.253Z", 100.5, "normal"]
 ]
...
}

Examples of JSON files that cannot be imported are as follows:

Example: When multiple target keys are specified within the designated JSON file

{
...
 "results":[
   ["2016-01-16T10:25:00.253Z", 100.5, "normal"],
   ["2016-01-16T10:35:00.691Z", 173.9, "normal"]
 ],
 "data":[
   ["2016-01-16T10:25:00.253Z", 100.5, "normal"],
   ["2016-01-16T10:35:00.691Z", 173.9, "normal"]
 ],
 "rows":[
   ["2016-01-16T10:25:00.253Z", 100.5, "normal"],
   ["2016-01-16T10:35:00.691Z", 173.9, "normal"]
 ]
...
}

Example: When the same target key is specified multiple times within the designated JSON file

{
...
 "results":[
   ["2016-01-16T10:25:00.253Z", 100.5, "normal"],
   ["2016-01-16T10:35:00.691Z", 173.9, "normal"]
 ],
 "results":[
   ["2016-01-16T10:25:00.253Z", 100.5, "normal"],
   ["2016-01-16T10:35:00.691Z", 173.9, "normal"]
 ],
 "results":[
   ["2016-01-16T10:25:00.253Z", 100.5, "normal"],
   ["2016-01-16T10:35:00.691Z", 173.9, "normal"]
 ]
...
}

Example: When the number or data type of values for the target key within the designated JSON file is not consistent

{
...
 "results":[
   ["2016-01-16T10:25:00.253Z", 100.5, "normal"],
   ["2016-01-16T10:35:00.691Z", 173.9, "normal", "plusA"],
   ["2016-01-16T10:45:00.032Z", 173.9, null]
 ]
...
}
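Since jq is a prerequisite on Linux, it can also be used to sanity-check an input file before import. This sketch writes a sample file (contents are hypothetical, the file name input.json is an assumption) and verifies that every row under the target key has the same number of values, which catches the inconsistency shown in the last example above:

```shell
# A sample file standing in for your input (contents are hypothetical)
cat > input.json <<'EOF'
{
 "results":[
   ["2016-01-16T10:25:00.253Z", 100.5, "normal"],
   ["2016-01-16T10:35:00.691Z", 173.9, "normal"]
 ]
}
EOF

# Extract the target key and verify that every row has the same number
# of values; prints "true" (exit status 0) when the file is consistent.
jq -e '.results | map(length) | unique | length == 1' input.json
```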

5.2 Creating a CSV file

Adjust the import using the "SKIP_HEADER_ROWS" parameter (see the Setting Parameters section of the data import tool) to match the contents of the CSV file.

If the CSV file contains only values:

"Value","Value","Value",..(number of column definitions)
"Value","Value","Value",..(number of column definitions)
  :

Set SKIP_HEADER_ROWS=0 to import all rows as data values.

If the CSV file includes column header information:

"Column Name","Column Name","Column Name",... (number of column definitions)
"Column Type","Column Type","Column Type",... (number of column definitions)
"value","value","value",... (number of column definitions)
"value","value","value",... (number of column definitions)
   :

Set SKIP_HEADER_ROWS=2 to skip the header rows and import only the data from the third row onward.
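SKIP_HEADER_ROWS=2 corresponds to what `tail -n +3` would display, so previewing the file this way is a quick check that the setting matches the layout. The file contents below are illustrative:

```shell
# A CSV with the two-row header layout shown above (illustrative values)
cat > sample.csv <<'EOF'
"id","name","score"
"INTEGER","STRING","DOUBLE"
"1","alpha","0.5"
"2","beta","1.5"
EOF

# With SKIP_HEADER_ROWS=2, the tool registers exactly the rows that
# tail -n +3 prints (everything from the third line onward):
tail -n +3 sample.csv
```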

5.3 Creating files from registered data in GridDB Enterprise Edition

The following explains how to create a file from data registered in GridDB Enterprise Edition (hereafter GridDB).

5.3.1 Using gs_export

$ gs_export --container c001 -u admin/admin 

Output Row File (CSV)

"#(Timestamp Information)(space)GridDB Release Version"
"#User:(Username)"
"%","Metadata file name"
"$","Database name.Container name"
"value","value","value",... (number of column definitions)
"value","value","value",... (number of column definitions)
   :

Example: File exported by gs_export

"#2025-01-01T00:00:00.000+0000  GridDB V5.X.00"
"#User:admin"
"%","public.containerName_properties.json"
"$","public.containerName"
"10000","AAAAAAAA01","AAAAAAAA01","1.0","2022-10-01T15:00:00.000Z"
"10001","BBBBBBBB02","BBBBBBBB02","1.1","2022-10-01T15:00:01.000Z"
"10002","CCCCCCCC03","CCCCCCCC03","1.2","2022-10-01T15:00:02.000Z"

Since the actual data starts from the 5th row, the first 4 rows should be skipped as header rows. Set SKIP_HEADER_ROWS=4 to import only the data rows.
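Because the number of metadata rows in a gs_export file can vary, it can help to count the leading rows whose quoted value starts with #, % or $ and use that count for SKIP_HEADER_ROWS. A sketch, using the example export file from above (the file name export.csv is an assumption):

```shell
# The example export file from above (written here for demonstration)
cat > export.csv <<'EOF'
"#2025-01-01T00:00:00.000+0000  GridDB V5.X.00"
"#User:admin"
"%","public.containerName_properties.json"
"$","public.containerName"
"10000","AAAAAAAA01","AAAAAAAA01","1.0","2022-10-01T15:00:00.000Z"
EOF

# Count the leading metadata rows (quoted values starting with #, % or $)
# and stop at the first data row; the count is the SKIP_HEADER_ROWS value.
awk '/^"[#%$]/ { n++; next } { exit } END { print n }' export.csv
```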

For more details, refer to the Export function section in the "GridDB Operation Tools Reference."

5.3.2 Using gs_sh

$ gs_sh
> select * from container_name; <- Retrieving container data using SQL or TQL
> getcsv CSV_file_name [Number of records to retrieve]
"Column Name","Column Name","Column Name",... (number of column definitions)  
"$",  
"value","value","value",... (number of column definitions)  
"value","value","value",... (number of column definitions)  
   :

Example: File exported by gs_sh

"#id","value01","value02","value03double","value04time"
"$",
"10000","AAAAAAAA01","AAAAAAAA01","1.0","2022-10-01T15:00:00.000Z"
"10001","BBBBBBBB02","BBBBBBBB02","1.1","2022-10-01T15:00:01.000Z"
"10002","CCCCCCCC03","CCCCCCCC03","1.2","2022-10-01T15:00:02.000Z"

Since the actual data starts from the 3rd row, the first 2 rows should be skipped as header rows. Set SKIP_HEADER_ROWS=2 to import only the data rows.

For more details, refer to the Getting search results section in the "GridDB Operation Tools Reference."

5.3.3 Using GridDB WebAPI

Example: Row Retrieval from a Single Container (bash script)

#!/bin/bash

WEBAPI_URL="https://xxx-cloud.griddb.com/XXXXXX/griddb/v2/gs_clusterXXXX/dbs/XXXX/"
GRIDDB_USER="user"
GRIDDB_PASS="password"

basic_auth=$(echo -n "${GRIDDB_USER}:${GRIDDB_PASS}" | base64)

curl -X POST -H "Content-Type: application/json" -H "Authorization:Basic ${basic_auth}" "${WEBAPI_URL}containers/containerName/rows" -d"{\"limit\":1000}" > output.json

Example: Row Retrieval from a Single Container (PowerShell Script)

$WEBAPI_URL="https://xxx-cloud.griddb.com/XXXXXX/griddb/v2/gs_clusterXXXX/dbs/XXXX/"
$GRIDDB_USER="user"
$GRIDDB_PASS="password"

$basic_auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${GRIDDB_USER}:${GRIDDB_PASS}"))

$griddb_webapi_url = $WEBAPI_URL.TrimEnd('/')
$url = "${griddb_webapi_url}/containers/containerName/rows"

$headers = @{
  "Content-Type" = "application/json; charset=utf-8"
  "Authorization" = "Basic $basic_auth"
  "Accept" = "*/*"
}

$response = Invoke-RestMethod -Uri $url -Method Post -Body '{"limit":1000}' -Headers $headers
$response | ConvertTo-Json | Out-File -FilePath "output.json" -Encoding UTF8

For more details, refer to the Row acquisition from a single container section in the "GridDB WebAPI Reference."

Example: Executing an SQL DML SELECT Statement (bash script)

#!/bin/bash

WEBAPI_URL="https://xxx-cloud.griddb.com/XXXXXX/griddb/v2/gs_clusterXXXX/dbs/XXXX/"
GRIDDB_USER="user"
GRIDDB_PASS="password"

basic_auth=$(echo -n "${GRIDDB_USER}:${GRIDDB_PASS}" | base64)

curl -X POST -H "Content-Type: application/json" -H "Authorization:Basic ${basic_auth}" "${WEBAPI_URL}sql/dml/query" -d"[{\"stmt\":\"select * from containerName\"}]" > output.json

Example: Executing an SQL DML SELECT Statement (PowerShell script)

$WEBAPI_URL="https://xxx-cloud.griddb.com/XXXXXX/griddb/v2/gs_clusterXXXX/dbs/XXXX/"
$GRIDDB_USER="user"
$GRIDDB_PASS="password"

$basic_auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${GRIDDB_USER}:${GRIDDB_PASS}"))

$griddb_webapi_url = $WEBAPI_URL.TrimEnd('/')
$url = "${griddb_webapi_url}/sql/dml/query"

$headers = @{
  "Content-Type" = "application/json; charset=utf-8"
  "Authorization" = "Basic $basic_auth"
  "Accept" = "*/*"
}

$response = Invoke-RestMethod -Uri $url -Method Post -Body '[{"stmt":"select * from containerName"}]' -Headers $headers
$response | ConvertTo-Json | Out-File -FilePath "output.json" -Encoding UTF8

For more details, refer to the SQL DML SELECT execution section in the "GridDB WebAPI Reference."

The retrieved file will be used as the input file for the data import tool.

5.4 Retrieving files using the GridDB Cloud data export tool

Just like with the data import tool, obtain the tool and configure the parameters. Additionally, for Windows environments, install the curl and jq commands to set up the environment.

For more details, refer to the Data export tool guide.

Run the data export tool as shown in the example below to retrieve the result file.

Example: Retrieving Result File Using the Data Export Tool

> griddbCloudDataExport.bat
or
$ bash griddbCloudDataExport.sh

Use the retrieved result file ({execution_timestamp}_sql_result_1.csv) as the input file for the data import tool.

"#id","value01","value02","value03double","value04time"
"10000","AAAAAAAA01","AAAAAAAA01","1.0","2022-10-01T15:00:00.000Z"

Since the actual data starts from the second row, skip the first row as a header by setting: SKIP_HEADER_ROWS=1

6 Row registration in a single container

For Windows:

> griddbCloudDataImport.bat [Target container name] [Input file name]

For Linux:

$ bash griddbCloudDataImport.sh [Target container name] [Input file name]
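Before invoking the tool, the checks it performs on its arguments (see List of Error Messages) can be reproduced in a small pre-flight function. This is a sketch, not part of the tool itself; the container and file names are hypothetical:

```shell
# Hypothetical pre-flight check mirroring the tool's own argument
# validation (argument count, then input file existence).
check_import_args() {
  if [ "$#" -ne 2 ]; then
    echo "Usage: griddbCloudDataImport.sh [Target container name] [Input file name]" >&2
    return 1
  fi
  if [ ! -f "$2" ]; then
    echo "$2 does not exist." >&2
    return 1
  fi
}

touch rows.csv                                    # hypothetical input file
check_import_args device01 rows.csv && echo OK
```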

7 Row registration in multiple containers

Example: Creating Multiple Target Containers Using GridDB WebAPI (Bash Script)

#!/bin/bash

WEBAPI_URL="https://xxx-cloud.griddb.com/XXXXXX/griddb/v2/gs_clusterXXXX/dbs/XXXX/"
GRIDDB_USER="user"
GRIDDB_PASS="password"

basic_auth=$(echo -n "${GRIDDB_USER}:${GRIDDB_PASS}" | base64)

for i in $(seq 10)
do
    curl -X POST -H "Content-Type: application/json" -H "Authorization:Basic ${basic_auth}" "${WEBAPI_URL}containers" -d"{\"container_name\":\"LtestCSV$i\",\"container_type\":\"TIME_SERIES\",\"rowkey\":true,\"columns\":[{\"name\":\"a\",\"type\":\"TIMESTAMP\",\"timePrecision\":\"MILLISECOND\",\"index\":[]},{\"name\":\"b\",\"type\":\"LONG\",\"index\":[]},{\"name\":\"c\",\"type\":\"FLOAT\",\"index\":[]}]}"
done

Example: Creating Multiple Target Containers Using GridDB WebAPI (PowerShell Script)

$WEBAPI_URL="https://xxx-cloud.griddb.com/XXXXXX/griddb/v2/gs_clusterXXXX/dbs/XXXX/"
$GRIDDB_USER="user"
$GRIDDB_PASS="password"

$basic_auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${GRIDDB_USER}:${GRIDDB_PASS}"))

$griddb_webapi_url = $WEBAPI_URL.TrimEnd('/')
$url = "${griddb_webapi_url}/containers"

$headers = @{
  "Content-Type" = "application/json; charset=utf-8"
  "Authorization" = "Basic $basic_auth"
  "Accept" = "*/*"
}

for ($i = 1; $i -le 10; $i++) {
  Invoke-RestMethod -Uri $url -Method Post -Body "{`"container_name`":`"containerName$i`",`"container_type`":`"TIME_SERIES`",`"rowkey`":true,`"columns`":[{`"name`":`"a`",`"type`":`"TIMESTAMP`",`"timePrecision`":`"MILLISECOND`",`"index`":[]},{`"name`":`"b`",`"type`":`"LONG`",`"index`":[]},{`"name`":`"c`",`"type`":`"FLOAT`",`"index`":[]}]}" -Headers $headers
}

Example: Importing Data to Multiple Containers Using the Data Import Tool (bash script)

#!/bin/bash

for i in $(seq 10)
do
    bash griddbCloudDataImport.sh "containerName$i" inputfile$i.csv
done
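When looping over many containers, a missing input file would otherwise surface only as a tool error mid-run. A variant of the loop above that skips missing files first; echo stands in for the actual griddbCloudDataImport.sh call, and the file and container names are the same placeholders used above:

```shell
#!/bin/bash

# Variant of the import loop that skips containers whose input file is
# missing; echo stands in for the actual griddbCloudDataImport.sh call.
for i in $(seq 10); do
    file="inputfile$i.csv"
    if [ -f "$file" ]; then
        # In practice: bash griddbCloudDataImport.sh "containerName$i" "$file"
        echo "importing containerName$i from $file"
    else
        echo "skipping containerName$i: $file not found" >&2
    fi
done
```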

Example: Importing Data to Multiple Containers Using the Data Import Tool (Powershell Script)

$maxCount = 10
$batFilePath = Join-Path (Get-Location).Path "griddbCloudDataImport.bat"
for ($i = 1; $i -le $maxCount; $i++) {
    $arg = "containerName$i"
    $file = "inputfile$i.csv"
    Start-Process -FilePath $batFilePath -ArgumentList @($arg, $file) -Wait
}

8 Supported Data Types

This section explains the data types that can be imported using the data import tool.

  • Primitive types
    Boolean type (BOOL): JSON Boolean value (true or false). Example: true
    String type (STRING): JSON string. Example: "GridDB"
    Integer type (BYTE/SHORT/INTEGER/LONG): JSON number. Example: 512
    Floating point type (FLOAT/DOUBLE): JSON number. Example: 593.5
    Date and time type (TIMESTAMP): JSON string in UTC, format YYYY-MM-DDThh:mm:ss.SSSZ. Example: "2016-01-16T10:25:00.253Z"
    Spatial type (GEOMETRY): JSON string (WKT representation). Example: POLYGON((0 0,10 0,10 10,0 10,0 0))
  • Array types
    Boolean type (BOOL): Array of Boolean values. Example: [true, false, true]
    String type (STRING): Array of string values. Example: ["A","B","C"]
    Integer type (BYTE/SHORT/INTEGER/LONG): Array of numbers. Example: [100, 39, 535]
    Floating point type (FLOAT/DOUBLE): Array of numbers. Example: [3.52, 6.94, 1.83]
    Date and time type (TIMESTAMP): Array of string values in the same format as the primitive date and time type. Example: ["2016-01-16T10:25:00.253Z", "2016-01-17T01:42:53.038Z"]

Data types other than those listed above cannot be imported.
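As an illustration of the table above, a valid input file for a container whose columns are TIMESTAMP, DOUBLE, STRING, and a BOOL array could look like the following. The schema, values, and file name typed_rows.json are hypothetical; jq (the Linux prerequisite) can confirm the file parses:

```shell
# A hypothetical input file for a container whose columns are
# TIMESTAMP, DOUBLE, STRING, and BOOL_ARRAY:
cat > typed_rows.json <<'EOF'
[
  ["2016-01-16T10:25:00.253Z", 593.5, "GridDB", [true, false, true]],
  ["2016-01-16T10:35:00.691Z", 100.5, "normal", [false, false, true]]
]
EOF

# Confirm the file is valid JSON and holds two rows; prints "true".
jq -e 'length == 2' typed_rows.json
```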

9 List of Error Messages

This section explains the main error messages and their corresponding solutions.

  • Usage: griddbCloudDataImport.sh [Target container name] [Input file name]
    Check that the [Target container name] and [Input file name] arguments are specified.
  • Please specify a non-negative integer for [SKIP_HEADER_ROWS]
    Check that the parameter "SKIP_HEADER_ROWS" is correctly set.
  • Please specify a positive integer greater than 0 for [SPLIT_ROWS]
    Check that the parameter "SPLIT_ROWS" is correctly set.
  • [Input file name] does not exist.
    Check that the specified [Input file name] exists.
  • [Input file name] has an incorrect JSON format.
    Check that the JSON format of the specified [Input file name] is correct.
  • WebAPI call failed.
    Check that the parameter "WEBAPI_URL" is correctly set.
  • Row data is invalid (response code 400)
    Check that the input file and the "SKIP_HEADER_ROWS" parameter are correctly set.
  • Rows data is empty (response code 400)
    Check that the input file and the "SKIP_HEADER_ROWS" parameter are correctly set.
  • TXN_AUTH_FAILED (response code 401)
    Check that the parameters "WEBAPI_URL", "GRIDDB_USER", and "GRIDDB_PASS" are correctly set.
  • Container not existed (response code 404)
    Check that the specified [Target container name] exists.

If other error messages are displayed, please review the input file and adjust the parameter settings such as "SKIP_HEADER_ROWS" and "SPLIT_ROWS" in the Setting Parameters section.