Flink SQL: insert distinct array values into an Iceberg table, then SELECT returns the same value for both array columns.
See original GitHub issue. Flink version: 1.14.4, Iceberg version: 0.13.1, tested in IDEA with a Hadoop catalog. Example:
env.executeSql(
s"""
|CREATE CATALOG `$catalog` WITH (
| 'type'='iceberg',
| 'catalog-type'='hadoop',
| 'warehouse'='$basePath',
| 'property-version'='1'
|)
|""".stripMargin
)
env.useCatalog(catalog)
env.useDatabase(database)
Create the table:
env.executeSql(
s"""
|CREATE TABLE IF NOT EXISTS `$catalog`.`$database`.`test`
|(
| `dateId` STRING COMMENT 'partition date',
| `tagGcode` ARRAY<STRING> COMMENT 'tag gcode list',
| `windowsTagIds` ARRAY<STRING> COMMENT 'window tag ID list',
| `windowTag` STRING COMMENT 'window tag',
| `create_time` TIMESTAMP COMMENT 'record creation time'
|) PARTITIONED BY (dateId)
|WITH (
| 'format-version' = '2',
| 'write.upsert.enabled' = 'false',
| 'write.distribution-mode' = 'hash',
| 'write.metadata.delete-after-commit.enabled' = 'true',
| 'write.parquet.row-group-size-bytes' = '134217728',
| 'write.target-file-size-bytes' = '536870912',
| 'history.expire.max-snapshot-age-ms' = '360000',
| 'write.metadata.previous-versions-max' = '5',
| 'write.parallelism' = '1'
|)
|""".stripMargin
)
Insert data:
env.executeSql(
s"""
|INSERT INTO `$catalog`.`$database`.`test` VALUES(
|'2022-05-31',
|ARRAY['gcode'],
|ARRAY['window_tag_id'],
|'window_tag',
|to_timestamp('2022-05-31 18:00:00')
|)
|""".stripMargin
)
Select the data:
env
.executeSql(
s"""
|select tagGcode, windowsTagIds
|from `$catalog`.`$database`.`test`
|where dateId >= '2022-05-29' and dateId < '2022-06-01'
|""".stripMargin).print()
Result:
+--------------------------------+--------------------------------+
| tagGcode | windowsTagIds |
+--------------------------------+--------------------------------+
| [gcode] | [gcode] |
+--------------------------------+--------------------------------+
Why does `windowsTagIds` come back with the value of `tagGcode`? Is this a bug?
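For what it's worth, one pattern that produces exactly this symptom is mutable-object reuse in a reader: if the same mutable array holder is handed back for every ARRAY column in a row, all columns end up aliasing one buffer and print the same value. This is only a hypothesis for the behavior above, not a confirmed diagnosis, and the `ReusingReader` below is a hypothetical stand-in rather than the actual Iceberg/Flink reader. A minimal Scala sketch of the pattern:

```scala
import scala.collection.mutable.ArrayBuffer

// Hypothetical reader that reuses one mutable buffer for every array
// column. This mirrors the object-reuse bug pattern in general, NOT the
// real Iceberg/Flink reader code.
final class ReusingReader {
  private val buffer = ArrayBuffer.empty[String]

  // Returns the SAME buffer instance on every call, so a result obtained
  // earlier is silently overwritten by a later read.
  def readArray(values: Seq[String]): Seq[String] = {
    buffer.clear()
    buffer ++= values
    buffer
  }
}

object ObjectReuseSketch {
  def main(args: Array[String]): Unit = {
    val reader        = new ReusingReader
    val tagGcode      = reader.readArray(Seq("gcode"))
    val windowsTagIds = reader.readArray(Seq("window_tag_id"))
    // Both names now alias the same buffer, so the two "columns" print
    // the same value -- the shape of the symptom reported above.
    println(s"tagGcode=$tagGcode, windowsTagIds=$windowsTagIds")
  }
}
```

The usual fix for this pattern is to copy the buffer before returning it (e.g. `buffer.toList`), so each column holds its own independent value.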
Issue Analytics
- Created a year ago
- Reactions: 2
- Comments: 7
Top GitHub Comments
I'm sorry, I can't reproduce this problem; it's most likely caused by the version of Flink we maintain internally, so I will close this issue. Thank you very much for your generous answer!
No worries! Thanks for closing the issue 👍