Failed to query Iceberg TIMESTAMP type
See original GitHub issue

| Name | Version | Description |
|---|---|---|
| flink | 1.3.2 | open-source release |
| cdh | 6.3.2 | open-source release |
| hive | 2.1.1-cdh6.3.2 | bundled with CDH |
| hadoop | 3.0.0-cdh6.3.2 | CDH-native release |
| presto | 2.591 | open-source release |
| trino | 360 | open-source release |
| Iceberg | 0.13 | built from the master branch |
- Flink create table
CREATE TABLE ic17 (
  tinyint0 INT
  , smallint1 SMALLINT
  , int2 INT
  , bigint3 BIGINT
  , float4 FLOAT
  , double5 DOUBLE
  , decimal6 DECIMAL(12,3)
  , boolean7 BOOLEAN
  , char8 CHAR(64) PRIMARY KEY NOT ENFORCED
  , varchar9 VARCHAR(64)
  , string10 STRING
  , timestamp11 TIMESTAMP(3)
)
PARTITIONED BY (tinyint0)
WITH (
  'connector'='iceberg' -- multiple primary key / partition columns are comma-separated
  , 'format-version' = '1' -- Iceberg table format version, 1 or 2
  , 'engine.hive.enabled' = 'true' -- enable Hive sync
  , 'catalog-name'='hive_catalog' -- the catalog to use
  , 'catalog-database'='iceberg' -- the Hive database to use
  , 'uri'='thrift://cdh2:9083' -- Hive Metastore URI(s), comma-separated
);
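The WITH options above reference a Hive catalog by name, so the catalog itself has to be registered in the Flink SQL client before the DDL runs. A minimal sketch following the Iceberg Flink catalog syntax; the warehouse path here is a hypothetical placeholder:

CREATE CATALOG hive_catalog WITH (
  'type'='iceberg'
  , 'catalog-type'='hive'
  , 'uri'='thrift://cdh2:9083'
  , 'warehouse'='hdfs://cdh2:8020/user/hive/warehouse' -- hypothetical warehouse path
);
USE CATALOG hive_catalog;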
- Flink insert
INSERT INTO ic17 VALUES(
cast(1218 as INT)
, cast(295 as SMALLINT)
, cast(-210121792 as INT)
, cast(-3697946268377828253 as BIGINT)
, cast(1.123456789111111 as FLOAT)
, cast(1111111.123411 as DOUBLE)
, cast(1111.1234111 as DECIMAL(12, 3))
, cast(123123123123 as BOOLEAN)
, cast('`[s1tX213ysdasdasdgfq3wqwdqwqd速度速度pGPYl`AggMaHNRJv\[CkIYzcgMlmVvLSjtYmnlBEcwH^kEgDSxGIwGNLDP' as CHAR(64))
, cast('daQOIE[n_eJsYLBJLttyFHnBXiCoT`RWeCO\G[JZZTdFFnFZFCODoI`X[SbMVAjq' as VARCHAR(64))
, cast('e1916697-e626-4446-bd18-0142bfb9417b' as STRING)
, cast('2021-09-13 03:08:50.810' as TIMESTAMP(3))
);
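Before involving the other engines, the write can be checked from Flink itself. A minimal read-back sketch, assuming the job is run in the SQL client in batch mode:

SET 'execution.runtime-mode' = 'batch'; -- assumption: read as a batch job
SELECT timestamp11 FROM ic17;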
- Presto query
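A minimal Presto reproduction, assuming the Iceberg connector catalog is named `iceberg` (tables are addressed as catalog.schema.table, with the schema being the Hive database from the DDL above):

SELECT timestamp11 FROM iceberg.iceberg.ic17;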
- Trino query
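The equivalent Trino query, under the same assumption about the catalog name:

SELECT timestamp11 FROM iceberg.iceberg.ic17;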
- Hive query
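A sketch of the Hive side of the reproduction, assuming the iceberg-hive-runtime jar is available to Hive (the jar path is a hypothetical placeholder):

ADD JAR /path/to/iceberg-hive-runtime.jar; -- hypothetical path; only needed if the jar is not already deployed
SELECT timestamp11 FROM iceberg.ic17;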
Issue Analytics
- Created 2 years ago
- Comments: 6 (4 by maintainers)
Top GitHub Comments
#17190
Hi @Sudeepam97, I’m looking at the issue and plan to send an initial PR to support the feature soon. Cc @beinan