Routine Load from Kafka: partition column reads as empty


After creating a new table and writing data into Kafka, the Routine Load job fails, reporting that the partition column is empty. The CREATE TABLE and ROUTINE LOAD syntax both look fine to me — could someone help take a look?
Table DDL:
CREATE TABLE tale_data (
province_name varchar(64) NOT NULL DEFAULT "" COMMENT 'province name',
main_data varchar(64) NOT NULL DEFAULT "" COMMENT 'main port number',
sub_data varchar(64) NOT NULL DEFAULT "" COMMENT 'sub port number',
create_time datetime NOT NULL COMMENT 'creation time',
id bigint NOT NULL AUTO_INCREMENT(1)
) ENGINE=OLAP
DUPLICATE KEY(province_name, main_data, sub_data)
COMMENT 'Deletion records for unregistered sub-ports that have sent messages'
PARTITION BY LIST(province_name)
(PARTITION Shanghai VALUES IN ("上海"),
PARTITION Yunnan VALUES IN ("云南"),
PARTITION Inner_Mongolia VALUES IN ("内蒙古"),
PARTITION Beijing VALUES IN ("北京"),
PARTITION Jilin VALUES IN ("吉林"),
PARTITION Sichuan VALUES IN ("四川"),
PARTITION Tianjin VALUES IN ("天津"),
PARTITION Ningxia VALUES IN ("宁夏"),
PARTITION Anhui VALUES IN ("安徽"),
PARTITION Shandong VALUES IN ("山东"),
PARTITION Shanxi VALUES IN ("山西"),
PARTITION Guangdong VALUES IN ("广东"),
PARTITION Guangxi VALUES IN ("广西"),
PARTITION Xinjiang VALUES IN ("新疆"),
PARTITION Jiangsu VALUES IN ("江苏"),
PARTITION Jiangxi VALUES IN ("江西"),
PARTITION Hebei VALUES IN ("河北"),
PARTITION Henan VALUES IN ("河南"),
PARTITION Zhejiang VALUES IN ("浙江"),
PARTITION Hainan VALUES IN ("海南"),
PARTITION Hubei VALUES IN ("湖北"),
PARTITION Hunan VALUES IN ("湖南"),
PARTITION Gansu VALUES IN ("甘肃"),
PARTITION Fujian VALUES IN ("福建"),
PARTITION Tibet VALUES IN ("西藏"),
PARTITION Guizhou VALUES IN ("贵州"),
PARTITION Liaoning VALUES IN ("辽宁"),
PARTITION Chongqing VALUES IN ("重庆"),
PARTITION Shaanxi VALUES IN ("陕西"),
PARTITION Qinghai VALUES IN ("青海"),
PARTITION Heilongjiang VALUES IN ("黑龙江"))
DISTRIBUTED BY HASH(sub_data) BUCKETS AUTO
PROPERTIES (
"replication_allocation" = "tag.location.default: 3",
"light_schema_change" = "true",
"disable_auto_compaction" = "false",
"enable_mow_light_delete" = "false"
);

Routine Load statement (Kafka connection info omitted):
CREATE ROUTINE LOAD test.tale_data_job ON tale_data
COLUMNS(province_name, main_data, sub_data, create_time)
PROPERTIES
(
"desired_concurrent_number"="5",
"max_batch_interval" = "20",
"max_batch_rows" = "300000",
"max_batch_size" = "209715200",
"strict_mode" = "false",
"format" = "json",
"jsonpaths" = "["$.provinceName","$.main_data","$.sub_data", "$.createTime"]"
);
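Note that the inner quotes of the `jsonpaths` value must be escaped (`\"`), because the whole value is itself a SQL string — and that value must parse as a JSON array whose entries map positionally onto the `COLUMNS` list. A quick pre-flight check (a hypothetical helper, not part of the original post) can catch a malformed `jsonpaths` string before the job is even submitted:

```python
import json

# The text inside "jsonpaths" = "..." (after SQL unescaping) must be a JSON array.
jsonpaths = '["$.provinceName","$.main_data","$.sub_data","$.createTime"]'
columns = ["province_name", "main_data", "sub_data", "create_time"]

paths = json.loads(jsonpaths)          # raises ValueError if the string is malformed
assert len(paths) == len(columns)      # positional mapping must line up one-to-one
mapping = dict(zip(columns, paths))
print(mapping)
```

Here `$.provinceName` feeds `province_name`, so a camelCase/snake_case mismatch in the JSON payload would silently leave the column empty.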
Sample JSON message from Kafka:
{"createTime":"2025-02-19 00:31:36","sub_data":"1068411111111","provinceName":"上海",
"main_data":"10684","province_code":"31","operation":2}

An error is reported after the job starts.

Doris version: 2.1.7

2 Answers

It's probably dirty data — some messages likely have a provinceName value that does not match any of the table's LIST partitions.
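One way to confirm this (a hypothetical checker, not part of the original post) is to scan a batch of raw Kafka messages and flag any whose `provinceName` is missing, empty, or not covered by the table's LIST partitions:

```python
import json

# Provinces covered by the table's LIST partitions (copied from the DDL above).
PARTITIONED_PROVINCES = {
    "上海", "云南", "内蒙古", "北京", "吉林", "四川", "天津", "宁夏",
    "安徽", "山东", "山西", "广东", "广西", "新疆", "江苏", "江西",
    "河北", "河南", "浙江", "海南", "湖北", "湖南", "甘肃", "福建",
    "西藏", "贵州", "辽宁", "重庆", "陕西", "青海", "黑龙江",
}

def find_dirty_messages(raw_messages):
    """Return the messages whose provinceName would not land in any partition."""
    dirty = []
    for raw in raw_messages:
        try:
            record = json.loads(raw)
        except json.JSONDecodeError:
            dirty.append(raw)          # unparseable payload
            continue
        province = record.get("provinceName")
        if not province or province not in PARTITIONED_PROVINCES:
            dirty.append(raw)          # missing, empty, or unpartitioned value
    return dirty

messages = [
    '{"createTime":"2025-02-19 00:31:36","sub_data":"1068411111111",'
    '"provinceName":"上海","main_data":"10684","province_code":"31","operation":2}',
    '{"createTime":"2025-02-19 00:32:00","sub_data":"1068422222222",'
    '"provinceName":"","main_data":"10684"}',   # dirty: empty province
]
print(find_dirty_messages(messages))
```

Any message this flags would fail partition pruning in Doris, since a LIST-partitioned table has nowhere to put a value outside its partition definitions.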

Is it this record? Something seems off — the sample record clearly looks fine. Could there be dirty data? Maybe in some dirty records this field is empty?

Set an error tolerance to filter out the dirty data:
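For Routine Load, the tolerance is the `max_error_number` property (rows with load errors allowed per sampling window before the job pauses). A sketch of raising it on the existing job — the value 1000 is an arbitrary example, and the job must be paused before its properties can be altered:

```sql
-- Pause the job, raise the error tolerance, then resume it.
PAUSE ROUTINE LOAD FOR test.tale_data_job;

ALTER ROUTINE LOAD FOR test.tale_data_job
PROPERTIES ("max_error_number" = "1000");

RESUME ROUTINE LOAD FOR test.tale_data_job;
```

This only skips the dirty rows; it does not fix them, so it is worth checking upstream why some messages carry an empty or unknown provinceName.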