Apache Doris 中文技术论坛

[Solved] flink-connector-doris fails when consuming Kafka and writing to Doris

Asked Apr 12, 2024 Modified May 6, 2024
Viewed 71
ingestion 2.0

(Screenshot of the error attached by the asker: 1712900818935.png)

edited May 6, 2024
zhb123319
asked Apr 12, 2024
1 Answer

The default heartbeat timeout is 5s. As soon as heartbeats stop, the FE immediately aborts the transactions of the coordinating BE, even though the BE is not actually down.
However, BE transactions do not require FE involvement during the load process, so a 5s threshold is too sensitive. We recommend aborting the coordinating BE's transactions only after more than 1 minute without a heartbeat.

Reference PR:
https://github.com/apache/doris/pull/22781
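The gist of the suggestion is to decouple "heartbeat missed" from "abort the coordinating BE's transactions": instead of aborting right at the 5s heartbeat timeout, wait for a longer grace period (around 60s). Below is a minimal Java sketch of that policy; the class, method, and field names are hypothetical and illustrative only, not the actual Doris FE code from the PR above.

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: abort transactions of a silent BE only after a
// configurable grace period, rather than immediately on heartbeat loss.
public class HeartbeatAbortPolicy {
    // Grace period before aborting transactions coordinated by a silent BE;
    // the answer suggests ~60s instead of the 5s heartbeat timeout.
    private static final Duration ABORT_GRACE_PERIOD = Duration.ofSeconds(60);

    private final Map<Long, Instant> lastHeartbeat = new ConcurrentHashMap<>();

    // Called whenever the FE receives a heartbeat from a BE.
    public void onHeartbeat(long backendId) {
        lastHeartbeat.put(backendId, Instant.now());
    }

    // Called periodically; returns true only if the BE has been silent
    // longer than the grace period, not at the first missed heartbeat.
    public boolean shouldAbortTransactions(long backendId) {
        Instant last = lastHeartbeat.get(backendId);
        if (last == null) {
            return false; // no heartbeat recorded yet; nothing to abort
        }
        return Duration.between(last, Instant.now()).compareTo(ABORT_GRACE_PERIOD) > 0;
    }
}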

edited May 6, 2024
徐振超@SelectDB7701
answered Apr 12, 2024
Related Questions
Creating a ROUTINE LOAD Kafka task fails with "Failed to get real offsets of kafka topic"
Writing out-of-range values to an int column does not raise an error
The offline installation package logstash-output-doris-1.2.0.zip does not exist
2 answers
Importing data from HDFS into Doris with Sqoop
1 answer
Benchmark metrics will not improve under the Doris compute-storage decoupled architecture
1 answer
Flink CDC tools with Oracle XStream: initialization works, but incremental sync cannot recognize Oracle Date values and sets the target field to NULL
1 answer
