Apache Doris 中文技术论坛

[Solved] Problem when flink-connector-doris consumes from Kafka and writes to Doris

Asked Apr 12, 2024 Modified May 6, 2024
Viewed 92
ingestion 2.0

(Question body: screenshot attachment 1712900818935.png)

edited May 6, 2024
zhb123319
asked Apr 12, 2024
1 Answer

The default heartbeat timeout is 5s. As soon as heartbeats stop, the FE immediately aborts the transactions of the coordinating BE, even though the BE is not actually down.
However, BE transactions do not need the FE's involvement during a load, so this 5s threshold is too sensitive. We recommend aborting the coordinating BE's transactions only after more than 1 minute without a heartbeat.

Reference PR:
https://github.com/apache/doris/pull/22781
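To make the recommendation concrete, here is a minimal illustrative sketch (not actual Doris code; all names are hypothetical) of the abort decision with a heartbeat grace period, contrasting the 5s default with the suggested 1-minute threshold:

```python
# Illustrative sketch only: how an FE-side check might decide whether to
# abort a coordinating BE's transactions after heartbeats stop.
# The 60-second grace period follows the answer's recommendation.

ABORT_GRACE_SECONDS = 60.0  # suggested: > 1 minute instead of the 5s default

def should_abort_txn(last_heartbeat_ts: float, now: float,
                     grace_seconds: float = ABORT_GRACE_SECONDS) -> bool:
    """Abort the coordinating BE's transactions only after the grace period."""
    return (now - last_heartbeat_ts) > grace_seconds

# With the old 5s behavior, a 10s heartbeat gap already aborts the transaction;
# with a 60s grace period, the same gap is tolerated and the load can finish.
print(should_abort_txn(0.0, 10.0, grace_seconds=5.0))  # True
print(should_abort_txn(0.0, 10.0))                     # False
```

The point of the longer grace period is that a BE can miss a few heartbeats (e.g. under GC pressure or a brief network blip) without being down, so aborting its in-flight load transactions after 5s causes avoidable failures.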

edited May 6, 2024
徐振超@SelectDB (WeChat: Faith_xzc)
answered Apr 12, 2024
