Kerberos authentication error after upgrading Doris 2.1.5 to Doris 2.1.7


The catalog was created on Doris 2.1.5 and worked fine. After upgrading to Doris 2.1.7, with no changes to the catalog statement or configuration, queries started failing.

The Hudi data lives on a Huawei data platform. Doris can still reach the Hive Metastore (metadata access works), but querying the table data fails with the error below:
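For context, a Kerberos-enabled Hive catalog in Doris is typically defined along these lines. This is only a sketch of the shape of such a statement; the metastore URI, principals, and keytab path below are placeholders, not the actual values from this environment:

```sql
-- Hypothetical example: all hostnames, principals, and paths are placeholders.
CREATE CATALOG hive_krb PROPERTIES (
    'type' = 'hms',
    'hive.metastore.uris' = 'thrift://metastore-host:9083',
    -- Metastore-side Kerberos settings
    'hive.metastore.sasl.enabled' = 'true',
    'hive.metastore.kerberos.principal' = 'hive/_HOST@EXAMPLE.COM',
    -- Client-side (FE/BE) Kerberos identity used for HDFS access
    'hadoop.security.authentication' = 'kerberos',
    'hadoop.kerberos.principal' = 'doris@EXAMPLE.COM',
    'hadoop.kerberos.keytab' = '/etc/doris/doris.keytab'
);
```

Note that in the trace below the Metastore call succeeds but the subsequent HDFS `getListing` RPC fails inside `HadoopUGI.ugiDoAs`, i.e. the HDFS-side credentials (`hadoop.kerberos.*`) are the ones not being picked up.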

backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:30:00,912 INFO (thrift-server-pool-2|240) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:30:02,251 WARN (mysql-nio-pool-1|226) [StmtExecutor.analyze():1284] Analyze failed. stmt[177, 4532dd14bc0241b5-8f35e54cfdac1d4e]
java.lang.RuntimeException: Failed to get hudi partitions: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.doris.common.security.authentication.HadoopUGI.ugiDoAs(HadoopUGI.java:100) ~[fe-common-1.2-SNAPSHOT.jar:1.2-SNAPSHOT]
at org.apache.doris.datasource.hive.HiveMetaStoreClientHelper.ugiDoAs(HiveMetaStoreClientHelper.java:825) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.datasource.hudi.source.HudiScanNode.isBatchMode(HudiScanNode.java:455) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.datasource.FileQueryScanNode.createScanRangeLocations(FileQueryScanNode.java:311) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.datasource.FileQueryScanNode.doFinalize(FileQueryScanNode.java:216) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.datasource.FileQueryScanNode.finalize(FileQueryScanNode.java:202) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.planner.OriginalPlanner.createPlanFragments(OriginalPlanner.java:207) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.planner.OriginalPlanner.plan(OriginalPlanner.java:101) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.StmtExecutor.analyzeAndGenerateQueryPlan(StmtExecutor.java:1460) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.StmtExecutor.analyze(StmtExecutor.java:1267) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.StmtExecutor.executeByLegacy(StmtExecutor.java:900) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:606) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:532) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.ConnectProcessor.executeQuery(ConnectProcessor.java:337) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:218) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.MysqlConnectProcessor.handleQuery(MysqlConnectProcessor.java:284) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.MysqlConnectProcessor.dispatch(MysqlConnectProcessor.java:312) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.MysqlConnectProcessor.processOnce(MysqlConnectProcessor.java:479) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.mysql.ReadListener.lambda$handleEvent$0(ReadListener.java:52) ~[doris-fe.jar:1.2-SNAPSHOT]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_352]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_352]
at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_352]
Caused by: org.apache.doris.datasource.CacheException: Failed to get hudi partitions: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.doris.datasource.hudi.source.HudiCachedPartitionProcessor.getPartitionValues(HudiCachedPartitionProcessor.java:170) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.datasource.hudi.source.HudiScanNode.getPrunedPartitions(HudiScanNode.java:273) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.datasource.hudi.source.HudiScanNode.lambda$isBatchMode$9(HudiScanNode.java:457) ~[doris-fe.jar:1.2-SNAPSHOT]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_352]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_352]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) ~[hadoop-common-3.3.6.jar:?]
at org.apache.doris.common.security.authentication.HadoopUGI.ugiDoAs(HadoopUGI.java:95) ~[fe-common-1.2-SNAPSHOT.jar:1.2-SNAPSHOT]
... 21 more
Caused by: org.apache.hudi.exception.HoodieException: org.apache.hudi.exception.HoodieException: Error occurs when executing flatMap
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_352]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_352]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_352]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:593) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:735) ~[?:1.8.0_352]
at java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:714) ~[?:1.8.0_352]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233) ~[?:1.8.0_352]
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566) ~[?:1.8.0_352]
at org.apache.doris.datasource.hudi.source.HudiLocalEngineContext.flatMap(HudiLocalEngineContext.java:134) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.hudi.metadata.FileSystemBackedTableMetadata.getPartitionPathWithPathPrefixUsingFilterExpression(FileSystemBackedTableMetadata.java:174) ~[hudi-common-0.14.1.jar:0.14.1]
at org.apache.hudi.metadata.FileSystemBackedTableMetadata.getPartitionPathWithPathPrefix(FileSystemBackedTableMetadata.java:138) ~[hudi-common-0.14.1.jar:0.14.1]
at org.apache.hudi.metadata.FileSystemBackedTableMetadata.lambda$getPartitionPathWithPathPrefixes$1(FileSystemBackedTableMetadata.java:130) ~[hudi-common-0.14.1.jar:0.14.1]
at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:269) ~[?:1.8.0_352]
at java.util.Collections$2.tryAdvance(Collections.java:4719) ~[?:1.8.0_352]
at java.util.Collections$2.forEachRemaining(Collections.java:4727) ~[?:1.8.0_352]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[?:1.8.0_352]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) ~[?:1.8.0_352]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[?:1.8.0_352]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_352]
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566) ~[?:1.8.0_352]
at org.apache.hudi.metadata.FileSystemBackedTableMetadata.getPartitionPathWithPathPrefixes(FileSystemBackedTableMetadata.java:134) ~[hudi-common-0.14.1.jar:0.14.1]
at org.apache.hudi.metadata.FileSystemBackedTableMetadata.getAllPartitionPaths(FileSystemBackedTableMetadata.java:109) ~[hudi-common-0.14.1.jar:0.14.1]
at org.apache.doris.datasource.hudi.source.HudiPartitionProcessor.getAllPartitionNames(HudiPartitionProcessor.java:55) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.datasource.hudi.source.HudiCachedPartitionProcessor.getPartitionValues(HudiCachedPartitionProcessor.java:156) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.datasource.hudi.source.HudiScanNode.getPrunedPartitions(HudiScanNode.java:273) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.datasource.hudi.source.HudiScanNode.lambda$isBatchMode$9(HudiScanNode.java:457) ~[doris-fe.jar:1.2-SNAPSHOT]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_352]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_352]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) ~[hadoop-common-3.3.6.jar:?]
at org.apache.doris.common.security.authentication.HadoopUGI.ugiDoAs(HadoopUGI.java:95) ~[fe-common-1.2-SNAPSHOT.jar:1.2-SNAPSHOT]
... 21 more
Caused by: org.apache.hudi.exception.HoodieException: Error occurs when executing flatMap
at org.apache.hudi.common.function.FunctionWrapper.lambda$throwingFlatMapWrapper$1(FunctionWrapper.java:50) ~[hudi-common-0.14.1.jar:0.14.1]
at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:269) ~[?:1.8.0_352]
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:1.8.0_352]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[?:1.8.0_352]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) ~[?:1.8.0_352]
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747) ~[?:1.8.0_352]
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721) ~[?:1.8.0_352]
at java.util.stream.AbstractTask.compute(AbstractTask.java:327) ~[?:1.8.0_352]
at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) ~[?:1.8.0_352]
Caused by: java.io.IOException: DestHost:destPort node-master4rnui:25000 , LocalHost:localPort host-25-37-69-69/25.37.69.69:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_352]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_352]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_352]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_352]
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:930) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:905) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1571) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1513) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1410) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139) ~[hadoop-common-3.3.6.jar:?]
at com.sun.proxy.$Proxy108.getListing(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:689) ~[hadoop-hdfs-client-3.3.6.jar:?]
at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_352]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_352]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362) ~[hadoop-common-3.3.6.jar:?]
at com.sun.proxy.$Proxy109.getListing(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1702) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1686) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:1113) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:149) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1188) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1185) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:1195) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hudi.metadata.FileSystemBackedTableMetadata.lambda$getPartitionPathWithPathPrefixUsingFilterExpression$8d4dc07c$1(FileSystemBackedTableMetadata.java:176) ~[hudi-common-0.14.1.jar:0.14.1]
at org.apache.hudi.common.function.FunctionWrapper.lambda$throwingFlatMapWrapper$1(FunctionWrapper.java:48) ~[hudi-common-0.14.1.jar:0.14.1]
at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:269) ~[?:1.8.0_352]
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:1.8.0_352]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[?:1.8.0_352]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) ~[?:1.8.0_352]
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747) ~[?:1.8.0_352]
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721) ~[?:1.8.0_352]
at java.util.stream.AbstractTask.compute(AbstractTask.java:327) ~[?:1.8.0_352]
at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) ~[?:1.8.0_352]
Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:738) ~[hadoop-common-3.3.6.jar:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_352]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_352]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:693) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:796) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:347) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1632) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1457) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1410) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139) ~[hadoop-common-3.3.6.jar:?]
at com.sun.proxy.$Proxy108.getListing(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:689) ~[hadoop-hdfs-client-3.3.6.jar:?]
at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_352]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_352]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362) ~[hadoop-common-3.3.6.jar:?]
at com.sun.proxy.$Proxy109.getListing(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1702) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1686) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:1113) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:149) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1188) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1185) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:1195) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hudi.metadata.FileSystemBackedTableMetadata.lambda$getPartitionPathWithPathPrefixUsingFilterExpression$8d4dc07c$1(FileSystemBackedTableMetadata.java:176) ~[hudi-common-0.14.1.jar:0.14.1]
at org.apache.hudi.common.function.FunctionWrapper.lambda$throwingFlatMapWrapper$1(FunctionWrapper.java:48) ~[hudi-common-0.14.1.jar:0.14.1]
at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:269) ~[?:1.8.0_352]
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:1.8.0_352]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[?:1.8.0_352]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) ~[?:1.8.0_352]
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747) ~[?:1.8.0_352]
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721) ~[?:1.8.0_352]
at java.util.stream.AbstractTask.compute(AbstractTask.java:327) ~[?:1.8.0_352]
at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) ~[?:1.8.0_352]
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:179) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:392) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:561) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:347) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:783) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:779) ~[hadoop-common-3.3.6.jar:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_352]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_352]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:779) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:347) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1632) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1457) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1410) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139) ~[hadoop-common-3.3.6.jar:?]
at com.sun.proxy.$Proxy108.getListing(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:689) ~[hadoop-hdfs-client-3.3.6.jar:?]
at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_352]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_352]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362) ~[hadoop-common-3.3.6.jar:?]
at com.sun.proxy.$Proxy109.getListing(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1702) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1686) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:1113) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:149) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1188) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1185) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-3.3.6.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:1195) ~[hadoop-hdfs-client-3.3.6.jar:?]
at org.apache.hudi.metadata.FileSystemBackedTableMetadata.lambda$getPartitionPathWithPathPrefixUsingFilterExpression$8d4dc07c$1(FileSystemBackedTableMetadata.java:176) ~[hudi-common-0.14.1.jar:0.14.1]
at org.apache.hudi.common.function.FunctionWrapper.lambda$throwingFlatMapWrapper$1(FunctionWrapper.java:48) ~[hudi-common-0.14.1.jar:0.14.1]
at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:269) ~[?:1.8.0_352]
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:1.8.0_352]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[?:1.8.0_352]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) ~[?:1.8.0_352]
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747) ~[?:1.8.0_352]
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721) ~[?:1.8.0_352]
at java.util.stream.AbstractTask.compute(AbstractTask.java:327) ~[?:1.8.0_352]
at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) ~[?:1.8.0_352]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) ~[?:1.8.0_352]
2024-12-21 14:30:02,256 WARN (mysql-nio-pool-1|226) [StmtExecutor.executeByLegacy():1025] execute Exception. stmt[177, 4532dd14bc0241b5-8f35e54cfdac1d4e]
org.apache.doris.common.AnalysisException: errCode = 2, detailMessage = Unexpected exception: Failed to get hudi partitions: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.doris.qe.StmtExecutor.analyze(StmtExecutor.java:1285) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.StmtExecutor.executeByLegacy(StmtExecutor.java:900) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:606) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:532) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.ConnectProcessor.executeQuery(ConnectProcessor.java:337) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:218) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.MysqlConnectProcessor.handleQuery(MysqlConnectProcessor.java:284) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.MysqlConnectProcessor.dispatch(MysqlConnectProcessor.java:312) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.qe.MysqlConnectProcessor.processOnce(MysqlConnectProcessor.java:479) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.mysql.ReadListener.lambda$handleEvent$0(ReadListener.java:52) ~[doris-fe.jar:1.2-SNAPSHOT]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_352]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_352]
at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_352]
2024-12-21 14:30:04,776 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:30:12,913 INFO (thrift-server-pool-0|237) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:30:12,913 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:30:14,748 INFO (colocate group clone checker|93) [ColocateTableCheckerAndBalancer.matchGroups():594] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/0/0/0/0/0, cost: 0 ms
2024-12-21 14:30:14,759 INFO (tablet checker|42) [TabletChecker.checkTablets():351] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/22/0/0/0/0,cost: 0 ms
2024-12-21 14:30:14,776 INFO (binlog-gcer|60) [BinlogManager.gc():401] begin gc binlog
2024-12-21 14:30:14,776 INFO (binlog-gcer|60) [BinlogManager.gc():412] gc binlog, dbBinlogMap is null
2024-12-21 14:30:14,776 INFO (binlog-gcer|60) [BinlogGcer.runAfterCatalogReady():63] no gc binlog
2024-12-21 14:30:14,776 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:30:14,781 INFO (recycle bin|38) [CatalogRecycleBin.erasePartition():516] erasePartition eraseNum: 0 cost: 0ms
2024-12-21 14:30:14,781 INFO (recycle bin|38) [CatalogRecycleBin.eraseTable():397] eraseTable eraseNum: 0 cost: 0ms
2024-12-21 14:30:14,781 INFO (recycle bin|38) [CatalogRecycleBin.eraseDatabase():264] eraseDatabase eraseNum: 0 cost: 0ms
2024-12-21 14:30:14,806 INFO (tablet stat mgr|39) [TabletStatMgr.runAfterCatalogReady():175] finished to update index row num of all databases. cost: 0 ms
2024-12-21 14:30:14,822 INFO (leaderCheckpointer|90) [BDBJEJournal.getFinalizedJournalId():626] database names: 1
2024-12-21 14:30:14,822 INFO (leaderCheckpointer|90) [Checkpoint.doCheckpoint():101] last checkpoint journal id: 0, current finalized journal id: 0
2024-12-21 14:30:14,857 INFO (TopicPublisher|63) [TopicPublisherThread.runAfterCatalogReady():68] [topic_publish]begin publish topic info
2024-12-21 14:30:14,859 INFO (topic-publish-thread-5|531) [TopicPublisherThread$TopicPublishWorker.run():151] [topic_publish]publish topic info to be 25.37.69.69 success, time cost=1 ms, details: WORKLOAD_GROUP=1 WORKLOAD_SCHED_POLICY=0
2024-12-21 14:30:24,776 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:30:26,914 INFO (thrift-server-pool-1|238) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:30:26,914 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:30:29,688 INFO (catalog-refresh-timer-pool-0|183) [RefreshManager.refreshCatalogInternal():80] refresh catalog hive_krb with invalidCache true
2024-12-21 14:30:29,776 INFO (binlog-gcer|60) [BinlogManager.gc():401] begin gc binlog
2024-12-21 14:30:29,776 INFO (binlog-gcer|60) [BinlogManager.gc():412] gc binlog, dbBinlogMap is null
2024-12-21 14:30:29,776 INFO (binlog-gcer|60) [BinlogGcer.runAfterCatalogReady():63] no gc binlog
2024-12-21 14:30:34,748 INFO (colocate group clone checker|93) [ColocateTableCheckerAndBalancer.matchGroups():594] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/0/0/0/0/0, cost: 0 ms
2024-12-21 14:30:34,760 INFO (tablet checker|42) [TabletChecker.checkTablets():351] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/22/0/0/0/0,cost: 0 ms
2024-12-21 14:30:34,777 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:30:34,925 INFO (thrift-server-pool-2|240) [ReportHandler.handleReport():216] receive report from be 10002. type: DISK, report version -1, current queue size: 1
2024-12-21 14:30:34,925 INFO (report-thread|193) [ReportHandler.diskReport():634] begin to handle disk report from backend 10002
2024-12-21 14:30:34,926 INFO (report-thread|193) [ReportHandler.diskReport():645] finished to handle disk report from backend: 10002, disk size: 1, bad disk: [], cost: 0 ms
2024-12-21 14:30:34,926 INFO (report-thread|193) [ReportHandler.cpuReport():650] begin to handle cpu report from backend 10002
2024-12-21 14:30:34,926 INFO (report-thread|193) [ReportHandler.cpuReport():664] finished to handle cpu report from backend 10002, cost: 0 ms
2024-12-21 14:30:37,915 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:30:37,915 INFO (thrift-server-pool-0|237) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:30:44,777 INFO (binlog-gcer|60) [BinlogManager.gc():401] begin gc binlog
2024-12-21 14:30:44,777 INFO (binlog-gcer|60) [BinlogManager.gc():412] gc binlog, dbBinlogMap is null
2024-12-21 14:30:44,777 INFO (binlog-gcer|60) [BinlogGcer.runAfterCatalogReady():63] no gc binlog
2024-12-21 14:30:44,777 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:30:44,782 INFO (recycle bin|38) [CatalogRecycleBin.erasePartition():516] erasePartition eraseNum: 0 cost: 0ms
2024-12-21 14:30:44,782 INFO (recycle bin|38) [CatalogRecycleBin.eraseTable():397] eraseTable eraseNum: 0 cost: 0ms
2024-12-21 14:30:44,782 INFO (recycle bin|38) [CatalogRecycleBin.eraseDatabase():264] eraseDatabase eraseNum: 0 cost: 0ms
2024-12-21 14:30:44,859 INFO (TopicPublisher|63) [TopicPublisherThread.runAfterCatalogReady():68] [topic_publish]begin publish topic info
2024-12-21 14:30:44,860 INFO (topic-publish-thread-0|255) [TopicPublisherThread$TopicPublishWorker.run():151] [topic_publish]publish topic info to be 25.37.69.69 success, time cost=1 ms, details: WORKLOAD_GROUP=1 WORKLOAD_SCHED_POLICY=0
2024-12-21 14:30:47,907 INFO (report-thread|193) [ReportHandler.storagePolicyReport():364] backend[10002] reports policies [], report resources: []
2024-12-21 14:30:47,907 INFO (thrift-server-pool-1|238) [ReportHandler.handleReport():216] receive report from be 10002. type: TABLET, report version 17347624440022, current queue size: 1
2024-12-21 14:30:47,907 INFO (report-thread|193) [ReportHandler.tabletReport():473] backend[10002] reports 22 tablet(s). report version: 17347624440022
2024-12-21 14:30:47,909 INFO (report-thread|193) [TabletInvertedIndex.tabletReport():377] finished to do tablet diff with backend[10002]. fe tablet num: 22, backend tablet num: 22. sync: 0. metaDel: 0. foundInMeta: 22. migration: 0. backend partition num: 6, backend need update: 0. found invalid transactions 0. found republish transactions 0. tabletToUpdate: 0. need recovery: 0. cost: 1 ms
2024-12-21 14:30:47,909 INFO (report-thread|193) [ReportHandler.tabletReport():579] finished to handle tablet report from backend[10002] cost: 2 ms
2024-12-21 14:30:48,916 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:30:48,916 INFO (thrift-server-pool-2|240) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:30:54,749 INFO (colocate group clone checker|93) [ColocateTableCheckerAndBalancer.matchGroups():594] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/0/0/0/0/0, cost: 0 ms
2024-12-21 14:30:54,761 INFO (tablet checker|42) [TabletChecker.checkTablets():351] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/22/0/0/0/0,cost: 1 ms
2024-12-21 14:30:54,778 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:30:59,777 INFO (binlog-gcer|60) [BinlogManager.gc():401] begin gc binlog
2024-12-21 14:30:59,777 INFO (binlog-gcer|60) [BinlogManager.gc():412] gc binlog, dbBinlogMap is null
2024-12-21 14:30:59,777 INFO (binlog-gcer|60) [BinlogGcer.runAfterCatalogReady():63] no gc binlog
2024-12-21 14:31:01,917 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:31:01,917 INFO (thrift-server-pool-0|237) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:31:04,778 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:31:14,749 INFO (colocate group clone checker|93) [ColocateTableCheckerAndBalancer.matchGroups():594] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/0/0/0/0/0, cost: 0 ms
2024-12-21 14:31:14,762 INFO (tablet checker|42) [TabletChecker.checkTablets():351] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/22/0/0/0/0,cost: 1 ms
2024-12-21 14:31:14,778 INFO (binlog-gcer|60) [BinlogManager.gc():401] begin gc binlog
2024-12-21 14:31:14,778 INFO (binlog-gcer|60) [BinlogManager.gc():412] gc binlog, dbBinlogMap is null
2024-12-21 14:31:14,778 INFO (binlog-gcer|60) [BinlogGcer.runAfterCatalogReady():63] no gc binlog
2024-12-21 14:31:14,779 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:31:14,782 INFO (recycle bin|38) [CatalogRecycleBin.erasePartition():516] erasePartition eraseNum: 0 cost: 0ms
2024-12-21 14:31:14,782 INFO (recycle bin|38) [CatalogRecycleBin.eraseTable():397] eraseTable eraseNum: 0 cost: 0ms
2024-12-21 14:31:14,782 INFO (recycle bin|38) [CatalogRecycleBin.eraseDatabase():264] eraseDatabase eraseNum: 0 cost: 0ms
2024-12-21 14:31:14,785 INFO (stream_load_record_manager|44) [StreamLoadRecordMgr.runAfterCatalogReady():343] finished to pull stream load records of all backends. record size: 0, cost: 0 ms
2024-12-21 14:31:14,807 INFO (tablet stat mgr|39) [TabletStatMgr.runAfterCatalogReady():175] finished to update index row num of all databases. cost: 0 ms
2024-12-21 14:31:14,823 INFO (leaderCheckpointer|90) [BDBJEJournal.getFinalizedJournalId():626] database names: 1
2024-12-21 14:31:14,823 INFO (leaderCheckpointer|90) [Checkpoint.doCheckpoint():101] last checkpoint journal id: 0, current finalized journal id: 0
2024-12-21 14:31:14,861 INFO (TopicPublisher|63) [TopicPublisherThread.runAfterCatalogReady():68] [topic_publish]begin publish topic info
2024-12-21 14:31:14,862 INFO (topic-publish-thread-1|259) [TopicPublisherThread$TopicPublishWorker.run():151] [topic_publish]publish topic info to be 25.37.69.69 success, time cost=1 ms, details: WORKLOAD_GROUP=1 WORKLOAD_SCHED_POLICY=0
2024-12-21 14:31:16,918 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:31:16,918 INFO (thrift-server-pool-1|238) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:31:24,779 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:31:28,919 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:31:28,919 INFO (thrift-server-pool-2|240) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:31:29,666 INFO (catalog-refresh-timer-pool-0|183) [RefreshManager.refreshCatalogInternal():80] refresh catalog hive_krb with invalidCache true
2024-12-21 14:31:29,778 INFO (binlog-gcer|60) [BinlogManager.gc():401] begin gc binlog
2024-12-21 14:31:29,778 INFO (binlog-gcer|60) [BinlogManager.gc():412] gc binlog, dbBinlogMap is null
2024-12-21 14:31:29,778 INFO (binlog-gcer|60) [BinlogGcer.runAfterCatalogReady():63] no gc binlog
2024-12-21 14:31:34,750 INFO (colocate group clone checker|93) [ColocateTableCheckerAndBalancer.matchGroups():594] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/0/0/0/0/0, cost: 0 ms
2024-12-21 14:31:34,763 INFO (tablet checker|42) [TabletChecker.checkTablets():351] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/22/0/0/0/0,cost: 1 ms
2024-12-21 14:31:34,780 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:31:34,926 INFO (thrift-server-pool-0|237) [ReportHandler.handleReport():216] receive report from be 10002. type: DISK, report version -1, current queue size: 1
2024-12-21 14:31:34,926 INFO (report-thread|193) [ReportHandler.diskReport():634] begin to handle disk report from backend 10002
2024-12-21 14:31:34,927 INFO (report-thread|193) [ReportHandler.diskReport():645] finished to handle disk report from backend: 10002, disk size: 1, bad disk: [], cost: 0 ms
2024-12-21 14:31:34,927 INFO (report-thread|193) [ReportHandler.cpuReport():650] begin to handle cpu report from backend 10002
2024-12-21 14:31:34,927 INFO (report-thread|193) [ReportHandler.cpuReport():664] finished to handle cpu report from backend 10002, cost: 0 ms
2024-12-21 14:31:42,920 INFO (thrift-server-pool-1|238) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:31:42,920 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:31:44,779 INFO (binlog-gcer|60) [BinlogManager.gc():401] begin gc binlog
2024-12-21 14:31:44,779 INFO (binlog-gcer|60) [BinlogManager.gc():412] gc binlog, dbBinlogMap is null
2024-12-21 14:31:44,779 INFO (binlog-gcer|60) [BinlogGcer.runAfterCatalogReady():63] no gc binlog
2024-12-21 14:31:44,780 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:31:44,783 INFO (recycle bin|38) [CatalogRecycleBin.erasePartition():516] erasePartition eraseNum: 0 cost: 0ms
2024-12-21 14:31:44,783 INFO (recycle bin|38) [CatalogRecycleBin.eraseTable():397] eraseTable eraseNum: 0 cost: 0ms
2024-12-21 14:31:44,783 INFO (recycle bin|38) [CatalogRecycleBin.eraseDatabase():264] eraseDatabase eraseNum: 0 cost: 0ms
2024-12-21 14:31:44,862 INFO (TopicPublisher|63) [TopicPublisherThread.runAfterCatalogReady():68] [topic_publish]begin publish topic info
2024-12-21 14:31:44,863 INFO (topic-publish-thread-2|409) [TopicPublisherThread$TopicPublishWorker.run():151] [topic_publish]publish topic info to be 25.37.69.69 success, time cost=0 ms, details: WORKLOAD_GROUP=1 WORKLOAD_SCHED_POLICY=0
2024-12-21 14:31:52,909 INFO (thrift-server-pool-2|240) [ReportHandler.handleReport():216] receive report from be 10002. type: TABLET, report version 17347624440022, current queue size: 1
2024-12-21 14:31:52,909 INFO (report-thread|193) [ReportHandler.storagePolicyReport():364] backend[10002] reports policies [], report resources: []
2024-12-21 14:31:52,910 INFO (report-thread|193) [ReportHandler.tabletReport():473] backend[10002] reports 22 tablet(s). report version: 17347624440022
2024-12-21 14:31:52,912 INFO (report-thread|193) [TabletInvertedIndex.tabletReport():377] finished to do tablet diff with backend[10002]. fe tablet num: 22, backend tablet num: 22. sync: 0. metaDel: 0. foundInMeta: 22. migration: 0. backend partition num: 6, backend need update: 0. found invalid transactions 0. found republish transactions 0. tabletToUpdate: 0. need recovery: 0. cost: 1 ms
2024-12-21 14:31:52,912 INFO (report-thread|193) [ReportHandler.tabletReport():579] finished to handle tablet report from backend[10002] cost: 2 ms
2024-12-21 14:31:53,921 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:31:53,921 INFO (thrift-server-pool-0|237) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:31:54,750 INFO (colocate group clone checker|93) [ColocateTableCheckerAndBalancer.matchGroups():594] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/0/0/0/0/0, cost: 0 ms
2024-12-21 14:31:54,764 INFO (tablet checker|42) [TabletChecker.checkTablets():351] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/22/0/0/0/0,cost: 1 ms
2024-12-21 14:31:54,780 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:31:59,779 INFO (binlog-gcer|60) [BinlogManager.gc():401] begin gc binlog
2024-12-21 14:31:59,779 INFO (binlog-gcer|60) [BinlogManager.gc():412] gc binlog, dbBinlogMap is null
2024-12-21 14:31:59,779 INFO (binlog-gcer|60) [BinlogGcer.runAfterCatalogReady():63] no gc binlog
2024-12-21 14:32:04,781 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:32:06,922 INFO (thrift-server-pool-1|238) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:32:06,922 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:32:14,751 INFO (colocate group clone checker|93) [ColocateTableCheckerAndBalancer.matchGroups():594] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/0/0/0/0/0, cost: 0 ms
2024-12-21 14:32:14,765 INFO (tablet checker|42) [TabletChecker.checkTablets():351] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/22/0/0/0/0,cost: 1 ms
2024-12-21 14:32:14,780 INFO (binlog-gcer|60) [BinlogManager.gc():401] begin gc binlog
2024-12-21 14:32:14,780 INFO (binlog-gcer|60) [BinlogManager.gc():412] gc binlog, dbBinlogMap is null
2024-12-21 14:32:14,780 INFO (binlog-gcer|60) [BinlogGcer.runAfterCatalogReady():63] no gc binlog
2024-12-21 14:32:14,781 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:32:14,783 INFO (recycle bin|38) [CatalogRecycleBin.erasePartition():516] erasePartition eraseNum: 0 cost: 0ms
2024-12-21 14:32:14,784 INFO (recycle bin|38) [CatalogRecycleBin.eraseTable():397] eraseTable eraseNum: 0 cost: 0ms
2024-12-21 14:32:14,784 INFO (recycle bin|38) [CatalogRecycleBin.eraseDatabase():264] eraseDatabase eraseNum: 0 cost: 0ms
2024-12-21 14:32:14,809 INFO (tablet stat mgr|39) [TabletStatMgr.runAfterCatalogReady():175] finished to update index row num of all databases. cost: 1 ms
2024-12-21 14:32:14,824 INFO (leaderCheckpointer|90) [BDBJEJournal.getFinalizedJournalId():626] database names: 1
2024-12-21 14:32:14,824 INFO (leaderCheckpointer|90) [Checkpoint.doCheckpoint():101] last checkpoint journal id: 0, current finalized journal id: 0
2024-12-21 14:32:14,864 INFO (TopicPublisher|63) [TopicPublisherThread.runAfterCatalogReady():68] [topic_publish]begin publish topic info
2024-12-21 14:32:14,865 INFO (topic-publish-thread-3|417) [TopicPublisherThread$TopicPublishWorker.run():151] [topic_publish]publish topic info to be 25.37.69.69 success, time cost=1 ms, details: WORKLOAD_GROUP=1 WORKLOAD_SCHED_POLICY=0
2024-12-21 14:32:21,923 INFO (thrift-server-pool-2|240) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:32:21,923 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:32:24,781 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:32:29,666 INFO (catalog-refresh-timer-pool-0|183) [RefreshManager.refreshCatalogInternal():80] refresh catalog hive_krb with invalidCache true
2024-12-21 14:32:29,780 INFO (binlog-gcer|60) [BinlogManager.gc():401] begin gc binlog
2024-12-21 14:32:29,780 INFO (binlog-gcer|60) [BinlogManager.gc():412] gc binlog, dbBinlogMap is null
2024-12-21 14:32:29,780 INFO (binlog-gcer|60) [BinlogGcer.runAfterCatalogReady():63] no gc binlog
2024-12-21 14:32:33,924 INFO (thrift-server-pool-0|237) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:32:33,924 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:32:34,751 INFO (colocate group clone checker|93) [ColocateTableCheckerAndBalancer.matchGroups():594] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/0/0/0/0/0, cost: 0 ms
2024-12-21 14:32:34,766 INFO (tablet checker|42) [TabletChecker.checkTablets():351] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/22/0/0/0/0,cost: 0 ms
2024-12-21 14:32:34,782 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:32:34,927 INFO (report-thread|193) [ReportHandler.diskReport():634] begin to handle disk report from backend 10002
2024-12-21 14:32:34,927 INFO (thrift-server-pool-1|238) [ReportHandler.handleReport():216] receive report from be 10002. type: DISK, report version -1, current queue size: 1
2024-12-21 14:32:34,928 INFO (report-thread|193) [ReportHandler.diskReport():645] finished to handle disk report from backend: 10002, disk size: 1, bad disk: [], cost: 1 ms
2024-12-21 14:32:34,928 INFO (report-thread|193) [ReportHandler.cpuReport():650] begin to handle cpu report from backend 10002
2024-12-21 14:32:34,928 INFO (report-thread|193) [ReportHandler.cpuReport():664] finished to handle cpu report from backend 10002, cost: 0 ms
2024-12-21 14:32:44,781 INFO (binlog-gcer|60) [BinlogManager.gc():401] begin gc binlog
2024-12-21 14:32:44,781 INFO (binlog-gcer|60) [BinlogManager.gc():412] gc binlog, dbBinlogMap is null
2024-12-21 14:32:44,781 INFO (binlog-gcer|60) [BinlogGcer.runAfterCatalogReady():63] no gc binlog
2024-12-21 14:32:44,782 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:32:44,784 INFO (recycle bin|38) [CatalogRecycleBin.erasePartition():516] erasePartition eraseNum: 0 cost: 0ms
2024-12-21 14:32:44,784 INFO (recycle bin|38) [CatalogRecycleBin.eraseTable():397] eraseTable eraseNum: 0 cost: 0ms
2024-12-21 14:32:44,784 INFO (recycle bin|38) [CatalogRecycleBin.eraseDatabase():264] eraseDatabase eraseNum: 0 cost: 0ms
2024-12-21 14:32:44,866 INFO (TopicPublisher|63) [TopicPublisherThread.runAfterCatalogReady():68] [topic_publish]begin publish topic info
2024-12-21 14:32:44,867 INFO (topic-publish-thread-4|422) [TopicPublisherThread$TopicPublishWorker.run():151] [topic_publish]publish topic info to be 25.37.69.69 success, time cost=1 ms, details: WORKLOAD_GROUP=1 WORKLOAD_SCHED_POLICY=0
2024-12-21 14:32:47,925 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:32:47,925 INFO (thrift-server-pool-2|240) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:32:54,752 INFO (colocate group clone checker|93) [ColocateTableCheckerAndBalancer.matchGroups():594] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/0/0/0/0/0, cost: 0 ms
2024-12-21 14:32:54,766 INFO (tablet checker|42) [TabletChecker.checkTablets():351] finished to check tablets. unhealth/total/added/in_sched/not_ready/exceed_limit: 0/22/0/0/0/0,cost: 0 ms
2024-12-21 14:32:54,782 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions
2024-12-21 14:32:56,911 INFO (thrift-server-pool-0|237) [ReportHandler.handleReport():216] receive report from be 10002. type: TABLET, report version 17347624440022, current queue size: 1
2024-12-21 14:32:56,911 INFO (report-thread|193) [ReportHandler.storagePolicyReport():364] backend[10002] reports policies [], report resources: []
2024-12-21 14:32:56,911 INFO (report-thread|193) [ReportHandler.tabletReport():473] backend[10002] reports 22 tablet(s). report version: 17347624440022
2024-12-21 14:32:56,913 INFO (report-thread|193) [TabletInvertedIndex.tabletReport():377] finished to do tablet diff with backend[10002]. fe tablet num: 22, backend tablet num: 22. sync: 0. metaDel: 0. foundInMeta: 22. migration: 0. backend partition num: 6, backend need update: 0. found invalid transactions 0. found republish transactions 0. tabletToUpdate: 0. need recovery: 0. cost: 2 ms
2024-12-21 14:32:56,913 INFO (report-thread|193) [ReportHandler.tabletReport():579] finished to handle tablet report from backend[10002] cost: 2 ms
2024-12-21 14:32:59,781 INFO (binlog-gcer|60) [BinlogManager.gc():401] begin gc binlog
2024-12-21 14:32:59,781 INFO (binlog-gcer|60) [BinlogManager.gc():412] gc binlog, dbBinlogMap is null
2024-12-21 14:32:59,782 INFO (binlog-gcer|60) [BinlogGcer.runAfterCatalogReady():63] no gc binlog
2024-12-21 14:33:01,926 INFO (thrift-server-pool-1|238) [ReportHandler.handleReport():216] receive report from be 10002. type: TASK, report version -1, current queue size: 1
2024-12-21 14:33:01,926 INFO (report-thread|193) [ReportHandler.taskReport():629] finished to handle task report from backend 10002, diff task num: 0. cost: 0 ms
2024-12-21 14:33:04,783 INFO (InsertOverwriteDropDirtyPartitions|64) [InsertOverwriteManager.runAfterCatalogReady():368] start clean insert overwrite temp partitions

1 Answer

Refresh the catalog (or drop and recreate it), re-run the Kerberos initialization (e.g. `kinit` with the keytab), and then check whether the query works.
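The suggestion above can be sketched roughly as follows, assuming an HMS-type catalog named `hive_krb` (the name that appears in the FE logs); the metastore URI, principal, and keytab path below are placeholders and must be replaced with the values from your Huawei platform:

```sql
-- Option 1: refresh the existing catalog and invalidate its cache
REFRESH CATALOG hive_krb;

-- Option 2: recreate the catalog with explicit Kerberos settings
-- (metastore URI, principal, and keytab path are placeholders)
DROP CATALOG IF EXISTS hive_krb;
CREATE CATALOG hive_krb PROPERTIES (
    'type' = 'hms',
    'hive.metastore.uris' = 'thrift://metastore-host:9083',
    'hive.metastore.sasl.enabled' = 'true',
    'hadoop.security.authentication' = 'kerberos',
    'hadoop.kerberos.principal' = 'doris@EXAMPLE.COM',
    'hadoop.kerberos.keytab' = '/path/to/doris.keytab'
);
```

Before retrying the query, it may also help to renew the ticket on the FE/BE hosts (e.g. `kinit -kt /path/to/doris.keytab doris@EXAMPLE.COM`) and then confirm that the `Client cannot authenticate via:[TOKEN, KERBEROS]` error no longer appears in fe.log.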