Tag Archives: apache

[Solved] write javaBean error, fastjson version 1.2.76, class org.apache.flink.table.data.binary

An exception is thrown when the static Map variable is used as the return value:

com.alibaba.fastjson.JSONException: write javaBean error, fastjson version 1.2.76, class org.apache.flink.table.data.binary.BinaryStringData, fieldName : id, Memory segment does not represent off heap memory

If the rest of the code is left unchanged and an empty new Map is returned instead, the result is normal, which proves the problem lies in the encoding of the fields inside the map; they can therefore be re-encoded:
for (Map<String, Object> map : FakerConstant.TABLE_ROW_DATA_RESULT) {
    Map<String, Object> m = new HashMap<>();
    map.forEach((key, value) -> {
        // Re-encode key and value, otherwise FastJson throws:
        // write javaBean error, fastjson version 1.2.76
        m.put(
                new String(key.getBytes(), StandardCharsets.UTF_8),
                new String(String.valueOf(value).getBytes(), StandardCharsets.UTF_8));
    });
    res.add(m);
}
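Once the keys and values have been copied into plain String objects, fastjson can serialize the result without touching BinaryStringData. A minimal usage sketch, assuming res is the List<Map<String, Object>> built above:

// Serializing the raw FakerConstant.TABLE_ROW_DATA_RESULT would still throw
// "write javaBean error ... BinaryStringData"; the converted list serializes cleanly.
String json = com.alibaba.fastjson.JSON.toJSONString(res);
System.out.println(json);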

[Solved] eclipse Error: org.apache.hadoop.hbase.NotServingRegionException:

Error 1: org.apache.hadoop.hbase.NotServingRegionException:
Error 2: Can't get master address from ZooKeeper; znode data == null

[root@hadoop01 bin]# sh hbase hbck
2022-01-29 16:48:49,797 INFO  [main] client.HConnectionManager$HConnectionImplementation: getMaster attempt 9 of 35 failed; retrying after sleep of 10044, exception=java.io.IOException: Can't get master address from ZooKeeper; znode data == null

Solution:

Stop HBase: go to the HBase bin directory and run

sh stop-hbase.sh

Start the ZooKeeper client and delete the /hbase node:

[root@hadoop01 bin]# sh zkCli.sh
[zk: localhost:2181(CONNECTED) 1] rmr /hbase

Restart the HBase cluster:

sh start-hbase.sh

JavaWeb Database Connection Error: org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1444)

1. The JAR package is not imported
2. Multiple versions conflict
3. Version mismatch between IDEA and Tomcat

java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver
	at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1444)
	at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1252)

Troubleshooting checklist

1. Check whether the mysql-connector-java JAR package has been added

This driver is what connects the program to the database.

2. Check where the JAR is placed

Make sure it is actually on the web application's classpath.

3. Check whether the driver class name string being passed in contains extra spaces.
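A quick way to verify points 1 and 3 is to load the driver and open a connection outside the web container. This is only a sketch; the JDBC URL, user, and password below are placeholders, not values from the original post:

import java.sql.Connection;
import java.sql.DriverManager;

public class JdbcDriverCheck {
    public static void main(String[] args) throws Exception {
        // Must match the driver class exactly; a trailing space such as
        // "com.mysql.cj.jdbc.Driver " reproduces the ClassNotFoundException above.
        Class.forName("com.mysql.cj.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "root", "password")) {
            System.out.println("Driver loaded, connection open: " + !conn.isClosed());
        }
    }
}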

[Solved] Flink jdbc Error: Access Denied for user ‘root‘@‘10.0.0.x‘ (using password: YES)

Error message:

2021-12-31 11:02:51.955 [Source: TableSourceScan(table=[[default_catalog, default_database, jdbc_source_table, project=[id]]], fields=[id]) -> Calc(select=[id, id AS id0, id AS id1]) -> Sink: Sink(table=[default_catalog.default_database.jdbc_upsert_sink_table], fields=[id, id0, id1]) (1/1)#1] WARN  org.apache.flink.runtime.taskmanager.Task - Source: TableSourceScan(table=[[default_catalog, default_database, jdbc_source_table, project=[id]]], fields=[id]) -> Calc(select=[id, id AS id0, id AS id1]) -> Sink: Sink(table=[default_catalog.default_database.jdbc_upsert_sink_table], fields=[id, id0, id1]) (1/1)#1 (85391b7c05a5282974f4da4cbb62f38e) switched from INITIALIZING to FAILED with failure cause: java.io.IOException: unable to open JDBC writer
	at org.apache.flink.connector.jdbc.internal.AbstractJdbcOutputFormat.open(AbstractJdbcOutputFormat.java:56)
	at org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.open(JdbcBatchingOutputFormat.java:129)
	at org.apache.flink.connector.jdbc.internal.GenericJdbcSinkFunction.open(GenericJdbcSinkFunction.java:60)
	at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
	at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
	at org.apache.flink.table.runtime.operators.sink.SinkOperator.open(SinkOperator.java:58)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:442)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:582)
	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.call(StreamTaskActionExecutor.java:100)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:562)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:647)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:537)
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:759)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Access denied for user 'root'@'10.0.0.23' (using password: YES)
	at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)
	at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
	at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
	at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836)
	at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456)
	at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246)
	at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:197)
	at org.apache.flink.connector.jdbc.internal.connection.SimpleJdbcConnectionProvider.getOrEstablishConnection(SimpleJdbcConnectionProvider.java:121)
	at org.apache.flink.connector.jdbc.internal.AbstractJdbcOutputFormat.open(AbstractJdbcOutputFormat.java:54)
	... 14 more

Solution:

The connection information given in the connector's WITH parameters is incorrect, or the table field types in the database do not match the definitions in the Flink SQL; either causes the table connection to fail. For the access-denied error above, check the username and password (and the host the user is allowed to connect from) in the WITH options.
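For reference, the connection information of the Flink JDBC connector lives in the WITH options of the table DDL. The sketch below shows where those options go; the host, database, and credentials are placeholders rather than the values from the original job:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class JdbcSinkDdlExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Wrong 'username'/'password' (or a user not allowed to connect from this host)
        // surfaces as "Access denied for user ..." when the JDBC writer is opened.
        tEnv.executeSql(
                "CREATE TABLE jdbc_upsert_sink_table (" +
                "  id INT," +
                "  PRIMARY KEY (id) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'jdbc'," +
                "  'url' = 'jdbc:mysql://10.0.0.23:3306/test'," +
                "  'table-name' = 'sink_table'," +
                "  'username' = 'root'," +
                "  'password' = 'the-real-password'" +
                ")");
    }
}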

[Solved] git error: unable to create file Filename too long

$ git checkout -b Dev_Br20211217 origin/Dev_Br20211217
error: unable to create file shenyu-examples/shenyu-examples-dubbo/shenyu-examples-alibaba-dubbo-service-annotation/src/main/java/org/apache/shenyu/examples/alibaba/dubbo/service/annotation/TestAlibabaDubboAnnotationApplication.java: Filename too long
error: unable to create file shenyu-examples/shenyu-examples-dubbo/shenyu-examples-alibaba-dubbo-service-annotation/src/main/java/org/apache/shenyu/examples/alibaba/dubbo/service/annotation/impl/DubboMultiParamServiceImpl.java: Filename too long
error: unable to create file shenyu-examples/shenyu-examples-dubbo/shenyu-examples-alibaba-dubbo-service-annotation/src/main/java/org/apache/shenyu/examples/alibaba/dubbo/service/annotation/impl/DubboTestServiceImpl.java: Filename too long
error: unable to create file shenyu-examples/shenyu-examples-dubbo/shenyu-examples-apache-dubbo-service-annotation/src/main/java/org/apache/shenyu/examples/apache/dubbo/service/annotation/TestApacheDubboAnnotationApplication.java: Filename too long
error: unable to create file shenyu-examples/shenyu-examples-dubbo/shenyu-examples-apache-dubbo-service-annotation/src/main/java/org/apache/shenyu/examples/apache/dubbo/service/annotation/impl/DubboMultiParamServiceImpl.java: Filename too long

Solution:
git config --global core.longpaths true

[Solved] Unable to connect to a as user root: com.jcraft.jsch.JSchException: Auth fail

The specific errors encountered while building Hadoop HA are as follows:

com.jcraft.jsch.JSchException: Auth fail
	at com.jcraft.jsch.Session.connect(Session.java:452)
	at org.apache.hadoop.ha.SshFenceByTcpPort.tryFence(SshFenceByTcpPort.java:100)
	at org.apache.hadoop.ha.NodeFencer.fence(NodeFencer.java:97)
	at org.apache.hadoop.ha.ZKFailoverController.doFence(ZKFailoverController.java:532)
	at org.apache.hadoop.ha.ZKFailoverController.fenceOldActive(ZKFailoverController.java:505)
	at org.apache.hadoop.ha.ZKFailoverController.access$1100(ZKFailoverController.java:61)
	at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.fenceOldActive(ZKFailoverController.java:892)
	at org.apache.hadoop.ha.ActiveStandbyElector.fenceOldActive(ActiveStandbyElector.java:902)
	at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:801)
	at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:416)
	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:599)
	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
2021-12-27 11:07:20,846 WARN org.apache.hadoop.ha.NodeFencer: Fencing method org.apache.hadoop.ha.SshFenceByTcpPort(null) was unsuccessful.
2021-12-27 11:07:20,846 ERROR org.apache.hadoop.ha.NodeFencer: Unable to fence service by any configured method.
2021-12-27 11:07:20,846 WARN org.apache.hadoop.ha.ActiveStandbyElector: Exception handling the winning of election
java.lang.RuntimeException: Unable to fence NameNode at a/192.168.0.149:8020
	at org.apache.hadoop.ha.ZKFailoverController.doFence(ZKFailoverController.java:533)
	at org.apache.hadoop.ha.ZKFailoverController.fenceOldActive(ZKFailoverController.java:505)
	at org.apache.hadoop.ha.ZKFailoverController.access$1100(ZKFailoverController.java:61)
	at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.fenceOldActive(ZKFailoverController.java:892)
	at org.apache.hadoop.ha.ActiveStandbyElector.fenceOldActive(ActiveStandbyElector.java:902)
	at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:801)
	at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:416)
	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:599)
	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
2021-12-27 11:07:20,846 INFO org.apache.hadoop.ha.ActiveStandbyElector: Trying to re-establish ZK session
2021-12-27 11:07:20,851 INFO org.apache.zookeeper.ZooKeeper: Session: 0x37df9b417310059 closed
2021-12-27 11:07:21,852 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=a:2181,b:2181,c:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@44a90199
2021-12-27 11:07:21,853 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server b/192.168.0.150:2181. Will not attempt to authenticate using SASL (unknown error)
2021-12-27 11:07:21,854 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to b/192.168.0.150:2181, initiating session
2021-12-27 11:07:21,859 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server b/192.168.0.150:2181, sessionid = 0x27df9b3aaf60068, negotiated timeout = 5000
2021-12-27 11:07:21,860 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2021-12-27 11:07:21,861 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session connected.
2021-12-27 11:07:21,862 INFO org.apache.hadoop.ha.ActiveStandbyElector: Checking for any old active which needs to be fenced...
2021-12-27 11:07:21,862 INFO org.apache.hadoop.ha.ActiveStandbyElector: Old node exists: 0a096d79636c757374657212026e311a016120d43e28d33e
2021-12-27 11:07:21,864 INFO org.apache.hadoop.ha.ZKFailoverController: Should fence: NameNode at a/192.168.0.149:8020
2021-12-27 11:07:22,866 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: a/192.168.0.149:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
2021-12-27 11:07:22,867 WARN org.apache.hadoop.ha.FailoverController: Unable to gracefully make NameNode at a/192.168.0.149:8020 standby (unable to connect)
java.net.ConnectException: Call From b/192.168.0.150 to a:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.GeneratedConstructorAccessor26.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
	at org.apache.hadoop.ipc.Client.call(Client.java:1480)
	at org.apache.hadoop.ipc.Client.call(Client.java:1407)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy9.transitionToStandby(Unknown Source)
	at org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.transitionToStandby(HAServiceProtocolClientSideTranslatorPB.java:112)
	at org.apache.hadoop.ha.FailoverController.tryGracefulFence(FailoverController.java:172)
	at org.apache.hadoop.ha.ZKFailoverController.doFence(ZKFailoverController.java:514)
	at org.apache.hadoop.ha.ZKFailoverController.fenceOldActive(ZKFailoverController.java:505)
	at org.apache.hadoop.ha.ZKFailoverController.access$1100(ZKFailoverController.java:61)
	at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.fenceOldActive(ZKFailoverController.java:892)
	at org.apache.hadoop.ha.ActiveStandbyElector.fenceOldActive(ActiveStandbyElector.java:902)
	at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:801)
	at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:416)
	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:599)
	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
	at org.apache.hadoop.ipc.Client.call(Client.java:1446)
	... 14 more

There are two likely causes of this error; you are also welcome to point out anything missing and discuss it.

The first is that passwordless SSH login is not configured. From the machine that reports the error, try to SSH into the other NameNode machines and check whether you can log in without a password (a JSch-based check is sketched after this list).

The second is that dfs.ha.fencing.methods is set to sshfence, which relies on the fuser command; fuser may not be installed (it is required on every NameNode node).
Installation command: yum -y install psmisc
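Since the fencing failure above is thrown by JSch itself, the key-based login can also be verified with a few lines of Java. This is only a rough sketch; the host name, user, and key path are assumptions and should be replaced with your own values:

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class SshAuthCheck {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        // Private key used for passwordless login between the NameNodes (assumed path).
        jsch.addIdentity("/root/.ssh/id_rsa");
        Session session = jsch.getSession("root", "a", 22);
        session.setConfig("StrictHostKeyChecking", "no");
        // Throws com.jcraft.jsch.JSchException: Auth fail if key-based login is not set up.
        session.connect(5000);
        System.out.println("SSH key authentication OK");
        session.disconnect();
    }
}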

[Solved] mybatis Error: Cause: org.apache.ibatis.builder.BuilderException: Error creating document instance.

java.lang.ExceptionInInitializerError
	at com.example.dao.UserDaoTest.test(UserDaoTest.java:12)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
	at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
	at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:221)
	at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
Caused by: org.apache.ibatis.exceptions.PersistenceException: 
### Error building SqlSession.
### Cause: org.apache.ibatis.builder.BuilderException: Error creating document instance.  Cause: org.xml.sax.SAXParseException; lineNumber: 5; columnNumber: 18; Invalid byte 1 of 1-byte UTF-8 sequence.
	at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:30)
	at org.apache.ibatis.session.SqlSessionFactoryBuilder.build(SqlSessionFactoryBuilder.java:80)
	at org.apache.ibatis.session.SqlSessionFactoryBuilder.build(SqlSessionFactoryBuilder.java:64)
	at com.example.util.MybatisUtils.<clinit>(MybatisUtils.java:19)
	... 26 more
Caused by: org.apache.ibatis.builder.BuilderException: Error creating document instance.  Cause: org.xml.sax.SAXParseException; lineNumber: 5; columnNumber: 18; Invalid byte 1 of 1-byte UTF-8 sequence.
	at org.apache.ibatis.parsing.XPathParser.createDocument(XPathParser.java:263)
	at org.apache.ibatis.parsing.XPathParser.<init>(XPathParser.java:127)
	at org.apache.ibatis.builder.xml.XMLConfigBuilder.<init>(XMLConfigBuilder.java:81)
	at org.apache.ibatis.session.SqlSessionFactoryBuilder.build(SqlSessionFactoryBuilder.java:77)
	... 28 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 5; columnNumber: 18; Invalid byte 1 of 1-byte UTF-8 sequence.
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:204)
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:178)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:400)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:306)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:1000)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:605)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:534)
	at java.xml/com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:888)
	at java.xml/com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:824)
	at java.xml/com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
	at java.xml/com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:246)
	at java.xml/com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:339)
	at org.apache.ibatis.parsing.XPathParser.createDocument(XPathParser.java:261)
	... 31 more
Caused by: com.sun.org.apache.xerces.internal.impl.io.MalformedByteSequenceException: Invalid byte 1 of 1-byte UTF-8 sequence.
	at java.xml/com.sun.org.apache.xerces.internal.impl.io.UTF8Reader.invalidByte(UTF8Reader.java:702)
	at java.xml/com.sun.org.apache.xerces.internal.impl.io.UTF8Reader.read(UTF8Reader.java:568)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.load(XMLEntityScanner.java:1904)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLEntityScanner.scanData(XMLEntityScanner.java:1377)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLScanner.scanComment(XMLScanner.java:800)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanComment(XMLDocumentFragmentScannerImpl.java:1069)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:883)
	... 39 more

Solution:

Change UTF-8 to utf8 in the XML declaration of all the configuration files.

After the modification, the query works.
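For context, the ExceptionInInitializerError in the stack trace comes from a utility class that builds the SqlSessionFactory while parsing the MyBatis config XML. A rough sketch of such a class, with the resource name and structure assumed from the stack trace rather than taken from the original code:

import java.io.InputStream;

import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

public class MybatisUtils {
    private static final SqlSessionFactory FACTORY;

    static {
        try {
            // The SAXParseException above is thrown here if the file's actual encoding
            // does not match the encoding declared in its <?xml ... encoding="..."?> line.
            InputStream in = Resources.getResourceAsStream("mybatis-config.xml");
            FACTORY = new SqlSessionFactoryBuilder().build(in);
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static SqlSessionFactory getSqlSessionFactory() {
        return FACTORY;
    }
}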

[ERROR] Failed to execute goal on project simple-project: Could not resolve dependencies for project

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  01:01 h
[INFO] Finished at: 2021-12-05T21:19:38+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project simple-project: Could not resolve dependencies for project cn.edu.xmu:simple-project:jar:1.0: Could not transfer artifact org.apache.flink:flink-rpc-akka-loader:jar:1.14.0 from/to maven-default-http-blocker (http://0.0.0.0/): Blocked mirror for repositories: [jboss (http://repository.jboss.com/maven2/, default, releases+snapshots)] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException

 

Solution:

My Flink distribution is flink-1.14.0-bin-scala_2.12.tgz. The error above basically means the pom.xml file is wrong, so in pom.xml:

flink-java: the version should correspond to 1.14.0

flink-streaming-java_ should likewise be suffixed with 2.12, and the version should correspond to 1.14.0

flink-clients_ should be suffixed with 2.12, and the version should correspond to 1.14.0

After changing these three places it is basically fine. The error essentially means there is no matching version; correct it, save, and then execute

/usr/local/maven/bin/mvn package

and that is it.

The following is the pom.xml file that I ran successfully, and the corresponding flink is flink-1.14.0-bin-scala_2.12.tgz

<project>
    <groupId>cn.edu.xmu</groupId>
    <artifactId>simple-project</artifactId>
    <modelVersion>4.0.0</modelVersion>
    <name>Simple Project</name>
    <packaging>jar</packaging>
    <version>1.0</version>
    <repositories>
        <repository>
            <id>jboss</id>
            <name>JBoss Repository</name>
            <url>http://repository.jboss.com/maven2/</url>
        </repository>
    </repositories>
    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>1.14.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_2.12</artifactId>
            <version>1.14.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.12</artifactId>
            <version>1.14.0</version>
        </dependency>
    </dependencies>
</project>

 

[Solved] Hive Query Error: java.net.ConnectException: Connection refused (Call to port 8032 failed)

The following errors occur when executing query statements in Hive:

ERROR : Job Submission failed with exception 'java.net.ConnectException(Call From ************ failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused)'
java.net.ConnectException: Call From *************** to ****************:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.GeneratedConstructorAccessor34.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549)
	at org.apache.hadoop.ipc.Client.call(Client.java:1491)
	at org.apache.hadoop.ipc.Client.call(Client.java:1388)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy85.getNewApplication(Unknown Source)
	at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getNewApplication(ApplicationClientProtocolPBClientImpl.java:274)
	at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy86.getNewApplication(Unknown Source)
	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getNewApplication(YarnClientImpl.java:270)
	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createApplication(YarnClientImpl.java:278)
	at org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:196)
	at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:271)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:157)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
	at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
	at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:423)
	at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:149)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224)
	at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:329)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:700)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:804)
	at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:421)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1606)
	at org.apache.hadoop.ipc.Client.call(Client.java:1435)
	... 54 more

ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Call From ****** to hadoop102:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

After careful analysis, the problem is that the connection to port 8032 keeps failing with an exception. Port 8032 is the port I configured for YARN, and it turned out that my YARN service had not been started. So just enter the following command again:

start-yarn.sh

After restarting YARN, execute the query statement again and it completes successfully.

[Solved] Hadoop Error: Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Problem Description:

When testing YARN, running the wordcount test case fails and the following prompt appears:

Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Please check whether your etc/hadoop/mapred-site.xml contains the below configuration:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>

For more detailed output, check the application tracking page: http://hadoop103:8088/cluster/app/application_1638539388325_0001 Then click on links to logs of each attempt.
. Failing the application.

Cause analysis

The main class cannot be found on the classpath.

Solution:

Follow the prompt and add the classpath.

In yarn-site.xml and mapred-site.xml, add the following:

<property>
	<name>yarn.application.classpath</name>
	<value>
		${HADOOP_HOME}/etc/*,
		${HADOOP_HOME}/etc/hadoop/*,
		${HADOOP_HOME}/lib/*,
		${HADOOP_HOME}/share/hadoop/common/*,
		${HADOOP_HOME}/share/hadoop/common/lib/*,
		${HADOOP_HOME}/share/hadoop/mapreduce/*,
		${HADOOP_HOME}/share/hadoop/mapreduce/lib-examples/*,
		${HADOOP_HOME}/share/hadoop/hdfs/*,
		${HADOOP_HOME}/share/hadoop/hdfs/lib/*,
		${HADOOP_HOME}/share/hadoop/yarn/*,
		${HADOOP_HOME}/share/hadoop/yarn/lib/*
	</value>
</property>

Because ${HADOOP_HOME} is used above, the environment variables need to be whitelisted for inheritance in yarn-site.xml, added as follows. HADOOP_HOME is the one we need; add the rest as required. I have put in some commonly used ones here:

<!-- Inheritance of environment variables -->
<property>
  <name>yarn.nodemanager.env-whitelist</name>
  <value>JAVA_HOME,HADOOP_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>

If the YARN services are enabled on multiple servers, remember to apply this configuration on each server.