Category Archives: How to Fix

Tomcat removeGeneratedFiles: Failed to delete the generated Java file [Solution]

Problem

Symptom

        After the Tomcat service starts, JSP pages return 404.

Log analysis

02-Jun-2021 16:17:30.174 WARNING [http-nio-8080-exec-14] org.apache.jasper.compiler.Compiler.removeGeneratedFiles Failed to delete the generated Java file [E:\cloudvos\apache-tomcat-9.0.39-windows-x64\apache-tomcat-9.0.39\work\Catalina\localhost\CS\org\apache\jsp\WEB_002dINF\views\modules\ri\communityList_jsp.java]

Solution

         The problem is caused by file permissions; delete all of the generated cache files.

Delete the contents of the work directory under apache-tomcat-9.0.7

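The cleanup (clearing Tomcat's generated-file cache under the work directory, visible in the log path above) can be scripted. A minimal Python sketch; the Tomcat path at the bottom is a placeholder for your own installation:

```python
import shutil
from pathlib import Path

def clear_tomcat_work_dir(tomcat_home):
    """Delete everything under <tomcat_home>/work, where Jasper keeps the
    generated *_jsp.java / *_jsp.class files, keeping the directory itself."""
    work = Path(tomcat_home) / "work"
    if not work.is_dir():
        return
    for entry in work.iterdir():
        if entry.is_dir():
            shutil.rmtree(entry)
        else:
            entry.unlink()

# Placeholder path -- point this at your own Tomcat installation.
clear_tomcat_work_dir("apache-tomcat-9.0.7")
```

Run it with Tomcat stopped, then restart the service so Jasper regenerates the files with the correct permissions.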

Disabling the browser's "SameSite by default cookies" behavior

Some websites require that the "SameSite by default cookies" setting be disabled.

However, the browser's chrome://flags page no longer shows a "SameSite by default cookies" entry, because the flags page changed after a kernel update.

Solution:

1. Find the browser shortcut, right-click it, and open Properties.

2. Append the following to the Target field (after the quoted executable path, separated by a space):

--disable-features=SameSiteByDefaultCookies,CookiesWithoutSameSiteMustBeSecure

 
3. Restart the browser; the setting now takes effect.

 
Note

This method reportedly stops working on Chrome 94 and later.

 
 

Syntax error in Vue on the IE10 browser

IE10 reports a syntax error:

SyntaxError -2146827286

 

Solution: install @babel/polyfill and import it at the top of the entry file.

npm install --save @babel/polyfill

Then in main.js:

import '@babel/polyfill'

 

Reference link:

https://www.cnblogs.com/yalong/p/9988615.html

 

Tensorflow ValueError: Failed to convert a NumPy array to a Tensor

    Recently I have been learning to build models with TensorFlow and Keras, and I keep running into all kinds of errors. I am very grateful to everyone who has shared their experience; every time I hit a problem I look at how others solved it. Of course, some of the posted solutions turn out not to work, and then the search continues.

      Here I share the problems I encountered and the solutions that finally worked. Along the way I referred to content shared by many predecessors; as a transmitter of that knowledge, I hope this summary can help you too.

     


ValueError: Failed to convert a NumPy array to a Tensor

      Thanks to this blogger for the solution (https://blog.csdn.net/weixin_39653948/article/details/105132995).

Cause: before training the model, the training and test samples were not converted into a data type that TensorFlow and Keras accept.

Solution:

x_train=x_train.astype('float64')
x_test=x_test.astype('float64')
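The cast matters because the arrays typically end up with dtype=object (for example, mixed int and float entries), which TensorFlow cannot turn into a tensor. A minimal NumPy-only sketch of the fix; the sample data is hypothetical:

```python
import numpy as np

# Mixed int/float rows can give the array dtype=object, which
# tf.convert_to_tensor rejects with "Failed to convert a NumPy array ...".
x_train = np.array([[1, 2.5], [3, 4.0]], dtype=object)

# Cast to a concrete float dtype before passing the data to Keras/TF.
x_train = x_train.astype('float64')
```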

    The error is removed and the program is running normally.

    Thanks again for sharing.

 

Alfred: How to integrate iTerm2

The shell-related part of Alfred's workflows uses the system Terminal by default. By modifying the configuration, you can make it call iTerm instead.

iTerm version: 3.0.11; theoretically 2.9+ works as well.

Go to Alfred -> Features -> Terminal and change the application to Custom. A code box appears below; paste the following script into it.

I am used to this script: each time it splits off a new pane to the right of the current session and executes the incoming command there.

If you don't like that behavior, you can modify the script yourself: http://www.iterm2.com/documentation-scripting.html

 

on alfred_script(q)
	if application "iTerm2" is running or application "iTerm" is running then
		run script "
			on run {q}
				tell application \":Applications:iTerm.app\"
					activate
					try
						select first window
						set onlywindow to false
					on error
						create window with default profile
						select first window
						set onlywindow to true
					end try
					tell current session of the first window
						if onlywindow is false then
							tell split vertically with default profile
								write text q
							end tell
						end if
					end tell
				end tell
			end run
		" with parameters {q}
	else
		run script "
			on run {q}
				tell application \":Applications:iTerm.app\"
					activate
					try
						select first window
					on error
						create window with default profile
						select first window
					end try
					tell the first window
						tell current session to write text q
					end tell
				end tell
			end run
		" with parameters {q}
	end if
end alfred_script

 

Can not create the Java virtual machine

After reinstalling the system I installed a lot of software, including Eclipse. However, an error occurred when opening Eclipse after installation:

Click OK to show the following:

 
I found the cause online: installing the JDK had placed three files (java.exe, javaw.exe and javaws.exe) in C:\Windows\System32, as shown in the figure:

 

 
Delete those three exe files, being careful to delete the right ones. Restart Eclipse and it runs successfully!

Hope this helps!

MobaXterm error cuda:out of memory


When training a model through MobaXterm, a cuda: out of memory error may occur. Besides the usual cause, namely that GPU memory is too small and the subdivisions value needs to be adjusted, the message can also mean literally that storage space is insufficient because the dataset is too large. In that case, simply reduce the size of the dataset.
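If the cause really is an oversized dataset, training on a random subset is the quickest workaround. A small Python sketch; the file list and sample fraction are made up for illustration:

```python
import random

def sample_dataset(paths, fraction=0.5, seed=42):
    """Return a reproducible random subset of the training file paths."""
    rng = random.Random(seed)
    k = max(1, int(len(paths) * fraction))
    return rng.sample(paths, k)
```

For example, sample_dataset(list_of_image_paths, 0.5) keeps half the samples, and rerunning with the same seed yields the same subset.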

CDH NameNode stops abnormally with Error: flush failed for required journal (JournalAndStream(mgr=QJM to ...))

The error information is as follows:

2020-12-09 14:07:56,509 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: Error: flush failed for required journal (JournalAndStream(mgr=QJM to [xxx:8485, xxx:8485, xxx:8485], stream=QuorumOutputStream starting at txid 74798133))
2020-12-09 14:07:56,499 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Aborting QuorumOutputStream starting at txid 74798133
        at java.lang.Thread.run(Thread.java:748)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:243)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:711)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream.flush(JournalSet.java:521)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.access$100(JournalSet.java:55)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:385)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream$8.apply(JournalSet.java:525)
        at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:107)
        at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:113)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.flushAndSync(QuorumOutputStream.java:109)
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)

        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27401)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:162)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:179)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:372)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:484)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:458)
xxx:8485: IPC's epoch 33 is less than the last promised epoch 34

        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27401)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:162)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:179)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:372)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:484)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:458)
xxx:8485: IPC's epoch 33 is less than the last promised epoch 34

        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27401)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:162)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:179)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:372)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:484)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:458)
xxx:8485: IPC's epoch 33 is less than the last promised epoch 34
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
2020-12-09 14:07:56,496 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: flush failed for required journal (JournalAndStream(mgr=QJM to [xxx:8485, xxx:8485, xxx:8485], stream=QuorumOutputStream starting at txid 74798133))
2020-12-09 14:07:56,494 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Took 7611ms to send a batch of 2 edits (179 bytes) to remote journal xxx:8485
        at java.lang.Thread.run(Thread.java:748)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:389)
        at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:396)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolTranslatorPB.journal(QJournalProtocolTranslatorPB.java:187)
        at com.sun.proxy.$Proxy19.journal(Unknown Source)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.Client.call(Client.java:1355)
        at org.apache.hadoop.ipc.Client.call(Client.java:1445)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499)

        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27401)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:162)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:179)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:372)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:484)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:458)
org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 33 is less than the last promised epoch 34
2020-12-09 14:07:56,492 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Remote journal xxx:8485 failed to write txns 74798134-74798135. Will try to write to this JN again after the next log roll.
]
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27401)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:162)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:179)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:372)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:484)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:458)
, xxx:8485: IPC's epoch 33 is less than the last promised epoch 34
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27401)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:162)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:179)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:372)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:484)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:458)
2020-12-09 14:07:55,886 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 7003 ms (timeout=20000 ms) for a response for sendEdits. Exceptions so far: [xxx:8485: IPC's epoch 33 is less than the last promised epoch 34
]
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27401)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:162)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:179)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:372)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:484)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:458)
, xxx:8485: IPC's epoch 33 is less than the last promised epoch 34
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27401)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:162)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:179)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:372)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:484)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:458)
2020-12-09 14:07:54,883 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 6001 ms (timeout=20000 ms) for a response for sendEdits. Exceptions so far: [xxx:8485: IPC's epoch 33 is less than the last promised epoch 34
        at java.lang.Thread.run(Thread.java:748)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:389)
        at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:396)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolTranslatorPB.journal(QJournalProtocolTranslatorPB.java:187)
        at com.sun.proxy.$Proxy19.journal(Unknown Source)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.Client.call(Client.java:1355)
        at org.apache.hadoop.ipc.Client.call(Client.java:1445)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499)

        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27401)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:162)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:179)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:372)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:484)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:458)
org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 33 is less than the last promised epoch 34
2020-12-09 14:07:49,776 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Remote journal xxx:8485 failed to write txns 74798134-74798135. Will try to write to this JN again after the next log roll.

With HA configured, one of the NameNodes stopped. The key message "IPC's epoch 33 is less than the last promised epoch 34" is usually caused by a network failure. Reading the log shows that every time the other NameNode starts, it probes port 8485 on the three JournalNode services and reports failure, which strongly suggests a network problem. Troubleshooting was as follows:
ifconfig -a: check whether the network card is dropping packets
check /etc/sysconfig/selinux: confirm SELINUX=disabled is set correctly
/etc/init.d/iptables status: check whether the firewall is running; since Hadoop runs on the intranet here, the firewall was turned off at deployment time
check the firewalls of the three JournalNode servers in turn; all were off

Solutions found online:
1) Increase the JournalNode write timeout
for example, dfs.qjournal.write-txns.timeout.ms = 90000

In a real production environment this kind of timeout is fairly common, so the default 20 s timeout should be raised to a larger value, such as 60000 or 90000 ms.

We can add the following property to hdfs-site.xml under hadoop/etc/hadoop:

<property>
  <name>dfs.qjournal.write-txns.timeout.ms</name>
  <value>60000</value>
</property>

On a CDH cluster, search for dfs.qjournal.write-txns.timeout.ms in the HDFS configuration page.
2) Adjust the NameNode Java parameters so that full GC is triggered earlier and each full GC takes less time.
3) The NameNode defaults to the parallel collector for full GC, which is stop-the-world; switch it to CMS by adjusting the NameNode startup parameters:
-XX:+UseCompressedOops
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
-XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0
-XX:+CMSParallelRemarkEnabled -XX:+DisableExplicitGC
-XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=75
-XX:SoftRefLRUPolicyMSPerMB=0

[Solved] Unit test automatically injects error reporting nullpointer

Fields injected with the @Autowired annotation are null (NullPointerException) in a unit test.

Solution

1. Add the dependency to pom.xml

		<dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <version>2.2.1.RELEASE</version>
            <scope>test</scope>
        </dependency>

2. Add annotations to the unit test class

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.redis.core.BoundValueOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@SpringBootTest(classes = EduApplication.class)
@RunWith(SpringJUnit4ClassRunner.class)
public class RedisTemplateTest {

    @Autowired
    RedisTemplate redisTemplate;

    @Test
    public void testStringAdd() {
        BoundValueOperations str = redisTemplate.boundValueOps("str");
        // Set the value via redisTemplate; the second call overwrites the first
        str.set("test1");
        str.set("test2");
    }
}

[Solved] Spark SQL Error: File xxx could only be written to 0 of the 1 minReplication nodes.

Spark SQL reports the error: File xxx could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.

21/06/1917:06:27 ERROR Hive: Failed to move: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hive/warehouse/hdu.db/user_visit_action/user_visit_action.txt could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2205)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)

Exception in thread "main" org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hive/warehouse/hdu.db/user_visit_action/user_visit_action.txt could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2205)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)
;
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:109)
    at org.apache.spark.sql.hive.HiveExternalCatalog.loadTable(HiveExternalCatalog.scala:874)
    at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.loadTable(ExternalCatalogWithListener.scala:167)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.loadTable(SessionCatalog.scala:491)
    at org.apache.spark.sql.execution.command.LoadDataCommand.run(tables.scala:389)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
    at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3616)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3614)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:606)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:601)
    at com.hdu.bigdata.spark.sql.Spark06_SparkSQL_Test$.main(Spark06_SparkSQL_Test.scala:41)
    at com.hdu.bigdata.spark.sql.Spark06_SparkSQL_Test.main(Spark06_SparkSQL_Test.scala)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hive/warehouse/hdu.db/user_visit_action/user_visit_action.txt could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2205)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)

    at org.apache.hadoop.hive.ql.metadata.Hive.copyFiles(Hive.java:2966)
    at org.apache.hadoop.hive.ql.metadata.Hive.copyFiles(Hive.java:3297)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:2022)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.sql.hive.client.Shim_v2_1.loadTable(HiveShim.scala:1213)
    at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$loadTable$1(HiveClientImpl.scala:883)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
    at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
    at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
    at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
    at org.apache.spark.sql.hive.client.HiveClientImpl.loadTable(HiveClientImpl.scala:878)
    at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$loadTable$1(HiveExternalCatalog.scala:880)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
    ... 24 more
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hive/warehouse/hdu.db/user_visit_action/user_visit_action.txt could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2205)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)

    at org.apache.hadoop.ipc.Client.call(Client.java:1476)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy29.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy30.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1588)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1373)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)

Process finished with exit code 1

Cause of the problem:

The NameNode stores only the file metadata (the directory structure and file names), and it is reachable from the local machine over the public network, so creating the target directory succeeds. But when the file data itself must be written, the client asks the NameNode for DataNode addresses. The NameNode and the DataNodes communicate over the LAN, so the NameNode returns the DataNodes' private IP addresses, which cannot be reached from the local machine — hence all DataNodes are excluded and the write fails.
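For the hostname-based fix below to work, the client machine must be able to resolve each DataNode's hostname to its public IP address. A minimal sketch of the required hosts-file entries, using hypothetical hostnames and addresses (replace them with your cluster's actual values); on Linux/macOS this is `/etc/hosts`, on Windows `C:\Windows\System32\drivers\etc\hosts`:

```text
# Map each DataNode hostname to its PUBLIC IP address
# (hostnames and IPs below are placeholders)
203.0.113.11  hadoop102
203.0.113.12  hadoop103
203.0.113.13  hadoop104
```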

Solution:

Since the NameNode cannot return public IP addresses, configure the client to request DataNode hostnames instead. With a mapping from each hostname to its public IP address, the local machine can then reach the DataNodes and the problem is solved.
Because configuration set in code has the highest priority, you can set it directly in the code:

Add configuration information:

config("dfs.client.use.datanode.hostname", "true")
config("dfs.replication", "2")

Add as follows:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val sparkConf = new SparkConf().setMaster("local[*]").setAppName("sparkSQL")
val spark = SparkSession.builder().enableHiveSupport().config(sparkConf)
  // Ask the NameNode for DataNode hostnames instead of (private) IPs
  .config("dfs.client.use.datanode.hostname", "true")
  .config("dfs.replication", "2")
  .getOrCreate()
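If you prefer not to hard-code the setting, the same HDFS client property can also be set in the client-side `hdfs-site.xml` (note that code-level configuration, as above, takes precedence over this file):

```xml
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
```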