Thanks to [email protected] for providing the question.
Today they ran into a very interesting bug. When they created a project with @vue/cli, it reported an error they had never seen before:
At first I thought it was a version problem; after all, the error message mentioned an update. But the CLI version was the latest, and after asking, the Node and npm versions were also the latest (Node 12.16.1, the latest as of this writing). Most importantly, there was no old version of Vue CLI installed.
Very interesting. By convention, the first reaction to a front-end problem is to uninstall and reinstall:
npm uninstall -g @vue/cli
npm cache clean --force
npm install -g @vue/cli
However, it was useless. After searching for a long time, we could not find any report of this error online, which was very embarrassing.
Later, I noticed some Yarn output below the error. Does the CLI ship a built-in Yarn? But that should not be the case:
Although it seemed incredible, I decided to check the Yarn version anyway; Yarn was now the prime suspect. Sure enough:
The problem was found. But what is this? Hadoop?
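Why would `yarn` print something unexpected? If another executable named yarn sits earlier on PATH, it shadows the JS package manager. A minimal, self-contained simulation (using hypothetical /tmp directories and dummy scripts, not the real binaries) shows how resolution order decides which one runs:

```shell
# Two different 'yarn' executables on PATH; the shell runs whichever
# directory appears first, so the "Hadoop" one shadows the other.
mkdir -p /tmp/hadoop-bin /tmp/node-bin
printf '#!/bin/sh\necho "Hadoop YARN"\n' > /tmp/hadoop-bin/yarn
printf '#!/bin/sh\necho "1.22.4"\n' > /tmp/node-bin/yarn
chmod +x /tmp/hadoop-bin/yarn /tmp/node-bin/yarn
PATH="/tmp/hadoop-bin:/tmp/node-bin:$PATH" yarn   # prints "Hadoop YARN"
```

On a real machine, `which yarn` reveals which binary is actually being resolved; if the path points into a Hadoop installation, that is the culprit.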
Then I remembered: YARN is also a component of Hadoop, used for resource scheduling:
After spark-submit submits the task, the driver-side code executes normally, but the program gets stuck at the executor stage and reports errors repeatedly until the task fails.
The log at the point of failure prints many warnings:
The initial job did not accept any resources; please check the cluster UI to make sure that the worker process is registered and has enough resources. The initial analysis pointed to resources. Then pull down the logs with yarn logs to take a look:
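Pulling the logs is done with YARN's log-aggregation CLI; the application ID below is a placeholder, the real one comes from the ResourceManager UI or `yarn application -list`:

```shell
# Fetch aggregated container logs for the failed job.
# (application ID is a placeholder for illustration)
yarn logs -applicationId application_1234567890123_0001
```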
The executor JVM's initial heap size exceeded its maximum heap size. Checking the task environment revealed the truth:
The JVM's initial heap, -Xms (the minimum heap size), was requested as 13g, but executor.memory was only given 12g, hence the problem above. Modify the script so that executor.memory is at least the -Xms size, and the problem is solved~
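A hedged sketch of that fix in spark-submit terms (the class and jar names are made up; the 13g figure mirrors the -Xms request above): the executor memory must be at least as large as the -Xms the executor JVM asks for.

```shell
# Executor heap ceiling (-Xmx) comes from spark.executor.memory;
# it must cover the -Xms requested via the executor's JVM options.
spark-submit \
  --master yarn \
  --conf spark.executor.memory=13g \
  --conf "spark.executor.extraJavaOptions=-Xms13g" \
  --class com.example.MyJob \
  my-job.jar
```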
Tips: generally, -Xms and -Xmx (the maximum heap size) can be set to the same value.
Oracle recommends setting the minimum heap size (-Xms) equal to the maximum heap size (-Xmx) to minimize garbage collections.
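For illustration, the same-size recommendation looks like this on a plain JVM command line (the heap size and jar name are arbitrary examples):

```shell
# Fixed-size heap: no grow/shrink cycles, fewer resize-triggered GCs.
java -Xms4g -Xmx4g -jar app.jar
```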
Restarting the YARN ResourceManager has no effect.
It could be that the hadoop-yarn-resourcemanager.pid file monitored by Ambari does not correctly record the ResourceManager's process ID.
Enter the directory /var/run/hadoop-yarn/yarn and view the ResourceManager process ID recorded in hadoop-yarn-resourcemanager.pid; use jps to view the process ID of the running ResourceManager; if the two are not equal, copy the process ID shown by jps into hadoop-yarn-resourcemanager.pid. Wait a few seconds, and the Ambari cluster display returns to normal.
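That check can be rehearsed as a self-contained script. The pid file path and jps output below are faked for illustration; on a real node you would use the actual file under /var/run/hadoop-yarn/yarn and the real jps output:

```shell
# Fake a stale pid file (55555) while the "real" ResourceManager PID,
# as jps would report it, is 12345.
echo "55555" > /tmp/hadoop-yarn-resourcemanager.pid
jps_output='12345 ResourceManager
678 Jps'

# Extract the real PID from the (faked) jps output.
real_pid=$(printf '%s\n' "$jps_output" | awk '/ResourceManager/{print $1}')

# If the recorded PID does not match, repair the pid file.
if [ "$(cat /tmp/hadoop-yarn-resourcemanager.pid)" != "$real_pid" ]; then
  echo "$real_pid" > /tmp/hadoop-yarn-resourcemanager.pid
fi
cat /tmp/hadoop-yarn-resourcemanager.pid   # prints 12345
```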