spark-shell does not support yarn-cluster mode, so start it in yarn-client mode:
spark-shell --master=yarn --deploy-mode=client
On startup, the log reports the following error:
The message “Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME” is only a warning. The official explanation is as follows:
Roughly speaking: if spark.yarn.jars and spark.yarn.archive are not configured, all jars under $SPARK_HOME/jars are packaged into a zip file and uploaded so they can be distributed to the worker nodes. Packaging and distribution happen automatically, so it does not matter that these two parameters are left unset.
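If you would rather avoid the warning (and the repeated upload on every submission), a minimal sketch of the alternative is to package the jars once, upload the archive to HDFS, and point spark.yarn.archive at it. The archive name and the HDFS path /spark/spark-libs.zip below are only examples:

# package all Spark jars once and upload the archive to HDFS (paths are examples)
cd $SPARK_HOME/jars
zip -q /tmp/spark-libs.zip *
hdfs dfs -mkdir -p /spark
hdfs dfs -put /tmp/spark-libs.zip /spark/

Then add one line to conf/spark-defaults.conf so submissions reuse the uploaded archive instead of re-uploading $SPARK_HOME/jars each time:

spark.yarn.archive    hdfs:///spark/spark-libs.zip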
“Yarn application has already ended! It might have been killed or unable to launch application master” is the actual exception. Open the YARN ResourceManager web UI; mine is at http://192.168.128.130:8088.
The key part is in the red box: the actual virtual memory usage of 2.2 GB exceeds the 2.1 GB limit. In other words, the container exceeds its virtual memory limit and is killed by YARN; since all the work runs inside the container, once it is killed nothing can proceed.
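For reference, here is roughly where the 2.1 GB limit comes from (assuming the default yarn.scheduler.minimum-allocation-mb of 1024 MB and the default yarn.nodemanager.vmem-pmem-ratio of 2.1): the ApplicationMaster container is allocated 1024 MB of physical memory, so its virtual memory cap is 1024 MB × 2.1 = 2150.4 MB ≈ 2.1 GB, which the 2.2 GB actually used exceeds.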
Solution
Add the following configuration to yarn-site.xml (either one of the two settings alone is enough):
<!-- The following configuration fixes the error when spark-shell runs in yarn-client mode; spark-submit will presumably hit the same problem. Configuring either one of the two properties below is enough, and configuring both also causes no problem. -->
<!-- Whether the virtual memory check is enforced. If the actual virtual memory used is greater than the limit, Spark may fail in client mode with "Yarn application has already ended! It might have been killed or unable to launch application master". -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
  <description>Whether virtual memory limits will be enforced for containers</description>
</property>
<!-- Ratio of virtual memory to physical memory; the default is 2.1 -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
  <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>
After making the changes, restart Hadoop and then start spark-shell again.
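For example, assuming a standard Hadoop installation with $HADOOP_HOME/sbin on the PATH, the restart and relaunch look roughly like this:

# restart YARN so the new yarn-site.xml takes effect
stop-yarn.sh
start-yarn.sh
# start spark-shell again in yarn-client mode
spark-shell --master=yarn --deploy-mode=client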