Running Spark on a YARN cluster in client mode
First start the history server, then submit the SparkPi example in client mode.
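A minimal sketch of the history-server step, assuming the working directory is the Spark install and the spark.eventLog.* / history-server properties are already configured:

# start Spark's bundled history server (run from the Spark home directory)
sbin/start-history-server.sh

With the history server running, submit the job: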
bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode client \
./examples/jars/spark-examples_2.12-3.0.0.jar \
10
Problem:
[2021-03-15 20:36:42.553]Container killed on request. Exit code is 143
[2021-03-15 20:36:42.554]Container exited with a non-zero exit code 143.
For more detailed output, check the application tracking page: http://hadoop103:8088/cluster/app/application_1615792406203_0009 Then click on links to logs of each attempt.
. Failing the application.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:95)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:201)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:550)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2555)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$1(SparkSession.scala:930)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
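To read the full container logs referenced in the message above, YARN's log CLI can be used (a sketch, assuming log aggregation is enabled on the cluster; the application id is the one from this run):

# fetch aggregated container logs for the failed application
yarn logs -applicationId application_1615792406203_0009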
Solution:
Exit code 143 corresponds to SIGTERM (128 + 15): the NodeManager killed the container because it exceeded its allotted physical or virtual memory. A common workaround is to turn off those memory checks.
Modify the Hadoop configuration file /opt/module/hadoop/etc/hadoop/yarn-site.xml, distribute it to all nodes, and restart the Hadoop cluster (see the sketch after the config below):
<!-- Whether to run a thread that checks the physical memory used by each task and kills any task that exceeds its allocation; default is true -->
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
<!-- Whether to run a thread that checks the virtual memory used by each task and kills any task that exceeds its allocation; default is true -->
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
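A minimal sketch of the distribute-and-restart step. xsync here is assumed to be a cluster-wide rsync wrapper script (substitute scp/rsync for your environment), and the start/stop scripts are the standard ones from Hadoop's sbin directory:

# push the updated yarn-site.xml to every node (xsync is a hypothetical helper script)
xsync /opt/module/hadoop/etc/hadoop/yarn-site.xml
# restart YARN so the NodeManagers pick up the new settings
stop-yarn.sh
start-yarn.sh

Note that disabling these checks removes a safety net: containers can then overcommit node memory unchecked. If you would rather keep the checks on, raising the container's memory request instead (for example via spark-submit --executor-memory or the spark.executor.memoryOverhead property) addresses the same exit-143 error.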