You can also do this programmatically by setting the spark.executor.instances and spark.executor.cores parameters on the SparkConf object.
Example:
SparkConf conf = new SparkConf()
    // request 4 executors for the application
    .set("spark.executor.instances", "4")
    // allocate 5 cores to each executor
    .set("spark.executor.cores", "5");
The second parameter applies only to YARN and standalone mode. It allows an application to run multiple executors on the same worker, provided that worker has enough cores: for example, on a 16-core worker, executors with 5 cores each means at most three executors can run there.
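For completeness, here is a minimal, self-contained sketch of an application entry point that applies this configuration; the class name and app name are illustrative, and the settings take effect when the context is created:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ExecutorConfigExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .setAppName("executor-config-example")
            // request 4 executors for the application
            .set("spark.executor.instances", "4")
            // allocate 5 cores to each executor
            .set("spark.executor.cores", "5");

        // the conf is read when the context is constructed
        JavaSparkContext sc = new JavaSparkContext(conf);

        // ... submit jobs here ...

        sc.stop();
    }
}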