Today let's look at two commonly used repartitioning operators in Spark: repartition and partitionBy. Both redistribute the data of an RDD across partitions, and both default to HashPartitioner under the hood. The obvious difference is that partitionBy can only be called on a pair RDD (an RDD of key-value pairs), but even when both are applied to a pair RDD the results are not the same. Let's walk through a demo.
package test

import org.apache.log4j.{Level, Logger}
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext, TaskContext}

/**
  * The difference between repartition and partitionBy
  */
object partitionDemo {
  Logger.getLogger("org.apache.spark").setLevel(Level.ERROR)

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("localTest").setMaster("local[4]")
    val sc = new SparkContext(conf)
    // the source RDD, created with 4 partitions
    // (the word list in the original post is longer; it is truncated here)
    val rdd = sc.parallelize(List("hello", "jason", "what", "are", "you", "doing", "hi", "jason",
      "do", "you", "eat", "dinner", "hello", "jason", "do", "you"), 4)
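    // NOTE: the original listing breaks off above. The rest of this listing is a
    // minimal illustrative completion: the name pairRdd and the println format are
    // assumptions of this sketch, not the original author's code.

    // build a pair RDD, since partitionBy is only defined for key-value RDDs
    val pairRdd = rdd.map((_, 1))

    // repartition(4): a full shuffle that spreads records evenly without looking
    // at the key, so the same key can end up in several different partitions
    pairRdd.repartition(4).foreachPartition { iter =>
      iter.foreach(p => println(s"repartition -> partition ${TaskContext.getPartitionId()}: $p"))
    }

    // partitionBy(new HashPartitioner(4)): each record is placed by the hash of its
    // key, so every occurrence of the same key lands in the same partition
    pairRdd.partitionBy(new HashPartitioner(4)).foreachPartition { iter =>
      iter.foreach(p => println(s"partitionBy -> partition ${TaskContext.getPartitionId()}: $p"))
    }

    sc.stop()
  }
}

Running this locally, the repartition output should show the same word (for example "jason") scattered across different partitions, while the partitionBy output keeps all occurrences of a word in a single partition, because HashPartitioner picks the partition from the key's hash code.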