
Spark Getting Started (3): distinct, union, intersection, subtract, cartesian, and Other Set-Style Operations

Author: 天涯泪小武 · Published 2019-01-17 (originally posted 2018-04-16 on the author's personal blog)

This post covers a few simple Spark operations, such as deduplication, union, and intersection. Whether or not you end up needing them, it's worth keeping them on record.

distinct: remove duplicates

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

import java.util.Arrays;
import java.util.List;

/**
 * Removes duplicate elements. Note that distinct involves a shuffle,
 * so it is a relatively expensive operation.
 * @author wuweifeng wrote on 2018/4/16.
 */
public class TestDistinct {
    public static void main(String[] args) {
        SparkSession sparkSession = SparkSession.builder().appName("JavaWordCount").master("local").getOrCreate();
        // Obtain a JavaSparkContext from the SparkSession's underlying SparkContext
        JavaSparkContext javaSparkContext = new JavaSparkContext(sparkSession.sparkContext());
        List<Integer> data = Arrays.asList(1, 1, 2, 3, 4, 5);
        JavaRDD<Integer> originRDD = javaSparkContext.parallelize(data);
        List<Integer> results = originRDD.distinct().collect();
        System.out.println(results);
    }
}

The result is [4, 1, 3, 5, 2] (the order is not deterministic after the shuffle).
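Since distinct shuffles the data, you can also control the number of output partitions with the distinct(int numPartitions) overload. A minimal sketch, reusing the javaSparkContext from the example above:

// Deduplicate into exactly 2 result partitions instead of the default.
JavaRDD<Integer> deduped = javaSparkContext
        .parallelize(Arrays.asList(1, 1, 2, 3, 4, 5))
        .distinct(2);
System.out.println(deduped.collect()); // order still varies, e.g. [2, 4, 1, 3, 5]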

union: merge without deduplication

This simply concatenates the two RDDs into one, keeping duplicates.

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

import java.util.Arrays;
import java.util.List;

/**
 * Merges two RDDs, keeping duplicates.
 * @author wuweifeng wrote on 2018/4/16.
 */
public class TestUnion {
    public static void main(String[] args) {
        SparkSession sparkSession = SparkSession.builder().appName("JavaWordCount").master("local").getOrCreate();
        // Obtain a JavaSparkContext from the SparkSession's underlying SparkContext
        JavaSparkContext javaSparkContext = new JavaSparkContext(sparkSession.sparkContext());
        List<Integer> one = Arrays.asList(1, 2, 3, 4, 5);
        List<Integer> two = Arrays.asList(1, 6, 7, 8, 9);
        JavaRDD<Integer> oneRDD = javaSparkContext.parallelize(one);
        JavaRDD<Integer> twoRDD = javaSparkContext.parallelize(two);
        List<Integer> results = oneRDD.union(twoRDD).collect();
        System.out.println(results);
    }
}

The result is [1, 2, 3, 4, 5, 1, 6, 7, 8, 9] — note that the duplicate 1 is kept.
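Because union keeps duplicates and does not shuffle, it is cheap. If you want a set-style union, a common pattern is to chain distinct() after it, which does trigger a shuffle. A minimal sketch, reusing oneRDD and twoRDD from the example above:

// Set-style union: concatenate, then deduplicate (this adds a shuffle).
List<Integer> dedupedUnion = oneRDD.union(twoRDD).distinct().collect();
System.out.println(dedupedUnion); // the duplicate 1 now appears only once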

intersection: take the intersection

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

import java.util.Arrays;
import java.util.List;

/**
 * Returns the intersection of two RDDs.
 * @author wuweifeng wrote on 2018/4/16.
 */
public class TestIntersection {
    public static void main(String[] args) {
        SparkSession sparkSession = SparkSession.builder().appName("JavaWordCount").master("local").getOrCreate();
        // Obtain a JavaSparkContext from the SparkSession's underlying SparkContext
        JavaSparkContext javaSparkContext = new JavaSparkContext(sparkSession.sparkContext());
        List<Integer> one = Arrays.asList(1, 2, 3, 4, 5);
        List<Integer> two = Arrays.asList(1, 6, 7, 8, 9);
        JavaRDD<Integer> oneRDD = javaSparkContext.parallelize(one);
        JavaRDD<Integer> twoRDD = javaSparkContext.parallelize(two);
        List<Integer> results = oneRDD.intersection(twoRDD).collect();
        System.out.println(results);
    }
}

The result is [1].
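Unlike union, intersection deduplicates its output and performs a shuffle, so it is considerably more expensive. A small sketch of the deduplication behavior, assuming the same javaSparkContext as above:

// Even though 1 appears twice on each side, it shows up only once in the result.
JavaRDD<Integer> left = javaSparkContext.parallelize(Arrays.asList(1, 1, 2));
JavaRDD<Integer> right = javaSparkContext.parallelize(Arrays.asList(1, 1, 3));
System.out.println(left.intersection(right).collect()); // [1]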

subtract: difference without deduplication

RDD1.subtract(RDD2) returns the elements that appear in RDD1 but not in RDD2, without deduplication.

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

import java.util.Arrays;
import java.util.List;

/**
 * RDD1.subtract(RDD2): elements that appear in RDD1 but not in RDD2.
 * @author wuweifeng wrote on 2018/4/16.
 */
public class TestSubtract {
    public static void main(String[] args) {
        SparkSession sparkSession = SparkSession.builder().appName("JavaWordCount").master("local").getOrCreate();
        // Obtain a JavaSparkContext from the SparkSession's underlying SparkContext
        JavaSparkContext javaSparkContext = new JavaSparkContext(sparkSession.sparkContext());
        List<Integer> one = Arrays.asList(1, 2, 3, 4, 5);
        List<Integer> two = Arrays.asList(1, 6, 7, 8, 9);
        JavaRDD<Integer> oneRDD = javaSparkContext.parallelize(one);
        JavaRDD<Integer> twoRDD = javaSparkContext.parallelize(two);

        List<Integer> results = oneRDD.subtract(twoRDD).collect();
        System.out.println(results);
    }
}

The result is [2, 3, 4, 5].
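As stated above, subtract does not deduplicate: every copy of a left-side element survives as long as that value is absent from the right RDD. A minimal sketch, assuming the same javaSparkContext as above:

// Duplicates on the left are preserved when the value is not in the right RDD.
JavaRDD<Integer> left = javaSparkContext.parallelize(Arrays.asList(1, 1, 2, 3));
JavaRDD<Integer> right = javaSparkContext.parallelize(Arrays.asList(3));
System.out.println(left.subtract(right).collect()); // [1, 1, 2]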

cartesian: the Cartesian product

The Cartesian product is every pairwise combination of the two inputs, which makes it very expensive. For example, if A is ["a","b","c"] and B is ["1","2","3"], then A × B is (a,1)(a,2)(a,3)(b,1)(b,2)(b,3)(c,1)(c,2)(c,3).

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

import java.util.Arrays;
import java.util.List;

/**
 * Returns the Cartesian product of two RDDs; very expensive.
 * @author wuweifeng wrote on 2018/4/16.
 */
public class TestCartesian {
    public static void main(String[] args) {
        SparkSession sparkSession = SparkSession.builder().appName("JavaWordCount").master("local").getOrCreate();
        // Obtain a JavaSparkContext from the SparkSession's underlying SparkContext
        JavaSparkContext javaSparkContext = new JavaSparkContext(sparkSession.sparkContext());
        List<Integer> one = Arrays.asList(1, 2, 3);
        List<Integer> two = Arrays.asList(1, 4, 5);
        JavaRDD<Integer> oneRDD = javaSparkContext.parallelize(one);
        JavaRDD<Integer> twoRDD = javaSparkContext.parallelize(two);
        List<Tuple2<Integer, Integer>> results = oneRDD.cartesian(twoRDD).collect();
        System.out.println(results);
    }
}

Note that the result is a list of key-value pairs (Tuple2):

[(1,1), (1,4), (1,5), (2,1), (2,4), (2,5), (3,1), (3,4), (3,5)]
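cartesian actually returns a JavaPairRDD<Integer, Integer> (hence the Tuple2 elements), and its size is |A| × |B|, which grows very quickly. A minimal sketch of consuming the pairs, reusing oneRDD and twoRDD from the example above (JavaPairRDD comes from org.apache.spark.api.java):

// The result size is |A| * |B| = 3 * 3 = 9 pairs.
JavaPairRDD<Integer, Integer> pairs = oneRDD.cartesian(twoRDD);
System.out.println(pairs.count()); // 9
// Each element is a scala.Tuple2; read the halves with _1() and _2().
for (Tuple2<Integer, Integer> t : pairs.collect()) {
    System.out.println(t._1() + " x " + t._2());
}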
