I'm trying to flatten a Dataset produced by joining two other Datasets. Here is my code:
val family = Seq(
  Person(0, "Agata", 0),
  Person(1, "Iweta", 0),
  Person(2, "Patryk", 2),
  Person(3, "Maksym", 0)).toDS
val cities = Seq(
  City(0, "Warsaw"),
  City(1, "Washington"),
  City(2, "Sopot")).toDS
Then I join them:
val joined = family.joinWith(cities, family("cityId") === cities("id"), "cross")
The result is:
joined: org.apache.spark.sql.Dataset[(Person, City)]
+------------+----------+
|          _1|        _2|
+------------+----------+
| [0,Agata,0]|[0,Warsaw]|
| [1,Iweta,0]|[0,Warsaw]|
|[2,Patryk,2]| [2,Sopot]|
|[3,Maksym,0]|[0,Warsaw]|
+------------+----------+
I want to flatten it and get the following Dataset:
val output: Dataset =
[0,Agata,0,Warsaw]
[1,Iweta,0,Warsaw]
[2,Patryk,2,Sopot]
[3,Maksym,0,Warsaw]
Any ideas how to do this without using the DataFrame API? I would like it done entirely with the Dataset API. Many thanks for your help. Best regards.
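For reference, the flattening step itself is just a typed `map` over the `(Person, City)` pairs: in Spark, `joined.map { case (p, c) => ... }` into a flat case class stays entirely within the Dataset API. Below is a minimal sketch of that pattern on plain Scala collections (no SparkSession needed here); the case-class field names `name`, `cityName`, and the result class `PersonInCity` are assumptions inferred from the sample data, not from the question:

```scala
// Assumed shapes of the question's case classes (guessed from the sample data)
case class Person(id: Int, name: String, cityId: Int)
case class City(id: Int, cityName: String)
// Hypothetical flat result type
case class PersonInCity(id: Int, name: String, cityId: Int, cityName: String)

object FlattenSketch {
  def main(args: Array[String]): Unit = {
    val family = Seq(
      Person(0, "Agata", 0), Person(1, "Iweta", 0),
      Person(2, "Patryk", 2), Person(3, "Maksym", 0))
    val cities = Seq(City(0, "Warsaw"), City(1, "Washington"), City(2, "Sopot"))

    // Plain-collection equivalent of joinWith: pairs matching the join condition
    val joined: Seq[(Person, City)] =
      for (p <- family; c <- cities if p.cityId == c.id) yield (p, c)

    // The flattening map; on a Spark Dataset this would be
    // joined.map { case (p, c) => PersonInCity(...) } and stay fully typed
    val flat: Seq[PersonInCity] =
      joined.map { case (p, c) => PersonInCity(p.id, p.name, p.cityId, c.cityName) }

    flat.foreach(println)
  }
}
```

On a real Dataset the same pattern-match closure works unchanged, because `Dataset[(Person, City)].map` gives you each tuple as a typed value.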
Posted on 2018-06-20 03:16:39
Using a plain join you get the same output:
family.join(cities, family("cityId") === cities("id")).drop("id")
Sample output:
+--------+------+--------+
|    name|cityId|cityName|
+--------+------+--------+
| Agata| 0| Warsaw|
| Iweta| 0| Warsaw|
| Patryk| 2| Sopot|
| Maksym| 0| Warsaw|
+--------+------+--------+
https://stackoverflow.com/questions/50935124