You can query/extract array elements from a PySpark DataFrame using its built-in functions and methods. Here is a common approach. First, create a sample DataFrame with an array column:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode

spark = SparkSession.builder.getOrCreate()

# Sample data: each row has a name and an array of numbers
data = [("Alice", [1, 2, 3]), ("Bob", [4, 5, 6]), ("Charlie", [7, 8, 9])]
df = spark.createDataFrame(data, ["Name", "Numbers"])
df.show()
Output:
+-------+---------+
| Name| Numbers|
+-------+---------+
| Alice|[1, 2, 3]|
| Bob|[4, 5, 6]|
|Charlie|[7, 8, 9]|
+-------+---------+
Next, use explode to turn each array element into its own row:

df_exploded = df.select(col("Name"), explode(col("Numbers")).alias("Number"))
df_exploded.show()
Output:
+-------+------+
| Name|Number|
+-------+------+
| Alice| 1|
| Alice| 2|
| Alice| 3|
| Bob| 4|
| Bob| 5|
| Bob| 6|
|Charlie| 7|
|Charlie| 8|
|Charlie| 9|
+-------+------+
Finally, filter the exploded rows for a specific element:

number_2 = df_exploded.filter(col("Number") == 2)
number_2.show()
Output:
+-----+------+
| Name|Number|
+-----+------+
|Alice|     2|
+-----+------+
That is how you query/extract array elements from a PySpark DataFrame. In practice, you can use other functions and methods to perform more complex operations depending on your needs.