I have a Spark job that, as its final step, writes the data to an internal table with a given name using saveAsTable.
The dataframe is built up through several steps, one of which uses the beta method from scipy.stats, which I import via from scipy.stats import beta. The job runs on Google Cloud with 20 worker nodes, but it fails with the error below, complaining about the package:
Caused by: org.apache.spark.SparkException:
Job aborted due to stage failure:
Task 14 in stage 7.0 failed 4 times, most recent failure:
Lost task 14.3 in stage 7.0 (TID 518, name-w-3.c.somenames.internal,
executor 23): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 364, in main
func, profiler, deserializer, serializer = read_command(pickleSer, infile)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 69, in read_command
command = serializer._read_with_length(file)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 172, in
_read_with_length
return self.loads(obj)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 583, in loads
return pickle.loads(obj)
ImportError: No module named scipy.stats._continuous_distns
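
For context, the failing step looks roughly like this (a minimal sketch; the table names, column names, and the UDF are made up, assuming beta is called inside a Python UDF):

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType
from scipy.stats import beta

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# The UDF body is pickled and shipped to the executors, so scipy has to be
# importable on every worker node, not only on the driver.
@udf(returnType=DoubleType())
def beta_cdf(x, a, b):
    return float(beta.cdf(x, a, b))

df = spark.table("source_table").withColumn("score", beta_cdf("x", "a", "b"))
df.write.saveAsTable("output_table")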
Any ideas or workarounds?
I also tried passing the library to the Spark job via:
"spark.driver.extraLibraryPath" : "/usr/lib/spark/python/lib/pyspark.zip",
"spark.driver.extraClassPath" :"/usr/lib/spark/python/lib/pyspark.zip"
Posted on 2019-11-20 03:10:24
Is the library installed on all of the nodes in the cluster? You can simply do a
pip install --user scipy
I used a bootstrap action for this on AWS EMR; there should be a similar mechanism on Google Cloud (e.g. Dataproc initialization actions).
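
If you are not sure whether the install actually reached every worker, a quick check is to attempt the import on the executors themselves (a minimal sketch, assuming a SparkSession named spark is available):

import socket

def check_scipy(_):
    # Runs on the executors: report the hostname and whether scipy imports there.
    try:
        import scipy.stats  # noqa: F401
        return [(socket.gethostname(), "ok")]
    except ImportError as exc:
        return [(socket.gethostname(), str(exc))]

results = (spark.sparkContext
           .parallelize(range(200), 40)  # enough partitions to touch every worker
           .mapPartitions(check_scipy)
           .distinct()
           .collect())
print(results)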
https://stackoverflow.com/questions/58943859