I'm new to Spark, and I'm hitting an error when converting a .csv file to a DataFrame. Spark itself is installed correctly: when I run spark-shell in a terminal I get the expected output (the welcome banner with the Spark version 3.2.1 logo). However, when I try to run Spark from a Kedro project, I run into trouble. The session is created with:

        .master("local[*]")
    _spark_session = spark_session_conf.getOrCreate()

and it fails with:

    py4j.protocol.Py4JJavaError: An error occurred while calling
        at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:373)
        at org.apache.spark.api.python.PythonRDD.collect
When I run job.init() (in an AWS Glue job, judging by the trace), I get the following error:

    py4j.protocol.Py4JJavaError: An error occurred while calling z:com.amazonaws.services.glue.util.Job.init.
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)