I have two clusters on Databricks. Using one of them (cluster1), I wrote a table to the datalake. Now I need to use the other cluster (cluster2) to schedule the job that writes this table. However, the following error occurs:
Py4JJavaError: An error occurred while calling o344.saveAsTable.
: org.apache.spark.SparkException: Job aborted.
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0
in