After all the operations in sparklyr, the data set is reduced to 1,880,573 rows and 629 columns. When I try to collect it with sdf_collect() for factor analysis, it throws this memory error:
Error : org.apache.spark.sql.execution.OutOfMemorySparkException: Total memory usage during row decode exceeds spark.driver.maxResultSize (4.0 GB). The average r
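For context, a minimal sketch of the kind of setup and call that hits this limit; the local master, the 4G setting, and the table name are assumptions (mtcars stands in for the real ~1.88M x 629 table), and spark.driver.maxResultSize is the driver-side result cap named in the error:

    library(sparklyr)

    # spark.driver.maxResultSize caps the total size of results that can be
    # returned to the driver -- the 4.0 GB figure in the error above.
    conf <- spark_config()
    conf$spark.driver.maxResultSize <- "4G"   # assumed value, matching the error

    sc <- spark_connect(master = "local", config = conf)

    # Stand-in for the wide table produced by the sparklyr pipeline
    sdf <- sdf_copy_to(sc, mtcars, name = "wide_tbl", overwrite = TRUE)

    # sdf_collect() decodes every row and pulls the whole table into R on the
    # driver; at ~1.88M rows x 629 columns that can exceed the configured limit.
    local_df <- sdf_collect(sdf)

    spark_disconnect(sc)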