When using TF 1.9 (i.e., the officially supported version), our CloudML training jobs do not terminate after training completes. The job just sits there indefinitely. Interestingly, CloudML jobs running on TF 1.8 have no such problem. Our model is created with tf.Estimator.
Typical logs (when using TF <= 1.8) look like this:
I Job completed successfully.
I Finished tearing down training program.
I ps-replica-0 Clean up finished. ps-replica-0
I ps-replica-0 Module completed; cleaning up. ps-replica-0
I ps-replica-0 Signal 15 (SIGTERM) was caught. Terminated by service. This is normal behavior. ps-replica-0
I Tearing down training program.
I master-replica-0 Task completed successfully. master-replica-0
I master-replica-0 Clean up finished. master-replica-0
I master-replica-0 Module completed; cleaning up. master-replica-0
I master-replica-0 Loss for final step: 0.054428928. master-replica-0
I master-replica-0 SavedModel written to: XXX master-replica-0
When using TF 1.9, we see the following instead:
I master-replica-0 Skip the current checkpoint eval due to throttle secs (30 secs). master-replica-0
I master-replica-0 Saving checkpoints for 20034 into gs://bg-dataflow/yuri/nine_gag_recommender_train_test/trained_model/model.ckpt. master-replica-0
I master-replica-0 global_step/sec: 17.7668 master-replica-0
I master-replica-0 SavedModel written to: XXX master-replica-0
Any ideas?
Posted on 2018-08-20 06:21:02
Checking the logs for the job ID you sent, it looks like only half of the workers completed their tasks while the other half got stuck, so the master kept waiting for them to come alive, which is why your job hung.

By default, when using tf.Estimator, the master waits for all workers to be alive. In large-scale distributed training with many workers, it is important to set device_filters so that the master only depends on the parameter servers (PS) for liveness; likewise, each worker should only depend on the PS.

The solution is to set device filters in tf.ConfigProto() and pass it via the session_config argument of tf.estimator.RunConfig(). You can find more details here: https://cloud.google.com/ml-engine/docs/tensorflow/distributed-training-details#set-device-filters
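For illustration, here is a minimal sketch of how this could be wired up, assuming a TF 1.x job where each replica's role is read from the TF_CONFIG environment variable that CloudML sets (my_model_fn is a hypothetical model function):

```python
import json
import os

import tensorflow as tf

# CloudML sets TF_CONFIG for every replica; use it to find this task's role.
tf_config = json.loads(os.environ.get('TF_CONFIG', '{}'))
task = tf_config.get('task', {})
task_type = task.get('type', 'master')
task_index = task.get('index', 0)

# Per-role device filters: each task only waits on the PS (and itself),
# instead of waiting on every worker in the cluster.
if task_type == 'master':
    device_filters = ['/job:ps', '/job:master']
elif task_type == 'worker':
    device_filters = ['/job:ps', '/job:worker/task:%d' % task_index]
else:
    device_filters = []  # PS/evaluator tasks need no filter here

session_config = tf.ConfigProto(device_filters=device_filters)
run_config = tf.estimator.RunConfig(session_config=session_config)

# Pass the config to the Estimator as usual (my_model_fn is hypothetical):
# estimator = tf.estimator.Estimator(model_fn=my_model_fn, config=run_config)
```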
https://stackoverflow.com/questions/51851060