diff --git a/model_zoo/official/cv/xception/README.md b/model_zoo/official/cv/xception/README.md
index 0ab75ae016..731d54fea0 100644
--- a/model_zoo/official/cv/xception/README.md
+++ b/model_zoo/official/cv/xception/README.md
@@ -129,7 +129,6 @@ sh scripts/run_standalone_train.sh DEVICE_ID DATA_PATH
 ```
 > Notes: RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_ascend.html), and the device_ip can be got as [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).
-> This is processor cores binding operation regarding the `device_num` and total processor numbers. If you are not expect to do it, remove the operations `taskset` in `scripts/run_distribute_train.sh`

 ### Launch

@@ -155,7 +154,7 @@ Training result will be stored in the example path. Checkpoints will be stored a
 epoch: 1 step: 1251, loss is 4.8427444
 epoch time: 701242.350 ms, per step time: 560.545 ms
 epoch: 2 step: 1251, loss is 4.0637593
-epoch time: 598591.422 ms, per step time: 478.490ms
+epoch time: 598591.422 ms, per step time: 478.490 ms
 ```

 ## [Eval process](#contents)

@@ -219,11 +218,11 @@ result: {'Loss': 1.7797744848789312, 'Top_1_Acc': 0.7985777243589743, 'Top_5_Acc
 | ------------------- | --------------------------- |
 | Model Version | Xception |
 | Resource | HUAWEI CLOUD Modelarts |
-| Uploaded Date | 11/15/2020 |
-| MindSpore Version | 1.0.0 |
+| Uploaded Date | 12/10/2020 |
+| MindSpore Version | 1.1.0 |
 | Dataset | 50k images |
 | Batch_size | 128 |
-| Accuracy | Top1[79.9%] Top5[94.9%] |
+| Accuracy | Top1[79.8%] Top5[94.8%] |
 | Total time | 3mins |

 # [Description of Random Situation](#contents)