
fix shell launch of inceptionv3

tags/v1.0.0
zhouyaqiang 5 years ago
commit 57a6ec3c08
1 changed file with 10 additions and 10 deletions:
  model_zoo/official/cv/inceptionv3/README.md (+10, -10)


@@ -127,9 +127,9 @@ You can start training using python or shell scripts. The usage of shell scripts
 - Ascend:
 ```
 # distribute training example(8p)
-sh run_distribute_train.sh RANK_TABLE_FILE DATA_PATH
+sh scripts/run_distribute_train.sh RANK_TABLE_FILE DATA_PATH
 # standalone training
-sh run_standalone_train.sh DEVICE_ID DATA_PATH
+sh scripts/run_standalone_train.sh DEVICE_ID DATA_PATH
 ```
 > Notes:
 RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training_ascend.html), and the device_ip can be obtained from [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).
@@ -139,9 +139,9 @@ sh run_standalone_train.sh DEVICE_ID DATA_PATH
 - GPU:
 ```
 # distribute training example(8p)
-sh run_distribute_train_gpu.sh DATA_DIR
+sh scripts/run_distribute_train_gpu.sh DATA_DIR
 # standalone training
-sh run_standalone_train_gpu.sh DEVICE_ID DATA_DIR
+sh scripts/run_standalone_train_gpu.sh DEVICE_ID DATA_DIR
 ```


 ### Launch
@@ -155,9 +155,9 @@ sh run_standalone_train_gpu.sh DEVICE_ID DATA_DIR
 shell:
 Ascend:
 # distribute training example(8p)
-sh run_distribute_train.sh RANK_TABLE_FILE DATA_PATH
+sh scripts/run_distribute_train.sh RANK_TABLE_FILE DATA_PATH
 # standalone training
-sh run_standalone_train.sh DEVICE_ID DATA_PATH
+sh scripts/run_standalone_train.sh DEVICE_ID DATA_PATH
 GPU:
 # distributed training example(8p)
 sh scripts/run_distribute_train_gpu.sh /dataset/train
@@ -183,11 +183,11 @@ You can start training using python or shell scripts. The usage of shell scripts


 - Ascend:
 ```
-sh run_eval.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
+sh scripts/run_eval.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
 ```
 - GPU:
 ```
-sh run_eval_gpu.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
+sh scripts/run_eval_gpu.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
 ```


 ### Launch
@@ -199,8 +199,8 @@ You can start training using python or shell scripts. The usage of shell scripts
 GPU: python eval.py --dataset_path DATA_DIR --checkpoint PATH_CHECKPOINT --platform GPU


 shell:
-Ascend: sh run_eval.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
-GPU: sh run_eval_gpu.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
+Ascend: sh scripts/run_eval.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
+GPU: sh scripts/run_eval_gpu.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
 ```


 > checkpoint can be produced in training process.
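The diff above only prepends `scripts/` to the invocations in the README. A minimal sketch of why that prefix matters, using a hypothetical layout under `/tmp` (the real scripts live in the model directory's `scripts/` folder and take more arguments):

```shell
# Hypothetical stand-in for the repo layout: the launch scripts sit under
# scripts/, so from the model root they must be called with the scripts/ prefix.
mkdir -p /tmp/inceptionv3_demo/scripts
printf 'echo "eval started with device=$1"\n' > /tmp/inceptionv3_demo/scripts/run_eval.sh
cd /tmp/inceptionv3_demo
# "sh run_eval.sh ..." would fail here (no such file in the model root);
# the corrected form resolves the script:
sh scripts/run_eval.sh 0 /data /ckpt/best.ckpt
```

The same reasoning applies to every `run_*.sh` touched by this commit: the README previously implied the scripts were in the current directory, which only worked if the reader had already `cd`-ed into `scripts/`.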

