
!13256 fix word spell mistakes

From: @zhouneng2
Reviewed-by: @liangchenghui,@oacjiewen
Signed-off-by: @liangchenghui
Tags: v1.2.0-rc1
Committed by mindspore-ci-bot on Gitee, 4 years ago
Parent commit: f75e607e7b
3 changed files with 8 additions and 8 deletions:

1. model_zoo/official/cv/resnext50/README.md (+6, -6)
2. model_zoo/official/cv/resnext50/src/head.py (+1, -1)
3. model_zoo/official/cv/resnext50/train.py (+1, -1)

model_zoo/official/cv/resnext50/README.md (+6, -6)

@@ -76,20 +76,20 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
     └─run_eval.sh                     # launch evaluating
   ├─src
     ├─backbone
-      ├─_init_.py                     # initalize
+      ├─_init_.py                     # initialize
       ├─resnet.py                     # resnext50 backbone
     ├─utils
-      ├─_init_.py                     # initalize
+      ├─_init_.py                     # initialize
       ├─cunstom_op.py                 # network operation
       ├─logging.py                    # print log
       ├─optimizers_init_.py           # get parameters
       ├─sampler.py                    # distributed sampler
       ├─var_init_.py                  # calculate gain value
-    ├─_init_.py                       # initalize
+    ├─_init_.py                       # initialize
     ├─config.py                       # parameter configuration
     ├─crossentropy.py                 # CrossEntropy loss function
     ├─dataset.py                      # data preprocessing
-    ├─head.py                         # commom head
+    ├─head.py                         # common head
     ├─image_classification.py         # get resnet
     ├─linear_warmup.py                # linear warmup learning rate
     ├─warmup_cosine_annealing.py      # learning rate each step
@@ -140,7 +140,7 @@ You can start training by python script:
python train.py --data_dir ~/imagenet/train/ --platform Ascend --is_distributed 0
```

-or shell stript:
+or shell script:

```script
Ascend:
@@ -181,7 +181,7 @@ You can start training by python script:
python eval.py --data_dir ~/imagenet/val/ --platform Ascend --pretrained resnext.ckpt
```

-or shell stript:
+or shell script:

```script
# Evaluation


model_zoo/official/cv/resnext50/src/head.py (+1, -1)

@@ -22,7 +22,7 @@ __all__ = ['CommonHead']

 class CommonHead(nn.Cell):
     """
-    commom architecture definition.
+    common architecture definition.

     Args:
         num_classes (int): Number of classes.
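
For readers skimming the diff, `CommonHead` is the small classification head attached to the ResNeXt50 backbone. Below is a minimal, hypothetical sketch of such a head in MindSpore, assuming a global-average-pooling plus fully connected design and an `out_channels` constructor argument; it is illustrative only and not copied from head.py.

```python
import mindspore.nn as nn
import mindspore.ops as ops


class CommonHead(nn.Cell):
    """Common classification head (illustrative sketch, not the repo's code).

    Args:
        num_classes (int): Number of classes.
        out_channels (int): Channel count of the backbone output (assumed).
    """
    def __init__(self, num_classes, out_channels):
        super(CommonHead, self).__init__()
        # average over the spatial dimensions (H, W) of the feature map
        self.mean = ops.ReduceMean(keep_dims=False)
        # fully connected classifier producing per-class logits
        self.fc = nn.Dense(out_channels, num_classes, has_bias=True)

    def construct(self, x):
        x = self.mean(x, (2, 3))  # NCHW -> NC
        return self.fc(x)
```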


model_zoo/official/cv/resnext50/train.py (+1, -1)

@@ -161,7 +161,7 @@ def parse_args(cloud_args=None):
     if args.is_dynamic_loss_scale == 1:
         args.loss_scale = 1 # for dynamic loss scale can not set loss scale in momentum opt

-    # select for master rank save ckpt or all rank save, compatiable for model parallel
+    # select for master rank save ckpt or all rank save, compatible for model parallel
     args.rank_save_ckpt_flag = 0
     if args.is_save_on_master:
         if args.rank == 0:
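
The corrected comment describes a common pattern: when saving on the master only, rank 0 is the single checkpoint writer; otherwise every rank saves its own copy. The following is a small, hypothetical helper illustrating that selection; the function name and return values are assumptions for illustration, not taken from train.py.

```python
def resolve_rank_save_ckpt_flag(rank, is_save_on_master):
    """Return 1 if this rank should save checkpoints, otherwise 0.

    Hypothetical helper: with is_save_on_master, only rank 0 writes
    checkpoints; without it, every rank writes.
    """
    if is_save_on_master:
        return 1 if rank == 0 else 0
    return 1


# usage sketch
assert resolve_rank_save_ckpt_flag(rank=0, is_save_on_master=True) == 1
assert resolve_rank_save_ckpt_flag(rank=3, is_save_on_master=True) == 0
assert resolve_rank_save_ckpt_flag(rank=3, is_save_on_master=False) == 1
```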

