
!12798 Update readme of CRNN and SSD

From: @c_34
Reviewed-by: 
Signed-off-by:
tags/v1.2.0-rc1
mindspore-ci-bot (Gitee), 4 years ago
commit 6ac548148d
5 changed files with 230 additions and 29 deletions
1. model_zoo/official/cv/crnn/README.md (+64, -0)
2. model_zoo/official/cv/crnn/export.py (+47, -0)
3. model_zoo/official/cv/crnn/scripts/docker_start.sh (+29, -0)
4. model_zoo/official/cv/ssd/README.md (+61, -29)
5. model_zoo/official/cv/ssd/scripts/docker_start.sh (+29, -0)

model_zoo/official/cv/crnn/README.md (+64, -0)

@@ -18,6 +18,10 @@
- [Distributed Training](#distributed-training)
- [Evaluation Process](#evaluation-process)
- [Evaluation](#evaluation)
- [Inference Process](#inference-process)
- [Export MindIR](#export-mindir)
- [Infer on Ascend310](#infer-on-ascend310)
- [result](#result)
- [Model Description](#model-description)
- [Performance](#performance)
- [Training Performance](#training-performance)
@@ -76,13 +80,37 @@ We provide `convert_ic03.py`, `convert_iiit5k.py`, `convert_svt.py` as examples f


# standalone training example on Ascend
$ bash run_standalone_train.sh [DATASET_NAME] [DATASET_PATH] [PLATFORM]

# offline inference on Ascend310
$ bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [ANN_FILE_PATH] [DATASET] [DEVICE_ID]

```


DATASET_NAME is one of `ic03`, `ic13`, `svt`, `iiit5k`, `synth`.
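For example, a standalone training run on the `iiit5k` dataset might look like the following; the dataset path is illustrative and `Ascend` is assumed to be a valid `PLATFORM` value:

```shell
# illustrative invocation: adjust the dataset path to your environment
$ bash run_standalone_train.sh iiit5k /path/to/IIIT5K Ascend
```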

For distributed training, an hccl configuration file in JSON format needs to be created in advance.


Please follow the instructions in the link below:
[hccl_tools](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).


- Run on docker

Build the docker image (change the version to the one you actually use):

```shell
# build docker
docker build -t ssd:20.1.0 . --build-arg FROM_IMAGE_NAME=ascend-mindspore-arm:20.1.0
```

Create a container layer over the created image and start it

```shell
# start docker
bash scripts/docker_start.sh ssd:20.1.0 [DATA_DIR] [MODEL_DIR]
```

Then you can run everything just like on Ascend.
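For example, assuming the dataset is stored under `/data/crnn` and checkpoints under `/models/crnn` (both paths are illustrative), the container could be started with:

```shell
# illustrative host paths; they are mounted into the container at the same locations
bash scripts/docker_start.sh ssd:20.1.0 /data/crnn /models/crnn
```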

## [Script Description](#contents)


### [Script and Sample Code](#contents)
@@ -197,6 +225,42 @@ Check the `eval/log.txt` and you will get outputs as follows:
result: {'CRNNAccuracy': (0.806)}
```


## [Inference Process](#contents)

### [Export MindIR](#contents)

```shell
python export.py --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
```

The `ckpt_file` parameter is required.
`FILE_FORMAT` should be in ["AIR", "MINDIR"].
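For instance, exporting a trained checkpoint to MINDIR might look like this (the checkpoint path is illustrative):

```shell
# illustrative checkpoint path; the model is written as crnn.mindir in the current directory
python export.py --ckpt_file /path/to/crnn_trained.ckpt --file_name crnn --file_format MINDIR
```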

### Infer on Ascend310

Before performing inference, the MINDIR file must be exported by the export script on the Ascend 910 environment. We only provide an example of inference using the MINDIR model.
Currently batch_size can only be set to 1. The inference result is just the raw network output, which is saved in a binary file. The accuracy is then calculated by `src/metric.`.

```shell
# Ascend310 inference
bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [ANN_FILE_PATH] [DATASET] [DEVICE_ID]
```

- `MINDIR_PATH` is the MINDIR model exported by export.py.
- `DATA_PATH` is the path of the dataset. If the data has to be converted, pass the path of the converted data.
- `ANN_FILE_PATH` is the path of the annotation file. For converted data, the annotation file is exported by the convert scripts.
- `DATASET` is the name of the dataset, which should be in ["synth", "svt", "iiit5k", "ic03", "ic13"].
- `DEVICE_ID` is optional; the default value is 0.
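For example, to run inference with an exported model on converted IIIT5K data (all paths below are illustrative):

```shell
# illustrative paths for the MINDIR model, the converted data and its annotation file
bash run_infer_310.sh ./crnn.mindir /path/to/iiit5k_converted /path/to/iiit5k_annotation.txt iiit5k 0
```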

### result

The inference result is saved in the current path; you can find results like the following in the acc.log file.

```shell
correct num: 2042 , total num: 3000
result CRNNAccuracy is: 0.806666666666
```

## [Model Description](#contents)


### [Performance](#contents)


model_zoo/official/cv/crnn/export.py (+47, -0)

@@ -0,0 +1,47 @@
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

""" export model for CRNN """

import argparse
import numpy as np
import mindspore as ms
from mindspore import Tensor, context, load_checkpoint, export

from src.crnn import CRNN
from src.config import config1 as config

parser = argparse.ArgumentParser(description="CRNN_export")
parser.add_argument("--device_id", type=int, default=0, help="Device id")
parser.add_argument("--ckpt_file", type=str, required=True, help="Checkpoint file path.")
parser.add_argument("--file_name", type=str, default="crnn", help="output file name.")
parser.add_argument('--file_format', type=str, choices=["AIR", "ONNX", "MINDIR"], default='AIR', help='file format')
parser.add_argument("--device_target", type=str, choices=["Ascend", "GPU", "CPU"], default="Ascend",
help="device target")
args = parser.parse_args()
context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target)
if args.device_target == "Ascend":
context.set_context(device_id=args.device_id)

if __name__ == "__main__":
config.batch_size = 1
net = CRNN(config)

load_checkpoint(args.ckpt_file, net=net)
net.set_train(False)

input_data = Tensor(np.zeros([1, 3, config.image_height, config.image_width]), ms.float32)

export(net, input_data, file_name=args.file_name, file_format=args.file_format)

model_zoo/official/cv/crnn/scripts/docker_start.sh (+29, -0)

@@ -0,0 +1,29 @@
#!/bin/bash

docker_image=$1
data_dir=$2
model_dir=$3

# start an interactive container with access to the Ascend NPU devices, mounting the
# Ascend driver, the given data/model directories, and the NPU log directories
docker run -it --ipc=host \
--device=/dev/davinci0 \
--device=/dev/davinci1 \
--device=/dev/davinci2 \
--device=/dev/davinci3 \
--device=/dev/davinci4 \
--device=/dev/davinci5 \
--device=/dev/davinci6 \
--device=/dev/davinci7 \
--device=/dev/davinci_manager \
--device=/dev/devmm_svm \
--device=/dev/hisi_hdc \
--privileged \
-v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
-v /usr/local/Ascend/add-ons/:/usr/local/Ascend/add-ons \
-v ${data_dir}:${data_dir} \
-v ${model_dir}:${model_dir} \
-v /var/log/npu/conf/slog/slog.conf:/var/log/npu/conf/slog/slog.conf \
-v /var/log/npu/slog/:/var/log/npu/slog/ \
-v /var/log/npu/profiling/:/var/log/npu/profiling \
-v /var/log/npu/dump/:/var/log/npu/dump \
-v /var/log/npu/:/usr/slog ${docker_image} \
/bin/bash

model_zoo/official/cv/ssd/README.md (+61, -29)

@@ -14,10 +14,14 @@
- [Training Process](#training-process)
- [Training on Ascend](#training-on-ascend)
- [Training on GPU](#training-on-gpu)
- [Transfer Training](#transfer-training)
- [Evaluation Process](#evaluation-process)
- [Evaluation on Ascend](#evaluation-on-ascend)
- [Evaluation on GPU](#evaluation-on-gpu)
- [Inference Process](#inference-process)
- [Export MindIR](#export-mindir)
- [Infer on Ascend310](#infer-on-ascend310)
- [result](#result)
- [Model Description](#model-description)
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
@@ -130,20 +134,23 @@ After installing MindSpore via the official website, you can start training and


```shell
# distributed training on Ascend
bash run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [RANK_TABLE_FILE]

# run eval on Ascend
bash run_eval.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]

# run inference on Ascend310, MINDIR_PATH is the mindir model which you can export from checkpoint using export.py
bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID]
```


- running on GPU


```shell
# distributed training on GPU
bash run_distribute_train_gpu.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET]

# run eval on GPU
bash run_eval_gpu.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
```


- running on CPU (supports Windows and Ubuntu)
@@ -158,6 +165,24 @@ python train.py --run_platform=CPU --lr=[LR] --dataset=[DATASET] --epoch_size=[E
python eval.py --run_platform=CPU --dataset=[DATASET] --checkpoint_path=[PRETRAINED_CKPT]
```


- Run on docker

Build the docker image (change the version to the one you actually use):

```shell
# build docker
docker build -t ssd:20.1.0 . --build-arg FROM_IMAGE_NAME=ascend-mindspore-arm:20.1.0
```

Create a container layer over the created image and start it

```shell
# start docker
bash scripts/docker_start.sh ssd:20.1.0 [DATA_DIR] [MODEL_DIR]
```

Then you can run everything just like on Ascend.

## [Script Description](#contents)


### [Script and Sample Code](#contents)
@@ -224,7 +249,7 @@ To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will ge
- Distribute mode


```shell
bash run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [RANK_TABLE_FILE] [PRE_TRAINED](optional) [PRE_TRAINED_EPOCH_SIZE](optional)
```


We need five or seven parameters for this script.
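For instance, an 8-device run on the COCO dataset might be launched as follows; the epoch count, learning rate and rank table path are illustrative values only:

```shell
# illustrative values; the rank table JSON is generated with hccl_tools
bash run_distribute_train.sh 8 500 0.2 coco /path/to/rank_table_8pcs.json
```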
@@ -261,7 +286,7 @@ epoch time: 39064.8467540741, per step time: 85.29442522723602
- Distribute mode


```shell
bash run_distribute_train_gpu.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [PRE_TRAINED](optional) [PRE_TRAINED_EPOCH_SIZE](optional)
```


We need five or seven parameters for this script.
@@ -283,15 +308,23 @@ epoch: 1 step: 3, loss is 476.802
epoch: 1 step: 458, loss is 3.1283689
epoch time: 150753.701, per step time: 329.157
...

```


#### Transfer Training

You can train your own model based on either a pretrained classification model or a pretrained detection model. You can perform transfer training with the following steps.

1. Convert your own dataset to the COCO or VOC format. Otherwise you have to add your own data preprocessing code.
2. Change `config.py` according to your own dataset, especially the `num_classes`.
3. Set the argument `filter_weight` to `True` when calling `train.py`; this filters out the final detection box weights from the pretrained model (see the sketch after this list).
4. Build your own bash scripts with the new config and arguments for convenience.
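A hypothetical single-device invocation for these steps is sketched below; the `--pre_trained` flag name, the checkpoint path and the hyper-parameter values are assumptions, while `--dataset`, `--lr`, `--epoch_size` and `filter_weight` follow the names already used in this README:

```shell
# sketch only: fine-tune from a pretrained SSD checkpoint on a COCO-style dataset;
# filter_weight drops the final detection box weights so they are re-initialized
python train.py --dataset=coco --lr=0.05 --epoch_size=60 \
    --filter_weight=True --pre_trained=/path/to/pretrained_ssd.ckpt
```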

### [Evaluation Process](#contents)


#### Evaluation on Ascend


```shell
bash run_eval.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
```


We need two parameters for this script.
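For example, evaluating a trained checkpoint on COCO with device 0 (the checkpoint path is illustrative):

```shell
# illustrative checkpoint path
bash run_eval.sh coco /path/to/ssd.ckpt 0
```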
@@ -326,7 +359,7 @@ mAP: 0.23808886505483504
#### Evaluation on GPU


```shell
bash run_eval_gpu.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
```


We need two parameters for this script.
@@ -358,6 +391,8 @@ Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.686
mAP: 0.2244936111705981
```


## Inference Process

### [Export MindIR](#contents)


```shell
@@ -365,18 +400,16 @@ python export.py --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [
```


The `ckpt_file` parameter is required.
`FILE_FORMAT` should be in ["AIR", "MINDIR"].


### Infer on Ascend310


Before performing inference, the MINDIR file must be exported by the export script on the Ascend 910 environment. We only provide an example of inference using the MINDIR model.
Currently batch_size can only be set to 1. The precision calculation process needs about 70 GB or more of memory; otherwise the process will be killed for exceeding memory limits.


```shell
# Ascend310 inference
bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID]
```


`DEVICE_ID` is optional; the default value is 0.
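For example, with an exported MINDIR model and a prepared dataset directory (paths are illustrative):

```shell
# illustrative paths
bash run_infer_310.sh ./ssd.mindir /path/to/coco/val2017 0
```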
@@ -386,19 +419,18 @@ sh run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID]
The inference result is saved in the current path; you can find results like the following in the acc.log file.


```bash
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.339
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.521
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.370
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.386
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.461
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.310
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.481
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.515
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.293
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.659
mAP: 0.33880018942412393
```


## [Model Description](#contents)


model_zoo/official/cv/ssd/scripts/docker_start.sh (+29, -0)

@@ -0,0 +1,29 @@
#!/bin/bash

docker_image=$1
data_dir=$2
model_dir=$3

docker run -it --ipc=host \
--device=/dev/davinci0 \
--device=/dev/davinci1 \
--device=/dev/davinci2 \
--device=/dev/davinci3 \
--device=/dev/davinci4 \
--device=/dev/davinci5 \
--device=/dev/davinci6 \
--device=/dev/davinci7 \
--device=/dev/davinci_manager \
--device=/dev/devmm_svm \
--device=/dev/hisi_hdc \
--privileged \
-v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
-v /usr/local/Ascend/add-ons/:/usr/local/Ascend/add-ons \
-v ${data_dir}:${data_dir} \
-v ${model_dir}:${model_dir} \
-v /var/log/npu/conf/slog/slog.conf:/var/log/npu/conf/slog/slog.conf \
-v /var/log/npu/slog/:/var/log/npu/slog/ \
-v /var/log/npu/profiling/:/var/log/npu/profiling \
-v /var/log/npu/dump/:/var/log/npu/dump \
-v /var/log/npu/:/usr/slog ${docker_image} \
/bin/bash
