# [SSD Description](#contents)
SSD discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes.
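The default-box grid described above can be sketched in a few lines. The function below is illustrative only; the names and the scale/aspect-ratio values are our own, not the repository's (the real box utilities live in `src/box_util.py`):

```python
# Illustrative sketch of default-box generation for one feature map;
# not the repository's implementation (see src/box_util.py for that).
import itertools
import math

def default_boxes(fm_size, scale, aspect_ratios):
    """Return (cx, cy, w, h) boxes, normalized to [0, 1], for an fm_size x fm_size feature map."""
    boxes = []
    for i, j in itertools.product(range(fm_size), repeat=2):
        cx, cy = (j + 0.5) / fm_size, (i + 0.5) / fm_size
        for ar in aspect_ratios:
            boxes.append((cx, cy, scale * math.sqrt(ar), scale / math.sqrt(ar)))
    return boxes

# e.g. a 19x19 feature map, scale 0.2, three aspect ratios -> 19*19*3 = 1083 boxes
print(len(default_boxes(19, 0.2, [1.0, 2.0, 0.5])))
```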
[Paper](https://arxiv.org/abs/1512.02325): Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg. "SSD: Single Shot MultiBox Detector." European Conference on Computer Vision (ECCV), 2016.
# [Model Architecture](#contents)
## Description
The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes, followed by a non-maximum suppression step to produce the final detections. The early network layers are based on a standard architecture used for high-quality image classification, called the base network. Auxiliary structure is then added to the network to produce detections.
This implementation is an SSD network based on MobileNetV2, with support for training and evaluation.
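For readers unfamiliar with the non-maximum suppression step mentioned above, here is a minimal sketch in pure Python; it illustrates the idea only, and is not the repository's post-processing code:

```python
# Minimal illustrative sketch of non-maximum suppression; not the repository code.
def iou(a, b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedily keep the highest-scoring boxes, dropping overlaps above the threshold."""
    order = sorted(range(len(scores)), key=lambda k: scores[k], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [j for j in order if iou(boxes[best], boxes[j]) <= iou_threshold]
    return keep
```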
Each row of the annotation file is one image annotation, split by spaces. The first column is the relative path of the image; the remaining columns are box and class information in the format [xmin,ymin,xmax,ymax,class]. We read each image from the path joined from `image_dir` (the dataset directory) and the relative path in `anno_path` (the TXT file path); `image_dir` and `anno_path` are set in `config.py`.
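For illustration, a hypothetical annotation file in this format might look like the following (file names and coordinates are made up):

```
images/train_0001.jpg 120,55,200,180,0 40,30,90,110,1
images/train_0002.jpg 15,20,310,240,0
```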
# [Quick Start](#contents)
## Running the example
After installing MindSpore via the official website, you can start training and evaluation on Ascend as follows:
### Training
```
# distributed training on Ascend
sh run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [RANK_TABLE_FILE]

# run eval on Ascend
sh run_eval.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
```
# [Script Description](#contents)
## [Script and Sample Code](#contents)
```shell
.
└─ model_zoo
  └─ ssd
    ├─ README.md                      ## descriptions about SSD
    ├─ scripts
      ├─ run_distribute_train.sh      ## shell script for distributed training on Ascend
      └─ run_eval.sh                  ## shell script for evaluation on Ascend
    ├─ src
      ├─ __init__.py                  ## init file
      ├─ box_util.py                  ## bbox utils
      ├─ coco_eval.py                 ## coco metrics utils
      ├─ config.py                    ## total config
      ├─ dataset.py                   ## create and process dataset
      ├─ init_params.py               ## parameter utils
      ├─ lr_schedule.py               ## learning rate generator
      └─ ssd.py                       ## ssd architecture
    ├─ eval.py                        ## evaluation script
    └─ train.py                       ## training script
```
## [Script Parameters](#contents)

Major parameters in train.py and config.py are as follows:

```
"image_dir": ""    # Other dataset image path; unused when coco or voc is used
"anno_path": ""    # Other dataset annotation path; unused when coco or voc is used
```

## [Training Process](#contents)

- Stand alone mode

```
python train.py --dataset coco
```
- Distribute mode
```
sh run_distribute_train.sh 8 500 0.2 coco /data/hccl.json
```
The input parameters of the distribute mode example are the device number, epoch size, learning rate, dataset mode and the [hccl json configuration file](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools). **It is better to use absolute paths.**
### Training on Ascend
To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/converting_datasets.html) files from `coco_root` (coco dataset) or `image_dir` and `anno_path` (own dataset). **Note that if `mindrecord_dir` is not empty, it will use `mindrecord_dir` instead of raw images.**
- Distribute mode
```
sh run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [RANK_TABLE_FILE] [PRE_TRAINED](optional) [PRE_TRAINED_EPOCH_SIZE](optional)
```
This script requires five or seven parameters (a hypothetical seven-parameter invocation is shown after the list):
- `DEVICE_NUM`: the device number for distributed train.
- `EPOCH_SIZE`: the epoch size for distributed train.
- `LR`: the initial learning rate for distributed train.
- `DATASET`: the dataset mode for distributed train.
- `RANK_TABLE_FILE`: the path of [rank_table.json](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools); it is better to use an absolute path.
- `PRE_TRAINED`: the path of a pretrained checkpoint file; it is better to use an absolute path.
- `PRE_TRAINED_EPOCH_SIZE`: the epoch size of the pretrained checkpoint.
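For example, resuming distributed training from a pretrained checkpoint might look like this (the checkpoint path and epoch count below are hypothetical):

```
sh run_distribute_train.sh 8 500 0.2 coco /data/hccl.json /opt/ssd-200_458.ckpt 200
```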
Training results will be stored in the current path, in a folder whose name begins with "LOG". There you can find checkpoint files together with logs showing the loss value of each step, like the following:
```
epoch: 1 step: 458, loss is 3.1681802
...
epoch: 500 step: 458, loss is 0.5548882
epoch time: 39064.8467540741, per step time: 85.29442522723602
```
## [Evaluation Process](#contents)
### Evaluation on Ascend
For evaluation, run `eval.py` with `checkpoint_path`. `checkpoint_path` is the path of the [checkpoint](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html) file.
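For example, using `run_eval.sh` on device 0 with a hypothetical checkpoint path:

```
sh run_eval.sh coco /opt/LOG0/ssd-500_458.ckpt 0
```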
# [YOLOv3_ResNet18 Description](#contents)
The overall network architecture of YOLOv3 is described below.
We use ResNet18 as the backbone of YOLOv3_ResNet18. The ResNet18 architecture has 4 stages. It performs the initial convolution and max-pooling using 7×7 and 3×3 kernel sizes respectively. Afterward, each stage of the network has residual blocks (2, 2, 2, 2), each containing two 3×3 conv layers. Finally, the network has an average pooling layer followed by a fully connected layer.
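As a rough illustration of one such residual block (two 3×3 conv layers plus a skip connection), here is a minimal MindSpore sketch; it is not the repository's implementation, and the layer names are our own:

```python
# Illustrative sketch of a ResNet18 basic block; not the repository code.
import mindspore.nn as nn

class BasicBlock(nn.Cell):
    def __init__(self, in_channels, out_channels, stride=1):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride=stride, pad_mode='same')
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, stride=1, pad_mode='same')
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU()
        self.down_sample = None
        if stride != 1 or in_channels != out_channels:
            # 1x1 conv to match the identity branch to the output shape
            self.down_sample = nn.SequentialCell(
                [nn.Conv2d(in_channels, out_channels, 1, stride=stride, pad_mode='same'),
                 nn.BatchNorm2d(out_channels)])

    def construct(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        if self.down_sample is not None:
            identity = self.down_sample(identity)
        return self.relu(out + identity)
```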
The YOLOv3 network is based on ResNet-18, with support for training and evaluation.
Each row of the annotation file is one image annotation, split by spaces. The first column is the relative path of the image; the remaining columns are box and class information in the format [xmin,ymin,xmax,ymax,class]. `dataset.py` is the parsing script; we read each image from the path joined from `image_dir` (the dataset directory) and the relative path in `anno_path` (the TXT file path). `image_dir` and `anno_path` are external inputs.
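A minimal sketch of how such a row could be parsed (the actual logic lives in `src/dataset.py`; the helper name here is hypothetical):

```python
# Hypothetical helper illustrating the annotation format; see src/dataset.py for the real parser.
import os

def parse_annotation(row, image_dir):
    """Split a space-separated row into an absolute image path and [xmin, ymin, xmax, ymax, class] boxes."""
    fields = row.strip().split(' ')
    image_path = os.path.join(image_dir, fields[0])
    boxes = [list(map(int, f.split(','))) for f in fields[1:]]
    return image_path, boxes

image_path, boxes = parse_annotation('images/0001.jpg 120,55,200,180,0', '/data/dataset')
```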
# [Environment Requirements](#contents)
- Hardware (Ascend)
    - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
## Running the Example
After installing MindSpore via the official website, you can start training and evaluation on Ascend as follows:
- running on Ascend
```shell
# run standalone training example
sh run_standalone_train.sh [DEVICE_ID] [EPOCH_SIZE] [MINDRECORD_DIR] [IMAGE_DIR] [ANNO_PATH]

# run distributed training example
sh run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [MINDRECORD_DIR] [IMAGE_DIR] [ANNO_PATH] [RANK_TABLE_FILE]

# run evaluation example
sh run_eval.sh [DEVICE_ID] [CKPT_PATH] [MINDRECORD_DIR] [IMAGE_DIR] [ANNO_PATH]
```
# [Script Description](#contents)
## [Script and Sample Code](#contents)
```
└── model_zoo
    ├── README.md                     // descriptions about all the models
    └── yolov3_resnet18
        ├── README.md                 // descriptions about yolov3_resnet18
        ├── scripts
          ├── run_distribute_train.sh // shell script for distributed training on Ascend
          ├── run_standalone_train.sh // shell script for standalone training on Ascend
          └── run_eval.sh             // shell script for evaluation on Ascend
        ├── src
          ├── dataset.py              // creating dataset
          ├── yolov3.py               // yolov3 architecture
          ├── config.py               // parameter configuration
          └── utils.py                // util functions
        ├── train.py                  // training script
        └── eval.py                   // evaluation script
```
## [Script Parameters](#contents)
Major parameters in train.py and config.py are as follows:

```
device_num: Number of devices used, default is 1.
lr: Learning rate, default is 0.001.
epoch_size: Epoch size, default is 10.
batch_size: Batch size, default is 32.
pre_trained: Pretrained Checkpoint file path.
pre_trained_epoch_size: Pretrained epoch size.
mindrecord_dir: Mindrecord directory.
image_dir: Dataset path.
anno_path: Annotation path.
img_shape: Image height and width used as input to the model.
```
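Assuming these parameters are exposed as same-named command-line flags of `train.py` (an assumption; check the script's argument definitions), a standalone run might look like:

```
python train.py --epoch_size 50 --batch_size 32 --mindrecord_dir ./Mindrecord_train --image_dir ./dataset --anno_path ./dataset/train.txt
```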
## [Training Process](#contents)
### Training on Ascend
To train the model, run `train.py` with the dataset `image_dir`, `anno_path` and `mindrecord_dir`. If `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/converting_datasets.html) files from `image_dir` and `anno_path` (the absolute image path is joined from `image_dir` and the relative path in `anno_path`). **Note that if `mindrecord_dir` is not empty, it will use `mindrecord_dir` rather than `image_dir` and `anno_path`.**
- Stand alone mode
```
sh run_standalone_train.sh 0 50 ./Mindrecord_train ./dataset ./dataset/train.txt
```
The input variables are device id, epoch size, mindrecord directory path, dataset directory path and train TXT file path.
You will get the loss value and time of each step as follows:
```
epoch: 145 step: 156, loss is 12.202981
epoch time: 25599.22742843628, per step time: 164.0976117207454
epoch: 146 step: 156, loss is 16.91706
epoch time: 23199.971675872803, per step time: 148.7177671530308
epoch: 147 step: 156, loss is 13.04007
epoch time: 23801.95164680481, per step time: 152.57661312054364
epoch: 148 step: 156, loss is 10.431475
epoch time: 23634.241580963135, per step time: 151.50154859591754
epoch: 149 step: 156, loss is 14.665991
epoch time: 24118.8325881958, per step time: 154.60790120638333
epoch: 150 step: 156, loss is 10.779521
epoch time: 25319.57221031189, per step time: 162.30495006610187
```
Note that these results come from a two-class task (person and face) using our own annotations with coco2017; you can change `num_classes` in `config.py` to train on your own dataset. Support for the 80 classes of coco2017 will be added in the near future.
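For instance, switching to the full coco2017 label set would look roughly like this (a hypothetical excerpt; the actual attribute layout in `src/config.py` may differ):

```python
# Hypothetical excerpt from src/config.py: adjust the class count for your dataset.
num_classes = 80   # 2 for the person/face annotations used above
```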
## [Evaluation Process](#contents)
### Evaluation on Ascend
To evaluate, run `eval.py` with the dataset `image_dir`, `anno_path` (eval txt), `mindrecord_dir` and `ckpt_path`. `ckpt_path` is the path of the [checkpoint](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html) file.
```
sh run_eval.sh 0 yolo.ckpt ./Mindrecord_eval ./dataset ./dataset/eval.txt
```
The input variables are device id, checkpoint path, mindrecord directory path, dataset directory path and the eval TXT file path.
You will get the precision and recall value of each class:
```
class 0 precision is 88.18%, recall is 66.00%
class 1 precision is 85.34%, recall is 79.13%
```
Note that the precision and recall values are results of the two-class task (person and face) using our own annotations with coco2017.
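For reference, the reported numbers follow the standard definitions, sketched below (a reminder of the formulas, not the repository's exact matching logic):

```python
def precision_recall(true_pos, false_pos, false_neg):
    """precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall
```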