MobileNetV1 is an efficient network for mobile and embedded vision applications. It is based on a streamlined architecture that uses depth-wise separable convolutions to build lightweight deep neural networks.
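As an illustration of that building block, a depth-wise separable convolution factors a standard convolution into a 3x3 depth-wise convolution followed by a 1x1 point-wise convolution. A minimal sketch in MindSpore (the `conv_ds` name and channel arguments are hypothetical, not code from this repository):

```python
import mindspore.nn as nn

def conv_ds(in_channels, out_channels, stride=1):
    """Depth-wise separable convolution: a 3x3 depth-wise conv
    (group == in_channels) followed by a 1x1 point-wise conv,
    each with BatchNorm and ReLU."""
    return nn.SequentialCell([
        # depth-wise: one 3x3 filter per input channel
        nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=stride,
                  group=in_channels, pad_mode="same"),
        nn.BatchNorm2d(in_channels),
        nn.ReLU(),
        # point-wise: 1x1 conv mixes information across channels
        nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(),
    ])
```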
Paper: Howard A. G., Zhu M., Chen B., et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. 2017.
The overall network architecture of MobileNetV1 is shown below:
Note that you can run the scripts with the dataset mentioned in the original paper or with one widely used in the relevant domain/network architecture. The following sections introduce how to run the scripts using the dataset below.
Dataset used: ImageNet2012
└─dataset
├─ilsvrc # train dataset
└─validation_preprocess # evaluate dataset
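Given this layout, a training pipeline would typically load the data with MindSpore's dataset API. A minimal sketch, assuming your MindSpore version provides mindspore.dataset.ImageFolderDataset and using illustrative paths and transform parameters (the repository's actual pipeline is in src/dataset.py):

```python
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C

# illustrative path; point it at the train split of the layout above
data_set = ds.ImageFolderDataset("dataset/ilsvrc",
                                 num_parallel_workers=8, shuffle=True)

trans = [
    # decode, randomly crop, and resize to the 224x224 network input
    C.RandomCropDecodeResize(224, scale=(0.08, 1.0), ratio=(0.75, 1.333)),
    C.RandomHorizontalFlip(prob=0.5),
    # ImageNet mean/std on the 0-255 pixel scale
    C.Normalize(mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
                std=[0.229 * 255, 0.224 * 255, 0.225 * 255]),
    C.HWC2CHW(),
]
data_set = data_set.map(operations=trans, input_columns="image",
                        num_parallel_workers=8)
data_set = data_set.batch(256, drop_remainder=True)
```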
The mixed precision training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for 'reduce precision'.
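In MindSpore, mixed precision is commonly enabled through the amp_level argument of Model. A minimal sketch, where net, loss, and opt are placeholders rather than this repository's actual objects:

```python
from mindspore import Model
import mindspore.nn as nn

# placeholders: in practice these come from the repository's src/ modules
net = nn.Dense(1024, 1001)            # stand-in for the MobileNetV1 backbone
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)

# amp_level="O2" keeps BatchNorm in FP32 and casts the rest of the
# network to FP16, one common auto-mixed-precision configuration
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2")
```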
├── MobileNetV1
├── README.md # descriptions about MobileNetV1
├── scripts
│ ├──run_distribute_train.sh # shell script for distribute train
│ ├──run_standalone_train.sh # shell script for standalone train
│ ├──run_eval.sh # shell script for evaluation
├── src
│ ├──config.py # parameter configuration
│ ├──dataset.py # creating dataset
│ ├──lr_generator.py # learning rate config
│ ├──mobilenet_v1_fpn.py # MobileNetV1 architecture
│ ├──CrossEntropySmooth.py # loss function with label smoothing (see the sketch after this tree)
├── train.py # training script
├── eval.py # evaluation script
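The label-smoothing loss in src/CrossEntropySmooth.py can be sketched as follows (a minimal reconstruction assuming sparse integer labels converted to smoothed one-hot vectors; the smooth_factor and num_classes defaults are assumptions):

```python
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.common import dtype as mstype
from mindspore.ops import functional as F
from mindspore.ops import operations as P

class CrossEntropySmooth(nn.Cell):
    """Cross entropy with label smoothing: spreads smooth_factor of the
    probability mass uniformly over the non-target classes."""
    def __init__(self, smooth_factor=0.1, num_classes=1001):
        super(CrossEntropySmooth, self).__init__()
        self.onehot = P.OneHot()
        self.on_value = Tensor(1.0 - smooth_factor, mstype.float32)
        self.off_value = Tensor(smooth_factor / (num_classes - 1), mstype.float32)
        self.ce = nn.SoftmaxCrossEntropyWithLogits(reduction="mean")

    def construct(self, logits, label):
        one_hot_label = self.onehot(label, F.shape(logits)[1],
                                    self.on_value, self.off_value)
        return self.ce(logits, one_hot_label)
```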
You can start training using python or shell scripts. The usage of shell scripts is as follows:
For distributed training, an hccl configuration file in JSON format needs to be created in advance.
Please follow the instructions in the link hccn_tools.
# training example
python:
Ascend: python train.py --platform Ascend --dataset_path [TRAIN_DATASET_PATH]
shell:
Ascend: sh run_distribute_train.sh [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
Training results will be stored in the example path. Checkpoints will be stored at ckpt_* by default, and the training log will be written to ./train_parallel*/log on the Ascend platform.
epoch: 89 step: 1251, loss is 2.1829057
Epoch time: 146826.802, per step time: 117.368
epoch: 90 step: 1251, loss is 2.3499017
Epoch time: 150950.623, per step time: 120.664
You can start evaluation using python or shell scripts. The usage of shell scripts is as follows:
# eval example
python:
Ascend: python eval.py --dataset [cifar10|imagenet2012] --dataset_path [VAL_DATASET_PATH] --pretrain_ckpt [CHECKPOINT_PATH]
shell:
Ascend: sh run_eval.sh [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
The checkpoint can be produced during the training process.
Inference results will be stored in the example path; you can find results like the following in eval/log.
result: {'top_5_accuracy': 0.9010016025641026, 'top_1_accuracy': 0.7128004807692307} ckpt=./train_parallel0/ckpt_0/mobilenetv1-90_1251.ckpt
| Parameters | MobileNetV1 |
|---|---|
| Model Version | V1 |
| Resource | Ascend 910 × 4; CPU 2.60GHz, 192 cores; memory 755 GB |
| Uploaded Date | 11/28/2020 (month/day/year) |
| MindSpore Version | 1.0.0 |
| Dataset | ImageNet2012 |
| Training Parameters | src/config.py |
| Optimizer | Momentum |
| Loss Function | SoftmaxCrossEntropy |
| Outputs | probability |
| Loss | 2.3499017 |
| Accuracy | ACC1[71.28%] |
| Total time | 225 min |
| Params (M) | 3.3 |
| Checkpoint for Fine tuning | 27.3 MB |
| Scripts | Link |
In train.py, we set the seed which is used by numpy.random, mindspore.common.Initializer, mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution.
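A minimal sketch of that setup (the seed value 1 is an assumption for illustration, not a value stated here):

```python
from mindspore.common import set_seed

# fix global randomness for numpy.random, Initializer, random ops, and
# probability distributions before building the network and dataset
set_seed(1)
```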
Please check the official homepage.