VGG, a very deep convolutional network for large-scale image recognition, was proposed in 2014 and won first place in the object localization task and second place in the image classification task of the ImageNet Large Scale Visual Recognition Challenge 2014 (ILSVRC14).

Paper: Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition[J]. arXiv preprint arXiv:1409.1556, 2014.

The VGG16 network mainly consists of several basic modules (convolution and pooling layers) followed by three consecutive dense layers. The basic modules are built from elementary operations such as 3×3 convolution and 2×2 max pooling.
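To make the module structure concrete, here is a minimal sketch of one such basic module in MindSpore; the class name and channel arguments are illustrative, not the repo's src/vgg.py:

```python
import mindspore.nn as nn

class BasicModule(nn.Cell):
    """Illustrative VGG basic module: repeated 3x3 convolutions, then 2x2 max pooling."""
    def __init__(self, in_channels, out_channels, num_convs):
        super(BasicModule, self).__init__()
        layers = []
        for i in range(num_convs):
            layers.append(nn.Conv2d(in_channels if i == 0 else out_channels,
                                    out_channels, kernel_size=3, pad_mode='same'))
            layers.append(nn.ReLU())
        layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        self.features = nn.SequentialCell(layers)

    def construct(self, x):
        return self.features(x)
```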
Note that you can run the scripts with the dataset mentioned in the original paper or with one widely used in the relevant domain/network architecture. The following sections introduce how to run the scripts using the datasets below.
CIFAR-10
Unzip the CIFAR-10 dataset to any path you want; the folder structure should be as follows:

```
.
├── cifar-10-batches-bin  # train dataset
└── cifar-10-verify-bin   # infer dataset
```
ImageNet2012
Unzip the ImageNet2012 dataset to any path you want; the folder should include train and eval datasets as follows:

```
.
└─ dataset
    ├─ ilsvrc                  # train dataset
    └─ validation_preprocess   # evaluate dataset
```
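As a hedged sketch (not the repo's src/dataset.py), MindSpore ships built-in readers for both layouts above:

```python
import mindspore.dataset as ds

# CIFAR-10 binary files, as unpacked above
cifar_train = ds.Cifar10Dataset("cifar-10-batches-bin")
# ImageNet2012 in folder-per-class layout
imagenet_train = ds.ImageFolderDataset("dataset/ilsvrc")
```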
The mixed precision training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for 'reduce precision'.
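In MindSpore, mixed precision is typically switched on through the Model wrapper's amp_level argument. A minimal sketch, with a placeholder network standing in for VGG16:

```python
import mindspore.nn as nn
from mindspore import Model

net = nn.Dense(10, 2)                                 # placeholder network
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)  # loss used by this repo
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level="O2" casts the network to float16 (batch norm stays float32)
# and applies loss scaling, matching the mixed-precision scheme described above.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2")
```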
After installing MindSpore via the official website, you can start training and evaluation as follows:
```bash
# run training example
python train.py --data_path=[DATA_PATH] --device_id=[DEVICE_ID] > output.train.log 2>&1 &

# run distributed training example
sh run_distribute_train.sh [RANK_TABLE_JSON] [DATA_PATH]

# run evaluation example
python eval.py --data_path=[DATA_PATH] --pre_trained=[PRE_TRAINED] > output.eval.log 2>&1 &
```
For distributed training, an HCCL configuration file in JSON format needs to be created in advance. Please follow the instructions in the link below:
https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools
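For reference, a rank table produced by hccl_tools for a 2-device setup looks roughly like the following; all IDs and IPs here are placeholders:

```json
{
  "version": "1.0",
  "server_count": "1",
  "server_list": [
    {
      "server_id": "10.0.0.1",
      "device": [
        {"device_id": "0", "device_ip": "192.1.27.6", "rank_id": "0"},
        {"device_id": "1", "device_ip": "192.2.27.6", "rank_id": "1"}
      ],
      "host_nic_ip": "reserve"
    }
  ],
  "status": "completed"
}
```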
```bash
# run training example
python train.py --device_target="GPU" --device_id=[DEVICE_ID] --dataset=[DATASET_TYPE] --data_path=[DATA_PATH] > output.train.log 2>&1 &

# run distributed training example
sh run_distribute_train_gpu.sh [DATA_PATH]

# run evaluation example
python eval.py --device_target="GPU" --device_id=[DEVICE_ID] --dataset=[DATASET_TYPE] --data_path=[DATA_PATH] --pre_trained=[PRE_TRAINED] > output.eval.log 2>&1 &
```
```
├── model_zoo
    ├── README.md                             // descriptions about all the models
    ├── vgg16
        ├── README.md                         // descriptions about vgg16
        ├── scripts
        │   ├── run_distribute_train.sh       // shell script for distributed training on Ascend
        │   ├── run_distribute_train_gpu.sh   // shell script for distributed training on GPU
        ├── src
        │   ├── utils
        │   │   ├── logging.py                // logging format setting
        │   │   ├── sampler.py                // create sampler for dataset
        │   │   ├── util.py                   // util functions
        │   │   ├── var_init.py               // network parameter init method
        │   ├── config.py                     // parameter configuration
        │   ├── crossentropy.py               // loss calculation
        │   ├── dataset.py                    // creating dataset
        │   ├── linear_warmup.py              // linear learning rate
        │   ├── warmup_cosine_annealing_lr.py // cosine annealing learning rate
        │   ├── warmup_step_lr.py             // step or multi-step learning rate
        │   ├── vgg.py                        // vgg architecture
        ├── train.py                          // training script
        ├── eval.py                           // evaluation script
```
```
usage: train.py [--device_target TARGET][--data_path DATA_PATH]
                [--dataset DATASET_TYPE][--is_distributed VALUE]
                [--device_id DEVICE_ID][--pre_trained PRE_TRAINED]
                [--ckpt_path CHECKPOINT_PATH][--ckpt_interval INTERVAL_STEP]

parameters/options:
  --device_target    the training backend type, Ascend or GPU, default is Ascend.
  --dataset          the dataset type, cifar10 or imagenet2012.
  --is_distributed   whether to do distributed training, value can be 0 or 1.
  --data_path        the storage path of the dataset.
  --device_id        the device used to train the model.
  --pre_trained      the pretrained checkpoint file path.
  --ckpt_path        the path to save checkpoints.
  --ckpt_interval    the epoch interval for saving checkpoints.
```
```
usage: eval.py [--device_target TARGET][--data_path DATA_PATH]
               [--dataset DATASET_TYPE][--pre_trained PRE_TRAINED]
               [--device_id DEVICE_ID]

parameters/options:
  --device_target    the evaluation backend type, Ascend or GPU, default is Ascend.
  --dataset          the dataset type, cifar10 or imagenet2012.
  --data_path        the storage path of the dataset.
  --device_id        the device used to evaluate the model.
  --pre_trained      the checkpoint file path used to evaluate the model.
```
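For reference, restoring a --pre_trained checkpoint in MindSpore generally follows this pattern; load_checkpoint and load_param_into_net are the real APIs, while the vgg16 factory name and checkpoint path are assumptions:

```python
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from src.vgg import vgg16  # module path from the script description above; factory name assumed

net = vgg16(num_classes=10)
param_dict = load_checkpoint("./0-70_781.ckpt")  # illustrative checkpoint path
load_param_into_net(net, param_dict)
net.set_train(False)  # switch to evaluation mode
```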
Parameters for both training and evaluation can be set in config.py.
"num_classes": 10, # dataset class num
"lr": 0.01, # learning rate
"lr_init": 0.01, # initial learning rate
"lr_max": 0.1, # max learning rate
"lr_epochs": '30,60,90,120', # lr changing based epochs
"lr_scheduler": "step", # learning rate mode
"warmup_epochs": 5, # number of warmup epoch
"batch_size": 64, # batch size of input tensor
"max_epoch": 70, # only valid for taining, which is always 1 for inference
"momentum": 0.9, # momentum
"weight_decay": 5e-4, # weight decay
"loss_scale": 1.0, # loss scale
"label_smooth": 0, # label smooth
"label_smooth_factor": 0, # label smooth factor
"buffer_size": 10, # shuffle buffer size
"image_size": '224,224', # image size
"pad_mode": 'same', # pad mode for conv2d
"padding": 0, # padding value for conv2d
"has_bias": False, # whether has bias in conv2d
"batch_norm": True, # wether has batch_norm in conv2d
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoint
"initialize_mode": "XavierUniform", # conv2d init mode
"has_dropout": True # wether using Dropout layer
"num_classes": 1000, # dataset class num
"lr": 0.01, # learning rate
"lr_init": 0.01, # initial learning rate
"lr_max": 0.1, # max learning rate
"lr_epochs": '30,60,90,120', # lr changing based epochs
"lr_scheduler": "cosine_annealing", # learning rate mode
"warmup_epochs": 0, # number of warmup epoch
"batch_size": 32, # batch size of input tensor
"max_epoch": 150, # only valid for taining, which is always 1 for inference
"momentum": 0.9, # momentum
"weight_decay": 1e-4, # weight decay
"loss_scale": 1024, # loss scale
"label_smooth": 1, # label smooth
"label_smooth_factor": 0.1, # label smooth factor
"buffer_size": 10, # shuffle buffer size
"image_size": '224,224', # image size
"pad_mode": 'pad', # pad mode for conv2d
"padding": 1, # padding value for conv2d
"has_bias": True, # whether has bias in conv2d
"batch_norm": False, # wether has batch_norm in conv2d
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoint
"initialize_mode": "KaimingNormal", # conv2d init mode
"has_dropout": True # wether using Dropout layer
```bash
python train.py --data_path=your_data_path --device_id=6 > out.train.log 2>&1 &
```

The python command above will run in the background; you can view the results through the file out.train.log.

After training, you'll get some checkpoint files in the specified ckpt_path, by default under the ./output directory.

You will get the loss values as follows:

```
# grep "loss is " out.train.log
epoch: 1 step: 781, loss is 2.093086
epoch: 2 step: 781, loss is 1.827582
...
```
```bash
sh run_distribute_train.sh rank_table.json your_data_path
```

The above shell script will run distributed training in the background; you can view the results through the file train_parallel[X]/log.

You will get the loss values as follows:

```
# grep "result: " train_parallel*/log
train_parallel0/log:epoch: 1 step: 97, loss is 1.9060308
train_parallel0/log:epoch: 2 step: 97, loss is 1.6003821
...
train_parallel1/log:epoch: 1 step: 97, loss is 1.7095519
train_parallel1/log:epoch: 2 step: 97, loss is 1.7133579
...
```
About rank_table.json, you can refer to the distributed training tutorial.
Attention: This will bind the processor cores according to `device_num` and the total number of processor cores. If you don't expect to run pretraining with bound processor cores, remove the `taskset` operations in `scripts/run_distribute_train.sh`.
python train.py --device_target="GPU" --dataset="imagenet2012" --is_distributed=0 --data_path=$DATA_PATH > output.train.log 2>&1 &
# distributed training(8p)
bash scripts/run_distribute_train_gpu.sh /path/ImageNet2012/train"
```bash
# when using cifar10 dataset
python eval.py --data_path=your_data_path --dataset="cifar10" --device_target="Ascend" --pre_trained=./*-70-781.ckpt > output.eval.log 2>&1 &

# when using imagenet2012 dataset
python eval.py --data_path=your_data_path --dataset="imagenet2012" --device_target="GPU" --pre_trained=./*-150-5004.ckpt > output.eval.log 2>&1 &
```
The above python commands will run in the background; you can view the results through the file output.eval.log. You will get the accuracy as follows:

```
# when using cifar10 dataset
# grep "result: " output.eval.log
result: {'acc': 0.92}

# when using the imagenet2012 dataset
after allreduce eval: top1_correct=36636, tot=50000, acc=73.27%
after allreduce eval: top5_correct=45582, tot=50000, acc=91.16%
```
| Parameters | VGG16 (Ascend) | VGG16 (GPU) |
|---|---|---|
| Model Version | VGG16 | VGG16 |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G | NV SMX2 V100-32G |
| Uploaded Date | 10/28/2020 | 10/28/2020 |
| MindSpore Version | 1.0.0 | 1.0.0 |
| Dataset | CIFAR-10 | ImageNet2012 |
| Training Parameters | epoch=70, steps=781, batch_size=64, lr=0.1 | epoch=150, steps=40036, batch_size=32, lr=0.1 |
| Optimizer | Momentum | Momentum |
| Loss Function | SoftmaxCrossEntropy | SoftmaxCrossEntropy |
| Outputs | probability | probability |
| Loss | 0.01 | 1.5~2.0 |
| Speed | 1pc: 79 ms/step; 8pcs: 104 ms/step | 1pc: 81 ms/step; 8pcs: 94.4 ms/step |
| Total time | 1pc: 72 mins; 8pcs: 11.8 mins | 8pcs: 19.7 hours |
| Checkpoint for Fine tuning | 1.1G (.ckpt file) | 1.1G (.ckpt file) |
| Scripts | vgg16 | vgg16 |
| Parameters | VGG16 (Ascend) | VGG16 (GPU) |
|---|---|---|
| Model Version | VGG16 | VGG16 |
| Resource | Ascend 910 | GPU |
| Uploaded Date | 10/28/2020 | 10/28/2020 |
| MindSpore Version | 1.0.0 | 1.0.0 |
| Dataset | CIFAR-10, 10,000 images | ImageNet2012, 5000 images |
| batch_size | 64 | 32 |
| Outputs | probability | probability |
| Accuracy | 1pc: 93.4% | 1pc: 73.0% |
In dataset.py, we set the seed inside the create_dataset function. We also use a random seed in train.py.
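If you need stricter reproducibility, MindSpore's global seed can be fixed as follows (the seed value here is illustrative):

```python
from mindspore.common import set_seed

set_seed(1)  # fixes the global random seeds used by MindSpore ops and initializers
```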
Please check the official homepage.