- Hardware(Ascend/GPU)
- Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
python eval.py > eval.log 2>&1 &
OR
Ascend: sh run_eval.sh
OR
GPU: sh run_eval_gpu.sh
```
├── googlenet
    ├── README.md              // descriptions about googlenet
    ├── scripts
    │   ├── run_train.sh       // shell script for distributed training on Ascend
    │   ├── run_train_gpu.sh   // shell script for distributed training on GPU
    │   ├── run_eval.sh        // shell script for evaluation on Ascend
    │   ├── run_eval_gpu.sh    // shell script for evaluation on GPU
    ├── src
    │   ├── dataset.py         // creating dataset
    │   ├── googlenet.py       // googlenet architecture
## [Script Parameters](#contents)
```python
Major parameters in train.py and config.py are:
--data_path: The absolute full path to the train and evaluation datasets.
--epoch_size: Total training epochs.
--batch_size: Training batch size.
--lr_init: Initial learning rate.
--num_classes: The number of classes in the training set.
--weight_decay: Weight decay value.
--image_height: Image height used as input to the model.
--image_width: Image width used as input to the model.
--pre_trained: Whether to train from scratch or from a pre-trained model.
               Optional values are True, False.
--device_target: Device where the code will be implemented. Optional values
are "Ascend", "GPU".
--device_id: Device ID used to train or evaluate the dataset. Ignore it
when you use run_train.sh for distributed training.
--checkpoint_path: The absolute full path to the checkpoint file saved
after training.
--onnx_filename: File name of the onnx model used in export.py.
--air_filename: File name of the air model used in export.py.
```
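For reference, `--onnx_filename` and `--air_filename` name the files written by export.py. The following is a minimal sketch of what such an export script typically looks like, assuming the MindSpore 1.x serialization API and the `GoogleNet` class from `src/`; the checkpoint path is illustrative and this is not the verbatim script:

```python
# Hedged sketch of export.py; not the verbatim script.
import numpy as np
from mindspore import Tensor
from mindspore.train.serialization import export, load_checkpoint, load_param_into_net
from src.googlenet import GoogleNet  # assumption: class name in src/googlenet.py

net = GoogleNet(num_classes=10)
load_param_into_net(net, load_checkpoint('./train_googlenet_cifar10-125_390.ckpt'))

# Dummy input matching the configured 224x224 input size (NCHW layout).
dummy = Tensor(np.zeros([1, 3, 224, 224], np.float32))
export(net, dummy, file_name='googlenet.onnx', file_format='ONNX')
export(net, dummy, file_name='googlenet.air', file_format='AIR')
```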
Parameters for both training and evaluation can be set in config.py.
- config for GoogleNet, CIFAR-10 dataset
```python
'pre_trained': False # whether training based on the pre-trained model
'num_classes': 10 # the number of classes in the dataset
'lr_init': 0.1 # initial learning rate
'batch_size': 128 # training batch size
'epoch_size': 125 # total training epochs
'momentum': 0.9 # momentum
'weight_decay': 5e-4 # weight decay value
'buffer_size': 10 # buffer size
'image_height': 224 # image height used as input to the model
'image_width': 224 # image width used as input to the model
'data_path': './cifar10' # absolute full path to the train and evaluation datasets
'device_target': 'Ascend' # device running the program
'device_id': 4 # device ID used to train or evaluate the dataset. Ignore it when you use run_train.sh for distributed training
'keep_checkpoint_max': 10 # only keep the last keep_checkpoint_max checkpoint
'checkpoint_path': './train_googlenet_cifar10-125_390.ckpt' # the absolute full path to save the checkpoint file
'onnx_filename': 'googlenet.onnx' # file name of the onnx model used in export.py
'air_filename': 'googlenet.air' # file name of the air model used in export.py
```
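A config.py holding these values is usually just a dictionary-like object. The sketch below assumes the `easydict` package and the object name `cifar_cfg`, both common in MindSpore model_zoo scripts but not confirmed here:

```python
# Hedged sketch of config.py; key names mirror the list above.
from easydict import EasyDict as edict

cifar_cfg = edict({
    'pre_trained': False,
    'num_classes': 10,
    'lr_init': 0.1,
    'batch_size': 128,
    'epoch_size': 125,
    'momentum': 0.9,
    'weight_decay': 5e-4,
    'buffer_size': 10,
    'image_height': 224,
    'image_width': 224,
    'data_path': './cifar10',
    'device_target': 'Ascend',
    'device_id': 4,
    'keep_checkpoint_max': 10,
    'checkpoint_path': './train_googlenet_cifar10-125_390.ckpt',
    'onnx_filename': 'googlenet.onnx',
    'air_filename': 'googlenet.air',
})
```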
## [Training Process](#contents)
### Training
- running on Ascend
```
python train.py > train.log 2>&1 &
```
The Python command above runs in the background; you can view the results in the file `train.log`.
After training, you'll get some checkpoint files under the script folder by default. The loss values will be printed as follows:
```
# grep "loss is " train.log
epoch: 1 step: 390, loss is 1.4842823
epoch: 2 step: 390, loss is 1.0897788
...
```
The model checkpoint will be saved in the current directory.
- running on GPU
```
export CUDA_VISIBLE_DEVICES=0
python train.py > train.log 2>&1 &
```
The Python command above runs in the background; you can view the results in the file `train.log`.
After training, you'll get some checkpoint files under the folder `./ckpt_0/` by default.
```
# grep "loss is " train.log
epoch: 1 step: 390, loss is 1.4842823
epoch: 2 step: 390, loss is 1.0897788
...
```
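Both single-device commands above go through the same training logic. For context, the sketch below outlines the usual MindSpore pattern behind train.py (build the dataset and network, then call `Model.train` with checkpoint callbacks); helper names such as `create_dataset` and `cifar_cfg` are assumptions based on the `src/` layout, not the verbatim script:

```python
# Hedged outline of train.py (MindSpore 1.x API).
from mindspore import context, nn
from mindspore.train import Model
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor
from src.config import cifar_cfg as cfg       # assumption: config object name
from src.dataset import create_dataset        # assumption: helper name
from src.googlenet import GoogleNet           # assumption: class name

context.set_context(mode=context.GRAPH_MODE, device_target=cfg.device_target)

train_ds = create_dataset(cfg.data_path, batch_size=cfg.batch_size)
net = GoogleNet(num_classes=cfg.num_classes)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
opt = nn.Momentum(net.trainable_params(), cfg.lr_init, cfg.momentum,
                  weight_decay=cfg.weight_decay)
model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc'})

# Keep at most keep_checkpoint_max checkpoints; LossMonitor prints the
# "epoch: ... step: ..., loss is ..." lines grepped above.
ckpt_cb = ModelCheckpoint(prefix='train_googlenet_cifar10',
                          config=CheckpointConfig(keep_checkpoint_max=cfg.keep_checkpoint_max))
model.train(cfg.epoch_size, train_ds, callbacks=[ckpt_cb, LossMonitor()])
```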
### Distributed Training
- running on Ascend
```
sh scripts/run_train.sh rank_table.json
```
The above shell script will run distributed training in the background. You can view the results through the file `train_parallel[X]/log`. The loss values will be printed as follows:
```
# grep "result: " train_parallel*/log
train_parallel0/log:epoch: 1 step: 48, loss is 1.4302931
train_parallel0/log:epoch: 2 step: 48, loss is 1.4023874
...
train_parallel1/log:epoch: 1 step: 48, loss is 1.3458025
train_parallel1/log:epoch: 2 step: 48, loss is 1.3729336
...
...
```
- running on GPU
```
sh scripts/run_train_gpu.sh 8 0,1,2,3,4,5,6,7
```
The above shell script will run distributed training in the background. You can view the results through the file `train/train.log`.
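Both launchers rely on train.py switching MindSpore into data-parallel mode before building the network. A minimal sketch of that initialization, assuming the MindSpore 1.x communication API, looks like this:

```python
# Hedged sketch of the data-parallel setup inside train.py.
from mindspore import context
from mindspore.context import ParallelMode
from mindspore.communication.management import init, get_rank, get_group_size

init()  # HCCL on Ascend (driven by the rank table), NCCL under mpirun on GPU
context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL,
                                  gradients_mean=True,
                                  device_num=get_group_size())
rank_id = get_rank()  # typically used to shard the dataset per device
```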
## [Evaluation Process](#contents)
### Evaluation
- evaluation on CIFAR-10 dataset when running on Ascend
Before running the command below, please check the checkpoint path used for evaluation and set it to the absolute full path, e.g., "username/googlenet/train_googlenet_cifar10-125_390.ckpt".
```
python eval.py > eval.log 2>&1 &
OR
sh scripts/run_eval.sh
```
The above Python command will run in the background. You can view the results through the file `eval.log`. The accuracy of the test dataset will be as follows:
```
# grep "accuracy: " eval.log
accuracy: {'acc': 0.934}
```
Note that for evaluation after distributed training, please set `checkpoint_path` to the last saved checkpoint file, such as "username/googlenet/train_parallel0/train_googlenet_cifar10-125_48.ckpt". The accuracy of the test dataset will be as follows:
```
# grep "accuracy: " dist.eval.log
accuracy: {'acc': 0.9217}
```
- evaluation on CIFAR-10 dataset when running on GPU
Before running the command below, please check the checkpoint path used for evaluation and set it to the absolute full path, e.g., "username/googlenet/train/ckpt_0/train_googlenet_cifar10-125_390.ckpt".
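```
python eval.py > eval.log 2>&1 &
```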
The above Python command will run in the background. You can view the results through the file `eval.log`. The accuracy of the test dataset will be as follows:
```
# grep "accuracy: " eval.log
accuracy: {'acc': 0.930}
```
OR
```
sh scripts/run_eval_gpu.sh [CHECKPOINT_PATH]
```
The above shell script will run in the background. You can view the results through the file `eval/eval.log`. The accuracy of the test dataset will be as follows:
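```
# grep "accuracy: " eval/eval.log
accuracy: {'acc': 0.930}
```
For context, eval.py in both cases follows the standard MindSpore evaluation pattern: restore the checkpoint into the network and call `Model.eval`. The sketch below assumes the helper names from `src/` and a `cifar_cfg` config object; it is not the verbatim script:

```python
# Hedged outline of eval.py (MindSpore 1.x API).
from mindspore import context, nn
from mindspore.train import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from src.config import cifar_cfg as cfg       # assumption: config object name
from src.dataset import create_dataset        # assumption: helper name
from src.googlenet import GoogleNet           # assumption: class name

context.set_context(mode=context.GRAPH_MODE, device_target=cfg.device_target)

net = GoogleNet(num_classes=cfg.num_classes)
load_param_into_net(net, load_checkpoint(cfg.checkpoint_path))

loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
model = Model(net, loss_fn=loss, metrics={'acc'})
eval_ds = create_dataset(cfg.data_path, batch_size=cfg.batch_size, training=False)  # assumption: flag name
print("accuracy:", model.eval(eval_ds))  # the line matched by the grep commands above
```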
If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html). Below is a simple example of the steps to follow: