- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
        - [Training](#training)
    - [Evaluation Process](#evaluation-process)
        - [Evaluation](#evaluation)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
- [ModelZoo Homepage](#modelzoo-homepage)
# [Model Architecture](#contents)

AlexNet consists of 5 convolutional layers followed by 3 fully connected layers. The multiple convolutional kernels extract features of interest from the images, enabling more accurate classification.
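For orientation, a minimal MindSpore sketch of that layer stack is shown below. It is illustrative only: the channel sizes, padding modes, and the 227x227 input assumption are ours, and the shipped `src/alexnet.py` may differ in detail.

```python
import mindspore.nn as nn

class AlexNetSketch(nn.Cell):
    """Illustrative AlexNet: 5 convolutional layers + 3 fully connected layers."""
    def __init__(self, num_classes=10):
        super(AlexNetSketch, self).__init__()
        # 5 convolutional layers interleaved with ReLU and max pooling
        self.features = nn.SequentialCell([
            nn.Conv2d(3, 96, 11, stride=4, pad_mode='valid'), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, 5, pad_mode='same'), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, 3, pad_mode='same'), nn.ReLU(),
            nn.Conv2d(384, 384, 3, pad_mode='same'), nn.ReLU(),
            nn.Conv2d(384, 256, 3, pad_mode='same'), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
        ])
        self.flatten = nn.Flatten()
        # 3 fully connected layers; 256 * 6 * 6 assumes a 227x227 input image
        self.classifier = nn.SequentialCell([
            nn.Dense(256 * 6 * 6, 4096), nn.ReLU(),
            nn.Dense(4096, 4096), nn.ReLU(),
            nn.Dense(4096, num_classes),
        ])

    def construct(self, x):
        x = self.features(x)
        x = self.flatten(x)
        return self.classifier(x)
```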
# [Dataset](#contents)

Dataset used: [CIFAR-10](<http://www.cs.toronto.edu/~kriz/cifar.html>)

- Dataset size: 175M, 60,000 32*32 color images in 10 classes
    - Train: 146M, 50,000 images
    - Test: 29.3M, 10,000 images
- Data format: binary files
    - Note: Data will be processed in dataset.py (a minimal loading sketch follows this list)
- Download the dataset and extract it; the training and evaluation commands below expect the training batches in `cifar-10-batches-bin` and the verification batches in `cifar-10-verify-bin`.
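As a rough orientation, a MindSpore pipeline over the CIFAR-10 binaries can be built as sketched below. This is only a sketch under our own assumptions: the real `src/dataset.py` additionally applies resize/normalization transforms, and the helper name and arguments here are illustrative.

```python
import mindspore.dataset as ds

def create_cifar10_sketch(data_path, batch_size=32, repeat_size=1):
    """Illustrative CIFAR-10 pipeline; src/dataset.py adds image transforms on top."""
    cifar_ds = ds.Cifar10Dataset(data_path, shuffle=True)  # reads the binary batch files
    cifar_ds = cifar_ds.batch(batch_size, drop_remainder=True)
    cifar_ds = cifar_ds.repeat(repeat_size)
    return cifar_ds
```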
# [Environment Requirements](#contents)

- Hardware (Ascend/GPU)
    - Prepare hardware environment with Ascend or GPU processor.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
    - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
# [Quick Start](#contents)

After installing MindSpore via the official website, you can start training and evaluation as follows:

```bash
# enter script dir, train AlexNet
sh run_standalone_train_ascend.sh [DATA_PATH] [CKPT_SAVE_PATH]
# enter script dir, evaluate AlexNet
sh run_standalone_eval_ascend.sh [DATA_PATH] [CKPT_NAME]
```
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```
├── cv
    ├── alexnet
        ├── README.md                          // descriptions about alexnet
        ├── requirements.txt                   // package needed
        ├── scripts
        │   ├── run_standalone_train_gpu.sh    // train in gpu
        │   ├── run_standalone_train_ascend.sh // train in ascend
        │   ├── run_standalone_eval_gpu.sh     // evaluate in gpu
        │   ├── run_standalone_eval_ascend.sh  // evaluate in ascend
        ├── src
        │   ├── dataset.py                     // creating dataset
        │   ├── alexnet.py                     // alexnet architecture
        │   ├── config.py                      // parameter configuration
        ├── train.py                           // training script
        ├── eval.py                            // evaluation script
```
## [Script Parameters](#contents)
```text
Major parameters in train.py and config.py are as follows:

--data_path: The absolute full path to the train and evaluation datasets.
--epoch_size: Total number of training epochs.
--batch_size: Training batch size.
--image_height: Image height used as input to the model.
--image_width: Image width used as input to the model.
--device_target: Device on which the code will be run. Optional values are "Ascend" and "GPU".
--checkpoint_path: The absolute full path to the checkpoint file saved after training.
```
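As a hypothetical illustration of how train.py could expose the command-line subset of these parameters (the shipped script and its defaults may differ), an argparse sketch:

```python
# Hypothetical sketch of the command-line interface used in the examples below;
# flag names mirror this README, defaults are illustrative.
import argparse

parser = argparse.ArgumentParser(description='MindSpore AlexNet')
parser.add_argument('--data_path', type=str, required=True,
                    help='absolute full path to the dataset directory')
parser.add_argument('--ckpt_path', type=str, default='ckpt',
                    help='directory for saved checkpoints (training) or checkpoint file to load (evaluation)')
parser.add_argument('--device_target', type=str, default='Ascend', choices=['Ascend', 'GPU'],
                    help='device on which the code will run')
args = parser.parse_args()
```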
## [Training Process](#contents)

### Training

- running on Ascend

```
python train.py --data_path cifar-10-batches-bin --ckpt_path ckpt > log.txt 2>&1 &
# or enter script dir, and run the script
sh run_standalone_train_ascend.sh cifar-10-batches-bin ckpt
```

After training, the loss value will be reported as follows:

```
# grep "loss is " log.txt
epoch: 1 step: 1, loss is 2.2791853
...
epoch: 1 step: 1536, loss is 1.9366643
epoch: 1 step: 1537, loss is 1.6983616
epoch: 1 step: 1538, loss is 1.0221305
...
```

The model checkpoint will be saved in the current directory.
- running on GPU

```
python train.py --device_target "GPU" --data_path cifar-10-batches-bin --ckpt_path ckpt > log.txt 2>&1 &
# or enter script dir, and run the script
sh run_standalone_train_gpu.sh cifar-10-batches-bin ckpt
```

After training, the loss value will be reported as follows:

```
# grep "loss is " log.txt
epoch: 1 step: 1, loss is 2.3125906
...
epoch: 30 step: 1560, loss is 0.6687547
epoch: 30 step: 1561, loss is 0.20055409
epoch: 30 step: 1562, loss is 0.103845775
```

The model checkpoint will be saved in the current directory.
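Under the hood, train.py ties the network, loss, optimizer, and checkpoint callback together roughly as sketched below. This is illustrative only: import paths vary slightly across MindSpore versions, and `AlexNetSketch` / `create_cifar10_sketch` are the hypothetical helpers from the earlier sketches, not names from the repository.

```python
# Rough sketch of the training wiring (not the shipped train.py).
import mindspore.nn as nn
from mindspore import context
from mindspore.train import Model
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

network = AlexNetSketch(num_classes=10)            # see the architecture sketch above
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(network.trainable_params(), learning_rate=0.002, momentum=0.9)

ds_train = create_cifar10_sketch("cifar-10-batches-bin", batch_size=32)
ckpt_cb = ModelCheckpoint(prefix="checkpoint_alexnet", directory="ckpt",
                          config=CheckpointConfig(save_checkpoint_steps=1562))

model = Model(network, loss_fn=loss, optimizer=opt)
model.train(30, ds_train, callbacks=[ckpt_cb, LossMonitor()])
```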
## [Evaluation Process](#contents)

### Evaluation
Before running the command below, please check the checkpoint path used for evaluation.

- running on Ascend

```
python eval.py --data_path cifar-10-verify-bin --ckpt_path ckpt/checkpoint_alexnet-1_1562.ckpt > log.txt 2>&1 &
# or enter script dir, and run the script
sh run_standalone_eval_ascend.sh cifar-10-verify-bin ckpt/checkpoint_alexnet-1_1562.ckpt
```

You can view the results in the file "log.txt". The accuracy on the test dataset will be as follows:

```
# grep "Accuracy" log.txt
'Accuracy': 0.8832
```
- running on GPU

```
python eval.py --device_target "GPU" --data_path cifar-10-verify-bin --ckpt_path ckpt/checkpoint_alexnet-30_1562.ckpt > log.txt 2>&1 &
# or enter script dir, and run the script
sh run_standalone_eval_gpu.sh cifar-10-verify-bin ckpt/checkpoint_alexnet-30_1562.ckpt
```

You can view the results in the file "log.txt". The accuracy on the test dataset will be as follows:

```
# grep "Accuracy" log.txt
'Accuracy': 0.88512
```
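The evaluation flow behind eval.py follows roughly the pattern below. It is a sketch, not the shipped script: `AlexNetSketch` and `create_cifar10_sketch` are the illustrative helpers from earlier, and import paths vary across MindSpore versions.

```python
# Rough sketch of checkpoint loading and evaluation (not the shipped eval.py).
import mindspore.nn as nn
from mindspore.train import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net

network = AlexNetSketch(num_classes=10)
load_param_into_net(network, load_checkpoint("ckpt/checkpoint_alexnet-30_1562.ckpt"))

loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
model = Model(network, loss_fn=loss, metrics={"Accuracy": nn.Accuracy()})

ds_eval = create_cifar10_sketch("cifar-10-verify-bin", batch_size=32)
print(model.eval(ds_eval))   # e.g. {'Accuracy': 0.88...}
```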
# [Model Description](#contents)

## [Performance](#contents)

### Evaluation Performance
| Parameters                 | Ascend                                                       | GPU                                                |
| -------------------------- | ------------------------------------------------------------ | -------------------------------------------------- |
| Resource                   | Ascend 910; CPU 2.60GHz, 56 cores; Memory 314G               | NV SXM2 V100-32G                                   |
| Uploaded Date              | 06/09/2020 (month/day/year)                                  | 09/17/2020 (month/day/year)                        |
| MindSpore Version          | 0.5.0-beta                                                   | 0.7.0-beta                                         |
| Dataset                    | CIFAR-10                                                     | CIFAR-10                                           |
| Training Parameters        | epoch=30, steps=1562, batch_size=32, lr=0.002                | epoch=30, steps=1562, batch_size=32, lr=0.002      |
| Optimizer                  | Momentum                                                     | Momentum                                           |
| Loss Function              | Softmax Cross Entropy                                        | Softmax Cross Entropy                              |
| Outputs                    | probability                                                  | probability                                        |
| Loss                       | 0.0016                                                       | 0.01                                               |
| Speed                      | 21 ms/step                                                   | 16.8 ms/step                                       |
| Total time                 | 17 mins                                                      | 14 mins                                            |
| Checkpoint for Fine tuning | 445M (.ckpt file)                                            | 445M (.ckpt file)                                  |
| Scripts                    | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/alexnet | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/alexnet |
# [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the `create_dataset` function.
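For instance, a global dataset seed can be pinned as shown below; the exact seed value used by the repository is set inside `create_dataset`, and the value here is only illustrative.

```python
import mindspore.dataset as ds

ds.config.set_seed(1)  # illustrative value; the repository sets its own seed in create_dataset
```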
# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).