# Contents

- [GoogleNet Description](#googlenet-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
    - [Mixed Precision](#mixed-precision)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
        - [Training](#training)
        - [Distributed Training](#distributed-training)
    - [Evaluation Process](#evaluation-process)
        - [Evaluation](#evaluation)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
        - [Inference Performance](#inference-performance)
    - [How to use](#how-to-use)
        - [Inference](#inference)
        - [Continue Training on the Pretrained Model](#continue-training-on-the-pretrained-model)
        - [Transfer Learning](#transfer-learning)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [GoogleNet Description](#contents)

GoogleNet, a 22-layer deep network, was proposed in 2014 and won first place in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). GoogleNet, also called Inception v1, is a significant improvement over ZFNet (the 2013 winner) and AlexNet (the 2012 winner), and has a relatively lower error rate than VGGNet. A deeper network typically means a larger number of parameters, which makes it more prone to overfitting, and the increased network size also consumes more computational resources. To tackle these issues, GoogleNet places 1×1 convolutions in the middle of the network to reduce dimensionality and thus computation. Global average pooling is used at the end of the network instead of fully connected layers. Another technique, the inception module, applies convolutions of different sizes to the same input and stacks all the outputs.

[Paper](https://arxiv.org/abs/1409.4842): Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich. "Going deeper with convolutions." *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*. 2015.
# [Model Architecture](#contents)

Specifically, GoogleNet contains numerous inception modules connected in sequence to go deeper. In general, an inception module with dimensionality reduction consists of a **1×1 conv**, a **3×3 conv**, a **5×5 conv**, and a **3×3 max pooling**, which are applied in parallel to the same input, with their outputs stacked together again at the output (see the sketch below).
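For illustration, here is a minimal MindSpore sketch of such an inception block. This is a simplified sketch, not the repository's src/googlenet.py: activation functions and batch normalization are omitted, and the channel counts are placeholders.

```python
import mindspore.nn as nn
from mindspore.ops import operations as P

class InceptionBlock(nn.Cell):
    """Parallel 1x1 / 3x3 / 5x5 convolutions plus 3x3 max pooling,
    concatenated along the channel axis."""
    def __init__(self, in_channels, n1x1, n3x3red, n3x3, n5x5red, n5x5, pool_planes):
        super(InceptionBlock, self).__init__()
        self.b1 = nn.Conv2d(in_channels, n1x1, kernel_size=1)
        self.b2 = nn.SequentialCell([
            nn.Conv2d(in_channels, n3x3red, kernel_size=1),            # 1x1 dimension reduction
            nn.Conv2d(n3x3red, n3x3, kernel_size=3, pad_mode='same'),
        ])
        self.b3 = nn.SequentialCell([
            nn.Conv2d(in_channels, n5x5red, kernel_size=1),            # 1x1 dimension reduction
            nn.Conv2d(n5x5red, n5x5, kernel_size=5, pad_mode='same'),
        ])
        self.b4 = nn.SequentialCell([
            nn.MaxPool2d(kernel_size=3, stride=1, pad_mode='same'),
            nn.Conv2d(in_channels, pool_planes, kernel_size=1),
        ])
        self.concat = P.Concat(axis=1)

    def construct(self, x):
        # every branch sees the same input; outputs are stacked on the channel axis
        return self.concat((self.b1(x), self.b2(x), self.b3(x), self.b4(x)))
```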
# [Dataset](#contents)

Dataset used: [CIFAR-10](<http://www.cs.toronto.edu/~kriz/cifar.html>)

- Dataset size: 175 MB, 60,000 32×32 color images in 10 classes
    - Train: 146 MB, 50,000 images
    - Test: 29 MB, 10,000 images
- Data format: binary files
    - Note: Data will be processed in src/dataset.py (a sketch of the pipeline follows this list)
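A minimal sketch of such a CIFAR-10 pipeline with `mindspore.dataset` is shown below; the repository's actual src/dataset.py may use different augmentation, normalization, and column handling.

```python
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as CV
import mindspore.dataset.transforms.c_transforms as C
from mindspore.common import dtype as mstype

def create_dataset(data_path, repeat_num=1, training=True, batch_size=128):
    """Read the CIFAR-10 binary files and produce batched NCHW float tensors."""
    data_set = ds.Cifar10Dataset(data_path, shuffle=training)
    # resize the 32x32 images to the 224x224 model input, scale pixels to [0, 1], HWC -> CHW
    trans = [CV.Resize((224, 224)), CV.Rescale(1.0 / 255.0, 0.0), CV.HWC2CHW()]
    data_set = data_set.map(operations=trans, input_columns="image")
    data_set = data_set.map(operations=C.TypeCast(mstype.int32), input_columns="label")
    data_set = data_set.batch(batch_size, drop_remainder=True)
    return data_set.repeat(repeat_num)
```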
# [Features](#contents)

## Mixed Precision

The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the training of deep neural networks by using both single-precision and half-precision data formats, while maintaining the network accuracy achieved with single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.

For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and searching for 'reduce precision'.
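For illustration, mixed precision is typically enabled through the `amp_level` argument of MindSpore's `Model`, which is also how the continued-training examples later in this README configure it. A minimal sketch, not the repository's exact train.py:

```python
import mindspore.nn as nn
from mindspore import Model
from src.googlenet import GoogleNet

net = GoogleNet(num_classes=10)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)
# amp_level="O2" casts the network to FP16 while keeping batchnorm in FP32
model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc'}, amp_level="O2")
```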
# [Environment Requirements](#contents)

- Hardware (Ascend/GPU)
    - Prepare a hardware environment with an Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
    - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
# [Quick Start](#contents)

After installing MindSpore via the official website, you can start training and evaluation as follows:

- running on Ascend

```bash
# run training example
python train.py > train.log 2>&1 &

# run distributed training example
sh scripts/run_train.sh rank_table.json

# run evaluation example
python eval.py > eval.log 2>&1 &
# OR
sh run_eval.sh
```

For distributed training, an hccl configuration file in JSON format needs to be created in advance. Please follow the instructions in the link below:
<https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools>
- running on GPU

For running on GPU, please change `device_target` from `Ascend` to `GPU` in the configuration file src/config.py.

```bash
# run training example
export CUDA_VISIBLE_DEVICES=0
python train.py > train.log 2>&1 &

# run distributed training example
sh scripts/run_train_gpu.sh 8 0,1,2,3,4,5,6,7

# run evaluation example
python eval.py --checkpoint_path=[CHECKPOINT_PATH] > eval.log 2>&1 &
# OR
sh run_eval_gpu.sh [CHECKPOINT_PATH]
```
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```
├── model_zoo
    ├── README.md                  // descriptions about all the models
    ├── googlenet
        ├── README.md              // descriptions about googlenet
        ├── scripts
        │   ├── run_train.sh       // shell script for distributed training on Ascend
        │   ├── run_train_gpu.sh   // shell script for distributed training on GPU
        │   ├── run_eval.sh        // shell script for evaluation on Ascend
        │   ├── run_eval_gpu.sh    // shell script for evaluation on GPU
        ├── src
        │   ├── dataset.py         // creating dataset
        │   ├── googlenet.py       // googlenet architecture
        │   ├── config.py          // parameter configuration
        ├── train.py               // training script
        ├── eval.py                // evaluation script
        ├── export.py              // export checkpoint files into air/onnx
```
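As a rough sketch of what export.py does (the actual script's argument handling and file names may differ), a trained checkpoint can be exported with MindSpore's `export` API:

```python
import numpy as np
from mindspore import Tensor, export, load_checkpoint, load_param_into_net
from src.googlenet import GoogleNet

# load the trained weights into the network
net = GoogleNet(num_classes=10)
load_param_into_net(net, load_checkpoint('./train_googlenet_cifar10-125_390.ckpt'))
# trace the network with a dummy input of the model's shape and export to ONNX
input_data = Tensor(np.zeros([1, 3, 224, 224], np.float32))
export(net, input_data, file_name='googlenet', file_format='ONNX')
```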
## [Script Parameters](#contents)

Parameters for both training and evaluation can be set in config.py.

- config for GoogleNet, CIFAR-10 dataset

```python
'pre_trained': 'False'    # whether to train based on the pre-trained model
'num_classes': 10         # the number of classes in the dataset
'lr_init': 0.1            # initial learning rate
'batch_size': 128         # training batch size
'epoch_size': 125         # total training epochs
'momentum': 0.9           # momentum
'weight_decay': 5e-4      # weight decay value
'buffer_size': 10         # buffer size
'image_height': 224       # image height used as input to the model
'image_width': 224        # image width used as input to the model
'data_path': './cifar10'  # absolute full path to the train and evaluation datasets
'device_target': 'Ascend' # device running the program
'device_id': 4            # device ID used to train or evaluate the dataset; ignore it when you use run_train.sh for distributed training
'keep_checkpoint_max': 10 # only keep the last keep_checkpoint_max checkpoints
'checkpoint_path': './train_googlenet_cifar10-125_390.ckpt'  # the absolute full path to the saved checkpoint file
'onnx_filename': 'googlenet.onnx' # file name of the onnx model used in export.py
'geir_filename': 'googlenet.geir' # file name of the geir model used in export.py
```
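For reference, configs in the MindSpore model_zoo are commonly wrapped in an easydict, so the values above can be edited in one place. The following is a sketch under that assumption; the exact layout of src/config.py may differ:

```python
from easydict import EasyDict as edict

# hypothetical layout of src/config.py; keys mirror the list above
cifar_cfg = edict({
    'pre_trained': False,
    'num_classes': 10,
    'lr_init': 0.1,
    'batch_size': 128,
    'epoch_size': 125,
    'momentum': 0.9,
    'weight_decay': 5e-4,
    'image_height': 224,
    'image_width': 224,
    'data_path': './cifar10',
    'device_target': 'Ascend',
    'device_id': 4,
    'keep_checkpoint_max': 10,
    'checkpoint_path': './train_googlenet_cifar10-125_390.ckpt',
})
```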
## [Training Process](#contents)

### Training

- running on Ascend

```bash
python train.py > train.log 2>&1 &
```

The python command above will run in the background; you can view the results through the file `train.log`.

After training, you'll get some checkpoint files under the script folder by default. The loss value will be reported as follows:

```
# grep "loss is " train.log
epoch: 1 step: 390, loss is 1.4842823
epoch: 2 step: 390, loss is 1.0897788
...
```

The model checkpoint will be saved in the current directory.

- running on GPU

```bash
export CUDA_VISIBLE_DEVICES=0
python train.py > train.log 2>&1 &
```

The python command above will run in the background; you can view the results through the file `train.log`.

After training, you'll get some checkpoint files under the folder `./ckpt_0/` by default.
### Distributed Training

- running on Ascend

```bash
sh scripts/run_train.sh rank_table.json
```

The above shell script will run distributed training in the background. You can view the results through the file `train_parallel[X]/log`. The loss value will be reported as follows:

```
# grep "result: " train_parallel*/log
train_parallel0/log:epoch: 1 step: 48, loss is 1.4302931
train_parallel0/log:epoch: 2 step: 48, loss is 1.4023874
...
train_parallel1/log:epoch: 1 step: 48, loss is 1.3458025
train_parallel1/log:epoch: 2 step: 48, loss is 1.3729336
...
...
```

- running on GPU

```bash
sh scripts/run_train_gpu.sh 8 0,1,2,3,4,5,6,7
```

The above shell script will run distributed training in the background. You can view the results through the file `train/train.log`.
## [Evaluation Process](#contents)

### Evaluation

- evaluation on the CIFAR-10 dataset when running on Ascend

Before running the command below, please check the checkpoint path used for evaluation. Please set the checkpoint path to the absolute full path, e.g., "username/googlenet/train_googlenet_cifar10-125_390.ckpt".

```bash
python eval.py > eval.log 2>&1 &
# OR
sh scripts/run_eval.sh
```

The above python command will run in the background. You can view the results through the file "eval.log". The accuracy on the test dataset will be reported as follows:

```
# grep "accuracy: " eval.log
accuracy: {'acc': 0.934}
```

Note that for evaluation after distributed training, please set checkpoint_path to the last saved checkpoint file, such as "username/googlenet/train_parallel0/train_googlenet_cifar10-125_48.ckpt". The accuracy on the test dataset will be reported as follows:

```
# grep "accuracy: " dist.eval.log
accuracy: {'acc': 0.9217}
```

- evaluation on the CIFAR-10 dataset when running on GPU

Before running the command below, please check the checkpoint path used for evaluation. Please set the checkpoint path to the absolute full path, e.g., "username/googlenet/train/ckpt_0/train_googlenet_cifar10-125_390.ckpt".

```bash
python eval.py --checkpoint_path=[CHECKPOINT_PATH] > eval.log 2>&1 &
```

The above python command will run in the background. You can view the results through the file "eval.log". The accuracy on the test dataset will be reported as follows:

```
# grep "accuracy: " eval.log
accuracy: {'acc': 0.930}
```

Or run the evaluation script:

```bash
sh scripts/run_eval_gpu.sh [CHECKPOINT_PATH]
```

The above shell script will run in the background. You can view the results through the file "eval/eval.log". The accuracy on the test dataset will be reported as follows:

```
# grep "accuracy: " eval/eval.log
accuracy: {'acc': 0.930}
```
# [Model Description](#contents)

## [Performance](#contents)

### Evaluation Performance

| Parameters                 | Ascend                                                      | GPU                    |
| -------------------------- | ----------------------------------------------------------- | ---------------------- |
| Model Version              | Inception V1                                                | Inception V1           |
| Resource                   | Ascend 910; CPU 2.60 GHz, 56 cores; memory 314 GB           | NV SMX2 V100-32G       |
| Uploaded Date              | 08/31/2020 (month/day/year)                                 | 08/20/2020 (month/day/year) |
| MindSpore Version          | 0.7.0-alpha                                                 | 0.6.0-alpha            |
| Dataset                    | CIFAR-10                                                    | CIFAR-10               |
| Training Parameters        | epoch=125, steps=390, batch_size=128, lr=0.1                | epoch=125, steps=390, batch_size=128, lr=0.1 |
| Optimizer                  | SGD                                                         | SGD                    |
| Loss Function              | Softmax Cross Entropy                                       | Softmax Cross Entropy  |
| outputs                    | probability                                                 | probability            |
| Loss                       | 0.0016                                                      | 0.0016                 |
| Speed                      | 1pc: 79 ms/step; 8pcs: 82 ms/step                           | 1pc: 150 ms/step; 8pcs: 164 ms/step |
| Total time                 | 1pc: 63.85 mins; 8pcs: 11.28 mins                           | 1pc: 126.87 mins; 8pcs: 21.65 mins |
| Parameters (M)             | 13.0                                                        | 13.0                   |
| Checkpoint for Fine tuning | 43.07 MB (.ckpt file)                                       | 43.07 MB (.ckpt file)  |
| Model for inference        | 21.50 MB (.onnx file), 21.60 MB (.air file)                 |                        |
| Scripts                    | [googlenet script](https://gitee.com/mindspore/mindspore/tree/r0.7/model_zoo/official/cv/googlenet) | [googlenet script](https://gitee.com/mindspore/mindspore/tree/r0.6/model_zoo/official/cv/googlenet) |
### Inference Performance

| Parameters          | Ascend                      | GPU                         |
| ------------------- | --------------------------- | --------------------------- |
| Model Version       | Inception V1                | Inception V1                |
| Resource            | Ascend 910                  | GPU                         |
| Uploaded Date       | 08/31/2020 (month/day/year) | 08/20/2020 (month/day/year) |
| MindSpore Version   | 0.7.0-alpha                 | 0.6.0-alpha                 |
| Dataset             | CIFAR-10, 10,000 images     | CIFAR-10, 10,000 images     |
| batch_size          | 128                         | 128                         |
| outputs             | probability                 | probability                 |
| Accuracy            | 1pc: 93.4%; 8pcs: 92.17%    | 1pc: 93%; 8pcs: 92.89%      |
| Model for inference | 21.50 MB (.onnx file)       |                             |
## [How to use](#contents)

### Inference

If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [link](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html). The following is a simple example of the steps:

- Running on Ascend

```python
# Set context
context.set_context(mode=context.GRAPH_MODE, device_target=cfg.device_target)
context.set_context(device_id=cfg.device_id)

# Load unseen dataset for inference
dataset = dataset.create_dataset(cfg.data_path, 1, False)

# Define model
net = GoogleNet(num_classes=cfg.num_classes)
opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.01,
               cfg.momentum, weight_decay=cfg.weight_decay)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean',
                                        is_grad=False)
model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc'})

# Load pre-trained model
param_dict = load_checkpoint(cfg.checkpoint_path)
load_param_into_net(net, param_dict)
net.set_train(False)

# Make predictions on the unseen dataset
acc = model.eval(dataset)
print("accuracy: ", acc)
```
- Running on GPU:

```python
# Set context
context.set_context(mode=context.GRAPH_MODE, device_target="GPU")

# Load unseen dataset for inference
dataset = dataset.create_dataset(cfg.data_path, 1, False)

# Define model
net = GoogleNet(num_classes=cfg.num_classes)
opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.01,
               cfg.momentum, weight_decay=cfg.weight_decay)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean',
                                        is_grad=False)
model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc'})

# Load pre-trained model
param_dict = load_checkpoint(args_opt.checkpoint_path)
load_param_into_net(net, param_dict)
net.set_train(False)

# Make predictions on the unseen dataset
acc = model.eval(dataset)
print("accuracy: ", acc)
```
### Continue Training on the Pretrained Model

- running on Ascend

```python
# Load dataset
dataset = create_dataset(cfg.data_path, 1)
batch_num = dataset.get_dataset_size()

# Define model
net = GoogleNet(num_classes=cfg.num_classes)
# Continue training if pre_trained is set to True
if cfg.pre_trained:
    param_dict = load_checkpoint(cfg.checkpoint_path)
    load_param_into_net(net, param_dict)
lr = lr_steps(0, lr_max=cfg.lr_init, total_epochs=cfg.epoch_size,
              steps_per_epoch=batch_num)
opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()),
               Tensor(lr), cfg.momentum, weight_decay=cfg.weight_decay)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean', is_grad=False)
model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc'},
              amp_level="O2", keep_batchnorm_fp32=False, loss_scale_manager=None)

# Set callbacks
config_ck = CheckpointConfig(save_checkpoint_steps=batch_num * 5,
                             keep_checkpoint_max=cfg.keep_checkpoint_max)
time_cb = TimeMonitor(data_size=batch_num)
ckpoint_cb = ModelCheckpoint(prefix="train_googlenet_cifar10", directory="./",
                             config=config_ck)
loss_cb = LossMonitor()

# Start training
model.train(cfg.epoch_size, dataset, callbacks=[time_cb, ckpoint_cb, loss_cb])
print("train success")
```
- running on GPU

```python
# Load dataset
dataset = create_dataset(cfg.data_path, 1)
batch_num = dataset.get_dataset_size()

# Define model
net = GoogleNet(num_classes=cfg.num_classes)
# Continue training if pre_trained is set to True
if cfg.pre_trained:
    param_dict = load_checkpoint(cfg.checkpoint_path)
    load_param_into_net(net, param_dict)
lr = lr_steps(0, lr_max=cfg.lr_init, total_epochs=cfg.epoch_size,
              steps_per_epoch=batch_num)
opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()),
               Tensor(lr), cfg.momentum, weight_decay=cfg.weight_decay)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean', is_grad=False)
model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc'},
              amp_level="O2", keep_batchnorm_fp32=True, loss_scale_manager=None)

# Set callbacks
config_ck = CheckpointConfig(save_checkpoint_steps=batch_num * 5,
                             keep_checkpoint_max=cfg.keep_checkpoint_max)
time_cb = TimeMonitor(data_size=batch_num)
ckpoint_cb = ModelCheckpoint(prefix="train_googlenet_cifar10", directory="./ckpt_" + str(get_rank()) + "/",
                             config=config_ck)
loss_cb = LossMonitor()

# Start training
model.train(cfg.epoch_size, dataset, callbacks=[time_cb, ckpoint_cb, loss_cb])
print("train success")
```
### Transfer Learning

To be added.

# [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
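A typical seeding setup looks like the following sketch (the exact seed values and call sites in this repository's dataset.py and train.py may differ):

```python
import random
import numpy as np
import mindspore.dataset as ds

random.seed(1)         # Python-level randomness, e.g. in train.py
np.random.seed(1)      # numpy-based initialization/augmentation
ds.config.set_seed(1)  # fixes shuffle and augmentation order in the data pipeline
```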
# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).