![](https://www.mindspore.cn/static/img/logo.a3e472c9.png)

# Welcome to the Model Zoo for MindSpore
To help developers get the most out of the MindSpore framework and Huawei chips, we will continue to add typical networks and models. If you need a model that is not yet in the zoo, file an issue on [gitee](https://gitee.com/mindspore/mindspore/issues) or post on the [MindSpore forum](https://bbs.huaweicloud.com/forum/forum-1076-1.html), and we will consider it in a timely manner.
- SOTA models using the latest MindSpore APIs
- The best benefits from MindSpore and Huawei chips
- Officially maintained and supported

# Table of Contents

- [Models and Implementations](#models-and-implementations)
  - [Computer Vision](#computer-vision)
    - [Image Classification](#image-classification)
      - [GoogleNet](#googlenet)
      - [ResNet50[benchmark]](#resnet50)
      - [ResNet101](#resnet101)
      - [VGG16](#vgg16)
      - [AlexNet](#alexnet)
      - [LeNet](#lenet)
    - [Object Detection and Segmentation](#object-detection-and-segmentation)
      - [YoloV3](#yolov3)
      - [MobileNetV2](#mobilenetv2)
      - [MobileNetV3](#mobilenetv3)
      - [SSD](#ssd)
  - [Natural Language Processing](#natural-language-processing)
    - [BERT](#bert)
    - [MASS](#mass)

# Announcements

| Date | News |
| ------------ | ------------------------------------------------------------ |
| May 31, 2020 | Support [MindSpore v0.3.0-alpha](https://www.mindspore.cn/news/newschildren?id=215) |
# Models and Implementations

## Computer Vision

### Image Classification

#### [GoogleNet](#table-of-contents)

| Parameters | GoogleNet |
| -------------------------- | ------------------------------------------------------------ |
| Published Year | 2014 |
| Paper | [Going Deeper with Convolutions](https://arxiv.org/abs/1409.4842) |
| Resource | Ascend 910 |
| Features | • Mixed Precision • Multi-device training support on Ascend |
| MindSpore Version | 0.3.0-alpha |
| Dataset | CIFAR-10 |
| Training Parameters | epoch=125, batch_size=128, lr=0.1 |
| Optimizer | Momentum |
| Loss Function | Softmax Cross Entropy |
| Accuracy | 1pc: 93.4%; 8pcs: 92.17% |
| Speed | 79 ms/step |
| Loss | 0.0016 |
| Params (M) | 6.8 |
| Checkpoint for Fine tuning | 43.07M (.ckpt file) |
| Model for inference | 21.50M (.onnx file), 21.60M (.geir file) |
| Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/googlenet |
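The GoogleNet entry above trains with the Momentum optimizer (lr=0.1) and a softmax cross-entropy loss. As a framework-agnostic sketch of what those two pieces compute (pure Python; the momentum coefficient 0.9 is a typical default, not a value taken from the table):

```python
import math

def softmax_cross_entropy(logits, label):
    """Numerically stable softmax cross-entropy for a single sample."""
    m = max(logits)  # subtract the max before exponentiating for stability
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_sum_exp - logits[label]

def momentum_step(weights, grads, velocity, lr=0.1, momentum=0.9):
    """One heavy-ball momentum update: v <- momentum*v + g; w <- w - lr*v."""
    velocity = [momentum * v + g for v, g in zip(velocity, grads)]
    weights = [w - lr * v for w, v in zip(weights, velocity)]
    return weights, velocity
```

In MindSpore itself these roughly correspond to `nn.Momentum` and `nn.SoftmaxCrossEntropyWithLogits`, which the GoogleNet training script wires together with the CIFAR-10 pipeline.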
#### [ResNet50](#table-of-contents)

| Parameters | ResNet50 |
| -------------------------- | -------- |
| Published Year | |
| Paper | |
| Resource | |
| Features | |
| MindSpore Version | |
| Dataset | |
| Training Parameters | |
| Optimizer | |
| Loss Function | |
| Accuracy | |
| Speed | |
| Loss | |
| Params (M) | |
| Checkpoint for Fine tuning | |
| Model for inference | |
| Scripts | |

#### [ResNet101](#table-of-contents)

| Parameters | ResNet101 |
| -------------------------- | --------- |
| Published Year | |
| Paper | |
| Resource | |
| Features | |
| MindSpore Version | |
| Dataset | |
| Training Parameters | |
| Optimizer | |
| Loss Function | |
| Accuracy | |
| Speed | |
| Loss | |
| Params (M) | |
| Checkpoint for Fine tuning | |
| Model for inference | |
| Scripts | |

#### [VGG16](#table-of-contents)

| Parameters | VGG16 |
| -------------------------- | ----- |
| Published Year | |
| Paper | |
| Resource | |
| Features | |
| MindSpore Version | |
| Dataset | |
| Training Parameters | |
| Optimizer | |
| Loss Function | |
| Accuracy | |
| Speed | |
| Loss | |
| Params (M) | |
| Checkpoint for Fine tuning | |
| Model for inference | |
| Scripts | |

#### [AlexNet](#table-of-contents)

| Parameters | AlexNet |
| -------------------------- | ------- |
| Published Year | |
| Paper | |
| Resource | |
| Features | |
| MindSpore Version | |
| Dataset | |
| Training Parameters | |
| Optimizer | |
| Loss Function | |
| Accuracy | |
| Speed | |
| Loss | |
| Params (M) | |
| Checkpoint for Fine tuning | |
| Model for inference | |
| Scripts | |

#### [LeNet](#table-of-contents)

| Parameters | LeNet |
| -------------------------- | ----- |
| Published Year | |
| Paper | |
| Resource | |
| Features | |
| MindSpore Version | |
| Dataset | |
| Training Parameters | |
| Optimizer | |
| Loss Function | |
| Accuracy | |
| Speed | |
| Loss | |
| Params (M) | |
| Checkpoint for Fine tuning | |
| Model for inference | |
| Scripts | |

### Object Detection and Segmentation
#### [YoloV3](#table-of-contents)

| Parameters | YoloV3 |
| -------------------------------- | ------ |
| Published Year | |
| Paper | |
| Resource | |
| Features | |
| MindSpore Version | |
| Dataset | |
| Training Parameters | |
| Optimizer | |
| Loss Function | |
| Mean Average Precision (mAP@0.5) | |
| Speed | |
| Loss | |
| Params (M) | |
| Checkpoint for Fine tuning | |
| Model for inference | |
| Scripts | |

#### [MobileNetV2](#table-of-contents)

| Parameters | MobileNetV2 |
| -------------------------------- | ----------- |
| Published Year | |
| Paper | |
| Resource | |
| Features | |
| MindSpore Version | |
| Dataset | |
| Training Parameters | |
| Optimizer | |
| Loss Function | |
| Mean Average Precision (mAP@0.5) | |
| Speed | |
| Loss | |
| Params (M) | |
| Checkpoint for Fine tuning | |
| Model for inference | |
| Scripts | |

#### [MobileNetV3](#table-of-contents)

| Parameters | MobileNetV3 |
| -------------------------------- | ----------- |
| Published Year | |
| Paper | |
| Resource | |
| Features | |
| MindSpore Version | |
| Dataset | |
| Training Parameters | |
| Optimizer | |
| Loss Function | |
| Mean Average Precision (mAP@0.5) | |
| Speed | |
| Loss | |
| Params (M) | |
| Checkpoint for Fine tuning | |
| Model for inference | |
| Scripts | |

#### [SSD](#table-of-contents)

| Parameters | SSD |
| -------------------------------- | ---- |
| Published Year | |
| Paper | |
| Resource | |
| Features | |
| MindSpore Version | |
| Dataset | |
| Training Parameters | |
| Optimizer | |
| Loss Function | |
| Mean Average Precision (mAP@0.5) | |
| Speed | |
| Loss | |
| Params (M) | |
| Checkpoint for Fine tuning | |
| Model for inference | |
| Scripts | |

## Natural Language Processing

#### [BERT](#table-of-contents)

| Parameters | BERT |
| -------------------------- | ---- |
| Published Year | |
| Paper | |
| Resource | |
| Features | |
| MindSpore Version | |
| Dataset | |
| Training Parameters | |
| Optimizer | |
| Loss Function | |
| GLUE Score | |
| Speed | |
| Loss | |
| Params (M) | |
| Checkpoint for Fine tuning | |
| Model for inference | |
| Scripts | |

#### [MASS](#table-of-contents)

| Parameters | MASS |
| -------------------------- | ---- |
| Published Year | |
| Paper | |
| Resource | |
| Features | |
| MindSpore Version | |
| Dataset | |
| Training Parameters | |
| Optimizer | |
| Loss Function | |
| ROUGE Score | |
| Speed | |
| Loss | |
| Params (M) | |
| Checkpoint for Fine tuning | |
| Model for inference | |
| Scripts | |
# License

[Apache License 2.0](https://github.com/mindspore-ai/mindspore/blob/master/LICENSE)