

# Release 0.3.0-alpha

## Major Features and Improvements

### TODO

# Release 0.2.0-alpha

## Major Features and Improvements

### Ascend 910 Training and Inference Framework
* New models
    * MobileNetV2: Inverted Residuals and Linear Bottlenecks.
    * ResNet101: Deep Residual Learning for Image Recognition.
* Frontend and User Interface
    * Support for all Python comparison operators.
    * Support for the math operators `**`, `//`, and `%`, and for other Python operators such as `and`, `or`, `not`, `is`, `is not`, `in`, and `not in`.
    * Support for gradients of functions with variable arguments.
    * Support for tensor indexing assignment for certain indexing types.
    * Support for dynamic learning rate.
* User interface change log
    * DepthwiseConv2dNative, DepthwiseConv2dNativeBackpropFilter, DepthwiseConv2dNativeBackpropInput ([!424](https://gitee.com/mindspore/mindspore/pulls/424))
    * ReLU6, ReLU6Grad ([!224](https://gitee.com/mindspore/mindspore/pulls/224))
    * GeneratorDataset ([!183](https://gitee.com/mindspore/mindspore/pulls/183))
    * VOCDataset ([!477](https://gitee.com/mindspore/mindspore/pulls/477))
    * MindDataset, PKSampler ([!514](https://gitee.com/mindspore/mindspore/pulls/514))
    * map ([!506](https://gitee.com/mindspore/mindspore/pulls/506))
    * Conv ([!226](https://gitee.com/mindspore/mindspore/pulls/226))
    * Adam ([!253](https://gitee.com/mindspore/mindspore/pulls/253))
    * _set_fusion_strategy_by_idx, _set_fusion_strategy_by_size ([!189](https://gitee.com/mindspore/mindspore/pulls/189))
    * CheckpointConfig ([!122](https://gitee.com/mindspore/mindspore/pulls/122))
    * Constant ([!54](https://gitee.com/mindspore/mindspore/pulls/54))
* Executor and Performance Optimization
    * Support parallel execution of data prefetching and forward/backward computing.
    * Support parallel execution of gradient aggregation and forward/backward computing in distributed training scenarios.
    * Support operator fusion optimization.
    * Optimize the compilation process and improve performance.
* Data processing, augmentation, and save format
    * Support multi-process execution of GeneratorDataset/PyFunc for high performance.
    * Support variable batch size.
    * Support new dataset operators, such as filter, skip, take, and TextLineDataset.
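The frontend items above extend the set of Python operators usable in network code. As a plain-Python illustration of the operator semantics involved (`**`, `//`, `%`, comparisons, `not in`) — `clipped_power` is a hypothetical helper written in ordinary Python, not MindSpore API:

```python
# Plain-Python illustration of the operator semantics the 0.2.0-alpha
# frontend now supports inside network code (e.g. a Cell's construct
# method). This function is ordinary Python, not MindSpore API.

def clipped_power(x, base, limit):
    """Raise base to the power x, then fold the result into [0, limit)."""
    y = base ** x          # ** : exponentiation
    q = y // limit         # // : floor division
    r = y % limit          # %  : modulo
    # Comparison and logical operators (==, <, and/or/not, in/not in)
    # are usable in the same way.
    assert y == q * limit + r
    if r not in (0, limit) and r < limit:
        return r
    return 0

print(clipped_power(3, 2, 5))  # 2**3 = 8; 8 % 5 = 3
```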
### Other Hardware Support

* GPU platform
    * Use a dynamic memory pool by default on GPU.
    * Support parallel execution of computation and communication.
    * Support continuous address allocation by the memory pool.
* CPU platform
    * Support for the Windows 10 OS.
## Bugfixes

* Models
    * Fix mixed-precision bug for the VGG16 model ([!629](https://gitee.com/mindspore/mindspore/pulls/629)).
* Python API
    * Fix ControlDepend operator bugs on CPU and GPU ([!396](https://gitee.com/mindspore/mindspore/pulls/396)).
    * Fix ArgMinWithValue operator bugs ([!338](https://gitee.com/mindspore/mindspore/pulls/338)).
    * Fix Dense operator bugs in PyNative mode ([!276](https://gitee.com/mindspore/mindspore/pulls/276)).
    * Fix MatMul operator bugs in PyNative mode ([!288](https://gitee.com/mindspore/mindspore/pulls/288)).
* Executor
    * Fix operator selection bugs and make it general ([!300](https://gitee.com/mindspore/mindspore/pulls/300)).
    * Fix memory reuse bug for the GetNext op ([!291](https://gitee.com/mindspore/mindspore/pulls/291)).
* GPU platform
    * Fix memory allocation in multi-graph scenarios ([!444](https://gitee.com/mindspore/mindspore/pulls/444)).
    * Fix bias_add_grad under fp16 precision ([!598](https://gitee.com/mindspore/mindspore/pulls/598)).
    * Fix support for fp16 kernels on the NVIDIA 1080 Ti ([!571](https://gitee.com/mindspore/mindspore/pulls/571)).
    * Fix parsing of tuple-type parameters ([!316](https://gitee.com/mindspore/mindspore/pulls/316)).
* Data processing
    * Fix `TypeError: can't pickle mindspore._c_dataengine.DEPipeline objects` ([!434](https://gitee.com/mindspore/mindspore/pulls/434)).
    * Add TFRecord file verification ([!406](https://gitee.com/mindspore/mindspore/pulls/406)).
## Contributors

Thanks goes to these wonderful people:

Alexey_Shevlyakov, Cathy, Chong, Hoai, Jonathan, Junhan, JunhanHu, Peilin, SanjayChan, StrawNoBerry, VectorSL, Wei, WeibiaoYu, Xiaoda, Yanjun, YuJianfeng, ZPaC, Zhang, ZhangQinghua, ZiruiWu, amongo, anthonyaje, anzhengqi, biffex, caifubi, candanzg, caojian05, casgj, cathwong, ch-l, chang, changzherui, chenfei, chengang, chenhaozhe, chenjianping, chentingting, chenzomi, chujinjin, dengwentao, dinghao, fanglei, fary86, flywind, gaojing, geekun, gengdongjie, ghzl, gong, gongchen, gukecai, guohongzilong, guozhijian, gziyan, h.farahat, hesham, huangdongrun, huanghui, jiangzhiwen, jinyaohui, jjfeing, jojobugfree, jonathan_yan, jonyguo, jzw, kingfo, kisnwang, laiyongqiang, leonwanghui, lianliguang, lichen, lichenever, limingqi107, liubuyu, liuxiao, liyong, liyong126, lizhenyu, lupengcheng, lvliang, maoweiyong, ms_yan, mxm, ougongchang, panfengfeng, panyifeng, pengyanjun, penn, qianlong, seatea, simson, suteng, thlinh, vlne-v1, wangchengke, wanghua, wangnan39, wangqiuliang, wenchunjiang, wenkai, wukesong, xiefangqi, xulei, yanghaitao, yanghaoran, yangjie159, yangzhenzhang, yankai10, yanzhenxiang2020, yao_yf, yoonlee666, zhangbuxue, zhangz0911gm, zhangzheng, zhaojichen, zhaoting, zhaozhenlong, zhongligeng, zhoufeng, zhousiyi, zjun, zyli2020, yuhuijun, limingqi107, lizhenyu, chenweifeng.

Contributions of any kind are welcome!
# Release 0.1.0-alpha

## Main Features

### Ascend 910 Training and Inference Framework

* Recommended OS: Ubuntu 16.04 (or later), EulerOS 2.5, or EulerOS 2.8
* Python version: 3.7.5
* Preset models
    * ResNet-50: widely used residual-structure-based convolutional neural network (CNN) for image classification.
    * AlexNet: classic CNN for image classification, which achieved historic results in ImageNet LSVRC-2012.
    * LeNet: classic CNN for image classification, proposed by Yann LeCun.
    * VGG16: classic CNN for image classification, proposed by the Oxford Visual Geometry Group.
    * YoloV3: real-time object detection network.
    * NEZHA: BERT-based Chinese pre-training network produced by Huawei Noah's Ark Lab.
* Execution modes
    * Graph mode: provides graph optimization methods such as memory overcommitment, IR fusion, and buffer fusion to achieve optimal execution performance.
    * PyNative mode: single-step execution mode that facilitates debugging.
* Debugging capability and methods
    * Save CheckPoint and Summary data during training.
    * Support asynchronous printing.
    * Dump the computing data.
    * Support profiling analysis of execution performance.
* Distributed execution
    * Support the AllReduce, AllGather, and Broadcast collective communication operations.
    * AllReduce data parallelism: each device obtains different training data, which accelerates the overall training process.
    * Collective-communication-based layerwise parallelism: models are partitioned and allocated to different devices, solving the problem of insufficient memory for large models and improving training speed.
    * Automatic parallel mode: the better of the data- and model-parallel modes can be predicted based on the cost model. This mode is recommended for ResNet-series networks.
* Automatic differentiation
    * Implement automatic differentiation based on source-to-source transformation.
    * Support distributed scenarios and automatic insertion of reverse communication operators.
* Data processing, augmentation, and save format
    * Load common datasets such as ImageNet, MNIST, CIFAR-10, and CIFAR-100.
    * Support common data loading pipeline operations such as shuffle, repeat, batch, map, and sampler.
    * Provide basic operator libraries covering common CV scenarios.
    * Support user-defined Python data augmentation operators through the PyFunc mechanism.
    * Support access to user-defined datasets through the GeneratorDataset mechanism.
    * Provide the MindSpore data format, with data aggregation and storage, random access to examples, data partitioning, efficient parallel reads, user-defined indexes, and dataset search.
    * Convert user datasets to the MindSpore data format.
    * After data processing and augmentation, provide training applications in feed and graph modes.
* FP32/FP16 mixed-precision computation, supporting automatic and manual configuration.
* Provide common operators such as nn, math, and array, which can be customized.
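The AllReduce data-parallel item above can be sketched in plain Python: each simulated "device" computes gradients on its own data shard, and an element-wise mean plays the role of the all-reduce so every device applies the same update. All names here (`local_gradients`, `all_reduce_mean`) are illustrative, not MindSpore API:

```python
# Minimal sketch of AllReduce-style data parallelism: each "device"
# computes gradients on its own shard of data, and an all-reduce
# averages the gradients so every device applies the same update.
# Pure-Python simulation -- no MindSpore or real communication involved.

def local_gradients(weights, shard):
    """Gradient of mean squared error for y = w*x on one data shard."""
    grads = [0.0 for _ in weights]
    for x, y in shard:
        for i, w in enumerate(weights):
            grads[i] += 2 * (w * x - y) * x / len(shard)
    return grads

def all_reduce_mean(per_device_grads):
    """Average gradients element-wise across devices (the AllReduce step)."""
    n = len(per_device_grads)
    return [sum(g[i] for g in per_device_grads) / n
            for i in range(len(per_device_grads[0]))]

# Two devices, each with a different data shard, identical weights.
weights = [0.5]
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
grads = all_reduce_mean([local_gradients(weights, s) for s in shards])
weights = [w - 0.1 * g for w, g in zip(weights, grads)]
```

Because all devices see the same averaged gradient, their weight copies stay in sync after each step, which is what makes the scheme equivalent to training on the combined batch.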
### Inference Deployment

* Deploy models in MindSpore format on the Ascend 310 platform for inference.
* Save models in ONNX format.
* Support saving models in LITE format and running them on the lightweight inference framework.
    * Recommended OS: Android 4.3 or later
    * Supported network type: LeNet
    * Provide generalization operators generated by TVM, as well as operators generated after tuning for specific networks.
### Other Hardware Support

* GPU platform training
    * Recommended OS: Ubuntu 16.04
    * CUDA version: 9.2 or 10.1
    * cuDNN version: 7.6 or later
    * Python version: 3.7.5
    * NCCL version: 2.4.8-1
    * OpenMPI version: 3.1.5
    * Supported models: AlexNet, LeNet, and LSTM
    * Supported datasets: MNIST and CIFAR-10
    * Support data parallelism.
* CPU platform training
    * Recommended OS: Ubuntu 16.04
    * Python version: 3.7.5
    * Supported model: LeNet
    * Supported dataset: MNIST
    * Provide the stand-alone (single-device) version only.
## Peripherals and Tools

* [MindSpore Official Website](https://www.mindspore.cn/)
* [MindInsight Visualization Debugging and Optimization](https://gitee.com/mindspore/mindinsight)
* [MindArmour Model Security Hardening Package](https://gitee.com/mindspore/mindarmour)
* [GraphEngine Computational Graph Engine](https://gitee.com/mindspore/graphengine)