# MindSpore 1.1.0 Release Notes
## MindSpore
### Major Features and Improvements
#### New Models
- [STABLE] GNMT v2: similar to the model described in Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, mainly used for corpus translation, on the WMT English-German dataset. (Ascend)
- [STABLE] MaskRCNN: a conceptually simple, flexible, and general framework for object instance segmentation, on the COCO 2017 dataset. (Ascend)
- [STABLE] YOLOv4: a state-of-the-art detector that is faster and more accurate than all available alternative detectors, on the MS COCO dataset. (Ascend)
- [STABLE] Openpose: a bottom-up human pose estimation algorithm using Part Affinity Fields, on the COCO 2017 dataset. (Ascend)
- [STABLE] CNN-CTC: three major contributions to address scene text recognition (STR), on the MJSynth and SynthText datasets. (Ascend)
- [STABLE] CenterFace: a practical anchor-free face detection and alignment method for edge devices, on the WiderFace dataset. (Ascend)
- [STABLE] ShuffleNetV2: a much faster and more accurate network than previous networks, on the ImageNet 2012 dataset. (GPU)
- [STABLE] EfficientNet-B0: a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient, on the ImageNet 2012 dataset. (GPU)
- [BETA] SSD-GhostNet: based on the Ghost module structure, which generates more features from cheap operations, on the Oxford-IIIT Pet dataset. (Ascend)
- [BETA] DS-CNN: a depthwise separable convolutional neural network, on the Speech Commands dataset. (Ascend)
- [BETA] DeepPotentialH2O: a neural network model for molecular dynamics simulations. (Ascend)
- [BETA] GOMO: a classical numerical method called GOMO for ocean simulation. (GPU)
#### FrontEnd
- [STABLE] Refactor MindIR to support Ascend 310 inference. (Ascend)
- [STABLE] The execution backend of sparse operations in optimizers can be set through `target`. (Ascend/GPU/CPU)
- [STABLE] Support saving a specified network to checkpoint and filtering parameters by prefix when loading a checkpoint. (Ascend/GPU/CPU)
- [STABLE] Allow users to choose whether to load parameters into the network strictly. (Ascend/GPU/CPU)
- [STABLE] Before training in graph mode, broadcast the parameters on device 0 to the other devices so that all devices start from the same network initialization parameter values. (Ascend/GPU)
- [STABLE] Support if-by-if control flow subgraphs. (Ascend/GPU)
- [STABLE] Support testing whether a tensor is in a list; see the sketch after this list. (Ascend/GPU/CPU)
- [STABLE] Support getting a value by key from a dictionary in the network, and getting the keys and values of a dictionary in the network. (Ascend/GPU/CPU)
- [STABLE] Support Tensor in enumerate. (Ascend/GPU/CPU)
- [STABLE] Support multilevel index assignment. (Ascend/GPU/CPU)
- [STABLE] Support the `expand_as`, `view`, `abs`, and `mean` methods of Tensor. (Ascend/GPU/CPU)
- [STABLE] Support passing a resize ratio to the ResizeBilinear operation. (Ascend)
- [STABLE] nn.Matmul supports matrix-vector products and batched matrix multiplication. (Ascend/GPU)
- [STABLE] nn.Dense supports input tensors with more than two dimensions. (Ascend/GPU)
- [BETA] Support higher-order differentiation for a subset of operators. (CPU/GPU/Ascend)
- [STABLE] Support Tensor augmented assignment. (Ascend/GPU)
- [BETA] Support 22 numpy native interfaces.
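A minimal sketch (not part of the original notes; names and values are illustrative) of some of the new graph-mode syntax: the tensor-in-list judgment, dictionary access inside a network, and the new `Tensor.abs()` method.

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor, context

context.set_context(mode=context.GRAPH_MODE)

class SyntaxDemo(nn.Cell):
    """Illustrative cell exercising the 1.1 graph-syntax additions."""
    def __init__(self):
        super(SyntaxDemo, self).__init__()
        self.cfg = {"scale": 2, "shift": 1}  # plain dict used inside construct

    def construct(self, x, y):
        out = x * self.cfg["scale"] + self.cfg["shift"]  # get a value by key
        if x in [x, y]:                                  # tensor-in-list judgment
            out = out.abs()                              # new Tensor.abs() method
        return out + y

net = SyntaxDemo()
x = Tensor(np.array([1.0, -2.0], np.float32))
y = Tensor(np.array([0.5, 0.5], np.float32))
print(net(x, y))
```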
#### Auto Parallel
- [STABLE] Support the parallel optimizer with weight sharding; see the sketch after this list. (Ascend/GPU)
- [STABLE] Support distributed operators: the element-wise series, UnsortedSegmentSum, UnsortedSegmentMin, Split, BroadcastTo, Unique, etc. (Ascend/GPU)
- [STABLE] Support distributed model prediction. (Ascend/GPU)
- [STABLE] Support auto mixed precision level "O2" in auto and semi-auto parallel mode. (Ascend/GPU)
- [STABLE] Add the MultiFieldEmbeddingLookup high-level interface. (Ascend/GPU)
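As a hedged illustration (the device count and parallel mode are assumptions, and the surrounding distributed training script is omitted), the weight-sharded parallel optimizer is switched on through the auto-parallel context:

```python
from mindspore import context
from mindspore.context import ParallelMode

# Assumed 8-device job launched with the usual distributed tooling.
# enable_parallel_optimizer shards optimizer states across devices
# instead of replicating them on every card.
context.set_auto_parallel_context(
    parallel_mode=ParallelMode.SEMI_AUTO_PARALLEL,
    device_num=8,
    enable_parallel_optimizer=True,
)
```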
#### Executor
- [STABLE] Optimize ResNet50 performance. (GPU)
- [STABLE] Support model zoo networks in PyNative mode (Ascend: 29, GPU: 23, CPU: 2). (Ascend/GPU/CPU)
- [STABLE] Support PyNative mode on CPU. (CPU)
- [STABLE] Optimize performance in PyNative mode. (Ascend/GPU/CPU)
- [STABLE] Support the Safe Optimized Memory Allocation Solver (SOMAS) on Ascend to improve memory reuse; the batch size of the BERT large model (sequence length 128) increases from 160 to 208. (Ascend)
- [BETA] Support second-order differentiation in PyNative mode. (Ascend/GPU)
- [DEMO] Add distributed training in PyNative mode. (Ascend/GPU)
#### MDP
- [STABLE] Add new operators for Ascend and GPU: IGamma, LGamma, DiGamma.
- [STABLE] Add new distributions for Ascend and GPU: LogNormal and Logistic; see the sketch after this list.
- [BETA] Add new distributions for Ascend only: Gumbel, Cauchy, Gamma, Beta, and Poisson; add the Categorical distribution for GPU.
- [STABLE] Add new bijectors for Ascend and GPU: GumbelCDF, Invert.
- [STABLE] Add a Bayesian layer realized by the local reparameterization method for Ascend and GPU.
- [STABLE] Add an anomaly detection toolbox based on VAE for Ascend and GPU.
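A hedged sketch (not from the notes; arguments and shapes are illustrative) of the new LogNormal distribution and the GumbelCDF bijector:

```python
import mindspore.nn.probability.distribution as msd
import mindspore.nn.probability.bijector as msb
import mindspore.common.dtype as mstype
from mindspore import Tensor, context

# Assumed PyNative usage; in graph mode these calls would live inside an nn.Cell.
context.set_context(mode=context.PYNATIVE_MODE)

ln = msd.LogNormal(loc=0.0, scale=1.0, seed=0, dtype=mstype.float32)
sample = ln.sample((2, 3))                                  # stochastic tensor of shape (2, 3)
log_prob = ln.log_prob(Tensor([1.0, 2.0], mstype.float32))  # log-density at the given values

bij = msb.GumbelCDF(loc=0.0, scale=1.0)                     # note: no dtype argument in 1.1
y = bij.forward(Tensor([0.5, 1.0], mstype.float32))
```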
#### DataSet
- [STABLE] Support single-node, multi-process distributed cache data sharing.
- [STABLE] Support GPU profiling with data processing.
- [STABLE] Support YOLOv3 dynamic shape in sink mode with dataset.
- [STABLE] Support unique processing in the data processing pipeline.
- [STABLE] Unify the error messages of Python-layer parameter verification.
### API Change
#### Backwards Incompatible Change
##### Python API
###### Parts of `Optimizer` add a target interface ([!6760](https://gitee.com/mindspore/mindspore/pulls/6760/files))
The usage of the sparse optimizer has changed.
The `target` interface is used to set the execution backend of the sparse operator.
The `add_prim_attr` interface is no longer allowed for this purpose.
The following optimizers add the `target` interface: Adam, FTRL, LazyAdam, ProximalAdagrad.
<table>
<tr>
<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
</tr>
<tr>
<td>
```python
>>> from mindspore.nn import Adam
>>>
>>> net = LeNet5()
>>> optimizer = Adam(filter(lambda x: x.requires_grad, net.get_parameters()))
>>> optimizer.sparse_opt.add_prim_attr("primitive_target", "CPU")
```
</td>
<td>
```python
>>> from mindspore.nn import Adam
>>>
>>> net = LeNet5()
>>> optimizer = Adam(filter(lambda x: x.requires_grad, net.get_parameters()))
>>> optimizer.target = 'CPU'
```
</td>
</tr>
</table>
###### `export` modifies the input parameters and the exported file name ([!7385](https://gitee.com/mindspore/mindspore/pulls/7385), [!9057](https://gitee.com/mindspore/mindspore/pulls/9057/files))
Export the MindSpore prediction model to a file in the specified format.
The parameters are: `net`, `*inputs`, `file_name`, `file_format`, `**kwargs`.
Input parameters can be supplied according to the specific export requirements.
The file name extension is now added automatically based on the format.
<table>
<tr>
<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
</tr>
<tr>
<td>
```python
>>> from mindspore.train.quant import quant
>>>
>>> network = LeNetQuant()
>>> inputs = Tensor(np.ones([1, 1, 32, 32]), mindspore.float32)
>>> quant.export(network, inputs, file_name="lenet_quant.mindir", file_format='MINDIR')
lenet_quant.mindir
```
</td>
<td>
```python
>>> from mindspore import export
>>>
>>> network = LeNetQuant()
>>> inputs = Tensor(np.ones([1, 1, 32, 32]), mindspore.float32)
>>> export(network, inputs, file_name="lenet_quant", file_format='MINDIR', quant_mode='AUTO')
lenet_quant.mindir
```
</td>
</tr>
</table>
###### `Dense`, `Conv2dBnAct`, `DenseBnAct`, `DenseQuant` support setting the activation attribute as an instance of a class derived from `nn.Cell` or `Primitive` ([!7581](https://gitee.com/mindspore/mindspore/pulls/7581))
activation (Union[str, Cell, Primitive]): activation function applied to the output of the fully connected layer
<table>
<tr>
<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
</tr>
<tr>
<td>
```python
>>> import mindspore.nn as nn
>>>
>>> dense = nn.Dense(1, 1, activation='relu')
```
</td>
<td>
```python
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>>
>>> dense = nn.Dense(1, 1, activation=nn.ReLU())
>>> dense = nn.Dense(1, 1, activation=ops.ReLU())
```
</td>
</tr>
</table>
###### `tensor.dim()` and `tensor.size()` have been renamed to `tensor.ndim` and `tensor.size` ([!10175](https://gitee.com/mindspore/mindspore/pulls/10175))
Previously, tensor.size() and tensor.dim() were used to check the total number of elements and the number of dimensions of a tensor.
However, from a user's perspective, tensor.size and tensor.ndim (methods -> properties) are better choices, since they follow the numpy naming convention.
<table>
<tr>
<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
</tr>
<tr>
<td>
```python
>>> from mindspore import Tensor
>>>
>>> Tensor((1,2,3)).size()
>>> Tensor((1,2,3)).dim()
```
</td>
<td>
```python
>>> from mindspore import Tensor
>>>
>>> Tensor((1,2,3)).size
>>> Tensor((1,2,3)).ndim
```
</td>
</tr>
</table>
###### `EmbeddingLookup` adds a config to the interface: sparse ([!8202](https://gitee.com/mindspore/mindspore/pulls/8202))
sparse (bool): Use sparse mode. When 'target' is set to 'CPU', 'sparse' has to be true. Default: True.
<table>
<tr>
<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
</tr>
<tr>
<td>
```python
>>> from mindspore.nn import EmbeddingLookup
>>>
>>> input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32)
>>> result = EmbeddingLookup(4,2)(input_indices)
>>> print(result.shape)
(2, 2, 2)
```
</td>
<td>
```python
>>> from mindspore.nn import EmbeddingLookup
>>>
>>> input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32)
>>> result = EmbeddingLookup(4,2)(input_indices, sparse=False)
>>> print(result.shape)
(2, 2, 2)
```
</td>
</tr>
</table>
###### `nn.probability.bijector` changes the types of attributes from (int, float) to (float, list, numpy.ndarray, Tensor) ([!8191](https://gitee.com/mindspore/mindspore/pulls/8191))
Attribute type change: (int, float) -> (float, list, numpy.ndarray, Tensor).
The int type is no longer supported. Parameters of all bijectors should be of type float, list, numpy.ndarray, or Tensor.
<table>
<tr>
<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
</tr>
<tr>
<td>
```python
>>> import mindspore.nn.probability.bijector as msb
>>>
>>> power = 2
>>> bijector = msb.PowerTransform(power=power)
```
</td>
<td>
```python
>>> import mindspore.nn.probability.bijector as msb
>>>
>>> power = 2.0
>>> bijector = msb.PowerTransform(power=power)
```
</td>
</tr>
</table>
###### `nn.probability.bijector.GumbelCDF` removes an attribute from the interface: dtype ([!8191](https://gitee.com/mindspore/mindspore/pulls/8191))
dtype is removed from GumbelCDF and is no longer an argument of the class.
<table>
<tr>
<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
</tr>
<tr>
<td>
```python
>>> import mindspore.nn.probability.bijector as msb
>>> from mindspore import dtype as mstype
>>>
>>> bijector = msb.GumbelCDF(loc=0.0, scale=1.0, dtype=mstype.float32)
```
</td>
<td>
```python
>>> import mindspore.nn.probability.bijector as msb
>>>
>>> bijector = msb.GumbelCDF(loc=0.0, scale=1.0)
```
</td>
</tr>
</table>
###### `nn.layer.combined.Conv2dBnAct`, `nn.layer.combined.DenseBnAct` move from nn.layer.quant to nn.layer.combined ([!8187](https://gitee.com/mindspore/mindspore/pulls/8187))
Previously, Conv2dBnAct and DenseBnAct lived in nn.layer.quant; since they are not quant cells, they have been moved to nn.layer.combined. If you import Conv2dBnAct and DenseBnAct from mindspore.nn, your code doesn't need any change.
<table>
<tr>
<td style="text-align:center"> 1.0.1 </td> <td style="text-align:center"> 1.1.0 </td>
</tr>
<tr>
<td>
```python
>>> from mindspore.nn.layer.quant import Conv2dBnAct, DenseBnAct
```
</td>
<td>
```python
>>> from mindspore.nn import Conv2dBnAct, DenseBnAct
```
</td>
</tr>
</table>
###### `nn.layer.conv.Conv2D`, `nn.layer.quant.Conv2dBnFoldQuant`, `nn.layer.quant.Conv2dBnWithoutFoldQuant` change the weight shape when group > 1 on the Ascend platform ([!9723](https://gitee.com/mindspore/mindspore/pulls/9723))
On the Ascend platform, if group > 1, the weight shape of Conv2D changes from [in_channels//group, out_channels, kernel_size, kernel_size] to [out_channels, in_channels//group, kernel_size, kernel_size]. Existing checkpoints of networks that use Conv2D with group > 1, such as MobileNet, can no longer be loaded directly; the first and second axes of the weight must be transposed first, as in the sketch below.
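A hedged migration sketch (file names and the name-based filter are hypothetical; only the axis swap itself is prescribed by this change):

```python
import numpy as np
from mindspore import Tensor, load_checkpoint, save_checkpoint

params = load_checkpoint("mobilenet_pre_1_1.ckpt")  # hypothetical pre-1.1 checkpoint
save_list = []
for name, param in params.items():
    data = param.data.asnumpy()
    # Hypothetical filter: select the 4-D weights of grouped (depthwise) convolutions.
    if data.ndim == 4 and "depthwise" in name:
        # [in_channels//group, out, k, k] -> [out, in_channels//group, k, k]
        data = np.transpose(data, (1, 0, 2, 3))
    save_list.append({"name": name, "data": Tensor(data)})
save_checkpoint(save_list, "mobilenet_1_1.ckpt")
```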
### Bug fixes
#### FrontEnd
- [STABLE] Fix a problem with the CSE optimization under control flow. (Ascend/GPU)
#### Auto Parallel
- [STABLE] Remove the restriction that the input and output layouts of Reshape are restricted in tensor redistribution. (Ascend/GPU)
- [STABLE] Remove the restriction that the output strategy should be data parallel in model evaluation. (Ascend/GPU)
#### Executor
- [STABLE] Fix the fusion-operator compilation cache. (Ascend)
- [STABLE] Fix a compilation error of dynamic shape operators. (Ascend)
- [STABLE] Fix a PyNative bug where transdata could not be inserted at a node's output when the node should be split in the backend optimization. (Ascend)
- [STABLE] Fix a bug where TensorMove and memcpy_async were merged into one after the backend CSE pass. (Ascend)
#### DataSet
- [STABLE] Fix a cache server hang on RequestFreeTag. (Ascend/GPU/CPU)
- [STABLE] Fix a hang when using pyfunc multiprocessing. (Ascend/GPU/CPU)
- [STABLE] Fix a core dump caused by adding multiple parent nodes to a tree node. (Ascend/GPU/CPU)
## MindSpore Lite
### Major Features and Improvements
#### Converter and runtime
1. Support dynamic shape in the MindSpore Lite Converter.
2. Optimize the sub-graph mechanism by dynamically splitting the entire graph into multiple subgraphs based on the operators supported, the backend hardware, and the user configuration.
3. Support TensorList and TensorList operators such as TensorListFromTensor, TensorListGetItem, and so on.
4. Support BatchMatMul fusion and LSTM fusion in the MindSpore Lite Converter.
5. Support converting models and running inference on the Windows operating system.
6. Support model (.ms) visualization in Netron.
7. Support TensorFlow models in the MindSpore Lite Converter.
8. Add 86 converter parsers.
9. Convert aware-training models without the user's awareness.
10. Support scalar tensors in the MindSpore Lite Converter and Runtime.
11. Support the NPU backend on HUAWEI Kirin SoCs. [BETA]
12. Merge timeprofiler into benchmark.
#### CPU backend optimization
1. Add 50+ new operators, including new op types (such as Adder and GRU).
2. Enhance performance on platforms supporting Armv8.2, for example by using the sdot instruction more efficiently.
3. Optimize all operators (fp32, fp16, int8) with multithreading and SIMD wherever possible. Model inference time is reduced by at least 20% after these optimizations.
4. Extend operator support for the x86_64 platform based on the SSE/AVX instruction sets.
#### OpenCL backend
1. Add new ops: 10+ new ops, 58 ops in total.
2. Performance optimization: through memory-layout optimization, Winograd convolution selection-strategy optimization, SIMT local-size optimization, and local-cache optimization, GPU performance improves by up to 20+% vs. MindSpore Lite 1.0.
3. Add online graph optimization: by fusing Convolution/MatMul/FullConnection with add/mul/pad/reshape, performance improves by up to 50+% for some networks.
4. Add auto-tuning: with online tuning in the graph compilation phase, performance improves by up to 10%.
5. Add weight quantization: support weight quantization.
6. Add an OpenCL kernel binary cache to improve initialization time.
#### Post quantization
MindSpore Lite supports both weight quantization and full quantization. Currently, weights can be quantized into 1 to 16 bits according to user configuration. In internal testing, quantization of networks such as classification, detection, segmentation, and transformer is well supported. To ensure high accuracy of quantized models, MindSpore Lite uses a pipeline quantization method. In the first phase, the weights and activation values are quantized using linear quantization methods such as MIN-MAX. In the second phase, the quantization error is analyzed, and statistical methods are used to compensate for the loss caused by quantizing fp32 to a fixed-point format such as int8. The features of post-training quantization are:
1. Per-channel asymmetric quantization for weights, such as MAX_MIN and KMEANS.
2. Per-layer symmetric quantization for activations, such as KL and MAX_MIN.
3. Per-layer asymmetric quantization for activations, such as RemoveOutlier.
4. Accuracy-loss compensation, such as BiasCorrection.

| mobilenet_v2 | ACC (ImageNet) |
|---|---|
| FP32 | 71.56% |
| A8W8 | 71.16% |
| A8W8 (without BiasCorrection) | 70.74% |
| A8W7 | 71.06% |
| A7W7 | 70.78% |

The table above uses the mobilenet_v2 model from the TF official website. With MindSpore Lite quantization, the accuracy loss of A8W8 (8-bit activation quantization and 8-bit weight quantization) decreases from 0.82% to 0.40% after accuracy-loss compensation; for 7-bit quantization, the accuracy loss is still no more than 1%.
#### Training on Device
Within the MindSpore 1.1 release, MindSpore Lite provides the following Training-on-Device (ToD) capabilities:
1. Learning-from-scratch and transfer-learning strategies are supported.
2. MindSpore-based models can be converted and used in training on the device. (Third-party models such as TensorFlow and PyTorch cannot be directly imported to the framework for now.)
3. Grad operations are supported for more than 30 operators, such as dense layers, convolutions, and batch normalizations. Momentum, SGD, and ADAM optimizers are supported.
4. Networks such as LeNet, AlexNet, ResNet, MobileNetV1/V2/V3, and EffectiveNet are supported, with complete model loading, conversion, and Python training scripts on the device side.
The MindSpore Lite ToD framework is already in use in the newest Huawei Smart TV, providing a unique and personalized user experience as a family entertainment center.
### API Change
#### API Incompatible Change
##### C++ API
- [Modify] Context now supports multi-context configuration. (Context.h)
- [Modify] Callback is moved from lite_session.h into ms_tensor.h.
- [Modify] GetInputsByName in lite_session.h is changed to GetInputsByTensorName.
- [Add] Add static LiteSession *CreateSession(const char *model_buf, size_t size, const lite::Context *context) in lite_session.h.
- [Add] Add a GetErrorInfo interface returning the error message in errorcode.h.
- [Delete] Remove model_generated.h, ops_generated.h, and the FlatBuffers library headers from the interfaces.
##### Java API
- [Add] Implement the JNI layer and add a Java API for the CPU and GPU backends.
#### Deprecations
##### C++ API
Deprecate the interface GetOutputsByNodeName.
### Bug fixes
- [BUGFIX] Fix a bug in sub-graph segmentation.
- [BUGFIX] Fix a bug in Tensor getitem in which the ellipsis matched the wrong dim-size.
- [BUGFIX] Fix a bug where an activation modification after defining Dense did not take effect.
### Contributors
zhouyifengCode, huqi, JulyAi, damon0626, chenbo116, rmdyh, davidmc, gray0v0, doitH, Gogery, zymaa, xinyunfan
# MindSpore 1.0.0 Release Notes
## Major Features and Improvements
### MindSpore Training and Inference Framework
#### Ascend 910
- New models
    - DenseNet121: a dense convolutional neural network, which connects each layer to every other layer in a feed-forward fashion, for object recognition on the ImageNet dataset.
    - UNet2D-Medical: UNet Medical model for 2D image segmentation; Convolutional Networks for Biomedical Image Segmentation on the ISBI Challenge database.
- Frontend and user interface
    - Second-Order Optimization
        - Enable second-order optimization for BERT on Ascend 910, which can achieve a masked LM accuracy of 71.3% in 800 seconds using 8 Ascend 910 devices (BERT-Large @MLPerf v0.7 dataset).
    - New GNN model BGCF
        - Bayesian Graph Convolutional Filtering network, which naturally incorporates the uncertainty in the user-item interaction graph and shows excellent recommendation performance on the Amazon-Beauty dataset.
    - Add the append interface for SequentialCell.
    - Add a level `auto` for AMP.
- Executor and performance optimization
    - Support quantization networks (ResNet50 & YOLOv3 & MobileNetV2).
    - Ease-of-use optimization: project compilation time optimization, CMakeLists regularization, and independent compilation and installation of cudnn and cuda.
- Data processing, augmentation, and save format
    - Support GeneratorDataset returning string types
#### Other Hardware Support
- GPU platform
    - Enable second-order optimization for ResNet50 on GPU, which achieves a 30% improvement in training time compared to SGD with Momentum (ResNet50 @ImageNet).
#### User interface change log
- Remove the global object GradOperation in Autodiff([!5011](https://gitee.com/mindspore/mindspore/pulls/5011))
- Remove the useless attribute 'name' in Autodiff([!5172](https://gitee.com/mindspore/mindspore/pulls/5172))
- Rectify distributed init([!5350](https://gitee.com/mindspore/mindspore/pulls/5350))
- Move the setting of ParallelMode from train.parallel_utils to context([!5351](https://gitee.com/mindspore/mindspore/pulls/5351))
- Modification of save_checkpoint([!5482](https://gitee.com/mindspore/mindspore/pulls/5482))
- Wrap the numpy random seed into an API([!5634](https://gitee.com/mindspore/mindspore/pulls/5634))
- Delete enable_fused_layernorm in some model zoo scripts([!5665](https://gitee.com/mindspore/mindspore/pulls/5665))
- Move the 'multi-subgraphs' interface to internal([!5696](https://gitee.com/mindspore/mindspore/pulls/5696))
- Rename mirror_mean to gradient_mean([!5700](https://gitee.com/mindspore/mindspore/pulls/5700))
- Remove the default value of 'group' of DepthWiseConv2d([!5865](https://gitee.com/mindspore/mindspore/pulls/5865))
- Modify the interface for functional ops and remove duplicated definitions([!5958](https://gitee.com/mindspore/mindspore/pulls/5958))
- Unify Conv2d and DepthwiseConv2d([!5916](https://gitee.com/mindspore/mindspore/pulls/5916))
- Modification of SoftmaxCrossEntropyWithLogits([!5502](https://gitee.com/mindspore/mindspore/pulls/5502))
- Change the API set_strategy() to shard()([!5991](https://gitee.com/mindspore/mindspore/pulls/5991))
- Move batch_size from bert_cfg_cfg to cfg([!6233](https://gitee.com/mindspore/mindspore/pulls/6233))
- Remove unused parameters from SummaryRecord __init__([!5548](https://gitee.com/mindspore/mindspore/pulls/5548))
- Remove the sens parameter of TrainOneStepWithLossScaleCell([!5753](https://gitee.com/mindspore/mindspore/pulls/5753))
- Optimize TrainOneStepCell for user definition([!6159](https://gitee.com/mindspore/mindspore/pulls/6159))
- Delete seed0 and seed1 of nn.Dropout([!5735](https://gitee.com/mindspore/mindspore/pulls/5735))
- Delete DataWrapper([!6101](https://gitee.com/mindspore/mindspore/pulls/6101))
- LSTM API optimization([!6374](https://gitee.com/mindspore/mindspore/pulls/6374))
- Merge P\C\F of ops([!5645](https://gitee.com/mindspore/mindspore/pulls/5645))
- Delete the SoftmaxCrossEntropyExpand interface([!6607](https://gitee.com/mindspore/mindspore/pulls/6607))
- Adjust the GroupNorm interface([!6329](https://gitee.com/mindspore/mindspore/pulls/6329))
- Modify the init interface to an internal interface([!6651](https://gitee.com/mindspore/mindspore/pulls/6651))
- Log optimization([!5842](https://gitee.com/mindspore/mindspore/pulls/5842))
- Remove the useless API dataset.set_dataset_size([!5806](https://gitee.com/mindspore/mindspore/pulls/5806))
- Add a usage parameter to some Dataset APIs([!5605](https://gitee.com/mindspore/mindspore/pulls/5605))
- Change the import path, such as from mindspore.dataset.transforms.vision to mindspore.dataset.vision.transforms([!5384](https://gitee.com/mindspore/mindspore/pulls/5384))
- Rename ImageFolderDatasetV2 to ImageFolderDataset([!5384](https://gitee.com/mindspore/mindspore/pulls/5384))
- Dataset.map parameter optimization([!5384](https://gitee.com/mindspore/mindspore/pulls/5384))
- Add the new API dataset.get_col_names([!5384](https://gitee.com/mindspore/mindspore/pulls/5384))
- Remove the useless API MindRecord finish([!5580](https://gitee.com/mindspore/mindspore/pulls/5580))
### MindSpore Lite
- Converter
    - Add 6 TFLite ops, 7 Caffe ops, and 1 ONNX op.
    - Add support for Windows.
    - Support parallel inference of multiple sessions to adapt to more scenarios.
    - Support 8-bit weight-only quantization; most mainstream models have a small accuracy loss (less than 0.5%) compared to the non-quantized fp32 model.
- CPU & GPU
    - Add 20 CPU ops, including FP32, int8/uint8, FP16, and int32 ops.
    - Add FP16 support for GPU; add 14 GPU ops including FP32/FP16.
    - Add Buffer/Image2D transform ops for GPU.
    - Performance optimization for CPU ops, focusing on ARM32.
    - Performance optimization for GPU convolution using Winograd.
- Tool & example
    - Add an object detection Android demo.
## Bugfixes
- Models
    - fix the constant folding problem in multiply.([!6092](https://gitee.com/mindspore/mindspore/pulls/6092))
    - move batch_size from bert_net_cfg to cfg in bert scripts.([!6233](https://gitee.com/mindspore/mindspore/pulls/6233))
    - modify the checkpoint file path.([!6137](https://gitee.com/mindspore/mindspore/pulls/6137))
- Python API
    - fix semi auto parallel when the parameter of reshape has another user([!5722](https://gitee.com/mindspore/mindspore/pulls/5722))
    - raise ValueError when calling a hook function in graph mode([!5831](https://gitee.com/mindspore/mindspore/pulls/5831))
- Executor
    - fix pynative mode to build temporary nn objects.([!6189](https://gitee.com/mindspore/mindspore/pulls/6189))
    - fix the accuracy problem of multiple inputs of the multi-card communication operator broadcast.([!6522](https://gitee.com/mindspore/mindspore/pulls/5622))
    - fix the problem that the sample distribution interface categorical does not support graph mode.([!5772](https://gitee.com/mindspore/mindspore/pulls/5772))
    - fix the random seed failure problem of the polynomial downsampling distribution operator.([!5948](https://gitee.com/mindspore/mindspore/pulls/5948))
    - fix unnecessary address binding issues in GPU heterogeneous scenarios.([!6232](https://gitee.com/mindspore/mindspore/pulls/6232))
- GPU platform
    - fix a kernel resource leak([!5315](https://gitee.com/mindspore/mindspore/pulls/5315))
    - fix insufficient memory during continuous unit test running([!5617](https://gitee.com/mindspore/mindspore/pulls/5617))
    - fix a memory leak in the sparse slicer([!5578](https://gitee.com/mindspore/mindspore/pulls/5578))
- Data processing
    - fix a hang when using pyfunc([!6346](https://gitee.com/mindspore/mindspore/pulls/6346))
    - fix the GPU device queue not releasing the GIL during resource cleanup([!5964](https://gitee.com/mindspore/mindspore/pulls/5964))
    - fix a hang if the script exits abnormally([!6441](https://gitee.com/mindspore/mindspore/pulls/6441))
- Third party
    - Sqlite: Update sqlite to 3.32.2 to handle [CVE-2020-11656](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11656), [CVE-2020-13871](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13871), [CVE-2020-11655](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655), [CVE-2020-9327](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9327), [CVE-2020-13630](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13630), [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15358), [CVE-2020-13631](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13631), [CVE-2020-13632](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13632), [CVE-2020-13434](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13434), [CVE-2020-13435](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13435), and [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655).
    - Libjpeg-turbo: Update libjpeg-turbo to 2.0.4 to handle [CVE-2020-13790](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13790).
## Contributors
Thanks goes to these wonderful people:
Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, danish, Danish, dayschan, eric, Eric, fary86, fuzhiye, Gaoxiong, gengdongjie, gongdaguo, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huzhifeng, hwjiaorui, Jesse, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, r1chardf1d0, riemann_penn, root, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wukesong, wuweikang, wuxuejian, Xiaoda, xiefangqi, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhoufeng, zhousiyi, zhouyaqiang, Zichun, Zirui, Ziyan, zjun, ZPaC
Contributions of any kind are welcome!
# MindSpore 0.7.0-beta Release Notes
## Major Features and Improvements
### MindSpore Training and Inference Framework
#### Ascend 910
- New models
    - TinyBert: a smaller and faster version of BERT using transformer distillation, for natural language understanding on the GLUE benchmark.
    - SE-ResNet50: add Squeeze-and-Excitation blocks (SE-Blocks) to the ResNet50 network to improve channel interdependencies, for image classification on the ImageNet 2012 dataset.
    - Inception V3: the third version of the Inception convolutional architecture, for image classification on the ImageNet 2012 dataset.
- Frontend and user interface
    - High-level packaging of the embedding operator to support segmentation by field for Wide&Deep.
    - Load a multi-node checkpoint into a single process to support host-device hybrid inference.
    - Support Concat/Tile/StridedSlice distributed operators.
    - Support cumulative gradients and batch training split.
    - Support variable parameter input for Cell objects.
    - Parameter mixed-calculation optimization for pynative mode.
- Deep Probabilistic Programming
    - Support statistical distribution classes used to generate stochastic tensors.
    - Support probabilistic inference algorithms.
    - Support BNN layers used to construct BNNs in graph mode.
    - Support interfaces for the transformation between BNN and DNN in graph mode.
    - Support uncertainty estimation to estimate epistemic uncertainty and aleatoric uncertainty.
- User interface change log
    - uniform learning rate behavior in optimizers([!2755](https://gitee.com/mindspore/mindspore/pulls/2755))
    - rename operator of sparse optimizer([!3217](https://gitee.com/mindspore/mindspore/pulls/3217))
    - move the profiler module from mindinsight to mindspore([!3075](https://gitee.com/mindspore/mindspore/pulls/3075))
    - VOCDataset output changed to multiple columns([!3093](https://gitee.com/mindspore/mindspore/pulls/3093))
    - GetDatasize feature([!3212](https://gitee.com/mindspore/mindspore/pulls/3212))
    - dataset: modify config api([!2936](https://gitee.com/mindspore/mindspore/pulls/2936))
- Executor and performance optimization
    - Decouple C++ and Python to make the architecture more extensible.
    - Parameter Server for distributed deep learning supported.
    - Serving: a flexible service deployment framework for deep learning models.
    - Memory reuse is enhanced; the batch size of the BERT large model is increased from 96 to 160 on a single server.
- Data processing, augmentation, and save format
    - Support automatic data augmentation
    - Support GNN distributed cache on a single node
    - Support ConcatDataset using a distributed sampler
#### Other Hardware Support
- GPU platform
    - New models supported: VGG16, ResNet101, DeepFM.
    - Support some distributed operators in ResNet50 and Wide&Deep.
    - Support automatic parallelism for Wide&Deep.
    - Support the function call form funcs[i](*inputs) (such as switch-case).
    - Support distributed training with a parameter server.
    - Support GPU operator profiling.
    - Performance optimization of distributed training with allreduce.
    - Performance optimization of mixed-precision training.
    - Performance optimization of pynative mode.
    - Performance optimization of the convolution and batch normalization operators.
- CPU platform
    - Support MobileNetV2 re-training: re-train the network with a different number of classes.
### MindSpore Lite
- Converter
    - Support third-party models, including TFLite/Caffe/ONNX.
    - Add 93 TFLite ops.
    - Add 24 Caffe ops.
    - Add 62 ONNX ops.
    - Add 11 optimization passes, including fusion and constant folding.
    - Support aware-training and post-training quantization.
- CPU
    - Add 100+ ops, supporting fp32, int8/uint8, and fp16 ops.
    - Support fast convolution algorithms: Sliding Window, Img2col + GEMM, Strassen, Winograd.
    - Support assembly/NEON instructions.
    - Support CPU fp16 and sdot on ARMv8.2+.
- GPU
    - Add 20+ ops for OpenCL.
    - Support image2D/buffer formats.
    - Optimize online initialization time.
    - Add optimized convolution 1x1/3x3/depthwise/convolution-transposed for OpenCL.
- Tool & example
    - Add benchmark and TimeProfile tools.
    - Add an image classification Android demo.
## Bugfixes
- Models
    - normalize the readme file([!5410](https://gitee.com/mindspore/mindspore/pulls/5410))
    - fix a sink_size bug for transformer([!5393](https://gitee.com/mindspore/mindspore/pulls/5393))
    - fix bool type optional for resnet50([!5363](https://gitee.com/mindspore/mindspore/pulls/5363))
- Python API
    - improve the '__bool__' interface for tensor([!4000](https://gitee.com/mindspore/mindspore/pulls/4000))
    - fix GPU-ResizeNearestNeighbor([!3760](https://gitee.com/mindspore/mindspore/pulls/3760))
    - fix the topK multi-dimension grad function([!3711](https://gitee.com/mindspore/mindspore/pulls/3711))
    - fix the scatter op error message([!3699](https://gitee.com/mindspore/mindspore/pulls/3699))
    - fix a bug of cast dtype when using mixed precision in pynative mode([!3730](https://gitee.com/mindspore/mindspore/pulls/3730))
- Executor
    - fix an etsnet training error when UnsegmentSum's first input shape is (1,)([!4573](https://gitee.com/mindspore/mindspore/pulls/4573))
    - fix a bug of wrong results in while control flow caused by missing support for value references([!4103](https://gitee.com/mindspore/mindspore/pulls/4103))
    - fix a bug where the output tensor did not carry the device data type([!3774](https://gitee.com/mindspore/mindspore/pulls/3774))
    - fix a bug to avoid multiple attribute values being eliminated in pynative mode([!4225](https://gitee.com/mindspore/mindspore/pulls/4225))
    - fix a bug of AssignAdd being unable to work normally in multiple cases([!5171](https://gitee.com/mindspore/mindspore/pulls/5171))
- GPU platform
    - improve the environment variable checking for the nvcc compiler path([!5140](https://gitee.com/mindspore/mindspore/pulls/5140))
    - fix a bug in the cast operator conversion from fp16 to fp32([!4147](https://gitee.com/mindspore/mindspore/pulls/4147))
    - fix an array out-of-bounds bug in the case of the make_tuple operator([!5219](https://gitee.com/mindspore/mindspore/pulls/5219))
- Data processing
    - fix a GeneratorDataset timeout([!3624](https://gitee.com/mindspore/mindspore/pulls/3624))
    - fix a concat operator get_dataset_size error([!4701](https://gitee.com/mindspore/mindspore/pulls/4701))
    - fix the python validator for the Repeat op([!4366](https://gitee.com/mindspore/mindspore/pulls/4366))
- Third party
    - Sqlite: Update sqlite to 3.32.2 to handle [CVE-2020-11656](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11656), [CVE-2020-13871](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13871), [CVE-2020-11655](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655), [CVE-2020-9327](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9327), [CVE-2020-13630](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13630), [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15358), [CVE-2020-13631](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13631), [CVE-2020-13632](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13632), [CVE-2020-13434](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13434), [CVE-2020-13435](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13435), and [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655).
    - Libjpeg-turbo: Update libjpeg-turbo to 2.0.4 to handle [CVE-2020-13790](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13790).
## Contributors
Thanks goes to these wonderful people:
Adel, Alexey, andy, andy_wangrui, anthonyaje, anzhengqi, askmiao, avakh, baihuawei, bingyaweng, BowenK, buxue, caifubi, CaoJian, caozhou, Cathy, changzherui, chenfei, chengxianbin, chenhaozhe, chenjianping, chentingting, chenzomi, chenzupeng, chujinjin, cjh9368, Corleone, cristoval, danish, dengyutao, eric, Eric, ervinzhang, etone-chan, fangzehua, fary86, fuzhiye, gengdongjie, genglishuai, Giancarlo, gongdaguo, gukecai, guohongzilong, GuoMengHao, hangq, hanhaocheng, hanhuifeng2020, hanjun996, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, hongxing, huangdongrun, huanghui, huangxinjing, islam_amin, Jesse, jianghui58, jiangzhiwen, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, kai00, kingfo, kpy, kswang, laiyongqiang, leilei_snow, leopz, Li, liangzelang, lianliguang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, lingyunli63, linqingke, lirongzhen1, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuzhongkai, Lixia, lixian, liyong, lizhenyu, looop5, luoyang, lvchangquan, lvliang, lvwenyuan, lyvette, mahdi, Mahdi, mamba_ni, maning202007, Margaret_wangrui, mayang, meixiaowei, meng_chunyang, ms_yan, nhussain, panbingao, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, pengyongrong, Pengyongrong, qianlong, qujianwei, root, shenwei41, shibeiji, simson, songhonglei413, Su, sunsuodong, suteng, tao_yunhao, TFbunny, tinazhang, tom__chen, tony_liu2, tronzhang, VectorSL, wandongdong, wangdongxu, wanghua, wangmin, wangshaocong, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wukesong, wuweikang, wuxuejian, wuyongkang, xiefangqi, xuanyue, Xun, xutianchun, xuyongfei, yanghaitao, yangjie159, YangLuo, yangruoqi713, yangyongjie, yangzhenzhang, yankai, yao_yf, yelihua, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zhangxuetong, zhaizhiqiang, Zhang, zhangxinfeng3, zhangxuetong, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaoting, zhaozhenlong, zhengjun10, zhongligeng, zhoufeng, zhousiyi, zhouyaqiang, zhouyuanshen, Zichun, Zirui, zjun, zongha, ZPaC, lijiaqi, liangchenghui, wangminggui
Contributions of any kind are welcome!
# MindSpore 0.6.0-beta Release Notes
## Major Features and Improvements
### Ascend 910 Training and Inference Framework
- New models
    - There are official, research, and community directories under modelzoo.
    - Official is maintained by the MindSpore team with the newest APIs; MaskRCNN is added.
    - Research is uploaded by researchers for official review, and its APIs may not be updated in time.
    - Community reprints the relevant links of partner research results.
    - Hub is added at the same level as modelzoo, for synchronous storage of the materials needed for the official hub web pages, which will be launched soon.
    - Support pre-trained models: a few lines of code can download and load a pre-trained model, supporting inference or transfer learning.
- Frontend and user interface
    - Support user-side operator compilation and graph execution error rendering.
    - Uniformly define dynamic learning rate behavior in optimizers.
    - Support IndexSlice in sparse expressions.
    - Support using the parent's construct method during construct.
    - Support asynchronous execution of checkpoint file saving.
    - Support implicit type conversion in pynative mode.
- User interface change log
    - uniform learning rate behavior in optimizers([!2755](https://gitee.com/mindspore/mindspore/pulls/2755))
    - rename operator of sparse optimizer([!3217](https://gitee.com/mindspore/mindspore/pulls/3217))
    - move the profiler module from mindinsight to mindspore([!3075](https://gitee.com/mindspore/mindspore/pulls/3075))
    - VOCDataset output changed to multiple columns([!3093](https://gitee.com/mindspore/mindspore/pulls/3093))
    - GetDatasize feature([!3212](https://gitee.com/mindspore/mindspore/pulls/3212))
    - dataset: modify config api([!2936](https://gitee.com/mindspore/mindspore/pulls/2936))
- Executor and performance optimization
    - Decouple C++ and Python to make the architecture more extensible.
    - Parameter Server for distributed deep learning supported.
    - Serving: a flexible service deployment framework for deep learning models.
    - Memory reuse is enhanced; the batch size of the BERT large model is increased from 96 to 160 on a single server.
- Data processing, augmentation, and save format
    - Support the MindRecord save operator after data processing
    - Support automatic fusion operators, such as decode/resize/crop
    - Support CSV dataset loading
### Other Hardware Support
- GPU platform
    - New models supported: ResNext50, WarpCTC, and GoogLeNet.
    - Support hyperparameter search and data-augmented AutoML on GPU.
    - Support ResNet50 automatic parallelism on the GPU backend.
## Bugfixes
- Models
    - Improved the performance and accuracy of ResNet50([!3456](https://gitee.com/mindspore/mindspore/pulls/3456))
    - Fixed the performance test case of bert([!3486](https://gitee.com/mindspore/mindspore/pulls/3486))
- Python API
    - Fix assign used in while loop([!2720](https://gitee.com/mindspore/mindspore/pulls/2720))
    - Revert optimizing the graph output of all nop nodes.([!2857](https://gitee.com/mindspore/mindspore/pulls/2857))
    - Print tensor as numpy.([!2859](https://gitee.com/mindspore/mindspore/pulls/2859))
    - Support weight decay for sparse optimizers([!2668](https://gitee.com/mindspore/mindspore/pulls/2668))
    - Fix BatchToSpaceND([!2741](https://gitee.com/mindspore/mindspore/pulls/2741))
    - Fix type check mistakes of the InplaceAdd and InplaceSub ops([!2744](https://gitee.com/mindspore/mindspore/pulls/2744))
    - Change order param only equal to group param([!2748](https://gitee.com/mindspore/mindspore/pulls/2748))
- Executor
    - Optimize the performance of graphs with control flow([!2931](https://gitee.com/mindspore/mindspore/pulls/2931))
    - Fix a bug of the wrong number of tuple layers([!3390](https://gitee.com/mindspore/mindspore/pulls/3390))
    - Fix a CPU multi-graph memory exception([!3631](https://gitee.com/mindspore/mindspore/pulls/3631))
    - Enable data sync when calling an operator without defining a cell([!3081](https://gitee.com/mindspore/mindspore/pulls/3081))
    - Fix an ArgMaxWithValue error in pynative mode on GPU([!3082](https://gitee.com/mindspore/mindspore/pulls/3082))
    - Fix a precision error with fp16 input in pynative mode([!3196](https://gitee.com/mindspore/mindspore/pulls/3196))
- Data processing
    - Fix a bug in RandomColor and RandomSharpness default parameter checking([!2833](https://gitee.com/mindspore/mindspore/pulls/2833))
    - Fix a process hang when training and evaluating([!3469](https://gitee.com/mindspore/mindspore/pulls/3469))
- Third party
    - Sqlite: Update sqlite to 3.32.2 to handle [CVE-2020-11656](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11656), [CVE-2020-13871](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13871), [CVE-2020-11655](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655), [CVE-2020-9327](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9327), [CVE-2020-13630](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13630), [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15358), [CVE-2020-13631](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13631), [CVE-2020-13632](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13632), [CVE-2020-13434](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13434), [CVE-2020-13435](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13435), and [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655).
    - Libjpeg-turbo: Update libjpeg-turbo to 2.0.4 to handle [CVE-2020-13790](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13790).
## Contributors
Thanks goes to these wonderful people:
Alexey Shevlyakov, avakh, baihuawei, BowenK, buxue, caifubi, caojian05, Cathy Wong, changzherui, chenfei, chengxianbin, chenhaozhe, chenjianping, chentingting, chenzomi, chujinjin, Danish Farid, dayschan, dengwentao, dinghao, etone-chan, fangzehua, fary86, geekun, Giancarlo Colmenares, gong chen, gukecai, guohongzilong, hangangqiang, heleiwang, hesham, He Wei, hexia, hongxing, huangdongrun, huanghui, islam_amin, Jamie Nisbet, Jesse Lee, jiangjinsheng, jiangzhiwen, jinyaohui, jjfeing, jojobugfree, Jonathan Yan, jonyguo, Junhan Hu, Kang, kingfo, kouzhenzhong, kpy, kswang, laiyongqiang, leopz, liangzelang, lichenever, lihongkang, Li Hongzhang, lilei, limingqi107, lirongzhen1, liubuyu, liuchongming74, liuwenhao4, liuxiao, Lixia Chen, liyanliu, liyong, lizhenyu, lvliang, Mahdi, Margaret_wangrui, meixiaowei, ms_yan, nhussain, ougongchang, panfengfeng, panyifeng, peilinwang, Peilin Wang, pkuliuliu, qianlong, rick_sanchez, shibeiji, Shida He, shijianning, simson, sunsuodong, suteng, Tinazhang, Tron Zhang, unknown, VectorSL, wandongdong, wangcong, wangdongxu, wangdongxu6, wanghua, wangnan39, Wei Luning, wenchunjiang, wenkai, wilfChen, WilliamLian, wukesong, Xian Weizhao, Xiaoda Zhang, xiefangqi, xulei2020, xunxue, xutianchun, Yang, yanghaitao, yanghaitao1, yanghaoran, yangjie, yangjie159, YangLuo, Yanjun Peng, yankai, yanzhenxiang2020, yao_yf, Yi Huaijie, yoonlee666, yuchaojie, yujianfeng, zhangzhongpeng, zhangdengcheng, Zhang Qinghua, zhangyinxia, zhangz0911gm, zhaojichen, zhaoting, zhaozhenlong, zhoufeng, zhouneng, zhousiyi, Zirui Wu, Ziyan, zjun, ZPaC, lihongzhang, wangdongxu
Contributions of any kind are welcome!
# MindSpore 0.5.2-beta Release Notes
## Major Features and Improvements
### Ascend 910 Training and Inference Framework
- New models
    - DenseNet121: a convolution-based neural network for the task of image classification on the ImageNet 2012 dataset.
## Bugfixes
- Models
    - VGG16, Alexnet, GoogleNet: optimize the networks for better performance.([!5539](https://gitee.com/mindspore/mindspore/pulls/5539))
    - YOLOV3: fix a yolov3_darknet53 dataset bug.([!5658](https://gitee.com/mindspore/mindspore/pulls/5658))
## Contributors
Thanks goes to these wonderful people:
Alexey Shevlyakov, avakh, baihuawei, BowenK, buxue, caifubi, caojian05, Cathy Wong, changzherui, chenfei, chengxianbin, chenhaozhe, chenjianping, chentingting, chenzomi, chujinjin, Danish Farid, dayschan, dengwentao, dinghao, etone-chan, fangzehua, fary86, geekun, Giancarlo Colmenares, gong chen, gukecai, guohongzilong, hangangqiang, heleiwang, hesham, He Wei, hexia, hongxing, huangdongrun, huanghui, islam_amin, Jamie Nisbet, Jesse Lee, jiangjinsheng, jiangzhiwen, jinyaohui, jjfeing, jojobugfree, Jonathan Yan, jonyguo, Junhan Hu, Kang, kingfo, kouzhenzhong, kpy, kswang, laiyongqiang, leopz, liangzelang, lichenever, lihongkang, Li Hongzhang, lilei, limingqi107, lirongzhen1, liubuyu, liuchongming74, liuwenhao4, liuxiao, Lixia Chen, liyanliu, liyong, lizhenyu, lvliang, Mahdi, Margaret_wangrui, meixiaowei, ms_yan, nhussain, ougongchang, panfengfeng, panyifeng, peilinwang, Peilin Wang, pkuliuliu, qianlong, rick_sanchez, shibeiji, Shida He, shijianning, simson, sunsuodong, suteng, Tinazhang, Tron Zhang, unknown, VectorSL, wandongdong, wangcong, wangdongxu, wangdongxu6, wanghua, wangnan39, Wei Luning, wenchunjiang, wenkai, wilfChen, WilliamLian, wukesong, Xian Weizhao, Xiaoda Zhang, xiefangqi, xulei2020, xunxue, xutianchun, Yang, yanghaitao, yanghaitao1, yanghaoran, yangjie, yangjie159, YangLuo, Yanjun Peng, yankai, yanzhenxiang2020, yao_yf, Yi Huaijie, yoonlee666, yuchaojie, yujianfeng, zhangzhongpeng, zhangdengcheng, Zhang Qinghua, zhangyinxia, zhangz0911gm, zhaojichen, zhaoting, zhaozhenlong, zhoufeng, zhouneng, zhousiyi, Zirui Wu, Ziyan, zjun, ZPaC, lihongzhang, wangdongxu
Contributions of any kind are welcome!
# MindSpore 0.5.0-beta Release Notes
## Major Features and Improvements
### Ascend 910 Training and Inference Framework
- New models
    - ResNeXt50: a simple, highly modularized network architecture using aggregated residual transformations for image classification on the ImageNet 2012 dataset.
    - MASS: a pre-training method for sequence-to-sequence language generation tasks on Text Summarization and Conversational Response Generation, using the News Crawl 2007-2017 dataset, the Gigaword corpus, and the Cornell movie dialog corpus.
    - Transformer: a neural network architecture for language understanding on the WMT 2014 English-German dataset.
    - GCN: Graph Convolutional Networks for classification of nodes in a graph on the Cora and Citeseer datasets.
    - GAT: an attention-based graph neural network for node classification on the Cora and Citeseer datasets.
- Frontend and user interface
    - Support tensor value retrieval and assignment with mixed tensor indices in graph mode (see the sketch after this list).
    - Support tensor comparison, the len operator, constexpr syntax, and tensor-index value retrieval and assignment in PyNative mode.
    - Support converting MindSpore IR to pb format for inference models.
    - Support the Print operator writing data directly to the hard disk.
    - Add a double recursive programming solution for very fast parallel-strategy search in automatic parallel mode.
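As a quick, hedged illustration of the new graph-mode tensor indexing, here is a minimal sketch; the cell, shapes, and index values are hypothetical examples, not taken from the release:

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor, context

context.set_context(mode=context.GRAPH_MODE)

class IndexNet(nn.Cell):
    """Toy cell exercising tensor-index value retrieval and assignment."""
    def construct(self, x, index):
        y = x[index]      # value retrieval through a tensor index
        x[index] = y * 2  # assignment through a tensor index
        return x

net = IndexNet()
x = Tensor(np.arange(12).reshape(3, 4).astype(np.float32))
index = Tensor(np.array([0, 2], np.int32))
print(net(x, index))
```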
- User interfaces change log
    - Allow the learning rate of AdamWeightDecayDynamicLR and Lamb to be 0 ([!1826](https://gitee.com/mindspore/mindspore/pulls/1826))
    - Restrict the entire network's input parameters to be Tensors ([!1967](https://gitee.com/mindspore/mindspore/pulls/1967))
    - Turn shape and dtype into attributes instead of interfaces ([!1919](https://gitee.com/mindspore/mindspore/pulls/1919))
    - Delete multitypefungraph ([!2116](https://gitee.com/mindspore/mindspore/pulls/2116))
    - Refactor the callback module in an encapsulated way; use `_CallbackManager` instead of `_build_callbacks` ([!2236](https://gitee.com/mindspore/mindspore/pulls/2236))
    - Delete EmbeddingLookup ([!2163](https://gitee.com/mindspore/mindspore/pulls/2163))
    - Add model_type to checkpoint ([!2517](https://gitee.com/mindspore/mindspore/pulls/2517))
- Executor and performance optimization
    - Support heterogeneous execution on CPU and Ascend devices, verified with the Wide&Deep model.
    - Support quantization training of MobileNetV2, LeNet, and ResNet50 on Ascend 910.
    - Support a new fusion architecture, which can perform fusion optimization across graphs and kernels to improve execution speed.
- Data processing, augmentation, and save format
    - Support data processing pipeline performance profiling.
    - Support public dataset loading, such as CLUE and COCO.
    - Support more text processing, such as more tokenizers and vocab data.
    - Support MindRecord padded data.
### Other Hardware Support
- GPU platform
    - New models supported: Bert / Wide&Deep.
    - Support setting max device memory (see the sketch below).
- CPU platform
    - New model supported: LSTM.
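A minimal sketch of the new GPU memory cap; the `"3.5GB"` value is an arbitrary example:

```python
from mindspore import context

# Cap how much device memory MindSpore may allocate on the GPU.
context.set_context(device_target="GPU", max_device_memory="3.5GB")
```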
## Bugfixes
- Models
    - Bert: move Bert from `example` to `model_zoo`, optimize network for better performance. ([!1902](https://gitee.com/mindspore/mindspore/pulls/1902))
    - VGG16: move VGG16 from `example` to `model_zoo`, optimize network for better accuracy. ([!2645](https://gitee.com/mindspore/mindspore/pulls/2645))
    - AlexNet: modify parameter settings to improve accuracy. ([!1364](https://gitee.com/mindspore/mindspore/pulls/2370))
    - Wide&Deep: move Wide&Deep from `example` to `model_zoo`, optimize network for better performance. ([!2221](https://gitee.com/mindspore/mindspore/pulls/2221))
- Python API
    - Fix bug in auto cast ([!1766](https://gitee.com/mindspore/mindspore/pulls/1766))
    - Fix bug of register_backward_hook ([!2148](https://gitee.com/mindspore/mindspore/pulls/2148))
    - Fix bug of tuple args in PyNative mode ([!1878](https://gitee.com/mindspore/mindspore/pulls/1878))
    - Fix bug of checking numbers of arguments and graph parameters ([!1701](https://gitee.com/mindspore/mindspore/pulls/1701))
- Executor
    - Fix bug of loading input data repeatedly in PyNative mode ([!1966](https://gitee.com/mindspore/mindspore/pulls/1966))
    - Fix bug that a list could not be used as input in PyNative mode ([!1765](https://gitee.com/mindspore/mindspore/pulls/1765))
    - Fix bug of kernel select ([!2103](https://gitee.com/mindspore/mindspore/pulls/2103))
    - Fix bug of pattern matching for batchnorm fusion in the case of auto mixed precision ([!1851](https://gitee.com/mindspore/mindspore/pulls/1851))
    - Fix bug in generating HCCL kernel info ([!2393](https://gitee.com/mindspore/mindspore/pulls/2393))
- GPU platform
    - Fix bug that made the summary feature invalid ([!2173](https://gitee.com/mindspore/mindspore/pulls/2173))
- Data processing
    - Fix bug of Cifar dataset reading ([!2096](https://gitee.com/mindspore/mindspore/pulls/2096))
    - Fix bug of C++ behavior in RandomCropAndResize ([!2026](https://gitee.com/mindspore/mindspore/pulls/2026))
    - Fix the bug of MindRecord shuffle ([!2420](https://gitee.com/mindspore/mindspore/pulls/2420))
- Third party
    - SQLite: update SQLite to 3.32.2 to handle [CVE-2020-11656](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11656), [CVE-2020-13871](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13871), [CVE-2020-11655](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11655), [CVE-2020-9327](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9327), [CVE-2020-13630](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13630), [CVE-2020-15358](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15358), [CVE-2020-13631](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13631), [CVE-2020-13632](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13632), [CVE-2020-13434](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13434), and [CVE-2020-13435](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13435).
## Contributors
Thanks goes to these wonderful people:
Alexey Shevlyakov, avakh, baihuawei, BowenK, buxue, caifubi, caojian05, Cathy Wong, changzherui, chenfei, chengxianbin, chenhaozhe, chenjianping, chentingting, chenzomi, chujinjin, Danish Farid, dayschan, dengwentao, dinghao, etone-chan, fangzehua, fary86, geekun, Giancarlo Colmenares, gong chen, gukecai, guohongzilong, hangangqiang, heleiwang, hesham, He Wei, hexia, hongxing, huangdongrun, huanghui, islam_amin, Jamie Nisbet, Jesse Lee, jiangjinsheng, jiangzhiwen, jinyaohui, jjfeing, jojobugfree, Jonathan Yan, jonyguo, Junhan Hu, Kang, kingfo, kouzhenzhong, kpy, kswang, laiyongqiang, leopz, liangzelang, lichenever, lihongkang, Li Hongzhang, lilei, limingqi107, lirongzhen1, liubuyu, liuchongming74, liuwenhao4, liuxiao, Lixia Chen, liyanliu, liyong, lizhenyu, lvliang, Mahdi, Margaret_wangrui, meixiaowei, ms_yan, nhussain, ougongchang, panfengfeng, panyifeng, peilinwang, Peilin Wang, pkuliuliu, qianlong, rick_sanchez, shibeiji, Shida He, shijianning, simson, sunsuodong, suteng, Tinazhang, Tron Zhang, unknown, VectorSL, wandongdong, wangcong, wangdongxu, wangdongxu6, wanghua, wangnan39, Wei Luning, wenchunjiang, wenkai, wilfChen, WilliamLian, wukesong, Xian Weizhao, Xiaoda Zhang, xiefangqi, xulei2020, xunxue, xutianchun, Yang, yanghaitao, yanghaitao1, yanghaoran, yangjie, yangjie159, YangLuo, Yanjun Peng, yankai, yanzhenxiang2020, yao_yf, Yi Huaijie, yoonlee666, yuchaojie, yujianfeng, zhangzhongpeng, zhangdengcheng, Zhang Qinghua, zhangyinxia, zhangz0911gm, zhaojichen, zhaoting, zhaozhenlong, zhoufeng, zhouneng, zhousiyi, Zirui Wu, Ziyan, zjun, ZPaC, lihongzhang, wangdongxu
Contributions of any kind are welcome!
# MindSpore 0.3.1-alpha Release Notes
## Major Features and Improvements
### Ascend 910 Training and Inference Framework
- Frontend and User Interface
    - Independent model init interface.
- Data processing, augmentation, and save format
    - Support sample padding for MindDataset.
## Bugfixes
- Python API
    - Fix bugs in the LARS optimizer ([!1894](https://gitee.com/mindspore/mindspore/pulls/1894))
- Data processing
    - Fix accuracy problem of RandomCropDecodeResize ([!2340](https://gitee.com/mindspore/mindspore/pulls/2340))
# MindSpore 0.3.0-alpha Release Notes
## Major Features and Improvements
### Ascend 910 Training and Inference Framework
- New models
    - DeepFM: a factorization-machine based neural network for CTR prediction on the Criteo dataset.
    - DeepLabV3: significantly improves over our previous DeepLab versions without DenseCRF post-processing, attaining comparable performance with other state-of-the-art models on the PASCAL VOC 2007 semantic image segmentation benchmark.
    - Faster-RCNN: towards real-time object detection with region proposal networks on the COCO 2017 dataset.
    - SSD: a single-stage object detection method on the COCO 2017 dataset.
    - GoogLeNet: a deep convolutional neural network architecture, codenamed Inception V1, for classification and detection on the CIFAR-10 dataset.
    - Wide&Deep: jointly trained wide linear models and deep neural networks for recommender systems on the Criteo dataset.
- Frontend and User Interface
    - Complete NumPy advanced indexing methods. Support value retrieval and assignment through tensor indices.
    - Some optimizers support separate parameter groups. Different parameter groups can set different `learning_rate` and `weight_decay` values (see the sketch after this list).
    - Support setting a submodule's logging level independently, e.g. you can set the logging level of module `A` to warning and that of module `B` to info.
    - Support compiling weights according to shape, to solve the problem of large memory overhead.
    - Add some operator implementations and grammar support in PyNative mode, to be consistent with graph mode.
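A minimal sketch of grouped optimizer parameters, assuming the group dicts use the `params`, `lr`, and `weight_decay` keys; the toy network below is a hypothetical example:

```python
import mindspore.nn as nn

class ToyNet(nn.Cell):
    """Hypothetical two-layer network used only to produce parameter groups."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.fc = nn.Dense(8, 2)

    def construct(self, x):
        return self.fc(self.conv(x).mean(axis=(2, 3)))

net = ToyNet()
conv_params = [p for p in net.trainable_params() if 'conv' in p.name]
other_params = [p for p in net.trainable_params() if 'conv' not in p.name]
group_params = [
    {'params': conv_params, 'weight_decay': 0.01},  # decay only the conv weights
    {'params': other_params, 'lr': 0.1},            # higher LR for the rest
]
opt = nn.Momentum(group_params, learning_rate=0.01, momentum=0.9)
```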
- User interfaces change log
    - Learning rate and weight decay support group params ([!637](https://gitee.com/mindspore/mindspore/pulls/637))
    - Support weights to be compiled according to shape ([!1015](https://gitee.com/mindspore/mindspore/pulls/1015))
    - Delete some context params ([!1100](https://gitee.com/mindspore/mindspore/pulls/1100))
    - ImageSummary/ScalarSummary/TensorSummary/HistogramSummary ([!1329](https://gitee.com/mindspore/mindspore/pulls/1329)) ([!1425](https://gitee.com/mindspore/mindspore/pulls/1425))
- Executor and Performance Optimization
    - Support evaluation during training, so that training accuracy can be obtained easily (see the callback sketch after this list).
    - Enable second-order optimization for ResNet50, which can achieve 75.9% accuracy in 45 epochs (ResNet50 @ ImageNet).
    - Optimize the PyNative implementation and improve its execution performance.
    - Optimize the summary record implementation and improve its performance.
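One way to use evaluation-during-training is a small custom callback; a hedged sketch, assuming `model`, `train_ds`, and `eval_ds` are an existing `Model` and datasets (all placeholders here):

```python
from mindspore.train.callback import Callback

class EvalPerEpoch(Callback):
    """Hypothetical callback: run model.eval() at the end of each epoch."""
    def __init__(self, model, eval_ds):
        super().__init__()
        self.model = model
        self.eval_ds = eval_ds

    def epoch_end(self, run_context):
        cb_params = run_context.original_args()
        acc = self.model.eval(self.eval_ds, dataset_sink_mode=False)
        print("epoch", cb_params.cur_epoch_num, "eval:", acc)

# Usage (model, train_ds, eval_ds defined elsewhere):
# model.train(10, train_ds, callbacks=[EvalPerEpoch(model, eval_ds)])
```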
- Data processing, augmentation, and save format
    - Support simple text processing, such as tokenizer/buildvocab/lookup.
    - Support padded batches.
    - Support splitting or concatenating datasets.
    - Support MindDataset reading from a file list.
### Other Hardware Support
- GPU platform
    - New models supported: MobileNetV2, MobileNetV3.
    - Support mixed precision training.
    - Support device memory swapping.
## Bugfixes
- Python API
    - Fix an exception in the broadcast input data type check ([!712](https://gitee.com/mindspore/mindspore/pulls/712))
    - Fix issue of AssignSub returning value 0 ([!1036](https://gitee.com/mindspore/mindspore/pulls/1036))
    - Fix issue that Conv2dBackpropInput bprop should return 3 items instead of 2 ([!1001](https://gitee.com/mindspore/mindspore/pulls/1001))
    - Fix sens shape error of TrainOneStepWithLossScaleCell ([!1050](https://gitee.com/mindspore/mindspore/pulls/1050))
    - Fix BatchNormGrad operator ([!1344](https://gitee.com/mindspore/mindspore/pulls/1344))
- Executor
    - Fix Dropout, TopK and AddN errors in PyNative mode ([!1285](https://gitee.com/mindspore/mindspore/pulls/1285), [!1138](https://gitee.com/mindspore/mindspore/pulls/1138), [!1033](https://gitee.com/mindspore/mindspore/pulls/1033)).
    - Fix memory leaks after execution in PyNative mode ([!1201](https://gitee.com/mindspore/mindspore/pulls/1201)).
    - Fix HCCL failure in some special scenes ([!1204](https://gitee.com/mindspore/mindspore/pulls/1204), [!1252](https://gitee.com/mindspore/mindspore/pulls/1252)).
    - Fix SSD network failure when kernel info for the Select operator could not be found ([!1449](https://gitee.com/mindspore/mindspore/pulls/1449)).
    - Fix TopK operator selection strategy bug between AICore and AICPU ([!1367](https://gitee.com/mindspore/mindspore/pulls/1367)).
    - Fix unequal input memory size of the 'assign' op in control sink mode when assigning data from one child graph to another ([!802](https://gitee.com/mindspore/mindspore/pulls/802)).
    - Fix AllReduce IR inconsistency ([!989](https://gitee.com/mindspore/mindspore/pulls/989)).
- GPU platform
    - Fix summary for gradient collection ([!1364](https://gitee.com/mindspore/mindspore/pulls/1364))
    - Fix the slice operator ([!1489](https://gitee.com/mindspore/mindspore/pulls/1489))
- Data processing
    - Fix memory problems of sub-process GeneratorDataset ([!907](https://gitee.com/mindspore/mindspore/pulls/907))
    - Fix data-fetch timeout when training LeNet on the CIFAR-10 dataset ([!1391](https://gitee.com/mindspore/mindspore/pulls/1391))
## Contributors
Thanks goes to these wonderful people:
Alexey Shevlyakov, Amir Lashkari, anthony, baihuawei, biffex, buxue, caifubi, candanzg, caojian05, Cathy Wong, changzherui, chenfei, chengxianbin, chenhaozhe, chenzomi, chujinjin, cristoval, dengwentao, eric, etone-chan, fary86, gaojing, gengdongjie, gongchen, guohongzilong, guozhijian, heleiwang, hesham, He Wei, Hoai Linh Tran, hongxing, huangdongrun, huanghui, Jamie Nisbet, Jesse Lee, jiangjinsheng, jiangzhiwen, jinyaohui, jjfeing, jonwe, jonyguo, Junhan Hu, Kang, kingfo, kswang, laiyongqiang, leopz, lichenever, lihongkang, limingqi107, liubuyu, liuliyan2, liuwenhao4, liuxiao, liuxiao, liyong, lizhenyu, lvliang, Margaret_wangrui, meixiaowei, ms_yan, Nat Sutyanyong, ougongchang, panfengfeng, panyifeng, Peilin Wang, peixu_ren, qianlong, rick_sanchez, seatea, sheng, shijianning, simson, sunsuodong, Tinazhang, VectorSL, wandongdong, wangcong, wanghua, wangnan39, Wei Luning, wenchunjiang, wilfChen, WilliamLian, wsc, wukesong, wuxuejian, Xiaoda Zhang, xiefangqi, xulei2020, Yang, yangjie159, yangruoqi713, yangyongjie, yangzhenzhang, Yanjun Peng, yanzhenxiang2020, yao_yf, Yi Huaijie, yoonlee666, yujianfeng, YuJianfeng, yvetteliu, zhangdengcheng, Zhang Qinghua, zhangz0911gm, zhaojichen, zhaoting, zhaozhenlong, zhoufeng, zhouneng, zhousiyi, zhouyuanshen, Zirui Wu, Ziyan, zjun, ZPaC, lihongzhang
Contributions of any kind are welcome!
# MindSpore 0.2.0-alpha Release Notes
## Major Features and Improvements
### Ascend 910 Training and Inference Framework
- New models
    - MobileNetV2: Inverted Residuals and Linear Bottlenecks.
    - ResNet101: Deep Residual Learning for Image Recognition.
- Frontend and User Interface
    - Support for all Python comparison operators.
    - Support for the math operators `**`, `//` and `%`, and for other Python operators such as `and`, `or`, `not`, `is`, `is not`, `in` and `not in`.
    - Support for gradients of functions with variable arguments.
    - Support for tensor indexing assignment for certain index types.
    - Support for dynamic learning rates (see the sketch after this list).
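A minimal sketch of one dynamic-learning-rate pattern, assuming the optimizer accepts a one-dimensional per-step schedule; the schedule values and toy network are arbitrary examples:

```python
import numpy as np
import mindspore.nn as nn
import mindspore.common.dtype as mstype
from mindspore import Tensor

net = nn.Dense(4, 2)  # toy network
# One learning-rate value per training step: 10-step linear warm-up, then constant.
total_steps = 100
lr_each_step = np.concatenate([np.linspace(0.0, 0.1, 10),
                               np.full(total_steps - 10, 0.1)]).astype(np.float32)
opt = nn.Momentum(net.trainable_params(),
                  learning_rate=Tensor(lr_each_step, mstype.float32),
                  momentum=0.9)
```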
- User interfaces change log
    - DepthwiseConv2dNative, DepthwiseConv2dNativeBackpropFilter, DepthwiseConv2dNativeBackpropInput ([!424](https://gitee.com/mindspore/mindspore/pulls/424))
    - ReLU6, ReLU6Grad ([!224](https://gitee.com/mindspore/mindspore/pulls/224))
    - GeneratorDataset ([!183](https://gitee.com/mindspore/mindspore/pulls/183))
    - VOCDataset ([!477](https://gitee.com/mindspore/mindspore/pulls/477))
    - MindDataset, PKSampler ([!514](https://gitee.com/mindspore/mindspore/pulls/514))
    - map ([!506](https://gitee.com/mindspore/mindspore/pulls/506))
    - Conv ([!226](https://gitee.com/mindspore/mindspore/pulls/226))
    - Adam ([!253](https://gitee.com/mindspore/mindspore/pulls/253))
    - `_set_fusion_strategy_by_idx`, `_set_fusion_strategy_by_size` ([!189](https://gitee.com/mindspore/mindspore/pulls/189))
    - CheckpointConfig ([!122](https://gitee.com/mindspore/mindspore/pulls/122))
    - Constant ([!54](https://gitee.com/mindspore/mindspore/pulls/54))
- Executor and Performance Optimization
    - Support parallel execution of data prefetching and forward/backward computing.
    - Support parallel execution of gradient aggregation and forward/backward computing in distributed training scenarios.
    - Support operator fusion optimization.
    - Optimize the compilation process and improve performance.
- Data processing, augmentation, and save format
    - Support multi-processing of GeneratorDataset/PyFunc for high performance.
    - Support variable batch size.
    - Support new dataset operators, such as filter, skip, take and TextLineDataset (see the sketch after this list).
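A hedged sketch of the new dataset operators chained into one pipeline; the file name `train.txt` and the predicate are hypothetical:

```python
import mindspore.dataset as ds

# Each line of the (hypothetical) text file becomes one sample in column "text".
data = ds.TextLineDataset("train.txt", shuffle=False)
data = data.skip(1)    # drop a header line
data = data.take(100)  # keep at most 100 samples
data = data.filter(predicate=lambda line: len(str(line)) > 0,
                   input_columns=["text"])
for row in data.create_dict_iterator():
    print(row["text"])
```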
### Other Hardware Support
- GPU platform
    - Use a dynamic memory pool by default on GPU.
    - Support parallel execution of computation and communication.
    - Support continuous address allocation by the memory pool.
- CPU platform
    - Support for Windows 10.
## Bugfixes
- Models
    - Fix mixed-precision bug for the VGG16 model ([!629](https://gitee.com/mindspore/mindspore/pulls/629)).
- Python API
    - Fix ControlDepend operator bugs on CPU and GPU ([!396](https://gitee.com/mindspore/mindspore/pulls/396)).
    - Fix ArgMinWithValue operator bugs ([!338](https://gitee.com/mindspore/mindspore/pulls/338)).
    - Fix Dense operator bugs in PyNative mode ([!276](https://gitee.com/mindspore/mindspore/pulls/276)).
    - Fix MatMul operator bugs in PyNative mode ([!288](https://gitee.com/mindspore/mindspore/pulls/288)).
- Executor
    - Fix operator selection bugs and make it general ([!300](https://gitee.com/mindspore/mindspore/pulls/300)).
    - Fix memory reuse bug for the GetNext op ([!291](https://gitee.com/mindspore/mindspore/pulls/291)).
- GPU platform
    - Fix memory allocation in multi-graph scenarios ([!444](https://gitee.com/mindspore/mindspore/pulls/444)).
    - Fix bias_add_grad under FP16 precision ([!598](https://gitee.com/mindspore/mindspore/pulls/598)).
    - Fix support for FP16 kernels on the NVIDIA 1080 Ti ([!571](https://gitee.com/mindspore/mindspore/pulls/571)).
    - Fix parsing of tuple type parameters ([!316](https://gitee.com/mindspore/mindspore/pulls/316)).
- Data processing
    - Fix TypeError about being unable to pickle `mindspore._c_dataengine.DEPipeline` objects ([!434](https://gitee.com/mindspore/mindspore/pulls/434)).
    - Add TFRecord file verification ([!406](https://gitee.com/mindspore/mindspore/pulls/406)).
## Contributors
Thanks goes to these wonderful people:
Alexey_Shevlyakov, Cathy, Chong, Hoai, Jonathan, Junhan, JunhanHu, Peilin, SanjayChan, StrawNoBerry, VectorSL, Wei, WeibiaoYu, Xiaoda, Yanjun, YuJianfeng, ZPaC, Zhang, ZhangQinghua, ZiruiWu, amongo, anthonyaje, anzhengqi, biffex, caifubi, candanzg, caojian05, casgj, cathwong, ch-l, chang, changzherui, chenfei, chengang, chenhaozhe, chenjianping, chentingting, chenzomi, chujinjin, dengwentao, dinghao, fanglei, fary86, flywind, gaojing, geekun, gengdongjie, ghzl, gong, gongchen, gukecai, guohongzilong, guozhijian, gziyan, h.farahat, hesham, huangdongrun, huanghui, jiangzhiwen, jinyaohui, jjfeing, jojobugfree, jonathan_yan, jonyguo, jzw, kingfo, kisnwang, laiyongqiang, leonwanghui, lianliguang, lichen, lichenever, limingqi107, liubuyu, liuxiao, liyong, liyong126, lizhenyu, lupengcheng, lvliang, maoweiyong, ms_yan, mxm, ougongchang, panfengfeng, panyifeng, pengyanjun, penn, qianlong, seatea, simson, suteng, thlinh, vlne-v1, wangchengke, wanghua, wangnan39, wangqiuliang, wenchunjiang, wenkai, wukesong, xiefangqi, xulei, yanghaitao, yanghaoran, yangjie159, yangzhenzhang, yankai10, yanzhenxiang2020, yao_yf, yoonlee666, zhangbuxue, zhangz0911gm, zhangzheng, zhaojichen, zhaoting, zhaozhenlong, zhongligeng, zhoufeng, zhousiyi, zjun, zyli2020, yuhuijun, limingqi107, lizhenyu, chenweifeng.
Contributions of any kind are welcome!
# MindSpore 0.1.0-alpha Release Notes
## Main Features
### Ascend 910 Training and Inference Framework
- Recommended OS: Ubuntu 16.04 (or later) or EulerOS 2.5 or EulerOS 2.8
- Python version: 3.7.5
- Preset models
    - ResNet-50: residual structure-based convolutional neural network (CNN) for image classification, which is widely used.
    - AlexNet: classic CNN for image classification, which achieved historic results in the ImageNet LSVRC-2012 competition.
    - LeNet: classic CNN for image classification, which was proposed by Yann LeCun.
    - VGG16: classic CNN for image classification, which was proposed by the Oxford Visual Geometry Group.
    - YoloV3: real-time object detection network.
    - NEZHA: BERT-based Chinese pre-training network produced by Huawei Noah's Ark Laboratory.
- Execution modes (see the sketch after this list)
    - Graph mode: provides graph optimization methods such as memory overcommitment, IR fusion, and buffer fusion to achieve optimal execution performance.
    - PyNative mode: single-step execution mode, facilitating process debugging.
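A minimal sketch of selecting an execution mode; the device target shown is an arbitrary example:

```python
from mindspore import context

# Whole-graph compilation for best performance:
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

# Or single-step execution for easier debugging:
# context.set_context(mode=context.PYNATIVE_MODE, device_target="Ascend")
```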
- Debugging capability and methods
    - Save CheckPoint and Summary data during training.
    - Support asynchronous printing.
    - Dump the computing data.
    - Support profiling analysis of execution performance.
- Distributed execution
    - Support AllReduce, AllGather, and BroadCast collective communication.
    - AllReduce data parallelism: each device obtains different training data, which accelerates the overall training process.
    - Collective communication-based layerwise parallelism: models are divided and allocated to different devices to solve the problem of insufficient memory for large models and to improve training speed.
    - Automatic parallel mode: a better data- and model-parallel mode can be predicted based on the cost model. It is recommended that this mode be used on ResNet-series networks.
- Automatic differentiation
    - Implement automatic differentiation based on source-to-source transformation.
    - Support distributed scenarios and automatic insertion of reverse communication operators.
- Data processing, augmentation, and save format
    - Load common datasets such as ImageNet, MNIST, CIFAR-10, and CIFAR-100.
    - Support common data loading pipeline operations such as shuffle, repeat, batch, map, and sampler (see the pipeline sketch after this list).
    - Provide basic operator libraries to cover common CV scenarios.
    - Support users customizing Python data augmentation operators through the PyFunc mechanism.
    - Support access to user-defined datasets through the GeneratorDataset mechanism.
    - Provide the MindSpore data format, with data aggregation and storage, random-access examples, data partitioning, efficient parallel reads, user-defined indexes, and dataset search.
    - Convert user datasets to the MindSpore data format.
    - After data processing and augmentation, provide training applications in feed and graph modes.
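A hedged sketch of such a pipeline; the MNIST directory and the rescaling map function are hypothetical examples:

```python
import mindspore.dataset as ds

# Hypothetical local MNIST directory.
data = ds.MnistDataset("MNIST_Data/train")
data = data.shuffle(buffer_size=10000)
# A Python function as a map operation: rescale pixel values to [0, 1].
data = data.map(operations=[lambda img: img / 255.0], input_columns="image")
data = data.batch(32, drop_remainder=True)
data = data.repeat(1)
```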
- FP32/FP16 mixed precision computation, supporting automatic and manual configuration (see the sketch below)
- Provide common operators such as nn, math, and array operators, which can be customized.
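A minimal sketch of enabling automatic mixed precision through the high-level `Model` wrapper; the network, loss, and optimizer are toy placeholders, and the `amp_level` comments are assumptions about the policy levels:

```python
import mindspore.nn as nn
from mindspore.train import Model

net = nn.Dense(16, 10)  # toy network
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level selects the automatic mixed-precision policy ("O0" pure FP32,
# higher levels cast more computation to FP16); manual configuration is
# also possible, e.g. by casting individual cells with Cell.to_float().
model = Model(net, loss_fn=loss, optimizer=opt, metrics={"acc"}, amp_level="O2")
```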
### Inference Deployment
- Deploy models in MindSpore format on the Ascend 310 platform for inference.
- Save models in ONNX format.
- Support saving models in LITE format and running models based on the lightweight inference framework.
    - Recommended OS: Android 4.3 or later
    - Supported network type: LeNet
    - Provide generalized operators generated by TVM and operators tuned for specific networks.
### Other Hardware Support
- GPU platform training
    - Recommended OS: Ubuntu 16.04
    - CUDA version: 9.2 or 10.1
    - cuDNN version: 7.6 or later
    - Python version: 3.7.5
    - NCCL version: 2.4.8-1
    - OpenMPI version: 3.1.5
    - Supported models: AlexNet, LeNet, and LSTM
    - Supported datasets: MNIST and CIFAR-10
    - Support data parallelism.
- CPU platform training
    - Recommended OS: Ubuntu 16.04
    - Python version: 3.7.5
    - Supported model: LeNet
    - Supported dataset: MNIST
    - Provide only the standalone (single-machine) version.
## Peripherals and Tools
- [MindSpore Official Website](https://www.mindspore.cn/)
- [MindInsight Visualization Debugging and Optimization](https://gitee.com/mindspore/mindinsight)
- [MindArmour Model Security Hardening Package](https://gitee.com/mindspore/mindarmour)
- [GraphEngine Computational Graph Engine](https://gitee.com/mindspore/graphengine)