# MindSpore 1.3.0

## MindSpore 1.3.0 Release Notes

### Major Features and Improvements

#### NewModels

- [STABLE] Add CV models on Ascend: CPM, FCN8s, SSD-ResNet50-FPN, EAST, AdvancedEast.
- [STABLE] Add NLP models on Ascend: DGU, TextCNN, SentimentNet (LSTM).
- [STABLE] Add CV models on GPU: Faster-RCNN, FCN8s, CycleGAN, AdvancedEast.
- [BETA] Add CV models on Ascend: CycleGAN, PoseNet, SimCLR.
- [BETA] Add NLP models on Ascend: DGU, EmoTect, Senta, KT-Net.
- [BETA] Add NLP models on GPU: DGU, EmoTect.
- [BETA] Add EPP-MVSNet (GPU): a novel deep learning network for 3D reconstruction from multi-view stereo, which held first place on the Tanks & Temples leaderboard as of April 1, 2021.

#### FrontEnd

- [STABLE] The default running mode of MindSpore is changed to Graph mode.
- [STABLE] Support the `run_check` interface to check whether MindSpore is working properly.
- [STABLE] Support saving custom information in the checkpoint file.
- [STABLE] The `Normal` class adds a `mean` parameter.
- [STABLE] Support exporting YOLOv3-DarkNet53 and YOLOv4 models to ONNX.
- [STABLE] Support exporting 40+ operators to ONNX.
- [STABLE] The Metric module supports `set_indexes` to select the inputs of `update` in the specified order.
- [STABLE] Switch `_Loss` to the external API `LossBase` as the base class of losses.

#### Auto Parallel

- [STABLE] Add distributed operators: Select/GatherNd/ScatterUpdate/TopK.
- [STABLE] Support basic pipeline parallelism.
- [STABLE] Optimize the sharding strategy setting of `Gather`.
- [STABLE] Optimize mixed-precision and shared-parameter scenarios.
- [STABLE] Optimize distributed prediction scenarios.

#### Executor

- [STABLE] Support a unified runtime in the GPU and CPU backends.
- [STABLE] MindSpore GPU supports CUDA 11 with cuDNN 8.
- [STABLE] MindSpore GPU inference performance is optimized by integrating TensorRT.
- [STABLE] MindSpore built on one Linux distribution can now be used on multiple Linux distributions with the same CPU architecture (e.g. EulerOS, Ubuntu, CentOS).
- [STABLE] MindSpore now supports Ascend 310 and Ascend 910 environments with one single wheel package, and provides an alternate binary package for Ascend 310 specifically.
- [STABLE] MindSpore Ascend supports group convolution.

#### DataSet

- [STABLE] Support caching over MindRecord datasets.
- [STABLE] Support a new shuffle mode for MindRecord datasets.
- [STABLE] Support a cropper tool for MindSpore Lite that lets users customize the MindData binary file according to their script.
- [STABLE] Support a shared-memory mechanism to optimize the multi-processing efficiency of GeneratorDataset/Map/Batch.
- [STABLE] Add features to the GNN dataset to support molecular dynamics simulation scenarios.

#### FederatedLearning

- [STABLE] Support a cross-device federated learning framework.
- [STABLE] Support FL-Server distributed networking, including TCP and HTTP communication.
- [STABLE] Support FL-Server distributed federated aggregation, with auto-scaling and fault tolerance.
- [STABLE] Develop the FL-Client framework.
- [STABLE] Support local differential privacy algorithms.
- [STABLE] Support an MPC-based secure aggregation algorithm.
- [STABLE] MindSpore Lite device-side inference and training interconnect with FL-Client.

#### Running Data Recorder

- [STABLE] Provide records of multi-stage computational graphs, memory allocation information, and graph execution order when a "Launch kernel failed" error occurs. (CPU)

#### GraphKernel Fusion

- [STABLE] Add options to control the optimization level.
- [STABLE] Enhance the generalization ability on GPU. GraphKernel is enabled by default in 40+ networks covering NLP, CV, Recommender, NAS, and Audio. Results show their throughput is significantly improved, and enabling GraphKernel in your own network is recommended.

#### Debug

- [STABLE] Unified dump function.
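The new `run_check` interface listed under FrontEnd gives a one-line installation sanity check. A minimal sketch, wrapped in an import guard so it degrades gracefully where MindSpore is absent (the `check_mindspore` wrapper is illustrative, not part of the MindSpore API):

```python
def check_mindspore():
    """Try to import MindSpore and run its built-in self-check."""
    try:
        import mindspore
    except ImportError:
        return "MindSpore is not installed"
    # run_check() performs a small computation and reports whether
    # the installed version works in the current environment.
    mindspore.run_check()
    return "run_check completed"

print(check_mindspore())
```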
### API Change

#### Backwards Incompatible Change

##### Python API

###### `mindspore.dataset.Dataset.device_que` interface removes the unused parameter `prefetch_size` ([!18973](https://gitee.com/mindspore/mindspore/pulls/18973))

Previously, `device_que` had a `prefetch_size` parameter intended to define the number of records to prefetch ahead of the user's request. However, this parameter was never used and therefore had no effect. It is removed in 1.3.0; users can set this configuration via [mindspore.dataset.config.set_prefetch_size](https://www.mindspore.cn/docs/api/en/r1.3/api_python/mindspore.dataset.config.html#mindspore.dataset.config.set_prefetch_size).
| 1.2.1 | 1.3.0 |
| :--- | :--- |
| ```python device_que(prefetch_size=None, send_epoch_end=True, create_data_info_queue=False) ``` | ```python device_que(send_epoch_end=True, create_data_info_queue=False) ``` |

| 1.2.1 | 1.3.0 |
| :--- | :--- |
| ```python THOR(net, learning_rate, damping, momentum, weight_decay=0.0, loss_scale=1.0, batch_size=32, use_nesterov=False, decay_filter=lambda x: x.name not in [], split_indices=None) ``` | ```python thor(net, learning_rate, damping, momentum, weight_decay=0.0, loss_scale=1.0, batch_size=32, use_nesterov=False, decay_filter=lambda x: x.name not in [], split_indices=None, enable_clip_grad=False, frequency=100) ``` |

| 1.1.1 | 1.2.0 |
| :--- | :--- |
| ```python >>> import mindspore.numpy as mnp >>> import numpy >>> >>> nd_array = numpy.array([1,2,3]) >>> tensor = mnp.asarray(nd_array) # this line cannot be parsed in GRAPH mode ``` | ```python >>> import mindspore.numpy as mnp >>> import numpy >>> >>> tensor = mnp.asarray([1,2,3]) # this line can be parsed in GRAPH mode ``` |

| 1.1.1 | 1.2.0 |
| :--- | :--- |
| ```python >>> import mindspore.numpy as np >>> >>> a = np.ones((3,3)) >>> b = np.ones((3,3)) >>> out = np.zeros((3,3)) >>> where = np.asarray([[True, False, True],[False, False, True],[True, True, True]]) >>> res = np.add(a, b, out=out, where=where) # `out` cannot be used as a reference, therefore it is misleading ``` | ```python >>> import mindspore.numpy as np >>> >>> a = np.ones((3,3)) >>> b = np.ones((3,3)) >>> out = np.zeros((3,3)) >>> where = np.asarray([[True, False, True],[False, False, True],[True, True, True]]) >>> res = np.add(a, b) >>> out = np.where(where, x=res, y=out) # instead of np.add(a, b, out=out, where=where) ``` |
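The rewrite above preserves semantics: masking the addition with `where` while keeping `out` elsewhere is equivalent to computing the full sum and selecting with `np.where`. A small sketch with standard NumPy (standing in here for `mindspore.numpy`, whose interface mirrors it) illustrates the equivalence:

```python
import numpy as np

a = np.ones((3, 3))
b = np.ones((3, 3))
out = np.zeros((3, 3))
where = np.asarray([[True, False, True],
                    [False, False, True],
                    [True, True, True]])

# Old style: write a + b into `out` only where the mask is True.
expected = out.copy()
np.add(a, b, out=expected, where=where)

# Replacement style: compute the full sum, then select with np.where.
res = np.add(a, b)
merged = np.where(where, res, out)

assert np.array_equal(expected, merged)
```

Where the mask is `True` the result holds `a + b` (here `2.0`); elsewhere it keeps the original `out` values (`0.0`).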
| 1.1.1 | 1.2.0 |
| :--- | :--- |
| ```python In some side-effect scenarios, we need to ensure the execution order of operators. In order to ensure that operator A is executed before operator B, it is recommended to insert the Depend operator between operators A and B. Previously, the ControlDepend operator was used to control the execution order. Since the ControlDepend operator is deprecated from version 1.1, it is recommended to use the Depend operator instead. The replacement method is as follows: a = A(x) ---> a = A(x) b = B(y) ---> y = Depend(y, a) ControlDepend(a, b) ---> b = B(y) ``` | ```python In most scenarios, operators with IO side effects or memory side effects are executed according to the user's semantics. In some scenarios, if two operators A and B have no data dependency but A must be executed before B, we recommend using Depend to specify their execution order. The usage is as follows: a = A(x) ---> a = A(x) b = B(y) ---> y = Depend(y, a) ---> b = B(y) ``` |
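The idea behind `Depend` is to turn a desired execution order into a data dependency: B's input is routed through a node that also consumes A's output. A plain-Python stand-in (illustrative only; `A`, `B`, and `depend` here are toy functions, not MindSpore operators):

```python
log = []

def A(x):
    log.append("A")
    return x + 1

def B(y):
    log.append("B")
    return y * 2

def depend(value, expr):
    # Return `value` unchanged; consuming `expr` as an argument makes
    # any scheduler that honors data dependencies run A before B.
    return value

a = A(3)
y = depend(4, a)  # y now (artificially) depends on A's result
b = B(y)
assert log == ["A", "B"]
```

`depend` changes nothing about the value (`b == 8` either way); it only pins the order in which the two operators may be scheduled.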
| 1.1.1 | 1.2.0 |
| :--- | :--- |
| ```python @C.add_flags(has_effect=True) def construct(self, *inputs): ... loss = self.network(*inputs) init = self.alloc_status() self.clear_status(init) ... ``` | ```python def construct(self, *inputs): ... loss = self.network(*inputs) init = self.alloc_status() init = F.depend(init, loss) clear_status = self.clear_status(init) ... ``` |
| 1.1.1 | 1.2.0 |
| :--- | :--- |
| ```cmake add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0) ``` | ```cmake add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0) # the old ABI is supported add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=1) # the new ABI is supported, too # write nothing, use new ABI as default ``` |
| 1.1.1 | 1.2.0 |
| :--- | :--- |
| ```cpp GlobalContext::SetGlobalDeviceTarget(kDeviceTypeAscend310); // set device target to Ascend 310 GlobalContext::SetGlobalDeviceID(0); // set device id to 0 auto model_context = std::make_shared<ModelContext>(); ``` | ```cpp auto model_context = std::make_shared<Context>(); // device target and device id are now configured on the context object itself ``` |
| 1.1.1 | 1.2.0 |
| :--- | :--- |
| ```cpp try { auto graph = Serialization::LoadModel(model_file_path, kMindIR); } catch (...) { ... } ``` | ```cpp Graph graph; auto ret = Serialization::Load(model_file_path, kMindIR, &graph); if (ret != kSuccess) { ... } ``` |

| 1.1.1 | 1.2.0 |
| :--- | :--- |
| ```cpp Model net(net_cell, model_context); auto ret = net.Build(); if (ret != kSuccess) { ... } ``` | ```cpp Model net; auto ret = net.Build(net_cell, model_context); if (ret != kSuccess) { ... } ``` |

| 1.1.1 | 1.2.0 |
| :--- | :--- |
| ```cpp auto tensor = MSTensor::CreateTensor(xxx, xxx, ...); auto name = tensor.Name(); ``` | ```cpp auto tensor = MSTensor::CreateTensor(xxx, xxx, ...); auto name = tensor->Name(); MSTensor::DestroyTensorPtr(tensor); ``` |
| 1.1.1 | 1.2.0 |
| :--- | :--- |
| ```python >>> import numpy as np >>> from mindspore import Tensor, nn >>> >>> x = Tensor(np.ones((2, 3)).astype(np.float32)) >>> y = Tensor(np.ones((3, 4)).astype(np.float32)) >>> nn.MatMul()(x, y) ``` | ```python >>> import numpy as np >>> from mindspore import Tensor, ops >>> >>> x = Tensor(np.ones((2, 3)).astype(np.float32)) >>> y = Tensor(np.ones((3, 4)).astype(np.float32)) >>> ops.matmul(x, y) ``` |
| lite_types.h |
| :--- |
| ```cpp namespace mindspore::lite { /// \brief CpuBindMode defined for holding bind cpu strategy argument. typedef enum { NO_BIND, /**< no bind */ HIGHER_CPU, /**< bind higher cpu first */ MID_CPU /**< bind middle cpu first */ } CpuBindMode; /// \brief DeviceType defined for holding user's preferred backend. typedef enum { DT_CPU, /**< CPU device type */ DT_GPU, /**< GPU device type */ DT_NPU /**< NPU device type */ } DeviceType; } // namespace mindspore::lite ``` |
| CreateTensor |
| :--- |
| ```cpp /// \brief Create a MSTensor. /// /// \return Pointer to an instance of MindSpore Lite MSTensor. static MSTensor *CreateTensor(const std::string &name, TypeId type, const std::vector<int> &shape, const void *data, size_t data_len); ``` |

| set_shape |
| :--- |
| ```cpp /// \brief Set the shape of MSTensor. virtual void set_shape(const std::vector<int> &shape) = 0; ``` |
| data |
| :--- |
| ```cpp /// \brief Get the pointer of data in MSTensor. /// /// \note The data pointer can be used to both write and read data in MSTensor. No memory buffer will be /// allocated. /// /// \return the pointer points to data in MSTensor. virtual void *data() = 0; ``` |

| DimensionSize() |
| :--- |
| ```cpp /// \brief Get size of the dimension of the MindSpore Lite MSTensor indexed by the parameter index. /// /// \param[in] index Define index of dimension returned. /// /// \return Size of dimension of the MindSpore Lite MSTensor. virtual int DimensionSize(size_t index) const = 0; ``` |

| 1.1.0 | 1.2.0 |
| :--- | :--- |
| ```cpp namespace mindspore::lite { /// \brief Allocator defined a memory pool for malloc memory and free memory dynamically. /// /// \note List public class and interface for reference. class Allocator; } ``` | ```cpp namespace mindspore { /// \brief Allocator defined a memory pool for malloc memory and free memory dynamically. /// /// \note List public class and interface for reference. class Allocator; } ``` |
| 1.1.0 | 1.1.1 |
| :--- | :--- |
| ```python >>> import mindspore.ops as ops >>> >>> avg_pool = ops.AvgPool(ksize=2, padding='same') >>> max_pool = ops.MaxPool(ksize=2, padding='same') >>> max_pool_with_argmax = ops.MaxPoolWithArgmax(ksize=2, padding='same') ``` | ```python >>> import mindspore.ops as ops >>> >>> avg_pool = ops.AvgPool(kernel_size=2, pad_mode='same') >>> max_pool = ops.MaxPool(kernel_size=2, pad_mode='same') >>> max_pool_with_argmax = ops.MaxPoolWithArgmax(kernel_size=2, pad_mode='same') ``` |

| 1.1.0 | 1.1.1 |
| :--- | :--- |
| ```python >>> import mindspore.ops as ops >>> >>> add = ops.TensorAdd() ``` | ```python >>> import mindspore.ops as ops >>> >>> add = ops.Add() ``` |

| 1.1.0 | 1.1.1 |
| :--- | :--- |
| ```python >>> import mindspore.ops as ops >>> >>> gelu = ops.Gelu() >>> gelu_grad = ops.GeluGrad() >>> fast_gelu = ops.FastGelu() >>> fast_gelu_grad = ops.FastGeluGrad() ``` | ```python >>> import mindspore.ops as ops >>> >>> gelu = ops.GeLU() >>> gelu_grad = ops.GeLUGrad() >>> fast_gelu = ops.FastGeLU() >>> fast_gelu_grad = ops.FastGeLUGrad() ``` |

| 1.1.0 | 1.1.1 |
| :--- | :--- |
| ```python >>> import mindspore.ops as ops >>> >>> gather = ops.GatherV2() ``` | ```python >>> import mindspore.ops as ops >>> >>> gather = ops.Gather() ``` |
| 1.1.0 | 1.1.1 |
| :--- | :--- |
| ```python >>> import mindspore.ops as ops >>> >>> pack = ops.Pack() >>> unpack = ops.Unpack() ``` | ```python >>> import mindspore.ops as ops >>> >>> stack = ops.Stack() >>> unstack = ops.Unstack() ``` |
| 1.1.0 | 1.1.1 |
| :--- | :--- |
| ```python Note: This operation does not work in `PYNATIVE_MODE`. ``` | ```python Note: This operation does not work in `PYNATIVE_MODE`. `ControlDepend` is deprecated from version 1.1 and will be removed in a future version, use `Depend` instead. ``` |
| 1.1.0 | 1.1.1 |
| :--- | :--- |
| ```python Depend is used for processing side-effect operations. Inputs: - **value** (Tensor) - the real value to return for depend operator. - **expr** (Expression) - the expression to execute with no outputs. Outputs: Tensor, the value passed by last operator. Supported Platforms: ``Ascend`` ``GPU`` ``CPU`` ``` | ```python Depend is used for processing dependency operations. In some side-effect scenarios, we need to ensure the execution order of operators. In order to ensure that operator A is executed before operator B, it is recommended to insert the Depend operator between operators A and B. Previously, the ControlDepend operator was used to control the execution order. Since the ControlDepend operator will be deprecated from version 1.2, it is recommended to use the Depend operator instead. The replacement method is as follows: a = A(x) ---> a = A(x) b = B(y) ---> y = Depend(y, a) ControlDepend(a, b) ---> b = B(y) Inputs: - **value** (Tensor) - the real value to return for depend operator. - **expr** (Expression) - the expression to execute with no outputs. Outputs: Tensor, the value passed by last operator. Supported Platforms: ``Ascend`` ``GPU`` ``CPU`` Examples: >>> import numpy as np >>> import mindspore >>> import mindspore.nn as nn >>> import mindspore.ops.operations as P >>> from mindspore import Tensor >>> class Net(nn.Cell): ... def __init__(self): ... super(Net, self).__init__() ... self.softmax = P.Softmax() ... self.depend = P.Depend() ... ... def construct(self, x, y): ... mul = x - y ... y = self.depend(y, mul) ... ret = self.softmax(y) ... return ret ... >>> x = Tensor(np.ones([4, 5]), dtype=mindspore.float32) >>> y = Tensor(np.ones([4, 5]), dtype=mindspore.float32) >>> net = Net() >>> output = net(x, y) >>> print(output) [[0.2 0.2 0.2 0.2 0.2] [0.2 0.2 0.2 0.2 0.2] [0.2 0.2 0.2 0.2 0.2] [0.2 0.2 0.2 0.2 0.2]] ``` |
| 1.1.0 | 1.1.1 |
| :--- | :--- |
| ```c++ namespace ms = mindspore::api; ``` | ```c++ namespace ms = mindspore; ``` |

| 1.1.0 | 1.1.1 |
| :--- | :--- |
| ```c++ ms::Context::Instance().SetDeviceTarget(ms::kDeviceTypeAscend310).SetDeviceID(0); ``` | ```c++ ms::GlobalContext::SetGlobalDeviceTarget(ms::kDeviceTypeAscend310); ms::GlobalContext::SetGlobalDeviceID(0); ``` |

| 1.1.0 | 1.1.1 |
| :--- | :--- |
| ```c++ ms::Tensor a; ``` | ```c++ ms::MSTensor a; ``` |

| 1.1.0 | 1.1.1 |
| :--- | :--- |
| ```c++ ms::Model model(graph_cell); model.Build(model_options); ``` | ```c++ ms::Model model(graph_cell, model_context); model.Build(); ``` |
| 1.1.0 | 1.1.1 |
| :--- | :--- |
| ```c++ std::vector<std::string> names; std::vector<ms::DataType> types; std::vector<std::vector<int64_t>> shapes; std::vector<size_t> mem_sizes; model.GetInputsInfo(&names, &types, &shapes, &mem_sizes); std::cout << "Input 0 name: " << names[0] << std::endl; ``` | ```c++ auto inputs = model.GetInputs(); std::cout << "Input 0 name: " << inputs[0].Name() << std::endl; ``` |

| 1.1.0 | 1.1.1 |
| :--- | :--- |
| ```c++ std::vector<ms::Buffer> inputs; std::vector<ms::Buffer> outputs; model.Predict(inputs, &outputs); ``` | ```c++ std::vector<ms::MSTensor> inputs; std::vector<ms::MSTensor> outputs; model.Predict(inputs, &outputs); ``` |
| 1.0.1 | 1.1.0 |
| :--- | :--- |
| ```python >>> import mindspore.nn as nn >>> from mindspore.common import initializer >>> from mindspore import dtype as mstype >>> >>> def conv3x3(in_channels, out_channels): >>> weight = initializer('XavierUniform', shape=(3, 2, 32, 32), dtype=mstype.float32) >>> return nn.Conv2d(in_channels, out_channels, weight_init=weight, has_bias=False, pad_mode="same") ``` | ```python >>> import mindspore.nn as nn >>> from mindspore.common.initializer import XavierUniform >>> >>> #1) using string >>> def conv3x3(in_channels, out_channels): >>> return nn.Conv2d(in_channels, out_channels, weight_init='XavierUniform', has_bias=False, pad_mode="same") >>> >>> #2) using subclass of class Initializer >>> def conv3x3(in_channels, out_channels): >>> return nn.Conv2d(in_channels, out_channels, weight_init=XavierUniform(), has_bias=False, pad_mode="same") ``` |
| 1.0.1 | 1.1.0 |
| :--- | :--- |
| ```python >>> import mindspore.nn as nn >>> from mindspore.common import initializer >>> from mindspore.common.initializer import XavierUniform >>> >>> weight_init_1 = XavierUniform(gain=1.1) >>> conv1 = nn.Conv2d(3, 6, weight_init=weight_init_1) >>> weight_init_2 = XavierUniform(gain=1.1) >>> conv2 = nn.Conv2d(6, 10, weight_init=weight_init_2) ``` | ```python >>> import mindspore.nn as nn >>> from mindspore.common import initializer >>> from mindspore.common.initializer import XavierUniform >>> >>> weight_init = XavierUniform(gain=1.1) >>> conv1 = nn.Conv2d(3, 6, weight_init=weight_init) >>> conv2 = nn.Conv2d(6, 10, weight_init=weight_init) ``` |

| 1.0.1 | 1.1.0 |
| :--- | :--- |
| ```python >>> from mindspore import nn >>> >>> start = 1 >>> stop = 10 >>> num = 5 >>> linspace = nn.LinSpace(start, stop, num) >>> output = linspace() ``` | ```python >>> import mindspore >>> from mindspore import Tensor >>> from mindspore import ops >>> >>> linspace = ops.LinSpace() >>> start = Tensor(1, mindspore.float32) >>> stop = Tensor(10, mindspore.float32) >>> num = 5 >>> output = linspace(start, stop, num) ``` |

| 1.0.1 | 1.1.0 |
| :--- | :--- |
| ```python >>> from mindspore.nn import Adam >>> >>> net = LeNet5() >>> optimizer = Adam(filter(lambda x: x.requires_grad, net.get_parameters())) >>> optimizer.sparse_opt.add_prim_attr("primitive_target", "CPU") ``` | ```python >>> from mindspore.nn import Adam >>> >>> net = LeNet5() >>> optimizer = Adam(filter(lambda x: x.requires_grad, net.get_parameters())) >>> optimizer.target = 'CPU' ``` |

| 1.0.1 | 1.1.0 |
| :--- | :--- |
| ```python >>> from mindspore.train.quant import quant >>> >>> network = LeNetQuant() >>> inputs = Tensor(np.ones([1, 1, 32, 32]), mindspore.float32) >>> quant.export(network, inputs, file_name="lenet_quant.mindir", file_format='MINDIR') lenet_quant.mindir ``` | ```python >>> from mindspore import export >>> >>> network = LeNetQuant() >>> inputs = Tensor(np.ones([1, 1, 32, 32]), mindspore.float32) >>> export(network, inputs, file_name="lenet_quant", file_format='MINDIR', quant_mode='AUTO') lenet_quant.mindir ``` |

| 1.0.1 | 1.1.0 |
| :--- | :--- |
| ```python >>> import mindspore.nn as nn >>> >>> dense = nn.Dense(1, 1, activation='relu') ``` | ```python >>> import mindspore.nn as nn >>> import mindspore.ops as ops >>> >>> dense = nn.Dense(1, 1, activation=nn.ReLU()) >>> dense = nn.Dense(1, 1, activation=ops.ReLU()) ``` |

| 1.0.1 | 1.1.0 |
| :--- | :--- |
| ```python >>> from mindspore import Tensor >>> >>> Tensor((1,2,3)).size() >>> Tensor((1,2,3)).dim() ``` | ```python >>> from mindspore import Tensor >>> >>> Tensor((1,2,3)).size >>> Tensor((1,2,3)).ndim ``` |

| 1.0.1 | 1.1.0 |
| :--- | :--- |
| ```python >>> from mindspore.nn import EmbeddingLookup >>> >>> input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32) >>> result = EmbeddingLookup(4,2)(input_indices) >>> print(result.shape) (2, 2, 2) ``` | ```python >>> from mindspore.nn import EmbeddingLookup >>> >>> input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32) >>> result = EmbeddingLookup(4,2)(input_indices, sparse=False) >>> print(result.shape) (2, 2, 2) ``` |

| 1.0.1 | 1.1.0 |
| :--- | :--- |
| ```python >>> import mindspore.nn.probability.bijector as msb >>> >>> power = 2 >>> bijector = msb.PowerTransform(power=power) ``` | ```python >>> import mindspore.nn.probability.bijector as msb >>> >>> power = 2.0 >>> bijector = msb.PowerTransform(power=power) ``` |

| 1.0.1 | 1.1.0 |
| :--- | :--- |
| ```python >>> import mindspore.nn.probability.bijector as msb >>> from mindspore import dtype as mstype >>> >>> bijector = msb.GumbelCDF(loc=0.0, scale=1.0, dtype=mstype.float32) ``` | ```python >>> import mindspore.nn.probability.bijector as msb >>> >>> bijector = msb.GumbelCDF(loc=0.0, scale=1.0) ``` |

| 1.0.1 | 1.1.0 |
| :--- | :--- |
| ```python >>> from mindspore.nn.layer.quant import Conv2dBnAct, DenseBnAct ``` | ```python >>> from mindspore.nn import Conv2dBnAct, DenseBnAct ``` |