
MindSpore is a new open source deep learning training/inference framework that
can be used for mobile, edge, and cloud scenarios. MindSpore is designed to
provide a friendly development experience and efficient execution for data
scientists and algorithm engineers, native support for the Ascend AI
processor, and software-hardware co-optimization. At the same time, MindSpore,
as a global AI open source community, aims to further advance the development
and enrichment of the AI software/hardware application ecosystem.
For more details, please check out our Architecture Guide.
There are currently three automatic differentiation techniques in mainstream deep learning frameworks: conversion based on static compute graphs, conversion based on dynamic compute graphs, and conversion based on source code transformation.
TensorFlow adopted static computational graphs in the early days, whereas PyTorch uses dynamic ones. Static graphs can leverage static compilation technology to optimize network performance; however, building or debugging a network with them is complicated. Dynamic graphs are very convenient to use, but it is difficult to achieve extreme performance optimization with them.
MindSpore takes a third path: automatic differentiation based on source code transformation. On the one hand, it supports automatic differentiation of automatic control flow, so building models is as convenient as in PyTorch. On the other hand, MindSpore can perform static compilation optimization on neural networks to achieve great performance.
The implementation of MindSpore automatic differentiation can be understood as symbolic differentiation of the program itself. Because MindSpore IR is a functional intermediate representation, it has an intuitive correspondence with composite functions in basic algebra: the derivative of a composite function built from arbitrary basic functions can be derived mechanically. Each primitive operation in MindSpore IR corresponds to a basic function, and these primitives can be composed to express more complex flow control.
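As a minimal sketch of what this looks like in user code, the composite `GradOperation` transform takes a cell and returns a new function that computes its gradient. The constructor signature below follows the 0.1.0-alpha-era module layout and may differ in other releases:

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.ops import composite as C
from mindspore.ops import operations as P

class Square(nn.Cell):
    """Computes y = x * x, so dy/dx = 2x."""
    def __init__(self):
        super(Square, self).__init__()
        self.mul = P.Mul()

    def construct(self, x):
        return self.mul(x, x)

net = Square()
# GradOperation transforms a cell into a function that returns the gradient
# with respect to its input. The 'grad' name argument matches the
# 0.1.0-alpha-era signature; newer releases drop it.
grad = C.GradOperation('grad')
x = Tensor(np.array([1.0, 2.0, 3.0]).astype(np.float32))
print(grad(net)(x))  # expected: [2. 4. 6.]
```

Because the transformation happens on the IR rather than on a runtime trace, the resulting gradient function is itself a program that the compiler can optimize statically.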
The goal of MindSpore automatic parallelism is to build a training method that combines data parallelism, model parallelism, and hybrid parallelism. It automatically selects the least-cost model splitting strategy to achieve automatic distributed parallel training.
At present, MindSpore uses a fine-grained parallel strategy of splitting operators: each operator in the graph is split across the cluster to complete the parallel operation. The splitting strategy may be very complicated, but as a developer who favors Pythonic code, you don't need to care about the underlying implementation, as long as the top-level API computes efficiently.
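As an illustrative sketch of how this surfaces in user code, enabling automatic parallelism is a context setting; the call below uses the `set_auto_parallel_context` API, `device_num=8` is just an example value, and the exact set of supported options varies across releases:

```python
import mindspore.context as context

# Target an Ascend cluster; "GPU" works analogously.
context.set_context(device_target="Ascend")
# Ask the framework to search for a least-cost operator splitting strategy
# across 8 devices; the model definition itself stays unchanged.
context.set_auto_parallel_context(parallel_mode="auto_parallel", device_num=8)
```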
MindSpore offers build options across multiple backends:
| Hardware Platform | Operating System | Status |
|---|---|---|
| Ascend910 | Ubuntu-x86 | ✔️ |
| | EulerOS-x86 | ✔️ |
| | EulerOS-aarch64 | ✔️ |
| GPU CUDA 9.2 | Ubuntu-x86 | ✔️ |
| GPU CUDA 10.1 | Ubuntu-x86 | ✔️ |
| CPU | Ubuntu-x86 | ✔️ |
| | Windows-x86 | ✔️ |
For installation using pip, take the CPU and Ubuntu-x86 build version as an example:

Download the `.whl` package from the MindSpore download page, and install it:

```bash
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/cpu/ubuntu-x86/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl
```

Run the following command to verify the install:

```bash
python -c 'import mindspore'
```
The MindSpore docker image is hosted on Docker Hub;
currently the supported containerized build options are as follows:
| Hardware Platform | Docker Image Repository | Tag | Description |
|---|---|---|---|
| CPU | mindspore/mindspore-cpu | 0.1.0-alpha | Production environment with pre-installed MindSpore 0.1.0-alpha CPU release. |
| | | devel | Development environment provided to build MindSpore (with CPU backend) from source; refer to https://www.mindspore.cn/install/en for installation details. |
| | | runtime | Runtime environment provided to install the MindSpore binary package with CPU backend. |
| GPU | mindspore/mindspore-gpu | 0.1.0-alpha | Production environment with pre-installed MindSpore 0.1.0-alpha GPU release. |
| | | devel | Development environment provided to build MindSpore (with GPU CUDA 10.1 backend) from source; refer to https://www.mindspore.cn/install/en for installation details. |
| | | runtime | Runtime environment provided to install the MindSpore binary package with GPU backend. |
| Ascend | — | — | Coming soon. |
CPU
For the CPU backend, you can directly pull and run the image using the commands below:

```bash
docker pull mindspore/mindspore-cpu:0.1.0-alpha
docker run -it mindspore/mindspore-cpu:0.1.0-alpha python -c 'import mindspore'
```
GPU
For the GPU backend, please make sure the nvidia-container-toolkit has been installed in advance. Here are some installation guidelines for Ubuntu users:

```bash
DISTRIBUTION=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$DISTRIBUTION/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit nvidia-docker2
sudo systemctl restart docker
```
Then you can pull and run the image using the commands below:

```bash
docker pull mindspore/mindspore-gpu:0.1.0-alpha
docker run -it --runtime=nvidia --privileged=true mindspore/mindspore-gpu:0.1.0-alpha /bin/bash
```
To test if the docker image works, please execute the Python code below and check the output:

```python
import numpy as np
import mindspore.context as context
from mindspore import Tensor
from mindspore.ops import functional as F

context.set_context(device_target="GPU")

x = Tensor(np.ones([1, 3, 3, 4]).astype(np.float32))
y = Tensor(np.ones([1, 3, 3, 4]).astype(np.float32))
print(F.tensor_add(x, y))
```
```text
[[[ 2.  2.  2.  2.],
  [ 2.  2.  2.  2.],
  [ 2.  2.  2.  2.]],

 [[ 2.  2.  2.  2.],
  [ 2.  2.  2.  2.],
  [ 2.  2.  2.  2.]],

 [[ 2.  2.  2.  2.],
  [ 2.  2.  2.  2.],
  [ 2.  2.  2.  2.]]]
```
If you want to learn more about the building process of MindSpore docker images,
please check out the docker folder for details.
See the Quick Start
to implement image classification.
For more details about the installation guide, tutorials, and APIs, please see the
User Documentation.
Check out how MindSpore Open Governance works.
Contributions are welcome. See our Contributor Wiki for more details.
For release notes, see our RELEASE.