
Update docker image section in README.md

Signed-off-by: leonwanghui <leon.wanghui@huawei.com>
tags/v0.2.0-alpha
leonwanghui 6 years ago
commit 87f8b9fba2

1 changed file with 56 additions and 10 deletions
README.md

@@ -1,7 +1,7 @@
![MindSpore Logo](docs/MindSpore-logo.png "MindSpore logo")
============================================================

-- [What is MindSpore?](#what-is-MindSpore)
+- [What is MindSpore?](#what-is-mindspore)
- [Automatic Differentiation](#automatic-differentiation)
- [Automatic Parallel](#automatic-parallel)
- [Installation](#installation)
@@ -53,7 +53,7 @@ The goal of MindSpore automatic parallel is to build a training method that comb

<img src="docs/Automatic-parallel.png" alt="Automatic Parallel" width="600"/>

-At present, MindSpore uses a fine-grained parallel strategy of splitting operators, that is, each operator in the figure is splited into a cluster to complete parallel operations. The splitting strategy during this period may be very complicated, but as a developer advocating Pythonic, you don't need to care about the underlying implementation, as long as the top-level API compute is efficient.
+At present, MindSpore uses a fine-grained parallel strategy of operator splitting; that is, each operator in the figure is split across the cluster to complete the operation in parallel. The splitting strategy involved can be very complicated, but as a developer who favors Pythonic code, you don't need to care about the underlying implementation as long as the top-level API computes efficiently.
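As a toy illustration of the idea (plain NumPy, not MindSpore's actual implementation), splitting an elementwise operator into slices that could each run on a separate device might look like this; `parallel_add` and `num_workers` are hypothetical names for the sketch:

```python
import numpy as np

def parallel_add(x, y, num_workers=4):
    """Toy sketch of operator splitting: slice the inputs,
    run the add on each slice (standing in for a device),
    then concatenate the partial results."""
    x_parts = np.array_split(x, num_workers)
    y_parts = np.array_split(y, num_workers)
    # In a real framework each slice would be dispatched to a device;
    # here we just compute the partial results sequentially.
    partials = [xp + yp for xp, yp in zip(x_parts, y_parts)]
    return np.concatenate(partials)

x = np.ones((8, 4), dtype=np.float32)
y = np.ones((8, 4), dtype=np.float32)
out = parallel_add(x, y)
# The split computation agrees with the unsplit operator.
assert np.array_equal(out, x + y)
```

The point of the top-level API is that the user writes only the unsplit expression; the framework chooses the splitting strategy.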

## Installation

@@ -70,9 +70,9 @@ MindSpore offers build options across multiple backends:
| GPU CUDA 10.1 | Ubuntu-x86 | ✔️ |
| CPU | Ubuntu-x86 | ✔️ |

-For installation using pip, take `Ubuntu-x86` and `CPU` build version as an example:
+For installation using `pip`, take `Ubuntu-x86` and `CPU` build version as an example:

-1. Download whl from [MindSpore website](https://www.mindspore.cn/), and install the package.
+1. Download the `.whl` file from the [MindSpore download page](https://www.mindspore.cn/versions/en), and install the package.

```
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/cpu/ubuntu-x86/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl
@@ -96,14 +96,60 @@ currently the containerized build options are supported as follows:
| Hardware Platform | Docker Image URL |
| :---------------- | :--------------- |
| CPU | `mindspore/mindspore-cpu:0.1.0-alpha` |
-| GPU CUDA 9.2 | `mindspore/mindspore-cuda9.2:0.1.0-alpha` |
-| GPU CUDA 10.1 | `mindspore/mindspore-cuda10.1:0.1.0-alpha` |
+| GPU | `mindspore/mindspore-gpu:0.1.0-alpha` |
+| Ascend | <center>—</center> |

-Take `CPU` for example, you can directly pull the image using the below command:
-```
-docker pull mindspore/mindspore-cpu:0.1.0-alpha
-```
* CPU

For the `CPU` backend, you can pull and run the image directly using the commands below:
```
docker pull mindspore/mindspore-cpu:0.1.0-alpha
docker run -it mindspore/mindspore-cpu:0.1.0-alpha python -c 'import mindspore'
```

* GPU

For the `GPU` backend, please make sure `nvidia-container-toolkit` has been installed in advance. The following steps install it on Ubuntu:
```
DISTRIBUTION=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$DISTRIBUTION/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit nvidia-docker2
sudo systemctl restart docker
```

Then you can pull and run the image using the below command:
```
docker pull mindspore/mindspore-gpu:0.1.0-alpha
docker run -it --runtime=nvidia --privileged=true mindspore/mindspore-gpu:0.1.0-alpha /bin/bash
```

To verify that the docker image works, execute the Python code below and check the output:
```python
import numpy as np
from mindspore import Tensor
from mindspore.ops import functional as F
import mindspore.context as context

context.set_context(device_target="GPU")
x = Tensor(np.ones([1,3,3,4]).astype(np.float32))
y = Tensor(np.ones([1,3,3,4]).astype(np.float32))
print(F.tensor_add(x, y))
```
```
[[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]],

[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]],

[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]]]
```

To learn more about the build process of the MindSpore docker images,
please check out the `docker` folder for details.
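As a purely hypothetical sketch of what such an image definition could look like (the real Dockerfiles are in the repository's `docker` folder; the base image and package choices below are assumptions, with the wheel URL taken from the pip instructions above):

```dockerfile
# Hypothetical sketch only; see the repo's `docker` folder for the real Dockerfiles.
FROM ubuntu:18.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3.7 python3-pip python3.7-distutils \
    && rm -rf /var/lib/apt/lists/*

# Install the published CPU wheel (same URL as in the pip installation section).
RUN python3.7 -m pip install \
    https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/cpu/ubuntu-x86/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl

CMD ["python3.7"]
```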

