Merge pull request !717 from leonwanghui/0.2.0-update
@@ -1,7 +1,7 @@
 
 ============================================================
 
-- [What is MindSpore?](#what-is-mindspore)
+- [What Is MindSpore?](#what-is-mindspore)
 - [Automatic Differentiation](#automatic-differentiation)
 - [Automatic Parallel](#automatic-parallel)
 - [Installation](#installation)
@@ -29,7 +29,7 @@ enrichment of the AI software/hardware application ecosystem.
 <img src="docs/MindSpore-architecture.png" alt="MindSpore Architecture" width="600"/>
 
-For more details please check out our [Architecture Guide](https://www.mindspore.cn/docs/en/0.1.0-alpha/architecture.html).
+For more details please check out our [Architecture Guide](https://www.mindspore.cn/docs/en/0.2.0-alpha/architecture.html).
 
 ### Automatic Differentiation
@@ -76,7 +76,7 @@ For installation using `pip`, take `CPU` and `Ubuntu-x86` build version as an ex
 1. Download whl from [MindSpore download page](https://www.mindspore.cn/versions/en), and install the package.
 
     ```
-    pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/cpu/ubuntu-x86/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl
+    pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.2.0-alpha/MindSpore/cpu/ubuntu-x86/mindspore-0.2.0-cp37-cp37m-linux_x86_64.whl
    ```
 
 2. Run the following command to verify the install.
@@ -96,20 +96,22 @@ currently the containerized build options are supported as follows:
 
 | Hardware Platform | Docker Image Repository | Tag | Description |
 | :---------------- | :---------------------- | :-- | :---------- |
-| CPU | `mindspore/mindspore-cpu` | `0.1.0-alpha` | Production environment with pre-installed MindSpore `0.1.0-alpha` CPU release. |
+| CPU | `mindspore/mindspore-cpu` | `x.y.z` | Production environment with pre-installed MindSpore `x.y.z` CPU release. |
 | | | `devel` | Development environment provided to build MindSpore (with `CPU` backend) from the source, refer to https://www.mindspore.cn/install/en for installation details. |
 | | | `runtime` | Runtime environment provided to install MindSpore binary package with `CPU` backend. |
-| GPU | `mindspore/mindspore-gpu` | `0.1.0-alpha` | Production environment with pre-installed MindSpore `0.1.0-alpha` GPU release. |
+| GPU | `mindspore/mindspore-gpu` | `x.y.z` | Production environment with pre-installed MindSpore `x.y.z` GPU release. |
 | | | `devel` | Development environment provided to build MindSpore (with `GPU CUDA10.1` backend) from the source, refer to https://www.mindspore.cn/install/en for installation details. |
-| | | `runtime` | Runtime environment provided to install MindSpore binary package with `GPU` backend. |
+| | | `runtime` | Runtime environment provided to install MindSpore binary package with `GPU CUDA10.1` backend. |
 | Ascend | <center>—</center> | <center>—</center> | Coming soon. |
+
+> **NOTICE:** For the GPU `devel` docker image, it is NOT suggested to directly install the whl package after building from the source; instead, we strongly RECOMMEND you transfer and install the whl package inside the GPU `runtime` docker image.
 
 * CPU
 
-    For `CPU` backend, you can directly pull and run the image using the below command:
+    For `CPU` backend, you can directly pull and run the latest stable image using the below command:
 
     ```
-    docker pull mindspore/mindspore-cpu:0.1.0-alpha
-    docker run -it mindspore/mindspore-cpu:0.1.0-alpha python -c 'import mindspore'
+    docker pull mindspore/mindspore-cpu:0.2.0-alpha
+    docker run -it mindspore/mindspore-cpu:0.2.0-alpha python -c 'import mindspore'
     ```
 
 * GPU
@@ -124,20 +126,21 @@ currently the containerized build options are supported as follows:
     sudo systemctl restart docker
     ```
 
-    Then you can pull and run the image using the below command:
+    Then you can pull and run the latest stable image using the below command:
 
     ```
-    docker pull mindspore/mindspore-gpu:0.1.0-alpha
-    docker run -it --runtime=nvidia --privileged=true mindspore/mindspore-gpu:0.1.0-alpha /bin/bash
+    docker pull mindspore/mindspore-gpu:0.2.0-alpha
+    docker run -it --runtime=nvidia --privileged=true mindspore/mindspore-gpu:0.2.0-alpha /bin/bash
     ```
 
     To test if the docker image works, please execute the python code below and check the output:
 
     ```python
     import numpy as np
-    import mindspore.context as context
     from mindspore import Tensor
     from mindspore.ops import functional as F
+    import mindspore.context as context
 
     context.set_context(device_target="GPU")
 
     x = Tensor(np.ones([1,3,3,4]).astype(np.float32))
     y = Tensor(np.ones([1,3,3,4]).astype(np.float32))
     print(F.tensor_add(x, y))
     ```
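For orientation, the element-wise result of the `tensor_add` smoke test above can be sketched with plain NumPy (a stand-in for the MindSpore op, which runs on GPU in the snippet):

```python
import numpy as np

# Same shapes as in the docker smoke test above.
x = np.ones([1, 3, 3, 4], dtype=np.float32)
y = np.ones([1, 3, 3, 4], dtype=np.float32)

# tensor_add is element-wise addition, so every entry becomes 2.0.
z = x + y
print(z.shape)  # (1, 3, 3, 4)
```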
@@ -161,7 +164,7 @@ please check out `docker` folder for the details.
 ## Quickstart
 
-See the [Quick Start](https://www.mindspore.cn/tutorial/en/0.1.0-alpha/quick_start/quick_start.html)
+See the [Quick Start](https://www.mindspore.cn/tutorial/en/0.2.0-alpha/quick_start/quick_start.html)
 to implement the image classification.
 
 ## Docs
@@ -1,3 +1,75 @@
+# Release 0.2.0-alpha
+
+## Major Features and Improvements
+
+### Ascend 910 Training and Inference Framework
+* New models
+    * MobileNetV2: Inverted Residuals and Linear Bottlenecks.
+    * ResNet101: Deep Residual Learning for Image Recognition.
+* Frontend and User Interface
+    * Support for all Python comparison operators.
+    * Support for math operators `**`, `//`, `%`. Support for other Python operators like `and`/`or`/`not`/`is`/`is not`/`in`/`not in`.
+    * Support for the gradients of functions with variable arguments.
+    * Support for tensor indexing assignment for certain indexing types.
+    * Support for dynamic learning rate.
+* User interface change log
+    * DepthwiseConv2dNative, DepthwiseConv2dNativeBackpropFilter, DepthwiseConv2dNativeBackpropInput ([!424](https://gitee.com/mindspore/mindspore/pulls/424))
+    * ReLU6, ReLU6Grad ([!224](https://gitee.com/mindspore/mindspore/pulls/224))
+    * GeneratorDataset ([!183](https://gitee.com/mindspore/mindspore/pulls/183))
+    * VOCDataset ([!477](https://gitee.com/mindspore/mindspore/pulls/477))
+    * MindDataset, PKSampler ([!514](https://gitee.com/mindspore/mindspore/pulls/514))
+    * map ([!506](https://gitee.com/mindspore/mindspore/pulls/506))
+    * Conv ([!226](https://gitee.com/mindspore/mindspore/pulls/226))
+    * Adam ([!253](https://gitee.com/mindspore/mindspore/pulls/253))
+    * `_set_fusion_strategy_by_idx`, `_set_fusion_strategy_by_size` ([!189](https://gitee.com/mindspore/mindspore/pulls/189))
+    * CheckpointConfig ([!122](https://gitee.com/mindspore/mindspore/pulls/122))
+    * Constant ([!54](https://gitee.com/mindspore/mindspore/pulls/54))
+* Executor and Performance Optimization
+    * Support parallel execution of data prefetching and forward/backward computing.
+    * Support parallel execution of gradient aggregation and forward/backward computing in distributed training scenarios.
+    * Support operator fusion optimization.
+    * Optimize compilation process and improve the performance.
+* Data processing, augmentation, and save format
+    * Support multi-process GeneratorDataset/PyFunc for high performance.
+    * Support variable batch size.
+    * Support new Dataset operators, such as filter, skip, take, TextLineDataset.
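The operator coverage listed above mirrors standard Python semantics; for reference, the same operators on plain Python values (no MindSpore involved):

```python
# Math operators newly supported on tensors, shown here on plain ints:
print(2 ** 10)  # 1024  (power)
print(7 // 2)   # 3     (floor division)
print(7 % 3)    # 1     (modulo)

# Logical, identity, and membership operators:
a, b = None, [1, 2, 3]
print(a is None and b is not None)  # True
print(2 in b, 4 not in b)           # True True
```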
+
+### Other Hardware Support
+* GPU platform
+    * Use dynamic memory pool by default on GPU.
+    * Support parallel execution of computation and communication.
+    * Support continuous address allocation by memory pool.
+* CPU platform
+    * Support for Windows 10 OS.
+
+## Bugfixes
+* Models
+    * Fix mixed precision bug for VGG16 model ([!629](https://gitee.com/mindspore/mindspore/pulls/629)).
+* Python API
+    * Fix ControlDepend operator bugs on CPU and GPU ([!396](https://gitee.com/mindspore/mindspore/pulls/396)).
+    * Fix ArgMinWithValue operator bugs ([!338](https://gitee.com/mindspore/mindspore/pulls/338)).
+    * Fix Dense operator bugs in PyNative mode ([!276](https://gitee.com/mindspore/mindspore/pulls/276)).
+    * Fix MatMul operator bugs in PyNative mode ([!288](https://gitee.com/mindspore/mindspore/pulls/288)).
+* Executor
+    * Fix operator selection bugs and make it general ([!300](https://gitee.com/mindspore/mindspore/pulls/300)).
+    * Fix memory reuse bug for GetNext op ([!291](https://gitee.com/mindspore/mindspore/pulls/291)).
+* GPU platform
+    * Fix memory allocation in multi-graph scenarios ([!444](https://gitee.com/mindspore/mindspore/pulls/444)).
+    * Fix bias_add_grad under fp16 precision ([!598](https://gitee.com/mindspore/mindspore/pulls/598)).
+    * Fix support for fp16 kernels on NVIDIA 1080Ti ([!571](https://gitee.com/mindspore/mindspore/pulls/571)).
+    * Fix parsing of tuple type parameters ([!316](https://gitee.com/mindspore/mindspore/pulls/316)).
+* Data processing
+    * Fix `TypeError` about being unable to pickle `mindspore._c_dataengine.DEPipeline` objects ([!434](https://gitee.com/mindspore/mindspore/pulls/434)).
+    * Add TFRecord file verification ([!406](https://gitee.com/mindspore/mindspore/pulls/406)).
+
+## Contributors
+Thanks goes to these wonderful people:
+
+Alexey_Shevlyakov, Cathy, Chong, Hoai, Jonathan, Junhan, JunhanHu, Peilin, SanjayChan, StrawNoBerry, VectorSL, Wei, WeibiaoYu, Xiaoda, Yanjun, YuJianfeng, ZPaC, Zhang, ZhangQinghua, ZiruiWu, amongo, anthonyaje, anzhengqi, biffex, caifubi, candanzg, caojian05, casgj, cathwong, ch-l, chang, changzherui, chenfei, chengang, chenhaozhe, chenjianping, chentingting, chenzomi, chujinjin, dengwentao, dinghao, fanglei, fary86, flywind, gaojing, geekun, gengdongjie, ghzl, gong, gongchen, gukecai, guohongzilong, guozhijian, gziyan, h.farahat, hesham, huangdongrun, huanghui, jiangzhiwen, jinyaohui, jjfeing, jojobugfree, jonathan_yan, jonyguo, jzw, kingfo, kisnwang, laiyongqiang, leonwanghui, lianliguang, lichen, lichenever, limingqi107, liubuyu, liuxiao, liyong, liyong126, lizhenyu, lupengcheng, lvliang, maoweiyong, ms_yan, mxm, ougongchang, panfengfeng, panyifeng, pengyanjun, penn, qianlong, seatea, simson, suteng, thlinh, vlne-v1, wangchengke, wanghua, wangnan39, wangqiuliang, wenchunjiang, wenkai, wukesong, xiefangqi, xulei, yanghaitao, yanghaoran, yangjie159, yangzhenzhang, yankai10, yanzhenxiang2020, yao_yf, yoonlee666, zhangbuxue, zhangz0911gm, zhangzheng, zhaojichen, zhaoting, zhaozhenlong, zhongligeng, zhoufeng, zhousiyi, zjun, zyli2020, yuhuijun, limingqi107, lizhenyu, chenweifeng.
+
+Contributions of any kind are welcome!
+
 # Release 0.1.0-alpha
 
 ## Main Features
@@ -14,27 +14,27 @@
 @rem ============================================================================
 
 @echo off
 @title mindspore_build
 
 SET BASEPATH=%CD%
 IF NOT EXIST %BASEPATH%/build (
     md "build"
 )
 cd %BASEPATH%/build
 SET BUILD_PATH=%CD%
 
 IF NOT EXIST %BUILD_PATH%/mindspore (
     md "mindspore"
 )
 cd %CD%/mindspore
 
 cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_CPU=ON -DENABLE_MINDDATA=ON -DUSE_GLOG=ON -G "CodeBlocks - MinGW Makefiles" ../..
 IF NOT %errorlevel% == 0 (
     echo "cmake fail."
     goto run_fail
 )
 
 IF "%1%" == "" (
     cmake --build . --target package -- -j6
 ) ELSE (
@@ -433,9 +433,9 @@ build_predict()
     cd "${BASEPATH}/predict/output/"
     if [[ "$PREDICT_PLATFORM" == "x86_64" ]]; then
-        tar -cf MSPredict-0.1.0-linux_x86_64.tar.gz include/ lib/ --warning=no-file-changed
+        tar -cf MSPredict-0.2.0-linux_x86_64.tar.gz include/ lib/ --warning=no-file-changed
     elif [[ "$PREDICT_PLATFORM" == "arm64" ]]; then
-        tar -cf MSPredict-0.1.0-linux_aarch64.tar.gz include/ lib/ --warning=no-file-changed
+        tar -cf MSPredict-0.2.0-linux_aarch64.tar.gz include/ lib/ --warning=no-file-changed
     fi
     echo "success to build predict project!"
 }
@@ -4,14 +4,13 @@ This folder hosts all the `Dockerfile` to build MindSpore container images with
 ### MindSpore docker build command
 
-* CPU
-
-    ```
-    cd mindspore-cpu/0.1.0-alpha && docker build . -t mindspore/mindspore-cpu:0.1.0-alpha
-    ```
-
-* GPU
-
-    ```
-    cd mindspore-gpu/0.1.0-alpha && docker build . -t mindspore/mindspore-gpu:0.1.0-alpha
-    ```
+| Hardware Platform | Version | Build Command |
+| :---------------- | :------ | :------------ |
+| CPU | `x.y.z` | cd mindspore-cpu/x.y.z && docker build . -t mindspore/mindspore-cpu:x.y.z |
+| | `devel` | cd mindspore-cpu/devel && docker build . -t mindspore/mindspore-cpu:devel |
+| | `runtime` | cd mindspore-cpu/runtime && docker build . -t mindspore/mindspore-cpu:runtime |
+| GPU | `x.y.z` | cd mindspore-gpu/x.y.z && docker build . -t mindspore/mindspore-gpu:x.y.z |
+| | `devel` | cd mindspore-gpu/devel && docker build . -t mindspore/mindspore-gpu:devel |
+| | `runtime` | cd mindspore-gpu/runtime && docker build . -t mindspore/mindspore-gpu:runtime |
+
+> **NOTICE:** The `x.y.z` version shown above should be replaced with the real version number.
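As the notice says, `x.y.z` is a placeholder for a real release tag. A small shell sketch of the substitution, using `0.2.0-alpha` (the version this PR targets) and only echoing the resulting command rather than invoking docker:

```shell
#!/bin/sh
VERSION="0.2.0-alpha"   # substitute the real release tag for x.y.z

# Compose the build command from the CPU row of the table above.
CMD="cd mindspore-cpu/${VERSION} && docker build . -t mindspore/mindspore-cpu:${VERSION}"
echo "${CMD}"
```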
@@ -0,0 +1,67 @@
+FROM ubuntu:18.04
+
+MAINTAINER leonwanghui <leon.wanghui@huawei.com>
+
+# Set env
+ENV PYTHON_ROOT_PATH /usr/local/python-3.7.5
+ENV PATH /usr/local/bin:$PATH
+
+# Install base tools
+RUN apt update \
+    && DEBIAN_FRONTEND=noninteractive apt install -y \
+    vim \
+    wget \
+    curl \
+    xz-utils \
+    net-tools \
+    openssh-client \
+    git \
+    ntpdate \
+    tzdata \
+    tcl \
+    sudo \
+    bash-completion
+
+# Install compile tools
+RUN DEBIAN_FRONTEND=noninteractive apt install -y \
+    gcc \
+    g++ \
+    zlibc \
+    make \
+    libgmp-dev \
+    patch \
+    autoconf \
+    libtool \
+    automake \
+    flex
+
+# Set bash
+RUN echo "dash dash/sh boolean false" | debconf-set-selections
+RUN DEBIAN_FRONTEND=noninteractive dpkg-reconfigure dash
+
+# Install python (v3.7.5)
+RUN apt install -y libffi-dev libssl-dev zlib1g-dev libbz2-dev libncurses5-dev \
+    libgdbm-dev libgdbm-compat-dev liblzma-dev libreadline-dev libsqlite3-dev \
+    && cd /tmp \
+    && wget https://github.com/python/cpython/archive/v3.7.5.tar.gz \
+    && tar -xvf v3.7.5.tar.gz \
+    && cd /tmp/cpython-3.7.5 \
+    && mkdir -p ${PYTHON_ROOT_PATH} \
+    && ./configure --prefix=${PYTHON_ROOT_PATH} \
+    && make -j4 \
+    && make install -j4 \
+    && rm -f /usr/local/bin/python \
+    && rm -f /usr/local/bin/pip \
+    && ln -s ${PYTHON_ROOT_PATH}/bin/python3.7 /usr/local/bin/python \
+    && ln -s ${PYTHON_ROOT_PATH}/bin/pip3.7 /usr/local/bin/pip \
+    && rm -rf /tmp/cpython-3.7.5 \
+    && rm -f /tmp/v3.7.5.tar.gz
+
+# Set pip source
+RUN mkdir -pv /root/.pip \
+    && echo "[global]" > /root/.pip/pip.conf \
+    && echo "trusted-host=mirrors.aliyun.com" >> /root/.pip/pip.conf \
+    && echo "index-url=http://mirrors.aliyun.com/pypi/simple/" >> /root/.pip/pip.conf
+
+# Install MindSpore cpu whl package
+RUN pip install --no-cache-dir https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.2.0-alpha/MindSpore/cpu/ubuntu-x86/mindspore-0.2.0-cp37-cp37m-linux_x86_64.whl
@@ -0,0 +1,83 @@
+FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
+
+MAINTAINER leonwanghui <leon.wanghui@huawei.com>
+
+# Set env
+ENV PYTHON_ROOT_PATH /usr/local/python-3.7.5
+ENV OMPI_ROOT_PATH /usr/local/openmpi-3.1.5
+ENV PATH ${OMPI_ROOT_PATH}/bin:/usr/local/bin:$PATH
+ENV LD_LIBRARY_PATH ${OMPI_ROOT_PATH}/lib:$LD_LIBRARY_PATH
+
+# Install base tools
+RUN apt update \
+    && DEBIAN_FRONTEND=noninteractive apt install -y \
+    vim \
+    wget \
+    curl \
+    xz-utils \
+    net-tools \
+    openssh-client \
+    git \
+    ntpdate \
+    tzdata \
+    tcl \
+    sudo \
+    bash-completion
+
+# Install compile tools
+RUN DEBIAN_FRONTEND=noninteractive apt install -y \
+    gcc \
+    g++ \
+    zlibc \
+    make \
+    libgmp-dev \
+    patch \
+    autoconf \
+    libtool \
+    automake \
+    flex \
+    libnccl2=2.4.8-1+cuda10.1 \
+    libnccl-dev=2.4.8-1+cuda10.1
+
+# Set bash
+RUN echo "dash dash/sh boolean false" | debconf-set-selections
+RUN DEBIAN_FRONTEND=noninteractive dpkg-reconfigure dash
+
+# Install python (v3.7.5)
+RUN apt install -y libffi-dev libssl-dev zlib1g-dev libbz2-dev libncurses5-dev \
+    libgdbm-dev libgdbm-compat-dev liblzma-dev libreadline-dev libsqlite3-dev \
+    && cd /tmp \
+    && wget https://github.com/python/cpython/archive/v3.7.5.tar.gz \
+    && tar -xvf v3.7.5.tar.gz \
+    && cd /tmp/cpython-3.7.5 \
+    && mkdir -p ${PYTHON_ROOT_PATH} \
+    && ./configure --prefix=${PYTHON_ROOT_PATH} \
+    && make -j4 \
+    && make install -j4 \
+    && rm -f /usr/local/bin/python \
+    && rm -f /usr/local/bin/pip \
+    && ln -s ${PYTHON_ROOT_PATH}/bin/python3.7 /usr/local/bin/python \
+    && ln -s ${PYTHON_ROOT_PATH}/bin/pip3.7 /usr/local/bin/pip \
+    && rm -rf /tmp/cpython-3.7.5 \
+    && rm -f /tmp/v3.7.5.tar.gz
+
+# Set pip source
+RUN mkdir -pv /root/.pip \
+    && echo "[global]" > /root/.pip/pip.conf \
+    && echo "trusted-host=mirrors.aliyun.com" >> /root/.pip/pip.conf \
+    && echo "index-url=http://mirrors.aliyun.com/pypi/simple/" >> /root/.pip/pip.conf
+
+# Install openmpi (v3.1.5)
+RUN cd /tmp \
+    && wget https://download.open-mpi.org/release/open-mpi/v3.1/openmpi-3.1.5.tar.gz \
+    && tar -xvf openmpi-3.1.5.tar.gz \
+    && cd /tmp/openmpi-3.1.5 \
+    && mkdir -p ${OMPI_ROOT_PATH} \
+    && ./configure --prefix=${OMPI_ROOT_PATH} \
+    && make -j4 \
+    && make install -j4 \
+    && rm -rf /tmp/openmpi-3.1.5 \
+    && rm -f /tmp/openmpi-3.1.5.tar.gz
+
+# Install MindSpore cuda-10.1 whl package
+RUN pip install --no-cache-dir https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.2.0-alpha/MindSpore/gpu/cuda-10.1/mindspore-0.2.0-cp37-cp37m-linux_x86_64.whl
@@ -23,7 +23,7 @@ from setuptools import setup, find_packages
 from setuptools.command.egg_info import egg_info
 from setuptools.command.build_py import build_py
 
-version = '0.1.0'
+version = '0.2.0'
 
 backend_policy = os.getenv('BACKEND_POLICY')
 commit_id = os.getenv('COMMIT_ID').replace("\n", "")
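One fragility worth noting in the setup.py context above: `os.getenv('COMMIT_ID')` returns `None` when the variable is unset, so the chained `.replace` raises `AttributeError`. A sketch of a defensive variant (an illustration only, not what setup.py actually does):

```python
import os

def clean_env(name: str, default: str = "") -> str:
    """Read an env var and strip stray newlines, tolerating absence."""
    return os.getenv(name, default).replace("\n", "")

# With the variable set, trailing newlines are stripped as in setup.py.
os.environ["COMMIT_ID"] = "abc123\n"
print(clean_env("COMMIT_ID"))      # abc123

# With the variable unset, the default avoids an AttributeError.
print(clean_env("MISSING_VAR_X"))  # empty string
```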