
!191 add releasenotes

From: @zhangyinxia
Reviewed-by: @xu-yfei, @zhoufeng54
Signed-off-by: @xu-yfei
tags/v1.2.0
mindspore-ci-bot committed on Gitee 5 years ago · commit b5049ab484
3 changed files with 22 additions and 6 deletions
1. README.md (+5, -2)
2. README_CN.md (+4, -1)
3. RELEASE.md (+13, -3)

README.md (+5, -2)

@@ -27,7 +27,7 @@ MindSpore Serving is a lightweight and high-performance service module that help

MindSpore Serving architecture:

-Currently, the MindSpore Serving nodes include client, master, and worker. On a client node, you can directly deliver inference service commands through the gRPC or RESTful API. Servable model service is deployed on a worker node. Servable indicates a single model or a combination of multiple models and provides different services in various methods. A master node manages all worker nodes and their model information, as well as managing and distributing tasks. The master and worker nodes can be deployed in the same process or in different processes. Currently, the client and master nodes do not depend on specific hardware platforms. The worker node supports only the Ascend 310 and Ascend 910 platforms. GPUs and CPUs will be supported in the future.
+Currently, the MindSpore Serving nodes include client, master, and worker. On a client node, you can directly deliver inference service commands through the gRPC or RESTful API. The Servable model service is deployed on a worker node. A Servable is a single model or a combination of multiple models, and it can provide different services through various methods. A master node manages all worker nodes and their model information, as well as managing and distributing tasks. The master and worker nodes can be deployed in the same process or in different processes. Currently, the client and master nodes do not depend on specific hardware platforms. The worker node supports the GPU, Ascend 310, and Ascend 910 platforms. CPUs will be supported in the future.
<img src="docs/architecture.png" alt="MindSpore Architecture" width="600"/>
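The routing described above (a master that tracks worker nodes and the Servables they host, then dispatches each request to a capable worker) can be sketched conceptually. This is an illustration only, not the MindSpore Serving implementation; all class and method names here are hypothetical:

```python
# Conceptual sketch of master/worker routing (hypothetical names,
# NOT the MindSpore Serving API).

class Worker:
    """A worker node hosting one or more Servables (models)."""
    def __init__(self, name, servables):
        self.name = name
        self.servables = set(servables)

class Master:
    """Tracks all workers and routes a request to one hosting the Servable."""
    def __init__(self):
        self.workers = []

    def register(self, worker):
        self.workers.append(worker)

    def dispatch(self, servable):
        for worker in self.workers:
            if servable in worker.servables:
                return worker.name
        raise LookupError(f"no worker hosts servable {servable!r}")

master = Master()
master.register(Worker("worker-0", ["resnet50"]))
master.register(Worker("worker-1", ["bert"]))
print(master.dispatch("bert"))  # worker-1
```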

MindSpore Serving provides the following functions:
@@ -36,6 +36,7 @@ MindSpore Serving provides the following functions:
- Pre-processing and post-processing of assembled models
- Batch. Multiple instance requests are split and combined to meet the `batch size` requirement of the model.
- Simple Python APIs on clients
- Distributed model inference
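The batch behavior in the list above (splitting and regrouping instance requests to match the model's `batch size`) can be sketched as follows. This is a minimal self-contained illustration under assumed data shapes, not MindSpore Serving internals; the function names are hypothetical:

```python
# Minimal sketch of server-side batching (illustration only): each request
# carries several instances; the server flattens them, cuts model-sized
# batches, and remembers which request each instance belongs to so that
# per-instance results can be regrouped per request afterwards.

def split_into_batches(requests, batch_size):
    """requests: list of lists of instances. Returns (batches, owners)."""
    flat, owners = [], []
    for req_id, instances in enumerate(requests):
        for inst in instances:
            flat.append(inst)
            owners.append(req_id)  # remember the originating request
    batches = [flat[i:i + batch_size] for i in range(0, len(flat), batch_size)]
    return batches, owners

def regroup_results(results, owners, num_requests):
    """results: flat per-instance results, aligned with owners."""
    grouped = [[] for _ in range(num_requests)]
    for res, req_id in zip(results, owners):
        grouped[req_id].append(res)
    return grouped

requests = [["a1", "a2", "a3"], ["b1"], ["c1", "c2"]]
batches, owners = split_into_batches(requests, batch_size=4)
# batches -> [['a1', 'a2', 'a3', 'b1'], ['c1', 'c2']]
flat_results = [inst.upper() for batch in batches for inst in batch]
print(regroup_results(flat_results, owners, len(requests)))
# [['A1', 'A2', 'A3'], ['B1'], ['C1', 'C2']]
```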

## Installation

@@ -69,6 +70,8 @@ Perform the following steps to install Serving:
Method 2: Build Serving directly. The matching MindSpore package is built together with Serving. You need to configure the [environment variables](https://gitee.com/mindspore/docs/blob/master/install/mindspore_ascend_install_source_en.md#configuring-environment-variables) for building MindSpore.

```shell
# GPU
sh build.sh -e gpu
# Ascend 310
sh build.sh -e ascend -V 310
# Ascend 910
sh build.sh -e ascend -V 910
```
@@ -134,4 +137,4 @@ Welcome to MindSpore contribution.

## License

[Apache License 2.0](LICENSE)

README_CN.md (+4, -1)

@@ -27,7 +27,7 @@ MindSpore Serving is a lightweight, high-performance service module that helps Min

MindSpore Serving architecture:

-The MindSpore Serving service nodes currently fall into three parts: client, master, and worker. The client node lets users deliver inference service commands directly through the gRPC or RESTful API. A worker node deploys the model service Servable; a Servable can be a single model or a combination of multiple models, and one Servable can expose different services through multiple methods. The master node manages all workers and the model information they deploy, and handles task management and distribution. The master and worker can be deployed in the same process or in different processes. Currently, the client and master do not depend on specific hardware platforms; the worker node supports the Ascend 310 and Ascend 910 platforms, with GPU and CPU support to follow.
+The MindSpore Serving service nodes currently fall into three parts: client, master, and worker. The client node lets users deliver inference service commands directly through the gRPC or RESTful API. A worker node deploys the model service Servable; a Servable can be a single model or a combination of multiple models, and one Servable can expose different services through multiple methods. The master node manages all workers and the model information they deploy, and handles task management and distribution. The master and worker can be deployed in the same process or in different processes. Currently, the client and master do not depend on specific hardware platforms; the worker node supports the GPU, Ascend 310, and Ascend 910 platforms, with CPU support to follow.
<img src="docs/architecture.png" alt="MindSpore Architecture" width="600"/>

MindSpore Serving provides the following functions:
@@ -36,6 +36,7 @@ MindSpore Serving provides the following functions:
- Pre-processing and post-processing of assembled models.
- Batch support: multiple instance requests are split and combined to meet the model's `batch size` requirement.
- Simple Python APIs on clients.
- Distributed model inference.

## Installation

@@ -69,6 +70,8 @@ MindSpore Serving depends on the MindSpore training and inference framework. After installing [MindSpore](https:
Method 2: Build Serving directly. The matching MindSpore package is built together with Serving. You need to configure the [environment variables](https://gitee.com/mindspore/docs/blob/master/install/mindspore_ascend_install_source.md#配置环境变量) for building MindSpore:

```shell
# GPU
sh build.sh -e gpu
# Ascend 310
sh build.sh -e ascend -V 310
# Ascend 910
sh build.sh -e ascend -V 910
```


RELEASE.md (+13, -3)

@@ -1,8 +1,18 @@
-# MindSpore Serving 1.1.0 Release Notes
+# MindSpore Serving 1.2.0 Release Notes

## Major Features and Improvements

-### Ascend 310 & Ascend 910 Serving Framework
+### Serving Framework
+
+Support distributed inference. It needs to cooperate with distributed training to export distributed models for super-large-scale neural network parameters.
+Support the GPU platform. Serving worker nodes can be deployed on GPU, Ascend 310, and Ascend 910.
+
+# MindSpore Serving 1.1.0 Release Notes
+
+## Major Features and Improvements
+
+### Ascend 310 & Ascend 910 Serving Framework

Support gRPC and RESTful API.
Support simple Python API for Client and Server.

