From 17b21c7e05ba4cbe96884392e067acd2deb5361a Mon Sep 17 00:00:00 2001 From: caojiewen Date: Wed, 24 Mar 2021 01:56:22 +0800 Subject: [PATCH] 1. removed the link of apply form 2. fixed the lint errors 3. fixed the code spell errors --- model_zoo/official/cv/FCN8s/README.md | 2 +- model_zoo/official/cv/centerface/README.md | 2 +- model_zoo/official/cv/cnnctc/README.md | 2 +- model_zoo/official/cv/cnnctc/README_CN.md | 2 +- model_zoo/official/cv/crnn/README.md | 2 +- .../official/cv/crnn_seq2seq_ocr/README.md | 2 +- model_zoo/official/cv/ctpn/README.md | 2 +- model_zoo/official/cv/deeplabv3/README.md | 2 +- model_zoo/official/cv/deeplabv3/README_CN.md | 2 +- model_zoo/official/cv/deeptext/README.md | 2 +- model_zoo/official/cv/densenet/README.md | 2 +- model_zoo/official/cv/densenet/README_CN.md | 8 +- model_zoo/official/cv/dpn/README.md | 2 +- model_zoo/official/cv/faster_rcnn/README.md | 2 +- .../official/cv/faster_rcnn/README_CN.md | 2 +- model_zoo/official/cv/googlenet/README.md | 2 +- model_zoo/official/cv/googlenet/README_CN.md | 2 +- model_zoo/official/cv/inceptionv3/README.md | 2 +- .../official/cv/inceptionv3/README_CN.md | 2 +- model_zoo/official/cv/inceptionv4/README.md | 2 +- model_zoo/official/cv/maskrcnn/README.md | 2 +- model_zoo/official/cv/maskrcnn/README_CN.md | 2 +- .../cv/maskrcnn_mobilenetv1/README.md | 2 +- model_zoo/official/cv/mobilenetv1/README.md | 2 +- model_zoo/official/cv/mobilenetv2/README.md | 2 +- .../official/cv/mobilenetv2/README_CN.md | 2 +- .../cv/mobilenetv2_quant/README_CN.md | 2 +- .../official/cv/mobilenetv2_quant/Readme.md | 2 +- model_zoo/official/cv/openpose/README.md | 2 +- model_zoo/official/cv/psenet/README.md | 2 +- model_zoo/official/cv/psenet/README_CN.md | 2 +- model_zoo/official/cv/resnet/README.md | 2 +- model_zoo/official/cv/resnet/README_CN.md | 4 +- model_zoo/official/cv/resnet152/README-CN.md | 2 +- .../official/cv/resnet50_quant/README.md | 8 +- .../official/cv/resnet50_quant/README_CN.md | 12 +- model_zoo/official/cv/resnet_thor/README.md | 2 +- .../official/cv/resnet_thor/README_CN.md | 2 +- model_zoo/official/cv/resnext101/README_CN.md | 6 +- model_zoo/official/cv/resnext50/README.md | 2 +- model_zoo/official/cv/resnext50/README_CN.md | 2 +- model_zoo/official/cv/retinanet/README_CN.md | 2 +- .../official/cv/shufflenetv1/README_CN.md | 2 +- model_zoo/official/cv/simple_pose/README.md | 2 +- model_zoo/official/cv/squeezenet/README.md | 2 +- model_zoo/official/cv/tinydarknet/README.md | 2 +- .../official/cv/tinydarknet/README_CN.md | 2 +- model_zoo/official/cv/unet/README.md | 2 +- model_zoo/official/cv/unet/README_CN.md | 2 +- model_zoo/official/cv/vgg16/README.md | 109 ++++++++++-------- model_zoo/official/cv/vgg16/README_CN.md | 2 +- model_zoo/official/cv/warpctc/README.md | 64 +++++----- model_zoo/official/cv/warpctc/README_CN.md | 2 +- model_zoo/official/cv/xception/README.md | 2 +- .../official/cv/yolov3_darknet53/README.md | 2 +- .../official/cv/yolov3_darknet53/README_CN.md | 2 +- .../cv/yolov3_darknet53_quant/README.md | 74 +++++------- .../cv/yolov3_darknet53_quant/README_CN.md | 2 +- .../official/cv/yolov3_resnet18/README.md | 8 +- .../official/cv/yolov3_resnet18/README_CN.md | 2 +- model_zoo/official/cv/yolov4/README.md | 2 +- model_zoo/official/gnn/gcn/README.md | 78 ++++++------- model_zoo/official/gnn/gcn/README_CN.md | 2 +- model_zoo/official/nlp/bert/README.md | 2 +- model_zoo/official/nlp/bert/README_CN.md | 2 +- model_zoo/official/nlp/bert_thor/README.md | 2 +- model_zoo/official/nlp/bert_thor/README_CN.md | 2 +- 
model_zoo/official/nlp/fasttext/README.md | 6 +- model_zoo/official/nlp/gnmt_v2/README.md | 2 +- model_zoo/official/nlp/gpt/README.md | 2 +- model_zoo/official/nlp/gru/README.md | 2 +- model_zoo/official/nlp/lstm/README.md | 14 +-- model_zoo/official/nlp/lstm/README_CN.md | 2 +- model_zoo/official/nlp/mass/README.md | 6 +- model_zoo/official/nlp/mass/README_CN.md | 2 +- model_zoo/official/nlp/prophetnet/README.md | 6 +- model_zoo/official/nlp/textcnn/README.md | 4 +- model_zoo/official/nlp/tinybert/README.md | 2 +- model_zoo/official/nlp/tinybert/README_CN.md | 2 +- model_zoo/official/nlp/transformer/README.md | 2 +- .../official/nlp/transformer/README_CN.md | 4 +- model_zoo/official/recommend/deepfm/README.md | 4 +- .../official/recommend/deepfm/README_CN.md | 4 +- model_zoo/official/recommend/naml/README.md | 2 +- model_zoo/official/recommend/ncf/README.md | 2 +- .../recommend/wide_and_deep/README.md | 6 +- .../recommend/wide_and_deep/README_CN.md | 4 +- .../wide_and_deep_multitable/README.md | 99 +++++++++------- .../wide_and_deep_multitable/README_CN.md | 4 +- model_zoo/official/rl/dqn/README.md | 4 +- model_zoo/research/audio/fcn-4/README.md | 4 +- model_zoo/research/cv/FaceAttribute/README.md | 2 +- model_zoo/research/cv/FaceDetection/README.md | 2 +- .../cv/FaceQualityAssessment/README.md | 2 +- .../research/cv/FaceRecognition/README.md | 2 +- .../cv/FaceRecognitionForTracking/README.md | 2 +- .../cv/MaskedFaceRecognition/README.md | 2 +- model_zoo/research/cv/centernet/README.md | 2 +- model_zoo/research/cv/ghostnet/Readme.md | 46 ++++---- .../research/cv/ghostnet_quant/Readme.md | 45 ++++---- .../cv/resnet50_adv_pruning/Readme.md | 2 +- model_zoo/research/cv/squeezenet/README.md | 4 +- model_zoo/research/cv/ssd_ghostnet/README.md | 4 +- model_zoo/research/nlp/dscnn/README.md | 7 +- .../research/recommend/autodis/README.md | 2 +- model_zoo/research/rl/ldp_linucb/README.md | 8 +- 106 files changed, 408 insertions(+), 398 deletions(-) diff --git a/model_zoo/official/cv/FCN8s/README.md b/model_zoo/official/cv/FCN8s/README.md index e0dc23af10..46545f407e 100644 --- a/model_zoo/official/cv/FCN8s/README.md +++ b/model_zoo/official/cv/FCN8s/README.md @@ -41,7 +41,7 @@ Dataset used: # [环境要求](#contents) - 硬件(Ascend/GPU) - - 需要准备具有Ascend或GPU处理能力的硬件环境. 如需使用Ascend,可以发送 [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) 到ascend@huawei.com。一旦批准,你就可以使用此资源 + - 需要准备具有Ascend或GPU处理能力的硬件环境. - 框架 - [MindSpore](https://www.mindspore.cn/install/en) - 如需获取更多信息,请查看如下链接: diff --git a/model_zoo/official/cv/centerface/README.md b/model_zoo/official/cv/centerface/README.md index 119d37f04d..443d57d91f 100644 --- a/model_zoo/official/cv/centerface/README.md +++ b/model_zoo/official/cv/centerface/README.md @@ -82,7 +82,7 @@ other datasets need to use the same format as WiderFace. # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. 
- Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/cnnctc/README.md b/model_zoo/official/cv/cnnctc/README.md index 535e5f3c10..aa28782331 100644 --- a/model_zoo/official/cv/cnnctc/README.md +++ b/model_zoo/official/cv/cnnctc/README.md @@ -96,7 +96,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) diff --git a/model_zoo/official/cv/cnnctc/README_CN.md b/model_zoo/official/cv/cnnctc/README_CN.md index 35b8900e50..c5f9fb2d92 100644 --- a/model_zoo/official/cv/cnnctc/README_CN.md +++ b/model_zoo/official/cv/cnnctc/README_CN.md @@ -97,7 +97,7 @@ python src/preprocess_dataset.py - 硬件(Ascend) - - 准备Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 准备Ascend或GPU处理器搭建硬件环境。 - 框架 diff --git a/model_zoo/official/cv/crnn/README.md b/model_zoo/official/cv/crnn/README.md index 3b3247ecd7..e110c022fa 100644 --- a/model_zoo/official/cv/crnn/README.md +++ b/model_zoo/official/cv/crnn/README.md @@ -58,7 +58,7 @@ We provide `convert_ic03.py`, `convert_iiit5k.py`, `convert_svt.py` as exmples f ## [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. You will be able to have access to related resources once approved. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/crnn_seq2seq_ocr/README.md b/model_zoo/official/cv/crnn_seq2seq_ocr/README.md index 668d20bf87..00a9f0e6ca 100755 --- a/model_zoo/official/cv/crnn_seq2seq_ocr/README.md +++ b/model_zoo/official/cv/crnn_seq2seq_ocr/README.md @@ -38,7 +38,7 @@ For training and evaluation, we use the French Street Name Signs (FSNS) released ## [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. You will be able to have access to related resources once approved. + - Prepare hardware environment with Ascend processor. 
- Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/ctpn/README.md b/model_zoo/official/cv/ctpn/README.md index 1c0a8de70b..0913cee2fe 100644 --- a/model_zoo/official/cv/ctpn/README.md +++ b/model_zoo/official/cv/ctpn/README.md @@ -57,7 +57,7 @@ Here we used 6 datasets for training, and 1 datasets for Evaluation. # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/deeplabv3/README.md b/model_zoo/official/cv/deeplabv3/README.md index 41b53bba1e..333f7dec80 100644 --- a/model_zoo/official/cv/deeplabv3/README.md +++ b/model_zoo/official/cv/deeplabv3/README.md @@ -74,7 +74,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware(Ascend) -- Prepare hardware environment with Ascend. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. +- Prepare hardware environment with Ascend. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/deeplabv3/README_CN.md b/model_zoo/official/cv/deeplabv3/README_CN.md index 97661945bd..392bc773b5 100644 --- a/model_zoo/official/cv/deeplabv3/README_CN.md +++ b/model_zoo/official/cv/deeplabv3/README_CN.md @@ -89,7 +89,7 @@ Pascal VOC数据集和语义边界数据集(Semantic Boundaries Dataset,SBD # 环境要求 - 硬件(Ascend) - - 准备Ascend处理器搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 准备Ascend处理器搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/cv/deeptext/README.md b/model_zoo/official/cv/deeptext/README.md index bb6d20b6c5..77fefad87d 100644 --- a/model_zoo/official/cv/deeptext/README.md +++ b/model_zoo/official/cv/deeptext/README.md @@ -49,7 +49,7 @@ Here we used 4 datasets for training, and 1 datasets for Evaluation. # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. 
- Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/densenet/README.md b/model_zoo/official/cv/densenet/README.md index 90462eaed8..43b5b6fb38 100644 --- a/model_zoo/official/cv/densenet/README.md +++ b/model_zoo/official/cv/densenet/README.md @@ -78,7 +78,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/densenet/README_CN.md b/model_zoo/official/cv/densenet/README_CN.md index 07e00f4513..5615d4638e 100644 --- a/model_zoo/official/cv/densenet/README_CN.md +++ b/model_zoo/official/cv/densenet/README_CN.md @@ -82,12 +82,12 @@ DenseNet-100使用的数据集: Cifar-10 # 环境要求 - 硬件(Ascend/GPU) -- 准备Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 准备Ascend或GPU处理器搭建硬件环境。 - 框架 -- [MindSpore](https://www.mindspore.cn/install) + - [MindSpore](https://www.mindspore.cn/install) - 如需查看详情,请参见如下资源: -- [MindSpore教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html) -- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html) + - [MindSpore教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html) # 快速入门 diff --git a/model_zoo/official/cv/dpn/README.md b/model_zoo/official/cv/dpn/README.md index 9a5358b357..123a738115 100644 --- a/model_zoo/official/cv/dpn/README.md +++ b/model_zoo/official/cv/dpn/README.md @@ -70,7 +70,7 @@ The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advan To run the python scripts in the repository, you need to prepare the environment as follow: - Hardware - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to [ascend@huawei.com](mailto:ascend@huawei.com). Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU processor. - Python and dependencies - Python3.7 - Mindspore 1.1.0 diff --git a/model_zoo/official/cv/faster_rcnn/README.md b/model_zoo/official/cv/faster_rcnn/README.md index 78226698bc..91276ea26f 100644 --- a/model_zoo/official/cv/faster_rcnn/README.md +++ b/model_zoo/official/cv/faster_rcnn/README.md @@ -48,7 +48,7 @@ Dataset used: [COCO2017]() # Environment Requirements - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend processor. 
If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Docker base image - [Ascend Hub](ascend.huawei.com/ascendhub/#/home) diff --git a/model_zoo/official/cv/faster_rcnn/README_CN.md b/model_zoo/official/cv/faster_rcnn/README_CN.md index a9d01444b3..c9a859edcd 100644 --- a/model_zoo/official/cv/faster_rcnn/README_CN.md +++ b/model_zoo/official/cv/faster_rcnn/README_CN.md @@ -49,7 +49,7 @@ Faster R-CNN是一个两阶段目标检测网络,该网络采用RPN,可以 # 环境要求 - 硬件(Ascend/GPU) - - 使用Ascend处理器来搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 使用Ascend处理器来搭建硬件环境。 - 获取基础镜像 - [Ascend Hub](https://ascend.huawei.com/ascendhub/#/home) diff --git a/model_zoo/official/cv/googlenet/README.md b/model_zoo/official/cv/googlenet/README.md index b0a39d7def..0c392063de 100644 --- a/model_zoo/official/cv/googlenet/README.md +++ b/model_zoo/official/cv/googlenet/README.md @@ -68,7 +68,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/googlenet/README_CN.md b/model_zoo/official/cv/googlenet/README_CN.md index 03df3c1a8d..fc114e74e2 100644 --- a/model_zoo/official/cv/googlenet/README_CN.md +++ b/model_zoo/official/cv/googlenet/README_CN.md @@ -75,7 +75,7 @@ GoogleNet由多个inception模块串联起来,可以更加深入。 降维的 # 环境要求 - 硬件(Ascend/GPU) - - 使用Ascend或GPU处理器来搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 使用Ascend或GPU处理器来搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install/en) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/cv/inceptionv3/README.md b/model_zoo/official/cv/inceptionv3/README.md index bfba3390aa..49b8044b99 100644 --- a/model_zoo/official/cv/inceptionv3/README.md +++ b/model_zoo/official/cv/inceptionv3/README.md @@ -59,7 +59,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware(Ascend) -- Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. +- Prepare hardware environment with Ascend processor. 
- Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/inceptionv3/README_CN.md b/model_zoo/official/cv/inceptionv3/README_CN.md index 7a295db95b..ce35335e7b 100644 --- a/model_zoo/official/cv/inceptionv3/README_CN.md +++ b/model_zoo/official/cv/inceptionv3/README_CN.md @@ -70,7 +70,7 @@ InceptionV3的总体网络架构如下: # 环境要求 - 硬件(Ascend) -- 使用Ascend来搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 +- 使用Ascend来搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install/en) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/cv/inceptionv4/README.md b/model_zoo/official/cv/inceptionv4/README.md index eac92df58d..d19faf842d 100644 --- a/model_zoo/official/cv/inceptionv4/README.md +++ b/model_zoo/official/cv/inceptionv4/README.md @@ -51,7 +51,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - or prepare GPU processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) diff --git a/model_zoo/official/cv/maskrcnn/README.md b/model_zoo/official/cv/maskrcnn/README.md index aef4277d6b..88d082e973 100644 --- a/model_zoo/official/cv/maskrcnn/README.md +++ b/model_zoo/official/cv/maskrcnn/README.md @@ -53,7 +53,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. 
- Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - Docker base image diff --git a/model_zoo/official/cv/maskrcnn/README_CN.md b/model_zoo/official/cv/maskrcnn/README_CN.md index 0065d2a3a4..4be4834db4 100644 --- a/model_zoo/official/cv/maskrcnn/README_CN.md +++ b/model_zoo/official/cv/maskrcnn/README_CN.md @@ -55,7 +55,7 @@ MaskRCNN是一个两级目标检测网络,作为FasterRCNN的扩展模型, # 环境要求 - 硬件(昇腾处理器) - - 采用昇腾处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 采用昇腾处理器搭建硬件环境。 - 框架 - [MindSpore](https://gitee.com/mindspore/mindspore) - 获取基础镜像 diff --git a/model_zoo/official/cv/maskrcnn_mobilenetv1/README.md b/model_zoo/official/cv/maskrcnn_mobilenetv1/README.md index 0be61ab07e..2e18640ce9 100644 --- a/model_zoo/official/cv/maskrcnn_mobilenetv1/README.md +++ b/model_zoo/official/cv/maskrcnn_mobilenetv1/README.md @@ -54,7 +54,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/mobilenetv1/README.md b/model_zoo/official/cv/mobilenetv1/README.md index c971774e0e..15b858359c 100644 --- a/model_zoo/official/cv/mobilenetv1/README.md +++ b/model_zoo/official/cv/mobilenetv1/README.md @@ -64,7 +64,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil ## Environment Requirements - Hardware(Ascend) - - Prepare hardware environment with Ascend. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/mobilenetv2/README.md b/model_zoo/official/cv/mobilenetv2/README.md index 2c5d4b8fa4..e0a29eed30 100644 --- a/model_zoo/official/cv/mobilenetv2/README.md +++ b/model_zoo/official/cv/mobilenetv2/README.md @@ -50,7 +50,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware(Ascend/GPU/CPU) - - Prepare hardware environment with Ascend, GPU or CPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend, GPU or CPU processor. 
- Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/mobilenetv2/README_CN.md b/model_zoo/official/cv/mobilenetv2/README_CN.md index 12d55fd6c3..027a4a8ee6 100644 --- a/model_zoo/official/cv/mobilenetv2/README_CN.md +++ b/model_zoo/official/cv/mobilenetv2/README_CN.md @@ -56,7 +56,7 @@ MobileNetV2总体网络架构如下: # 环境要求 - 硬件(Ascend/GPU/CPU) - - 使用Ascend、GPU或CPU处理器来搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 使用Ascend、GPU或CPU处理器来搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/cv/mobilenetv2_quant/README_CN.md b/model_zoo/official/cv/mobilenetv2_quant/README_CN.md index 6a45135fbd..6471eb45cf 100644 --- a/model_zoo/official/cv/mobilenetv2_quant/README_CN.md +++ b/model_zoo/official/cv/mobilenetv2_quant/README_CN.md @@ -65,7 +65,7 @@ MobileNetV2总体网络架构如下: # 环境要求 - 硬件:昇腾处理器(Ascend) - - 使用昇腾处理器来搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 使用昇腾处理器来搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install) - 如需查看详情,请参见如下资源 diff --git a/model_zoo/official/cv/mobilenetv2_quant/Readme.md b/model_zoo/official/cv/mobilenetv2_quant/Readme.md index 5a22ea8eea..241aade01e 100644 --- a/model_zoo/official/cv/mobilenetv2_quant/Readme.md +++ b/model_zoo/official/cv/mobilenetv2_quant/Readme.md @@ -52,7 +52,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware:Ascend - - Prepare hardware environment with Ascend. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below diff --git a/model_zoo/official/cv/openpose/README.md b/model_zoo/official/cv/openpose/README.md index 68798789dd..d9760f795f 100644 --- a/model_zoo/official/cv/openpose/README.md +++ b/model_zoo/official/cv/openpose/README.md @@ -75,7 +75,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware (Ascend) - - Prepare hardware environment with Ascend. If you want to try, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend. 
- Framework - [MindSpore](https://www.mindspore.cn/install/en) - Download the VGG19 model of the MindSpore version: diff --git a/model_zoo/official/cv/psenet/README.md b/model_zoo/official/cv/psenet/README.md index 5b473f888a..02784a84be 100644 --- a/model_zoo/official/cv/psenet/README.md +++ b/model_zoo/official/cv/psenet/README.md @@ -46,7 +46,7 @@ A testing set containing about 2000 readable words # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](http://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/psenet/README_CN.md b/model_zoo/official/cv/psenet/README_CN.md index c2f40f8f8d..a76810a6ef 100644 --- a/model_zoo/official/cv/psenet/README_CN.md +++ b/model_zoo/official/cv/psenet/README_CN.md @@ -47,7 +47,7 @@ # 环境要求 - 硬件:昇腾处理器(Ascend) - - 使用Ascend处理器来搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 使用Ascend处理器来搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install) diff --git a/model_zoo/official/cv/resnet/README.md b/model_zoo/official/cv/resnet/README.md index 724b246084..a677b9e5b5 100644 --- a/model_zoo/official/cv/resnet/README.md +++ b/model_zoo/official/cv/resnet/README.md @@ -82,7 +82,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware(Ascend/GPU/CPU) - - Prepare hardware environment with Ascend, GPU or CPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend, GPU or CPU processor. 
- Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/resnet/README_CN.md b/model_zoo/official/cv/resnet/README_CN.md index 6affe2445d..6a748ba9ea 100755 --- a/model_zoo/official/cv/resnet/README_CN.md +++ b/model_zoo/official/cv/resnet/README_CN.md @@ -84,8 +84,8 @@ ResNet的总体网络架构如下: # 环境要求 -- 硬件(Ascend/GPU) - - 准备Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 +- 硬件(Ascend/GPU/CPU) + - 准备Ascend、GPU或CPU处理器搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install/en) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/cv/resnet152/README-CN.md b/model_zoo/official/cv/resnet152/README-CN.md index 75e7f9e2f9..7f1898a4d4 100644 --- a/model_zoo/official/cv/resnet152/README-CN.md +++ b/model_zoo/official/cv/resnet152/README-CN.md @@ -35,7 +35,7 @@ ResNet152的总体网络架构如下:[链接](https://arxiv.org/pdf/1512.03385 # 环境要求 - 硬件 - - 准备Ascend处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 准备Ascend处理器搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install/en) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/cv/resnet50_quant/README.md b/model_zoo/official/cv/resnet50_quant/README.md index 2c08da8b99..8b35c3c104 100644 --- a/model_zoo/official/cv/resnet50_quant/README.md +++ b/model_zoo/official/cv/resnet50_quant/README.md @@ -59,12 +59,12 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware:Ascend - - Prepare hardware environment with Ascend. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend. 
- Framework - - [MindSpore](https://www.mindspore.cn/install/en) + - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) - - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Script description](#contents) diff --git a/model_zoo/official/cv/resnet50_quant/README_CN.md b/model_zoo/official/cv/resnet50_quant/README_CN.md index e3a8300a2e..b93a76ee3d 100644 --- a/model_zoo/official/cv/resnet50_quant/README_CN.md +++ b/model_zoo/official/cv/resnet50_quant/README_CN.md @@ -64,16 +64,12 @@ ResNet-50总体网络架构如下: # 环境要求 - 硬件:昇腾处理器(Ascend) - - 使用昇腾处理器来搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 - + - 使用昇腾处理器来搭建硬件环境。 - 框架 - - [MindSpore](https://www.mindspore.cn/install) - + - [MindSpore](https://www.mindspore.cn/install) - 如需查看详情,请参见如下资源: - - - [MindSpore教程](https://www.mindspore.cn/tutorial/training/en/master/index.html) - - - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) + - [MindSpore教程](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) ## 脚本说明 diff --git a/model_zoo/official/cv/resnet_thor/README.md b/model_zoo/official/cv/resnet_thor/README.md index cba50a97ef..2a8a934d0b 100644 --- a/model_zoo/official/cv/resnet_thor/README.md +++ b/model_zoo/official/cv/resnet_thor/README.md @@ -52,7 +52,7 @@ The classical first-order optimization algorithm, such as SGD, has a small amoun ## Environment Requirements - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU processor. 
- Framework - [MindSpore](https://www.mindspore.cn/install/en) diff --git a/model_zoo/official/cv/resnet_thor/README_CN.md b/model_zoo/official/cv/resnet_thor/README_CN.md index 58ff735d8d..f1bb862cf7 100644 --- a/model_zoo/official/cv/resnet_thor/README_CN.md +++ b/model_zoo/official/cv/resnet_thor/README_CN.md @@ -57,7 +57,7 @@ ResNet-50的总体网络架构如下:[链接](https://arxiv.org/pdf/1512.03385 ## 环境要求 - 硬件:昇腾处理器(Ascend或GPU) - - 使用Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) 至ascend@huawei.com,审核通过即可获得资源。 + - 使用Ascend或GPU处理器搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install) diff --git a/model_zoo/official/cv/resnext101/README_CN.md b/model_zoo/official/cv/resnext101/README_CN.md index e88c873e24..216c03b734 100644 --- a/model_zoo/official/cv/resnext101/README_CN.md +++ b/model_zoo/official/cv/resnext101/README_CN.md @@ -1,4 +1,4 @@ -# ResNext101-64x4d for MindSpore +# ResNext101-64x4d 本仓库提供了ResNeXt101-64x4d模型的训练脚本和超参配置,以达到论文中的准确性。 @@ -26,7 +26,7 @@ ResNeXt是ResNet网络的改进版本,比ResNet的网络多了块多了cardina ### 默认设置 -以下各节介绍ResNext50模型的默认配置和超参数。 +以下各节介绍ResNext101模型的默认配置和超参数。 #### 优化器 @@ -65,7 +65,7 @@ ResNeXt是ResNet网络的改进版本,比ResNet的网络多了块多了cardina ## 快速入门指南 -目录说明,代码参考了Modelzoo上的[ResNext50_for_MindSpore](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnext50) +目录说明,代码参考了Modelzoo上的[ResNext50](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnext50) ```path . diff --git a/model_zoo/official/cv/resnext50/README.md b/model_zoo/official/cv/resnext50/README.md index 0fcd8e632c..afb4e5bed1 100644 --- a/model_zoo/official/cv/resnext50/README.md +++ b/model_zoo/official/cv/resnext50/README.md @@ -53,7 +53,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware(Ascend/GPU) -- Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. +- Prepare hardware environment with Ascend or GPU processor. 
- Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/resnext50/README_CN.md b/model_zoo/official/cv/resnext50/README_CN.md index 41da9b0836..6dbf0ed0d8 100644 --- a/model_zoo/official/cv/resnext50/README_CN.md +++ b/model_zoo/official/cv/resnext50/README_CN.md @@ -58,7 +58,7 @@ ResNeXt整体网络架构如下: # 环境要求 - 硬件(Ascend或GPU) - - 准备Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 准备Ascend或GPU处理器搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/cv/retinanet/README_CN.md b/model_zoo/official/cv/retinanet/README_CN.md index 02bc29479d..989e8c6317 100644 --- a/model_zoo/official/cv/retinanet/README_CN.md +++ b/model_zoo/official/cv/retinanet/README_CN.md @@ -58,7 +58,7 @@ MSCOCO2017 ## [环境要求](#content) - 硬件(Ascend) - - 使用Ascend处理器准备硬件环境。如果您想使用Ascend,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com。一旦获得批准,您就可以获取资源。 + - 使用Ascend处理器准备硬件环境。 - 架构 - [MindSpore](https://www.mindspore.cn/install/en) - 想要获取更多信息,请检查以下资源: diff --git a/model_zoo/official/cv/shufflenetv1/README_CN.md b/model_zoo/official/cv/shufflenetv1/README_CN.md index e4998b4e36..6d29a1e177 100644 --- a/model_zoo/official/cv/shufflenetv1/README_CN.md +++ b/model_zoo/official/cv/shufflenetv1/README_CN.md @@ -42,7 +42,7 @@ ShuffleNetV1的核心部分被分成三个阶段,每个阶段重复堆积了 # 环境要求 - 硬件(Ascend) - - 使用Ascend来搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 使用Ascend来搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/cv/simple_pose/README.md b/model_zoo/official/cv/simple_pose/README.md index 9dde9dc537..3c3c6911b7 100644 --- a/model_zoo/official/cv/simple_pose/README.md +++ b/model_zoo/official/cv/simple_pose/README.md @@ -60,7 +60,7 @@ The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advan To run the python scripts in the repository, you need to prepare the environment as follow: - Hardware - - Prepare hardware environment with Ascend. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to [ascend@huawei.com](mailto:ascend@huawei.com). Once approved, you can get the resources. + - Prepare hardware environment with Ascend. - Python and dependencies - python 3.7 - mindspore 1.0.1 diff --git a/model_zoo/official/cv/squeezenet/README.md b/model_zoo/official/cv/squeezenet/README.md index bf529f6933..06e1af82e8 100644 --- a/model_zoo/official/cv/squeezenet/README.md +++ b/model_zoo/official/cv/squeezenet/README.md @@ -63,7 +63,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware(Ascend/CPU) - - Prepare hardware environment with Ascend processor. 
If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. Squeezenet training on GPU performs badly now, and it is still in research. See [squeezenet in research](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/squeezenet) to get up-to-date details. + - Prepare hardware environment with Ascend or CPU processor. Squeezenet training on GPU performs not well now, and it is still in research. See [squeezenet in research](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/squeezenet) to get up-to-date details. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/tinydarknet/README.md b/model_zoo/official/cv/tinydarknet/README.md index d3153fe2a9..77012ed8ee 100644 --- a/model_zoo/official/cv/tinydarknet/README.md +++ b/model_zoo/official/cv/tinydarknet/README.md @@ -56,7 +56,7 @@ Dataset used can refer to [paper]() +## Dataset used: [CIFAR-10]() - CIFAR-10 Dataset size:175M,60,000 32*32 colorful images in 10 classes - Train:146M,50,000 images - Test:29.3M,10,000 images - - Data format: binary files +- Data format: binary files - Note: Data will be processed in src/dataset.py -#### Dataset used: [ImageNet2012](http://www.image-net.org/) +## Dataset used: [ImageNet2012](http://www.image-net.org/) + - Dataset size: ~146G, 1.28 million colorful images in 1000 classes - Train: 140G, 1,281,167 images - Test: 6.4G, 50, 000 images - - Data format: RGB images +- Data format: RGB images - Note: Data will be processed in src/dataset.py -#### Dataset organize way +## Dataset organize way CIFAR-10 > Unzip the CIFAR-10 dataset to any path you want and the folder structure should be as follows: - > ``` + > + > ```bash > . > ├── cifar-10-batches-bin # train dataset > └── cifar-10-verify-bin # infer dataset @@ -67,39 +69,37 @@ Note that you can run the scripts based on the dataset mentioned in original pap > Unzip the ImageNet2012 dataset to any path you want and the folder should include train and eval dataset as follows: > - > ``` + > ```bash > . > └─dataset > ├─ilsvrc # train dataset > └─validation_preprocess # evaluate dataset > ``` - # [Features](#contents) ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. 
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’. - # [Environment Requirements](#contents) - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU processor. - Framework - - [MindSpore](https://www.mindspore.cn/install/en) + - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) - - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) - + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) After installing MindSpore via the official website, you can start training and evaluation as follows: - Running on Ascend + ```python # run training example python train.py --data_path=[DATA_PATH] --device_id=[DEVICE_ID] > output.train.log 2>&1 & # run distributed training example sh run_distribute_train.sh [RANK_TABLE_JSON] [DATA_PATH] # run evaluation example python eval.py --data_path=[DATA_PATH] --pre_trained=[PRE_TRAINED] > output.eval.log 2>&1 & ``` + For distributed training, an HCCL configuration file in JSON format needs to be created in advance.
Please follow the instructions in the link below: -https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools + - Running on GPU -``` + +```bash # run training example python train.py --device_target="GPU" --device_id=[DEVICE_ID] --dataset=[DATASET_TYPE] --data_path=[DATA_PATH] > output.train.log 2>&1 & @@ -130,13 +132,12 @@ python eval.py --device_target="GPU" --device_id=[DEVICE_ID] --dataset=[DATASET_ ## [Script and Sample Code](#contents) - -``` +```bash ├── model_zoo ├── README.md // descriptions about all the models - ├── vgg16 + ├── vgg16 ├── README.md // descriptions about googlenet - ├── scripts + ├── scripts │ ├── run_distribute_train.sh // shell script for distributed training on Ascend │ ├── run_distribute_train_gpu.sh // shell script for distributed training on GPU ├── src @@ -146,7 +147,7 @@ python eval.py --device_target="GPU" --device_id=[DEVICE_ID] --dataset=[DATASET_ │ │ ├── util.py // util function │ │ ├── var_init.py // network parameter init method │ ├── config.py // parameter configuration - │ ├── crossentropy.py // loss caculation + │ ├── crossentropy.py // loss calculation │ ├── dataset.py // creating dataset │ ├── linear_warmup.py // linear leanring rate │ ├── warmup_cosine_annealing_lr.py // consine anealing learning rate @@ -159,7 +160,8 @@ python eval.py --device_target="GPU" --device_id=[DEVICE_ID] --dataset=[DATASET_ ## [Script Parameters](#contents) ### Training -``` + +```bash usage: train.py [--device_target TARGET][--data_path DATA_PATH] [--dataset DATASET_TYPE][--is_distributed VALUE] [--device_id DEVICE_ID][--pre_trained PRE_TRAINED] @@ -179,7 +181,7 @@ parameters/options: ### Evaluation -``` +```bash usage: eval.py [--device_target TARGET][--data_path DATA_PATH] [--dataset DATASET_TYPE][--pre_trained PRE_TRAINED] [--device_id DEVICE_ID] @@ -198,7 +200,7 @@ Parameters for both training and evaluation can be set in config.py. - config for vgg16, CIFAR-10 dataset -``` +```bash "num_classes": 10, # dataset class num "lr": 0.01, # learning rate "lr_init": 0.01, # initial learning rate @@ -218,15 +220,15 @@ Parameters for both training and evaluation can be set in config.py. "pad_mode": 'same', # pad mode for conv2d "padding": 0, # padding value for conv2d "has_bias": False, # whether has bias in conv2d -"batch_norm": True, # wether has batch_norm in conv2d +"batch_norm": True, # whether has batch_norm in conv2d "keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoint "initialize_mode": "XavierUniform", # conv2d init mode -"has_dropout": True # wether using Dropout layer +"has_dropout": True # whether using Dropout layer ``` - config for vgg16, ImageNet2012 dataset -``` +```bash "num_classes": 1000, # dataset class num "lr": 0.01, # learning rate "lr_init": 0.01, # initial learning rate @@ -246,10 +248,10 @@ Parameters for both training and evaluation can be set in config.py. "pad_mode": 'pad', # pad mode for conv2d "padding": 1, # padding value for conv2d "has_bias": True, # whether has bias in conv2d -"batch_norm": False, # wether has batch_norm in conv2d +"batch_norm": False, # whether has batch_norm in conv2d "keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoint "initialize_mode": "KaimingNormal", # conv2d init mode -"has_dropout": True # wether using Dropout layer +"has_dropout": True # whether using Dropout layer ``` ## [Training Process](#contents) @@ -259,15 +261,18 @@ Parameters for both training and evaluation can be set in config.py. 
#### Run vgg16 on Ascend - Training using single device(1p), using CIFAR-10 dataset in default + +```bash +python train.py --data_path=your_data_path --device_id=6 > out.train.log 2>&1 & ``` -python train.py --data_path=your_data_path --device_id=6 > out.train.log 2>&1 & -``` + The python command above will run in the background, you can view the results through the file `out.train.log`. After training, you'll get some checkpoint files in specified ckpt_path, default in ./output directory. You will get the loss value as following: -``` + +```bash # grep "loss is " output.train.log epoch: 1 step: 781, loss is 2.093086 epcoh: 2 step: 781, loss is 1.827582 @@ -275,13 +280,16 @@ epcoh: 2 step: 781, loss is 1.827582 ``` - Distributed Training -``` + +```bash sh run_distribute_train.sh rank_table.json your_data_path ``` + The above shell script will run distribute training in the background, you can view the results through the file `train_parallel[X]/log`. You will get the loss value as following: -``` + +```bash # grep "result: " train_parallel*/log train_parallel0/log:epoch: 1 step: 97, loss is 1.9060308 train_parallel0/log:epcoh: 2 step: 97, loss is 1.6003821 @@ -291,19 +299,21 @@ train_parallel1/log:epcoh: 2 step: 97, loss is 1.7133579 ... ... ``` -> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html). +> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html). > **Attention** This will bind the processor cores according to the `device_num` and total processor numbers. If you don't expect to run pretraining with binding processor cores, remove the operations about `taskset` in `scripts/run_distribute_train.sh` #### Run vgg16 on GPU - Training using single device(1p) -``` + +```bash python train.py --device_target="GPU" --dataset="imagenet2012" --is_distributed=0 --data_path=$DATA_PATH > output.train.log 2>&1 & ``` - Distributed Training -``` + +```bash # distributed training(8p) bash scripts/run_distribute_train_gpu.sh /path/ImageNet2012/train" ``` @@ -313,15 +323,18 @@ bash scripts/run_distribute_train_gpu.sh /path/ImageNet2012/train" ### Evaluation - Do eval as follows, need to specify dataset type as "cifar10" or "imagenet2012" -``` + +```bash # when using cifar10 dataset python eval.py --data_path=your_data_path --dataset="cifar10" --device_target="Ascend" --pre_trained=./*-70-781.ckpt > output.eval.log 2>&1 & # when using imagenet2012 dataset python eval.py --data_path=your_data_path --dataset="imagenet2012" --device_target="GPU" --pre_trained=./*-150-5004.ckpt > output.eval.log 2>&1 & ``` + - The above python command will run in the background, you can view the results through the file `output.eval.log`. 
You will get the accuracy as following: -``` + +```bash # when using cifar10 dataset # grep "result: " output.eval.log result: {'acc': 0.92} @@ -331,11 +344,11 @@ after allreduce eval: top1_correct=36636, tot=50000, acc=73.27% after allreduce eval: top5_correct=45582, tot=50000, acc=91.16% ``` - # [Model Description](#contents) + ## [Performance](#contents) -### Training Performance +### Training Performance | Parameters | VGG16(Ascend) | VGG16(GPU) | | -------------------------- | ---------------------------------------------- |------------------------------------| @@ -354,7 +367,6 @@ after allreduce eval: top5_correct=45582, tot=50000, acc=91.16% | Checkpoint for Fine tuning | 1.1G(.ckpt file) |1.1G(.ckpt file) | | Scripts |[vgg16](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/vgg16) | | - ### Evaluation Performance | Parameters | VGG16(Ascend) | VGG16(GPU) @@ -372,5 +384,6 @@ after allreduce eval: top5_correct=45582, tot=50000, acc=91.16% In dataset.py, we set the seed inside “create_dataset" function. We also use random seed in train.py. -# [ModelZoo Homepage](#contents) - Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). +# [ModelZoo Homepage](#contents) + +Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). diff --git a/model_zoo/official/cv/vgg16/README_CN.md b/model_zoo/official/cv/vgg16/README_CN.md index 2a0295fe78..55c6e6c4c3 100644 --- a/model_zoo/official/cv/vgg16/README_CN.md +++ b/model_zoo/official/cv/vgg16/README_CN.md @@ -94,7 +94,7 @@ VGG 16网络主要由几个基本模块(包括卷积层和池化层)和三 # 环境要求 - 硬件(Ascend或GPU) - - 准备Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 准备Ascend或GPU处理器搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/cv/warpctc/README.md b/model_zoo/official/cv/warpctc/README.md index 4ba7a18e4c..a3566ea573 100644 --- a/model_zoo/official/cv/warpctc/README.md +++ b/model_zoo/official/cv/warpctc/README.md @@ -12,10 +12,10 @@ - [Parameters Configuration](#parameters-configuration) - [Dataset Preparation](#dataset-preparation) - [Training Process](#training-process) - - [Training](#training) - - [Distributed Training](#distributed-training) + - [Training](#training) + - [Distributed Training](#distributed-training) - [Evaluation Process](#evaluation-process) - - [Evaluation](#evaluation) + - [Evaluation](#evaluation) - [Model Description](#model-description) - [Performance](#performance) - [Training Performance](#training-performance) @@ -38,24 +38,23 @@ The dataset is self-generated using a third-party library called [captcha](https # [Environment Requirements](#contents) - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. You will be able to have access to related resources once approved. + - Prepare hardware environment with Ascend or GPU processor. 
- Framework - - [MindSpore](https://gitee.com/mindspore/mindspore) + - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: - - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) - - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) - + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) - Generate dataset. Run the script `scripts/run_process_data.sh` to generate a dataset. By default, the shell script will generate 10000 test images and 50000 train images separately. - - ``` + + ```bash $ cd scripts $ sh run_process_data.sh - + # after execution, you will find the dataset like the follows: . └─warpctc @@ -67,31 +66,33 @@ The dataset is self-generated using a third-party library called [captcha](https - After the dataset is prepared, you may start running the training or the evaluation scripts as follows: - Running on Ascend - ``` + + ```bash # distribute training example in Ascend $ bash run_distribute_train.sh rank_table.json ../data/train - + # evaluation example in Ascend $ bash run_eval.sh ../data/test warpctc-30-97.ckpt Ascend - + # standalone training example in Ascend $ bash run_standalone_train.sh ../data/train Ascend ``` + For distributed training, a hccl configuration file with JSON format needs to be created in advance. Please follow the instructions in the link below: - https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools. - + . + - Running on GPU - - ``` + + ```bash # distribute training example in GPU $ bash run_distribute_train_for_gpu.sh 8 ../data/train - + # standalone training example in GPU $ bash run_standalone_train.sh ../data/train GPU - + # evaluation example in GPU $ bash run_eval.sh ../data/test warpctc-30-97.ckpt GPU ``` @@ -127,7 +128,8 @@ The dataset is self-generated using a third-party library called [captcha](https ## [Script Parameters](#contents) ### Training Script Parameters -``` + +```bash # distributed training in Ascend Usage: bash run_distribute_train.sh [RANK_TABLE_FILE] [DATASET_PATH] @@ -142,8 +144,8 @@ Usage: bash run_standalone_train.sh [DATASET_PATH] [PLATFORM] Parameters for both training and evaluation can be set in config.py. -``` -"max_captcha_digits": 4, # max number of digits in each +```bash +"max_captcha_digits": 4, # max number of digits in each "captcha_width": 160, # width of captcha images "captcha_height": 64, # height of capthca images "batch_size": 64, # batch size of input tensor @@ -158,36 +160,41 @@ Parameters for both training and evaluation can be set in config.py. ``` ## [Dataset Preparation](#contents) + - You may refer to "Generate dataset" in [Quick Start](#quick-start) to automatically generate a dataset, or you may choose to generate a captcha dataset by yourself. ## [Training Process](#contents) - Set options in `config.py`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/data_preparation.html) for more information about dataset. - + ### [Training](#contents) + - Run `run_standalone_train.sh` for non-distributed training of WarpCTC model, either on Ascend or on GPU. 
``` bash bash run_standalone_train.sh [DATASET_PATH] [PLATFORM] ``` - + ### [Distributed Training](#contents) + - Run `run_distribute_train.sh` for distributed training of WarpCTC model on Ascend. ``` bash bash run_distribute_train.sh [RANK_TABLE_FILE] [DATASET_PATH] ``` - - Run `run_distribute_train_gpu.sh` for distributed training of WarpCTC model on GPU. + ``` bash bash run_distribute_train_gpu.sh [RANK_SIZE] [DATASET_PATH] ``` ## [Evaluation Process](#contents) + ### [Evaluation](#contents) - Run `run_eval.sh` for evaluation. + ``` bash bash run_eval.sh [DATASET_PATH] [CHECKPOINT_PATH] [PLATFORM] ``` @@ -216,7 +223,6 @@ bash run_eval.sh [DATASET_PATH] [CHECKPOINT_PATH] [PLATFORM] | Checkpoint for Fine tuning | 20.3M (.ckpt file) | 20.3M (.ckpt file) | | Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/warpctc) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/warpctc) | - ### [Evaluation Performance](#contents) | Parameters | WarpCTC | @@ -232,7 +238,9 @@ bash run_eval.sh [DATASET_PATH] [CHECKPOINT_PATH] [PLATFORM] | Model for inference | 20.3M (.ckpt file) | # [Description of Random Situation](#contents) + In dataset.py, we set the seed inside “create_dataset" function. We also use random seed in train.py for weight initialization. # [ModelZoo Homepage](#contents) -Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). \ No newline at end of file + +Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). diff --git a/model_zoo/official/cv/warpctc/README_CN.md b/model_zoo/official/cv/warpctc/README_CN.md index ca24178434..33650773d9 100644 --- a/model_zoo/official/cv/warpctc/README_CN.md +++ b/model_zoo/official/cv/warpctc/README_CN.md @@ -43,7 +43,7 @@ WarpCTC是带有一层FC神经网络的二层堆叠LSTM模型。详细信息请 # 环境要求 - 硬件(Ascend/GPU) - - 使用Ascend或GPU处理器来搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawie,审核通过即可获得资源。 + - 使用Ascend或GPU处理器来搭建硬件环境。 - 框架 - [MindSpore](https://gitee.com/mindspore/mindspore) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/cv/xception/README.md b/model_zoo/official/cv/xception/README.md index 2891a269c0..4f9e51187e 100644 --- a/model_zoo/official/cv/xception/README.md +++ b/model_zoo/official/cv/xception/README.md @@ -58,7 +58,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend. 
- Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/yolov3_darknet53/README.md b/model_zoo/official/cv/yolov3_darknet53/README.md index 82652386a2..80dee63ff7 100644 --- a/model_zoo/official/cv/yolov3_darknet53/README.md +++ b/model_zoo/official/cv/yolov3_darknet53/README.md @@ -68,7 +68,7 @@ Dataset used: [COCO2014](https://cocodataset.org/#download) ## [Environment Requirements](#contents) - Hardware(Ascend/GPU) -- Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/cv/yolov3_darknet53/README_CN.md b/model_zoo/official/cv/yolov3_darknet53/README_CN.md index 912f13ed28..b4bc4709a5 100644 --- a/model_zoo/official/cv/yolov3_darknet53/README_CN.md +++ b/model_zoo/official/cv/yolov3_darknet53/README_CN.md @@ -70,7 +70,7 @@ YOLOv3使用DarkNet53执行特征提取,这是YOLOv2中的Darknet-19和残差 # 环境要求 - 硬件(Ascend/GPU) - - 使用Ascend或GPU处理器来搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) 至ascend@huawei.com,审核通过即可获得资源。 + - 使用Ascend或GPU处理器来搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/cv/yolov3_darknet53_quant/README.md b/model_zoo/official/cv/yolov3_darknet53_quant/README.md index 83f9c15249..25332e847c 100644 --- a/model_zoo/official/cv/yolov3_darknet53_quant/README.md +++ b/model_zoo/official/cv/yolov3_darknet53_quant/README.md @@ -4,7 +4,7 @@ - [Model Architecture](#model-architecture) - [Dataset](#dataset) - [Environment Requirements](#environment-requirements) -- [Quick Start](#quick-start) +- [Quick Start](#quick-start) - [Script Description](#script-description) - [Script and Sample Code](#script-and-sample-code) - [Script Parameters](#script-parameters) @@ -20,13 +20,12 @@ - [Description of Random Situation](#description-of-random-situation) - [ModelZoo Homepage](#modelzoo-homepage) - # [YOLOv3-DarkNet53-Quant Description](#contents) -You only look once (YOLO) is a state-of-the-art, real-time object detection system. YOLOv3 is extremely fast and accurate. +You only look once (YOLO) is a state-of-the-art, real-time object detection system. YOLOv3 is extremely fast and accurate. Prior detection systems repurpose classifiers or localizers to perform detection. They apply the model to an image at multiple locations and scales. High scoring regions of the image are considered detections. - YOLOv3 use a totally different approach. It apply a single neural network to the full image. This network divides the image into regions and predicts bounding boxes and probabilities for each region. These bounding boxes are weighted by the predicted probabilities. + YOLOv3 use a totally different approach. It apply a single neural network to the full image. This network divides the image into regions and predicts bounding boxes and probabilities for each region. These bounding boxes are weighted by the predicted probabilities. 
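To make the weighting step concrete, the sketch below shows one common way such per-region predictions are turned into scored detections: each candidate box is scored by its objectness multiplied by its class probability, and low-confidence boxes are dropped. This is an illustrative NumPy sketch only, not the post-processing code shipped in this directory, and the 0.5 threshold is an assumed example value.

```python
import numpy as np

def score_boxes(objectness, class_probs, threshold=0.5):
    # Weight every candidate box by objectness * class probability,
    # keep its best class, and discard low-confidence boxes.
    scores = objectness[:, None] * class_probs        # (num_boxes, num_classes)
    best_class = scores.argmax(axis=1)
    best_score = scores.max(axis=1)
    keep = best_score >= threshold
    return best_class[keep], best_score[keep]

# Toy example: 3 candidate boxes over the 80 COCO classes.
objectness = np.array([0.9, 0.2, 0.7])
class_probs = np.random.rand(3, 80)
print(score_boxes(objectness, class_probs))
```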
YOLOv3 uses a few tricks to improve training and increase performance, including: multi-scale predictions, a better backbone classifier, and more. The full details are in the paper! @@ -35,43 +34,39 @@ In order to reduce the size of the weight and improve the low-bit computing perf [Paper](https://pjreddie.com/media/files/papers/YOLOv3.pdf): YOLOv3: An Incremental Improvement. Joseph Redmon, Ali Farhadi, University of Washington - # [Model Architecture](#contents) YOLOv3 use DarkNet53 for performing feature extraction, which is a hybrid approach between the network used in YOLOv2, Darknet-19, and that newfangled residual network stuff. DarkNet53 uses successive 3 × 3 and 1 × 1 convolutional layers and has some shortcut connections as well and is significantly larger. It has 53 convolutional layers. - # [Dataset](#contents) + Note that you can run the scripts based on the dataset mentioned in original paper or widely used in relevant domain/network architecture. In the following sections, we will introduce how to run the scripts using the related dataset below. -Dataset used: [COCO2014](https://cocodataset.org/#download) +Dataset used: [COCO2014](https://cocodataset.org/#download) - Dataset size: 19G, 123,287 images, 80 object categories. - - Train:13G, 82,783 images - - Val:6GM, 40,504 images - - Annotations: 241M, Train/Val annotations + - Train:13G, 82,783 images + - Val:6GM, 40,504 images + - Annotations: 241M, Train/Val annotations - Data format:zip files - - Note:Data will be processed in yolo_dataset.py, and unzip files before uses it. - + - Note:Data will be processed in yolo_dataset.py, and unzip files before uses it. # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - - [MindSpore](https://www.mindspore.cn/install/en) + - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) - - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) - - + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) -After installing MindSpore via the official website, you can start training and evaluation in Ascend as follows: +After installing MindSpore via the official website, you can start training and evaluation in Ascend as follows: -``` -# The yolov3_darknet53_noquant.ckpt in the follow script is got from yolov3-darknet53 training like paper. +```bash +# The yolov3_darknet53_noquant.ckpt in the follow script is got from yolov3-darknet53 training like paper. # The parameter of resume_yolov3 is necessary. # The parameter of training_shape define image shape for network, default is "". # It means use 10 kinds of shape as input shape, or it can be set some kind of shape. @@ -103,17 +98,16 @@ python eval.py \ sh run_eval.sh dataset/coco2014/ checkpoint/yolov3_quant.ckpt 0 ``` - # [Script Description](#contents) ## [Script and Sample Code](#contents) -``` +```bash . 
-└─yolov3_darknet53_quant +└─yolov3_darknet53_quant ├─README.md ├─mindspore_hub_conf.md # config for mindspore hub - ├─scripts + ├─scripts ├─run_standalone_train.sh # launch standalone training(1p) in ascend ├─run_distribute_train.sh # launch distributed training(8p) in ascend └─run_eval.sh # launch evaluating in ascend @@ -134,10 +128,9 @@ sh run_eval.sh dataset/coco2014/ checkpoint/yolov3_quant.ckpt 0 └─train.py # train net ``` - ## [Script Parameters](#contents) -``` +```bash Major parameters in train.py as follow. optional arguments: @@ -194,21 +187,19 @@ optional arguments: Resize rate for multi-scale training. Default: None ``` - - ## [Training Process](#contents) -### Training on Ascend +### Training on Ascend ### Distributed Training -``` +```bash sh run_distribute_train.sh dataset/coco2014 yolov3_darknet53_noquant.ckpt rank_table_8p.json ``` The above shell script will run distribute training in the background. You can view the results through the file `train_parallel[X]/log.txt`. The loss value will be achieved as follows: -``` +```bash # distribute training result(8p) epoch[0], iter[0], loss:483.341675, 0.31 imgs/sec, lr:0.0 epoch[0], iter[100], loss:55.690952, 3.46 imgs/sec, lr:0.0 @@ -232,14 +223,13 @@ epoch[134], iter[86400], loss:35.603033, 142.23 imgs/sec, lr:1.6245529650404933e epoch[134], iter[86500], loss:34.303755, 145.18 imgs/sec, lr:1.6245529650404933e-06 ``` - ## [Evaluation Process](#contents) ### Evaluation on Ascend Before running the command below. -``` +```bash python eval.py \ --data_dir=./dataset/coco2014 \ --pretrained=0-130_83330.ckpt \ @@ -250,7 +240,7 @@ sh run_eval.sh dataset/coco2014/ checkpoint/0-130_83330.ckpt 0 The above python command will run in the background. You can view the results through the file "log.txt". The mAP of the test dataset will be as follows: -``` +```bash # log.txt =============coco eval reulst========= Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.310 @@ -267,8 +257,8 @@ Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.450 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.558 ``` - # [Model Description](#contents) + ## [Performance](#contents) ### Evaluation Performance @@ -279,7 +269,7 @@ Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.558 | Resource | Ascend 910; CPU 2.60GHz, 192cores; Memory, 755G | | uploaded Date | 09/15/2020 (month/day/year) | | MindSpore Version | 1.0.0 | -| Dataset | COCO2014 | +| Dataset | COCO2014 | | Training Parameters | epoch=135, batch_size=16, lr=0.012, momentum=0.9 | | Optimizer | Momentum | | Loss Function | Sigmoid Cross Entropy with logits | @@ -289,8 +279,7 @@ Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.558 | Total time | 8pc: 23.5 hours | | Parameters (M) | 62.1 | | Checkpoint for Fine tuning | 474M (.ckpt file) | -| Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/yolov3_darknet53_quant | - +| Scripts | | ### Inference Performance @@ -306,11 +295,10 @@ Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.558 | Accuracy | 8pcs: 31.0% | | Model for inference | 474M (.ckpt file) | - # [Description of Random Situation](#contents) -There are random seeds in distributed_sampler.py, transforms.py, yolo_dataset.py files. - +There are random seeds in distributed_sampler.py, transforms.py, yolo_dataset.py files. 
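If fully reproducible runs are needed, the snippet below illustrates how such seeds are typically pinned before the dataset and samplers are built; the seed value is an arbitrary example, and the repository's files may set their seeds differently.

```python
import random
import numpy as np

def set_reproducible_seed(seed=1):
    # Pin the generators that samplers and image transforms usually draw from.
    random.seed(seed)
    np.random.seed(seed)

set_reproducible_seed(1)
```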
# [ModelZoo Homepage](#contents) - Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). + +Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). diff --git a/model_zoo/official/cv/yolov3_darknet53_quant/README_CN.md b/model_zoo/official/cv/yolov3_darknet53_quant/README_CN.md index 4f1b1a140b..8e4084ceee 100644 --- a/model_zoo/official/cv/yolov3_darknet53_quant/README_CN.md +++ b/model_zoo/official/cv/yolov3_darknet53_quant/README_CN.md @@ -56,7 +56,7 @@ YOLOv3使用DarkNet53执行特征提取,这是YOLOv2中的Darknet-19和残差 # 环境要求 - 硬件(Ascend处理器) - - 准备Ascend或GPU处理器搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 准备Ascend或GPU处理器搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install/) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/cv/yolov3_resnet18/README.md b/model_zoo/official/cv/yolov3_resnet18/README.md index 92b540c300..b6393cb132 100644 --- a/model_zoo/official/cv/yolov3_resnet18/README.md +++ b/model_zoo/official/cv/yolov3_resnet18/README.md @@ -66,12 +66,12 @@ Dataset used: [COCO2017]() # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - - [MindSpore](https://www.mindspore.cn/install/en) + - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) - - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) diff --git a/model_zoo/official/cv/yolov3_resnet18/README_CN.md b/model_zoo/official/cv/yolov3_resnet18/README_CN.md index 991837dfc2..b4409ee933 100644 --- a/model_zoo/official/cv/yolov3_resnet18/README_CN.md +++ b/model_zoo/official/cv/yolov3_resnet18/README_CN.md @@ -69,7 +69,7 @@ YOLOv3整体网络架构如下: # 环境要求 - 硬件(Ascend处理器) - - 准备Ascend处理器搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 + - 准备Ascend处理器搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/cv/yolov4/README.md b/model_zoo/official/cv/yolov4/README.md index f852aa2f01..279de66268 100644 --- a/model_zoo/official/cv/yolov4/README.md +++ b/model_zoo/official/cv/yolov4/README.md @@ -62,7 +62,7 @@ other datasets need to use the same format as MS COCO. # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. 
Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/gnn/gcn/README.md b/model_zoo/official/gnn/gcn/README.md index ada0fb0c02..77571bd31d 100644 --- a/model_zoo/official/gnn/gcn/README.md +++ b/model_zoo/official/gnn/gcn/README.md @@ -14,20 +14,18 @@ - [Description of Random Situation](#description-of-random-situation) - [ModelZoo Homepage](#modelzoo-homepage) - # [GCN Description](#contents) GCN(Graph Convolutional Networks) was proposed in 2016 and designed to do semi-supervised learning on graph-structured data. A scalable approach based on an efficient variant of convolutional neural networks which operate directly on graphs was presented. The model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. [Paper](https://arxiv.org/abs/1609.02907): Thomas N. Kipf, Max Welling. 2016. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR 2016. - # [Model Architecture](#contents) -GCN contains two graph convolution layers. Each layer takes nodes features and adjacency matrix as input, nodes' features are then updated by aggregating neighbours' features. - +GCN contains two graph convolution layers. Each layer takes nodes features and adjacency matrix as input, nodes' features are then updated by aggregating neighbours' features. # [Dataset](#contents) + Note that you can run the scripts based on the dataset mentioned in original paper or widely used in relevant domain/network architecture. In the following sections, we will introduce how to run the scripts using the related dataset below. | Dataset | Type | Nodes | Edges | Classes | Features | Label rate | @@ -35,29 +33,23 @@ Note that you can run the scripts based on the dataset mentioned in original pap | Cora | Citation network | 2708 | 5429 | 7 | 1433 | 0.052 | | Citeseer| Citation network | 3327 | 4732 | 6 | 3703 | 0.036 | - - - # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - - [MindSpore](https://gitee.com/mindspore/mindspore) + - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: - - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) - - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) - + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Quick Start](#contents) - Install [MindSpore](https://www.mindspore.cn/install/en). - - Download the dataset Cora or Citeseer provided by /kimiyoung/planetoid from github. - - Place the dataset to any path you want, the folder should include files as follows(we use Cora dataset as an example): - -``` + +```bash . 
└─data ├─ind.cora.allx @@ -71,15 +63,18 @@ Note that you can run the scripts based on the dataset mentioned in original pap ``` - Generate dataset in mindrecord format for cora or citeseer. -####Usage -```buildoutcfg + +## Usage + +```bash cd ./scripts # SRC_PATH is the dataset file path you downloaded, DATASET_NAME is cora or citeseer sh run_process_data.sh [SRC_PATH] [DATASET_NAME] ``` -####Launch -``` +## Launch + +```bash #Generate dataset in mindrecord format for cora sh run_process_data.sh ./data cora #Generate dataset in mindrecord format for citeseer @@ -89,12 +84,12 @@ sh run_process_data.sh ./data citeseer # [Script Description](#contents) ## [Script and Sample Code](#contents) - + ```shell . -└─gcn +└─gcn ├─README.md - ├─scripts + ├─scripts | ├─run_process_data.sh # Generate dataset in mindrecord format | └─run_train.sh # Launch training, now only Ascend backend is supported. | @@ -106,12 +101,12 @@ sh run_process_data.sh ./data citeseer | └─train.py # Train net, evaluation is performed after every training epoch. After the verification result converges, the training stops, then testing is performed. ``` - + ## [Script Parameters](#contents) - + Parameters for training can be set in config.py. - -``` + +```bash "learning_rate": 0.01, # Learning rate "epochs": 200, # Epoch sizes for training "hidden1": 16, # Hidden size for the first graph convolution layer @@ -121,26 +116,25 @@ Parameters for training can be set in config.py. ``` ## [Training, Evaluation, Test Process](#contents) - -#### Usage -``` +### Usage + +```bash # run train with cora or citeseer dataset, DATASET_NAME is cora or citeseer sh run_train.sh [DATASET_NAME] ``` - -#### Launch - + +### Launch + ```bash sh run_train.sh cora ``` - -#### Result - + +### Result + Training result will be stored in the scripts path, whose folder name begins with "train". You can find the result like the followings in log. - -``` +```bash Epoch: 0001 train_loss= 1.95373 train_acc= 0.09286 val_loss= 1.95075 val_acc= 0.20200 time= 7.25737 Epoch: 0002 train_loss= 1.94812 train_acc= 0.32857 val_loss= 1.94717 val_acc= 0.34000 time= 0.00438 Epoch: 0003 train_loss= 1.94249 train_acc= 0.47857 val_loss= 1.94337 val_acc= 0.43000 time= 0.00428 @@ -158,6 +152,7 @@ Test set results: cost= 1.00983 accuracy= 0.81300 time= 0.39083 ``` # [Model Description](#contents) + ## [Performance](#contents) | Parameters | GCN | @@ -171,20 +166,17 @@ Test set results: cost= 1.00983 accuracy= 0.81300 time= 0.39083 | Loss Function | Softmax Cross Entropy | | Accuracy | 81.5/70.3 | | Parameters (B) | 92160/59344 | -| Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/gnn/gcn | - - +| Scripts | | # [Description of Random Situation](#contents) There are two random situations: + - Seed is set in train.py according to input argument --seed. - Dropout operations. Some seeds have already been set in train.py to avoid the randomness of weight initialization. If you want to disable dropout, please set the corresponding dropout_prob parameter to 0 in src/config.py. - # [ModelZoo Homepage](#contents) Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). 
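As a supplement to the model architecture description above, where each graph convolution layer takes node features and the adjacency matrix and aggregates neighbour features, the following is a minimal NumPy sketch of one propagation step in the standard Kipf & Welling formulation. It is an illustration under simplified assumptions (dense matrices, ReLU activation), not the implementation used by the training script.

```python
import numpy as np

def gcn_layer(features, adjacency, weight):
    # Aggregate neighbour features through the symmetrically
    # normalized adjacency, then apply a linear map and ReLU.
    a_hat = adjacency + np.eye(adjacency.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))   # D^-1/2
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(a_norm @ features @ weight, 0.0)

# Toy graph: 3 nodes, 2 input features, hidden size 16 as in config.py.
adjacency = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
features = np.random.randn(3, 2)
weight = np.random.randn(2, 16)
print(gcn_layer(features, adjacency, weight).shape)  # (3, 16)
```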
- diff --git a/model_zoo/official/gnn/gcn/README_CN.md b/model_zoo/official/gnn/gcn/README_CN.md index 1b5035bc2a..64671730c4 100644 --- a/model_zoo/official/gnn/gcn/README_CN.md +++ b/model_zoo/official/gnn/gcn/README_CN.md @@ -44,7 +44,7 @@ GCN包含两个图卷积层。每一层以节点特征和邻接矩阵为输入 # 环境要求 - 硬件(Ascend处理器) - - 准备Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei,审核通过即可获得资源。 + - 准备Ascend或GPU处理器搭建硬件环境。 - 框架 - [MindSpore](https://gitee.com/mindspore/mindspore) - 如需查看详情,请参见如下资源: diff --git a/model_zoo/official/nlp/bert/README.md b/model_zoo/official/nlp/bert/README.md index af1e717136..4540f05a86 100644 --- a/model_zoo/official/nlp/bert/README.md +++ b/model_zoo/official/nlp/bert/README.md @@ -56,7 +56,7 @@ The backbone structure of BERT is transformer. For BERT_base, the transformer co # [Environment Requirements](#contents) - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend/GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources. + - Prepare hardware environment with Ascend or GPU processor. - Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: diff --git a/model_zoo/official/nlp/bert/README_CN.md b/model_zoo/official/nlp/bert/README_CN.md index 4b9ca2f79c..98f0fb5e77 100644 --- a/model_zoo/official/nlp/bert/README_CN.md +++ b/model_zoo/official/nlp/bert/README_CN.md @@ -59,7 +59,7 @@ BERT的主干结构为Transformer。对于BERT_base,Transformer包含12个编 # 环境要求 - 硬件(Ascend处理器) - - 准备Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,申请通过后,即可获得资源。 + - 准备Ascend或GPU处理器搭建硬件环境。 - 框架 - [MindSpore](https://gitee.com/mindspore/mindspore) - 更多关于Mindspore的信息,请查看以下资源: diff --git a/model_zoo/official/nlp/bert_thor/README.md b/model_zoo/official/nlp/bert_thor/README.md index 35dcaf77cf..24f042c884 100644 --- a/model_zoo/official/nlp/bert_thor/README.md +++ b/model_zoo/official/nlp/bert_thor/README.md @@ -50,7 +50,7 @@ The classical first-order optimization algorithm, such as SGD, has a small amoun ## Environment Requirements - Hardware(Ascend) - - Prepare hardware environment with Ascend. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend. 
- Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/nlp/bert_thor/README_CN.md b/model_zoo/official/nlp/bert_thor/README_CN.md index 6db8bd6740..304272c938 100644 --- a/model_zoo/official/nlp/bert_thor/README_CN.md +++ b/model_zoo/official/nlp/bert_thor/README_CN.md @@ -56,7 +56,7 @@ BERT的总体架构包含3个嵌入层,用于查找令牌嵌入、位置嵌入 环境要求 - 硬件(Ascend) - - 使用Ascend处理器准备硬件环境。- 如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,申请通过即可获得资源。 + - 使用Ascend处理器准备硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install) - 更多关于Mindspore的信息,请查看以下资源: diff --git a/model_zoo/official/nlp/fasttext/README.md b/model_zoo/official/nlp/fasttext/README.md index 276ac4c99d..cc63e79473 100644 --- a/model_zoo/official/nlp/fasttext/README.md +++ b/model_zoo/official/nlp/fasttext/README.md @@ -25,9 +25,9 @@ # [FastText](#contents) FastText is a fast text classification algorithm, which is simple and efficient. It was proposed by Armand -Joulin, Tomas Mikolov etc. in the artical "Bag of Tricks for Efficient Text Classification" in 2016. It is similar to +Joulin, Tomas Mikolov etc. in the article "Bag of Tricks for Efficient Text Classification" in 2016. It is similar to CBOW in model architecture, where the middle word is replace by a label. FastText adopts ngram feature as addition feature -to get some information about words. It speeds up training and testing while maintaining high percision, and widly used +to get some information about words. It speeds up training and testing while maintaining high precision, and widly used in various tasks of text classification. [Paper](https://arxiv.org/pdf/1607.01759.pdf): "Bag of Tricks for Efficient Text Classification", 2016, A. Joulin, E. Grave, P. Bojanowski, and T. Mikolov @@ -50,7 +50,7 @@ architecture. In the following sections, we will introduce how to run the script # [Environment Requirements](#content) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: diff --git a/model_zoo/official/nlp/gnmt_v2/README.md b/model_zoo/official/nlp/gnmt_v2/README.md index 89a6ef9d2d..3c7409eab8 100644 --- a/model_zoo/official/nlp/gnmt_v2/README.md +++ b/model_zoo/official/nlp/gnmt_v2/README.md @@ -47,7 +47,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap ## Platform - Hardware (Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you could get the resources for trial. + - Prepare hardware environment with Ascend processor. - Framework - Install [MindSpore](https://www.mindspore.cn/install/en). 
- For more information, please check the resources below: diff --git a/model_zoo/official/nlp/gpt/README.md b/model_zoo/official/nlp/gpt/README.md index de96560bc9..49ac0e716d 100644 --- a/model_zoo/official/nlp/gpt/README.md +++ b/model_zoo/official/nlp/gpt/README.md @@ -30,7 +30,7 @@ GPT3 stacks many layers of decoder of transformer. According to the layer number # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: diff --git a/model_zoo/official/nlp/gru/README.md b/model_zoo/official/nlp/gru/README.md index 9958dec152..53876b4272 100644 --- a/model_zoo/official/nlp/gru/README.md +++ b/model_zoo/official/nlp/gru/README.md @@ -45,7 +45,7 @@ In this model, we use the Multi30K dataset as our train and test dataset.As trai # [Environment Requirements](#content) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: diff --git a/model_zoo/official/nlp/lstm/README.md b/model_zoo/official/nlp/lstm/README.md index 9f4b0a505a..360195e1cf 100644 --- a/model_zoo/official/nlp/lstm/README.md +++ b/model_zoo/official/nlp/lstm/README.md @@ -39,7 +39,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap # [Environment Requirements](#contents) - Hardware(GPU/CPU/Ascend) - - If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you could get the resources for trial. + - Prepare hardware environment with Ascend, GPU or CPU processor. - Framework - [MindSpore](https://gitee.com/mindspore/mindspore) - For more information, please check the resources below: @@ -48,7 +48,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap # [Quick Start](#contents) -- runing on Ascend +- running on Ascend ```bash # run training example @@ -58,7 +58,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap bash run_eval_ascend.sh 0 ./preprocess lstm-20_390.ckpt ``` -- runing on GPU +- running on GPU ```bash # run training example @@ -68,7 +68,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap bash run_eval_gpu.sh 0 ./aclimdb ./glove_dir lstm-20_390.ckpt ``` -- runing on CPU +- running on CPU ```bash # run training example @@ -200,7 +200,7 @@ Ascend: - Set options in `config.py`, including learning rate and network hyperparameters. -- runing on Ascend +- running on Ascend Run `sh run_train_ascend.sh` for training. 
@@ -217,7 +217,7 @@ Ascend: ... ``` -- runing on GPU +- running on GPU Run `sh run_train_gpu.sh` for training. @@ -234,7 +234,7 @@ Ascend: ... ``` -- runing on CPU +- running on CPU Run `sh run_train_cpu.sh` for training. diff --git a/model_zoo/official/nlp/lstm/README_CN.md b/model_zoo/official/nlp/lstm/README_CN.md index 1718e0581f..a3a090348a 100644 --- a/model_zoo/official/nlp/lstm/README_CN.md +++ b/model_zoo/official/nlp/lstm/README_CN.md @@ -44,7 +44,7 @@ LSTM模型包含嵌入层、编码器和解码器这几个模块,编码器模 # 环境要求 - 硬件(GPU/CPU/Ascend) - - 如果你想尝试Ascend,请发送[Ascend Model Zoo体验资源申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)到ascend@huawei.com申请Ascend体验资源。 + - 准备Ascend或GPU处理器搭建硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install) - 更多关于Mindspore的信息,请查看以下资源: diff --git a/model_zoo/official/nlp/mass/README.md b/model_zoo/official/nlp/mass/README.md index e03bfba4be..9ae73502ea 100644 --- a/model_zoo/official/nlp/mass/README.md +++ b/model_zoo/official/nlp/mass/README.md @@ -349,7 +349,7 @@ GPU: sh run_gpu.sh [--options] ``` -The usage of `run_ascend.sh` is shown as bellow: +The usage of `run_ascend.sh` is shown as below: ```text Usage: run_ascend.sh [-h, --help] [-t, --task ] [-n, --device_num ] @@ -371,7 +371,7 @@ options: Notes: Be sure to assign the hccl_json file while running a distributed-training. -The usage of `run_gpu.sh` is shown as bellow: +The usage of `run_gpu.sh` is shown as below: ```text Usage: run_gpu.sh [-h, --help] [-t, --task ] [-n, --device_num ] @@ -488,7 +488,7 @@ More detail about LR scheduler could be found in `src/utils/lr_scheduler.py`. ## Platform - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/nlp/mass/README_CN.md b/model_zoo/official/nlp/mass/README_CN.md index a07f812b44..9a0ec500a6 100644 --- a/model_zoo/official/nlp/mass/README_CN.md +++ b/model_zoo/official/nlp/mass/README_CN.md @@ -487,7 +487,7 @@ python weights_average.py --input_files your_checkpoint_list --output_file model ## 平台 - 硬件(Ascend或GPU) - - 使用Ascend或GPU处理器准备硬件环境。- 如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,申请通过即可获得资源。 + - 使用Ascend或GPU处理器准备硬件环境。 - 框架 - [MindSpore](https://www.mindspore.cn/install) - 更多关于Mindspore的信息,请查看以下资源: diff --git a/model_zoo/official/nlp/prophetnet/README.md b/model_zoo/official/nlp/prophetnet/README.md index b4f2a3c742..4aa46ea46c 100644 --- a/model_zoo/official/nlp/prophetnet/README.md +++ b/model_zoo/official/nlp/prophetnet/README.md @@ -340,7 +340,7 @@ GPU: sh run_gpu.sh [--options] ``` -The usage of `run_ascend.sh` is shown as bellow: +The usage of `run_ascend.sh` is shown as below: ```text Usage: run_ascend.sh [-h, --help] [-t, --task ] [-n, --device_num ] @@ -362,7 +362,7 @@ options: Notes: Be sure to assign the hccl_json file while running a distributed-training. 
-The usage of `run_gpu.sh` is shown as bellow: +The usage of `run_gpu.sh` is shown as below: ```text Usage: run_gpu.sh [-h, --help] [-t, --task ] [-n, --device_num ] @@ -546,7 +546,7 @@ The comparisons between MASS and other baseline methods in terms of PPL on Corne ## Platform - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you could get the resources for trial. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/official/nlp/textcnn/README.md b/model_zoo/official/nlp/textcnn/README.md index 4519aaca20..47f5cec53f 100644 --- a/model_zoo/official/nlp/textcnn/README.md +++ b/model_zoo/official/nlp/textcnn/README.md @@ -40,7 +40,7 @@ Dataset used: [Movie Review Data]( Please refer to [1] to obtain the download link and data preprocess + 3. Start Training - Once the dataset is ready, the model can be trained and evaluated on the single device(Ascend) by the command as follows: + + Once the dataset is ready, the model can be trained and evaluated on the single device(Ascend) by the command as follows: ```bash python train_and_eval.py --data_path=./data/mindrecord --data_type=mindrecord ``` + To evaluate the model, command as follows: + ```bash python eval.py --data_path=./data/mindrecord --data_type=mindrecord ``` - # [Script Description](#contents) + ## [Script and Sample Code](#contents) -``` + +```bash └── wide_and_deep_multitable ├── eval.py ├── README.md @@ -87,10 +95,9 @@ python eval.py --data_path=./data/mindrecord --data_type=mindrecord ### [Training Script Parameters](#contents) -The parameters is same for ``train_and_eval.py`` and ``train_and_eval_distribute.py`` - +The parameters is same for ``train_and_eval.py`` and ``train_and_eval_distribute.py`` -``` +```bash usage: train_and_eval.py [-h] [--data_path DATA_PATH] [--epochs EPOCHS] [--batch_size BATCH_SIZE] [--eval_batch_size EVAL_BATCH_SIZE] @@ -115,35 +122,41 @@ optional arguments: --deep_layers_dim The dimension of all deep layers.(Default:[1024,1024,1024,1024]) --deep_layers_act The activation function of all deep layers.(Default:'relu') --keep_prob The keep rate in dropout layer.(Default:1.0) - --adam_lr The learning rate of the deep part. (Default:0.003) - --ftrl_lr The learning rate of the wide part.(Default:0.1) - --l2_coef The coefficient of the L2 pernalty. (Default:0.0) + --adam_lr The learning rate of the deep part. (Default:0.003) + --ftrl_lr The learning rate of the wide part.(Default:0.1) + --l2_coef The coefficient of the L2 pernalty. (Default:0.0) --is_tf_dataset IS_TF_DATASET Whether the input is tfrecords. 
(Default:True) - --dropout_flag Enable dropout.(Default:0) + --dropout_flag Enable dropout.(Default:0) --output_path OUTPUT_PATH Deprecated - --ckpt_path CKPT_PATH The location of the checkpoint file.(Defalut:./checkpoints/) + --ckpt_path CKPT_PATH The location of the checkpoint file.(Default:./checkpoints/) --eval_file_name EVAL_FILE_NAME Eval output file.(Default:eval.og) --loss_file_name LOSS_FILE_NAME Loss output file.(Default:loss.log) ``` + ## [Training Process](#contents) ### [SingleDevice](#contents) To train and evaluate the model, command as follows: -``` + +```bash python train_and_eval.py ``` - ### [Distribute Training](#contents) + To train the model in data distributed training, command as follows: -``` + +```bash # configure environment path before training -bash run_multinpu_train.sh RANK_SIZE EPOCHS DATASET RANK_TABLE_FILE +bash run_multinpu_train.sh RANK_SIZE EPOCHS DATASET RANK_TABLE_FILE ``` + ## [Evaluation Process](#contents) + To evaluate the model, command as follows: -``` + +```bash python eval.py ``` @@ -151,7 +164,7 @@ python eval.py ## [Performance](#contents) -### Training Performance +### Training Performance | Parameters | Single
Ascend | Data-Parallel-8P | | ------------------------ | ------------------------------- | ------------------------------- | @@ -166,11 +179,9 @@ python eval.py | MAP Score | 0.6608 | 0.6590 | | Speed | 284 ms/step | 331 ms/step | | Loss | wide:0.415,deep:0.415 | wide:0.419, deep: 0.419 | -| Parms(M) | 349 | 349 | +| Params(M) | 349 | 349 | | Checkpoint for inference | 1.1GB(.ckpt file) | 1.1GB(.ckpt file) | - - All executable scripts can be found in [here](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/recommend/wide_and_deep_multitable/script) ### Evaluation Performance @@ -188,11 +199,11 @@ All executable scripts can be found in [here](https://gitee.com/mindspore/mindsp # [Description of Random Situation](#contents) There are three random situations: + - Shuffle of the dataset. - Initialization of some model weights. - Dropout operations. - # [ModelZoo Homepage](#contents) -Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). \ No newline at end of file +Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). diff --git a/model_zoo/official/recommend/wide_and_deep_multitable/README_CN.md b/model_zoo/official/recommend/wide_and_deep_multitable/README_CN.md index de630b57c6..84c500d8b4 100644 --- a/model_zoo/official/recommend/wide_and_deep_multitable/README_CN.md +++ b/model_zoo/official/recommend/wide_and_deep_multitable/README_CN.md @@ -38,7 +38,7 @@ Wide&Deep模型训练了宽线性模型和深度学习神经网络,结合了 # 环境要求 - 硬件(Ascend或GPU) - - 准备Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,申请通过即可获得资源。 + - 准备Ascend或GPU处理器搭建硬件环境。 - 框架 - [MindSpore](https://gitee.com/mindspore/mindspore) - 更多关于Mindspore的信息,请查看以下资源: @@ -129,7 +129,7 @@ optional arguments: --is_tf_dataset IS_TF_DATASET Whether the input is tfrecords. (Default:True) --dropout_flag Enable dropout.(Default:0) --output_path OUTPUT_PATH Deprecated - --ckpt_path CKPT_PATH The location of the checkpoint file.(Defalut:./checkpoints/) + --ckpt_path CKPT_PATH The location of the checkpoint file.(Default:./checkpoints/) --eval_file_name EVAL_FILE_NAME Eval output file.(Default:eval.og) --loss_file_name LOSS_FILE_NAME Loss output file.(Default:loss.log) ``` diff --git a/model_zoo/official/rl/dqn/README.md b/model_zoo/official/rl/dqn/README.md index b9a44bdbdd..340ba3a146 100644 --- a/model_zoo/official/rl/dqn/README.md +++ b/model_zoo/official/rl/dqn/README.md @@ -29,8 +29,8 @@ The overall network architecture of DQN is show below: ## [Requirements](#content) -- Hardware(Ascend/GPU/CPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. +- Hardware(Ascend/GPU) + - Prepare hardware environment with Ascend or GPU processor. 
- Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/research/audio/fcn-4/README.md b/model_zoo/research/audio/fcn-4/README.md index 5d0fe4ddff..096b2b6857 100644 --- a/model_zoo/research/audio/fcn-4/README.md +++ b/model_zoo/research/audio/fcn-4/README.md @@ -37,8 +37,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil ## [Environment Requirements](#contents) -- Hardware(Ascend - - If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. +- Hardware(Ascend) + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/research/cv/FaceAttribute/README.md b/model_zoo/research/cv/FaceAttribute/README.md index adeae9f938..1639182b86 100644 --- a/model_zoo/research/cv/FaceAttribute/README.md +++ b/model_zoo/research/cv/FaceAttribute/README.md @@ -86,7 +86,7 @@ We use about 91K face images as training dataset and 11K as evaluating dataset i # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/research/cv/FaceDetection/README.md b/model_zoo/research/cv/FaceDetection/README.md index a5f6f67db8..6bb6d18d62 100644 --- a/model_zoo/research/cv/FaceDetection/README.md +++ b/model_zoo/research/cv/FaceDetection/README.md @@ -70,7 +70,7 @@ We use about 13K images as training dataset and 3K as evaluating dataset in this # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/research/cv/FaceQualityAssessment/README.md b/model_zoo/research/cv/FaceQualityAssessment/README.md index d34337d8a4..6329083444 100644 --- a/model_zoo/research/cv/FaceQualityAssessment/README.md +++ b/model_zoo/research/cv/FaceQualityAssessment/README.md @@ -68,7 +68,7 @@ We use about 122K face images as training dataset and 2K as evaluating dataset i # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. 
Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/research/cv/FaceRecognition/README.md b/model_zoo/research/cv/FaceRecognition/README.md index 964668fb7b..2a8e7f340d 100644 --- a/model_zoo/research/cv/FaceRecognition/README.md +++ b/model_zoo/research/cv/FaceRecognition/README.md @@ -56,7 +56,7 @@ The directory structure is as follows: # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to get Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/research/cv/FaceRecognitionForTracking/README.md b/model_zoo/research/cv/FaceRecognitionForTracking/README.md index 2fe4d6c2ee..4c31a64dae 100644 --- a/model_zoo/research/cv/FaceRecognitionForTracking/README.md +++ b/model_zoo/research/cv/FaceRecognitionForTracking/README.md @@ -56,7 +56,7 @@ The directory structure is as follows: # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/research/cv/MaskedFaceRecognition/README.md b/model_zoo/research/cv/MaskedFaceRecognition/README.md index 6349d1dc4f..4e44edefde 100644 --- a/model_zoo/research/cv/MaskedFaceRecognition/README.md +++ b/model_zoo/research/cv/MaskedFaceRecognition/README.md @@ -66,7 +66,7 @@ The directory structure is as follows: ## [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. If you want to get Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/research/cv/centernet/README.md b/model_zoo/research/cv/centernet/README.md index 3ed5d6eac3..774cb21b4c 100644 --- a/model_zoo/research/cv/centernet/README.md +++ b/model_zoo/research/cv/centernet/README.md @@ -77,7 +77,7 @@ Dataset used: [COCO2017](https://cocodataset.org/) # [Environment Requirements](#contents) - Hardware(Ascend) - - Prepare hardware environment with Ascend processor. 
If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/research/cv/ghostnet/Readme.md b/model_zoo/research/cv/ghostnet/Readme.md index 091c240a4b..ade8c24936 100644 --- a/model_zoo/research/cv/ghostnet/Readme.md +++ b/model_zoo/research/cv/ghostnet/Readme.md @@ -5,14 +5,14 @@ - [Dataset](#dataset) - [Environment Requirements](#environment-requirements) - [Script Description](#script-description) - - [Script and Sample Code](#script-and-sample-code) - - [Training Process](#training-process) - - [Evaluation Process](#evaluation-process) - - [Evaluation](#evaluation) + - [Script and Sample Code](#script-and-sample-code) + - [Training Process](#training-process) + - [Evaluation Process](#evaluation-process) + - [Evaluation](#evaluation) - [Model Description](#model-description) - - [Performance](#performance) - - [Training Performance](#evaluation-performance) - - [Inference Performance](#evaluation-performance) + - [Performance](#performance) + - [Training Performance](#evaluation-performance) + - [Inference Performance](#evaluation-performance) - [Description of Random Situation](#description-of-random-situation) - [ModelZoo Homepage](#modelzoo-homepage) @@ -33,20 +33,20 @@ The overall network architecture of GhostNet is show below: Dataset used: [Oxford-IIIT Pet](https://www.robots.ox.ac.uk/~vgg/data/pets/) - Dataset size: 7049 colorful images in 1000 classes - - Train: 3680 images - - Test: 3369 images + - Train: 3680 images + - Test: 3369 images - Data format: RGB images. - - Note: Data will be processed in src/dataset.py + - Note: Data will be processed in src/dataset.py # [Environment Requirements](#contents) - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend or GPU. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU. 
- Framework - - [MindSpore](https://www.mindspore.cn/install/en) + - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) - - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Script description](#contents) @@ -67,6 +67,7 @@ Dataset used: [Oxford-IIIT Pet](https://www.robots.ox.ac.uk/~vgg/data/pets/) ``` ## [Training process](#contents) + To Be Done ## [Eval process](#contents) @@ -75,12 +76,11 @@ To Be Done After installing MindSpore via the official website, you can start evaluation as follows: - ### Launch -``` +```bash # infer example - + Ascend: python eval.py --model [ghostnet/ghostnet-600] --dataset_path ~/Pets/test.mindrecord --platform Ascend --checkpoint_path [CHECKPOINT_PATH] GPU: python eval.py --model [ghostnet/ghostnet-600] --dataset_path ~/Pets/test.mindrecord --platform GPU --checkpoint_path [CHECKPOINT_PATH] ``` @@ -89,7 +89,7 @@ After installing MindSpore via the official website, you can start evaluation as ### Result -``` +```bash result: {'acc': 0.8113927500681385} ckpt= ./ghostnet_nose_1x_pets.ckpt result: {'acc': 0.824475333878441} ckpt= ./ghostnet_1x_pets.ckpt result: {'acc': 0.8691741618969746} ckpt= ./ghostnet600M_pets.ckpt @@ -99,9 +99,10 @@ result: {'acc': 0.8691741618969746} ckpt= ./ghostnet600M_pets.ckpt ## [Performance](#contents) -#### Evaluation Performance +### Evaluation Performance + +#### GhostNet on ImageNet2012 -###### GhostNet on ImageNet2012 | Parameters | | | | -------------------------- | -------------------------------------- |---------------------------------- | | Model Version | GhostNet |GhostNet-600| @@ -112,7 +113,8 @@ result: {'acc': 0.8691741618969746} ckpt= ./ghostnet600M_pets.ckpt | FLOPs (M) | 142 | 591 | | Accuracy (Top1) | 73.9 |80.2 | -###### GhostNet on Oxford-IIIT Pet +#### GhostNet on Oxford-IIIT Pet + | Parameters | | | | -------------------------- | -------------------------------------- |---------------------------------- | | Model Version | GhostNet |GhostNet-600| @@ -123,7 +125,7 @@ result: {'acc': 0.8691741618969746} ckpt= ./ghostnet600M_pets.ckpt | FLOPs (M) | 140 | 590 | | Accuracy (Top1) | 82.4 |86.9 | -###### Comparison with other methods on Oxford-IIIT Pet +#### Comparison with other methods on Oxford-IIIT Pet |Model|FLOPs (M)|Latency (ms)*|Accuracy (Top1)| |-|-|-|-| diff --git a/model_zoo/research/cv/ghostnet_quant/Readme.md b/model_zoo/research/cv/ghostnet_quant/Readme.md index 209247869e..6d615203cd 100644 --- a/model_zoo/research/cv/ghostnet_quant/Readme.md +++ b/model_zoo/research/cv/ghostnet_quant/Readme.md @@ -6,14 +6,14 @@ - [Dataset](#dataset) - [Environment Requirements](#environment-requirements) - [Script Description](#script-description) - - [Script and Sample Code](#script-and-sample-code) - - [Training Process](#training-process) - - [Evaluation Process](#evaluation-process) - - [Evaluation](#evaluation) + - [Script and Sample Code](#script-and-sample-code) + - [Training Process](#training-process) + - [Evaluation Process](#evaluation-process) + - [Evaluation](#evaluation) - [Model Description](#model-description) - - [Performance](#performance) - - [Training Performance](#evaluation-performance) - - [Inference Performance](#evaluation-performance) 
+ - [Performance](#performance) + - [Training Performance](#evaluation-performance) + - [Inference Performance](#evaluation-performance) - [Description of Random Situation](#description-of-random-situation) - [ModelZoo Homepage](#modelzoo-homepage) @@ -38,20 +38,20 @@ The overall network architecture of GhostNet is show below: Dataset used: [Oxford-IIIT Pet](https://www.robots.ox.ac.uk/~vgg/data/pets/) - Dataset size: 7049 colorful images in 1000 classes - - Train: 3680 images - - Test: 3369 images + - Train: 3680 images + - Test: 3369 images - Data format: RGB images. - - Note: Data will be processed in src/dataset.py + - Note: Data will be processed in src/dataset.py # [Environment Requirements](#contents) - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU processor. - Framework - - [MindSpore](https://www.mindspore.cn/install/en) + - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: - - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) - - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) + - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) + - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) # [Script description](#contents) @@ -72,6 +72,7 @@ Dataset used: [Oxford-IIIT Pet](https://www.robots.ox.ac.uk/~vgg/data/pets/) ``` ## [Training process](#contents) + To Be Done ## [Eval process](#contents) @@ -80,12 +81,11 @@ To Be Done After installing MindSpore via the official website, you can start evaluation as follows: - ### Launch -``` +```bash # infer example - + Ascend: python eval.py --dataset_path ~/Pets/test.mindrecord --platform Ascend --checkpoint_path [CHECKPOINT_PATH] GPU: python eval.py --dataset_path ~/Pets/test.mindrecord --platform GPU --checkpoint_path [CHECKPOINT_PATH] ``` @@ -94,7 +94,7 @@ After installing MindSpore via the official website, you can start evaluation as ### Result -``` +```bash result: {'acc': 0.825} ckpt= ./ghostnet_1x_pets_int8.ckpt ``` @@ -102,9 +102,10 @@ result: {'acc': 0.825} ckpt= ./ghostnet_1x_pets_int8.ckpt ## [Performance](#contents) -#### Evaluation Performance +### Evaluation Performance + +#### GhostNet on ImageNet2012 -###### GhostNet on ImageNet2012 | Parameters | | | | -------------------------- | -------------------------------------- |---------------------------------- | | Model Version | GhostNet |GhostNet-int8| @@ -115,7 +116,8 @@ result: {'acc': 0.825} ckpt= ./ghostnet_1x_pets_int8.ckpt | FLOPs (M) | 142 | / | | Accuracy (Top1) | 73.9 | w/o finetune:72.2, w finetune:73.6 | -###### GhostNet on Oxford-IIIT Pet +#### GhostNet on Oxford-IIIT Pet + | Parameters | | | | -------------------------- | -------------------------------------- |---------------------------------- | | Model Version | GhostNet |GhostNet-int8| @@ -126,7 +128,6 @@ result: {'acc': 0.825} ckpt= ./ghostnet_1x_pets_int8.ckpt | FLOPs (M) | 140 | / | | Accuracy (Top1) | 82.4 | w/o finetune:81.66, w finetune:82.45 | - # [Description of Random Situation](#contents) In dataset.py, we set the seed inside “create_dataset" 
function. We also use random seed in train.py. diff --git a/model_zoo/research/cv/resnet50_adv_pruning/Readme.md b/model_zoo/research/cv/resnet50_adv_pruning/Readme.md index 9951bc3aac..981225af18 100644 --- a/model_zoo/research/cv/resnet50_adv_pruning/Readme.md +++ b/model_zoo/research/cv/resnet50_adv_pruning/Readme.md @@ -43,7 +43,7 @@ Dataset used: [Oxford-IIIT Pet](https://www.robots.ox.ac.uk/~vgg/data/pets/) # [Environment Requirements](#contents) - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/research/cv/squeezenet/README.md b/model_zoo/research/cv/squeezenet/README.md index 1e396dd8a9..22e0f31ebd 100644 --- a/model_zoo/research/cv/squeezenet/README.md +++ b/model_zoo/research/cv/squeezenet/README.md @@ -63,7 +63,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Environment Requirements](#contents) - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. Squeezenet training on GPU performs badly now, and it is still in research. + - Prepare hardware environment with Ascend or GPU processor. Squeezenet training on GPU does not perform well yet and is still under research. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: @@ -74,7 +74,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # [Quick Start](#contents) After installing MindSpore via the official website, you can start training and evaluation as follows: -- runing on Ascend +- running on Ascend ```bash # distributed training diff --git a/model_zoo/research/cv/ssd_ghostnet/README.md b/model_zoo/research/cv/ssd_ghostnet/README.md index 68fdc9678f..b473692569 100644 --- a/model_zoo/research/cv/ssd_ghostnet/README.md +++ b/model_zoo/research/cv/ssd_ghostnet/README.md @@ -22,7 +22,7 @@ Dataset used: [COCO2017]() # [Environment Requirements](#contents) - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: @@ -59,7 +59,7 @@ Dataset used: [COCO2017]() ``` 2. If your own dataset is used.
**Select dataset to other when run script.** - Organize the dataset infomation into a TXT file, each row in the file is as follows: + Organize the dataset information into a TXT file, each row in the file is as follows: ```python train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2 0,30,59,80,2 diff --git a/model_zoo/research/nlp/dscnn/README.md b/model_zoo/research/nlp/dscnn/README.md index b312c3e0fc..588f8c25b1 100644 --- a/model_zoo/research/nlp/dscnn/README.md +++ b/model_zoo/research/nlp/dscnn/README.md @@ -57,7 +57,7 @@ Dataset used: [Speech commands dataset version 2](https://arxiv.org/abs/1804.032 # [Environment Requirements](#contents) - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - Third party open source package(if have) @@ -204,8 +204,8 @@ Parameters for both training and evaluation can be set in config.py. for shell script: ```python - # sh srcipts/run_train_ascend.sh [device_id] - sh srcipts/run_train_ascend.sh 0 + # sh scripts/run_train_ascend.sh [device_id] + sh scripts/run_train_ascend.sh 0 ``` for python script: @@ -284,7 +284,6 @@ Parameters for both training and evaluation can be set in config.py. | Total time | 4 mins | | Parameters (K) | 500K | | Checkpoint for Fine tuning | 3.3M (.ckpt file) | -| Script | [Link]() | [Link]() | ### Inference Performance diff --git a/model_zoo/research/recommend/autodis/README.md b/model_zoo/research/recommend/autodis/README.md index d904ea0909..18e39c358e 100644 --- a/model_zoo/research/recommend/autodis/README.md +++ b/model_zoo/research/recommend/autodis/README.md @@ -37,7 +37,7 @@ AutoDis leverages a set of meta-embeddings for each numerical field, which are s # [Environment Requirements](#contents) - Hardware(Ascend/GPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: diff --git a/model_zoo/research/rl/ldp_linucb/README.md b/model_zoo/research/rl/ldp_linucb/README.md index 09150201e5..1d7f7d940d 100644 --- a/model_zoo/research/rl/ldp_linucb/README.md +++ b/model_zoo/research/rl/ldp_linucb/README.md @@ -21,7 +21,7 @@ Locally Differentially Private (LDP) LinUCB is a variant of LinUCB bandit algori # [Model Architecture](#contents) -The server interacts with users in rounds. For a coming user, the server first transfers the current model parameters to the user. In the user side, the model chooses an action based on the user feature to play (e.g., choose a movie to recommend), and observes a reward (or loss) value from the user (e.g., rating of the movie). Then we perturb the data to be transfered by adding Gaussian noise. Finally, the server receives the perturbed data and updates the model. 
Details can be found in the [original paper](https://arxiv.org/abs/2006.00701). +The server interacts with users in rounds. For each incoming user, the server first transfers the current model parameters to the user. On the user side, the model chooses an action based on the user feature to play (e.g., choose a movie to recommend), and observes a reward (or loss) value from the user (e.g., rating of the movie). Then we perturb the data to be transferred by adding Gaussian noise. Finally, the server receives the perturbed data and updates the model. Details can be found in the [original paper](https://arxiv.org/abs/2006.00701). # [Dataset](#contents) @@ -35,7 +35,7 @@ Dataset used: [MovieLens 100K](https://grouplens.org/datasets/movielens/100k/) # [Environment Requirements](#contents) - Hardware (Ascend/GPU) - - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the[application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. + - Prepare hardware environment with Ascend or GPU processor. - Framework - [MindSpore](https://www.mindspore.cn/install/en) - For more information, please check the resources below: @@ -54,7 +54,7 @@ Dataset used: [MovieLens 100K](https://grouplens.org/datasets/movielens/100k/) ├── ldp_linucb ├── README.md // descriptions about LDP LinUCB ├── scripts - │ ├── run_train_eval.sh // shell script for runing on Ascend + │ ├── run_train_eval.sh // shell script for running on Ascend ├── src │ ├── dataset.py // dataset for movielens │ ├── linucb.py // model @@ -124,7 +124,7 @@ The performance compared with optimal non-private regret O(sqrt(T)): # [Description of Random Situation](#contents) -In `train_eval.py`, we randomly sample a user at each round. We also add Gaussian noise to the date being transfered. +In `train_eval.py`, we randomly sample a user at each round. We also add Gaussian noise to the data being transferred. # [ModelZoo Homepage](#contents)
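The round-by-round flow described in the ldp_linucb README above (user-side perturbation of the LinUCB statistics with Gaussian noise, server-side aggregation and model refit) can be sketched in a few lines. The snippet below is only an illustrative NumPy sketch under assumed names: `perturb_statistics`, `server_update`, the noise scale `sigma`, and the regularizer `shift` are made up for the example and are not taken from `src/linucb.py` or `train_eval.py`.

```python
import numpy as np

def perturb_statistics(x, reward, sigma, rng):
    """User side (illustrative): form the LinUCB sufficient statistics for one
    round and add Gaussian noise before they leave the device (local DP)."""
    M = np.outer(x, x)                        # d x d outer product of the user feature
    b = reward * x                            # reward-weighted feature vector
    noise = rng.normal(0.0, sigma, size=M.shape)
    M_noisy = M + (noise + noise.T) / 2       # keep the perturbed matrix symmetric
    b_noisy = b + rng.normal(0.0, sigma, size=b.shape)
    return M_noisy, b_noisy

def server_update(V, u, M_noisy, b_noisy, shift=1.0):
    """Server side (illustrative): accumulate the perturbed statistics and refit
    theta with a shifted (regularized) solve so the design matrix stays invertible."""
    V = V + M_noisy
    u = u + b_noisy
    theta = np.linalg.solve(V + shift * np.eye(V.shape[0]), u)
    return V, u, theta

# Toy round with random data; sigma and shift are placeholder values.
rng = np.random.default_rng(0)
d = 8
V, u = np.zeros((d, d)), np.zeros(d)
x = rng.normal(size=d)
V, u, theta = server_update(V, u, *perturb_statistics(x, reward=1.0, sigma=0.1, rng=rng))
print(theta.round(3))
```

The sketch only mirrors the data flow stated in the README; the actual model and training loop live in `src/linucb.py` and `train_eval.py` of the ldp_linucb model.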