From: @oacjiewen
Reviewed-by: @wuxuejian, @linqingke
Signed-off-by: @wuxuejian
Tags: v1.2.0-rc1
@@ -41,7 +41,7 @@ Dataset used:
 # [环境要求](#contents)
 - 硬件(Ascend/GPU)
-    - 需要准备具有Ascend或GPU处理能力的硬件环境. 如需使用Ascend,可以发送 [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) 到ascend@huawei.com。一旦批准,你就可以使用此资源
+    - 需要准备具有Ascend或GPU处理能力的硬件环境.
 - 框架
     - [MindSpore](https://www.mindspore.cn/install/en)
 - 如需获取更多信息,请查看如下链接:

@@ -82,7 +82,7 @@ other datasets need to use the same format as WiderFace.
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -96,7 +96,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
@@ -97,7 +97,7 @@ python src/preprocess_dataset.py
 - 硬件(Ascend)
-    - 准备Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+    - 准备Ascend或GPU处理器搭建硬件环境。
 - 框架

@@ -58,7 +58,7 @@ We provide `convert_ic03.py`, `convert_iiit5k.py`, `convert_svt.py` as exmples f
 ## [Environment Requirements](#contents)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. You will be able to have access to related resources once approved.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information, please check the resources below:

@@ -38,7 +38,7 @@ For training and evaluation, we use the French Street Name Signs (FSNS) released
 ## [Environment Requirements](#contents)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. You will be able to have access to related resources once approved.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information, please check the resources below:
@@ -57,7 +57,7 @@ Here we used 6 datasets for training, and 1 datasets for Evaluation.
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -74,7 +74,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -89,7 +89,7 @@ Pascal VOC数据集和语义边界数据集(Semantic Boundaries Dataset,SBD
 # 环境要求
 - 硬件(Ascend)
-    - 准备Ascend处理器搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+    - 准备Ascend处理器搭建硬件环境。
 - 框架
     - [MindSpore](https://www.mindspore.cn/install)
 - 如需查看详情,请参见如下资源:
@@ -49,7 +49,7 @@ Here we used 4 datasets for training, and 1 datasets for Evaluation.
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -78,7 +78,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 # [Environment Requirements](#contents)
 - Hardware(Ascend/GPU)
-    - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend or GPU processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -82,12 +82,12 @@ DenseNet-100使用的数据集: Cifar-10
 # 环境要求
 - 硬件(Ascend/GPU)
-    - 准备Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+    - 准备Ascend或GPU处理器搭建硬件环境。
 - 框架
-    - [MindSpore](https://www.mindspore.cn/install)
+    - [MindSpore](https://www.mindspore.cn/install)
 - 如需查看详情,请参见如下资源:
-    - [MindSpore教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
-    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
+    - [MindSpore教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
+    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
 # 快速入门
@@ -70,7 +70,7 @@ The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advan
 To run the python scripts in the repository, you need to prepare the environment as follow:
 - Hardware
-    - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to [ascend@huawei.com](mailto:ascend@huawei.com). Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend or GPU processor.
 - Python and dependencies
     - Python3.7
     - Mindspore 1.1.0

@@ -48,7 +48,7 @@ Dataset used: [COCO2017](<https://cocodataset.org/>)
 # Environment Requirements
 - Hardware(Ascend/GPU)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Docker base image
     - [Ascend Hub](ascend.huawei.com/ascendhub/#/home)

@@ -49,7 +49,7 @@ Faster R-CNN是一个两阶段目标检测网络,该网络采用RPN,可以
 # 环境要求
 - 硬件(Ascend/GPU)
-    - 使用Ascend处理器来搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+    - 使用Ascend处理器来搭建硬件环境。
 - 获取基础镜像
     - [Ascend Hub](https://ascend.huawei.com/ascendhub/#/home)
@@ -68,7 +68,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 # [Environment Requirements](#contents)
 - Hardware(Ascend/GPU)
-    - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend or GPU processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -75,7 +75,7 @@ GoogleNet由多个inception模块串联起来,可以更加深入。 降维的
 # 环境要求
 - 硬件(Ascend/GPU)
-    - 使用Ascend或GPU处理器来搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+    - 使用Ascend或GPU处理器来搭建硬件环境。
 - 框架
     - [MindSpore](https://www.mindspore.cn/install/en)
 - 如需查看详情,请参见如下资源:

@@ -59,7 +59,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
@@ -70,7 +70,7 @@ InceptionV3的总体网络架构如下:
 # 环境要求
 - 硬件(Ascend)
-    - 使用Ascend来搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+    - 使用Ascend来搭建硬件环境。
 - 框架
     - [MindSpore](https://www.mindspore.cn/install/en)
 - 如需查看详情,请参见如下资源:

@@ -51,7 +51,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 # [Environment Requirements](#contents)
 - Hardware(Ascend/GPU)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
     - or prepare GPU processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)

@@ -53,7 +53,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - Docker base image
@@ -55,7 +55,7 @@ MaskRCNN是一个两级目标检测网络,作为FasterRCNN的扩展模型,
 # 环境要求
 - 硬件(昇腾处理器)
-    - 采用昇腾处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+    - 采用昇腾处理器搭建硬件环境。
 - 框架
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - 获取基础镜像

@@ -54,7 +54,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information, please check the resources below:

@@ -64,7 +64,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 ## Environment Requirements
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
@@ -50,7 +50,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 # [Environment Requirements](#contents)
 - Hardware(Ascend/GPU/CPU)
-    - Prepare hardware environment with Ascend, GPU or CPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend, GPU or CPU processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -56,7 +56,7 @@ MobileNetV2总体网络架构如下:
 # 环境要求
 - 硬件(Ascend/GPU/CPU)
-    - 使用Ascend、GPU或CPU处理器来搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+    - 使用Ascend、GPU或CPU处理器来搭建硬件环境。
 - 框架
     - [MindSpore](https://www.mindspore.cn/install)
 - 如需查看详情,请参见如下资源:

@@ -65,7 +65,7 @@ MobileNetV2总体网络架构如下:
 # 环境要求
 - 硬件:昇腾处理器(Ascend)
-    - 使用昇腾处理器来搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+    - 使用昇腾处理器来搭建硬件环境。
 - 框架
     - [MindSpore](https://www.mindspore.cn/install)
 - 如需查看详情,请参见如下资源

@@ -52,7 +52,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 # [Environment Requirements](#contents)
 - Hardware:Ascend
-    - Prepare hardware environment with Ascend. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below
@@ -75,7 +75,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 # [Environment Requirements](#contents)
 - Hardware (Ascend)
-    - Prepare hardware environment with Ascend. If you want to try, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - Download the VGG19 model of the MindSpore version:

@@ -46,7 +46,7 @@ A testing set containing about 2000 readable words
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](http://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -47,7 +47,7 @@
 # 环境要求
 - 硬件:昇腾处理器(Ascend)
-    - 使用Ascend处理器来搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+    - 使用Ascend处理器来搭建硬件环境。
 - 框架
     - [MindSpore](https://www.mindspore.cn/install)
@@ -82,7 +82,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 # [Environment Requirements](#contents)
 - Hardware(Ascend/GPU/CPU)
-    - Prepare hardware environment with Ascend, GPU or CPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend, GPU or CPU processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -84,8 +84,8 @@ ResNet的总体网络架构如下:
 # 环境要求
-- 硬件(Ascend/GPU)
-    - 准备Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+- 硬件(Ascend/GPU/CPU)
+    - 准备Ascend、GPU或CPU处理器搭建硬件环境。
 - 框架
     - [MindSpore](https://www.mindspore.cn/install/en)
 - 如需查看详情,请参见如下资源:
@@ -35,7 +35,7 @@ ResNet152的总体网络架构如下:[链接](https://arxiv.org/pdf/1512.03385
 # 环境要求
 - 硬件
-    - 准备Ascend处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+    - 准备Ascend处理器搭建硬件环境。
 - 框架
     - [MindSpore](https://www.mindspore.cn/install/en)
 - 如需查看详情,请参见如下资源:

@@ -59,12 +59,12 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 # [Environment Requirements](#contents)
 - Hardware:Ascend
-    - Prepare hardware environment with Ascend. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend.
 - Framework
-    - [MindSpore](https://www.mindspore.cn/install/en)
+    - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
-    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
 # [Script description](#contents)
@@ -64,16 +64,12 @@ ResNet-50总体网络架构如下:
 # 环境要求
 - 硬件:昇腾处理器(Ascend)
-    - 使用昇腾处理器来搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+    - 使用昇腾处理器来搭建硬件环境。
 - 框架
-    - [MindSpore](https://www.mindspore.cn/install)
+    - [MindSpore](https://www.mindspore.cn/install)
 - 如需查看详情,请参见如下资源:
-    - [MindSpore教程](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+    - [MindSpore教程](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
 ## 脚本说明

@@ -52,7 +52,7 @@ The classical first-order optimization algorithm, such as SGD, has a small amoun
 ## Environment Requirements
 - Hardware(Ascend/GPU)
-    - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend or GPU processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)

@@ -57,7 +57,7 @@ ResNet-50的总体网络架构如下:[链接](https://arxiv.org/pdf/1512.03385
 ## 环境要求
 - 硬件:昇腾处理器(Ascend或GPU)
-    - 使用Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) 至ascend@huawei.com,审核通过即可获得资源。
+    - 使用Ascend或GPU处理器搭建硬件环境。
 - 框架
     - [MindSpore](https://www.mindspore.cn/install)
@@ -1,4 +1,4 @@
-# ResNext101-64x4d for MindSpore
+# ResNext101-64x4d
 本仓库提供了ResNeXt101-64x4d模型的训练脚本和超参配置,以达到论文中的准确性。

@@ -26,7 +26,7 @@ ResNeXt是ResNet网络的改进版本,比ResNet的网络多了块多了cardina
 ### 默认设置
-以下各节介绍ResNext50模型的默认配置和超参数。
+以下各节介绍ResNext101模型的默认配置和超参数。
 #### 优化器

@@ -65,7 +65,7 @@ ResNeXt是ResNet网络的改进版本,比ResNet的网络多了块多了cardina
 ## 快速入门指南
-目录说明,代码参考了Modelzoo上的[ResNext50_for_MindSpore](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnext50)
+目录说明,代码参考了Modelzoo上的[ResNext50](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnext50)
 ```path
 .
@@ -53,7 +53,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 # [Environment Requirements](#contents)
 - Hardware(Ascend/GPU)
-    - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend or GPU processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -58,7 +58,7 @@ ResNeXt整体网络架构如下:
 # 环境要求
 - 硬件(Ascend或GPU)
-    - 准备Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
+    - 准备Ascend或GPU处理器搭建硬件环境。
 - 框架
     - [MindSpore](https://www.mindspore.cn/install)
 - 如需查看详情,请参见如下资源:

@@ -58,7 +58,7 @@ MSCOCO2017
 ## [环境要求](#content)
 - 硬件(Ascend)
-    - 使用Ascend处理器准备硬件环境。如果您想使用Ascend,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com。一旦获得批准,您就可以获取资源。
+    - 使用Ascend处理器准备硬件环境。
 - 架构
     - [MindSpore](https://www.mindspore.cn/install/en)
 - 想要获取更多信息,请检查以下资源:
| @@ -42,7 +42,7 @@ ShuffleNetV1的核心部分被分成三个阶段,每个阶段重复堆积了 | |||||
| # 环境要求 | # 环境要求 | ||||
| - 硬件(Ascend) | - 硬件(Ascend) | ||||
| - 使用Ascend来搭建硬件环境。如需试用Ascend处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 | |||||
| - 使用Ascend来搭建硬件环境。 | |||||
| - 框架 | - 框架 | ||||
| - [MindSpore](https://www.mindspore.cn/install) | - [MindSpore](https://www.mindspore.cn/install) | ||||
| - 如需查看详情,请参见如下资源: | - 如需查看详情,请参见如下资源: | ||||
| @@ -60,7 +60,7 @@ The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advan | |||||
| To run the python scripts in the repository, you need to prepare the environment as follows: | To run the python scripts in the repository, you need to prepare the environment as follows: | ||||
| - Hardware | - Hardware | ||||
| - Prepare hardware environment with Ascend. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to [ascend@huawei.com](mailto:ascend@huawei.com). Once approved, you can get the resources. | |||||
| - Prepare hardware environment with Ascend. | |||||
| - Python and dependencies | - Python and dependencies | ||||
| - python 3.7 | - python 3.7 | ||||
| - mindspore 1.0.1 | - mindspore 1.0.1 | ||||
| @@ -63,7 +63,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil | |||||
| # [Environment Requirements](#contents) | # [Environment Requirements](#contents) | ||||
| - Hardware(Ascend/CPU) | - Hardware(Ascend/CPU) | ||||
| - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. Squeezenet training on GPU performs badly now, and it is still in research. See [squeezenet in research](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/squeezenet) to get up-to-date details. | |||||
| - Prepare hardware environment with Ascend or CPU processor. SqueezeNet training on GPU currently performs poorly and is still under research. See [squeezenet in research](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/squeezenet) for up-to-date details. | |||||
| - Framework | - Framework | ||||
| - [MindSpore](https://www.mindspore.cn/install/en) | - [MindSpore](https://www.mindspore.cn/install/en) | ||||
| - For more information, please check the resources below: | - For more information, please check the resources below: | ||||
| @@ -56,7 +56,7 @@ Dataset used can refer to [paper](<https://ieeexplore.ieee.org/abstract/document | |||||
| # [Environment Requirements](#contents) | # [Environment Requirements](#contents) | ||||
| - Hardware(Ascend) | - Hardware(Ascend) | ||||
| - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) form to ascend@huawei.com. | |||||
| - Prepare hardware environment with Ascend or GPU processor. | |||||
| - Framework | - Framework | ||||
| - [MindSpore](https://www.mindspore.cn/install/en) | - [MindSpore](https://www.mindspore.cn/install/en) | ||||
| - For more information, please check the resources below: | - For more information, please check the resources below: | ||||
| @@ -64,7 +64,7 @@ Tiny-DarkNet是Joseph Chet Redmon等人提出的一个16层的针对于经典的 | |||||
| # [环境要求](#目录) | # [环境要求](#目录) | ||||
| - 硬件(Ascend) | - 硬件(Ascend) | ||||
| - 请准备具有Ascend处理器的硬件环境.如果想使用Ascend资源,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) 至ascend@huawei.com. 当收到许可即可使用Ascend资源. | |||||
| - 请准备具有Ascend处理器的硬件环境。 | |||||
| - 框架 | - 框架 | ||||
| - [MindSpore](https://www.mindspore.cn/install/en) | - [MindSpore](https://www.mindspore.cn/install/en) | ||||
| - 更多的信息请访问以下链接: | - 更多的信息请访问以下链接: | ||||
| @@ -58,7 +58,7 @@ We also support cell nuclei dataset which is used in [Unet++ original paper](htt | |||||
| ## [Environment Requirements](#contents) | ## [Environment Requirements](#contents) | ||||
| - Hardware(Ascend) | - Hardware(Ascend) | ||||
| - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. | |||||
| - Prepare hardware environment with Ascend processor. | |||||
| - Framework | - Framework | ||||
| - [MindSpore](https://www.mindspore.cn/install/en) | - [MindSpore](https://www.mindspore.cn/install/en) | ||||
| - For more information, please check the resources below: | - For more information, please check the resources below: | ||||
| @@ -62,7 +62,7 @@ UNet++是U-Net的增强版本,使用了新的跨层链接方式和深层监督 | |||||
| ## 环境要求 | ## 环境要求 | ||||
| - 硬件(Ascend) | - 硬件(Ascend) | ||||
| - 准备Ascend处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 | |||||
| - 准备Ascend处理器搭建硬件环境。 | |||||
| - 框架 | - 框架 | ||||
| - [MindSpore](https://www.mindspore.cn/install) | - [MindSpore](https://www.mindspore.cn/install) | ||||
| - 如需查看详情,请参见如下资源: | - 如需查看详情,请参见如下资源: | ||||
| @@ -22,42 +22,44 @@ | |||||
| - [Description of Random Situation](#description-of-random-situation) | - [Description of Random Situation](#description-of-random-situation) | ||||
| - [ModelZoo Homepage](#modelzoo-homepage) | - [ModelZoo Homepage](#modelzoo-homepage) | ||||
| # [VGG Description](#contents) | # [VGG Description](#contents) | ||||
| VGG, a very deep convolutional network for large-scale image recognition, was proposed in 2014 and won 1st place in the object localization task and 2nd place in the image classification task in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). | VGG, a very deep convolutional network for large-scale image recognition, was proposed in 2014 and won 1st place in the object localization task and 2nd place in the image classification task in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). | ||||
| [Paper](): Simonyan K, zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition[J]. arXiv preprint arXiv:1409.1556, 2014. | |||||
| [Paper](https://arxiv.org/abs/1409.1556): Simonyan K, zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition[J]. arXiv preprint arXiv:1409.1556, 2014. | |||||
| # [Model Architecture](#contents) | # [Model Architecture](#contents) | ||||
| The VGG16 network mainly consists of several basic modules (each containing convolution and pooling layers) followed by three consecutive dense layers. | The VGG16 network mainly consists of several basic modules (each containing convolution and pooling layers) followed by three consecutive dense layers. | ||||
| The basic modules are built from operations such as **3×3 convolution** and **2×2 max pooling**. | The basic modules are built from operations such as **3×3 convolution** and **2×2 max pooling**. | ||||
| # [Dataset](#contents) | # [Dataset](#contents) | ||||
| Note that you can run the scripts based on the dataset mentioned in original paper or widely used in relevant domain/network architecture. In the following sections, we will introduce how to run the scripts using the related dataset below. | Note that you can run the scripts based on the dataset mentioned in original paper or widely used in relevant domain/network architecture. In the following sections, we will introduce how to run the scripts using the related dataset below. | ||||
| #### Dataset used: [CIFAR-10](<http://www.cs.toronto.edu/~kriz/cifar.html>) | |||||
| ## Dataset used: [CIFAR-10](<http://www.cs.toronto.edu/~kriz/cifar.html>) | |||||
| - CIFAR-10 Dataset size:175M,60,000 32*32 colorful images in 10 classes | - CIFAR-10 Dataset size:175M,60,000 32*32 colorful images in 10 classes | ||||
| - Train:146M,50,000 images | - Train:146M,50,000 images | ||||
| - Test:29.3M,10,000 images | - Test:29.3M,10,000 images | ||||
| - Data format: binary files | |||||
| - Data format: binary files | |||||
| - Note: Data will be processed in src/dataset.py | - Note: Data will be processed in src/dataset.py | ||||
| #### Dataset used: [ImageNet2012](http://www.image-net.org/) | |||||
| ## Dataset used: [ImageNet2012](http://www.image-net.org/) | |||||
| - Dataset size: ~146G, 1.28 million colorful images in 1000 classes | - Dataset size: ~146G, 1.28 million colorful images in 1000 classes | ||||
| - Train: 140G, 1,281,167 images | - Train: 140G, 1,281,167 images | ||||
| - Test: 6.4G, 50, 000 images | - Test: 6.4G, 50, 000 images | ||||
| - Data format: RGB images | |||||
| - Data format: RGB images | |||||
| - Note: Data will be processed in src/dataset.py | - Note: Data will be processed in src/dataset.py | ||||
| #### Dataset organize way | |||||
| ## Dataset organize way | |||||
| CIFAR-10 | CIFAR-10 | ||||
| > Unzip the CIFAR-10 dataset to any path you want and the folder structure should be as follows: | > Unzip the CIFAR-10 dataset to any path you want and the folder structure should be as follows: | ||||
| > ``` | |||||
| > | |||||
| > ```bash | |||||
| > . | > . | ||||
| > ├── cifar-10-batches-bin # train dataset | > ├── cifar-10-batches-bin # train dataset | ||||
| > └── cifar-10-verify-bin # infer dataset | > └── cifar-10-verify-bin # infer dataset | ||||
| @@ -67,39 +69,37 @@ Note that you can run the scripts based on the dataset mentioned in original pap | |||||
| > Unzip the ImageNet2012 dataset to any path you want and the folder should include train and eval dataset as follows: | > Unzip the ImageNet2012 dataset to any path you want and the folder should include train and eval dataset as follows: | ||||
| > | > | ||||
| > ``` | |||||
| > ```bash | |||||
| > . | > . | ||||
| > └─dataset | > └─dataset | ||||
| > ├─ilsvrc # train dataset | > ├─ilsvrc # train dataset | ||||
| > └─validation_preprocess # evaluate dataset | > └─validation_preprocess # evaluate dataset | ||||
| > ``` | > ``` | ||||
| # [Features](#contents) | # [Features](#contents) | ||||
| ## Mixed Precision | ## Mixed Precision | ||||
| The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. | |||||
| The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. | |||||
| For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and searching for 'reduce precision'. | For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and searching for 'reduce precision'. | ||||
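The gradient-underflow problem that mixed-precision training has to handle can be seen with a small NumPy sketch. This is illustrative only: it uses NumPy rather than the MindSpore AMP API, and the loss-scale factor of 1024 is an arbitrary example value.

```python
import numpy as np

# A tiny gradient value that FP32 represents but FP16 cannot:
grad = 1e-8
fp16_grad = np.float16(grad)   # underflows to 0.0 -- the update is lost
fp32_grad = np.float32(grad)   # preserved in single precision

# Loss scaling: multiply before casting to FP16, divide after.
scale = 1024.0
scaled = np.float16(grad * scale)          # now representable in FP16
recovered = np.float32(scaled) / scale     # close to the original 1e-8

print(fp16_grad, fp32_grad, recovered)
```

This is why mixed-precision frameworks keep a master copy of parameters in FP32 and apply a loss scale around the FP16 backward pass.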
| # [Environment Requirements](#contents) | # [Environment Requirements](#contents) | ||||
| - Hardware(Ascend/GPU) | - Hardware(Ascend/GPU) | ||||
| - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources. | |||||
| - Prepare hardware environment with Ascend or GPU processor. | |||||
| - Framework | - Framework | ||||
| - [MindSpore](https://www.mindspore.cn/install/en) | |||||
| - [MindSpore](https://www.mindspore.cn/install/en) | |||||
| - For more information, please check the resources below: | - For more information, please check the resources below: | ||||
| - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) | |||||
| - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) | |||||
| - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html) | |||||
| - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html) | |||||
| # [Quick Start](#contents) | # [Quick Start](#contents) | ||||
| After installing MindSpore via the official website, you can start training and evaluation as follows: | After installing MindSpore via the official website, you can start training and evaluation as follows: | ||||
| - Running on Ascend | - Running on Ascend | ||||
| ```python | ```python | ||||
| # run training example | # run training example | ||||
| python train.py --data_path=[DATA_PATH] --device_id=[DEVICE_ID] > output.train.log 2>&1 & | python train.py --data_path=[DATA_PATH] --device_id=[DEVICE_ID] > output.train.log 2>&1 & | ||||
| @@ -110,12 +110,14 @@ sh run_distribute_train.sh [RANL_TABLE_JSON] [DATA_PATH] | |||||
| # run evaluation example | # run evaluation example | ||||
| python eval.py --data_path=[DATA_PATH] --pre_trained=[PRE_TRAINED] > output.eval.log 2>&1 & | python eval.py --data_path=[DATA_PATH] --pre_trained=[PRE_TRAINED] > output.eval.log 2>&1 & | ||||
| ``` | ``` | ||||
| For distributed training, a hccl configuration file with JSON format needs to be created in advance. | For distributed training, a hccl configuration file with JSON format needs to be created in advance. | ||||
| Please follow the instructions in the link below: | Please follow the instructions in the link below: | ||||
| https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools | |||||
| <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools> | |||||
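For reference, a single-server rank table generated by the hccl_tools utility has roughly the following shape. All IP addresses and IDs below are illustrative placeholders based on the tool's output format, not values to copy; regenerate the file with the linked script rather than editing it by hand.

```json
{
    "version": "1.0",
    "server_count": "1",
    "server_list": [
        {
            "server_id": "10.155.111.140",
            "device": [
                {"device_id": "0", "device_ip": "192.1.27.6", "rank_id": "0"},
                {"device_id": "1", "device_ip": "192.2.27.6", "rank_id": "1"}
            ],
            "host_nic_ip": "reserve"
        }
    ],
    "status": "completed"
}
```

The `device` list is truncated to two entries here; an 8p run would list all eight devices with consecutive `rank_id` values.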
| - Running on GPU | - Running on GPU | ||||
| ``` | |||||
| ```bash | |||||
| # run training example | # run training example | ||||
| python train.py --device_target="GPU" --device_id=[DEVICE_ID] --dataset=[DATASET_TYPE] --data_path=[DATA_PATH] > output.train.log 2>&1 & | python train.py --device_target="GPU" --device_id=[DEVICE_ID] --dataset=[DATASET_TYPE] --data_path=[DATA_PATH] > output.train.log 2>&1 & | ||||
| @@ -130,13 +132,12 @@ python eval.py --device_target="GPU" --device_id=[DEVICE_ID] --dataset=[DATASET_ | |||||
| ## [Script and Sample Code](#contents) | ## [Script and Sample Code](#contents) | ||||
| ``` | |||||
| ```bash | |||||
| ├── model_zoo | ├── model_zoo | ||||
| ├── README.md // descriptions about all the models | ├── README.md // descriptions about all the models | ||||
| ├── vgg16 | |||||
| ├── vgg16 | |||||
| ├── README.md // descriptions about googlenet | ├── README.md // descriptions about googlenet | ||||
| ├── scripts | |||||
| ├── scripts | |||||
| │ ├── run_distribute_train.sh // shell script for distributed training on Ascend | │ ├── run_distribute_train.sh // shell script for distributed training on Ascend | ||||
| │ ├── run_distribute_train_gpu.sh // shell script for distributed training on GPU | │ ├── run_distribute_train_gpu.sh // shell script for distributed training on GPU | ||||
| ├── src | ├── src | ||||
| @@ -146,7 +147,7 @@ python eval.py --device_target="GPU" --device_id=[DEVICE_ID] --dataset=[DATASET_ | |||||
| │ │ ├── util.py // util function | │ │ ├── util.py // util function | ||||
| │ │ ├── var_init.py // network parameter init method | │ │ ├── var_init.py // network parameter init method | ||||
| │ ├── config.py // parameter configuration | │ ├── config.py // parameter configuration | ||||
| │ ├── crossentropy.py // loss caculation | |||||
| │ ├── crossentropy.py // loss calculation | |||||
| │ ├── dataset.py // creating dataset | │ ├── dataset.py // creating dataset | ||||
| │ ├── linear_warmup.py // linear learning rate | │ ├── linear_warmup.py // linear learning rate | ||||
| │ ├── warmup_cosine_annealing_lr.py // cosine annealing learning rate | │ ├── warmup_cosine_annealing_lr.py // cosine annealing learning rate | ||||
| @@ -159,7 +160,8 @@ python eval.py --device_target="GPU" --device_id=[DEVICE_ID] --dataset=[DATASET_ | |||||
| ## [Script Parameters](#contents) | ## [Script Parameters](#contents) | ||||
| ### Training | ### Training | ||||
| ``` | |||||
| ```bash | |||||
| usage: train.py [--device_target TARGET][--data_path DATA_PATH] | usage: train.py [--device_target TARGET][--data_path DATA_PATH] | ||||
| [--dataset DATASET_TYPE][--is_distributed VALUE] | [--dataset DATASET_TYPE][--is_distributed VALUE] | ||||
| [--device_id DEVICE_ID][--pre_trained PRE_TRAINED] | [--device_id DEVICE_ID][--pre_trained PRE_TRAINED] | ||||
| @@ -179,7 +181,7 @@ parameters/options: | |||||
| ### Evaluation | ### Evaluation | ||||
| ``` | |||||
| ```bash | |||||
| usage: eval.py [--device_target TARGET][--data_path DATA_PATH] | usage: eval.py [--device_target TARGET][--data_path DATA_PATH] | ||||
| [--dataset DATASET_TYPE][--pre_trained PRE_TRAINED] | [--dataset DATASET_TYPE][--pre_trained PRE_TRAINED] | ||||
| [--device_id DEVICE_ID] | [--device_id DEVICE_ID] | ||||
| @@ -198,7 +200,7 @@ Parameters for both training and evaluation can be set in config.py. | |||||
| - config for vgg16, CIFAR-10 dataset | - config for vgg16, CIFAR-10 dataset | ||||
| ``` | |||||
| ```bash | |||||
| "num_classes": 10, # dataset class num | "num_classes": 10, # dataset class num | ||||
| "lr": 0.01, # learning rate | "lr": 0.01, # learning rate | ||||
| "lr_init": 0.01, # initial learning rate | "lr_init": 0.01, # initial learning rate | ||||
| @@ -218,15 +220,15 @@ Parameters for both training and evaluation can be set in config.py. | |||||
| "pad_mode": 'same', # pad mode for conv2d | "pad_mode": 'same', # pad mode for conv2d | ||||
| "padding": 0, # padding value for conv2d | "padding": 0, # padding value for conv2d | ||||
| "has_bias": False, # whether has bias in conv2d | "has_bias": False, # whether has bias in conv2d | ||||
| "batch_norm": True, # wether has batch_norm in conv2d | |||||
| "batch_norm": True, # whether has batch_norm in conv2d | |||||
| "keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoint | "keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoint | ||||
| "initialize_mode": "XavierUniform", # conv2d init mode | "initialize_mode": "XavierUniform", # conv2d init mode | ||||
| "has_dropout": True # wether using Dropout layer | |||||
| "has_dropout": True # whether using Dropout layer | |||||
| ``` | ``` | ||||
| - config for vgg16, ImageNet2012 dataset | - config for vgg16, ImageNet2012 dataset | ||||
| ``` | |||||
| ```bash | |||||
| "num_classes": 1000, # dataset class num | "num_classes": 1000, # dataset class num | ||||
| "lr": 0.01, # learning rate | "lr": 0.01, # learning rate | ||||
| "lr_init": 0.01, # initial learning rate | "lr_init": 0.01, # initial learning rate | ||||
| @@ -246,10 +248,10 @@ Parameters for both training and evaluation can be set in config.py. | |||||
| "pad_mode": 'pad', # pad mode for conv2d | "pad_mode": 'pad', # pad mode for conv2d | ||||
| "padding": 1, # padding value for conv2d | "padding": 1, # padding value for conv2d | ||||
| "has_bias": True, # whether has bias in conv2d | "has_bias": True, # whether has bias in conv2d | ||||
| "batch_norm": False, # wether has batch_norm in conv2d | |||||
| "batch_norm": False, # whether has batch_norm in conv2d | |||||
| "keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoint | "keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoint | ||||
| "initialize_mode": "KaimingNormal", # conv2d init mode | "initialize_mode": "KaimingNormal", # conv2d init mode | ||||
| "has_dropout": True # wether using Dropout layer | |||||
| "has_dropout": True # whether using Dropout layer | |||||
| ``` | ``` | ||||
| ## [Training Process](#contents) | ## [Training Process](#contents) | ||||
| @@ -259,15 +261,18 @@ Parameters for both training and evaluation can be set in config.py. | |||||
| #### Run vgg16 on Ascend | #### Run vgg16 on Ascend | ||||
| - Training with a single device (1p), using the CIFAR-10 dataset by default | - Training with a single device (1p), using the CIFAR-10 dataset by default | ||||
| ```bash | |||||
| python train.py --data_path=your_data_path --device_id=6 > out.train.log 2>&1 & | |||||
| ``` | ``` | ||||
| python train.py --data_path=your_data_path --device_id=6 > out.train.log 2>&1 & | |||||
| ``` | |||||
| The python command above runs in the background; you can view the results through the file `out.train.log`. | The python command above runs in the background; you can view the results through the file `out.train.log`. | ||||
| After training, you will get checkpoint files under the specified ckpt_path (./output by default). | After training, you will get checkpoint files under the specified ckpt_path (./output by default). | ||||
| You will get the loss values as follows: | You will get the loss values as follows: | ||||
| ``` | |||||
| ```bash | |||||
| # grep "loss is " output.train.log | # grep "loss is " output.train.log | ||||
| epoch: 1 step: 781, loss is 2.093086 | epoch: 1 step: 781, loss is 2.093086 | ||||
| epoch: 2 step: 781, loss is 1.827582 | epoch: 2 step: 781, loss is 1.827582 | ||||
| @@ -275,13 +280,16 @@ epcoh: 2 step: 781, loss is 1.827582 | |||||
| ``` | ``` | ||||
| - Distributed Training | - Distributed Training | ||||
| ``` | |||||
| ```bash | |||||
| sh run_distribute_train.sh rank_table.json your_data_path | sh run_distribute_train.sh rank_table.json your_data_path | ||||
| ``` | ``` | ||||
| The above shell script will run distributed training in the background; you can view the results through the file `train_parallel[X]/log`. | The above shell script will run distributed training in the background; you can view the results through the file `train_parallel[X]/log`. | ||||
| You will get the loss values as follows: | You will get the loss values as follows: | ||||
| ``` | |||||
| ```bash | |||||
| # grep "result: " train_parallel*/log | # grep "result: " train_parallel*/log | ||||
| train_parallel0/log:epoch: 1 step: 97, loss is 1.9060308 | train_parallel0/log:epoch: 1 step: 97, loss is 1.9060308 | ||||
| train_parallel0/log:epoch: 2 step: 97, loss is 1.6003821 | train_parallel0/log:epoch: 2 step: 97, loss is 1.6003821 | ||||
| train_parallel1/log:epoch: 2 step: 97, loss is 1.7133579 | train_parallel1/log:epoch: 2 step: 97, loss is 1.7133579 | ||||
| ... | ... | ||||
| ... | ... | ||||
| ``` | ``` | ||||
| > About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html). | |||||
| > About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html). | |||||
| > **Attention** This will bind processor cores according to `device_num` and the total number of processors. If you do not want to bind processor cores during training, remove the `taskset` operations in `scripts/run_distribute_train.sh`. | > **Attention** This will bind processor cores according to `device_num` and the total number of processors. If you do not want to bind processor cores during training, remove the `taskset` operations in `scripts/run_distribute_train.sh`. | ||||
| #### Run vgg16 on GPU | #### Run vgg16 on GPU | ||||
| - Training with a single device (1p) | - Training with a single device (1p) | ||||
| ``` | |||||
| ```bash | |||||
| python train.py --device_target="GPU" --dataset="imagenet2012" --is_distributed=0 --data_path=$DATA_PATH > output.train.log 2>&1 & | python train.py --device_target="GPU" --dataset="imagenet2012" --is_distributed=0 --data_path=$DATA_PATH > output.train.log 2>&1 & | ||||
| ``` | ``` | ||||
| - Distributed Training | - Distributed Training | ||||
| ``` | |||||
| ```bash | |||||
| # distributed training(8p) | # distributed training(8p) | ||||
| bash scripts/run_distribute_train_gpu.sh /path/ImageNet2012/train | bash scripts/run_distribute_train_gpu.sh /path/ImageNet2012/train | ||||
| ``` | ``` | ||||
| @@ -313,15 +323,18 @@ bash scripts/run_distribute_train_gpu.sh /path/ImageNet2012/train" | |||||
| ### Evaluation | ### Evaluation | ||||
| - Run evaluation as follows; the dataset type must be specified as "cifar10" or "imagenet2012" | - Run evaluation as follows; the dataset type must be specified as "cifar10" or "imagenet2012" | ||||
| ``` | |||||
| ```bash | |||||
| # when using cifar10 dataset | # when using cifar10 dataset | ||||
| python eval.py --data_path=your_data_path --dataset="cifar10" --device_target="Ascend" --pre_trained=./*-70-781.ckpt > output.eval.log 2>&1 & | python eval.py --data_path=your_data_path --dataset="cifar10" --device_target="Ascend" --pre_trained=./*-70-781.ckpt > output.eval.log 2>&1 & | ||||
| # when using imagenet2012 dataset | # when using imagenet2012 dataset | ||||
| python eval.py --data_path=your_data_path --dataset="imagenet2012" --device_target="GPU" --pre_trained=./*-150-5004.ckpt > output.eval.log 2>&1 & | python eval.py --data_path=your_data_path --dataset="imagenet2012" --device_target="GPU" --pre_trained=./*-150-5004.ckpt > output.eval.log 2>&1 & | ||||
| ``` | ``` | ||||
| - The above python command will run in the background; you can view the results through the file `output.eval.log`. You will get the accuracy as follows: | - The above python command will run in the background; you can view the results through the file `output.eval.log`. You will get the accuracy as follows: | ||||
| ``` | |||||
| ```bash | |||||
| # when using cifar10 dataset | # when using cifar10 dataset | ||||
| # grep "result: " output.eval.log | # grep "result: " output.eval.log | ||||
| result: {'acc': 0.92} | result: {'acc': 0.92} | ||||
| @@ -331,11 +344,11 @@ after allreduce eval: top1_correct=36636, tot=50000, acc=73.27% | |||||
| after allreduce eval: top5_correct=45582, tot=50000, acc=91.16% | after allreduce eval: top5_correct=45582, tot=50000, acc=91.16% | ||||
| ``` | ``` | ||||
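When scripting around these eval logs, the accuracy lines can be extracted programmatically. The helper below is a hypothetical convenience sketch (`parse_acc` is not part of the repository); it pulls the printed result dicts out of an `output.eval.log`-style text.

```python
import ast
import re

def parse_acc(log_text):
    """Extract the dicts printed on lines like: result: {'acc': 0.92}"""
    results = []
    for line in log_text.splitlines():
        m = re.search(r"result:\s*(\{.*\})", line)
        if m:
            # literal_eval safely parses the printed Python dict literal
            results.append(ast.literal_eval(m.group(1)))
    return results

log = "epoch done\nresult: {'acc': 0.92}\n"
print(parse_acc(log))  # [{'acc': 0.92}]
```

This replaces the `grep "result: "` step shown above when post-processing accuracy across many runs.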
| # [Model Description](#contents) | # [Model Description](#contents) | ||||
| ## [Performance](#contents) | ## [Performance](#contents) | ||||
| ### Training Performance | |||||
| ### Training Performance | |||||
| | Parameters | VGG16(Ascend) | VGG16(GPU) | | | Parameters | VGG16(Ascend) | VGG16(GPU) | | ||||
| | -------------------------- | ---------------------------------------------- |------------------------------------| | | -------------------------- | ---------------------------------------------- |------------------------------------| | ||||
| @@ -354,7 +367,6 @@ after allreduce eval: top5_correct=45582, tot=50000, acc=91.16% | |||||
| | Checkpoint for Fine tuning | 1.1G(.ckpt file) |1.1G(.ckpt file) | | | Checkpoint for Fine tuning | 1.1G(.ckpt file) |1.1G(.ckpt file) | | ||||
| | Scripts |[vgg16](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/vgg16) | | | | Scripts |[vgg16](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/vgg16) | | | ||||
| ### Evaluation Performance | ### Evaluation Performance | ||||
| | Parameters | VGG16(Ascend) | VGG16(GPU) | | Parameters | VGG16(Ascend) | VGG16(GPU) | ||||
| @@ -372,5 +384,6 @@ after allreduce eval: top5_correct=45582, tot=50000, acc=91.16% | |||||
| In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py. | In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py. | ||||
| # [ModelZoo Homepage](#contents) | |||||
| Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). | |||||
| # [ModelZoo Homepage](#contents) | |||||
| Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). | |||||
| @@ -94,7 +94,7 @@ VGG 16网络主要由几个基本模块(包括卷积层和池化层)和三 | |||||
| # 环境要求 | # 环境要求 | ||||
| - 硬件(Ascend或GPU) | - 硬件(Ascend或GPU) | ||||
| - 准备Ascend或GPU处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。 | |||||
| - 准备Ascend或GPU处理器搭建硬件环境。 | |||||
| - 框架 | - 框架 | ||||
| - [MindSpore](https://www.mindspore.cn/install) | - [MindSpore](https://www.mindspore.cn/install) | ||||
| - 如需查看详情,请参见如下资源: | - 如需查看详情,请参见如下资源: | ||||
| @@ -12,10 +12,10 @@ | |||||
| - [Parameters Configuration](#parameters-configuration) | - [Parameters Configuration](#parameters-configuration) | ||||
| - [Dataset Preparation](#dataset-preparation) | - [Dataset Preparation](#dataset-preparation) | ||||
| - [Training Process](#training-process) | - [Training Process](#training-process) | ||||
| - [Training](#training) | |||||
| - [Distributed Training](#distributed-training) | |||||
| - [Training](#training) | |||||
| - [Distributed Training](#distributed-training) | |||||
| - [Evaluation Process](#evaluation-process) | - [Evaluation Process](#evaluation-process) | ||||
| - [Evaluation](#evaluation) | |||||
| - [Evaluation](#evaluation) | |||||
| - [Model Description](#model-description) | - [Model Description](#model-description) | ||||
| - [Performance](#performance) | - [Performance](#performance) | ||||
| - [Training Performance](#training-performance) | - [Training Performance](#training-performance) | ||||
@@ -38,24 +38,23 @@ The dataset is self-generated using a third-party library called [captcha](https
# [Environment Requirements](#contents)
- Hardware(Ascend/GPU)
- Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. You will be able to have access to related resources once approved.
- Prepare hardware environment with Ascend or GPU processor.
- Framework
- [MindSpore](https://gitee.com/mindspore/mindspore)
- For more information, please check the resources below:
- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)
- Generate dataset.
Run the script `scripts/run_process_data.sh` to generate a dataset. By default, the shell script will generate 10000 test images and 50000 train images, respectively.
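A stdlib-only stand-in for the label side of that generation step (the real script renders images with the third-party captcha library; the digits-only labels of up to four characters follow the max_captcha_digits setting):

```python
import random

rng = random.Random(0)  # fixed seed so the sketch is reproducible

def random_label(max_digits=4):
    # one captcha label: 1..max_captcha_digits decimal digits
    n = rng.randint(1, max_digits)
    return "".join(rng.choice("0123456789") for _ in range(n))

print([random_label() for _ in range(3)])
```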
```bash
$ cd scripts
$ sh run_process_data.sh
# after execution, you will find the dataset as follows:
.
└─warpctc
@@ -67,31 +66,33 @@ The dataset is self-generated using a third-party library called [captcha](https
- After the dataset is prepared, you may start running the training or the evaluation scripts as follows:
- Running on Ascend
```bash
# distribute training example in Ascend
$ bash run_distribute_train.sh rank_table.json ../data/train
# evaluation example in Ascend
$ bash run_eval.sh ../data/test warpctc-30-97.ckpt Ascend
# standalone training example in Ascend
$ bash run_standalone_train.sh ../data/train Ascend
```
For distributed training, an hccl configuration file in JSON format needs to be created in advance.
Please follow the instructions in the link below:
<https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools>.
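A rank table for a single 8-device server can be sketched as below. The field names mirror what the hccl_tools script generates as far as I can tell, and the IP addresses are placeholders, so treat the schema as an assumption and generate the real file with the linked tool:

```python
import json

rank_table = {
    "version": "1.0",
    "server_count": "1",
    "server_list": [{
        "server_id": "10.0.0.1",  # placeholder host address
        "device": [
            # one entry per Ascend device: local id, NIC ip, global rank
            {"device_id": str(i), "device_ip": f"192.98.92.{i}", "rank_id": str(i)}
            for i in range(8)
        ],
    }],
    "status": "completed",
}

with open("rank_table.json", "w") as f:
    json.dump(rank_table, f, indent=2)
```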
- Running on GPU
```bash
# distribute training example in GPU
$ bash run_distribute_train_for_gpu.sh 8 ../data/train
# standalone training example in GPU
$ bash run_standalone_train.sh ../data/train GPU
# evaluation example in GPU
$ bash run_eval.sh ../data/test warpctc-30-97.ckpt GPU
```
@@ -127,7 +128,8 @@ The dataset is self-generated using a third-party library called [captcha](https
## [Script Parameters](#contents)
### Training Script Parameters
```bash
# distributed training in Ascend
Usage: bash run_distribute_train.sh [RANK_TABLE_FILE] [DATASET_PATH]
@@ -142,8 +144,8 @@ Usage: bash run_standalone_train.sh [DATASET_PATH] [PLATFORM]
Parameters for both training and evaluation can be set in config.py.
```bash
"max_captcha_digits": 4,     # max number of digits in each
"captcha_width": 160,        # width of captcha images
"captcha_height": 64,        # height of captcha images
"batch_size": 64,            # batch size of input tensor
@@ -158,36 +160,41 @@ Parameters for both training and evaluation can be set in config.py.
```
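For illustration, the listed entries can be held in a simple attribute-style object (how the project's config.py actually exposes them is an assumption):

```python
from types import SimpleNamespace

# Only the parameters quoted above; values match the defaults shown.
config = SimpleNamespace(
    max_captcha_digits=4,   # max number of digits in each captcha
    captcha_width=160,      # width of captcha images
    captcha_height=64,      # height of captcha images
    batch_size=64,          # batch size of input tensor
)

print(config.batch_size)  # 64
```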
## [Dataset Preparation](#contents)
- You may refer to "Generate dataset" in [Quick Start](#quick-start) to automatically generate a dataset, or you may choose to generate a captcha dataset by yourself.
## [Training Process](#contents)
- Set options in `config.py`, including the learning rate and other network hyperparameters. See the [MindSpore dataset preparation tutorial](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/data_preparation.html) for more information about the dataset.
### [Training](#contents)
- Run `run_standalone_train.sh` for non-distributed training of the WarpCTC model, either on Ascend or on GPU.
``` bash
bash run_standalone_train.sh [DATASET_PATH] [PLATFORM]
```
### [Distributed Training](#contents)
- Run `run_distribute_train.sh` for distributed training of the WarpCTC model on Ascend.
``` bash
bash run_distribute_train.sh [RANK_TABLE_FILE] [DATASET_PATH]
```
- Run `run_distribute_train_gpu.sh` for distributed training of the WarpCTC model on GPU.
``` bash
bash run_distribute_train_gpu.sh [RANK_SIZE] [DATASET_PATH]
```
## [Evaluation Process](#contents)
### [Evaluation](#contents)
- Run `run_eval.sh` for evaluation.
``` bash
bash run_eval.sh [DATASET_PATH] [CHECKPOINT_PATH] [PLATFORM]
```
@@ -216,7 +223,6 @@ bash run_eval.sh [DATASET_PATH] [CHECKPOINT_PATH] [PLATFORM]
| Checkpoint for Fine tuning | 20.3M (.ckpt file) | 20.3M (.ckpt file) |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/warpctc) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/warpctc) |
### [Evaluation Performance](#contents)
| Parameters | WarpCTC |
@@ -232,7 +238,9 @@ bash run_eval.sh [DATASET_PATH] [CHECKPOINT_PATH] [PLATFORM]
| Model for inference | 20.3M (.ckpt file) |
# [Description of Random Situation](#contents)
In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py for weight initialization.
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
@@ -43,7 +43,7 @@ WarpCTC is a two-layer stacked LSTM model with one FC layer. For more details, please
# Environment Requirements
- Hardware (Ascend/GPU)
- Set up the hardware environment with an Ascend or GPU processor. To try out the Ascend processor, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; once approved, you can obtain the resources.
- Set up the hardware environment with an Ascend or GPU processor.
- Framework
- [MindSpore](https://gitee.com/mindspore/mindspore)
- For details, see the resources below:
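Since WarpCTC is trained with a CTC loss, its per-step predictions are typically read out with greedy CTC decoding: take the argmax index sequence, collapse consecutive repeats, and drop blanks. A minimal sketch, assuming blank index 0 (check the model's actual label mapping):

```python
def ctc_greedy_decode(indices, blank=0):
    # collapse repeats first, then remove the blank symbol
    out, prev = [], None
    for idx in indices:
        if idx != prev and idx != blank:
            out.append(idx)
        prev = idx
    return out

print(ctc_greedy_decode([0, 3, 3, 0, 5, 5, 5, 2]))  # [3, 5, 2]
```

Note that a blank between two identical labels keeps both, which is exactly why CTC needs the blank symbol.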
@@ -58,7 +58,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore will
# [Environment Requirements](#contents)
- Hardware(Ascend)
- Prepare hardware environment with Ascend. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
- Prepare hardware environment with Ascend.
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
@@ -68,7 +68,7 @@ Dataset used: [COCO2014](https://cocodataset.org/#download)
## [Environment Requirements](#contents)
- Hardware(Ascend/GPU)
- Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
- Prepare hardware environment with Ascend or GPU processor.
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
@@ -70,7 +70,7 @@ YOLOv3 uses DarkNet53 for feature extraction, a hybrid of Darknet-19 from YOLOv2 and residual
# Environment Requirements
- Hardware (Ascend/GPU)
- Set up the hardware environment with an Ascend or GPU processor. To try out the Ascend processor, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; once approved, you can obtain the resources.
- Set up the hardware environment with an Ascend or GPU processor.
- Framework
- [MindSpore](https://www.mindspore.cn/install)
- For details, see the resources below:
@@ -4,7 +4,7 @@
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
- [Script and Sample Code](#script-and-sample-code)
- [Script Parameters](#script-parameters)
@@ -20,13 +20,12 @@
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [YOLOv3-DarkNet53-Quant Description](#contents)
You only look once (YOLO) is a state-of-the-art, real-time object detection system. YOLOv3 is extremely fast and accurate.
Prior detection systems repurpose classifiers or localizers to perform detection. They apply the model to an image at multiple locations and scales. High-scoring regions of the image are considered detections.
YOLOv3 uses a totally different approach. It applies a single neural network to the full image. This network divides the image into regions and predicts bounding boxes and probabilities for each region. These bounding boxes are weighted by the predicted probabilities.
YOLOv3 uses a few tricks to improve training and increase performance, including multi-scale predictions, a better backbone classifier, and more. The full details are in the paper!
@@ -35,43 +34,39 @@ In order to reduce the size of the weight and improve the low-bit computing performance
[Paper](https://pjreddie.com/media/files/papers/YOLOv3.pdf): YOLOv3: An Incremental Improvement. Joseph Redmon, Ali Farhadi,
University of Washington
# [Model Architecture](#contents)
YOLOv3 uses DarkNet53 for feature extraction, which is a hybrid approach between the network used in YOLOv2, Darknet-19, and that newfangled residual network stuff. DarkNet53 uses successive 3 × 3 and 1 × 1 convolutional layers, has some shortcut connections as well, and is significantly larger: it has 53 convolutional layers.
# [Dataset](#contents)
Note that you can run the scripts based on the dataset mentioned in the original paper or one widely used in the relevant domain/network architecture. In the following sections, we will introduce how to run the scripts using the related dataset below.
Dataset used: [COCO2014](https://cocodataset.org/#download)
- Dataset size: 19G, 123,287 images, 80 object categories.
- Train: 13G, 82,783 images
- Val: 6G, 40,504 images
- Annotations: 241M, Train/Val annotations
- Data format: zip files
- Note: Data will be processed in yolo_dataset.py, and unzip the files before using them.
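The unzip step in the note above can be scripted with the standard library; the archive names below are illustrative, not taken from the repo:

```python
import os
import zipfile

def unzip_all(archives, dest="dataset/coco2014"):
    # extract every COCO zip into one dataset directory before training
    os.makedirs(dest, exist_ok=True)
    for path in archives:
        with zipfile.ZipFile(path) as zf:
            zf.extractall(dest)

# unzip_all(["train2014.zip", "val2014.zip", "annotations_trainval2014.zip"])
```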
# [Environment Requirements](#contents)
- Hardware(Ascend)
- Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
- Prepare hardware environment with Ascend processor.
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)
After installing MindSpore via the official website, you can start training and evaluation on Ascend as follows:
```bash
# The yolov3_darknet53_noquant.ckpt in the following script is obtained from yolov3-darknet53 training, as in the paper.
# The parameter resume_yolov3 is required.
# The parameter training_shape defines the image shape for the network; the default is "".
# It means that 10 kinds of shapes are used as the input shape, or it can be set to a specific shape.
@@ -103,17 +98,16 @@ python eval.py \
sh run_eval.sh dataset/coco2014/ checkpoint/yolov3_quant.ckpt 0
```
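The training_shape comments above describe multi-scale input selection: when the parameter is left empty, one of ten shapes is sampled for each step. The 320 to 608 range in steps of 32 below is an assumption borrowed from common YOLOv3 practice, not read from the repo:

```python
import random

SHAPES = list(range(320, 608 + 1, 32))  # ten candidate square input sizes

def pick_shape(rng=random):
    # multi-scale training: draw one input resolution per step/epoch
    return rng.choice(SHAPES)

print(len(SHAPES), pick_shape() in SHAPES)  # 10 True
```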
# [Script Description](#contents)
## [Script and Sample Code](#contents)
```bash
.
└─yolov3_darknet53_quant
  ├─README.md
  ├─mindspore_hub_conf.md          # config for mindspore hub
  ├─scripts
    ├─run_standalone_train.sh      # launch standalone training(1p) in ascend
    ├─run_distribute_train.sh      # launch distributed training(8p) in ascend
    └─run_eval.sh                  # launch evaluating in ascend
@@ -134,10 +128,9 @@ sh run_eval.sh dataset/coco2014/ checkpoint/yolov3_quant.ckpt 0
  └─train.py                       # train net
```
## [Script Parameters](#contents)
```bash
Major parameters in train.py are as follows.
optional arguments:
@@ -194,21 +187,19 @@
                        Resize rate for multi-scale training. Default: None
```
## [Training Process](#contents)
### Training on Ascend
### Distributed Training
```bash
sh run_distribute_train.sh dataset/coco2014 yolov3_darknet53_noquant.ckpt rank_table_8p.json
```
The above shell script will run distributed training in the background. You can view the results through the file `train_parallel[X]/log.txt`. The loss values will look as follows:
```bash
# distribute training result(8p)
epoch[0], iter[0], loss:483.341675, 0.31 imgs/sec, lr:0.0
epoch[0], iter[100], loss:55.690952, 3.46 imgs/sec, lr:0.0
@@ -232,14 +223,13 @@ epoch[134], iter[86400], loss:35.603033, 142.23 imgs/sec, lr:1.6245529650404933e
epoch[134], iter[86500], loss:34.303755, 145.18 imgs/sec, lr:1.6245529650404933e-06
```
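The log lines above follow a fixed pattern, so loss and throughput can be pulled out of `train_parallel[X]/log.txt` with a small parser:

```python
import re

LINE = re.compile(
    r"epoch\[(\d+)\], iter\[(\d+)\], loss:([\d.]+), ([\d.]+) imgs/sec, lr:(\S+)"
)

def parse_line(line):
    # returns None for lines that are not per-iteration loss records
    m = LINE.match(line)
    if not m:
        return None
    epoch, it, loss, ips, lr = m.groups()
    return {"epoch": int(epoch), "iter": int(it), "loss": float(loss),
            "imgs_per_sec": float(ips), "lr": float(lr)}

print(parse_line("epoch[0], iter[100], loss:55.690952, 3.46 imgs/sec, lr:0.0"))
```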
## [Evaluation Process](#contents)
### Evaluation on Ascend
Run the command below for evaluation.
```bash
python eval.py \
    --data_dir=./dataset/coco2014 \
    --pretrained=0-130_83330.ckpt \
@@ -250,7 +240,7 @@ sh run_eval.sh dataset/coco2014/ checkpoint/0-130_83330.ckpt 0
The above python command will run in the background. You can view the results through the file "log.txt". The mAP of the test dataset will be as follows:
```bash
# log.txt
=============coco eval reulst=========
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.310
@@ -267,8 +257,8 @@ Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.450
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.558
```
# [Model Description](#contents)
## [Performance](#contents)
### Evaluation Performance
@@ -279,7 +269,7 @@ Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.558
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory, 755G |
| uploaded Date | 09/15/2020 (month/day/year) |
| MindSpore Version | 1.0.0 |
| Dataset | COCO2014 |
| Training Parameters | epoch=135, batch_size=16, lr=0.012, momentum=0.9 |
| Optimizer | Momentum |
| Loss Function | Sigmoid Cross Entropy with logits |
@@ -289,8 +279,7 @@ Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.558
| Total time | 8pc: 23.5 hours |
| Parameters (M) | 62.1 |
| Checkpoint for Fine tuning | 474M (.ckpt file) |
| Scripts | <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/yolov3_darknet53_quant> |
### Inference Performance
@@ -306,11 +295,10 @@ Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.558
| Accuracy | 8pcs: 31.0% |
| Model for inference | 474M (.ckpt file) |
# [Description of Random Situation](#contents)
There are random seeds in the distributed_sampler.py, transforms.py, and yolo_dataset.py files.
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
@@ -56,7 +56,7 @@ YOLOv3 uses DarkNet53 for feature extraction, a hybrid of Darknet-19 from YOLOv2 and residual
# Environment Requirements
- Hardware (Ascend processor)
- Prepare an Ascend or GPU processor to set up the hardware environment. To try out the Ascend processor, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; once approved, you can obtain the resources.
- Prepare an Ascend or GPU processor to set up the hardware environment.
- Framework
- [MindSpore](https://www.mindspore.cn/install/)
- For details, see the resources below:
@@ -66,12 +66,12 @@ Dataset used: [COCO2017](<http://images.cocodataset.org/>)
# [Environment Requirements](#contents)
- Hardware(Ascend)
- Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
- Prepare hardware environment with Ascend processor.
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)
@@ -69,7 +69,7 @@ The overall network architecture of YOLOv3 is as follows:
# Environment Requirements
- Hardware (Ascend processor)
- Prepare an Ascend processor to set up the hardware environment. To try out the Ascend processor, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; once approved, you can obtain the resources.
- Prepare an Ascend processor to set up the hardware environment.
- Framework
- [MindSpore](https://www.mindspore.cn/install)
- For details, see the resources below:
@@ -62,7 +62,7 @@ other datasets need to use the same format as MS COCO.
# [Environment Requirements](#contents)
- Hardware(Ascend)
- Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
- Prepare hardware environment with Ascend processor.
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
@@ -14,20 +14,18 @@
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [GCN Description](#contents)
GCN (Graph Convolutional Networks) was proposed in 2016 and designed to do semi-supervised learning on graph-structured data. A scalable approach based on an efficient variant of convolutional neural networks which operate directly on graphs was presented. The model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes.
[Paper](https://arxiv.org/abs/1609.02907): Thomas N. Kipf, Max Welling. 2016. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR 2016.
| # [Model Architecture](#contents) | # [Model Architecture](#contents) | ||||
| GCN contains two graph convolution layers. Each layer takes nodes features and adjacency matrix as input, nodes' features are then updated by aggregating neighbours' features. | |||||
| GCN contains two graph convolution layers. Each layer takes nodes features and adjacency matrix as input, nodes' features are then updated by aggregating neighbours' features. | |||||
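The two-layer architecture described above can be sketched in plain NumPy. This is an illustrative reimplementation of the standard GCN propagation rule H' = sigma(D^(-1/2)(A+I)D^(-1/2) H W), not the repository's MindSpore code; the function names and shapes here are made up for the example.

```python
import numpy as np

def normalize_adjacency(adj):
    """Build A_hat = D^(-1/2) (A + I) D^(-1/2), the symmetrically
    normalized adjacency each graph convolution layer multiplies by."""
    a_self = adj + np.eye(adj.shape[0])                 # add self-loops
    deg_inv_sqrt = np.diag(a_self.sum(axis=1) ** -0.5)  # D^(-1/2)
    return deg_inv_sqrt @ a_self @ deg_inv_sqrt

def gcn_forward(adj, features, w1, w2):
    """Two graph convolution layers: each aggregates neighbours'
    features through A_hat, then applies a learned linear map."""
    a_hat = normalize_adjacency(adj)
    hidden = np.maximum(a_hat @ features @ w1, 0.0)  # layer 1 + ReLU
    return a_hat @ hidden @ w2                       # layer 2 -> class scores
```

With `hidden1 = 16` as in config.py, `w1` would have shape (num_features, 16) and `w2` shape (16, num_classes).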
 # [Dataset](#contents)
 Note that you can run the scripts based on the dataset mentioned in original paper or widely used in relevant domain/network architecture. In the following sections, we will introduce how to run the scripts using the related dataset below.
 | Dataset | Type | Nodes | Edges | Classes | Features | Label rate |
@@ -35,29 +33,23 @@ Note that you can run the scripts based on the dataset mentioned in original pap
 | Cora | Citation network | 2708 | 5429 | 7 | 1433 | 0.052 |
 | Citeseer | Citation network | 3327 | 4732 | 6 | 3703 | 0.036 |
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information, please check the resources below:
     - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
     - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
 # [Quick Start](#contents)
 - Install [MindSpore](https://www.mindspore.cn/install/en).
 - Download the dataset Cora or Citeseer provided by /kimiyoung/planetoid from github.
 - Place the dataset at any path you want; the folder should include the following files (we use the Cora dataset as an example):
-```
+```bash
 .
 └─data
     ├─ind.cora.allx
@@ -71,15 +63,18 @@ Note that you can run the scripts based on the dataset mentioned in original pap
 ```
 - Generate dataset in mindrecord format for cora or citeseer.
-####Usage
-```buildoutcfg
+## Usage
+```bash
 cd ./scripts
 # SRC_PATH is the dataset file path you downloaded, DATASET_NAME is cora or citeseer
 sh run_process_data.sh [SRC_PATH] [DATASET_NAME]
 ```
-####Launch
-```
+## Launch
+```bash
 #Generate dataset in mindrecord format for cora
 sh run_process_data.sh ./data cora
 #Generate dataset in mindrecord format for citeseer
@@ -89,12 +84,12 @@ sh run_process_data.sh ./data citeseer
 # [Script Description](#contents)
 ## [Script and Sample Code](#contents)
 ```shell
 .
 └─gcn
   ├─README.md
   ├─scripts
   | ├─run_process_data.sh # Generate dataset in mindrecord format
   | └─run_train.sh # Launch training, now only Ascend backend is supported.
   |
@@ -106,12 +101,12 @@ sh run_process_data.sh ./data citeseer
   |
   └─train.py # Train net, evaluation is performed after every training epoch. After the verification result converges, the training stops, then testing is performed.
 ```
 ## [Script Parameters](#contents)
 Parameters for training can be set in config.py.
-```
+```bash
 "learning_rate": 0.01, # Learning rate
 "epochs": 200, # Epoch sizes for training
 "hidden1": 16, # Hidden size for the first graph convolution layer
@@ -121,26 +116,25 @@ Parameters for training can be set in config.py.
 ```
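As a sketch, the keys above could be mirrored in a small config object. The class name is hypothetical, and the real src/config.py may hold additional fields truncated from this excerpt.

```python
from dataclasses import dataclass

@dataclass
class ConfigGCN:
    learning_rate: float = 0.01  # learning rate for the optimizer
    epochs: int = 200            # number of training epochs
    hidden1: int = 16            # hidden size of the first graph convolution layer
```

train.py could then read, for example, `ConfigGCN().hidden1` when building the first layer.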
 ## [Training, Evaluation, Test Process](#contents)
-#### Usage
-```
+### Usage
+```bash
 # run train with cora or citeseer dataset, DATASET_NAME is cora or citeseer
 sh run_train.sh [DATASET_NAME]
 ```
-#### Launch
+### Launch
 ```bash
 sh run_train.sh cora
 ```
-#### Result
+### Result
 Training results will be stored in the scripts path, in a folder whose name begins with "train". You can find results like the following in the log.
-```
+```bash
 Epoch: 0001 train_loss= 1.95373 train_acc= 0.09286 val_loss= 1.95075 val_acc= 0.20200 time= 7.25737
 Epoch: 0002 train_loss= 1.94812 train_acc= 0.32857 val_loss= 1.94717 val_acc= 0.34000 time= 0.00438
 Epoch: 0003 train_loss= 1.94249 train_acc= 0.47857 val_loss= 1.94337 val_acc= 0.43000 time= 0.00428
@@ -158,6 +152,7 @@ Test set results: cost= 1.00983 accuracy= 0.81300 time= 0.39083
 ```
 # [Model Description](#contents)
 ## [Performance](#contents)
 | Parameters | GCN |
@@ -171,20 +166,17 @@ Test set results: cost= 1.00983 accuracy= 0.81300 time= 0.39083
 | Loss Function | Softmax Cross Entropy |
 | Accuracy | 81.5/70.3 |
 | Parameters (B) | 92160/59344 |
-| Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/gnn/gcn |
+| Scripts | <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/gnn/gcn> |
 # [Description of Random Situation](#contents)
 There are two random situations:
 - Seed is set in train.py according to input argument --seed.
 - Dropout operations.
 Some seeds have already been set in train.py to avoid the randomness of weight initialization. If you want to disable dropout, please set the corresponding dropout_prob parameter to 0 in src/config.py.
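A minimal sketch of what that seeding step can look like. The `--seed` argument name comes from the text above; the default value and helper name here are hypothetical, and the real train.py may also seed the framework's own RNGs.

```python
import argparse
import random

import numpy as np

def set_global_seed(seed):
    """Fix the Python and NumPy RNGs so weight initialization and
    dropout masks are repeatable (hypothetical helper)."""
    random.seed(seed)
    np.random.seed(seed)

parser = argparse.ArgumentParser()
parser.add_argument("--seed", type=int, default=123)  # default is illustrative
args = parser.parse_args([])  # empty argv so the sketch runs standalone
set_global_seed(args.seed)
```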
 # [ModelZoo Homepage](#contents)
 Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
@@ -44,7 +44,7 @@ GCN contains two graph convolution layers. Each layer takes node features and the adjacency matrix as input
 # Environment Requirements
 - Hardware (Ascend processor)
-    - Prepare the hardware environment with an Ascend or GPU processor. To try an Ascend processor, send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei; once the review is approved, you can obtain the resources.
+    - Prepare the hardware environment with an Ascend or GPU processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For details, see the following resources:
@@ -56,7 +56,7 @@ The backbone structure of BERT is transformer. For BERT_base, the transformer co
 # [Environment Requirements](#contents)
 - Hardware(Ascend/GPU)
-    - Prepare hardware environment with Ascend/GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources.
+    - Prepare hardware environment with Ascend or GPU processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information, please check the resources below:
@@ -59,7 +59,7 @@ The backbone structure of BERT is Transformer. For BERT_base, the Transformer contains 12 encoder
 # Environment Requirements
 - Hardware (Ascend processor)
-    - Prepare the hardware environment with an Ascend or GPU processor. To try an Ascend processor, send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; after the application is approved, you can obtain the resources.
+    - Prepare the hardware environment with an Ascend or GPU processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information about MindSpore, see the following resources:
@@ -50,7 +50,7 @@ The classical first-order optimization algorithm, such as SGD, has a small amoun
 ## Environment Requirements
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
@@ -56,7 +56,7 @@ The overall BERT architecture contains 3 embedding layers used to look up token embeddings, position embeddings
 Environment Requirements
 - Hardware (Ascend)
-    - Prepare the hardware environment with an Ascend processor. To try an Ascend processor, send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; once the application is approved, you can obtain the resources.
+    - Prepare the hardware environment with an Ascend processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install)
 - For more information about MindSpore, see the following resources:
@@ -25,9 +25,9 @@
 # [FastText](#contents)
 FastText is a fast text classification algorithm, which is simple and efficient. It was proposed by Armand
-Joulin, Tomas Mikolov etc. in the artical "Bag of Tricks for Efficient Text Classification" in 2016. It is similar to
+Joulin, Tomas Mikolov etc. in the article "Bag of Tricks for Efficient Text Classification" in 2016. It is similar to
 CBOW in model architecture, where the middle word is replaced by a label. FastText adopts the ngram feature as an additional feature
-to get some information about words. It speeds up training and testing while maintaining high percision, and widly used
+to get some information about words. It speeds up training and testing while maintaining high precision, and is widely used
 in various tasks of text classification.
 [Paper](https://arxiv.org/pdf/1607.01759.pdf): "Bag of Tricks for Efficient Text Classification", 2016, A. Joulin, E. Grave, P. Bojanowski, and T. Mikolov
@@ -50,7 +50,7 @@ architecture. In the following sections, we will introduce how to run the script
 # [Environment Requirements](#content)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information, please check the resources below:
@@ -47,7 +47,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap
 ## Platform
 - Hardware (Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you could get the resources for trial.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - Install [MindSpore](https://www.mindspore.cn/install/en).
 - For more information, please check the resources below:
@@ -30,7 +30,7 @@ GPT3 stacks many layers of decoder of transformer. According to the layer number
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information, please check the resources below:
@@ -45,7 +45,7 @@ In this model, we use the Multi30K dataset as our train and test dataset. As trai
 # [Environment Requirements](#content)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information, please check the resources below:
@@ -39,7 +39,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap
 # [Environment Requirements](#contents)
 - Hardware(GPU/CPU/Ascend)
-    - If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you could get the resources for trial.
+    - Prepare hardware environment with Ascend, GPU or CPU processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information, please check the resources below:
@@ -48,7 +48,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap
 # [Quick Start](#contents)
-- runing on Ascend
+- running on Ascend
 ```bash
 # run training example
@@ -58,7 +58,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap
 bash run_eval_ascend.sh 0 ./preprocess lstm-20_390.ckpt
 ```
-- runing on GPU
+- running on GPU
 ```bash
 # run training example
@@ -68,7 +68,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap
 bash run_eval_gpu.sh 0 ./aclimdb ./glove_dir lstm-20_390.ckpt
 ```
-- runing on CPU
+- running on CPU
 ```bash
 # run training example
@@ -200,7 +200,7 @@ Ascend:
 - Set options in `config.py`, including learning rate and network hyperparameters.
-- runing on Ascend
+- running on Ascend
 Run `sh run_train_ascend.sh` for training.
@@ -217,7 +217,7 @@ Ascend:
 ...
 ```
-- runing on GPU
+- running on GPU
 Run `sh run_train_gpu.sh` for training.
@@ -234,7 +234,7 @@ Ascend:
 ...
 ```
-- runing on CPU
+- running on CPU
 Run `sh run_train_cpu.sh` for training.
@@ -44,7 +44,7 @@ The LSTM model contains embedding, encoder and decoder modules; the encoder module
 # Environment Requirements
 - Hardware (GPU/CPU/Ascend)
-    - If you want to try Ascend, send the [Ascend Model Zoo trial resource application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com to apply for Ascend trial resources.
+    - Prepare the hardware environment with an Ascend or GPU processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install)
 - For more information about MindSpore, see the following resources:
@@ -349,7 +349,7 @@ GPU:
 sh run_gpu.sh [--options]
 ```
-The usage of `run_ascend.sh` is shown as bellow:
+The usage of `run_ascend.sh` is shown as below:
 ```text
 Usage: run_ascend.sh [-h, --help] [-t, --task <CHAR>] [-n, --device_num <N>]
@@ -371,7 +371,7 @@ options:
 Notes: Be sure to assign the hccl_json file while running a distributed-training.
-The usage of `run_gpu.sh` is shown as bellow:
+The usage of `run_gpu.sh` is shown as below:
 ```text
 Usage: run_gpu.sh [-h, --help] [-t, --task <CHAR>] [-n, --device_num <N>]
@@ -488,7 +488,7 @@ More detail about LR scheduler could be found in `src/utils/lr_scheduler.py`.
 ## Platform
 - Hardware(Ascend/GPU)
-    - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend or GPU processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
@@ -487,7 +487,7 @@ python weights_average.py --input_files your_checkpoint_list --output_file model
 ## Platform
 - Hardware (Ascend or GPU)
-    - Prepare the hardware environment with an Ascend or GPU processor. To try an Ascend processor, send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; once the application is approved, you can obtain the resources.
+    - Prepare the hardware environment with an Ascend or GPU processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install)
 - For more information about MindSpore, see the following resources:
@@ -340,7 +340,7 @@ GPU:
 sh run_gpu.sh [--options]
 ```
-The usage of `run_ascend.sh` is shown as bellow:
+The usage of `run_ascend.sh` is shown as below:
 ```text
 Usage: run_ascend.sh [-h, --help] [-t, --task <CHAR>] [-n, --device_num <N>]
@@ -362,7 +362,7 @@ options:
 Notes: Be sure to assign the hccl_json file while running a distributed-training.
-The usage of `run_gpu.sh` is shown as bellow:
+The usage of `run_gpu.sh` is shown as below:
 ```text
 Usage: run_gpu.sh [-h, --help] [-t, --task <CHAR>] [-n, --device_num <N>]
@@ -546,7 +546,7 @@ The comparisons between MASS and other baseline methods in terms of PPL on Corne
 ## Platform
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you could get the resources for trial.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
@@ -40,7 +40,7 @@ Dataset used: [Movie Review Data](<http://www.cs.cornell.edu/people/pabo/movie-r
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-    - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
@@ -51,7 +51,7 @@ Dataset used: [Movie Review Data](<http://www.cs.cornell.edu/people/pabo/movie-r
 After installing MindSpore via the official website, you can start training and evaluation as follows:
-- runing on Ascend
+- running on Ascend
 ```python
 # run training example
@@ -51,7 +51,7 @@ The backbone structure of TinyBERT is transformer, the transformer contains four
 # [Environment Requirements](#contents)
 - Hardware(Ascend/GPU)
-    - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend or GPU processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information, please check the resources below:
@@ -56,7 +56,7 @@ The backbone of the TinyBERT model is the transformer, which contains four encoder modules
 # Environment Requirements
 - Hardware (Ascend or GPU)
-    - Prepare the hardware environment with an Ascend or GPU processor. To try an Ascend processor, send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; after the application is approved, you can obtain the resources.
+    - Prepare the hardware environment with an Ascend or GPU processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information about MindSpore, see the following resources:
@@ -40,7 +40,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap
 ## [Environment Requirements](#contents)
 - Hardware(Ascend/GPU)
-    - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend or GPU processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information, please check the resources below:
@@ -45,8 +45,8 @@ The Transformer consists of six encoder modules and six decoder modules. Each encoder module
 ## Environment Requirements
-- Hardware (Ascend processor)
-    - Prepare the hardware environment with an Ascend processor. To try an Ascend processor, send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei; after the application is approved, you can obtain the resources.
+- Hardware (Ascend/GPU processor)
+    - Prepare the hardware environment with an Ascend or GPU processor.
 - Framework
     - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For details, see the following resources:
@@ -38,7 +38,7 @@ The FM and deep component share the same input raw feature vector, which enables
 # [Environment Requirements](#contents)
 - Hardware(Ascend/GPU/CPU)
-    - Prepare hardware environment with Ascend, GPU, or CPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+    - Prepare hardware environment with Ascend, GPU, or CPU processor.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
| @@ -49,7 +49,7 @@ The FM and deep component share the same input raw feature vector, which enables | |||||
| After installing MindSpore via the official website, you can start training and evaluation as follows: | After installing MindSpore via the official website, you can start training and evaluation as follows: | ||||
| - runing on Ascend | |||||
| - running on Ascend | |||||
| ```shell | ```shell | ||||
| # run training example | # run training example | ||||
@@ -42,8 +42,8 @@ The FM and deep learning parts share the same raw input feature vector, which lets DeepFM
 ## Environment Requirements
-- Hardware (Ascend or GPU)
-  - Prepare the hardware environment with an Ascend or GPU processor. To try out an Ascend processor, send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; once the application is approved, you can obtain the resources.
+- Hardware (Ascend/GPU/CPU)
+  - Prepare the hardware environment with an Ascend, GPU, or CPU processor.
 - Framework
   - [MindSpore](https://www.mindspore.cn/install)
 - For details, see the resources below:
@@ -38,7 +38,7 @@ You can download the dataset and put the directory in structure as follows:
 # [Environment Requirements](#contents)
 - Hardware(Ascend/GPU)
-  - Prepare hardware environment with Ascend, GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+  - Prepare hardware environment with Ascend, GPU processor.
 - Framework
   - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -75,7 +75,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 # [Environment Requirements](#contents)
 - Hardware(Ascend/GPU)
-  - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+  - Prepare hardware environment with Ascend or GPU processor.
 - Framework
   - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -43,7 +43,7 @@ Currently we support host-device mode with column partition and parameter serve
 # [Environment Requirements](#contents)
 - Hardware(Ascend or GPU)
-  - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+  - Prepare hardware environment with Ascend or GPU processor.
 - Framework
   - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information, please check the resources below:

@@ -168,7 +168,7 @@ optional arguments:
                         is a slice of weight, multiple checkpoint files need to be
                         transferred. Use ';' to separate them and sort them in sequence
                         like "./checkpoints/0.ckpt;./checkpoints/1.ckpt".
-                        (Defalut:./checkpoints/)
+                        (Default:./checkpoints/)
   --eval_file_name      Eval output file.(Default:eval.og)
   --loss_file_name      Loss output file.(Default:loss.log)
   --host_device_mix     Enable host device mode or not.(Default:0)

@@ -326,7 +326,7 @@ python eval.py
 | AUC Score | 0.80937 | 0.80971 | 0.80862 | 0.80834 |
 | Speed | 20.906 ms/step | 24.465 ms/step | 27.388 ms/step | 236.506 ms/step |
 | Loss | wide:0.433,deep:0.444 | wide:0.444, deep:0.456 | wide:0.437, deep: 0.448 | wide:0.444, deep:0.444 |
-| Parms(M) | 75.84 | 75.84 | 75.84 | 75.84 |
+| Params(M) | 75.84 | 75.84 | 75.84 | 75.84 |
 | Checkpoint for inference | 233MB(.ckpt file) | 230MB(.ckpt) | 233MB(.ckpt file) | 233MB(.ckpt file) |
 All executable scripts can be found in [here](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/recommend/wide_and_deep/script)
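The `--ckpt_path` convention in the argument help above (several weight-slice checkpoints joined with `';'` and listed in sequence) can be sketched in a few lines. The helper below is a hypothetical illustration, not code from the repository:

```python
def split_ckpt_paths(ckpt_path):
    """Split a ';'-separated checkpoint argument such as
    "./checkpoints/0.ckpt;./checkpoints/1.ckpt" into an ordered list."""
    paths = [p for p in ckpt_path.split(";") if p]
    # Each slice is expected to be a MindSpore .ckpt file (assumption).
    if not all(p.endswith(".ckpt") for p in paths):
        raise ValueError("every entry must be a .ckpt file")
    return paths

print(split_ckpt_paths("./checkpoints/0.ckpt;./checkpoints/1.ckpt"))
# -> ['./checkpoints/0.ckpt', './checkpoints/1.ckpt']
```

The order of the list matters, since each file holds one slice of the full weight tensor.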
@@ -45,7 +45,7 @@ The Wide&Deep model jointly trains a wide linear model and a deep neural network, combining
 # Environment Requirements
 - Hardware (Ascend or GPU)
-  - Prepare the hardware environment with an Ascend or GPU processor. To try out an Ascend processor, send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; once the application is approved, you can obtain the resources.
+  - Prepare the hardware environment with an Ascend or GPU processor.
 - Framework
   - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For details, see the resources below:

@@ -170,7 +170,7 @@ optional arguments:
                         is a slice of weight, multiple checkpoint files need to be
                         transferred. Use ';' to separate them and sort them in sequence
                         like "./checkpoints/0.ckpt;./checkpoints/1.ckpt".
-                        (Defalut:./checkpoints/)
+                        (Default:./checkpoints/)
   --eval_file_name      Eval output file.(Default:eval.og)
   --loss_file_name      Loss output file.(Default:loss.log)
   --host_device_mix     Enable host device mode or not.(Default:0)
@@ -1,71 +1,79 @@
 # Contents
 - [Wide&Deep Description](#widedeep-description)
 - [Model Architecture](#model-architecture)
 - [Dataset](#dataset)
 - [Environment Requirements](#environment-requirements)
 - [Quick Start](#quick-start)
 - [Script Description](#script-description)
-- [Script and Sample Code](#script-and-sample-code)
-- [Script Parameters](#script-parameters)
-- [Training Script Parameters](#training-script-parameters)
-- [Training Process](#training-process)
-- [SingleDevice](#singledevice)
-- [Distribute Training](#distribute-training)
-- [Evaluation Process](#evaluation-process)
+- [Script and Sample Code](#script-and-sample-code)
+- [Script Parameters](#script-parameters)
+- [Training Script Parameters](#training-script-parameters)
+- [Training Process](#training-process)
+- [SingleDevice](#singledevice)
+- [Distribute Training](#distribute-training)
+- [Evaluation Process](#evaluation-process)
 - [Model Description](#model-description)
-- [Performance](#performance)
-- [Training Performance](#training-performance)
-- [Evaluation Performance](#evaluation-performance)
+- [Performance](#performance)
+- [Training Performance](#training-performance)
+- [Evaluation Performance](#evaluation-performance)
 - [Description of Random Situation](#description-of-random-situation)
 - [ModelZoo Homepage](#modelzoo-homepage)
 # [Wide&Deep Description](#contents)
 Wide&Deep model is a classical model in Recommendation and Click Prediction area. This is an implementation of Wide&Deep as described in the [Wide & Deep Learning for Recommender System](https://arxiv.org/pdf/1606.07792.pdf) paper.
 # [Model Architecture](#contents)
-Wide&Deep model jointly trained wide linear models and deep neural network, which combined the benefits of memorization and generalization for recommender systems.
+Wide&Deep model jointly trained wide linear models and deep neural network, which combined the benefits of memorization and generalization for recommender systems.
 # [Dataset](#contents)
 - [1] A dataset used in Click Prediction
 # [Environment Requirements](#contents)
 - Hardware(Ascend or GPU)
-  - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+  - Prepare hardware environment with Ascend or GPU processor.
 - Framework
-  - [MindSpore](https://gitee.com/mindspore/mindspore)
+  - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information, please check the resources below:
-  - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-  - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+  - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
+  - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
 # [Quick Start](#contents)
 1. Clone the Code
-```bash
-git clone https://gitee.com/mindspore/mindspore.git
-cd mindspore/model_zoo/official/recommend/wide_and_deep_multitable
-```
+```bash
+git clone https://gitee.com/mindspore/mindspore.git
+cd mindspore/model_zoo/official/recommend/wide_and_deep_multitable
+```
 2. Download the Dataset
 > Please refer to [1] to obtain the download link and data preprocess
 3. Start Training
-Once the dataset is ready, the model can be trained and evaluated on the single device(Ascend) by the command as follows:
+Once the dataset is ready, the model can be trained and evaluated on the single device(Ascend) by the command as follows:
 ```bash
 python train_and_eval.py --data_path=./data/mindrecord --data_type=mindrecord
 ```
 To evaluate the model, command as follows:
 ```bash
 python eval.py --data_path=./data/mindrecord --data_type=mindrecord
 ```
 # [Script Description](#contents)
 ## [Script and Sample Code](#contents)
-```
+```bash
 └── wide_and_deep_multitable
     ├── eval.py
     ├── README.md
@@ -87,10 +95,9 @@ python eval.py --data_path=./data/mindrecord --data_type=mindrecord
 ### [Training Script Parameters](#contents)
-The parameters is same for ``train_and_eval.py`` and ``train_and_eval_distribute.py``
+The parameters is same for ``train_and_eval.py`` and ``train_and_eval_distribute.py``
-```
+```bash
 usage: train_and_eval.py [-h] [--data_path DATA_PATH] [--epochs EPOCHS]
                          [--batch_size BATCH_SIZE]
                          [--eval_batch_size EVAL_BATCH_SIZE]

@@ -115,35 +122,41 @@ optional arguments:
   --deep_layers_dim     The dimension of all deep layers.(Default:[1024,1024,1024,1024])
   --deep_layers_act     The activation function of all deep layers.(Default:'relu')
   --keep_prob           The keep rate in dropout layer.(Default:1.0)
-  --adam_lr             The learning rate of the deep part. (Default:0.003)
-  --ftrl_lr             The learning rate of the wide part.(Default:0.1)
-  --l2_coef             The coefficient of the L2 pernalty. (Default:0.0)
+  --adam_lr             The learning rate of the deep part. (Default:0.003)
+  --ftrl_lr             The learning rate of the wide part.(Default:0.1)
+  --l2_coef             The coefficient of the L2 pernalty. (Default:0.0)
   --is_tf_dataset IS_TF_DATASET Whether the input is tfrecords. (Default:True)
-  --dropout_flag        Enable dropout.(Default:0)
+  --dropout_flag        Enable dropout.(Default:0)
   --output_path OUTPUT_PATH Deprecated
-  --ckpt_path CKPT_PATH The location of the checkpoint file.(Defalut:./checkpoints/)
+  --ckpt_path CKPT_PATH The location of the checkpoint file.(Default:./checkpoints/)
   --eval_file_name EVAL_FILE_NAME Eval output file.(Default:eval.og)
   --loss_file_name LOSS_FILE_NAME Loss output file.(Default:loss.log)
 ```
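The argument listing above corresponds to an `argparse` parser along the following lines. This is a hypothetical reconstruction: the option names and defaults are taken from the listing, but the exact parser code in `train_and_eval.py` may differ:

```python
import argparse

# Sketch of the parser behind train_and_eval.py; names and defaults
# come from the help text above, everything else is assumed.
parser = argparse.ArgumentParser(description="Wide&Deep multitable training")
parser.add_argument("--keep_prob", type=float, default=1.0)
parser.add_argument("--adam_lr", type=float, default=0.003)   # deep part
parser.add_argument("--ftrl_lr", type=float, default=0.1)     # wide part
parser.add_argument("--l2_coef", type=float, default=0.0)
parser.add_argument("--dropout_flag", type=int, default=0)
parser.add_argument("--ckpt_path", default="./checkpoints/")
parser.add_argument("--loss_file_name", default="loss.log")

args = parser.parse_args([])  # empty argv -> all defaults
print(args.adam_lr, args.ftrl_lr, args.ckpt_path)
```

Passing `[]` to `parse_args` makes the sketch runnable without command-line input; in the real script the arguments would come from `sys.argv`.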
 ## [Training Process](#contents)
 ### [SingleDevice](#contents)
 To train and evaluate the model, command as follows:
-```
+```bash
 python train_and_eval.py
 ```
 ### [Distribute Training](#contents)
 To train the model in data distributed training, command as follows:
-```
+```bash
 # configure environment path before training
-bash run_multinpu_train.sh RANK_SIZE EPOCHS DATASET RANK_TABLE_FILE
+bash run_multinpu_train.sh RANK_SIZE EPOCHS DATASET RANK_TABLE_FILE
 ```
 ## [Evaluation Process](#contents)
 To evaluate the model, command as follows:
-```
+```bash
 python eval.py
 ```
@@ -151,7 +164,7 @@ python eval.py
 ## [Performance](#contents)
-### Training Performance
+### Training Performance
 | Parameters | Single <br />Ascend | Data-Parallel-8P |
 | ------------------------ | ------------------------------- | ------------------------------- |

@@ -166,11 +179,9 @@ python eval.py
 | MAP Score | 0.6608 | 0.6590 |
 | Speed | 284 ms/step | 331 ms/step |
 | Loss | wide:0.415,deep:0.415 | wide:0.419, deep: 0.419 |
-| Parms(M) | 349 | 349 |
+| Params(M) | 349 | 349 |
 | Checkpoint for inference | 1.1GB(.ckpt file) | 1.1GB(.ckpt file) |
 All executable scripts can be found in [here](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/recommend/wide_and_deep_multitable/script)
 ### Evaluation Performance

@@ -188,11 +199,11 @@ All executable scripts can be found in [here](https://gitee.com/mindsp
 # [Description of Random Situation](#contents)
 There are three random situations:
 - Shuffle of the dataset.
 - Initialization of some model weights.
 - Dropout operations.
 # [ModelZoo Homepage](#contents)
-Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
+Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
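The memorization-plus-generalization idea described earlier (a wide linear term plus a deep MLP, trained jointly and combined through a sigmoid) can be sketched with toy numbers. Every weight and feature value below is made up for illustration; the real model uses embeddings and 1024-unit layers:

```python
import math

def wide_deep_score(wide_x, wide_w, deep_x, deep_layers):
    """Toy Wide&Deep inference: a linear 'wide' term plus a tiny
    'deep' MLP with ReLU hidden layers, combined through a sigmoid."""
    wide = sum(w * x for w, x in zip(wide_w, wide_x))
    h = deep_x
    for layer in deep_layers:  # each layer: list of neuron weight vectors
        h = [max(0.0, sum(w * v for w, v in zip(neuron, h))) for neuron in layer]
    deep = sum(h)
    return 1.0 / (1.0 + math.exp(-(wide + deep)))

score = wide_deep_score(
    wide_x=[1.0, 0.0, 1.0], wide_w=[0.2, -0.1, 0.4],   # cross-feature indicators
    deep_x=[0.5, 1.5],                                  # dense features
    deep_layers=[[[0.3, -0.2], [0.1, 0.4]]],            # one hidden layer, two neurons
)
print(score)
```

In the real model the wide part is optimized with FTRL and the deep part with Adam (the `--ftrl_lr` and `--adam_lr` arguments above), but the combined-logit structure is as sketched.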
@@ -38,7 +38,7 @@ The Wide&Deep model jointly trains a wide linear model and a deep neural network, combining
 # Environment Requirements
 - Hardware (Ascend or GPU)
-  - Prepare the hardware environment with an Ascend or GPU processor. To try out an Ascend processor, send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com; once the application is approved, you can obtain the resources.
+  - Prepare the hardware environment with an Ascend or GPU processor.
 - Framework
   - [MindSpore](https://gitee.com/mindspore/mindspore)
 - For more information about MindSpore, see the resources below:

@@ -129,7 +129,7 @@ optional arguments:
   --is_tf_dataset IS_TF_DATASET Whether the input is tfrecords. (Default:True)
   --dropout_flag        Enable dropout.(Default:0)
   --output_path OUTPUT_PATH Deprecated
-  --ckpt_path CKPT_PATH The location of the checkpoint file.(Defalut:./checkpoints/)
+  --ckpt_path CKPT_PATH The location of the checkpoint file.(Default:./checkpoints/)
   --eval_file_name EVAL_FILE_NAME Eval output file.(Default:eval.og)
   --loss_file_name LOSS_FILE_NAME Loss output file.(Default:loss.log)
 ```
@@ -29,8 +29,8 @@ The overall network architecture of DQN is show below:
 ## [Requirements](#content)
-- Hardware(Ascend/GPU/CPU)
-  - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+- Hardware(Ascend/GPU)
+  - Prepare hardware environment with Ascend or GPU processor.
 - Framework
   - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -37,8 +37,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 ## [Environment Requirements](#contents)
-- Hardware(Ascend
-  - If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+- Hardware(Ascend)
+  - Prepare hardware environment with Ascend processor.
 - Framework
   - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -86,7 +86,7 @@ We use about 91K face images as training dataset and 11K as evaluating dataset i
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-  - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+  - Prepare hardware environment with Ascend processor.
 - Framework
   - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -70,7 +70,7 @@ We use about 13K images as training dataset and 3K as evaluating dataset in this
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-  - Prepare hardware environment with Ascend processor. If you want to try Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+  - Prepare hardware environment with Ascend processor.
 - Framework
   - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -68,7 +68,7 @@ We use about 122K face images as training dataset and 2K as evaluating dataset i
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-  - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+  - Prepare hardware environment with Ascend processor.
 - Framework
   - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -56,7 +56,7 @@ The directory structure is as follows:
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-  - Prepare hardware environment with Ascend processor. If you want to get Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+  - Prepare hardware environment with Ascend processor.
 - Framework
   - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -56,7 +56,7 @@ The directory structure is as follows:
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-  - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+  - Prepare hardware environment with Ascend processor.
 - Framework
   - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -66,7 +66,7 @@ The directory structure is as follows:
 ## [Environment Requirements](#contents)
 - Hardware(Ascend)
-  - Prepare hardware environment with Ascend processor. If you want to get Ascend , please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+  - Prepare hardware environment with Ascend processor.
 - Framework
   - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:

@@ -77,7 +77,7 @@ Dataset used: [COCO2017](https://cocodataset.org/)
 # [Environment Requirements](#contents)
 - Hardware(Ascend)
-  - Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+  - Prepare hardware environment with Ascend processor.
 - Framework
   - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
| @@ -5,14 +5,14 @@ | |||||
| - [Dataset](#dataset) | - [Dataset](#dataset) | ||||
| - [Environment Requirements](#environment-requirements) | - [Environment Requirements](#environment-requirements) | ||||
| - [Script Description](#script-description) | - [Script Description](#script-description) | ||||
| - [Script and Sample Code](#script-and-sample-code) | |||||
| - [Training Process](#training-process) | |||||
| - [Evaluation Process](#evaluation-process) | |||||
| - [Evaluation](#evaluation) | |||||
| - [Script and Sample Code](#script-and-sample-code) | |||||
| - [Training Process](#training-process) | |||||
| - [Evaluation Process](#evaluation-process) | |||||
| - [Evaluation](#evaluation) | |||||
| - [Model Description](#model-description) | - [Model Description](#model-description) | ||||
| - [Performance](#performance) | |||||
| - [Training Performance](#evaluation-performance) | |||||
| - [Inference Performance](#evaluation-performance) | |||||
| - [Performance](#performance) | |||||
| - [Training Performance](#evaluation-performance) | |||||
| - [Inference Performance](#evaluation-performance) | |||||
| - [Description of Random Situation](#description-of-random-situation) | - [Description of Random Situation](#description-of-random-situation) | ||||
| - [ModelZoo Homepage](#modelzoo-homepage) | - [ModelZoo Homepage](#modelzoo-homepage) | ||||
| @@ -33,20 +33,20 @@ The overall network architecture of GhostNet is show below: | |||||
| Dataset used: [Oxford-IIIT Pet](https://www.robots.ox.ac.uk/~vgg/data/pets/) | Dataset used: [Oxford-IIIT Pet](https://www.robots.ox.ac.uk/~vgg/data/pets/) | ||||
| - Dataset size: 7049 colorful images in 1000 classes | - Dataset size: 7049 colorful images in 1000 classes | ||||
| - Train: 3680 images | |||||
| - Test: 3369 images | |||||
| - Train: 3680 images | |||||
| - Test: 3369 images | |||||
| - Data format: RGB images. | - Data format: RGB images. | ||||
| - Note: Data will be processed in src/dataset.py | |||||
| - Note: Data will be processed in src/dataset.py | |||||

# [Environment Requirements](#contents)

- Hardware (Ascend/GPU)
    - Prepare a hardware environment with an Ascend or GPU processor.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)

# [Script Description](#contents)

## [Training Process](#contents)

To Be Done

## [Evaluation Process](#contents)

After installing MindSpore via the official website, you can start evaluation as follows:

### Launch

```bash
# infer example
Ascend: python eval.py --model [ghostnet/ghostnet-600] --dataset_path ~/Pets/test.mindrecord --platform Ascend --checkpoint_path [CHECKPOINT_PATH]
GPU: python eval.py --model [ghostnet/ghostnet-600] --dataset_path ~/Pets/test.mindrecord --platform GPU --checkpoint_path [CHECKPOINT_PATH]
```
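For reference, the flag names in the commands above imply an entry point along these lines. This is a hypothetical sketch of the argument parsing only, mirroring the documented flags; the repository's actual `eval.py` may define different defaults, choices, and additional options:

```python
import argparse

def build_parser():
    # Hypothetical parser mirroring the documented eval.py flags;
    # the real script's defaults and choices may differ.
    parser = argparse.ArgumentParser(description="GhostNet evaluation (sketch)")
    parser.add_argument("--model", choices=["ghostnet", "ghostnet-600"],
                        default="ghostnet", help="model variant to evaluate")
    parser.add_argument("--dataset_path", required=True,
                        help="path to the test.mindrecord file")
    parser.add_argument("--platform", choices=["Ascend", "GPU"],
                        default="Ascend", help="target device")
    parser.add_argument("--checkpoint_path", required=True,
                        help="path to a trained .ckpt file")
    return parser

# Parse a sample command line (passed explicitly, not taken from sys.argv).
args = build_parser().parse_args(
    ["--model", "ghostnet", "--dataset_path", "test.mindrecord",
     "--platform", "GPU", "--checkpoint_path", "ghostnet_1x_pets.ckpt"]
)
assert args.platform == "GPU" and args.model == "ghostnet"
```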

### Result

```bash
result: {'acc': 0.8113927500681385} ckpt= ./ghostnet_nose_1x_pets.ckpt
result: {'acc': 0.824475333878441} ckpt= ./ghostnet_1x_pets.ckpt
result: {'acc': 0.8691741618969746} ckpt= ./ghostnet600M_pets.ckpt
```
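The `acc` value in these logs is top-1 accuracy: the fraction of test images whose highest-scoring class prediction matches the ground-truth label. A self-contained illustration with toy logits (not the model's real outputs):

```python
import numpy as np

def top1_accuracy(logits, labels):
    # Top-1 accuracy: argmax over class scores, compared against ground truth.
    predictions = np.argmax(logits, axis=1)
    return float(np.mean(predictions == labels))

# Toy batch: 4 samples, 3 classes; 3 of the 4 argmax predictions are correct.
logits = np.array([[0.1, 0.7, 0.2],
                   [0.8, 0.1, 0.1],
                   [0.2, 0.3, 0.5],
                   [0.6, 0.3, 0.1]])
labels = np.array([1, 0, 2, 2])
print(top1_accuracy(logits, labels))  # 0.75
```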

## [Performance](#contents)

### Evaluation Performance

#### GhostNet on ImageNet2012

| Parameters      |          |              |
| --------------- | -------- | ------------ |
| Model Version   | GhostNet | GhostNet-600 |
| FLOPs (M)       | 142      | 591          |
| Accuracy (Top1) | 73.9     | 80.2         |

#### GhostNet on Oxford-IIIT Pet

| Parameters      |          |              |
| --------------- | -------- | ------------ |
| Model Version   | GhostNet | GhostNet-600 |
| FLOPs (M)       | 140      | 590          |
| Accuracy (Top1) | 82.4     | 86.9         |

#### Comparison with other methods on Oxford-IIIT Pet

| Model | FLOPs (M) | Latency (ms)* | Accuracy (Top1) |
| - | - | - | - |

---

- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Training Process](#training-process)
    - [Evaluation Process](#evaluation-process)
        - [Evaluation](#evaluation)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Training Performance](#evaluation-performance)
        - [Inference Performance](#evaluation-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)

Dataset used: [Oxford-IIIT Pet](https://www.robots.ox.ac.uk/~vgg/data/pets/)

- Dataset size: 7049 color images in 37 classes
    - Train: 3680 images
    - Test: 3369 images
- Data format: RGB images
    - Note: Data will be processed in src/dataset.py

# [Environment Requirements](#contents)

- Hardware (Ascend/GPU)
    - Prepare a hardware environment with an Ascend or GPU processor.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)

# [Script Description](#contents)

## [Training Process](#contents)

To Be Done

## [Evaluation Process](#contents)

After installing MindSpore via the official website, you can start evaluation as follows:

### Launch

```bash
# infer example
Ascend: python eval.py --dataset_path ~/Pets/test.mindrecord --platform Ascend --checkpoint_path [CHECKPOINT_PATH]
GPU: python eval.py --dataset_path ~/Pets/test.mindrecord --platform GPU --checkpoint_path [CHECKPOINT_PATH]
```

### Result

```bash
result: {'acc': 0.825} ckpt= ./ghostnet_1x_pets_int8.ckpt
```
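The int8 checkpoint evaluated here is an 8-bit quantized variant of GhostNet. As background only, a generic sketch of symmetric per-tensor int8 quantization, which maps float weights to integers through a single scale factor; this is not necessarily the quantization scheme this repository uses:

```python
import numpy as np

def quantize_int8(weights):
    # Symmetric per-tensor quantization: scale so the largest
    # magnitude maps to the int8 extreme value 127.
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 codes.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 0.9], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Round-trip error is bounded by half a quantization step (scale / 2).
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-7
```

The small accuracy gap between the float and int8 rows in the tables below reflects exactly this rounding error, which fine-tuning partially recovers.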

## [Performance](#contents)

### Evaluation Performance

#### GhostNet on ImageNet2012

| Parameters      |          |                                      |
| --------------- | -------- | ------------------------------------ |
| Model Version   | GhostNet | GhostNet-int8                        |
| FLOPs (M)       | 142      | /                                    |
| Accuracy (Top1) | 73.9     | w/o finetune: 72.2, w finetune: 73.6 |

#### GhostNet on Oxford-IIIT Pet

| Parameters      |          |                                        |
| --------------- | -------- | -------------------------------------- |
| Model Version   | GhostNet | GhostNet-int8                          |
| FLOPs (M)       | 140      | /                                      |
| Accuracy (Top1) | 82.4     | w/o finetune: 81.66, w finetune: 82.45 |

# [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
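A minimal illustration of why such seeding makes runs reproducible, using only stdlib and NumPy RNGs (the repository presumably calls MindSpore's own seeding APIs inside `create_dataset` and `train.py`; this sketch does not reproduce that code):

```python
import random

import numpy as np

def set_global_seed(seed):
    # Seed the stdlib and NumPy RNGs that a data pipeline might touch;
    # a real MindSpore script would also seed the framework itself.
    random.seed(seed)
    np.random.seed(seed)

set_global_seed(1)
a = np.random.rand(3)
set_global_seed(1)
b = np.random.rand(3)
# Same seed, same "random" draws, e.g. shuffle and augmentation order.
assert np.allclose(a, b)
```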