From e512fac44f67bf2b47b42ba08fe8e98c9108a6cc Mon Sep 17 00:00:00 2001
From: CaoJian
Date: Sat, 29 Aug 2020 16:20:46 +0800
Subject: [PATCH] vgg16 readme update

---
 model_zoo/official/cv/vgg16/README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/model_zoo/official/cv/vgg16/README.md b/model_zoo/official/cv/vgg16/README.md
index d6c492865b..27f534e01a 100644
--- a/model_zoo/official/cv/vgg16/README.md
+++ b/model_zoo/official/cv/vgg16/README.md
@@ -79,6 +79,7 @@ here basic modules mainly include basic operation like: **3×3 conv** and **2×
 ## Mixed Precision
 
 The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
 
 
@@ -370,4 +371,4 @@ after allreduce eval: top5_correct=45582, tot=50000, acc=91.16%
 In dataset.py, we set the seed inside “create_dataset" function. We also use random seed in train.py.
 
 # [ModelZoo Homepage](#contents)
-Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
\ No newline at end of file
+Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
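
For context on the Mixed Precision paragraph touched by the first hunk: in MindSpore model_zoo training scripts, this behaviour is typically enabled through the `amp_level` argument of `mindspore.Model`. The sketch below is illustrative only and not part of this patch; the stand-in network and hyper-parameter values are assumptions, not taken from the vgg16 scripts.

```python
# Minimal sketch of enabling mixed precision in MindSpore via amp_level.
# The network here is a tiny stand-in for the real VGG16 defined in src/vgg.py,
# and the hyper-parameter values are illustrative placeholders.
import mindspore.nn as nn
from mindspore import Model

net = nn.SequentialCell([nn.Flatten(), nn.Dense(3 * 224 * 224, 1000)])
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level="O2" casts the network to FP16 while keeping BatchNorm and the
# loss in FP32 -- the reduced-precision handling the README paragraph describes.
model = Model(net, loss_fn=loss, optimizer=opt, metrics={"acc"}, amp_level="O2")
```

To inspect which operators actually ran at reduced precision, as the paragraph suggests, raise the log level to INFO (e.g. `export GLOG_v=1` before launching training) and search the output for 'reduce precision'.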