SRCNN learns an end-to-end mapping between low- and high-resolution images, with little pre/post-processing beyond the optimization itself. Despite its lightweight structure, SRCNN achieves performance superior to the state-of-the-art methods.
Paper: Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang. Image Super-Resolution Using Deep Convolutional Networks. 2014.
The overall network architecture of SRCNN is a three-layer convolutional network: patch extraction and representation, non-linear mapping, and reconstruction (see the paper for the architecture figure).
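As a rough illustration of this structure, here is a minimal MindSpore sketch. The 9-5-5 kernel sizes and 64/32 filter counts are one common setting from the paper; the repository's actual definition lives in src/srcnn.py and may differ.

```python
# Minimal SRCNN sketch in MindSpore (illustrative; see src/srcnn.py for the real one).
import mindspore.nn as nn

class SRCNN(nn.Cell):
    """Patch extraction -> non-linear mapping -> reconstruction."""
    def __init__(self, num_channels=1):
        super(SRCNN, self).__init__()
        # has_bias=True because MindSpore's Conv2d defaults to no bias.
        self.conv1 = nn.Conv2d(num_channels, 64, kernel_size=9, pad_mode='same', has_bias=True)
        self.conv2 = nn.Conv2d(64, 32, kernel_size=5, pad_mode='same', has_bias=True)
        self.conv3 = nn.Conv2d(32, num_channels, kernel_size=5, pad_mode='same', has_bias=True)
        self.relu = nn.ReLU()

    def construct(self, x):
        x = self.relu(self.conv1(x))
        x = self.relu(self.conv2(x))
        return self.conv3(x)
```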
The directory structure is as follows:

```text
.
└─srcnn
  ├─README.md
  ├─scripts
  │ ├─run_distribute_train_gpu.sh  # launch distributed training on GPU
  │ └─run_eval_gpu.sh              # launch evaluation on GPU
  ├─src
  │ ├─config.py                    # parameter configuration
  │ ├─dataset.py                   # data preprocessing
  │ ├─metric.py                    # accuracy metric (PSNR)
  │ ├─utils.py                     # commonly used helper functions
  │ └─srcnn.py                     # network definition
  ├─create_dataset.py              # generate the MindRecord training dataset
  ├─eval.py                        # evaluation script
  └─train.py                       # training script
```
Parameters for both training and evaluation can be set in config.py.
```python
'lr': 1e-4,                         # learning rate
'patch_size': 33,                   # patch size
'stride': 99,                       # stride used when cropping patches
'scale': 2,                         # image upscaling factor
'epoch_size': 20,                   # total number of epochs
'batch_size': 16,                   # input batch size
'save_checkpoint': True,            # whether to save checkpoint files
'keep_checkpoint_max': 10,          # maximum number of checkpoints to keep
'save_checkpoint_path': 'outputs/'  # checkpoint save path
```
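These entries are typically gathered into a single dictionary-style object. As a minimal sketch of what src/config.py might look like, assuming an easydict-based container and the name srcnn_cfg (both assumptions, not confirmed by this README):

```python
# Hypothetical sketch of src/config.py; the actual file may differ.
from easydict import EasyDict as edict

srcnn_cfg = edict({
    'lr': 1e-4,
    'patch_size': 33,
    'stride': 99,
    'scale': 2,
    'epoch_size': 20,
    'batch_size': 16,
    'save_checkpoint': True,
    'keep_checkpoint_max': 10,
    'save_checkpoint_path': 'outputs/',
})
```

The training and evaluation scripts would then read values as attributes, e.g. srcnn_cfg.lr.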
To create the dataset, first download the training data and then convert it to MindRecord files as follows.
```shell
python create_dataset.py --src_folder=/dataset/ILSVRC2013_DET_train --output_folder=/dataset/mindrecord_dir
```
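Conceptually, the conversion crops overlapping low/high-resolution patch pairs from each image using the patch_size, stride, and scale values above. A rough sketch of that step (the helper name extract_patch_pairs is hypothetical; the real script additionally writes the pairs into MindRecord files):

```python
# Hypothetical sketch of the patch-pair extraction behind create_dataset.py.
import numpy as np
from PIL import Image

def extract_patch_pairs(path, scale=2, patch_size=33, stride=99):
    """Yield (low-res, high-res) patch pairs from one training image."""
    hr = Image.open(path).convert('L')          # work on the luminance channel
    w, h = (hr.width // scale) * scale, (hr.height // scale) * scale
    hr = hr.crop((0, 0, w, h))                  # make dimensions divisible by scale
    # Bicubic downscale then upscale back: the blurred input SRCNN maps from.
    lr = hr.resize((w // scale, h // scale), Image.BICUBIC)
    lr = lr.resize((w, h), Image.BICUBIC)
    hr = np.asarray(hr, np.float32) / 255.0
    lr = np.asarray(lr, np.float32) / 255.0
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            yield (lr[i:i + patch_size, j:j + patch_size],
                   hr[i:i + patch_size, j:j + patch_size])
```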
GPU:

```shell
sh run_distribute_train_gpu.sh DEVICE_NUM VISIBLE_DEVICES(0,1,2,3,4,5,6,7) DATASET_PATH

# distributed training example (8p) for GPU
sh run_distribute_train_gpu.sh 8 0,1,2,3,4,5,6,7 /dataset/train

# standalone training example for GPU
sh run_distribute_train_gpu.sh 1 0 /dataset/train
```
You can find checkpoint files together with training results in the log.
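For orientation, train.py presumably wires these pieces together along the following lines. This is a minimal sketch built from standard MindSpore 1.x APIs, not the repository's exact code:

```python
# Minimal sketch of the training wiring (assumed, not the actual train.py).
import mindspore.nn as nn
from mindspore import Model
from mindspore.train.callback import LossMonitor, ModelCheckpoint, CheckpointConfig

net = SRCNN()                                   # network from src/srcnn.py
loss = nn.MSELoss()                             # MSE loss, matching the table below
opt = nn.Adam(net.trainable_params(), learning_rate=1e-4)
model = Model(net, loss_fn=loss, optimizer=opt)

ckpt_cfg = CheckpointConfig(keep_checkpoint_max=10)
ckpt_cb = ModelCheckpoint(prefix='srcnn', directory='outputs/', config=ckpt_cfg)
# `train_dataset` would be built from the MindRecord files by src/dataset.py.
model.train(20, train_dataset, callbacks=[LossMonitor(), ckpt_cb])
```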
```shell
# Evaluation
sh run_eval_gpu.sh DEVICE_ID DATASET_PATH CHECKPOINT_PATH

# Evaluation with checkpoint
sh run_eval_gpu.sh 1 /dataset/val /ckpt_dir/srcnn-20_*.ckpt
```
Evaluation results are stored in the scripts path; in the log there you will find output like the following.
```text
result {'PSNR': 36.72421418219669}
```
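The reported metric is PSNR, presumably computed in src/metric.py from the mean squared error between the reconstructed and ground-truth images. For reference, the standard formula as a standalone NumPy function (illustrative only, not the repository's code):

```python
# Standard PSNR for images scaled to [0, 1]; illustrative only.
import numpy as np

def psnr(sr, hr, max_val=1.0):
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10((max_val ** 2) / mse)
```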
Training performance:

| Parameters | SRCNN |
|---|---|
| Resource | NV PCIE V100-32G |
| Uploaded Date | 03/02/2021 |
| MindSpore Version | master |
| Dataset | ImageNet2013, scale: 2 |
| Training Parameters | src/config.py |
| Optimizer | Adam |
| Loss Function | MSELoss |
| Loss | 0.00179 |
| Total time | 1 h (8 devices) |
| Checkpoint for Fine tuning | 671 KB (.ckpt file) |
Evaluation performance:

| Parameters | SRCNN |
|---|---|
| Resource | NV PCIE V100-32G |
| Uploaded Date | 03/02/2021 |
| MindSpore Version | master |
| Dataset | Set5/Set14/BSDS200, scale: 2 |
| batch_size | 1 |
| PSNR | 36.72/32.58/33.81 |
For more information, please check the official MindSpore homepage.
MindSpore is an open-source deep learning training/inference framework that can be used in mobile, edge, and cloud scenarios.