# Pre-Trained Image Processing Transformer (IPT)

This repository is an official implementation of the paper "Pre-Trained Image Processing Transformer" from CVPR 2021.
We study low-level computer vision tasks (e.g., denoising, super-resolution and deraining) and develop a new pre-trained model, namely, the image processing transformer (IPT). To maximally excavate the capability of the transformer, we propose to utilize the well-known ImageNet benchmark for generating a large amount of corrupted image pairs. The IPT model is trained on these images with multi-heads and multi-tails. In addition, contrastive learning is introduced to adapt well to different image processing tasks. The pre-trained model can therefore be efficiently employed on the desired task after fine-tuning. With only one pre-trained model, IPT outperforms the current state-of-the-art methods on various low-level benchmarks.
If you find our work useful in your research or publication, please cite our work:

[1] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. **"Pre-trained image processing transformer"**. <i>**CVPR 2021**.</i> [[arXiv](https://arxiv.org/abs/2012.00364)]

```
@inproceedings{chen2020pre,
  title={Pre-trained image processing transformer},
  author={Chen, Hanting and Wang, Yunhe and Guo, Tianyu and Xu, Chang and Deng, Yiping and Liu, Zhenhua and Ma, Siwei and Xu, Chunjing and Xu, Chao and Gao, Wen},
  booktitle={CVPR},
  year={2021}
}
```
## Model architecture

### The overall network architecture of IPT is shown below:

![architecture](./image/ipt.png)
## Dataset

The benchmark datasets can be downloaded as follows:

For super-resolution:
Set5,
[Set14](https://sites.google.com/site/romanzeyde/research-interests),
[B100](https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/),
[Urban100](https://sites.google.com/site/jbhuang0604/publications/struct_sr).

For denoising:
[CBSD68](https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/).

For deraining:
[Rain100L](https://www.icst.pku.edu.cn/struct/Projects/joint_rain_removal.html).
The result images are converted into the YCbCr color space, and PSNR is evaluated on the Y channel only.
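The Y-channel convention above can be sketched as follows. This is a minimal NumPy sketch (not the repository's `src/metrics.py`), assuming the standard ITU-R BT.601 studio-swing convention (as in MATLAB's `rgb2ycbcr`) with pixel values in [0, 255]:

```python
import numpy as np

def rgb_to_y(img):
    """Extract the Y (luma) channel from an H x W x 3 RGB image in [0, 255],
    using the ITU-R BT.601 studio-swing coefficients (MATLAB rgb2ycbcr)."""
    img = img.astype(np.float64) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + 65.481 * r + 128.553 * g + 24.966 * b

def psnr(img1, img2, max_val=255.0):
    """PSNR (in dB) between two same-sized images or channels."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

With this convention, comparing a restored image `out` against ground truth `gt` would read `psnr(rgb_to_y(out), rgb_to_y(gt))`.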
## Requirements

### Hardware (GPU)

> Prepare a hardware environment with a GPU.

### Framework

> [MindSpore](https://www.mindspore.cn/install/en)

### For more information, please check the resources below:

- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
## Script Description

> This is the inference code of IPT. You can follow the steps below to run the testing of image processing tasks, such as SR, denoising and deraining, using the corresponding pretrained models.
### Scripts and Sample Code

```
IPT
├── eval.py                    # inference entry
├── image
│   └── ipt.png                # illustration of the IPT network
├── model
│   ├── IPT_denoise30.ckpt     # denoising model weights for noise level 30
│   ├── IPT_denoise50.ckpt     # denoising model weights for noise level 50
│   ├── IPT_derain.ckpt        # deraining model weights
│   ├── IPT_sr2.ckpt           # X2 super-resolution model weights
│   ├── IPT_sr3.ckpt           # X3 super-resolution model weights
│   └── IPT_sr4.ckpt           # X4 super-resolution model weights
├── readme.md                  # this readme
├── scripts
│   └── run_eval.sh            # inference script for all tasks
└── src
    ├── args.py                # options/hyperparameters of IPT
    ├── data
    │   ├── common.py          # common dataset utilities
    │   ├── __init__.py        # dataset class init
    │   └── srdata.py          # loading flow for SR data
    ├── foldunfold_stride.py   # fold and unfold operations for images
    ├── metrics.py             # PSNR calculator
    ├── template.py            # model selection settings
    └── vitm.py                # IPT network
```
### Script Parameters

> For details about hyperparameters, see src/args.py.
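As a rough illustration only (not the actual contents of `src/args.py`), the flags used by the inference command in the Evaluation section could be declared with `argparse` like this; the option names mirror that command line, while the defaults and help strings are assumptions:

```python
import argparse

def build_parser():
    """Hypothetical sketch of IPT inference options; see src/args.py for the real list."""
    parser = argparse.ArgumentParser(description="IPT inference options (sketch)")
    parser.add_argument("--dir_data", type=str, default="../../data/",
                        help="root directory of the benchmark datasets")
    parser.add_argument("--data_test", type=str, default="Set14",
                        help="name of the test dataset")
    parser.add_argument("--scale", type=int, default=4,
                        help="super-resolution scale factor")
    parser.add_argument("--pth_path", type=str, default="./model/IPT_sr4.ckpt",
                        help="path to the pretrained checkpoint")
    parser.add_argument("--nochange", action="store_true",
                        help="flag from the example command")
    parser.add_argument("--test_only", action="store_true",
                        help="run inference only, no training")
    parser.add_argument("--ext", type=str, default="img",
                        help="data loading mode / file extension")
    parser.add_argument("--chop_new", action="store_true",
                        help="chop images into patches at inference")
    return parser
```

A different task is then selected by changing `--data_test`, `--scale` and `--pth_path` to match the desired checkpoint.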
## Evaluation

### Evaluation Process

> Inference example for SR x4:

```
python eval.py --dir_data ../../data/ --data_test Set14 --nochange --test_only --ext img --chop_new --scale 4 --pth_path ./model/IPT_sr4.ckpt
```

> Alternatively, run the following script for all tasks:

```
sh scripts/run_eval.sh
```
### Evaluation Result

The results are evaluated by PSNR (Peak Signal-to-Noise Ratio) and reported in the following format:

```
result: {"Mean psnr of Set5 x4 is 32.68"}
```
## Performance

### Inference Performance

The results on all tasks are listed below.

Super-resolution results (PSNR, dB):

| Scale | Set5 | Set14 | B100 | Urban100 |
| ----- | ----- | ----- | ----- | ----- |
| ×2 | 38.36 | 34.54 | 32.50 | 33.88 |
| ×3 | 34.83 | 30.96 | 29.39 | 29.59 |
| ×4 | 32.68 | 29.01 | 27.81 | 27.24 |
Denoising results (PSNR, dB):

| Noise level | CBSD68 | Urban100 |
| ----- | ----- | ----- |
| 30 | 32.37 | 33.82 |
| 50 | 29.94 | 31.56 |
Deraining results (PSNR, dB):

| Task | Rain100L |
| ----- | ----- |
| Derain | 41.98 |
## ModelZoo Homepage

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).