# Contents

- [Masked Face Recognition Description](#masked-face-recognition-description)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Script Description](#script-description)
- [Training](#training)
- [Evaluation](#evaluation)
- [ModelZoo Homepage](#modelzoo-homepage)
## [Masked Face Recognition Description](#contents)

<p align="center">
<img src="./img/overview.png">
</p>

This is a **MindSpore** implementation of [Masked Face Recognition with Latent Part Detection (ACM MM20)](https://dl.acm.org/doi/10.1145/3394171.3413731) by *Feifei Ding, Peixi Peng, Yangru Huang, Mengyue Geng and Yonghong Tian*.
*Masked Face Recognition* aims to match masked faces with common faces and is especially important during the global outbreak of COVID-19. Identifying masked faces is challenging because most facial cues are occluded by the mask.

*Latent Part Detection* (LPD) is a differentiable module that locates the latent facial part that is robust to mask wearing; this latent part is then used to extract discriminative features. The proposed LPD model is trained in an end-to-end manner and only uses the original and synthetic training data.
## [Dataset](#contents)

### Training Dataset

We use the [CASIA-WebFace Dataset](http://www.cbsr.ia.ac.cn/english/CASIA-WebFace/CASIA-WebFace_Agreements.pdf) as the training dataset. After downloading CASIA-WebFace, we first detect faces and facial landmarks using `MTCNN` and align the faces to a canonical pose using a similarity transformation (see [MTCNN - face detection & alignment](https://github.com/kpzhang93/MTCNN_face_detection_alignment)).

Collecting and labeling realistic masked facial data requires a great deal of human labor. To address this issue, we generate masked face images based on CASIA-WebFace. We generate 8 kinds of synthetic masked face images to augment the training data, based on 8 different styles of masks such as surgical masks, N95 respirators and activated carbon masks. We mix the original face images with the synthetic masked images as the training data.
<p align="center">
<img src="./img/generated_masked_faces.png" width="600px">
</p>
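The alignment step mentioned above can be reproduced in a few lines. The sketch below is only an illustration, not the repository's preprocessing script: it warps a face to a canonical pose with a similarity transform estimated from the five MTCNN landmarks. The reference landmark coordinates and the 112x112 output size are assumptions.

```python
# Minimal sketch: align a face to a canonical pose from 5 MTCNN landmarks.
# The reference coordinates and the 112x112 crop size are assumptions,
# not values taken from this repository.
import cv2
import numpy as np

# Canonical positions of (left eye, right eye, nose, left mouth, right mouth)
# in a 112x112 crop, commonly used in face recognition pipelines.
REFERENCE_LANDMARKS = np.array([
    [38.2946, 51.6963],
    [73.5318, 51.5014],
    [56.0252, 71.7366],
    [41.5493, 92.3655],
    [70.7299, 92.2041],
], dtype=np.float32)

def align_face(image, landmarks, size=(112, 112)):
    """Warp `image` so that the 5x2 `landmarks` from MTCNN map onto the
    canonical reference points using a similarity transform."""
    landmarks = np.asarray(landmarks, dtype=np.float32)
    # estimateAffinePartial2D restricts the fit to rotation + uniform scale +
    # translation, i.e. a similarity transform.
    matrix, _ = cv2.estimateAffinePartial2D(landmarks, REFERENCE_LANDMARKS)
    return cv2.warpAffine(image, matrix, size)
```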
### Evaluating Dataset

We use the [PKU-Masked-Face Dataset](https://pkuml.org/resources/pku-masked-face-dataset.html) as the evaluation dataset. The dataset contains 10,301 face images of 1,018 identities. Each identity has masked and common face images with various orientations, lighting conditions and mask types. Most identities have 5 holistic face images and 5 masked face images taken from 5 different views: front, left, right, up and down.

The directory structure is as follows:
```python
.
└─ dataset
   ├─ train dataset
   │  ├─ ID1
   │  │  ├─ ID1_0001.jpg
   │  │  ├─ ID1_0002.jpg
   │  │  └─ ...
   │  ├─ ID2
   │  │  └─ ...
   │  ├─ ID3
   │  │  └─ ...
   │  └─ ...
   └─ test dataset
      ├─ ID1
      │  ├─ ID1_0001.jpg
      │  ├─ ID1_0002.jpg
      │  └─ ...
      ├─ ID2
      │  └─ ...
      ├─ ID3
      │  └─ ...
      └─ ...
```
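For reference, here is a minimal sketch of walking this layout and pairing each image with an identity label. The repository's own loaders live in `src/dataset/Dataset.py` and `src/dataset/MGDataset.py`; the helper below is hypothetical.

```python
# Hypothetical helper: collect (image_path, identity_index) pairs from the
# directory layout shown above. Not the loader shipped in src/dataset/.
import os

def list_samples(split_dir):
    """Return (image_path, label) pairs for one split, e.g. the train dataset folder."""
    samples = []
    identities = sorted(d for d in os.listdir(split_dir)
                        if os.path.isdir(os.path.join(split_dir, d)))
    for label, identity in enumerate(identities):
        identity_dir = os.path.join(split_dir, identity)
        for name in sorted(os.listdir(identity_dir)):
            if name.lower().endswith('.jpg'):
                samples.append((os.path.join(identity_dir, name), label))
    return samples
```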
## [Environment Requirements](#contents)

- Hardware (Ascend)
    - Prepare a hardware environment with an Ascend processor.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
## [Script Description](#contents)

The entire code structure is as follows:
```python
└─ face_recognition
  ├── README.md                    // descriptions about face_recognition
  ├── scripts
  │   ├── run_train.sh             // shell script for training on Ascend
  │   └── run_eval.sh              // shell script for evaluation on Ascend
  ├── src
  │   ├── dataset
  │   │   ├── Dataset.py           // loading evaluating dataset
  │   │   └── MGDataset.py         // loading training dataset
  │   ├── model
  │   │   ├── model.py             // lpd model
  │   │   └── stn.py               // spatial transformer network module
  │   └── utils
  │       ├── distance.py          // calculate distance of two features
  │       └── metric.py            // calculate mAP and CMC scores
  ├── config.py                    // hyperparameter setting
  ├── train_dataset.py             // training data format setting
  ├── test_dataset.py              // evaluating data format setting
  ├── train.py                     // training scripts
  └── test.py                      // evaluation scripts
```
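As a rough guide to what `src/utils/distance.py` and `src/utils/metric.py` compute, the sketch below shows cosine distance between features and a rank-k (CMC) accuracy over a probe/gallery split. The function names and the exact evaluation protocol are assumptions; the repository's scripts may differ in detail.

```python
# Sketch of the kind of computation performed by the distance and metric
# utilities: cosine distance and rank-k (CMC) accuracy. Names are assumptions.
import numpy as np

def cosine_distance(probe_feats, gallery_feats):
    """Pairwise cosine distance between (P, D) probe and (G, D) gallery features."""
    p = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return 1.0 - p @ g.T            # shape (P, G); smaller means more similar

def rank_k_accuracy(dist, probe_ids, gallery_ids, k=1):
    """Fraction of probes whose true identity appears among the k nearest gallery entries."""
    order = np.argsort(dist, axis=1)                  # gallery indices, closest first
    topk_ids = np.asarray(gallery_ids)[order[:, :k]]  # (P, k) identity labels
    hits = (topk_ids == np.asarray(probe_ids)[:, None]).any(axis=1)
    return hits.mean()
```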
## [Training](#contents)

```bash
sh scripts/run_train.sh [USE_DEVICE_ID]
```
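For example, `sh scripts/run_train.sh 0` should start training on device 0 (the device ID is an integer index of the Ascend card to use).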
You will get the loss value of each epoch as follows in "./scripts/data_parallel_log_[DEVICE_ID]/outputs/logs/[TIME].log" or "./scripts/log_parallel_graph/face_recognition_[DEVICE_ID].log":
```python
epoch[0], iter[100], loss:(Tensor(shape=[], dtype=Float32, value= 50.2733), Tensor(shape=[], dtype=Bool, value= False), Tensor(shape=[], dtype=Float32, value= 32768)), cur_lr:0.000660, mean_fps:743.09 imgs/sec
epoch[0], iter[200], loss:(Tensor(shape=[], dtype=Float32, value= 49.3693), Tensor(shape=[], dtype=Bool, value= False), Tensor(shape=[], dtype=Float32, value= 32768)), cur_lr:0.001314, mean_fps:4426.42 imgs/sec
epoch[0], iter[300], loss:(Tensor(shape=[], dtype=Float32, value= 48.7081), Tensor(shape=[], dtype=Bool, value= False), Tensor(shape=[], dtype=Float32, value= 16384)), cur_lr:0.001968, mean_fps:4428.09 imgs/sec
epoch[0], iter[400], loss:(Tensor(shape=[], dtype=Float32, value= 45.7791), Tensor(shape=[], dtype=Bool, value= False), Tensor(shape=[], dtype=Float32, value= 16384)), cur_lr:0.002622, mean_fps:4428.17 imgs/sec
...
epoch[8], iter[27300], loss:(Tensor(shape=[], dtype=Float32, value= 2.13556), Tensor(shape=[], dtype=Bool, value= False), Tensor(shape=[], dtype=Float32, value= 65536)), cur_lr:0.004000, mean_fps:4429.38 imgs/sec
epoch[8], iter[27400], loss:(Tensor(shape=[], dtype=Float32, value= 2.36922), Tensor(shape=[], dtype=Bool, value= False), Tensor(shape=[], dtype=Float32, value= 65536)), cur_lr:0.004000, mean_fps:4429.88 imgs/sec
epoch[8], iter[27500], loss:(Tensor(shape=[], dtype=Float32, value= 2.08594), Tensor(shape=[], dtype=Bool, value= False), Tensor(shape=[], dtype=Float32, value= 65536)), cur_lr:0.004000, mean_fps:4430.59 imgs/sec
epoch[8], iter[27600], loss:(Tensor(shape=[], dtype=Float32, value= 2.38706), Tensor(shape=[], dtype=Bool, value= False), Tensor(shape=[], dtype=Float32, value= 65536)), cur_lr:0.004000, mean_fps:4430.37 imgs/sec
```
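The tuple printed after `loss:` appears to contain the loss value together with the overflow flag and loss-scale value produced by dynamic loss scaling, followed by the current learning rate and throughput. If you want to plot the loss curve, a small parser along these lines is enough (a hypothetical helper, not part of the repository):

```python
# Hypothetical helper for plotting training curves: extract (epoch, iter, loss)
# triples from the log format shown above. Not part of the repository.
import re

LINE_RE = re.compile(r"epoch\[(\d+)\], iter\[(\d+)\], "
                     r"loss:\(Tensor\(shape=\[\], dtype=Float32, value= ([0-9.]+)\)")

def parse_loss_curve(log_path):
    points = []
    with open(log_path) as f:
        for line in f:
            match = LINE_RE.search(line)
            if match:
                epoch, step, loss = match.groups()
                points.append((int(epoch), int(step), float(loss)))
    return points
```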
## [Evaluation](#contents)

```bash
sh scripts/run_eval.sh [USE_DEVICE_ID]
```
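As with training, `sh scripts/run_eval.sh 0` should run evaluation on device 0.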
You will get the result as follows in "./scripts/log_inference/outputs/models/logs/[TIME].log":

```python
[test_dataset]: zj2jk=0.9495, jk2zj=0.9480, avg=0.9487
```
| Model    | mAP (%) | Rank-1 (%) | Rank-5 (%) | Rank-10 (%) |
| -------- | ------- | ---------- | ---------- | ----------- |
| Baseline | 27.09   | 70.17      | 87.95      | 91.80       |
| MG       | 36.55   | 94.12      | 98.01      | 98.66       |
| LPD      | 42.14   | 96.22      | 98.11      | 98.75       |
## [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).