
# Release 1.0.0

## Major Features and Improvements

### Differential privacy model training

* Privacy leakage evaluation.
* Parameter verification enhancement.
* Support for parallel computing.

### Model robustness evaluation

* Fuzzing-based adversarial robustness testing.
* Parameter verification enhancement.

### Other

* API & directory structure
    * Adjusted the directory structure based on different features.
    * Optimized the structure of examples.

## Bugfixes

## Contributors

Thanks go to these wonderful people:

Liu Liu, Xiulang Jin, Zhidan Liu and Luobin Liu.

Contributions of any kind are welcome!
# Release 0.7.0-beta

## Major Features and Improvements

### Differential privacy model training

* Privacy leakage evaluation.
    * Using membership inference to evaluate the effectiveness of privacy-preserving techniques for AI.

### Model robustness evaluation

* Fuzzing-based adversarial robustness testing.
    * Coverage-guided test set generation.
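Coverage-guided test generation keeps a mutated input when it exercises previously unseen behavior in the network. As a rough illustration of the underlying idea (a generic sketch, not MindArmour's implementation; the array shape and threshold are assumptions for the example), basic neuron coverage can be computed like this:

```python
import numpy as np

def neuron_coverage(activations: np.ndarray, threshold: float = 0.0) -> float:
    """Fraction of neurons activated above `threshold` by at least one
    test input. `activations` has shape (num_inputs, num_neurons)."""
    covered = (activations > threshold).any(axis=0)  # per-neuron flag
    return float(covered.mean())

# Two inputs, three neurons: neurons 0 and 2 fire at least once, neuron 1 never does.
acts = np.array([[0.9, -0.5, 0.0],
                 [0.1, -0.2, 0.7]])
cov = neuron_coverage(acts, threshold=0.0)  # 2 of 3 neurons covered
```

A fuzzer would retain mutated inputs that raise such a metric, on the premise that new activation patterns are more likely to expose misbehavior.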
## Bugfixes

## Contributors

Thanks go to these wonderful people:

Liu Liu, Xiulang Jin, Zhidan Liu, Luobin Liu and Huanhuan Zheng.

Contributions of any kind are welcome!
# Release 0.6.0-beta

## Major Features and Improvements

### Differential privacy model training

* Optimizers with differential privacy
    * Differential privacy model training now supports some new policies:
        * Adaptive norm policy.
        * Adaptive noise policy with exponential decrease.
* Differential privacy training monitor
    * A new monitor is supported, using zCDP as its asymptotic budget estimator.
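For background on why zCDP is convenient as a running budget estimator: it composes additively across Gaussian-mechanism steps and converts to a standard (epsilon, delta) guarantee in closed form. A minimal sketch of that arithmetic (the standard zCDP bounds, not the monitor's actual code; the noise multiplier, step count, and delta below are made-up values, and privacy amplification by subsampling is ignored):

```python
import math

def zcdp_of_gaussian_steps(noise_multiplier: float, steps: int) -> float:
    """Each Gaussian-mechanism step with noise multiplier sigma satisfies
    rho = 1/(2*sigma^2)-zCDP; zCDP composes additively over steps."""
    return steps / (2.0 * noise_multiplier ** 2)

def zcdp_to_eps(rho: float, delta: float) -> float:
    """Standard conversion from rho-zCDP to (eps, delta)-DP:
    eps = rho + 2*sqrt(rho * ln(1/delta))."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

rho = zcdp_of_gaussian_steps(noise_multiplier=10.0, steps=100)  # rho = 0.5
eps = zcdp_to_eps(rho, delta=1e-5)                              # eps about 5.3
```

A monitor built on this kind of estimate can halt training once `eps` exceeds the target budget.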
## Bugfixes

## Contributors

Thanks go to these wonderful people:

Liu Liu, Huanhuan Zheng, Xiulang Jin, Zhidan Liu.

Contributions of any kind are welcome!
# Release 0.5.0-beta

## Major Features and Improvements

### Differential privacy model training

* Optimizers with differential privacy
    * Differential privacy model training now supports both PyNative mode and graph mode.
    * Graph mode is recommended for its performance.

## Bugfixes

## Contributors

Thanks go to these wonderful people:

Liu Liu, Huanhuan Zheng, Xiulang Jin, Zhidan Liu.

Contributions of any kind are welcome!
# Release 0.3.0-alpha

## Major Features and Improvements

### Differential Privacy Model Training

Differential privacy is coming! By using differential privacy optimizers, one can train a model as usual while the trained model preserves the privacy of the training dataset, satisfying the definition of differential privacy under a proper budget.

* Optimizers with Differential Privacy ([PR23](https://gitee.com/mindspore/mindarmour/pulls/23), [PR24](https://gitee.com/mindspore/mindarmour/pulls/24))
    * Some common optimizers now have a differential privacy version (SGD, Adam). We are adding more.
    * Automatically and adaptively add Gaussian noise during training to achieve differential privacy.
    * Automatically stop training when the differential privacy budget is exceeded.
* Differential Privacy Monitor ([PR22](https://gitee.com/mindspore/mindarmour/pulls/22))
    * Calculates the overall budget consumed during training, indicating the ultimate protection effect.
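The mechanism behind such optimizers can be sketched in a few lines: clip each per-example gradient to a fixed norm, average the clipped gradients, and add Gaussian noise calibrated to the clipping bound. The following is a generic NumPy illustration of a DP-SGD step, not MindArmour's optimizer API; every name and hyperparameter here is made up for the example:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD update (illustrative only, not the MindArmour API).

    1. Clip each example's gradient to L2 norm <= clip_norm.
    2. Average the clipped gradients.
    3. Add Gaussian noise with std = noise_multiplier * clip_norm / batch size.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    std = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, std, size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

Setting `noise_multiplier` to zero reduces the update to plain clipped SGD, which makes the clipping step easy to check in isolation.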
## Bug fixes

## Contributors

Thanks go to these wonderful people:

Liu Liu, Huanhuan Zheng, Zhidan Liu, Xiulang Jin.

Contributions of any kind are welcome!
# Release 0.2.0-alpha

## Major Features and Improvements

- Add a white-box attack method: M-DI2-FGSM ([PR14](https://gitee.com/mindspore/mindarmour/pulls/14)).
- Add three neuron coverage metrics: KMNCov, NBCov, SNACov ([PR12](https://gitee.com/mindspore/mindarmour/pulls/12)).
- Add a coverage-guided fuzzing test framework for deep neural networks ([PR13](https://gitee.com/mindspore/mindarmour/pulls/13)).
- Update the MNIST LeNet5 examples.
- Remove some duplicate code.

## Bug fixes

## Contributors

Thanks go to these wonderful people:

Liu Liu, Huanhuan Zheng, Zhidan Liu, Xiulang Jin.

Contributions of any kind are welcome!
# Release 0.1.0-alpha

Initial release of MindArmour.

## Major Features

- Support adversarial attack and defense on the MindSpore platform.
- Include 13 white-box and 7 black-box attack methods.
- Provide 5 detection algorithms to detect attacks in multiple ways.
- Provide adversarial training to enhance model security.
- Provide 6 evaluation metrics for attack methods and 9 evaluation metrics for defense methods.

MindArmour focuses on the security and privacy issues of AI. It is dedicated to enhancing the security and trustworthiness of models and protecting users' data privacy. It mainly consists of three modules: the adversarial example robustness module, the fuzz testing module, and the privacy protection and evaluation module. The adversarial example robustness module evaluates a model's robustness against adversarial examples and provides model-enhancement methods that strengthen the model's resistance to adversarial attacks and improve its robustness. It contains four submodules: adversarial example generation, adversarial example detection, model defense, and attack/defense evaluation.