

# Application demos of model fuzzing
## Introduction
Just as with traditional software, we can design fuzz tests for AI models. In place of the branch coverage or line coverage used for traditional software, researchers have proposed the concept of 'neuron coverage', based on the unique structure of deep neural networks. Neuron coverage can serve as a guide for searching for more metamorphic inputs with which to test our models.
## 1. Calculating neuron coverage
Five metrics have been proposed for evaluating the neuron coverage of a test: NC, Effective NC, KMNC, NBC and SNAC. Usually we first feed the entire training dataset into the model and record the output range of every neuron (for KMNC, NBC and SNAC, our method records only the neurons of the last layer). In the testing phase, we feed test samples into the model and compute the metrics above from the output distributions of those neurons.
```sh
cd examples/ai_fuzzer/
python lenet5_mnist_coverage.py
```
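To make the idea concrete, here is a minimal NumPy sketch of the basic NC metric, assuming the common thresholded-activation definition (a neuron counts as covered if its output exceeds a threshold for at least one test sample). The `neuron_coverage` function and the toy activation matrix are illustrative and are not part of MindArmour's API.

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons whose output exceeds `threshold` for at
    least one test sample. `activations` has shape
    (n_samples, n_neurons)."""
    activated = (activations > threshold).any(axis=0)
    return activated.sum() / activated.size

# Toy activations: 3 test samples over 4 neurons.
acts = np.array([[0.2, -0.1, 0.0, 0.5],
                 [0.1, -0.3, 0.0, 0.9],
                 [0.4, -0.2, 0.0, 0.1]])
print(neuron_coverage(acts))  # 0.5: only neurons 0 and 3 ever fire
```

Raising the threshold makes the metric stricter, which is the knob the other metrics generalize in different ways (e.g. KMNC buckets each neuron's recorded output range into k sections).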
## 2. Fuzz testing for AI models
We provide several kinds of methods for generating metamorphic inputs: affine transformations, pixel transformations and adversarial attacks. Usually the original samples are fed into the fuzz function as seeds, and metamorphic samples are then generated through iterative manipulation.
```sh
cd examples/ai_fuzzer/
python lenet5_mnist_fuzzing.py
```
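The seed-and-mutate loop can be sketched as below, assuming a simple coverage-guided acceptance rule: a mutated sample is kept only if it activates neurons no earlier sample has activated. `activations`, `mutate` and `fuzz` here are illustrative stand-ins (a fixed random linear layer and additive pixel noise), not MindArmour's fuzz function.

```python
import numpy as np

rng = np.random.default_rng(0)

def activations(x):
    # Stand-in for a real model: one fixed linear layer with ReLU.
    w = np.linspace(-1.0, 1.0, x.size * 8).reshape(x.size, 8)
    return np.maximum(x @ w, 0.0)

def mutate(x):
    # Pixel transformation: small additive noise, clipped to [0, 1].
    return np.clip(x + rng.normal(0.0, 0.1, x.shape), 0.0, 1.0)

def fuzz(seeds, n_iters=200, threshold=0.0):
    corpus = list(seeds)
    # Baseline coverage achieved by the seeds themselves.
    covered = np.zeros_like(activations(seeds[0]), dtype=bool)
    for s in seeds:
        covered |= activations(s) > threshold
    kept = []
    for _ in range(n_iters):
        seed = corpus[rng.integers(len(corpus))]
        cand = mutate(seed)
        fired = activations(cand) > threshold
        if (fired & ~covered).any():  # activates new neurons?
            covered |= fired
            corpus.append(cand)  # accepted samples become new seeds
            kept.append(cand)
    return kept, covered.mean()

seeds = [rng.random(4)]
kept, cov = fuzz(seeds)
print(f"retained {len(kept)} samples, final NC = {cov:.2f}")
```

Accepted samples are fed back into the corpus, which is what makes the manipulation iterative: later mutations compound on earlier ones.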

MindArmour focuses on the security and privacy of AI, aiming to enhance the security and trustworthiness of models and to protect users' data privacy. It contains three main modules: an adversarial robustness module, a fuzz testing module, and a privacy protection and evaluation module. The adversarial robustness module evaluates a model's robustness against adversarial examples and provides model-enhancement methods that strengthen its resistance to adversarial attacks. It consists of four submodules: adversarial example generation, adversarial example detection, model defense, and attack/defense evaluation.