# Dataset

Dataset used: [COCO2017](<https://cocodataset.org/>)

- Dataset size: 19G
- Train: 18G, 118,000 images
- Val: 1G, 5,000 images
- Annotations: 241M (instances, captions, person_keypoints, etc.)
- Data format: image and JSON files
- Note: data will be processed in `dataset.py`
# Environment Requirements

- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the COCO2017 dataset, which is the dataset used in this example.
- Install Cython and pycocotools; you can also install mmcv to process the data:
```
pip install Cython
pip install pycocotools
pip install mmcv==0.2.14
```
Then change COCO_ROOT and any other settings you need in `config.py`. The directory structure is as follows:
```
.
└─cocodataset
  ├─annotations
    ├─instances_train2017.json
    └─instances_val2017.json
  ├─val2017
  └─train2017
```
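For reference, here is a minimal sketch of the kind of edit this means, assuming `config.py` exposes plain module-level constants; COCO_ROOT comes from the instruction above, while the other name and the paths are illustrative, so check your copy of `config.py` for the real ones:
```
# config.py (hypothetical excerpt): point the paths at your local COCO2017 copy
COCO_ROOT = "/data/cocodataset"  # root of the directory tree shown above
ANNO_PATH = COCO_ROOT + "/annotations/instances_train2017.json"  # illustrative name
```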
# Quick start

You can download the pre-trained model checkpoint file [here](<https://www.mindspore.cn/resources/hub/details?2505/MindSpore/ascend/0.7/fasterrcnn_v1.0_coco2017>).
```
python coco_attack_pgd.py --pre_trained [PRETRAINED_CHECKPOINT_FILE]
```
> Adversarial samples will be generated and saved as a pickle file.
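
Once the script finishes, the samples can be inspected with the standard `pickle` module. A minimal sketch, assuming the output file is named `adv_samples.pkl` (the actual path is set inside `coco_attack_pgd.py`):
```
import pickle

# Hypothetical file name; check coco_attack_pgd.py for the real output path.
with open("adv_samples.pkl", "rb") as f:
    adv_samples = pickle.load(f)

print(type(adv_samples))  # e.g. a list or array of perturbed image batches
```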

MindArmour focuses on the security and privacy of AI. It is dedicated to enhancing the security and trustworthiness of models and protecting user data privacy. It consists of three main modules: the adversarial example robustness module, the fuzz testing module, and the privacy protection and evaluation module.

The adversarial example robustness module is used to evaluate a model's robustness against adversarial examples, and it provides model-hardening methods to strengthen a model's resistance to adversarial attacks and improve its robustness. It contains four submodules: adversarial example generation, adversarial example detection, model defense, and attack/defense evaluation.
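
As an illustration of the adversarial example generation submodule, here is a minimal sketch of crafting PGD adversarial examples with MindArmour. The toy network and random data are placeholders (not the Faster R-CNN model from this example), and the import path follows recent MindArmour releases; older versions exposed the attacks under `mindarmour.attacks`:
```
import numpy as np
import mindspore.nn as nn
from mindarmour.adv_robustness.attacks import ProjectedGradientDescent

# Toy classifier standing in for a real model; the attack only needs
# a network mapping inputs to logits and a differentiable loss.
class ToyNet(nn.Cell):
    def __init__(self):
        super(ToyNet, self).__init__()
        self.flatten = nn.Flatten()
        self.fc = nn.Dense(28 * 28, 10)

    def construct(self, x):
        return self.fc(self.flatten(x))

net = ToyNet()
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)

# eps bounds the total perturbation, eps_iter is the per-step size.
attack = ProjectedGradientDescent(net, eps=0.3, eps_iter=0.1,
                                  nb_iter=10, loss_fn=loss_fn)

inputs = np.random.rand(4, 1, 28, 28).astype(np.float32)  # placeholder batch
labels = np.eye(10)[np.random.randint(0, 10, size=4)].astype(np.float32)  # one-hot
adv = attack.generate(inputs, labels)  # numpy array of adversarial inputs
```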