Adversarial Pruning is a reliable neural network pruning algorithm built around a scientific control. To make the research design more rigorous, a scientific control group is included as an essential part, minimizing the effect of all factors other than the association between a filter and the expected network output. Acting as the control group, knockoff features are generated to mimic the feature maps produced by the network filters while being conditionally independent of the example label given the real feature maps. Besides the real feature map of an intermediate layer, the corresponding knockoff feature is fed in as an auxiliary input signal to the subsequent layers.
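To make the idea concrete, here is a minimal, purely illustrative sketch (not the repository's actual implementation) of how the scientific-control idea can be turned into a filter score: filters whose real feature maps are not relied on noticeably more than their knockoff counterparts carry little label-relevant information and become pruning candidates.

```python
import numpy as np

def knockoff_filter_scores(real_weights, knockoff_weights):
    """Score filters by how much the subsequent layer relies on the real
    feature maps compared with their knockoff (control) counterparts.

    real_weights / knockoff_weights: arrays of shape (num_filters, ...) holding
    the weights applied to the real and knockoff feature maps, respectively.
    """
    real_strength = np.abs(real_weights).reshape(real_weights.shape[0], -1).sum(axis=1)
    knockoff_strength = np.abs(knockoff_weights).reshape(knockoff_weights.shape[0], -1).sum(axis=1)
    # A small (or negative) gap means the filter's output is no more informative
    # than its control, so the filter is a pruning candidate.
    return real_strength - knockoff_strength

# Toy usage with random weights for a layer of 64 filters.
scores = knockoff_filter_scores(np.random.randn(64, 3, 3), np.random.randn(64, 3, 3))
kept_channels = np.argsort(scores)[-42:]   # keep the highest-scoring filters (illustrative count)
```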
Paper: Yehui Tang, Yunhe Wang, Yixing Xu, Dacheng Tao, Chunjing Xu, Chao Xu, Chang Xu. Scientific Control for Reliable Neural Network Pruning. Submitted to NeurIPS 2020.
Dataset used: Oxford-IIIT Pet
Step 1: Download the dataset.
Step 2: Convert the dataset to MindRecord (a sketch of this conversion is shown after the dataset description below):
cd ./src
python data_to_mindrecord_test.py
Dataset size: 7,049 color images in 37 classes
Data format: RGB images.
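For orientation, the sketch below shows how such a conversion is typically done with MindSpore's MindRecord FileWriter. The helper `load_pet_samples` and the schema fields are assumptions for illustration, not the actual contents of data_to_mindrecord_test.py.

```python
from mindspore.mindrecord import FileWriter

def convert_to_mindrecord(samples, output_file="test.mindrecord"):
    """Write an iterable of {"image": bytes, "label": int} dicts to a MindRecord file."""
    writer = FileWriter(file_name=output_file, shard_num=1)
    schema = {"image": {"type": "bytes"}, "label": {"type": "int32"}}
    writer.add_schema(schema, "pets_schema")
    writer.write_raw_data(list(samples))
    writer.commit()

# convert_to_mindrecord(load_pet_samples("~/Pets"))  # load_pet_samples is hypothetical
```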
├── Adversarial Pruning
  ├── Readme.md                # descriptions about adversarial pruning
  ├── src
  │   ├── config.py            # parameter configuration
  │   ├── dataset.py           # dataset creation
  │   ├── resnet_imgnet.py     # pruned ResNet architecture
  ├── eval.py                  # evaluation script
  ├── index.txt                # channel index of each layer after pruning
  ├── mindspore_hub_conf.py    # export model for hub
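As an aside, mindspore_hub_conf.py files conventionally expose a create_network entry point. The sketch below assumes the pruned network builder in src/resnet_imgnet.py is named resnet50, which may not match the repository's actual function name.

```python
# mindspore_hub_conf.py (illustrative sketch)
from src.resnet_imgnet import resnet50   # assumed builder name

def create_network(name, *args, **kwargs):
    """Entry point used by MindSpore Hub to instantiate the model."""
    if name == "resnet50-0.65x":
        return resnet50(*args, **kwargs)
    raise NotImplementedError(f"{name} is not implemented in this repository")
```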
To Be Done
After installing MindSpore via the official website, you can start evaluation as follows:
# infer example
Ascend: python eval.py --dataset_path ~/Pets/test.mindrecord --platform Ascend --checkpoint_path [CHECKPOINT_PATH]
GPU: python eval.py --dataset_path ~/Pets/test.mindrecord --platform GPU --checkpoint_path [CHECKPOINT_PATH]
The checkpoint can be produced during the training process.
result: {'acc': 0.8023984736985554} ckpt= ./resnet50-imgnet-0.65x-80.24.ckpt
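For readers who want to see how the evaluation fits together, here is a rough sketch of the flow inside a script like eval.py. The builder name resnet50, the class count, and the preprocessing are assumptions, not the repository's exact code.

```python
import mindspore.dataset as ds
from mindspore import context, nn, Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from src.resnet_imgnet import resnet50   # assumed builder name

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")  # or "GPU"

net = resnet50(class_num=37)             # pruned ResNet50-0.65x; 37 Pets classes (assumed)
load_param_into_net(net, load_checkpoint("./resnet50-imgnet-0.65x-80.24.ckpt"))

dataset = ds.MindDataset("~/Pets/test.mindrecord", columns_list=["image", "label"])
# ... the real script also applies decode/resize/normalize ops and batching here ...

model = Model(net, loss_fn=nn.SoftmaxCrossEntropyWithLogits(sparse=True), metrics={"acc"})
print(model.eval(dataset))               # e.g. {'acc': 0.80...}
```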
| Parameters | Value |
|---|---|
| Model Version | ResNet50-0.65x |
| Uploaded Date | 09/10/2020 (month/day/year) |
| MindSpore Version | 0.6.0-alpha |
| Dataset | ImageNet2012 |
| Parameters (M) | 14.6 |
| FLOPs (G) | 2.1 |
| Accuracy (Top1) | 75.80 |
| Parameters | Value |
|---|---|
| Model Version | ResNet50-0.65x |
| Uploaded Date | 09/10/2020 (month/day/year) |
| MindSpore Version | 0.6.0-alpha |
| Dataset | Oxford-IIIT Pet |
| Parameters (M) | 14.6 |
| FLOPs (G) | 2.1 |
| Accuracy (Top1) | 80.24 |
In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
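For reference, a typical MindSpore seeding pattern looks like the following (illustrative; the exact calls in this repository may differ):

```python
import mindspore.dataset as ds
from mindspore.common import set_seed

set_seed(1)            # fix operator-level / weight-initialization randomness
ds.config.set_seed(1)  # fix dataset shuffling and augmentation randomness
```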
Please check the official homepage.