# Contents

- [GCN Description](#gcn-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training, Evaluation, Test Process](#training-evaluation-test-process)
- [Model Description](#model-description)
    - [Performance](#performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [GCN Description](#contents)

GCN (Graph Convolutional Networks), proposed in 2016, performs semi-supervised learning on graph-structured data. It is a scalable approach based on an efficient variant of convolutional neural networks that operate directly on graphs. The model scales linearly in the number of graph edges and learns hidden-layer representations that encode both local graph structure and node features.

[Paper](https://arxiv.org/abs/1609.02907): Thomas N. Kipf, Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR 2017.
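For reference, the layer-wise propagation rule from the paper is

$$H^{(l+1)} = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right), \qquad \tilde{A} = A + I_N,$$

where $A$ is the adjacency matrix, $\tilde{D}$ is the degree matrix of $\tilde{A}$, $H^{(l)}$ are the node representations at layer $l$ (with $H^{(0)} = X$, the input features), $W^{(l)}$ is a trainable weight matrix, and $\sigma$ is a nonlinearity such as ReLU.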
# [Model Architecture](#contents)

GCN contains two graph convolution layers. Each layer takes the node features and the adjacency matrix as input and updates each node's features by aggregating the features of its neighbours.
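For illustration, here is a minimal dense NumPy sketch of this aggregation. The actual MindSpore implementation in src/gcn.py differs in details (sparse operations, dropout, weight initialization):

```python
import numpy as np

def normalize_adjacency(adj: np.ndarray) -> np.ndarray:
    """Symmetrically normalize the adjacency matrix with self-loops:
    A_hat = D~^{-1/2} (A + I) D~^{-1/2}."""
    adj_tilde = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(adj_tilde.sum(axis=1))
    return adj_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def graph_convolution(features, adj_hat, weight, activation=None):
    """One GCN layer: aggregate neighbour features, then apply a linear map."""
    out = adj_hat @ features @ weight
    return activation(out) if activation else out

# Two-layer forward pass, as described above (w0 and w1 would be learned):
# a_hat = normalize_adjacency(adj)
# hidden = graph_convolution(x, a_hat, w0, activation=lambda z: np.maximum(z, 0))
# logits = graph_convolution(hidden, a_hat, w1)
```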
# [Dataset](#contents)

You can run the scripts on the datasets used in the original paper or on other datasets widely used in this domain. The following sections describe how to run the scripts on the datasets below. The label rate is the fraction of nodes that are labeled for training (for Cora, 140/2708 ≈ 0.052).

| Dataset  | Type             | Nodes | Edges | Classes | Features | Label rate |
| -------- | ---------------- | ----: | ----: | ------: | -------: | ---------: |
| Cora     | Citation network |  2708 |  5429 |       7 |     1433 |      0.052 |
| Citeseer | Citation network |  3327 |  4732 |       6 |     3703 |      0.036 |
# [Environment Requirements](#contents)

- Hardware (Ascend)
    - Prepare a hardware environment with an Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources.
- Framework
    - [MindSpore](https://gitee.com/mindspore/mindspore)
- For more information, please check the resources below:
    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)

- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the Cora or Citeseer dataset provided by /kimiyoung/planetoid on GitHub.
- Place the dataset in any path you like; the folder should contain the following files (using the Cora dataset as an example; a sketch for loading these files follows the tree below):

```
.
└─data
    ├─ind.cora.allx
    ├─ind.cora.ally
    ├─ind.cora.graph
    ├─ind.cora.test.index
    ├─ind.cora.tx
    ├─ind.cora.ty
    ├─ind.cora.x
    └─ind.cora.y
```
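These are the standard pickled Planetoid files. Below is a minimal Python sketch for inspecting them, assuming the usual layout from the /kimiyoung/planetoid repository (unpickling the feature matrices requires scipy to be installed):

```python
import pickle
from pathlib import Path

data_dir = Path("./data")  # assumed location of the files listed above

def load_planetoid(name="cora"):
    """Load the pickled Planetoid objects for one dataset."""
    objects = {}
    for suffix in ("x", "y", "tx", "ty", "allx", "ally", "graph"):
        with open(data_dir / f"ind.{name}.{suffix}", "rb") as f:
            # The files were pickled under Python 2, hence the latin1 encoding.
            objects[suffix] = pickle.load(f, encoding="latin1")
    with open(data_dir / f"ind.{name}.test.index") as f:
        objects["test_index"] = [int(line) for line in f]
    return objects

cora = load_planetoid("cora")
print(type(cora["x"]), len(cora["graph"]))  # sparse feature matrix, adjacency dict
```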
- Generate the dataset in MindRecord format for Cora or Citeseer.

#### Usage

```bash
cd ./scripts
# SRC_PATH is the path of the dataset files you downloaded, DATASET_NAME is cora or citeseer
sh run_process_data.sh [SRC_PATH] [DATASET_NAME]
```

#### Launch

```bash
# Generate dataset in MindRecord format for cora
sh run_process_data.sh ./data cora
# Generate dataset in MindRecord format for citeseer
sh run_process_data.sh ./data citeseer
```
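The conversion produces MindRecord files that train.py consumes. For reference, here is a minimal sketch of writing a MindRecord file with MindSpore's `FileWriter` API; the file name and schema below are hypothetical, and the real schema produced by src/dataset.py may differ:

```python
import numpy as np
from mindspore.mindrecord import FileWriter

# Hypothetical schema for illustration only.
schema = {
    "feature": {"type": "float32", "shape": [-1]},
    "label": {"type": "int32", "shape": [-1]},
}

writer = FileWriter(file_name="toy.mindrecord", shard_num=1)
writer.add_schema(schema, "toy gcn schema")
writer.write_raw_data([{
    "feature": np.random.rand(8).astype(np.float32),
    "label": np.array([1, 0], dtype=np.int32),
}])
writer.commit()
```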
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```shell
.
└─gcn
  ├─README.md
  ├─scripts
  | ├─run_process_data.sh  # Generate dataset in MindRecord format
  | └─run_train.sh         # Launch training; currently only the Ascend backend is supported
  |
  ├─src
  | ├─config.py            # Parameter configuration
  | ├─dataset.py           # Data preprocessing
  | ├─gcn.py               # GCN backbone
  | └─metrics.py           # Loss and accuracy
  |
  └─train.py               # Train the net; evaluation runs after every epoch, training stops once the validation result converges, and testing is then performed
```
## [Script Parameters](#contents)

Parameters for training can be set in config.py.

```
"learning_rate": 0.01,   # Learning rate
"epochs": 200,           # Number of training epochs
"hidden1": 16,           # Hidden size of the first graph convolution layer
"dropout": 0.5,          # Dropout ratio of the first graph convolution layer
"weight_decay": 5e-4,    # Weight decay applied to the parameters of the first graph convolution layer
"early_stopping": 10,    # Tolerance (in epochs) for early stopping
```
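These keys typically correspond to attributes on a small configuration class; a hedged sketch of what src/config.py may contain (the class and attribute names are assumed, so check the file for the authoritative definition):

```python
class ConfigGCN:
    """Assumed training configuration for GCN (see src/config.py)."""
    learning_rate = 0.01
    epochs = 200
    hidden1 = 16
    dropout = 0.5
    weight_decay = 5e-4
    early_stopping = 10
```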
## [Training, Evaluation, Test Process](#contents)

#### Usage

```bash
# Run training with the cora or citeseer dataset; DATASET_NAME is cora or citeseer
sh run_train.sh [DATASET_NAME]
```

#### Launch

```bash
sh run_train.sh cora
```
#### Result

Training results are stored in the scripts path, in a folder whose name begins with "train". You can find results like the following in the log.

```
Epoch: 0001 train_loss= 1.95373 train_acc= 0.09286 val_loss= 1.95075 val_acc= 0.20200 time= 7.25737
Epoch: 0002 train_loss= 1.94812 train_acc= 0.32857 val_loss= 1.94717 val_acc= 0.34000 time= 0.00438
Epoch: 0003 train_loss= 1.94249 train_acc= 0.47857 val_loss= 1.94337 val_acc= 0.43000 time= 0.00428
Epoch: 0004 train_loss= 1.93550 train_acc= 0.55000 val_loss= 1.93957 val_acc= 0.46400 time= 0.00421
Epoch: 0005 train_loss= 1.92617 train_acc= 0.67143 val_loss= 1.93558 val_acc= 0.45400 time= 0.00430
...
Epoch: 0196 train_loss= 0.60326 train_acc= 0.97857 val_loss= 1.05155 val_acc= 0.78200 time= 0.00418
Epoch: 0197 train_loss= 0.60377 train_acc= 0.97143 val_loss= 1.04940 val_acc= 0.78000 time= 0.00418
Epoch: 0198 train_loss= 0.60680 train_acc= 0.95000 val_loss= 1.04847 val_acc= 0.78000 time= 0.00414
Epoch: 0199 train_loss= 0.61920 train_acc= 0.96429 val_loss= 1.04797 val_acc= 0.78400 time= 0.00413
Epoch: 0200 train_loss= 0.57948 train_acc= 0.96429 val_loss= 1.04753 val_acc= 0.78600 time= 0.00415
Optimization Finished!
Test set results: cost= 1.00983 accuracy= 0.81300 time= 0.39083
...
```
# [Model Description](#contents)

## [Performance](#contents)

| Parameters          | GCN                                                                           |
| ------------------- | ----------------------------------------------------------------------------- |
| Resource            | Ascend 910                                                                    |
| Uploaded Date       | 06/09/2020 (month/day/year)                                                   |
| MindSpore Version   | 1.0.0                                                                         |
| Dataset             | Cora / Citeseer                                                               |
| Training Parameters | epoch=200                                                                     |
| Optimizer           | Adam                                                                          |
| Loss Function       | Softmax Cross-Entropy                                                         |
| Accuracy            | 81.5% / 70.3%                                                                 |
| Parameters (B)      | 92160 / 59344                                                                 |
| Scripts             | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/gnn/gcn |
# [Description of Random Situation](#contents)

There are two sources of randomness:

- The seed is set in train.py according to the input argument --seed.
- Dropout operations.

Seeds are set in train.py to avoid randomness in weight initialization, as sketched below. If you want to disable dropout, set the corresponding dropout parameter to 0 in src/config.py.
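A minimal sketch of this seeding logic, assuming MindSpore's `set_seed` API (the default value and any other details of the real --seed argument in train.py are assumptions):

```python
import argparse
import numpy as np
from mindspore import set_seed

parser = argparse.ArgumentParser()
parser.add_argument("--seed", type=int, default=123,
                    help="random seed (default value here is assumed)")
args = parser.parse_args()

# Fix the global seeds so that weight initialization is reproducible.
# Dropout stays stochastic unless "dropout" is set to 0 in src/config.py.
set_seed(args.seed)
np.random.seed(args.seed)
```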
# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).