
readme modify

tags/v1.1.0
zhanke, 5 years ago
parent commit 1ebab144cc

3 changed files with 11 additions and 11 deletions:

1. model_zoo/official/gnn/bgcf/README.md (+6, -9)
2. model_zoo/official/gnn/gat/README.md (+2, -1)
3. model_zoo/official/gnn/gcn/README.md (+3, -1)

model_zoo/official/gnn/bgcf/README.md (+6, -9)

````diff
@@ -33,6 +33,7 @@ Specially, BGCF contains two main modules. The first is sampling, which produce
 aggregate the neighbors sampling from nodes consisting of mean aggregator and attention aggregator.
 # [Dataset](#contents)
+Note that you can run the scripts based on the dataset mentioned in original paper or widely used in relevant domain/network architecture. In the following sections, we will introduce how to run the scripts using the related dataset below.
 - Dataset size:
 Statistics of dataset used are summarized as below:
@@ -61,10 +62,6 @@ aggregate the neighbors sampling from nodes consisting of mean aggregator and at
 sh run_process_data_ascend.sh [SRC_PATH]
 ```
-- Launch
-```
-# Generate dataset in mindrecord format for Amazon-Beauty.
-sh ./run_process_data_ascend.sh ./data
 # [Features](#contents)
````
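The aggregation step described above combines sampled neighbor features with either a mean aggregator or an attention aggregator. A minimal NumPy sketch of both, assuming dot-product attention scores and plain averaging (the actual BGCF layers are MindSpore cells with learned parameters):

```python
import numpy as np

def mean_aggregator(neighbor_feats):
    """Average the sampled neighbors' feature vectors.

    neighbor_feats: (num_neighbors, feat_dim) array of sampled neighbor embeddings.
    """
    return neighbor_feats.mean(axis=0)

def attention_aggregator(self_feat, neighbor_feats):
    """Weight neighbors by a softmax over dot-product scores against the node itself.

    The scoring function here is an illustrative assumption; the real model
    learns its attention parameters.
    """
    scores = neighbor_feats @ self_feat      # (num_neighbors,) similarity scores
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights = weights / weights.sum()
    return weights @ neighbor_feats          # convex combination of neighbors
```

With two neighbors `[1, 2]` and `[3, 4]`, the mean aggregator returns `[2, 3]`, while the attention aggregator returns a convex combination weighted toward whichever neighbor scores higher against the node's own embedding.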
````diff
@@ -128,12 +125,12 @@ Parameters for both training and evaluation can be set in config.py.
 ```python
 "learning_rate": 0.001,              # Learning rate
-"num_epochs": 600,                   # Epoch sizes for training
+"num_epoch": 600,                    # Epoch sizes for training
 "num_neg": 10,                       # Negative sampling rate
 "raw_neighs": 40,                    # Num of sampling neighbors in raw graph
 "gnew_neighs": 20,                   # Num of sampling neighbors in sample graph
 "input_dim": 64,                     # User and item embedding dimension
-"l2_coeff": 0.03                     # l2 coefficient
+"l2": 0.03                           # l2 coefficient
 "neighbor_dropout": [0.0, 0.2, 0.3]  # Dropout ratio for different aggregation layer
 "num_graphs": 5                      # Num of sample graph
 ```
````
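The renamed keys must match what `config.py` actually defines. As a hedged sketch, the listed hyperparameters mirrored as a plain dictionary (key names follow the corrected README; the real `config.py` may expose them through argparse or a different structure):

```python
# Hypothetical mirror of the BGCF hyperparameters listed in the README.
bgcf_config = {
    "learning_rate": 0.001,               # Learning rate
    "num_epoch": 600,                     # Epoch sizes for training
    "num_neg": 10,                        # Negative sampling rate
    "raw_neighs": 40,                     # Num of sampling neighbors in raw graph
    "gnew_neighs": 20,                    # Num of sampling neighbors in sample graph
    "input_dim": 64,                      # User and item embedding dimension
    "l2": 0.03,                           # l2 coefficient
    "neighbor_dropout": [0.0, 0.2, 0.3],  # Dropout ratio per aggregation layer
    "num_graphs": 5,                      # Num of sample graphs
}

# One dropout ratio is listed per aggregation layer.
assert len(bgcf_config["neighbor_dropout"]) == 3
```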
````diff
@@ -200,8 +197,8 @@ Parameters for both training and evaluation can be set in config.py.
 | Parameter                            | BGCF                                      |
 | ------------------------------------ | ----------------------------------------- |
 | Resource                             | Ascend 910                                |
-| uploaded Date                        |                                           |
-| MindSpore Version                    |                                           |
+| uploaded Date                        | 09/23/2020(month/day/year)                |
+| MindSpore Version                    | 1.0.0                                     |
 | Dataset                              | Amazon-Beauty                             |
 | Training Parameter                   | epoch=600                                 |
 | Optimizer                            | Adam                                      |
@@ -209,7 +206,7 @@ Parameters for both training and evaluation can be set in config.py.
 | Recall@20                            | 0.1534                                    |
 | NDCG@20                              | 0.0912                                    |
 | Training Cost                        | 25min                                     |
-| Scripts                              |                                           |
+| Scripts                              | [bgcf script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/gnn/bgcf) |
 # [Description of random situation](#contents)
````
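The Recall@20 and NDCG@20 figures reported for BGCF follow the standard top-k ranking definitions. A self-contained sketch with binary relevance (not the repository's actual evaluation code, which operates on per-user recommendation lists):

```python
import math

def recall_at_k(ranked_items, relevant_items, k=20):
    """Fraction of the relevant items that appear in the top-k ranking."""
    hits = sum(1 for item in ranked_items[:k] if item in relevant_items)
    return hits / len(relevant_items)

def ndcg_at_k(ranked_items, relevant_items, k=20):
    """Binary-relevance NDCG: DCG of the ranking divided by the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant_items)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant_items), k)))
    return dcg / ideal

# Toy example: 2 of 3 relevant items retrieved, at ranks 2 and 4.
ranked = [3, 7, 1, 9, 4]
relevant = {7, 9, 5}
print(round(recall_at_k(ranked, relevant, k=5), 4))  # → 0.6667
print(round(ndcg_at_k(ranked, relevant, k=5), 4))    # → 0.4982
```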


model_zoo/official/gnn/gat/README.md (+2, -1)

````diff
@@ -30,6 +30,7 @@ Graph Attention Networks(GAT) was proposed in 2017 by Petar Veličković et al.
 Note that according to whether this attention layer is the output layer of the network or not, the node update function can be concatenate or average.
 # [Dataset](#contents)
+Note that you can run the scripts based on the dataset mentioned in original paper or widely used in relevant domain/network architecture. In the following sections, we will introduce how to run the scripts using the related dataset below.
 - Dataset size:
 Statistics of dataset used are summerized as below:
````


````diff
@@ -175,7 +176,7 @@ Parameters for both training and evaluation can be set in config.py.
 | ------------------------------------ | ----------------------------------------- |
 | Resource                             | Ascend 910                                |
 | uploaded Date                        | 06/16/2020(month/day/year)                |
-| MindSpore Version                    | 0.5.0-beta                                |
+| MindSpore Version                    | 1.0.0                                     |
 | Dataset                              | Cora/Citeseer                             |
 | Training Parameter                   | epoch=200                                 |
 | Optimizer                            | Adam                                      |
````


model_zoo/official/gnn/gcn/README.md (+3, -1)

````diff
@@ -28,6 +28,8 @@ GCN contains two graph convolution layers. Each layer takes nodes features and a
 # [Dataset](#contents)
+Note that you can run the scripts based on the dataset mentioned in original paper or widely used in relevant domain/network architecture. In the following sections, we will introduce how to run the scripts using the related dataset below.
+
 | Dataset | Type             | Nodes | Edges | Classes | Features | Label rate |
 | ------- | ---------------: |-----: | ----: | ------: |--------: | ---------: |
 | Cora    | Citation network | 2708  | 5429  | 7       | 1433     | 0.052      |
````
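The "Label rate" column in the GCN dataset table is the fraction of nodes whose labels are visible during training. Assuming Cora's common Planetoid split of 20 labeled nodes per class (an assumption; the README itself does not state the split), the 0.052 figure can be reproduced:

```python
def label_rate(labels_per_class, num_classes, num_nodes):
    """Fraction of nodes whose labels are used for training."""
    return labels_per_class * num_classes / num_nodes

# Cora: 20 labels per class x 7 classes = 140 labeled nodes out of 2708.
print(round(label_rate(20, 7, 2708), 3))  # → 0.052
```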
````diff
@@ -162,7 +164,7 @@ Test set results: cost= 1.00983 accuracy= 0.81300 time= 0.39083
 | -------------------------- | -------------------------------------------------------------- |
 | Resource                   | Ascend 910                                                      |
 | uploaded Date              | 06/09/2020 (month/day/year)                                     |
-| MindSpore Version          | 0.5.0-beta                                                      |
+| MindSpore Version          | 1.0.0                                                           |
 | Dataset                    | Cora/Citeseer                                                   |
 | Training Parameters        | epoch=200                                                       |
 | Optimizer                  | Adam                                                            |
````

