Bayesian Graph Collaborative Filtering (BGCF) was proposed in 2020 by Sun J, Guo W, Zhang D, et al. By naturally incorporating the
uncertainty in the user-item interaction graph, it shows excellent performance on the Amazon recommendation dataset. This is an example of
training BGCF with the Amazon-Beauty dataset in MindSpore. More importantly, this is the first open-source version of BGCF.
Paper: Sun J, Guo W, Zhang D, et al. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks[C]//Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020: 2030-2039.
Specifically, BGCF contains two main modules. The first is sampling, which produces sample graphs based on node copying. The other module
aggregates the neighbors sampled from nodes, and consists of a mean aggregator and an attention aggregator.
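As a rough illustration of the node-copying idea, here is a minimal sketch with hypothetical helper names (the repo's actual sampling lives in src/dataset.py and src/bgcf.py): each node either keeps its own neighbor list or copies the neighbor list of a similar node, yielding one sampled graph per draw.

```python
import numpy as np

def node_copying_sample(adj_lists, similar_nodes, epsilon=0.5, rng=None):
    """Draw one sample graph by node copying (hypothetical helper, not repo code).

    adj_lists:     dict mapping node id -> list of neighbor ids (observed graph)
    similar_nodes: dict mapping node id -> list of candidate nodes to copy from
    epsilon:       probability that a node keeps its own neighbor list
    """
    rng = rng or np.random.default_rng()
    sampled = {}
    for node, neighbors in adj_lists.items():
        candidates = similar_nodes.get(node)
        if candidates and rng.random() >= epsilon:
            donor = candidates[rng.integers(len(candidates))]  # copy a similar node's links
            sampled[node] = list(adj_lists[donor])
        else:
            sampled[node] = list(neighbors)                    # keep own neighbors
    return sampled
```

With "num_graphs": 5 in src/config.py, such a draw would be repeated five times to obtain the ensemble of sample graphs used for aggregation.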
Dataset size:
Statistics of the dataset used are summarized below:
| Amazon-Beauty | |
|---|---|
| Task | Recommendation |
| # User | 7068 (1 graph) |
| # Item | 3570 |
| # Interaction | 79506 |
| # Training Data | 60818 |
| # Test Data | 18688 |
| # Density | 0.315% |
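The density row can be checked from the others: 79506 interactions / (7068 users × 3570 items) ≈ 0.00315 ≈ 0.315%.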
Data Preparation
```text
.
└─data
  ├─ratings_Beauty.csv
```
```bash
cd ./scripts
# SRC_PATH is the path of the dataset file you downloaded.
sh run_process_data_ascend.sh [SRC_PATH]
# Generate the dataset in MindRecord format for Amazon-Beauty.
sh ./run_process_data_ascend.sh ./data
```
To utilize the strong computation power of the Ascend chip and accelerate the training process, mixed-precision training is used. MindSpore can cope with FP32 inputs and FP16 operators. In the BGCF example, the model is set to FP16 mode except for the loss calculation part.
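A minimal sketch of this FP16/FP32 split, using MindSpore's Cell.to_float (illustrative stand-ins only; the real network, loss, and wiring live in src/bgcf.py and train.py):

```python
from mindspore import nn
from mindspore import dtype as mstype

# Illustrative stand-ins: only the casting pattern is shown here.
backbone = nn.Dense(64, 64)        # stand-in for the BGCF network
backbone.to_float(mstype.float16)  # run the network's operators in FP16

loss_fn = nn.MSELoss()             # stand-in for the loss cell
loss_fn.to_float(mstype.float32)   # keep the loss calculation in FP32
```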
After installing MindSpore via the official website and generating the dataset correctly, you can start training and evaluation as follows.
Running on Ascend:

```bash
# run the training example with the Amazon-Beauty dataset
sh run_train_ascend.sh
```
```text
.
└─bgcf
  ├─README.md
  ├─scripts
  | ├─run_process_data_ascend.sh  # Generate dataset in MindRecord format
  | └─run_train_ascend.sh         # Launch training
  |
  ├─src
  | ├─bgcf.py      # BGCF model
  | ├─callback.py  # Callback functions
  | ├─config.py    # Training configurations
  | ├─dataset.py   # Data preprocessing
  | ├─metrics.py   # Recommendation metrics
  | └─utils.py     # Utilities for training BGCF
  |
  └─train.py       # Training entry point
```
Parameters for both training and evaluation can be set in config.py.
Config for the BGCF dataset:

```python
"learning_rate": 0.001,               # Learning rate
"num_epochs": 600,                    # Number of training epochs
"num_neg": 10,                        # Negative sampling rate
"raw_neighs": 40,                     # Number of neighbors sampled in the raw graph
"gnew_neighs": 20,                    # Number of neighbors sampled in each sample graph
"input_dim": 64,                      # User and item embedding dimension
"l2_coeff": 0.03,                     # L2 coefficient
"neighbor_dropout": [0.0, 0.2, 0.3],  # Dropout ratio for each aggregation layer
"num_graphs": 5,                      # Number of sample graphs
```
```bash
sh run_train_ascend.sh
```
Training results will be stored in the scripts path, in a folder whose name begins with "train". You can find results like the following in the log:
```text
Epoch 001 iter 12 loss 34696.242
Epoch 002 iter 12 loss 34275.508
Epoch 003 iter 12 loss 30620.635
Epoch 004 iter 12 loss 21628.908
...
Epoch 597 iter 12 loss 3662.3152
Epoch 598 iter 12 loss 3640.7612
Epoch 599 iter 12 loss 3654.9087
Epoch 600 iter 12 loss 3632.4585
epoch:600, recall_@10:0.10393, recall_@20:0.15669, ndcg_@10:0.07564, ndcg_@20:0.09343,
sedp_@10:0.01936, sedp_@20:0.01544, nov_@10:7.58599, nov_@20:7.79782
...
```
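The recall_@K and ndcg_@K values in the log are standard top-K recommendation metrics. A minimal sketch of how they are typically computed (illustrative only, not the code in src/metrics.py):

```python
import numpy as np

def recall_at_k(ranked_items, relevant_items, k):
    """Fraction of a user's relevant items that appear in the top-k ranking."""
    hits = len(set(ranked_items[:k]) & set(relevant_items))
    return hits / len(relevant_items) if relevant_items else 0.0

def ndcg_at_k(ranked_items, relevant_items, k):
    """Normalized discounted cumulative gain over the top-k ranking."""
    relevant = set(relevant_items)
    dcg = sum(1.0 / np.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0
```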
| Parameter | BGCF |
|---|---|
| Resource | Ascend 910 |
| Uploaded Date | 09/04/2020 (month/day/year) |
| MindSpore Version | 1.0 |
| Dataset | Amazon-Beauty |
| Training Parameters | epoch=600 |
| Optimizer | Adam |
| Loss Function | BPR loss |
| Recall@20 | 0.1534 |
| NDCG@20 | 0.0912 |
| Total time | 30min |
| Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/gnn/bgcf |
The BGCF model contains many dropout operations. If you want to disable dropout, set neighbor_dropout to [0.0, 0.0, 0.0] in src/config.py.
Please check the official homepage.