15 Commits (b51b3a6764c2c1d6f67df852f1866c9a0a865502)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| wangnan39@huawei.com | 0fe9e2e4cb | support import dynamic_lr from nn | 5 years ago |
| wangnan39@huawei.com | ab811fca8f | add AdamOffload optimizer | 5 years ago |
| Jiaqi | a30ccea62c | sparse optimizer | 5 years ago |
| wangnan39@huawei.com | 082433183d | uniform learning_rate behavior of optimizers | 5 years ago |
| panyifeng | 44e74ad5aa | Apply indexed_slices | 5 years ago |
| wangnan39@huawei.com | 172728a6a6 | support weight decay for sparse optimizer | 5 years ago |
| panyifeng | 3c2057297e | support multi param for tuple grad | 5 years ago |
| wangnan39@huawei.com | d4e3d69f37 | support loss scale for sparse situation | 5 years ago |
| wangnan39@huawei.com | 4042f16ce4 | add lazy adam optim and support sparse adam & ftrl for cpu backend | 5 years ago |
| mindspore-ci-bot | 9dfb1011fe | !1854 add SparseApplyAdam and SparseApplyLazyAdam ops | 5 years ago |
| wangnan39@huawei.com | de21dbdaef | add ops SparseApplyAdam and SparseApplyLazyAdam | 5 years ago |
| wangnan39@huawei.com | c9b7d95c2c | fix lr check bug in AdamWeightDecayDynamicLR | 5 years ago |
| jinyaohui | 26fd75895d | pylint waring clean | 5 years ago |
| guohongzilong | 824bc30a94 | learning rate and weight decay support group mode | 5 years ago |
| zhunaipan | 930a1fb0a8 | initial version | 5 years ago |
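
Several of these commits add user-facing optimizer features; for example, commit 824bc30a94 introduces "group mode" for learning rate and weight decay. The snippet below is a minimal sketch of how grouped parameters are typically passed to a MindSpore optimizer such as `nn.Momentum`; the network, the parameter split, and the hyperparameter values are illustrative assumptions, not taken from this repository.

```python
import mindspore.nn as nn

# Hypothetical two-layer network, used only to obtain some trainable parameters.
net = nn.SequentialCell([nn.Dense(4, 8), nn.ReLU(), nn.Dense(8, 2)])

# Split parameters into groups; the name-based filter is an illustrative choice.
weights = [p for p in net.trainable_params() if 'weight' in p.name]
biases = [p for p in net.trainable_params() if 'bias' in p.name]

group_params = [
    {'params': weights, 'weight_decay': 0.01},  # apply weight decay only to weights
    {'params': biases, 'lr': 0.001},            # use a smaller learning rate for biases
]

# Arguments passed outside the groups act as defaults for parameters
# whose group does not override them.
optimizer = nn.Momentum(group_params, learning_rate=0.1, momentum=0.9)
```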