2028 Commits (d28c63b6c075fd4be03258b5150301d112c73575)

Author SHA1 Message Date
  hesham f837ddc956 Fix bug when empty strings are sent to Python 5 years ago
  yujianfeng c956dfff51 Add SparseAdam and SparseLazyAdam cpu kernel 5 years ago
  zhaozhenlong 9ce86e8832 composed op CosineEmbeddingLoss 5 years ago
  Wei Luning ee8420aefa Make the assign operator use the signature. 5 years ago
  lichenever fad6b5744c remove auto parallel st 5 years ago
  yoonlee666 ab36f3d3cd edit example script 5 years ago
  zhaoting b16a552d41 Revert "Revert "add pattern AdjustAllReduceMulAdd; use the old op; add test case for bug; temp fix try"" 5 years ago
  mindspore-ci-bot beb714d2d0 !1911 add a function to check whether the node input and output is a scalar 5 years ago
  mindspore-ci-bot c4b3534913 !1905 update cpu lstm 5 years ago
  mindspore-ci-bot 39338c8627 !1101 Add GCN training scripts 5 years ago
  baihuawei 9c74e39b12 update cpu lstm 5 years ago
  chentingting e801d48906 add gcn training scripts 5 years ago
  yanzhenxiang2020 89302a60cf add pack op for aicpu 5 years ago
  liuxiao 6856c2ac7a Adapt ops ApplyProximalAdagrad for GE 5 years ago
  mindspore-ci-bot cf04a7b190 !1741 summary support cpu 5 years ago
  yujianfeng 2ff9e74d07 Add unique process for duplicated indices 5 years ago
  mindspore-ci-bot 5499161531 !1862 fixed validator for ApplyRMSProp, CumProd, CumSum, ReduceProd, etc. 5 years ago
  wenkai ab6b6add0b cpu support summary 5 years ago
  WilliamLian 9808e47663 change checkAicpu to CheckAICPU & add a scalar-check function to check whether the input or output is a scalar 5 years ago
  mindspore-ci-bot 69cbf58517 !1902 Fix bert scripts. 5 years ago
  chenhaozhe 1be7ad52bb fix bert scripts 5 years ago
  mindspore-ci-bot 10fd781b15 !1831 Add order parameter function in group params 5 years ago
  jiangjinsheng 51affc2f1b fixed validator for CumProd, ReduceProd, ApplyRMSProp 5 years ago
  mindspore-ci-bot b350ee0c00 !1824 add single batchnorm fission pass 5 years ago
  mindspore-ci-bot c82a8bf483 !1678 modify print 5 years ago
  guohongzilong 85a06b00c6 add order function in group params 5 years ago
  mindspore-ci-bot 0a897b0ce7 !1865 add inv, invgrad & invert for vm 5 years ago
  huanghui b4c0ed4b36 add single batchnorm fission pass 5 years ago
  yao_yf ce03ce5af2 data_parallel_grad_reducer 5 years ago
  zhaojichen cdb7ec937b add inv, invgrad & invert for vm 5 years ago
  zhaozhenlong 1f342fb926 add op BroadcastTo 5 years ago
  huangdongrun 1642be4a67 fix initializer 5 years ago
  mindspore-ci-bot 65eacc9593 !1787 optimize transdata for pynative mode 5 years ago
  mindspore-ci-bot 1c640face9 !1826 fix bug when checking learning_rate in AdamWeightDecayDynamicLR 5 years ago
  chujinjin 7465abc798 optimize transdata for pynative 5 years ago
  mindspore-ci-bot bd34c6ec8b !1853 Fix initializer 5 years ago
  mindspore-ci-bot 1973594bd1 !1774 [MD] minddataset support padding samples 5 years ago
  mindspore-ci-bot b236beae28 !1615 convert constant bool tensor to bool 5 years ago
  mindspore-ci-bot 197251eb66 !1838 Add SparseApplyFtrl cpu kernel 5 years ago
  liyong feff8899ac support padding samples 5 years ago
  mindspore-ci-bot c51d90d84e !1767 Move LayerNormGrad split pass ahead of kernel select 5 years ago
  mindspore-ci-bot 5c21616293 !1807 Implemented Ngram TensorOp for dataset 5 years ago
  huangdongrun 9081041199 fix initializer 5 years ago
  mindspore-ci-bot 769ae609b4 !1808 consistent design for num_samples 5 years ago
  yujianfeng 5d4b75838f Add SparseApplyFtrl cpu kernel 5 years ago
  Zirui Wu dbf9936ec4 Implemented n-gram for dataset TensorOp 5 years ago
  huangdongrun beacc26077 add isconstant primitive 5 years ago
  mindspore-ci-bot fac6c56db5 !1851 Add batch norm fusion pattern for mix precision 5 years ago
  mindspore-ci-bot 063ad7c7f0 !1810 optimize slice/slicegrad performance 5 years ago
  jinyaohui 5e43edc474 clean pylint 5 years ago