82 Commits (e64a53bf1b979ba8613c1a80cb439a3af1c6cad6)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| mindspore-ci-bot | a2cd05339f | !2180 Gpu Gelu kernel support fp16 | 5 years ago |
| mindspore-ci-bot | d57decc8a3 | !2338 Gpu Minimum & Maximum kernels support int32 | 5 years ago |
| lizhenyu | eb68c9953d | change ftrl operator st | 5 years ago |
| wilfChen | 480bf4151b | Gpu Minimum & Maximum kernels support int32 | 5 years ago |
| mindspore-ci-bot | a9d06edae9 | !2282 remove _quant_op.py from __init__.py | 5 years ago |
| mindspore-ci-bot | fce37a5fbe | !2281 add Sigmoid and SigmoidGrad operation of GPU | 5 years ago |
| wilfChen | 8f4cd76582 | gpu Gelu kernel support fp16 | 5 years ago |
| chenzomi | bbce6faff9 | remove _quant_ops.py from __init__.py | 5 years ago |
| mindspore-ci-bot | 2e002ab64c | !2292 gpu fix all nop node graph execute | 5 years ago |
| limingqi107 | 0f4397cece | fix all nop node graph execute | 5 years ago |
| lizhenyu | ea0cd5ccdd | add Sigmoid and SigmoidGrad operation of GPU | 5 years ago |
| mindspore-ci-bot | 74c3e15675 | !2194 fix FakeQuantPerLayer/FakeQuantPerLayerGrad symmetric=True calculation error bug | 5 years ago |
| mindspore-ci-bot | 19e66f06e2 | !2150 Gpu Tanh kernel support fp16 | 5 years ago |
| mindspore-ci-bot | fe797aaf10 | !2229 add ftrl optimizer | 5 years ago |
| mindspore-ci-bot | 95d887a35b | !2226 add adam op for wide&deep model | 5 years ago |
| mindspore-ci-bot | c4863683ef | !2235 add SigmoidCrossEntropyWithLogitsGrad operation | 5 years ago |
| mindspore-ci-bot | 116ed509bf | !2234 add SigmoidCrossEntropyWithLogits op | 5 years ago |
| lizhenyu | 636b8e2b88 | add SigmoidCrossEntropyWithLogitsGrad op | 5 years ago |
| mindspore-ci-bot | 4642df207a | !2210 gpu optimize the max device memory config | 5 years ago |
| lizhenyu | 694a8213b7 | add adam optimizer | 5 years ago |
| lizhenyu | ac2217dbae | add SigmoidCrossEntropyWithLogits op | 5 years ago |
| lizhenyu | c3360a84cd | add ftrl optimizer | 5 years ago |
| wilfChen | 9201ea5ed2 | replace tanh implement with cudnn | 5 years ago |
| limingqi107 | 55b3557c0d | gpu optimize the max device memory config | 5 years ago |
| 王东旭 | 4e09ae83eb | fix FakeQuantPerLayer/FakeQuantPerLayerGrad symmetric bug | 5 years ago |
| liuxiao | df63a3195d | fix input value check for SparseApplyFtrl and SparseApplyAdagrad | 5 years ago |
| mindspore-ci-bot | d4a7c87b22 | !2093 GPU add argmaxwithvalue | 5 years ago |
| VectorSL | 17377912ba | gpu add argmaxwithvalue | 5 years ago |
| buxue | 66bbdb4a31 | change tensor dtype and shape from function to attr | 5 years ago |
| mindspore-ci-bot | 87fa15de80 | !2021 GPU add akg kernel greaterequal notequal | 5 years ago |
| VectorSL | cf2fc1cecf | gpu add notequal greaterequal akg kernel | 5 years ago |
| buxue | 0cd57ddc5d | check arg is tensor with vm backend | 5 years ago |
| jiangjinsheng | 51affc2f1b | fixed validator for CumProd, ReduceProd, ApplyRMSProp | 5 years ago |
| mindspore-ci-bot | 9c33da391a | !1513 refine data copy in multi-graph | 5 years ago |
| liuwenhao4 | a7ad0d0a49 | Fixing some tiny faults about Pylint in my code(ops) | 5 years ago |
| lizhenyu | a25b84055c | refine data copy in multi-graph | 5 years ago |
| liuwenhao4 | f3f0cbaeee | Fixing some tiny faults about Pylint in my code(ops) | 5 years ago |
| cristoval | f6c20178d2 | fix pylint check issues | 5 years ago |
| jinyaohui | 5a914994ba | clean pylint | 5 years ago |
| jinyaohui | bcfaff97f9 | clean pylint | 5 years ago |
| wilfChen | 1991a89f40 | LayerNormGrad fix & codex | 5 years ago |
| wilfChen | 59c4cf256c | gpu support broadcast kernels | 5 years ago |
| mindspore-ci-bot | 680ce090a3 | !1057 matmul support fp16 | 5 years ago |
| mindspore-ci-bot | 0edc6d254a | !370 Gpu Support UnsortedSegmentSum kernel | 5 years ago |
| mindspore-ci-bot | 907b609b05 | !994 gpu broadcast kernel support different dims | 5 years ago |
| mindspore-ci-bot | b5096e1f6c | !1021 gpu support MinimumGrad & MaximumGrad kernel | 5 years ago |
| mindspore-ci-bot | da7054645a | !948 gpu support LogSoftmax & LogSoftmaxGrad kernel | 5 years ago |
| wilfChen | b56572bb89 | matmul support fp16 | 5 years ago |
| wilfChen | 00e78bf6c4 | gpu support MinimumGrad & MaximumGrad kernel | 5 years ago |
| wilfChen | 31f3611f9a | gpu support UnsortedSegmentSum kernel | 5 years ago |
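
Several of the commits above concern GPU kernel coverage (fp16 Tanh/Gelu, int32 Minimum/Maximum) and the `max_device_memory` context option. The following is a minimal sketch of how such changes could be exercised from user code; it assumes a MindSpore build with a GPU backend of roughly that era, and the memory value and input shapes are illustrative only, not taken from the commits.

```python
import numpy as np
from mindspore import Tensor, context
from mindspore.ops import operations as P

# Cap GPU device memory; this is the setting touched by
# "gpu optimize the max device memory config". "1GB" is an example value.
context.set_context(mode=context.PYNATIVE_MODE, device_target="GPU",
                    max_device_memory="1GB")

# fp16 input exercises the half-precision kernel paths
# added by the Tanh/Gelu fp16 commits.
x = Tensor(np.random.randn(2, 3).astype(np.float16))
print(P.Tanh()(x))

# int32 inputs exercise the Minimum/Maximum int32 support.
a = Tensor(np.array([1, 5, 3], dtype=np.int32))
b = Tensor(np.array([4, 2, 6], dtype=np.int32))
print(P.Minimum()(a, b))
```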