195 Commits (1bcffd29dd9cae804580da4d3b890ae47d40f93c)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| zhaoting | 5c0962acfa | add gpu split and restructure gpu concat | 5 years ago |
| peixu_ren | 1feca960aa | Rollback to Normal on D | 5 years ago |
| wilfChen | c10e07734c | gpu support TopK kernel | 5 years ago |
| wilfChen | dfb958de1e | Gpu support BroadcastTo kernel | 5 years ago |
| peixu_ren | 20ca96c62b | Add random normal MindSpore interface | 5 years ago |
| kingfo | add3778a61 | add grad all in pynative mode | 5 years ago |
| wilfChen | 0fdc304a8e | gpu support smoothl1loss | 5 years ago |
| wilfChen | d54154a1f9 | Gpu support ctcloss kernel | 5 years ago |
| mindspore-ci-bot | 4c6bff75af | !1393 Gpu Support AdamWeightDecay optimizer fusion | 5 years ago |
| He Wei | 43e0967024 | Decouple ir::Tensor class from python | 5 years ago |
| wilfChen | 034d2ea2aa | Gpu Adam Fusion | 5 years ago |
| mindspore-ci-bot | 8870956954 | !2441 add fake quant test case for gpu | 5 years ago |
| chenzomi | 8873f9dc7e | add fake quant test case for gpu | 5 years ago |
| mindspore-ci-bot | a2cd05339f | !2180 Gpu Gelu kernel support fp16 | 5 years ago |
| mindspore-ci-bot | d57decc8a3 | !2338 Gpu Minimum & Maximum kernels support int32 | 5 years ago |
| lizhenyu | eb68c9953d | change ftrl operator st | 5 years ago |
| wilfChen | 480bf4151b | Gpu Minimum & Maximum kernels support int32 | 5 years ago |
| mindspore-ci-bot | a9d06edae9 | !2282 remove _quant_op.py from __init__.py | 5 years ago |
| mindspore-ci-bot | fce37a5fbe | !2281 add Sigmoid and SigmoidGrad operation of GPU | 5 years ago |
| wilfChen | 8f4cd76582 | gpu Gelu kernel support fp16 | 5 years ago |
| chenzomi | bbce6faff9 | remove _quant_ops.py from __init__.py | 5 years ago |
| mindspore-ci-bot | 2e002ab64c | !2292 gpu fix all nop node graph execute | 5 years ago |
| limingqi107 | 0f4397cece | fix all nop node graph execute | 5 years ago |
| lizhenyu | ea0cd5ccdd | add Sigmoid and SigmoidGrad operation of GPU | 5 years ago |
| mindspore-ci-bot | 74c3e15675 | !2194 fix FakeQuantPerLayer/FakeQuantPerLayerGrad symmetric=True calculation error bug | 5 years ago |
| mindspore-ci-bot | 19e66f06e2 | !2150 Gpu Tanh kernel support fp16 | 5 years ago |
| mindspore-ci-bot | fe797aaf10 | !2229 add ftrl optimizer | 5 years ago |
| mindspore-ci-bot | 95d887a35b | !2226 add adam op for wide&deep model | 5 years ago |
| mindspore-ci-bot | c4863683ef | !2235 add SigmoidCrossEntropyWithLogitsGrad operation | 5 years ago |
| mindspore-ci-bot | 116ed509bf | !2234 add SigmoidCrossEntropyWithLogits op | 5 years ago |
| lizhenyu | 636b8e2b88 | add SigmoidCrossEntropyWithLogitsGrad op | 5 years ago |
| mindspore-ci-bot | 4642df207a | !2210 gpu optimize the max device memory config | 5 years ago |
| lizhenyu | 694a8213b7 | add adam optimizer | 5 years ago |
| lizhenyu | ac2217dbae | add SigmoidCrossEntropyWithLogits op | 5 years ago |
| lizhenyu | c3360a84cd | add ftrl optimizer | 5 years ago |
| wilfChen | 9201ea5ed2 | replace tanh implement with cudnn | 5 years ago |
| limingqi107 | 55b3557c0d | gpu optimize the max device memory config | 5 years ago |
| 王东旭 | 4e09ae83eb | fix FakeQuantPerLayer/FakeQuantPerLayerGrad symmetric bug | 5 years ago |
| liuxiao | df63a3195d | fix input value check for SparseApplyFtrl and SparseApplyAdagrad | 5 years ago |
| mindspore-ci-bot | d4a7c87b22 | !2093 GPU add argmaxwithvalue | 5 years ago |
| VectorSL | 17377912ba | gpu add argmaxwithvalue | 5 years ago |
| buxue | 66bbdb4a31 | change tensor dtype and shape from function to attr | 5 years ago |
| mindspore-ci-bot | 87fa15de80 | !2021 GPU add akg kernel greaterequal notequal | 5 years ago |
| VectorSL | cf2fc1cecf | gpu add notequal greaterequal akg kernel | 5 years ago |
| buxue | 0cd57ddc5d | check arg is tensor with vm backend | 5 years ago |
| jiangjinsheng | 51affc2f1b | fixed validator for CumProd, ReduceProd, ApplyRMSProp | 5 years ago |
| mindspore-ci-bot | 9c33da391a | !1513 refine data copy in multi-graph | 5 years ago |
| liuwenhao4 | a7ad0d0a49 | Fixing some tiny faults about Pylint in my code(ops) | 5 years ago |
| lizhenyu | a25b84055c | refine data copy in multi-graph | 5 years ago |
| liuwenhao4 | f3f0cbaeee | Fixing some tiny faults about Pylint in my code(ops) | 5 years ago |