108 Commits (f679568d86599b520ea7eb6bccceeb98760b821d)

Author SHA1 Message Date
  linqingke f679568d86 gpu ops code and test case. 5 years ago
  mindspore-ci-bot 5f10417b9f !3276 make gpu equal op support int32 5 years ago
  mindspore-ci-bot cf4353f728 !3220 Add random normal op at MindSpore front-end 5 years ago
  qujianwei 7479fb24a0 make gpu equal op support int32 5 years ago
  peixu_ren 9b45018dfd Add random normal op at MindSpore front-end 5 years ago
  VectorSL 90f15df037 add int64-->fp16 and update conv pad 5 years ago
  mindspore-ci-bot 32921ea3dc !3166 add gpu oneslike op 5 years ago
  qujianwei fb2ac74d9a add gpu oneslike kernel 5 years ago
  mindspore-ci-bot 11732f0ea2 !3135 GPU cast support more type 5 years ago
  VectorSL aef2c1984e cast support more types 5 years ago
  mindspore-ci-bot 251683096a !3045 Gpu support TopK kernel 5 years ago
  mindspore-ci-bot e249197c73 !3003 gpu support BroadcastTo kernels 5 years ago
  mindspore-ci-bot ad09bf3e87 !3083 add gpu split and restructure gpu concat 5 years ago
  zhaoting 5c0962acfa add gpu split and restructure gpu concat 5 years ago
  peixu_ren 1feca960aa Rollback to Normal on D 5 years ago
  wilfChen c10e07734c gpu support TopK kernel 5 years ago
  wilfChen dfb958de1e Gpu support BroadcastTo kernel 5 years ago
  peixu_ren 20ca96c62b Add random normal MindSpore interface 5 years ago
  kingfo add3778a61 add grad all in pynative mode 5 years ago
  wilfChen 0fdc304a8e gpu support smoothl1loss 5 years ago
  wilfChen d54154a1f9 Gpu support ctcloss kernel 5 years ago
  mindspore-ci-bot 4c6bff75af !1393 Gpu Support AdamWeightDecay optimizer fusion 5 years ago
  He Wei 43e0967024 Decouple ir::Tensor class from python 5 years ago
  wilfChen 034d2ea2aa Gpu Adam Fusion 5 years ago
  mindspore-ci-bot 8870956954 !2441 add fake quant test case for gpu 5 years ago
  chenzomi 8873f9dc7e add fake quant test case for gpu 5 years ago
  mindspore-ci-bot a2cd05339f !2180 Gpu Gelu kernel support fp16 5 years ago
  mindspore-ci-bot d57decc8a3 !2338 Gpu Minimum & Maximum kernels support int32 5 years ago
  lizhenyu eb68c9953d change ftrl operator st 5 years ago
  wilfChen 480bf4151b Gpu Minimum & Maximum kernels support int32 5 years ago
  mindspore-ci-bot a9d06edae9 !2282 remove _quant_op.py from __init__.py 5 years ago
  mindspore-ci-bot fce37a5fbe !2281 add Sigmoid and SigmoidGrad operation of GPU 5 years ago
  wilfChen 8f4cd76582 gpu Gelu kernel support fp16 5 years ago
  chenzomi bbce6faff9 remove _quant_ops.py from __init__.py 5 years ago
  mindspore-ci-bot 2e002ab64c !2292 gpu fix all nop node graph execute 5 years ago
  limingqi107 0f4397cece fix all nop node graph execute 5 years ago
  lizhenyu ea0cd5ccdd add Sigmoid and SigmoidGrad operation of GPU 5 years ago
  mindspore-ci-bot 74c3e15675 !2194 fix FakeQuantPerLayer/FakeQuantPerLayerGrad symmetric=True calculation error bug 5 years ago
  mindspore-ci-bot 19e66f06e2 !2150 Gpu Tanh kernel support fp16 5 years ago
  mindspore-ci-bot fe797aaf10 !2229 add ftrl optimizer 5 years ago
  mindspore-ci-bot 95d887a35b !2226 add adam op for wide&deep model 5 years ago
  mindspore-ci-bot c4863683ef !2235 add SigmoidCrossEntropyWithLogitsGrad operation 5 years ago
  mindspore-ci-bot 116ed509bf !2234 add SigmoidCrossEntropyWithLogits op 5 years ago
  lizhenyu 636b8e2b88 add SigmoidCrossEntropyWithLogitsGrad op 5 years ago
  mindspore-ci-bot 4642df207a !2210 gpu optimize the max device memory config 5 years ago
  lizhenyu 694a8213b7 add adam optimizer 5 years ago
  lizhenyu ac2217dbae add SigmoidCrossEntropyWithLogits op 5 years ago
  lizhenyu c3360a84cd add ftrl optimizer 5 years ago
  wilfChen 9201ea5ed2 replace tanh implement with cudnn 5 years ago
  limingqi107 55b3557c0d gpu optimize the max device memory config 5 years ago
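Several of the loss kernels named in this history (smoothl1loss, SigmoidCrossEntropyWithLogits) have well-known reference semantics. The sketch below is a NumPy reference for those two formulas under their standard definitions; it is illustrative only and is not taken from the MindSpore GPU kernel sources, whose exact signatures and defaults may differ.

```python
import numpy as np

def smooth_l1_loss(pred, target, beta=1.0):
    """Element-wise smooth L1 (Huber-style) loss.

    0.5 * d^2 / beta  when |d| < beta, else |d| - 0.5 * beta,
    where d = pred - target.
    """
    d = np.abs(pred - target)
    return np.where(d < beta, 0.5 * d * d / beta, d - 0.5 * beta)

def sigmoid_cross_entropy_with_logits(logits, labels):
    """Numerically stable sigmoid cross-entropy on raw logits.

    Uses the identity max(x, 0) - x*z + log(1 + exp(-|x|)),
    which avoids overflow for large |x|.
    """
    return (np.maximum(logits, 0.0)
            - logits * labels
            + np.log1p(np.exp(-np.abs(logits))))
```

For example, `smooth_l1_loss(0.5, 0.0)` evaluates the quadratic branch (0.125 with the default `beta=1.0`), while `smooth_l1_loss(2.0, 0.0)` falls on the linear branch (1.5).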