578 Commits (a146f982bc50db00b1716ac4cd85da57d6a5ca25)

Author SHA1 Message Date
  mindspore-ci-bot 3e2eb70fc0 !8669 Split op extending PrimitiveWithCheck 5 years ago
  mindspore-ci-bot 442217314d !8861 [MS][GPU][DynamicShapeUpdate] Converting UnsortedSegmentMax from normal to dynamic shape op 5 years ago
  mindspore-ci-bot 28772d4fb7 !9132 Fix bug for SparseGatherV2 GPU and add dynamic shape testcase 5 years ago
  mindspore-ci-bot 9b0ec824c4 !9096 DynamicShapeOp int64 support and bug fix 5 years ago
  mindspore-ci-bot 2e31899101 !9095 [MS][DynamicShape][API] Fixes for UnsortedSegementSum DynamicShape support API + InferImpl func 5 years ago
  mindspore-ci-bot b76c7cc7bf !9025 ReLU gpu op supports int32 and int64 5 years ago
  tom__chen 4fa6238dde change Split op to extend PrimitiveWithCheck 5 years ago
  TFbunny 419f8bf72a fix dynamic shape and add testcase for SparseGatherV2 GPU 5 years ago
  mindspore-ci-bot f28b73f231 !9040 ExpandDims dynamic shape 5 years ago
  mindspore-ci-bot 28e6c7f29e !9034 [MS][DynamicShape] - Converting P.Square to DynamicShape op 5 years ago
  mindspore-ci-bot b0496aaa10 !8673 Add min/max_shape to ScatterAdd/Update and Transpose and add new dynamic shape testcases 5 years ago
  Jonathan Yan 9f70ebac64 Cast/ReLU dynamic shape gpu op supports int32 and int64 5 years ago
  jonwe 36ea519009 ExpandDims dynamic shape 5 years ago
  mindspore-ci-bot 40222f59a7 !9043 Add support to op L2Loss on gpu 5 years ago
  danishnxt cfc3542261 First Commit SegMax changes + typo in SegSum InferImpl 5 years ago
  Peilin Wang 1dd302ae93 int64 support and typo fix 5 years ago
  zhouyuanshen f6e87143c6 add support to op L2Loss on gpu 5 years ago
  mindspore-ci-bot 0aa63f21c5 !9039 register FastGelu for activation 5 years ago
  danishnxt 6f64fffdb4 Adding API fix for handling dynamic_shape correctly plus STs for SegSum dyn_shape 5 years ago
  TFbunny 5e19a642f9 fix and add testcase for dynamic shape scatteradd/update transpose 5 years ago
  mindspore-ci-bot e6ebb310eb !8928 [MS][GPU][CUDA] - New GPU kernel -> LinSpace 5 years ago
  danishnxt 64a6769d3f Update Square Op to support Dynamic Shape 5 years ago
  mindspore-ci-bot c3b057bd69 !8825 [MS][GPU] GatherV2_Dynamic_BugFix 5 years ago
  mindspore-ci-bot ebef1df00b !8994 split dropout op and expand dropout 5 years ago
  danishnxt a17f76dd1d Initial Commit - GPU LinSpace 5 years ago
  shibeiji cd850bd210 register activation operator fast_gelu 5 years ago
  zengzitao 3ef0e9f053 substitute dropout by cudnnuniformreal and dropout 5 years ago
  danishnxt 241c8f3d96 GatherUpdate 5 years ago
  yanzhenxiang2020 dca109c9a5 fix example of categorical and rnntloss 5 years ago
  jonwe 9a6ced3cc7 DivNoNan gpu kernel supports int8/uint8 5 years ago
  mindspore-ci-bot 0793198891 !8917 Add grad definition for sigmoidGrad. 5 years ago
  mindspore-ci-bot 6a04c21456 !8875 Fix some bug about GatherD and GatherDGrad op on gpu 5 years ago
  mindspore-ci-bot d915fd9b96 !8592 change repeat_elements to a composite op 5 years ago
  hedongodng 0279323b99 add grad definition for sigmoidGrad operation 5 years ago
  zhouyuanshen e1026961f7 fix some bug in gather and gathergrad op 5 years ago
  mindspore-ci-bot 82456e3f2b !8749 batchnorm2d test case for use_batch_statistics is false 5 years ago
  mindspore-ci-bot 044c7d183c !8803 Validate SampledSoftmaxLoss Args 5 years ago
  jonwe 036e28bb96 batchnorm2d use_batch_statistics false test case 5 years ago
  tom__chen a52cd685fc change repeat_element op to a composite op 5 years ago
  Jonathan Yan 22cb2814e8 validate SampledSoftmaxLoss args 5 years ago
  Peilin Wang 3d4d5ec77f initial commit for gpu dynamic shape testing op 5 years ago
  mindspore-ci-bot 99fc0a4e64 !8526 Add 'in_channel' and 'out_channel' to cell_attr_register 5 years ago
  mindspore-ci-bot b07f6383cb !8674 Fix doc example for ScatterUpdate and support int8/uint8 5 years ago
  mindspore-ci-bot 3b946d4eb2 !8678 expand logsoftmax and grad, delete cast in softmax and fix layernorm compute dsl 5 years ago
  TFbunny fb1a65c469 add int8 and uint8 support to scatteradd and fix doc example 5 years ago
  mindspore-ci-bot 1bd0ed450c !8630 fix mul broadcast and tanhgrad bug 5 years ago
  zengzitao 266bfa50bf expand logsoftmax and logsoftmax_grad, delete softmax's cast and fix layernorm op 5 years ago
  mindspore-ci-bot 0d00f6c14e !7039 Add equal op for cpu 5 years ago
  zhaoting 6d8fb56ea8 fix mul broadcast and tanhgrad bug 5 years ago
  mindspore-ci-bot b411c05de0 !8495 Adapt DynamicGRUV2 forward for Ascend new backend. 5 years ago