1005 Commits (c030dffd680d4a8a1017edbe8d33ffc2fbc01564)

Author SHA1 Message Date
  mindspore-ci-bot e3371a01d5 !8751 Fix minor bugs in bijector and distribution 5 years ago
  mindspore-ci-bot 82456e3f2b !8749 batchnorm2d test case for use_batch_statistics is false 5 years ago
  mindspore-ci-bot 7689062c7d !8727 Clean code for gnmt_v2 network 5 years ago
  mindspore-ci-bot 044c7d183c !8803 Validate SampledSoftmaxLoss Args 5 years ago
  gaojing a23ddaf409 clean_code 5 years ago
  jonwe 036e28bb96 batchnorm2d use_batch_statistics false test case 5 years ago
  Jonathan Yan 22cb2814e8 validate SampledSoftmaxLoss args 5 years ago
  Peilin Wang 3d4d5ec77f initial commit for gpu dynamic shape testing op 5 years ago
  mindspore-ci-bot 99fc0a4e64 !8526 Add 'in_channel' and 'out_channel' to cell_attr_register 5 years ago
  Xun Deng b5e05472ce fix minor bugs in bijector and distribution utils, fix docs issues 5 years ago
  mindspore-ci-bot b07f6383cb !8674 Fix doc example for ScatterUpdate and support int8/uint8 5 years ago
  mindspore-ci-bot fb0e866ad1 !8269 forward unique dynamic shape 5 years ago
  yao_yf 31819bb4a7 support forward unique 5 years ago
  mindspore-ci-bot 3b946d4eb2 !8678 expand logsoftmax and grad, delete cast in softmax and fix layernorm compute dsl 5 years ago
  TFbunny fb1a65c469 add int8 and uint8 support to scatteradd and fix doc example 5 years ago
  mindspore-ci-bot 1bd0ed450c !8630 fix mul broadcast and tanhgrad bug 5 years ago
  zengzitao 266bfa50bf expand logsoftmax and logsoftmax_grad, delete softmax's cast and fix layernorm op 5 years ago
  mindspore-ci-bot 52eb2d3401 !8477 Add Cauchy distribution 5 years ago
  mindspore-ci-bot 0d00f6c14e !7039 Add equal op for cpu 5 years ago
  zhaoting 6d8fb56ea8 fix mul broadcast and tanhgrad bug 5 years ago
  mindspore-ci-bot b411c05de0 !8495 Adapt DynamicGRUV2 forward for Ascend new backend. 5 years ago
  mindspore-ci-bot 20ee16daf9 !8625 Add Ascend st testcases for graph kernel 5 years ago
  looop5 f5f66abd06 Add testcases in Ascend back-end for graph kernel 5 years ago
  mindspore-ci-bot efd3e6a168 !8566 add dynamic for cache ops 5 years ago
  mindspore-ci-bot 9c957072e2 !8097 Remove redundant phi nodes 5 years ago
  fangzehua b7d8e87647 add dynamic ops 5 years ago
  mindspore-ci-bot f885f6636f !8564 expand layernorm_grad 5 years ago
  mindspore-ci-bot 929d31653d !8535 add export air test 5 years ago
  yujianfeng 3176d377e6 Remove redundant phi nodes 5 years ago
  zengzitao 326540cbbd expand layernorm_grad op 5 years ago
  changzherui ffeacf13e4 add export air test 5 years ago
  liuxiao93 d471ac491e Adapt DynamicGRUV2 forward for Ascend new backend. 5 years ago
  bai-yangfan b4e020d081 dataset_attr_name 5 years ago
  ZPaC 5df0350b67 PS mode supports negative index looking up. 5 years ago
  mindspore-ci-bot 3fdd75dd5e !8450 add supports to op randomcategorical on gpu 5 years ago
  mindspore-ci-bot dbe5229c56 !8492 expand maximum_grad minimum_grad and dropout_grad 5 years ago
  mindspore-ci-bot 71af3bf1ac !8283 Minimum Op and Mul Op support dynamic shape 5 years ago
  Jonathan Yan 5a8238a09c mul v1 5 years ago
  wanyiming 237bcfd36b add_channel_to_attr 5 years ago
  Xun Deng 1a68ccb40b added Cauchy distribution 5 years ago
  zengzitao 28f1db74dd expand maximum_grad minimum_grad dropout_grad op 5 years ago
  mindspore-ci-bot 7d6039d384 !7858 [MS][GPU] Add Unique Op 5 years ago
  mindspore-ci-bot a511e32cf8 !8386 [MS] Changing TensorDot from P Operations op to Composite op 5 years ago
  zhouyuanshen 048fc49aed add support to op RandomCategorical 5 years ago
  mindspore-ci-bot fed8702a29 !8457 update test_resnet50_quant loss threshold to 2.6 5 years ago
  mindspore-ci-bot 8e3bf54fd2 !7032 add CTCGreedyDecoder ops for aicpu 5 years ago
  yuchaojie 733a37be13 update test_resnet50_quant loss threshold to 2.6 5 years ago
  mindspore-ci-bot 623df51e06 !8420 mode_train_file 5 years ago
  bai-yangfan a9cba56191 mode_train 5 years ago
  mindspore-ci-bot eb5ae1a0fc !8423 nn.Dense support ND*2D 5 years ago