1300 Commits (c6acafcbc6d925cde76c4a80b6a4e2632b66f39d)

Author SHA1 Message Date
  qianlong e9c4378ed8 add support of soft dvpp op 5 years ago
  mindspore-ci-bot b120cbcfb4 !4249 Add parameter checking in distributions 5 years ago
  mindspore-ci-bot 6772bbde8d !4021 Add ScalarAffine and Softplus bijector 5 years ago
  mindspore-ci-bot 266df56570 !4220 [MD] add num_samples support in minddataset pk sampler 5 years ago
  islam_amin e52d85ba9d Added RandomAffine Op 5 years ago
  Xun Deng be6db4a6fe add scalar affine and softplus bijector 5 years ago
  Xun Deng 415dad3adb added some parameter checking 5 years ago
  mindspore-ci-bot 9ad82f79fd !3979 Adding MixUp operation 5 years ago
  liyong 7341421d3b fix num samples in pk sampler 5 years ago
  mindspore-ci-bot e195fa6f9e !4192 Fix md five_crop ci problem 5 years ago
  xiefangqi 3d7826b6df test five crop problem 5 years ago
  mindspore-ci-bot fe0b8d6272 !4068 Add parallel operator for Concat 5 years ago
  Mahdi 1896950ae5 Added Mixup 5 years ago
  peixu_ren 517fed55a5 Added LGamma op 5 years ago
  mindspore-ci-bot e19d382473 !3346 Maintain epoch/repeat count for ops 5 years ago
  yangzhenzhang f4bb43bbaf add concat op 5 years ago
  panyifeng 34e50e5d6e fix cell bprop 5 years ago
  Lixia Chen ac85b77b76 Maintain epoch/repeat count for ops 5 years ago
  mindspore-ci-bot 470328eeaf !4060 fix do concat in while loop specialize error 5 years ago
  Wei Luning d4d6457ea7 init parameter data by default; only keep no data as MetaTensor in auto parallel mode 5 years ago
  fary86 7602054acd Fix do concat in while loop specialize error 5 years ago
  kingfo 28dabf0332 fix grad flag update issue in pynative 5 years ago
  mindspore-ci-bot 0b407dfe78 !4005 unsortedsegsum grad 5 years ago
  mindspore-ci-bot c493859978 !4004 add squreasumall grad 5 years ago
  mindspore-ci-bot 235378d5d7 !3789 eager mode sparse 5 years ago
  mindspore-ci-bot 7ec0b5857a !3977 [MD] Refactor Concatenate Op 5 years ago
  nhussain 61769b2dd9 fix concat 5 years ago
  fangzehua 99f2be7064 unsortedsegsum grad 5 years ago
  fangzehua 7011508379 add squreasumall grad 5 years ago
  panyifeng 9927e6eb5c eager mode sparse 5 years ago
  mindspore-ci-bot 8dec74908a !4000 improve interface '__bool__' for tensor 5 years ago
  mindspore-ci-bot 617b98f104 !3966 [AutoParallel]Add dropout distributed op 5 years ago
  buxue ace34525cd improve interface '__bool__' for tensor 5 years ago
  lichenever bfc96de1b9 add dropout distributed op 5 years ago
  peixu_ren b9c8c0b439 Added Bijector TransformDistribution base classes and two instances: power and exp bijectors 5 years ago
  mindspore-ci-bot a301fc1757 !3916 remove loss_scale range check which is a temp fix. 5 years ago
  mindspore-ci-bot 5827ba9a8a !3940 View code support interface 'all' and 'any' of tensor 5 years ago
  buxue 2c4cb49a11 support interface 'all' and 'any' of tensor 5 years ago
  zhousiyi 7d31deb6fa remove loss_scale range check to make FP32Imm(inf) comparison equal 5 years ago
  mindspore-ci-bot 8b396cea98 !3915 Revert modification of opt 5 years ago
  mindspore-ci-bot bac1781539 !3648 Add some testing cases for mindspore.profiler 5 years ago
  mindspore-ci-bot c3edfe12bc !3715 Update python pattern expression 5 years ago
  mindspore-ci-bot af9398b39a !3756 Change distribution api 5 years ago
  mindspore-ci-bot 2da29bce66 !3370 same graph used in different context should be treated differently. 5 years ago
  simson 3617121ccf revert modification of opt 5 years ago
  mindspore-ci-bot aa65cbf733 !3846 Add TBE op SquaredDifference\Xdivy\Xlogy for VM. 5 years ago
  mindspore-ci-bot 27a0a2e333 !3880 fix auto mix precision issue in pynative 5 years ago
  mindspore-ci-bot a182a87a6d !3820 support tensor attr shape and dtype in graph mode 5 years ago
  liuxiao93 374b7b8583 Add TBE op SquaredDifference for VM. 5 years ago
  kingfo 577535b387 fix gradients issue in pynative 5 years ago