127 Commits (a17f76dd1d83728cdc8ffcb52de694dfd3fcf12e)

Author SHA1 Message Date
  Gaoxiong e4c3d3e0e9 update graph kernel split model 5 years ago
  chenfei 369ee9ef9f add float64 of mixed_precision_cast 5 years ago
  mindspore-ci-bot 232dff3598 !8685 [GraphKernel] For fp16 value, declare fp32 first and then cast to fp16 in expander 5 years ago
  mindspore-ci-bot 3b946d4eb2 !8678 expand logsoftmax and grad, delete cast in softmax and fix layernorm compute dsl 5 years ago
  tronzhang 80f071e9fa declare fp32 and then cast to fp16 in expander 5 years ago
  tronzhang 9d7494f4df split shape ops for more fusion opportunity. 5 years ago
  zengzitao 266bfa50bf expand logsoftmax and logsoftmax_grad, delete softmax's cast and fix layernorm op 5 years ago
  mindspore-ci-bot 6bdb46c705 !8629 print error info when the function does not meet the indentation standard of AST 5 years ago
  mindspore-ci-bot a321f402c8 !8579 Add view function for tensor 5 years ago
  buxue a3937c2863 print error info when the function does not meet the indentation standard of AST 5 years ago
  mindspore-ci-bot f885f6636f !8564 expand layernorm_grad 5 years ago
  l00591931 7a192973ff Add view function for tensor 5 years ago
  zengzitao 326540cbbd expand layernorm_grad op 5 years ago
  mindspore-ci-bot c11c79170e !8554 Add expand_as function to tensor 5 years ago
  l00591931 ba7ee2aa13 add expand_as function 5 years ago
  zengzitao 28f1db74dd expand maximum_grad minimum_grad dropout_grad op 5 years ago
  dayschan 195b1fe8d5 Add Transpose into fusible list. 5 years ago
  zengzitao db27783d54 expand tanh_grad and reduce_mean, fix bug and add test_case in ci 5 years ago
  zengzitao 53043ae18f support expand fused_adam and fused_adam_weight_decay op 5 years ago
  mindspore-ci-bot b3855530e3 !7838 Enumerate function enable tensor as input 5 years ago
  l00591931 6f165ee5e3 add enumerate function and enumerate test case 5 years ago
  zengzitao 5cfa172720 expand gelu and gelugrad op 5 years ago
  mindspore-ci-bot 5c4940cdcc !7892 Convert non-scalar tensor to parameter 5 years ago
  zengzitao febdb1850c expand bias_add and bias_add_grad op 5 years ago
  dayschan b6c2812a29 Convert non-scalar tensor to parameter 5 years ago
  chenzomi 44bf4c3e37 [ME] format code 5 years ago
  jjfeing b9f97b60b0 fix special tbe op compile 5 years ago
  caifubi d3b978147f Ascend Dynamic Shape 5 years ago
  dayschan 7599686a72 GraphKernel supports multi-output kernels 5 years ago
  jjfeing 7dda95d247 set soc version 5 years ago
  lingyunli63 dd48f10c3d add assign ops in composite_topi 5 years ago
  wenchunjiang 1a407db3fd increase tbe compile max process number from 16 to 24 5 years ago
  mindspore-ci-bot ea9d39e84b !6734 fix bug of getting python function name 5 years ago
  fary86 3f5640f18a Fix bug of getting name of a function 5 years ago
  root 4e85071055 redundant codes clean 5 years ago
  mindspore-ci-bot 356ba2f1a7 !6575 update graph kernel model usage info 5 years ago
  buxue 483c8a179a improve the recognition of Parameter object and raise error when converting keywordarg to pydata 5 years ago
  Gaoxiong 1cb8b803f9 update usage info 5 years ago
  buxue a86e4ac370 print warning log when parsing attributes not defined on the object 5 years ago
  buxue 14f6c95c28 add overflow check for make_range and optimize isinstance processing 5 years ago
  buxue 458498900c support not in and add check for grad_with_sens with no sens provided. 5 years ago
  Yi Huaijie 38babd1452 delete redundant code 5 years ago
  buxue b9c9046b93 support function as condition of if 5 years ago
  dayschan 37a48f6aac GraphKernel supports GPU 5 years ago
  mindspore-ci-bot a838c9bd3d !5685 update run for br: master 5 years ago
  mindspore-ci-bot b40677002f !5714 [refine]change top graph and add cell class 5 years ago
  wuxuejian bd527a331d update aicpu proto and update module: graphengine 5 years ago
  Wei Luning e6f82af849 add cell class to c++ 5 years ago
  buxue 6fa60f6666 raise ValueError when call hook function in graph mode 5 years ago
  Wei Luning 879a519136 update signature 5 years ago