changed the function name to "TypeIdToString", and use the Type::ToString() function
instead of a TypeId-to-String map.
changed DtypeToTypeId accordingly; the original StringToType can still be used.
added a new interface, StringToTypeId.
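For illustration, the round-trip between a type id and its string form might look like the sketch below. All names here (the tiny TypeId enum and the two functions) are mocks standing in for the real MindSpore core types, where Type::ToString() does the forward direction:

```cpp
#include <map>
#include <string>

// Illustrative mock only: the real TypeId enum and Type::ToString()
// live in MindSpore core; this just shows the round-trip shape.
enum TypeId { kNumberTypeInt32, kNumberTypeFloat32, kTypeUnknown };

std::string TypeIdToString(TypeId id) {
  switch (id) {
    case kNumberTypeInt32:
      return "Int32";
    case kNumberTypeFloat32:
      return "Float32";
    default:
      return "Unknown";
  }
}

TypeId StringToTypeId(const std::string &s) {
  // the reverse lookup added as the new interface
  static const std::map<std::string, TypeId> kMap = {
      {"Int32", kNumberTypeInt32}, {"Float32", kNumberTypeFloat32}};
  auto it = kMap.find(s);
  return it == kMap.end() ? kTypeUnknown : it->second;
}
```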
it's unreasonable to change the node when generating kernel json.
instead, it should be set in a pass.
most of the operators in the original akg_kernel_attrs_process are no longer used,
so we deleted them, leaving only "Cast" and "MatMul/BatchMatMul".
only the Linux system is supported now.
changed the default value of `ENABLE_AKG` to off; it is now controlled by option `-K`.
`ENABLE_AKG` is auto-enabled when `ENABLE_GPU` or `ENABLE_D` is on.
from now on, we can use `ENABLE_AKG` to control the compilation of the graphkernel
and akg code.
fixed the usage description for option "-K"; it should be "[-K on|off]".
LLVM is required by akg for CPU kernels, so AKG for CPU is disabled by default now.
* change the graphkernel's passes code(backend/optimizer/graph_kernel/*) to the
new namespace `mindspore::graphkernel`, to decouple it from `mindspore::opt`.
* change the original `mindspore::opt::graphkernel` to `mindspore::graphkernel::inner` (graph_kernel/model)
* change the original `mindspore::opt::expanders` to `mindspore::graphkernel::expanders` (graph_kernel/expanders)
TODO: modify graph_kernel_flags, kernel_compiler/akg/
The "throw" statement is not allowed in the mindspore project (codedex check),
so we removed the self-defined exception and replaced it with MS_LOG(EXCEPTION).
In GraphKernelExpanders, we check the return value instead.
The rollback function in ArithmeticSimplify / TransformOpOptimizer
is not supported now.
what's more,
we changed the C++ op expanders from .h files to .cc files;
OpExpanderRegister is called in each .cc file, like
the operator registrations elsewhere in mindspore.
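The per-.cc-file registration described above is the classic static-registrar pattern. The sketch below is a self-contained mock (OpExpander, OpExpanderFactory, and the OP_EXPANDER_REGISTER macro are illustrative names, not the real project code): a file-scope static object registers its expander into a singleton factory at static-initialization time, so merely linking the .cc file makes the op available.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

// Mock base class for an op expander.
class OpExpander {
 public:
  virtual ~OpExpander() = default;
  virtual std::string Name() const = 0;
};

// Singleton factory mapping op names to creator functions.
class OpExpanderFactory {
 public:
  using Creator = std::function<std::unique_ptr<OpExpander>()>;
  static OpExpanderFactory &Instance() {
    static OpExpanderFactory inst;
    return inst;
  }
  void Register(const std::string &op, Creator c) { creators_[op] = std::move(c); }
  std::unique_ptr<OpExpander> Create(const std::string &op) const {
    auto it = creators_.find(op);
    return it == creators_.end() ? nullptr : it->second();
  }

 private:
  std::map<std::string, Creator> creators_;
};

// Each .cc file invokes this macro once; the static object's constructor
// runs before main() and registers the expander.
#define OP_EXPANDER_REGISTER(name, clazz)                                \
  static const struct clazz##Registrar {                                 \
    clazz##Registrar() {                                                 \
      OpExpanderFactory::Instance().Register(                            \
          name, [] { return std::unique_ptr<OpExpander>(new clazz); });  \
    }                                                                    \
  } g_##clazz##_registrar;

// Example expander registered in "its own" .cc file.
class BiasAddExpander : public OpExpander {
 public:
  std::string Name() const override { return "BiasAdd"; }
};
OP_EXPANDER_REGISTER("BiasAdd", BiasAddExpander)
```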
modifications for pass transform_op_optimizer:
1. Changed the max-flow/min-cut algorithm to Dinic's algorithm,
since a bug exists in the original ISAP code.
if the algorithm turns out to be slow, we can apply some optimizations to it (e.g. the current-arc optimization).
2. Added the pass TransformOpOptimizer in OptLevel_3.
this pass collects the nodes around a specific transform operator (only Transpose now),
uses the min-cut algorithm to get a plan, then re-links the original graph and
re-infers the shapes and formats of the graph.
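For reference, a generic Dinic's max-flow implementation can be sketched as below. This is a textbook version (BFS level graph plus blocking-flow DFS), not the actual pass code, and it omits the current-arc optimization caveats mentioned above beyond the standard per-phase edge iterator:

```cpp
#include <algorithm>
#include <cassert>
#include <climits>
#include <queue>
#include <vector>

// Minimal Dinic's max-flow sketch on an adjacency-list graph.
class Dinic {
 public:
  explicit Dinic(int n) : n_(n), graph_(n) {}

  void AddEdge(int u, int v, int cap) {
    graph_[u].push_back({v, cap, static_cast<int>(graph_[v].size())});
    graph_[v].push_back({u, 0, static_cast<int>(graph_[u].size()) - 1});  // reverse edge
  }

  int MaxFlow(int s, int t) {
    int flow = 0;
    while (Bfs(s, t)) {          // build level graph
      iter_.assign(n_, 0);       // per-phase current-arc iterators
      int f;
      while ((f = Dfs(s, t, INT_MAX)) > 0) flow += f;  // find blocking flow
    }
    return flow;
  }

 private:
  struct Edge {
    int to, cap, rev;  // rev: index of the reverse edge in graph_[to]
  };

  bool Bfs(int s, int t) {
    level_.assign(n_, -1);
    std::queue<int> q;
    level_[s] = 0;
    q.push(s);
    while (!q.empty()) {
      int u = q.front();
      q.pop();
      for (const auto &e : graph_[u]) {
        if (e.cap > 0 && level_[e.to] < 0) {
          level_[e.to] = level_[u] + 1;
          q.push(e.to);
        }
      }
    }
    return level_[t] >= 0;  // t reachable in the residual graph?
  }

  int Dfs(int u, int t, int f) {
    if (u == t) return f;
    for (int &i = iter_[u]; i < static_cast<int>(graph_[u].size()); ++i) {
      Edge &e = graph_[u][i];
      if (e.cap > 0 && level_[u] < level_[e.to]) {
        int d = Dfs(e.to, t, std::min(f, e.cap));
        if (d > 0) {
          e.cap -= d;
          graph_[e.to][e.rev].cap += d;
          return d;
        }
      }
    }
    return 0;
  }

  int n_;
  std::vector<std::vector<Edge>> graph_;
  std::vector<int> level_, iter_;
};
```

By the max-flow/min-cut theorem, the value returned equals the capacity of a minimum s-t cut, which is what the pass uses to decide where to place the transform operators.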
modifications for litegraph:
1. the class Node inherits from std::enable_shared_from_this, so we can get a shared_ptr from a pure pointer.
2. modified the Infer interface. it doesn't change the node; it only infers the infos and returns them.
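Point 1 above is the standard enable_shared_from_this idiom; a minimal sketch (the Node class here is a stand-in, not the real litegraph Node) looks like this:

```cpp
#include <cassert>
#include <memory>

// Sketch: inheriting enable_shared_from_this lets code holding only a
// raw pointer (e.g. `this` inside a member function) recover a
// shared_ptr that shares ownership with the existing owners.
class Node : public std::enable_shared_from_this<Node> {
 public:
  std::shared_ptr<Node> GetSharedPtr() { return shared_from_this(); }
};
```

Note the usual caveat: shared_from_this() is only valid if the object is already owned by some std::shared_ptr; calling it on a stack-allocated Node is undefined behavior (it throws std::bad_weak_ptr since C++17).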
This reverts commit b077aa1cab.
Revert "[feat] [assistant] [I3T96X] add new Dataset operator LibriSpeechDataset"
This reverts commit 4e6f7dc97d.
delete pass_registry_test.cc
commented out the hiai_nlu_model_multi.pb related line
transplanted the op expander code from python to C++, based on LiteGraph.
the C++ expander is called with priority if it is registered in OpExpanderFactory.
added two examples, BiasAdd and ExpandDims.
removed BiasAdd from the python expanders.
since ExpandDims is also imported by other ops (e.g. BatchNorm), we don't remove it for now.
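The "C++ first, Python fallback" dispatch can be sketched as below. The function and parameter names are illustrative, not the real project API; the point is only the lookup order:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Sketch: try the C++ expander registry first; if the op is not
// registered there, fall back to the (mocked) Python expander.
std::string ExpandOp(
    const std::string &op,
    const std::map<std::string, std::function<std::string()>> &cpp_registry,
    const std::function<std::string(const std::string &)> &python_fallback) {
  auto it = cpp_registry.find(op);
  if (it != cpp_registry.end()) {
    return it->second();  // registered C++ expander wins
  }
  return python_fallback(op);  // otherwise use the Python implementation
}
```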
For convenience, we may change some operators' shapes in an equivalent way,
such as changing a scalar value (whose shape is empty) to a tensor with shape [1].
This is fine for intermediate tensors, but not for the outputs.
So we save the output shapes in the pre-process stage and restore them in the post-process stage.