 added python api based on cpp api
1st draft of python iterator
Added Cifar10 and Cifar100 pybind port
Change pybind to use IR for Skip and Manifest
Signed-off-by: alex-yuyue <yue.yu1@huawei.com>
DatasetNode as a base for all IR nodes
namespace change
Fix the namespace issue and make ut tests work
Signed-off-by: alex-yuyue <yue.yu1@huawei.com>
Add VOCDataset
!63 Added RandomDataset
* Added RandomDataset
add imagefolder ir
Pybind switch: CelebA and UT
!61 CLUE example with class definition
* Merge branch 'python-api' of gitee.com:ezphlow/mindspore into clue_class_pybind
* Passing testcases
* Added CLUE, not working
add ManifestDataset IR
Signed-off-by: alex-yuyue <yue.yu1@huawei.com>
Update Coco & VOC & TFReader, Update clang-format, Reorder
datasets_binding
!69 Add Generator and move c_dataset.Iterator to dataset.Iterator
* Add GeneratorDataset to c_dataset
* Add GeneratorDataset to c_dataset
!67 Moving c_datasets and adding sampler wrapper
* Need to add create() method in datasets.py
* migration from c_dataset to dataset part 1
!71 Fix indent error
* Fix indentation error
!72 Fix c_api tests cases
* Fix c_api tests cases
!73 Added CSV Dataset
* Added CSVDataset
pybind switch: Take and CelebA fixes
!75 move c_dataset functionality to datasets
* Fixed existing testcases
* Added working clue and imagefolder
* Added sampler conversion from pybind
* Added sampler creation
!77 Add Python API tree
* Python API tree
add minddataset
TextFileDataset pybind
Rename to skip test_concat.py and test_minddataset_exception.py
!80 Add batch IR to python-api branch, most test cases work
* staging III
* staging, add pybind
Enable more c_api take and CelebA tests; delete util_c_api
!84 Schema changes in datasets.py
* Schema changes
!85 Remove input_indexes from sub-classes
* remove input_index from each subclass
!83 Remove C datasets
* Removed c_dataset package
* Remove c_datasets
!82 pybind switch: shuffle
* pybind switch: shuffle
!86 Add build_vocab
* Add build_vocab
Rebase with upstream/master
_shuffle conflict
BatchNode error
!88 Fix rebase problem
* fix rebase problem
Enable more unit tests; code typo/nit fixes
!91 Fix python vocab hang
* Fix python vocab hang
!89 Added BucketBatchByLength Pybind switch
* Added BucketBatchByLength
Update and enable more test_c_api_*.py tests
!95 Add BuildSentencePieceVocab
* - Add BuildSentencePieceVocab
!96 Fix more tests
* - Fix some tests
- Enable more test_c_api_*
- Add syncwait
!99 pybind switch for device op
* pybind switch for device op
!93 Add getters to python API
* Add getters to python API
!101 Validate tree, error if graph
* - Add sync wait
!103 TFrecord/Random Datasets schema problem
* - TFRecord/Random schema problem
!102 Added filter pybind switch
* Added Filter pybind switch
!104 Fix num_samples
* - TFRecord/Random schema problem
!105 Fix to_device hang
* Fix to_device hang
!94 Adds Cache support for CLUE dataset
* Added cache for all dataset ops
* format change
* Added CLUE cache support
* Added Cache conversion
Add save pybind
fix compile err
init modify concat_node
!107 Fix some tests cases
* Fix tests cases
Enable and fix more tests
!109 pybind switch for get dataset size
* pybind_get_dataset_size
some check-code fixes for pylint, cpplint and clang-format
!113 Add callback
* revert
* dataset_sz 1 line
* fix typo
* get callback to work
!114 Make Android compile clean
* Make Android Compile Clean
Fix build issues due to rebase
!115 Fix more tests
* Fix tests cases
* !93 Add getters to python API
fix test_profiling.py
!116 fix get dataset size
* fix get dataset size
!117 GetColumnNames pybind switch
* Added GetColumnNames pybind switch
code-check fixes: clangformat, cppcheck, cpplint, pylint
Delete duplicate test_c_api_*.py files; more lint fixes
!121 Fix cpp tests
* Remove extra call to getNext in cpp tests
!122 Fix Schema with Generator
* Fix Schema with Generator
fix some cases of csv & mindrecord
!124 fix tfrecord get_dataset_size and add some UTs
* fix tfrecord get dataset size and add some ut for get_dataset_size
!125 getter separation
* Getter separation
!126 Fix sampler.GetNumSamples
* Fix sampler.GetNumSamples
!127 Assign runtime getter to each get function
* Assign runtime getter to each get function
Fix compile issues
!128 Match master code
* Match master code
!129 Cleanup DeviceOp/save code
* Cleanup ToDevice/Save code
!130 Add cache fix
* Added cache fix for map and image folder
!132 Fix testing team issues
* Pass queue_name from python to C++
* Add Schema.from_json
!131 Fix Cache op issues and delete de_pipeline
* Roll back C++ change
* Removed de_pipeline and passing all cache tests.
* fixed cache tests
!134 Cleanup datasets.py part1
* Cleanup dataset.py part1
!133 Updated validation for SentencePieceVocab.from_dataset
* Added type_check for column names in SentencePieceVocab.from_dataset
Rebase on master 181120 10:20
fix profiling
temporary solution of catching status from Node.Build()
!141 ToDevice Termination
* ToDevice termination
pylint fixes
!137 Fix test team issues and add some corresponding tests
* Fix test team issues and add some corresponding tests
!138 TreeGetter changes to use OptPass
* Getter changes to use OptPass (Zirui)
Rebase fix
!143 Fix cpplint issue
* Fix cpplint issue
pylint fixes in updated testcases
!145 Reset exceptions testcase
* reset exception test to master
!146 Fix Check_Pylint Error
* Fix Check_Pylint Error
!147 fix android
* fix android
!148 ToDevice changes
* Add ToDevice to the iterator List for cleanup at exit
!149 Pylint issue
* Add ToDevice to the iterator List for cleanup at exit
!150 Pylint 2
* Add ToDevice to the iterator List for cleanup at exit
!152 ExecutionTree error
* ET destructor error
!153 in getter_pass, only remove callback, without deleting map op
* getter pass no longer removes map
!156 early __del__ of iterator/to_device
* early __del__ of iterator
!155 Address review comments Eric 1
* Added one liner fix to validators.py
* roll back signature fix
* lint fix
* Eric Address comments 2
* C++ lint fix
* Address comments Eric 1
!158 Review rework for dataset bindings - part 1
* Reorder nodes repeat and rename
* Review rework for dataset bindings - part 1
!154 Fixing minor problems in the comments (datasets.py, python_tree_consumer.cc, iterators_bindings.cc, and iterators.py)
* Fixing minor problems in the comments (datasets.py, python_tree_consum…
!157 add replace none
* Add replace_none to datasets.py, address comments in tests
Trying to resolve copy
Override the deepcopy method of deviceop
Create_ir_tree method
Create_ir_tree method 2
Create_ir_tree method 2
del to_device if already exists
del to_device if already exists
cache getters shapes and types
Added yolov3 relaxation, to be rolled back
Get shapes and types together
bypass yolo
NumWorkers for MapOp
revert Yolo
revert Thor
Print more info
Debug code: Update LOG INFO to LOG ERROR
do not remove epochctrl for getter pass
Remove repeat(1)
print batch size
add log to tree_consumer and device_queue op
Revert PR 8744
Signed-off-by: alex-yuyue <yue.yu1@huawei.com>
__del__ toDevice
__del__ toDevice2
!165 add ifndef ENABLE_ANDROID to device queue print
* Add ifndef ENABLE_ANDROID to device queue print
revert some changes
!166 getter: get_data_info
* getter: get_data_info
!168 add back tree print
* revert info to warning in one log
* add back the missed print tree log
Release GIL in GetDataInfo
- /**
- * Copyright 2019 Huawei Technologies Co., Ltd
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
- #ifndef MINDSPORE_CCSRC_MINDDATA_DATASET_ENGINE_EXECUTION_TREE_H_
- #define MINDSPORE_CCSRC_MINDDATA_DATASET_ENGINE_EXECUTION_TREE_H_
-
- #include <functional>
- #include <memory>
- #include <stack>
- #include <string>
- #include <vector>
- #ifndef ENABLE_ANDROID
- #if !defined(_WIN32) && !defined(_WIN64)
- #include <sys/sysinfo.h>
- #include <opencv2/imgproc/imgproc.hpp>
- #endif
- #endif
- #include "minddata/dataset/engine/datasetops/dataset_op.h"
- #include "minddata/dataset/util/status.h"
- #include "mindspore/ccsrc/minddata/dataset/engine/perf/profiling.h"
- namespace mindspore {
- namespace dataset {
- // Forward declares
- class TaskGroup;
- class DatasetOp;
- class Pass;
- using OptPass = std::vector<std::unique_ptr<Pass>>;
- class ExecutionTree {
- public:
- // Prepare flags used during tree prepare phase
- enum PrepareFlags {
- kDePrepNone = 0,
- kDePrepRepeat = 1, // Processing a repeat operation
- kDePrepCache = 2 // Processing a cache operation
- };
-
- // State flags for the lifecycle of the tree
- enum TreeState {
- kDeTStateInit = 0, // The freshly initialized state after construction
- kDeTStateBuilding, // The tree is being built, nodes are being added
- kDeTStatePrepare, // The tree has been assigned a root node and is pending prepare
- kDeTStateReady, // The tree has been prepared and is ready to be launched
- kDeTStateExecuting, // The tree has been launched and is executing
- kDeTStateEpochEnd, // The tree has received the end-of-epoch signal, just for profiling
- kDeTStateFinished // The tree has been drained, dataset iterator received EOF
- };
-
- class Iterator {
- public:
- // Constructor
- // @param root The root node to start iterating from
- explicit Iterator(const std::shared_ptr<DatasetOp> &root = nullptr);
-
- // Destructor
- ~Iterator() {}
-
- Iterator &operator++() {
- ++ind_;
- return *this;
- } // prefix ++ overload
- Iterator operator++(int) {
- Iterator it = *this;
- it.ind_ = ind_;
- ind_++;
- return it;
- } // post-fix ++ overload
- Iterator &operator--() {
- --ind_;
- return *this;
- } // prefix -- overload
- Iterator operator--(int) {
- Iterator it = *this;
- it.ind_ = ind_;
- ind_--;
- return it;
- } // post-fix -- overload
- DatasetOp &operator*() { return *nodes_[ind_]; } // dereference operator
- std::shared_ptr<DatasetOp> operator->() { return nodes_[ind_]; }
-
- // getter function
- // @return Shared pointer to the current operator
- std::shared_ptr<DatasetOp> get() { return nodes_[ind_]; }
-
- bool operator==(const Iterator &rhs) { return nodes_[ind_] == rhs.nodes_[rhs.ind_]; }
-
- bool operator!=(const Iterator &rhs) { return nodes_[ind_] != rhs.nodes_[rhs.ind_]; }
-
- int32_t NumNodes() { return nodes_.size(); }
-
- private:
- int32_t ind_; // the cur node our Iterator points to
- std::vector<std::shared_ptr<DatasetOp>> nodes_; // store the nodes in post order
- void PostOrderTraverse(const std::shared_ptr<DatasetOp> &);
- };
-
- // Constructor
- ExecutionTree();
-
- // Destructor
- ~ExecutionTree();
-
- // Associates a DatasetOp with this tree. This assigns a valid node id to the operator and
- // provides it with a link to the tree. A node cannot form any relationships (parent/child) with
- // other nodes unless they are associated with the same tree.
- // @param op - The operator to associate
- // @return Status - The error code return
- Status AssociateNode(const std::shared_ptr<DatasetOp> &op);
-
- // Sets the root node of the tree
- // @param op - The operator to assign as root
- // @return Status - The error code return
- Status AssignRoot(const std::shared_ptr<DatasetOp> &op);
-
- // Start the execution of the tree
- // @return Status - The error code return
- Status Launch();
-
- /// A print method typically used for debugging
- /// \param out - The output stream to write output to
- void Print(std::ostream &out, const std::shared_ptr<DatasetOp> &op = nullptr) const;
-
- // Returns an iterator positioned at the start
- // @return Iterator - The iterator
- ExecutionTree::Iterator begin(const std::shared_ptr<DatasetOp> &root = nullptr) const {
- return Iterator(root == nullptr ? root_ : root);
- }
-
- // Returns an iterator positioned at the end
- // @return Iterator - The iterator
- ExecutionTree::Iterator end() const { return Iterator(nullptr); }
-
- // << Stream output operator overload
- // @notes This allows you to write the debug print info using stream operators
- // @param out - reference to the output stream being overloaded
- // @param exe_tree - reference to the execution tree to display
- // @return - the output stream must be returned
- friend std::ostream &operator<<(std::ostream &out, ExecutionTree &exe_tree) {
- exe_tree.Print(out);
- return out;
- }
-
- // Given the number of workers, launches the worker entry function for each. Essentially a
- // wrapper for the TaskGroup handling that is stored inside the execution tree.
- // @param num_workers - The number of workers to launch
- // @param func - The function entry point that workers will execute
- // @return Status - The error code return
- Status LaunchWorkers(int32_t num_workers, std::function<Status(uint32_t)> func, std::string name = "");
-
- // Getter method
- // @return shared_ptr to the root operator
- std::shared_ptr<DatasetOp> root() const { return root_; }
-
- // Getter method
- // @return the prepare flags
- uint32_t PrepareFlags() const { return prepare_flags_; }
-
- // The driver of the prepare phase of the execution tree.
- // Prepare phase consists of three sub phases
- //
- // 1. PrepareTreePreAction()
- // Compulsory transformation/action pre optimization.
- // For example, CacheOp Insertion
- //
- // 2. Optimize()
- // Optimization transformation/action, optional
- // For example, MapOp Fusion
- //
- // 3. PrepareTreePostAction()
- // Compulsory transformation/action post optimization.
- // For example, repeatOp inlining
- //
- // @return Status - The error code return
- Status Prepare(int num_epochs = -1);
-
- // Compulsory transformation/action pre optimization.
- // @return Status - The error code return
- Status PrepareTreePreAction();
-
- // Compulsory transformation/action post optimization.
- // @return Status - The error code return
- Status PrepareTreePostAction();
-
- // Optimization transformation/action, optional.
- // @return Status - The error code return
- Status Optimize();
-
- // The DEPRECATED driver of the prepare phase of the execution tree. The prepare phase will recursively
- // walk the tree to perform modifications to the tree or specific nodes within the tree to get
- // it ready for execution.
- // @param Total number of epochs that will be run on this tree
- // @return Status - The error code return
- Status PrepareDeprecated();
-
- // Recursive function used during prepare phase to visit a node and drive any pre- and post-
- // node actions during a tree walk.
- // @param dataset_op - The dataset op to work on
- // @return Status - The error code return
- Status PrepareNode(const std::shared_ptr<DatasetOp> &dataset_op);
-
- // Return the pointer to the TaskGroup
- // @return raw pointer to the TaskGroup
- TaskGroup *AllTasks() const { return tg_.get(); }
-
- // Return if the ExecutionTree is at end of epoch status
- // @return bool - true if the ExecutionTree is at end-of-epoch status
- bool IsEpochEnd() const { return tree_state_ == TreeState::kDeTStateEpochEnd; }
-
- // Set the ExecutionTree to EOE state
- void SetEpochEnd() { tree_state_ = TreeState::kDeTStateEpochEnd; }
-
- // Set the ExecutionTree to executing state
- void SetExecuting() { tree_state_ = TreeState::kDeTStateExecuting; }
-
- // Return if the ExecutionTree is finished (iterator receives EOF).
- // @return Bool - true if the ExecutionTree is finished
- bool isFinished() const { return tree_state_ == TreeState::kDeTStateFinished; }
-
- // Return if the ExecutionTree is ready.
- // @return Bool - true if the ExecutionTree is ready
- bool isPrepared() const {
- return tree_state_ == TreeState::kDeTStateReady || tree_state_ == kDeTStateExecuting ||
- tree_state_ == kDeTStateFinished;
- }
-
- // Set the ExecutionTree to Finished state.
- void SetFinished() { tree_state_ = TreeState::kDeTStateFinished; }
-
- // Getter for profiling manager, no ownership
- ProfilingManager *GetProfilingManager() { return profiling_manager_.get(); }
-
- // Set optional optimization if tree has not been prepared yet
- Status SetOptimize(bool value) {
- if (tree_state_ != kDeTStateInit && tree_state_ != kDeTStateBuilding) {
- std::string optimize = (optimize_ == true) ? "true" : "false";
- std::string msg = "Tree has already been prepared with OPTIMIZE set to " + optimize;
- RETURN_STATUS_UNEXPECTED(msg);
- } else {
- optimize_ = value;
- return Status::OK();
- }
- }
-
- // Optional optimizations status
- bool OptimizationEnabled() const { return optimize_; }
-
- // Getter function to get the total number of epochs to be run on this tree.
- // @return total number of epochs
- int32_t num_epochs() { return num_epochs_; }
-
- // set the function ptr that overrides the pre-pass which allows caller to adjust the existing pre_pass and
- // introduce new passes. E.g. caller can override the num_epoch in EpochInjectionPass
- void SetPrePassOverride(std::function<OptPass(OptPass)> pre_pass_override) { pre_pass_override_ = pre_pass_override; }
-
- private:
- // A helper function for doing the recursive printing
- // @param dataset_op - The dataset op to print
- // @param indent - an indent string for aligning child levels in output
- // @param last - an indicator if it's the last child or not
- // @param detailed - should it display the detailed node output or the summary line
- void PrintNode(std::ostream &out, const std::shared_ptr<DatasetOp> &dataset_op, std::string indent, bool last,
- bool detailed) const;
-
- std::unique_ptr<TaskGroup> tg_; // Class for worker management
- std::shared_ptr<DatasetOp> root_; // The root node of the tree
- int32_t id_count_; // Counter for generating operator id's
- uint32_t prepare_flags_; // Flags used during tree prepare
- TreeState tree_state_; // Tracking the current tree state
- int32_t num_epochs_; // Total number of epochs to run for this tree
- std::unique_ptr<ProfilingManager> profiling_manager_; // Profiling manager
- bool optimize_; // Flag to enable optional optimizations
- std::function<OptPass(OptPass)> pre_pass_override_; // function ptr that overrides pre pass, called in PrePrepare()
- };
- } // namespace dataset
- } // namespace mindspore
-
- #endif // MINDSPORE_CCSRC_MINDDATA_DATASET_ENGINE_EXECUTION_TREE_H_