
serializer_deserializer.py

# Copyright 2019-2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Functions to support dataset serialization and deserialization.
"""
import json
import os
import pickle
import sys

import mindspore.common.dtype as mstype
from mindspore import log as logger
from . import datasets as de
from ..vision.utils import Inter, Border, ImageBatchFormat

def serialize(dataset, json_filepath=""):
    """
    Serialize the dataset pipeline into a JSON file.

    Args:
        dataset (Dataset): The starting node of the pipeline.
        json_filepath (str): The filepath where the serialized JSON file will be generated.

    Returns:
        dict, the serialized dataset graph.

    Raises:
        OSError: Cannot open a file.

    Examples:
        >>> dataset = ds.MnistDataset(mnist_dataset_dir, num_samples=100)
        >>> one_hot_encode = c_transforms.OneHot(10)  # num_classes is an input argument
        >>> dataset = dataset.map(operations=one_hot_encode, input_columns="label")
        >>> dataset = dataset.batch(batch_size=10, drop_remainder=True)
        >>> # serialize it to a JSON file
        >>> ds.engine.serialize(dataset, json_filepath="/path/to/mnist_dataset_pipeline.json")
        >>> serialized_data = ds.engine.serialize(dataset)  # serialize it to a Python dict
    """
    return dataset.to_json(json_filepath)

def deserialize(input_dict=None, json_filepath=None):
    """
    Construct a dataset pipeline from a Python dictionary or a JSON file produced by de.serialize().

    Args:
        input_dict (dict): A Python dictionary containing a serialized dataset graph.
        json_filepath (str): A path to the JSON file.

    Returns:
        de.Dataset, or None if an error occurs.

    Raises:
        OSError: Cannot open the file.

    Examples:
        >>> dataset = ds.MnistDataset(mnist_dataset_dir, num_samples=100)
        >>> one_hot_encode = c_transforms.OneHot(10)  # num_classes is an input argument
        >>> dataset = dataset.map(operations=one_hot_encode, input_columns="label")
        >>> dataset = dataset.batch(batch_size=10, drop_remainder=True)
        >>> # Use case 1: to/from a JSON file
        >>> ds.engine.serialize(dataset, json_filepath="/path/to/mnist_dataset_pipeline.json")
        >>> dataset = ds.engine.deserialize(json_filepath="/path/to/mnist_dataset_pipeline.json")
        >>> # Use case 2: to/from a Python dictionary
        >>> serialized_data = ds.engine.serialize(dataset)
        >>> dataset = ds.engine.deserialize(input_dict=serialized_data)
    """
    data = None
    if input_dict:
        data = construct_pipeline(input_dict)
    if json_filepath:
        with open(json_filepath, 'r') as json_file:
            dict_pipeline = json.load(json_file)
        data = construct_pipeline(dict_pipeline)
    return data

def expand_path(node_repr, key, val):
    """Convert relative to absolute path."""
    if isinstance(val, list):
        node_repr[key] = [os.path.abspath(file) for file in val]
    else:
        node_repr[key] = os.path.abspath(val)

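# A minimal sketch of how expand_path() rewrites a node's path fields; the
# "node_repr" contents below are hypothetical:
#
#     node_repr = {'dataset_files': ['data/train.tfrecord']}
#     expand_path(node_repr, 'dataset_files', node_repr['dataset_files'])
#     # node_repr['dataset_files'] now holds absolute paths, e.g.
#     # ['/current/working/dir/data/train.tfrecord']
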
def show(dataset, indentation=2):
    """
    Write the dataset pipeline graph to logger.info.

    Args:
        dataset (Dataset): The starting node.
        indentation (int, optional): The indentation used when printing the JSON. Pass None to print without indentation.
    """
    pipeline = dataset.to_json()
    logger.info(json.dumps(pipeline, indent=indentation))

def compare(pipeline1, pipeline2):
    """
    Compare whether two dataset pipelines are the same.

    Args:
        pipeline1 (Dataset): A dataset pipeline.
        pipeline2 (Dataset): A dataset pipeline.

    Returns:
        bool, True if the two pipelines serialize to the same JSON representation.
    """
    return pipeline1.to_json() == pipeline2.to_json()

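# A usage sketch for show() and compare(), assuming "mnist_dataset_dir" is a
# hypothetical path to a MNIST dataset:
#
#     ds1 = de.MnistDataset(mnist_dataset_dir)
#     ds2 = deserialize(input_dict=serialize(ds1))  # round-trip through a dict
#     show(ds1)                 # logs the pipeline graph as indented JSON
#     assert compare(ds1, ds2)  # identical JSON means identical pipelines
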
def construct_pipeline(node):
    """Construct the Python Dataset objects by following the dictionary deserialized from the JSON file."""
    op_type = node.get('op_type')
    if not op_type:
        raise ValueError("op_type field in the json file can't be None.")
    # Instantiate the Python Dataset object based on the current dictionary element
    dataset = create_node(node)
    # Initially it is not connected to any other object.
    dataset.children = []
    # Construct the children too and add an edge between the children and the parent.
    for child in node['children']:
        dataset.children.append(construct_pipeline(child))
    return dataset

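# The dictionary fed to construct_pipeline() mirrors the tree structure of the
# pipeline; an illustrative (not real) two-node graph:
#
#     node = {
#         'op_type': 'Repeat',
#         'count': 2,
#         'children': [
#             {'op_type': 'MnistDataset', 'dataset_dir': '/path/to/mnist',
#              'usage': 'train', 'children': []},
#         ],
#     }
#     pipeline = construct_pipeline(node)  # Repeat node wired to a Mnist source
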
def create_node(node):
    """Parse the key, value pairs in the node dictionary and instantiate the Python Dataset object."""
    logger.info('creating node: %s', node['op_type'])
    dataset_op = node['op_type']
    op_module = "mindspore.dataset"
    # Get the Python class to be instantiated.
    # Example:
    # "op_type": "MapDataset",
    # "op_module": "mindspore.dataset.datasets",
    if node.get("children"):
        pyclass = getattr(sys.modules[op_module], "Dataset")
    else:
        pyclass = getattr(sys.modules[op_module], dataset_op)
    # Find a matching Dataset class and call the constructor with the corresponding args.
    # When a new Dataset class is introduced, another if clause and parsing code needs to be added.
    # Dataset Source Ops (in alphabetical order)
    pyobj = create_dataset_node(pyclass, node, dataset_op)
    if not pyobj:
        # Dataset Ops (in alphabetical order)
        pyobj = create_dataset_operation_node(node, dataset_op)
    return pyobj

def create_dataset_node(pyclass, node, dataset_op):
    """Parse the key, value pairs in the dataset node dictionary and instantiate the Python Dataset object."""
    pyobj = None
    if dataset_op == 'CelebADataset':
        sampler = construct_sampler(node.get('sampler'))
        num_samples = check_and_replace_input(node.get('num_samples'), 0, None)
        pyobj = pyclass(node['dataset_dir'], node.get('num_parallel_workers'), node.get('shuffle'),
                        node.get('usage'), sampler, node.get('decode'), node.get('extensions'),
                        num_samples, node.get('num_shards'), node.get('shard_id'))
    elif dataset_op == 'Cifar10Dataset':
        sampler = construct_sampler(node.get('sampler'))
        num_samples = check_and_replace_input(node.get('num_samples'), 0, None)
        pyobj = pyclass(node['dataset_dir'], node['usage'], num_samples, node.get('num_parallel_workers'),
                        node.get('shuffle'), sampler, node.get('num_shards'), node.get('shard_id'))
    elif dataset_op == 'Cifar100Dataset':
        sampler = construct_sampler(node.get('sampler'))
        num_samples = check_and_replace_input(node.get('num_samples'), 0, None)
        pyobj = pyclass(node['dataset_dir'], node['usage'], num_samples, node.get('num_parallel_workers'),
                        node.get('shuffle'), sampler, node.get('num_shards'), node.get('shard_id'))
    elif dataset_op == 'ClueDataset':
        shuffle = to_shuffle_mode(node.get('shuffle'))
        if isinstance(shuffle, str):
            shuffle = de.Shuffle(shuffle)
        num_samples = check_and_replace_input(node.get('num_samples'), 0, None)
        pyobj = pyclass(node['dataset_files'], node.get('task'),
                        node.get('usage'), num_samples, node.get('num_parallel_workers'), shuffle,
                        node.get('num_shards'), node.get('shard_id'))
    elif dataset_op == 'CocoDataset':
        sampler = construct_sampler(node.get('sampler'))
        num_samples = check_and_replace_input(node.get('num_samples'), 0, None)
        pyobj = pyclass(node['dataset_dir'], node.get('annotation_file'), node.get('task'), num_samples,
                        node.get('num_parallel_workers'), node.get('shuffle'), node.get('decode'), sampler,
                        node.get('num_shards'), node.get('shard_id'))
    elif dataset_op == 'CSVDataset':
        shuffle = to_shuffle_mode(node.get('shuffle'))
        if isinstance(shuffle, str):
            shuffle = de.Shuffle(shuffle)
        num_samples = check_and_replace_input(node.get('num_samples'), 0, None)
        pyobj = pyclass(node['dataset_files'], node.get('field_delim'),
                        node.get('column_defaults'), node.get('column_names'), num_samples,
                        node.get('num_parallel_workers'), shuffle,
                        node.get('num_shards'), node.get('shard_id'))
    elif dataset_op == 'ImageFolderDataset':
        sampler = construct_sampler(node.get('sampler'))
        num_samples = check_and_replace_input(node.get('num_samples'), 0, None)
        pyobj = pyclass(node['dataset_dir'], num_samples, node.get('num_parallel_workers'),
                        node.get('shuffle'), sampler, node.get('extensions'),
                        node.get('class_indexing'), node.get('decode'), node.get('num_shards'),
                        node.get('shard_id'))
    elif dataset_op == 'ManifestDataset':
        sampler = construct_sampler(node.get('sampler'))
        num_samples = check_and_replace_input(node.get('num_samples'), 0, None)
        pyobj = pyclass(node['dataset_file'], node['usage'], num_samples,
                        node.get('num_parallel_workers'), node.get('shuffle'), sampler,
                        node.get('class_indexing'), node.get('decode'), node.get('num_shards'),
                        node.get('shard_id'))
    elif dataset_op == 'MnistDataset':
        sampler = construct_sampler(node.get('sampler'))
        num_samples = check_and_replace_input(node.get('num_samples'), 0, None)
        pyobj = pyclass(node['dataset_dir'], node['usage'], num_samples, node.get('num_parallel_workers'),
                        node.get('shuffle'), sampler, node.get('num_shards'), node.get('shard_id'))
    elif dataset_op == 'TextFileDataset':
        shuffle = to_shuffle_mode(node.get('shuffle'))
        if isinstance(shuffle, str):
            shuffle = de.Shuffle(shuffle)
        num_samples = check_and_replace_input(node.get('num_samples'), 0, None)
        pyobj = pyclass(node['dataset_files'], num_samples,
                        node.get('num_parallel_workers'), shuffle,
                        node.get('num_shards'), node.get('shard_id'))
    elif dataset_op == 'TFRecordDataset':
        shuffle = to_shuffle_mode(node.get('shuffle'))
        if isinstance(shuffle, str):
            shuffle = de.Shuffle(shuffle)
        num_samples = check_and_replace_input(node.get('num_samples'), 0, None)
        pyobj = pyclass(node['dataset_files'], node.get('schema'), node.get('columns_list'),
                        num_samples, node.get('num_parallel_workers'),
                        shuffle, node.get('num_shards'), node.get('shard_id'))
    elif dataset_op == 'VOCDataset':
        sampler = construct_sampler(node.get('sampler'))
        num_samples = check_and_replace_input(node.get('num_samples'), 0, None)
        pyobj = pyclass(node['dataset_dir'], node.get('task'), node.get('usage'), node.get('class_indexing'),
                        num_samples, node.get('num_parallel_workers'), node.get('shuffle'),
                        node.get('decode'), sampler, node.get('num_shards'), node.get('shard_id'))
    return pyobj

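# An illustrative source-node fragment that create_dataset_node() can rebuild
# (field values are made up):
#
#     node = {
#         'op_type': 'Cifar10Dataset',
#         'dataset_dir': '/path/to/cifar10',
#         'usage': 'train',
#         'num_samples': 0,   # 0 is replaced with None, i.e. "read all samples"
#         'shuffle': True,
#         'children': [],
#     }
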
def create_dataset_operation_node(node, dataset_op):
    """Parse the key, value pairs in the dataset operation node dictionary and instantiate the Python Dataset object."""
    pyobj = None
    if dataset_op == 'Batch':
        pyobj = de.Dataset().batch(node['batch_size'], node.get('drop_remainder'))
    elif dataset_op == 'Map':
        tensor_ops = construct_tensor_ops(node.get('operations'))
        pyobj = de.Dataset().map(tensor_ops, node.get('input_columns'), node.get('output_columns'),
                                 node.get('column_order'), node.get('num_parallel_workers'),
                                 False, None, node.get('callbacks'))
    elif dataset_op == 'Project':
        pyobj = de.Dataset().project(node['columns'])
    elif dataset_op == 'Rename':
        pyobj = de.Dataset().rename(node['input_columns'], node['output_columns'])
    elif dataset_op == 'Repeat':
        pyobj = de.Dataset().repeat(node.get('count'))
    elif dataset_op == 'Shuffle':
        pyobj = de.Dataset().shuffle(node.get('buffer_size'))
    elif dataset_op == 'Skip':
        pyobj = de.Dataset().skip(node.get('count'))
    elif dataset_op == 'Take':
        pyobj = de.Dataset().take(node.get('count'))
    elif dataset_op == 'Transfer':
        pyobj = de.Dataset().to_device(node.get('send_epoch_end'), node.get('create_data_info_queue'))
    elif dataset_op == 'Zip':
        # Create a ZipDataset instance, giving dummy input datasets that will be overridden by the caller.
        pyobj = de.ZipDataset((de.Dataset(), de.Dataset()))
    else:
        raise RuntimeError(dataset_op + " is not yet supported by ds.engine.deserialize().")
    return pyobj

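# Each operation node above is built on a bare de.Dataset() placeholder;
# construct_pipeline() later overwrites .children to attach the real inputs.
# A hypothetical 'Batch' node, for example:
#
#     batched = create_dataset_operation_node(
#         {'op_type': 'Batch', 'batch_size': 32, 'drop_remainder': True}, 'Batch')
#     batched.children = []  # construct_pipeline() re-populates this list
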
def construct_sampler(in_sampler):
    """Instantiate a Sampler object based on the information from dictionary['sampler']."""
    sampler = None
    if in_sampler is not None:
        num_samples = None  # default when the field is absent
        if "num_samples" in in_sampler:
            num_samples = check_and_replace_input(in_sampler['num_samples'], 0, None)
        sampler_name = in_sampler['sampler_name']
        sampler_module = "mindspore.dataset"
        sampler_class = getattr(sys.modules[sampler_module], sampler_name)
        if sampler_name == 'DistributedSampler':
            sampler = sampler_class(in_sampler['num_shards'], in_sampler['shard_id'], in_sampler.get('shuffle'))
        elif sampler_name == 'PKSampler':
            sampler = sampler_class(in_sampler['num_val'], in_sampler.get('num_class'), in_sampler.get('shuffle'))
        elif sampler_name == 'RandomSampler':
            sampler = sampler_class(in_sampler.get('replacement'), num_samples)
        elif sampler_name == 'SequentialSampler':
            sampler = sampler_class(in_sampler.get('start_index'), num_samples)
        elif sampler_name == 'SubsetRandomSampler':
            sampler = sampler_class(in_sampler['indices'], num_samples)
        elif sampler_name == 'WeightedRandomSampler':
            sampler = sampler_class(in_sampler['weights'], num_samples, in_sampler.get('replacement'))
        else:
            raise ValueError("Sampler type is unknown: {}.".format(sampler_name))
        if in_sampler.get("child_sampler"):
            for child in in_sampler["child_sampler"]:
                sampler.add_child(construct_sampler(child))
    return sampler

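# A serialized sampler entry has roughly this shape (values are illustrative);
# nested child samplers are reattached recursively:
#
#     sampler_dict = {
#         'sampler_name': 'RandomSampler',
#         'replacement': True,
#         'num_samples': 0,   # 0 means "use all samples"
#         'child_sampler': [
#             {'sampler_name': 'SequentialSampler', 'start_index': 0, 'num_samples': 0},
#         ],
#     }
#     sampler = construct_sampler(sampler_dict)
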
def construct_tensor_ops(operations):
    """Instantiate tensor op object(s) based on the information from dictionary['operations']."""
    result = []
    for op in operations:
        op_name = op.get('tensor_op_name')
        op_params = op.get('tensor_op_params')
        if op.get('is_python_front_end_op'):  # check if it's a py_transform op
            result.append(pickle.loads(op_params.encode()))
        else:
            if op_name == "HwcToChw":
                op_name = "HWC2CHW"
            if op_name == "UniformAug":
                op_name = "UniformAugment"
            op_module_vis = sys.modules["mindspore.dataset.vision.c_transforms"]
            op_module_trans = sys.modules["mindspore.dataset.transforms.c_transforms"]
            if hasattr(op_module_vis, op_name):
                op_class = getattr(op_module_vis, op_name, None)
            elif hasattr(op_module_trans, op_name[:-2]):
                op_name = op_name[:-2]  # remove the "Op" suffix from the name
                op_class = getattr(op_module_trans, op_name, None)
            else:
                raise RuntimeError(op_name + " is not yet supported by deserialize().")
            if op_params is None:  # If no parameter is specified, call the constructor directly
                result.append(op_class())
            else:
                # Cast the input parameter types
                for key, val in op_params.items():
                    if key in ['center', 'fill_value']:
                        op_params[key] = tuple(val)
                    elif key in ['interpolation', 'resample']:
                        op_params[key] = Inter(to_interpolation_mode(val))
                    elif key in ['padding_mode']:
                        op_params[key] = Border(to_border_mode(val))
                    elif key in ['data_type']:
                        op_params[key] = to_mstype(val)
                    elif key in ['image_batch_format']:
                        op_params[key] = to_image_batch_format(val)
                    elif key in ['policy']:
                        op_params[key] = to_policy(val)
                    elif key in ['transform', 'transforms']:
                        op_params[key] = construct_tensor_ops(val)
                result.append(op_class(**op_params))
    return result

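# A sketch of the 'operations' list this function consumes: C++ ops carry a
# name plus a params dict, while Python front-end ops carry a pickled payload.
# The entries below are illustrative:
#
#     operations = [
#         {'tensor_op_name': 'Decode', 'tensor_op_params': None},
#         {'tensor_op_name': 'Resize',
#          'tensor_op_params': {'size': [224, 224], 'interpolation': 2}},  # 2 -> Inter.CUBIC
#     ]
#     ops = construct_tensor_ops(operations)
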
def to_policy(op_list):
    """Convert a serialized auto-augment policy (nested lists of (op, prob) entries) back to tensor ops."""
    policy_tensor_ops = []
    for policy_list in op_list:
        sub_policy_tensor_ops = []
        for policy_item in policy_list:
            sub_policy_tensor_ops.append(
                (construct_tensor_ops(policy_item.get('tensor_op')), policy_item.get('prob')))
        policy_tensor_ops.append(sub_policy_tensor_ops)
    return policy_tensor_ops

def to_shuffle_mode(shuffle):
    """Map the serialized shuffle flag back to its string form (2 -> "global", 1 -> "file")."""
    if shuffle == 2:
        return "global"
    if shuffle == 1:
        return "file"
    return False

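# Callers wrap the returned string in the de.Shuffle enum, as the dataset
# branches in create_dataset_node() do:
#
#     shuffle = to_shuffle_mode(2)       # "global"
#     if isinstance(shuffle, str):
#         shuffle = de.Shuffle(shuffle)  # de.Shuffle.GLOBAL
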
def to_interpolation_mode(inter):
    """Map the serialized interpolation value to the Inter enum."""
    return {
        0: Inter.LINEAR,
        1: Inter.NEAREST,
        2: Inter.CUBIC,
        3: Inter.AREA
    }[inter]


def to_border_mode(border):
    """Map the serialized padding value to the Border enum."""
    return {
        0: Border.CONSTANT,
        1: Border.EDGE,
        2: Border.REFLECT,
        3: Border.SYMMETRIC
    }[border]


def to_mstype(data_type):
    """Map the serialized dtype string to the corresponding MindSpore dtype."""
    return {
        "bool": mstype.bool_,
        "int8": mstype.int8,
        "int16": mstype.int16,
        "int32": mstype.int32,
        "int64": mstype.int64,
        "uint8": mstype.uint8,
        "uint16": mstype.uint16,
        "uint32": mstype.uint32,
        "uint64": mstype.uint64,
        "float16": mstype.float16,
        "float32": mstype.float32,
        "float64": mstype.float64,
        "string": mstype.string
    }[data_type]


def to_image_batch_format(image_batch_format):
    """Map the serialized batch-format value to the ImageBatchFormat enum."""
    return {
        0: ImageBatchFormat.NHWC,
        1: ImageBatchFormat.NCHW
    }[image_batch_format]

def check_and_replace_input(input_value, expect, replace):
    """Return `replace` when `input_value` equals `expect`; otherwise return `input_value` unchanged."""
    return replace if input_value == expect else input_value
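# Example: a serialized num_samples of 0 means "no sample limit", which the
# Python API expresses as None:
#
#     check_and_replace_input(0, 0, None)    # -> None
#     check_and_replace_input(100, 0, None)  # -> 100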