We need five or seven parameters for this script.
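For orientation only: a five-or-seven-parameter launch usually means five required arguments plus two optional trailing ones. The script name and every parameter in the sketch below are assumptions modeled on typical MindSpore model_zoo distributed-training launchers, not taken from this repository; check the repository's launch script for the real interface.

```shell
# Hypothetical sketch; the script and parameter names are assumptions.
# Five required parameters:
bash run_distribute_train.sh 8 500 0.2 coco /path/to/rank_table.json
# Seven parameters, adding two optional ones (e.g. a pretrained checkpoint and its epoch count):
bash run_distribute_train.sh 8 500 0.2 coco /path/to/rank_table.json /path/to/pretrained.ckpt 200
```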
```
epoch: 1 step: 3, loss is 476.802
epoch: 1 step: 458, loss is 3.1283689
epoch time: 150753.701, per step time: 329.157
...
```
#### Transfer Training
You can train your own model based on either a pretrained classification model or a pretrained detection model. You can perform transfer training by the following steps.
1. Convert your own dataset to the COCO or VOC format. Otherwise you have to add your own data preprocessing code.
2. Change config.py according to your own dataset, especially the `num_classes`.
3. Set the argument `filter_weight` to `True` when calling `train.py`; this filters out the final detection box weights from the pretrained model (see the sketch after this list).
4. Build your own bash scripts with the new config and arguments for convenience.
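For illustration, a transfer-training launch might look like the sketch below. Only `filter_weight` is documented in the steps above; the remaining argument names are assumptions and may be spelled differently in this repository.

```shell
# Hedged example: only --filter_weight comes from the steps above;
# the other arguments are assumptions and may differ in this repository.
python train.py \
    --dataset=coco \
    --filter_weight=True \
    --pre_trained=/path/to/pretrained_detection.ckpt
```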
### [Evaluation Process](#contents)
#### Evaluation on Ascend
```shell
sh run_eval.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
```
## Inference Process
`EXPORT_FORMAT` should be in ["AIR", "MINDIR"]
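For reference, the export step that produces the MINDIR file typically looks like the sketch below. The script name `export.py` and its argument names are assumptions based on common MindSpore model_zoo export scripts and may differ in this repository.

```shell
# Hedged example: argument names are assumptions, not confirmed by this README.
python export.py --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format MINDIR
```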
### Infer on Ascend310
Before performing inference, the MINDIR file must be exported by the export script on the 910 environment. We only provide an example of inference using the MINDIR model.
Current batch_size can only be set to 1. The precision calculation process needs about 70 GB of memory or more; otherwise the process will be killed for exceeding the memory limit. The inference result will be just the network outputs, which are saved in a binary file; the accuracy is calculated by `src/metric`.
```shell
# Ascend310 inference
sh run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID]