# Demo of Object Detection

The following describes how to use the MindSpore Lite C++ APIs (Android JNI) and MindSpore Lite object detection models to perform on-device inference, detect the content captured by a device camera, and display the most probable detection results on the application's image preview screen.

## Deploying an Application

The following section describes how to build and execute an on-device object detection task on MindSpore Lite.

### Running Dependencies

- Android Studio 3.2 or later (Android Studio 4.0 or later is recommended.)
- Native Development Kit (NDK) 21.3
- CMake 3.10.2
- Android Software Development Kit (SDK) 26 or later
- OpenCV 4.0.0 or later (included in the sample code)

### Building and Running
1. Load the sample source code to Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio automatically installs the SDK.)

    ![start_home](images/home.png)

    If you have any Android Studio configuration problems when trying this demo, please refer to item 4 below to resolve them.

2. Connect an Android device and run the object detection application.

    Connect to the Android device through a USB cable for debugging. Click `Run 'app'` to run the sample project on your device.

    ![run_app](images/project_structure.png)

    For details about how to connect Android Studio to a device for debugging, see <https://developer.android.com/studio/run/device>.

3. Continue the installation on the Android device. After the installation is complete, you can view the content captured by a camera and the inference result.

    ![result](images/object_detection.png)

4. Solutions to Android Studio configuration problems:
    | No. | Warning | Solution |
    | ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
    | 1 | Gradle sync failed: NDK not configured. | Specify the installed NDK directory in `local.properties`: `ndk.dir={NDK installation directory}` |
    | 2 | Requested NDK version did not match the version requested by ndk.dir | Manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) and specify its directory in `Project Structure` - `Android NDK location`. (See the figure below.) |
    | 3 | This version of Android Studio cannot open this project, please retry with Android Studio or newer. | Update Android Studio in `Tools` - `Help` - `Check for Updates`. |
    | 4 | SSL peer shut down incorrectly | Run this demo again. |

    ![project_structure](images/project_structure.png)
## Detailed Description of the Sample Program

This object detection sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed.
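For orientation, the Java layer reaches the JNI layer through a native method. The sketch below shows the shape of such a bridge; the function name and parameters here are hypothetical, since the real exports live in `src/cpp/MindSporeNetnative.cpp` and are named after the sample's Java package and class.

```cpp
#include <jni.h>

// Hypothetical JNI bridge, for illustration only. The actual export in
// src/cpp/MindSporeNetnative.cpp is named after the sample's Java package/class.
extern "C" JNIEXPORT jstring JNICALL
Java_com_example_objectdetect_TrackingMobile_runNet(JNIEnv *env, jclass /*clazz*/,
                                                    jlong netEnv, jobject srcBitmap) {
    // 1. Preprocess the Bitmap into the model's input tensor.
    // 2. Run inference through the stored LiteSession.
    // 3. Post-process the outputs and return an encoded result string.
    return env->NewStringUTF("");
}
```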
### Configuring MindSpore Lite Dependencies

When MindSpore Lite C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/tutorial/lite/en/master/use/build.html) to generate the MindSpore Lite package. In this case, you need to use the build command that generates the image preprocessing module.

In this example, the build process automatically downloads the `mindspore-lite-1.0.1-runtime-arm64-cpu` package via the `app/download.gradle` script and saves it in the `app/src/main/cpp` directory.

Note: if the automatic download fails, please manually download the relevant library files and put them in the corresponding location.

mindspore-lite-1.0.1-runtime-arm64-cpu.tar.gz [Download link](https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.0.1/lite/android_aarch64/mindspore-lite-1.0.1-runtime-arm64-cpu.tar.gz)
In the `app/build.gradle` file, set the C++ standard library and the target ABI:

```text
android {
    defaultConfig {
        externalNativeBuild {
            cmake {
                arguments "-DANDROID_STL=c++_shared"
            }
        }
        ndk {
            abiFilters 'arm64-v8a'
        }
    }
}
```
Create a link to the `.so` library file in the `app/CMakeLists.txt` file:
```text
# Set MindSpore Lite Dependencies.
set(MINDSPORELITE_VERSION mindspore-lite-1.0.1-runtime-arm64-cpu)
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION})
add_library(mindspore-lite SHARED IMPORTED)
add_library(minddata-lite SHARED IMPORTED)
set_target_properties(mindspore-lite PROPERTIES IMPORTED_LOCATION
        ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libmindspore-lite.so)
set_target_properties(minddata-lite PROPERTIES IMPORTED_LOCATION
        ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libminddata-lite.so)

# Link target library.
target_link_libraries(
    ...
    mindspore-lite
    minddata-lite
    ...
)
```
### Downloading and Deploying a Model File

In this example, the `app/download.gradle` script automatically downloads `ssd.ms` and places it in the `app/libs/arm64-v8a` directory.

Note: if the automatic download fails, please manually download the model file and put it in the corresponding location.

ssd.ms [Download link](https://download.mindspore.cn/model_zoo/official/lite/ssd_mobilenetv2_lite/ssd.ms)
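In the sample itself, the model bytes arrive at the native layer as a Java DirectByteBuffer. If you deploy the model as a plain file instead, reading it into memory is straightforward; a minimal sketch, assuming a readable absolute path:

```cpp
#include <fstream>
#include <string>
#include <vector>

// Minimal sketch: read a .ms model file into memory. The sample instead
// receives the model bytes from the Java layer as a DirectByteBuffer.
std::vector<char> ReadModelFile(const std::string &path) {
    std::ifstream ifs(path, std::ios::binary | std::ios::ate);
    if (!ifs) return {};                 // File missing or unreadable.
    std::streamsize size = ifs.tellg();  // Position at end == file size.
    ifs.seekg(0, std::ios::beg);
    std::vector<char> buffer(static_cast<size_t>(size));
    ifs.read(buffer.data(), size);
    return buffer;
}
```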
### Compiling On-Device Inference Code

Call MindSpore Lite C++ APIs at the JNI layer to implement on-device inference.

The inference code process is as follows. For details about the complete code, see `src/cpp/MindSporeNetnative.cpp`.

1. Load the MindSpore Lite model file and build the context, session, and computational graph for inference.

- Load a model file. Create and configure the context for model inference.
```cpp
// Buffer is the model data passed in by the Java layer.
jlong bufferLen = env->GetDirectBufferCapacity(buffer);
char *modelBuffer = CreateLocalModelBuffer(env, buffer);
```
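`CreateLocalModelBuffer` is a helper in the sample whose body is not shown in this excerpt. A plausible implementation, under the assumption that it simply copies the Java direct buffer into natively owned memory:

```cpp
#include <jni.h>
#include <cstring>

// Assumed implementation of the sample's helper: copy the Java direct buffer
// into memory owned by the native layer so it outlives the JNI call.
char *CreateLocalModelBuffer(JNIEnv *env, jobject modelBuffer) {
    jbyte *src = static_cast<jbyte *>(env->GetDirectBufferAddress(modelBuffer));
    jlong len = env->GetDirectBufferCapacity(modelBuffer);
    if (src == nullptr || len <= 0) return nullptr;
    char *buffer = new char[len];
    memcpy(buffer, src, static_cast<size_t>(len));
    return buffer;
}
```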
- Create a session.

```cpp
void **labelEnv = new void *;
MSNetWork *labelNet = new MSNetWork;
*labelEnv = labelNet;

// Create the context.
lite::Context *context = new lite::Context;
context->device_ctx_.type = lite::DT_CPU;
context->thread_num_ = numThread;  // Specify the number of threads to run inference.

// Create the MindSpore session.
labelNet->CreateSessionMS(modelBuffer, bufferLen, "device label", context);
delete context;
```
- Load the model file and build a computational graph for inference.

```cpp
void MSNetWork::CreateSessionMS(char *modelBuffer, size_t bufferLen, std::string name,
                                mindspore::lite::Context *ctx) {
    session = mindspore::session::LiteSession::CreateSession(ctx);
    auto model = mindspore::lite::Model::Import(modelBuffer, bufferLen);
    int ret = session->CompileGraph(model);
}
```
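The snippet above omits error handling for brevity. A hedged sketch of the same flow with basic checks, assuming MindSpore Lite 1.x where `mindspore::lite::RET_OK` (from `include/errorcode.h`) signals success:

```cpp
// Same graph-build flow with basic error checks (assumes MindSpore Lite 1.x).
void MSNetWork::CreateSessionMS(char *modelBuffer, size_t bufferLen, std::string name,
                                mindspore::lite::Context *ctx) {
    session = mindspore::session::LiteSession::CreateSession(ctx);
    if (session == nullptr) {
        MS_PRINT("CreateSession failed");
        return;
    }
    auto model = mindspore::lite::Model::Import(modelBuffer, bufferLen);
    if (model == nullptr) {
        MS_PRINT("Model::Import failed");
        return;
    }
    int ret = session->CompileGraph(model);
    if (ret != mindspore::lite::RET_OK) {
        MS_PRINT("CompileGraph failed: %d", ret);
    }
}
```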
2. Pre-process the image data and convert the input image into the Tensor format of the MindSpore model.

```cpp
// Convert the Bitmap image passed in from the Java layer to LiteMat for processing.
LiteMat lite_mat_bgr, lite_norm_mat_cut;
if (!BitmapToLiteMat(env, srcBitmap, lite_mat_bgr)) {
    MS_PRINT("BitmapToLiteMat error");
    return NULL;
}
int srcImageWidth = lite_mat_bgr.width_;
int srcImageHeight = lite_mat_bgr.height_;
if (!PreProcessImageData(lite_mat_bgr, &lite_norm_mat_cut)) {
    MS_PRINT("PreProcessImageData error");
    return NULL;
}
ImgDims inputDims;
inputDims.channel = lite_norm_mat_cut.channel_;
inputDims.width = lite_norm_mat_cut.width_;
inputDims.height = lite_norm_mat_cut.height_;

// Get the MindSpore inference environment created in loadModel().
void **labelEnv = reinterpret_cast<void **>(netEnv);
if (labelEnv == nullptr) {
    MS_PRINT("MindSpore error, labelEnv is a nullptr.");
    return NULL;
}
MSNetWork *labelNet = static_cast<MSNetWork *>(*labelEnv);
auto mSession = labelNet->session;
if (mSession == nullptr) {
    MS_PRINT("MindSpore error, Session is a nullptr.");
    return NULL;
}
MS_PRINT("MindSpore get session.");

auto msInputs = mSession->GetInputs();
auto inTensor = msInputs.front();
float *dataHWC = reinterpret_cast<float *>(lite_norm_mat_cut.data_ptr_);
// Copy the preprocessed data into the input tensor.
memcpy(inTensor->MutableData(), dataHWC,
       inputDims.channel * inputDims.width * inputDims.height * sizeof(float));
delete[] (dataHWC);
```
3. The input image must be in NHWC format (1, 300, 300, 3).

```cpp
bool PreProcessImageData(const LiteMat &lite_mat_bgr, LiteMat *lite_norm_mat_ptr) {
    bool ret = false;
    LiteMat lite_mat_resize;
    LiteMat &lite_norm_mat_cut = *lite_norm_mat_ptr;
    ret = ResizeBilinear(lite_mat_bgr, lite_mat_resize, 300, 300);
    if (!ret) {
        MS_PRINT("ResizeBilinear error");
        return false;
    }
    LiteMat lite_mat_convert_float;
    ret = ConvertTo(lite_mat_resize, lite_mat_convert_float, 1.0 / 255.0);
    if (!ret) {
        MS_PRINT("ConvertTo error");
        return false;
    }
    // ImageNet channel means and reciprocal standard deviations.
    float means[3] = {0.485, 0.456, 0.406};
    float vars[3] = {1.0 / 0.229, 1.0 / 0.224, 1.0 / 0.225};
    SubStractMeanNormalize(lite_mat_convert_float, lite_norm_mat_cut, means, vars);
    return true;
}
```
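`ConvertTo` scales pixels to [0, 1] and `SubStractMeanNormalize` then applies `(x - mean) * var` per channel, with `vars` holding reciprocal standard deviations, so each pixel effectively becomes `(x / 255 - mean) / std`. A scalar sketch of the combined effect (illustration only, not a sample API):

```cpp
#include <cstdint>

// Scalar illustration of the combined ConvertTo + SubStractMeanNormalize step:
// scale to [0, 1], subtract the channel mean, divide by the channel std.
float NormalizePixel(uint8_t raw, int channel) {
    static const float means[3] = {0.485f, 0.456f, 0.406f};
    static const float stds[3] = {0.229f, 0.224f, 0.225f};
    float scaled = raw / 255.0f;  // Matches ConvertTo(..., 1.0 / 255.0).
    return (scaled - means[channel]) / stds[channel];
}
```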
4. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.

Perform graph execution and on-device inference.

```cpp
// After the model and image tensor data are loaded, run inference.
auto status = mSession->RunGraph();
```
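`RunGraph` returns a status code that the excerpt does not check; a minimal check, assuming MindSpore Lite 1.x where `mindspore::lite::RET_OK` indicates success:

```cpp
// Minimal status check after inference (assumes MindSpore Lite 1.x).
auto status = mSession->RunGraph();
if (status != mindspore::lite::RET_OK) {
    MS_PRINT("MindSpore run net error.");
    return NULL;
}
```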
Obtain the output data.

```cpp
auto names = mSession->GetOutputTensorNames();
typedef std::unordered_map<std::string,
        std::vector<mindspore::tensor::MSTensor *>> Msout;
std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs;
for (const auto &name : names) {
    auto temp_dat = mSession->GetOutputByTensorName(name);
    msOutputs.insert(std::pair<std::string, mindspore::tensor::MSTensor *>{name, temp_dat});
}
std::string retStr = ProcessRunnetResult(msOutputs, ret);
```
The model outputs the object category scores with shape (1, 1917, 81) and the object location offsets with shape (1, 1917, 4). The offsets are decoded against the default boxes generated by the `getDefaultBoxes` function to obtain the object locations.
```cpp
void SSDModelUtil::getDefaultBoxes() {
    float fk[6] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0};
    std::vector<struct WHBox> all_sizes;
    struct Product mProductData[19 * 19] = {0};

    for (int i = 0; i < 6; i++) {
        fk[i] = config.model_input_height / config.steps[i];
    }
    float scale_rate =
        (config.max_scale - config.min_scale) / (sizeof(config.num_default) / sizeof(int) - 1);
    float scales[7] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0};
    for (int i = 0; i < sizeof(config.num_default) / sizeof(int); i++) {
        scales[i] = config.min_scale + scale_rate * i;
    }

    for (int idex = 0; idex < sizeof(config.feature_size) / sizeof(int); idex++) {
        float sk1 = scales[idex];
        float sk2 = scales[idex + 1];
        float sk3 = sqrt(sk1 * sk2);
        struct WHBox tempWHBox;
        all_sizes.clear();

        if (idex == 0) {
            float w = sk1 * sqrt(2);
            float h = sk1 / sqrt(2);
            tempWHBox.boxw = 0.1;
            tempWHBox.boxh = 0.1;
            all_sizes.push_back(tempWHBox);
            tempWHBox.boxw = w;
            tempWHBox.boxh = h;
            all_sizes.push_back(tempWHBox);
            tempWHBox.boxw = h;
            tempWHBox.boxh = w;
            all_sizes.push_back(tempWHBox);
        } else {
            tempWHBox.boxw = sk1;
            tempWHBox.boxh = sk1;
            all_sizes.push_back(tempWHBox);
            for (int j = 0; j < sizeof(config.aspect_ratios[idex]) / sizeof(int); j++) {
                float w = sk1 * sqrt(config.aspect_ratios[idex][j]);
                float h = sk1 / sqrt(config.aspect_ratios[idex][j]);
                tempWHBox.boxw = w;
                tempWHBox.boxh = h;
                all_sizes.push_back(tempWHBox);
                tempWHBox.boxw = h;
                tempWHBox.boxh = w;
                all_sizes.push_back(tempWHBox);
            }
            tempWHBox.boxw = sk3;
            tempWHBox.boxh = sk3;
            all_sizes.push_back(tempWHBox);
        }

        for (int i = 0; i < config.feature_size[idex]; i++) {
            for (int j = 0; j < config.feature_size[idex]; j++) {
                mProductData[i * config.feature_size[idex] + j].x = i;
                mProductData[i * config.feature_size[idex] + j].y = j;
            }
        }
        int productLen = config.feature_size[idex] * config.feature_size[idex];

        for (int i = 0; i < productLen; i++) {
            for (int j = 0; j < all_sizes.size(); j++) {
                struct NormalBox tempBox;
                float cx = (mProductData[i].y + 0.5) / fk[idex];
                float cy = (mProductData[i].x + 0.5) / fk[idex];
                tempBox.y = cy;
                tempBox.x = cx;
                tempBox.h = all_sizes[j].boxh;
                tempBox.w = all_sizes[j].boxw;
                mDefaultBoxes.push_back(tempBox);
            }
        }
    }
}
```
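The decoding step shown later calls `ssd_boxes_decode` to combine these default boxes with the predicted offsets. Its body is not part of this excerpt; the sketch below follows the common SSD convention, with prior scaling factors of 0.1 for centers and 0.2 for sizes, which is an assumption to verify against the sample source.

```cpp
#include <algorithm>
#include <cmath>

// Hedged sketch of SSD box decoding (prior scaling of 0.1 / 0.2 is assumed).
// Converts center-offset predictions into corner-format boxes clamped to [0, 1].
void SSDModelUtil::ssd_boxes_decode(const NormalBox *boxes, YXBoxes *decoded_boxes) {
    const float scale0 = 0.1f, scale1 = 0.2f;  // Assumed common SSD defaults.
    for (int i = 0; i < 1917; ++i) {
        float cy = boxes[i].y * scale0 * mDefaultBoxes[i].h + mDefaultBoxes[i].y;
        float cx = boxes[i].x * scale0 * mDefaultBoxes[i].w + mDefaultBoxes[i].x;
        float h = std::exp(boxes[i].h * scale1) * mDefaultBoxes[i].h;
        float w = std::exp(boxes[i].w * scale1) * mDefaultBoxes[i].w;
        decoded_boxes[i].ymin = std::min(1.0f, std::max(0.0f, cy - h / 2));
        decoded_boxes[i].xmin = std::min(1.0f, std::max(0.0f, cx - w / 2));
        decoded_boxes[i].ymax = std::min(1.0f, std::max(0.0f, cy + h / 2));
        decoded_boxes[i].xmax = std::min(1.0f, std::max(0.0f, cx + w / 2));
    }
}
```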
- The highest-scoring boxes per category are selected by the `nonMaximumSuppression` function.

```cpp
void SSDModelUtil::nonMaximumSuppression(const YXBoxes *const decoded_boxes,
                                         const float *const scores,
                                         const std::vector<int> &in_indexes,
                                         std::vector<int> &out_indexes, const float nmsThreshold,
                                         const int count, const int max_results) {
    int nR = 0;  // Number of results.
    std::vector<bool> del(count, false);
    for (size_t i = 0; i < in_indexes.size(); i++) {
        if (!del[in_indexes[i]]) {
            out_indexes.push_back(in_indexes[i]);
            if (++nR == max_results) {
                break;
            }
            for (size_t j = i + 1; j < in_indexes.size(); j++) {
                const auto boxi = decoded_boxes[in_indexes[i]], boxj = decoded_boxes[in_indexes[j]];
                float a[4] = {boxi.xmin, boxi.ymin, boxi.xmax, boxi.ymax};
                float b[4] = {boxj.xmin, boxj.ymin, boxj.xmax, boxj.ymax};
                // Suppress box j if it overlaps a kept box above the threshold.
                if (IOU(a, b) > nmsThreshold) {
                    del[in_indexes[j]] = true;
                }
            }
        }
    }
}
```
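The `IOU(a, b)` helper used above is not shown in this excerpt; a standard intersection-over-union for boxes given as `{xmin, ymin, xmax, ymax}` arrays, provided here as an assumed implementation:

```cpp
#include <algorithm>

// Assumed IOU helper: intersection over union for boxes in
// {xmin, ymin, xmax, ymax} order, as consumed by nonMaximumSuppression.
float IOU(const float a[4], const float b[4]) {
    float interW = std::min(a[2], b[2]) - std::max(a[0], b[0]);
    float interH = std::min(a[3], b[3]) - std::max(a[1], b[1]);
    if (interW <= 0.0f || interH <= 0.0f) return 0.0f;  // No overlap.
    float interArea = interW * interH;
    float areaA = (a[2] - a[0]) * (a[3] - a[1]);
    float areaB = (b[2] - b[0]) * (b[3] - b[1]);
    return interArea / (areaA + areaB - interArea);
}
```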
- For targets whose probability exceeds the class threshold, the candidate boxes are filtered by the NMS algorithm, and the output rectangles are then scaled back to the original image size.

```cpp
std::string SSDModelUtil::getDecodeResult(float *branchScores, float *branchBoxData) {
    std::string result = "";
    NormalBox tmpBox[1917] = {0};
    float mScores[1917][81] = {0};
    float outBuff[1917][7] = {0};
    float scoreWithOneClass[1917] = {0};
    int outBoxNum = 0;
    YXBoxes decodedBoxes[1917] = {0};

    // Copy the branch output box data to tmpBox.
    for (int i = 0; i < 1917; ++i) {
        tmpBox[i].y = branchBoxData[i * 4 + 0];
        tmpBox[i].x = branchBoxData[i * 4 + 1];
        tmpBox[i].h = branchBoxData[i * 4 + 2];
        tmpBox[i].w = branchBoxData[i * 4 + 3];
    }
    // Copy the branch output scores to mScores.
    for (int i = 0; i < 1917; ++i) {
        for (int j = 0; j < 81; ++j) {
            mScores[i][j] = branchScores[i * 81 + j];
        }
    }

    // Decode the offsets against the default boxes, then apply per-class NMS.
    ssd_boxes_decode(tmpBox, decodedBoxes);
    const float nms_threshold = 0.3;
    for (int i = 1; i < 81; i++) {
        std::vector<int> in_indexes;
        for (int j = 0; j < 1917; j++) {
            scoreWithOneClass[j] = mScores[j][i];
            if (mScores[j][i] > g_thres_map[i]) {
                in_indexes.push_back(j);
            }
        }
        if (in_indexes.size() == 0) {
            continue;
        }
        sort(in_indexes.begin(), in_indexes.end(),
             [&](int a, int b) { return scoreWithOneClass[a] > scoreWithOneClass[b]; });
        std::vector<int> out_indexes;
        nonMaximumSuppression(decodedBoxes, scoreWithOneClass, in_indexes, out_indexes,
                              nms_threshold);
        // Restore the kept boxes to the original image size.
        for (int k = 0; k < out_indexes.size(); k++) {
            outBuff[outBoxNum][0] = out_indexes[k];                     // image id
            outBuff[outBoxNum][1] = i;                                  // label id
            outBuff[outBoxNum][2] = scoreWithOneClass[out_indexes[k]];  // score
            outBuff[outBoxNum][3] = decodedBoxes[out_indexes[k]].xmin * inputImageWidth / 300;
            outBuff[outBoxNum][4] = decodedBoxes[out_indexes[k]].ymin * inputImageHeight / 300;
            outBuff[outBoxNum][5] = decodedBoxes[out_indexes[k]].xmax * inputImageWidth / 300;
            outBuff[outBoxNum][6] = decodedBoxes[out_indexes[k]].ymax * inputImageHeight / 300;
            outBoxNum++;
        }
    }
    MS_PRINT("outBoxNum %d", outBoxNum);

    // Encode each detection as "imageId_label_score_xmin_ymin_xmax_ymax;".
    for (int i = 0; i < outBoxNum; ++i) {
        std::string tmpid_str = std::to_string(outBuff[i][0]);
        result += tmpid_str;  // image ID
        result += "_";
        MS_PRINT("label_classes i %d, outBuff %d", i, (int) outBuff[i][1]);
        tmpid_str = label_classes[(int) outBuff[i][1]];
        result += tmpid_str;  // label
        result += "_";
        tmpid_str = std::to_string(outBuff[i][2]);
        result += tmpid_str;  // score
        result += "_";
        tmpid_str = std::to_string(outBuff[i][3]);
        result += tmpid_str;  // xmin
        result += "_";
        tmpid_str = std::to_string(outBuff[i][4]);
        result += tmpid_str;  // ymin
        result += "_";
        tmpid_str = std::to_string(outBuff[i][5]);
        result += tmpid_str;  // xmax
        result += "_";
        tmpid_str = std::to_string(outBuff[i][6]);
        result += tmpid_str;  // ymax
        result += ";";
    }
    return result;
}
```
```
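The returned string encodes one detection per `;`-separated record with `_`-separated fields (`imageId_label_score_xmin_ymin_xmax_ymax`). The sample parses this at the Java layer after the JNI call returns; the sketch below shows the same parsing in C++ for illustration:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Illustration only: split the encoded result string back into records.
// Each record is "imageId_label_score_xmin_ymin_xmax_ymax".
std::vector<std::vector<std::string>> ParseResult(const std::string &encoded) {
    std::vector<std::vector<std::string>> records;
    std::stringstream recStream(encoded);
    std::string record;
    while (std::getline(recStream, record, ';')) {
        std::stringstream fieldStream(record);
        std::string field;
        std::vector<std::string> fields;
        while (std::getline(fieldStream, field, '_')) fields.push_back(field);
        if (!fields.empty()) records.push_back(fields);
    }
    return records;
}
```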