## Demo_image_classification

The following describes how to use the MindSpore Lite C++ APIs (Android JNI) and MindSpore Lite image classification models to perform on-device inference, classify the content captured by a device camera, and display the most likely classification result on the application's image preview screen.

### Running Dependencies

- Android Studio 3.2 or later (Android Studio 4.0 or later is recommended)
- Native Development Kit (NDK) 21.3
- [CMake](https://cmake.org/download) 3.10.2
- Android software development kit (SDK) 26 or later
- [JDK](https://www.oracle.com/downloads/otn-pub/java/JDK/) 1.8 or later

### Building and Running
1. Load the sample source code to Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio automatically installs the SDK.)

    ![start_home](images/home.png)

    Start Android Studio, click `File > Settings > System Settings > Android SDK`, and select the corresponding SDK. As shown in the following figure, select an SDK and click `OK`. Android Studio automatically installs the SDK.

    ![start_sdk](images/sdk_management.png)

    (Optional) If an NDK version issue occurs during the installation, manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) (the version used in the sample code is 21.3) and specify the NDK location in `Android NDK location` of `Project Structure`.

    ![project_structure](images/project_structure.png)

2. Connect to an Android device and run the image classification application.

    Connect to the Android device through a USB cable for debugging. Click `Run 'app'` to run the sample project on your device.

    ![run_app](images/run_app.PNG)

    For details about how to connect Android Studio to a device for debugging, see <https://developer.android.com/studio/run/device?hl=zh-cn>.

    USB debugging mode must be enabled on the mobile phone before Android Studio can recognize it. On Huawei phones, USB debugging is generally enabled in `Settings > System & updates > Developer options > USB debugging`.

3. Continue the installation on the Android device. After the installation is complete, you can view the content captured by the camera and the inference result.

    ![result](images/app_result.jpg)
## Detailed Description of the Sample Program

This image classification sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed in [Runtime](https://www.mindspore.cn/lite/tutorial/en/master/use/runtime.html).
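The bridge between the two layers is the set of JNI functions exported from `src/main/cpp/MindSporeNetnative.cpp`. As a rough orientation, a binding of this kind has the following shape; the package, class, and method names below are illustrative placeholders, not the demo's actual identifiers:

```cpp
#include <jni.h>

// Illustrative only: a Java native method
//   com.example.demo.TrackingMobile#runNet(ByteBuffer model, Bitmap frame)
// maps to a C++ symbol of this form. The real exported functions are
// defined in src/main/cpp/MindSporeNetnative.cpp.
extern "C" JNIEXPORT jstring JNICALL
Java_com_example_demo_TrackingMobile_runNet(JNIEnv *env, jobject thiz,
                                            jobject model_buffer, jobject src_bitmap) {
  // 1. Convert the Java inputs (direct ByteBuffer, Bitmap) into C++ data.
  // 2. Run MindSpore Lite inference (the Runtime steps described below).
  // 3. Return the post-processed result string to the Java layer.
  return env->NewStringUTF("label:score;");
}
```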
### Sample Program Structure
```
app
├── src/main
│   ├── assets # resource files
│   │   └── mobilenetv2.ms # model file
│   │
│   ├── cpp # main logic encapsulation classes for model loading and prediction
│   │   ├── MindSporeNetnative.cpp # JNI methods related to MindSpore calling
│   │   └── MindSporeNetnative.h # header file
│   │
│   ├── java # application code at the Java layer
│   │   └── com.huawei.himindsporedemo
│   │       ├── gallery.classify # implementation related to image processing and MindSpore JNI calling
│   │       │   └── ...
│   │       └── widget # implementation related to camera enabling and drawing
│   │           └── ...
│   │
│   ├── res # resource files related to Android
│   └── AndroidManifest.xml # Android configuration file
├── CMakeLists.txt # CMake build entry file
├── build.gradle # other Android configuration file
├── download.gradle # downloads the MindSpore Lite libraries and the model file
└── ...
```
### Configuring MindSpore Lite Dependencies

When MindSpore Lite C++ APIs are called at the Android JNI layer, the related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/tutorial/en/master/build.html) to generate the MindSpore Lite library files. In `app/build.gradle`, configure the CMake build and the target ABIs:
  54. ```
  55. android{
  56. defaultConfig{
  57. externalNativeBuild{
  58. cmake{
  59. arguments "-DANDROID_STL=c++_shared"
  60. }
  61. }
  62. ndk{
  63. abiFilters'armeabi-v7a', 'arm64-v8a'
  64. }
  65. }
  66. }
  67. ```
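The `-DANDROID_STL=c++_shared` argument makes the app's native code link against the shared C++ runtime, which is the usual choice when prebuilt `.so` files such as the MindSpore Lite libraries are loaded into the same process; `abiFilters` limits packaging to the two ABIs for which those libraries are provided.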
Create a link to the `.so` library file in the `app/CMakeLists.txt` file:
```
# ============== Set MindSpore Dependencies. =============
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp)
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/third_party/flatbuffers/include)
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION})
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include)
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include/ir/dtype)
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include/schema)

add_library(mindspore-lite SHARED IMPORTED)
add_library(minddata-lite SHARED IMPORTED)

set_target_properties(mindspore-lite PROPERTIES IMPORTED_LOCATION
        ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libmindspore-lite.so)
set_target_properties(minddata-lite PROPERTIES IMPORTED_LOCATION
        ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libminddata-lite.so)
# --------------- MindSpore Lite set End. --------------------

# Link target library.
target_link_libraries(
    ...
    # --- mindspore ---
    minddata-lite
    mindspore-lite
    ...
)
```
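The two `add_library(... SHARED IMPORTED)` declarations tell CMake to link against the prebuilt `libmindspore-lite.so` and `libminddata-lite.so` downloaded by `download.gradle`, rather than building them as part of the app.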
* In this example, the `download.gradle` file is configured to automatically download the MindSpore Lite library files and place them in the `app/src/main/cpp/mindspore_lite_x.x.x-minddata-arm64-cpu` directory.

  Note: If the automatic download fails, manually download the relevant library files and put them in the corresponding location: [MindSpore Lite version](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/libmindspore-lite.so)
### Downloading and Deploying a Model File

In this example, the `download.gradle` file is configured to automatically download `mobilenetv2.ms` and place it in the `app/libs/arm64-v8a` directory.

Note: If the automatic download fails, manually download the model file and put it in the corresponding location: [mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms)
### Compiling On-Device Inference Code

Call MindSpore Lite C++ APIs at the JNI layer to implement on-device inference.

The inference code process is as follows. For details about the complete code, see `src/cpp/MindSporeNetnative.cpp`.

1. Load the MindSpore Lite model file and build the context, session, and computational graph for inference.

    - Load a model file. Create and configure the context for model inference.

      ```cpp
      // Buffer is the model data passed in by the Java layer.
      jlong bufferLen = env->GetDirectBufferCapacity(buffer);
      char *modelBuffer = CreateLocalModelBuffer(env, buffer);
      ```
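      `CreateLocalModelBuffer` is a helper defined in the sample's JNI code. A plausible implementation, sketched here under the assumption that the helper simply copies the direct `ByteBuffer` into a C++-owned buffer, looks like this:

      ```cpp
      #include <jni.h>
      #include <cstring>

      // Sketch: copy the Java direct ByteBuffer into a buffer owned by the
      // C++ side, so the model data stays valid independent of the lifetime
      // of the Java object.
      char *CreateLocalModelBuffer(JNIEnv *env, jobject modelBuffer) {
        jbyte *src = static_cast<jbyte *>(env->GetDirectBufferAddress(modelBuffer));
        jlong len = env->GetDirectBufferCapacity(modelBuffer);
        if (src == nullptr || len <= 0) {
          return nullptr;
        }
        char *buffer = new char[len];
        memcpy(buffer, src, len);
        return buffer;
      }
      ```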
    - Create a session.

      ```cpp
      void **labelEnv = new void *;
      MSNetWork *labelNet = new MSNetWork;
      *labelEnv = labelNet;

      // Create the context and configure the number of threads used for inference.
      mindspore::lite::Context *context = new mindspore::lite::Context;
      context->thread_num_ = num_thread;

      // Create the MindSpore session.
      labelNet->CreateSessionMS(modelBuffer, bufferLen, "device label", context);

      // The context is only needed while the session is created.
      delete context;
      ```
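      The `labelEnv` double pointer is presumably how the newly created `MSNetWork` object is handed back to the Java layer as an opaque handle (for example, cast to a `jlong`); the exact mechanism lives in `src/cpp/MindSporeNetnative.cpp`.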
    - Load the model file and build a computational graph for inference.

      ```cpp
      void MSNetWork::CreateSessionMS(char *modelBuffer, size_t bufferLen,
                                      std::string name, mindspore::lite::Context *ctx) {
        // Create the inference session from the context.
        session = mindspore::session::LiteSession::CreateSession(ctx);
        // Parse the model buffer and compile the computational graph.
        auto model = mindspore::lite::Model::Import(modelBuffer, bufferLen);
        int ret = session->CompileGraph(model);
        // Production code should verify that session is not null and that ret signals success.
      }
      ```
2. Convert the input image into the Tensor format of the MindSpore model.

    Convert the image data to be detected into the Tensor format of the MindSpore model.

    ```cpp
    // Convert the Bitmap image passed in from the Java layer to a Mat for OpenCV processing.
    BitmapToMat(env, srcBitmap, matImageSrc);
    // Preprocess the image, for example by resizing it to the model's input size.
    matImgPreprocessed = PreProcessImageData(matImageSrc);

    ImgDims inputDims;
    inputDims.channel = matImgPreprocessed.channels();
    inputDims.width = matImgPreprocessed.cols;
    inputDims.height = matImgPreprocessed.rows;
    float *dataHWC = new float[inputDims.channel * inputDims.width * inputDims.height];

    // Copy the image data to be detected into the dataHWC array.
    // The dataHWC[image_size] array is the intermediate variable for the input MindSpore model tensor.
    float *ptrTmp = reinterpret_cast<float *>(matImgPreprocessed.data);
    for (int i = 0; i < inputDims.channel * inputDims.width * inputDims.height; i++) {
      dataHWC[i] = ptrTmp[i];
    }

    // Assign dataHWC[image_size] to the input tensor variable.
    auto msInputs = mSession->GetInputs();
    auto inTensor = msInputs.front();
    memcpy(inTensor->MutableData(), dataHWC,
           inputDims.channel * inputDims.width * inputDims.height * sizeof(float));
    delete[] dataHWC;
    ```
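    Before the `memcpy`, it can be worth checking that the preprocessed image actually matches the tensor's expected size. A minimal sketch using `ElementsNum()`, the same accessor used for the output tensor below:

    ```cpp
    // Sketch: guard against a size mismatch between the preprocessed image
    // and the model's input tensor before copying the data.
    int expected = inTensor->ElementsNum();
    int actual = inputDims.channel * inputDims.width * inputDims.height;
    if (expected != actual) {
      MS_PRINT("Input size mismatch: tensor expects %d elements, image has %d.", expected, actual);
      // Handle the error, e.g. return early.
    }
    ```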
3. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.

    - Perform graph execution and on-device inference.

      ```cpp
      // After the model and image tensor data are loaded, run inference.
      auto status = mSession->RunGraph();
      // Production code should check the returned status (e.g. RET_OK) before reading outputs.
      ```
    - Obtain the output data.

      ```cpp
      auto names = mSession->GetOutputTensorNames();
      std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs;
      for (const auto &name : names) {
        auto temp_dat = mSession->GetOutputByTensorName(name);
        msOutputs.insert(std::pair<std::string, mindspore::tensor::MSTensor *>{name, temp_dat});
      }
      std::string retStr = ProcessRunnetResult(msOutputs, ret);
      ```
    - Perform post-processing of the output data.

      ```cpp
      std::string ProcessRunnetResult(std::unordered_map<std::string,
                                      mindspore::tensor::MSTensor *> msOutputs, int runnetRet) {
        std::unordered_map<std::string, mindspore::tensor::MSTensor *>::iterator iter;
        iter = msOutputs.begin();

        // The mobilenetv2.ms model has just one output branch.
        auto outputTensor = iter->second;
        int tensorNum = outputTensor->ElementsNum();
        MS_PRINT("Number of tensor elements:%d", tensorNum);

        // Get a pointer to the first score.
        float *temp_scores = static_cast<float *>(outputTensor->MutableData());
        float scores[RET_CATEGORY_SUM];
        for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
          if (temp_scores[i] > 0.5) {
            MS_PRINT("MindSpore scores[%d] : [%f]", i, temp_scores[i]);
          }
          scores[i] = temp_scores[i];
        }

        // Convert the score of each category into the text information
        // to be displayed in the app.
        std::string categoryScore = "";
        for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
          categoryScore += labels_name_map[i];
          categoryScore += ":";
          std::string score_str = std::to_string(scores[i]);
          categoryScore += score_str;
          categoryScore += ";";
        }
        return categoryScore;
      }
      ```
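      The returned string contains a score for every category. To display only the most likely classification result, as described in the introduction, the scores can be reduced to a single top-1 entry. A minimal sketch reusing the `RET_CATEGORY_SUM` and `labels_name_map` names from the code above, assuming `labels_name_map` holds `std::string` labels:

      ```cpp
      // Sketch: pick the category with the highest score (top-1).
      std::string TopOneLabel(const float *scores) {
        int best = 0;
        for (int i = 1; i < RET_CATEGORY_SUM; ++i) {
          if (scores[i] > scores[best]) {
            best = i;
          }
        }
        return labels_name_map[best] + ":" + std::to_string(scores[best]);
      }
      ```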