# MindSpore Lite Skeleton Detection Demo (Android)

This sample application demonstrates how to use the MindSpore Lite API and a skeleton detection model to perform on-device inference: it detects the content captured by the device camera and displays the continuous skeleton detection results on the app's image preview screen.

## Running Dependencies

- Android Studio 3.2 or later (Android Studio 4.0 or later is recommended)
- NDK 21.3
- CMake 3.10
- Android SDK 26 or later

## Building and Running

1. Load the sample source code into Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio installs it automatically.)

    ![start_home](images/home.png)

    Start Android Studio, click `File > Settings > System Settings > Android SDK`, and select the corresponding SDK. As shown in the following figure, select an SDK and click `OK`. Android Studio then installs the SDK automatically.

    ![start_sdk](images/sdk_management.png)

    If an Android Studio configuration error occurs, refer to the solution table in step 4 below.

2. Connect an Android device and run the skeleton detection sample application.

    Connect the Android device through a USB cable for debugging. Click `Run 'app'` to run the sample project on your device.

    > During the build, Android Studio automatically downloads the MindSpore Lite dependencies and model files. Please wait.

    ![run_app](images/run_app.PNG)

    For details about how to connect Android Studio to a device for debugging, see <https://developer.android.com/studio/run/device>.

3. Continue the installation on the Android device. After the installation is complete, you can view the content captured by the camera and the inference result.

    ![install](images/install.jpg)

    The following figure shows the output of the skeleton detection model. The blue points mark the detected facial features and limb joints used to track bone movement trends. In this example, the confidence score of the inference is 0.98 out of 1, and the inference latency is 66.77 ms.

    ![result](images/posenet_detection.png)

4. The following table lists solutions to common Android Studio configuration errors.

    | No. | Error | Solution |
    | ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
    | 1 | Gradle sync failed: NDK not configured. | Specify the NDK installation directory in the `local.properties` file: `ndk.dir={NDK installation directory}` |
    | 2 | Requested NDK version did not match the version requested by ndk.dir. | Manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) and specify its location in the `Android NDK location` field (see the following figure). |
    | 3 | This version of Android Studio cannot open this project, please retry with Android Studio or newer. | Choose `Help` > `Check for Updates` on the toolbar to update Android Studio. |
    | 4 | SSL peer shut down incorrectly | Rebuild the project. |

    ![project_structure](images/project_structure.png)

## Detailed Description of the Sample Application

The skeleton detection sample application on the Android device uses the Android Camera2 API to obtain image frames from the camera and preprocess them, and uses the MindSpore Lite [runtime](https://www.mindspore.cn/tutorial/lite/en/master/use/runtime.html) to perform model inference.

### Sample Application Structure

```text
├── app
│   ├── build.gradle            # Android build configuration file.
│   ├── download.gradle         # Downloads the dependent library files and model files from the Huawei server during the build.
│   ├── proguard-rules.pro
│   └── src
│       ├── main
│       │   ├── AndroidManifest.xml # Android configuration file.
│       │   ├── java            # Application code at the Java layer.
│       │   │   └── com
│       │   │       └── mindspore
│       │   │           └── posenetdemo # Image processing and inference process implementation.
│       │   │               ├── CameraDataDealListener.java
│       │   │               ├── ImageUtils.java
│       │   │               ├── MainActivity.java
│       │   │               ├── PoseNetFragment.java
│       │   │               ├── Posenet.java # Model loading and inference implementation.
│       │   │               └── TestActivity.java
│       │   └── res            # Resource files related to Android.
│       └── test
└── ...
```

### Downloading and Deploying the Model File

Download the model file from MindSpore Model Hub. The skeleton detection model file used in this sample application is `posenet_model.ms`, which is automatically downloaded by the `download.gradle` script during the build and stored in the `app/src/main/assets` project directory.

> If the download fails, manually download the model file [posenet_model.ms](https://download.mindspore.cn/model_zoo/official/lite/posenet_lite/posenet_model.ms).

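If you deploy the model file manually, a quick runtime check (a sketch, not part of the demo; `ModelAssets` is a hypothetical helper) can confirm that it was actually bundled into the APK assets:

```java
import android.content.Context;
import java.io.IOException;
import java.io.InputStream;

public final class ModelAssets {
    // Returns true if the given model file can be opened from the app assets.
    public static boolean isModelBundled(Context context, String fileName) {
        try (InputStream in = context.getAssets().open(fileName)) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```

For example, `ModelAssets.isModelBundled(this, "posenet_model.ms")` should return `true` before `loadModel` is called.
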
### Writing On-Device Inference Code

In the skeleton detection demo, the Java API is used to implement on-device inference. Compared with the C++ API, the Java API can be called directly from Java classes and requires no code at the JNI layer, which makes it more convenient.

- The following definitions identify body features such as the nose and eyes, record their positions, and hold the confidence scores used for skeleton detection; a usage sketch follows the definitions.

  ```java
  public enum BodyPart {
      NOSE,
      LEFT_EYE,
      RIGHT_EYE,
      LEFT_EAR,
      RIGHT_EAR,
      LEFT_SHOULDER,
      RIGHT_SHOULDER,
      LEFT_ELBOW,
      RIGHT_ELBOW,
      LEFT_WRIST,
      RIGHT_WRIST,
      LEFT_HIP,
      RIGHT_HIP,
      LEFT_KNEE,
      RIGHT_KNEE,
      LEFT_ANKLE,
      RIGHT_ANKLE
  }

  public class Position {
      int x;
      int y;
  }

  public class KeyPoint {
      BodyPart bodyPart = BodyPart.NOSE;
      Position position = new Position();
      float score = 0.0f;
  }

  public class Person {
      List<KeyPoint> keyPoints;
      float score = 0.0f;
  }
  ```

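  As a usage sketch (not taken from the demo), a caller can filter the returned key points by confidence before, for example, drawing them on the preview:

  ```java
  // Hypothetical helper: logs the key points whose confidence score
  // exceeds a threshold.
  static void logConfidentKeyPoints(Person person, float minScore) {
      for (KeyPoint kp : person.keyPoints) {
          if (kp.score >= minScore) {
              System.out.printf("%s at (%d, %d), score %.2f%n",
                      kp.bodyPart, kp.position.x, kp.position.y, kp.score);
          }
      }
  }
  ```
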
The inference process of the skeleton detection demo is as follows. For details about the complete code, see `src/main/java/com/mindspore/posenetdemo/Posenet.java`.

1. Load the MindSpore Lite model file and build the context, session, and computational graph for inference. (The four sub-steps below are combined into a single sketch at the end of this step.)

    - Loading a model: Read a MindSpore Lite model from the file system and parse it.

      ```java
      // Load the .ms model.
      model = new Model();
      if (!model.loadModel(mContext, "posenet_model.ms")) {
          Log.e("MS_LITE", "Load Model failed");
          return false;
      }
      ```

    - Creating a configuration context: Create the configuration context `MSConfig` and save the basic configuration parameters required by the session for guiding graph building and execution.

      ```java
      // Create and init config.
      msConfig = new MSConfig();
      if (!msConfig.init(DeviceType.DT_CPU, NUM_THREADS, CpuBindMode.MID_CPU)) {
          Log.e("MS_LITE", "Init context failed");
          return false;
      }
      ```

    - Creating a session: Create `LiteSession` and call the `init` method to bind the `MSConfig` created in the previous step to the session.

      ```java
      // Create the MindSpore lite session.
      session = new LiteSession();
      if (!session.init(msConfig)) {
          Log.e("MS_LITE", "Create session failed");
          msConfig.free();
          return false;
      }
      msConfig.free();
      ```

    - Load the model file and build the computational graph for inference.

      ```java
      // Compile graph.
      if (!session.compileGraph(model)) {
          Log.e("MS_LITE", "Compile graph failed");
          model.freeBuffer();
          return false;
      }
      // Note: after model.freeBuffer() is called, the model cannot be compiled again.
      model.freeBuffer();
      ```

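    The following is a minimal sketch that stitches the four sub-steps above into one initialization method (assuming `mContext` and `NUM_THREADS` are defined as in `Posenet.java`); it is illustrative rather than the demo's exact code:

    ```java
    private boolean initMindSporeLite() {
        // 1. Load the .ms model from the app assets.
        model = new Model();
        if (!model.loadModel(mContext, "posenet_model.ms")) {
            return false;
        }
        // 2. Configure CPU inference.
        msConfig = new MSConfig();
        if (!msConfig.init(DeviceType.DT_CPU, NUM_THREADS, CpuBindMode.MID_CPU)) {
            return false;
        }
        // 3. Create the session and free the config once it has been applied.
        session = new LiteSession();
        boolean ok = session.init(msConfig);
        msConfig.free();
        if (!ok) {
            return false;
        }
        // 4. Compile the graph, then release the model buffer.
        ok = session.compileGraph(model);
        model.freeBuffer();
        return ok;
    }
    ```
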
2. Input data. Currently, the Java API supports two input data types: `byte[]` and `ByteBuffer`. Set the data of the input tensor.

    - Before the data is input, the bitmap that stores the image information needs to be interpreted, analyzed, and converted.

      ```java
      /**
       * Scale the image to a byteBuffer of [-1,1] values.
       */
      private ByteBuffer initInputArray(Bitmap bitmap) {
          final int bytesPerChannel = 4;
          final int inputChannels = 3;
          final int batchSize = 1;
          ByteBuffer inputBuffer = ByteBuffer.allocateDirect(
                  batchSize * bytesPerChannel * bitmap.getHeight() * bitmap.getWidth() * inputChannels
          );
          inputBuffer.order(ByteOrder.nativeOrder());
          inputBuffer.rewind();

          final float mean = 128.0f;
          final float std = 128.0f;
          int[] intValues = new int[bitmap.getWidth() * bitmap.getHeight()];
          bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());

          // Unpack each ARGB pixel and normalize the R, G, and B channels to [-1, 1].
          int pixel = 0;
          for (int y = 0; y < bitmap.getHeight(); y++) {
              for (int x = 0; x < bitmap.getWidth(); x++) {
                  int value = intValues[pixel++];
                  inputBuffer.putFloat(((float) (value >> 16 & 0xFF) - mean) / std);
                  inputBuffer.putFloat(((float) (value >> 8 & 0xFF) - mean) / std);
                  inputBuffer.putFloat(((float) (value & 0xFF) - mean) / std);
              }
          }
          return inputBuffer;
      }
      ```

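      With `mean = std = 128.0f`, a channel value of 0 maps to (0 - 128) / 128 = -1.0 and a value of 255 maps to (255 - 128) / 128 ≈ 0.992, which gives the [-1, 1] range mentioned in the method comment.
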
    - Input data through `ByteBuffer`.

      ```java
      long estimationStartTimeNanos = SystemClock.elapsedRealtimeNanos();
      ByteBuffer inputArray = this.initInputArray(bitmap);

      List<MSTensor> inputs = session.getInputs();
      if (inputs.size() != 1) {
          return null;
      }
      Log.i("posenet", String.format("Scaling to [-1,1] took %.2f ms",
              1.0f * (SystemClock.elapsedRealtimeNanos() - estimationStartTimeNanos) / 1_000_000));

      // Feed the normalized pixel buffer into the single input tensor.
      MSTensor inTensor = inputs.get(0);
      inTensor.setData(inputArray);
      long inferenceStartTimeNanos = SystemClock.elapsedRealtimeNanos();
      ```

3. Perform inference on the input tensor based on the model, obtain the output tensors, and perform post-processing.

    - Use `runGraph` for model inference.

      ```java
      // Run graph to infer results.
      if (!session.runGraph()) {
          Log.e("MS_LITE", "Run graph failed");
          return null;
      }
      lastInferenceTimeNanos = SystemClock.elapsedRealtimeNanos() - inferenceStartTimeNanos;
      Log.i(
              "posenet",
              String.format("Interpreter took %.2f ms", 1.0f * lastInferenceTimeNanos / 1_000_000)
      );
      ```

    - Obtain the inference result from the output tensors.

      ```java
      // Get output tensor values.
      List<MSTensor> heatmaps_list = session.getOutputsByNodeName("Conv2D-27");
      if (heatmaps_list == null) {
          return null;
      }
      MSTensor heatmaps_tensors = heatmaps_list.get(0);

      float[] heatmaps_results = heatmaps_tensors.getFloatData();
      int[] heatmapsShape = heatmaps_tensors.getShape(); // 1, 9, 9, 17

      // Reshape the flat output into a [1][9][9][17] array. The flat index n
      // follows row-major (NHWC) order: for element (x, y, z, i),
      // n = i + z*17 + y*9*17 + x*9*9*17.
      float[][][][] heatmaps = new float[heatmapsShape[0]][][][];
      for (int x = 0; x < heatmapsShape[0]; x++) { // heatmapsShape[0] = 1
          float[][][] arrayThree = new float[heatmapsShape[1]][][];
          for (int y = 0; y < heatmapsShape[1]; y++) { // heatmapsShape[1] = 9
              float[][] arrayTwo = new float[heatmapsShape[2]][];
              for (int z = 0; z < heatmapsShape[2]; z++) { // heatmapsShape[2] = 9
                  float[] arrayOne = new float[heatmapsShape[3]]; // heatmapsShape[3] = 17
                  for (int i = 0; i < heatmapsShape[3]; i++) {
                      int n = i + z * heatmapsShape[3] + y * heatmapsShape[2] * heatmapsShape[3] + x * heatmapsShape[1] * heatmapsShape[2] * heatmapsShape[3];
                      arrayOne[i] = heatmaps_results[n];
                  }
                  arrayTwo[z] = arrayOne;
              }
              arrayThree[y] = arrayTwo;
          }
          heatmaps[x] = arrayThree;
      }
      ```

      ```java
      // The offset output is reshaped in the same way.
      List<MSTensor> offsets_list = session.getOutputsByNodeName("Conv2D-28");
      if (offsets_list == null) {
          return null;
      }
      MSTensor offsets_tensors = offsets_list.get(0);
      float[] offsets_results = offsets_tensors.getFloatData();
      int[] offsetsShapes = offsets_tensors.getShape();

      float[][][][] offsets = new float[offsetsShapes[0]][][][];
      for (int x = 0; x < offsetsShapes[0]; x++) {
          float[][][] offsets_arrayThree = new float[offsetsShapes[1]][][];
          for (int y = 0; y < offsetsShapes[1]; y++) {
              float[][] offsets_arrayTwo = new float[offsetsShapes[2]][];
              for (int z = 0; z < offsetsShapes[2]; z++) {
                  float[] offsets_arrayOne = new float[offsetsShapes[3]];
                  for (int i = 0; i < offsetsShapes[3]; i++) {
                      int n = i + z * offsetsShapes[3] + y * offsetsShapes[2] * offsetsShapes[3] + x * offsetsShapes[1] * offsetsShapes[2] * offsetsShapes[3];
                      offsets_arrayOne[i] = offsets_results[n];
                  }
                  offsets_arrayTwo[z] = offsets_arrayOne;
              }
              offsets_arrayThree[y] = offsets_arrayTwo;
          }
          offsets[x] = offsets_arrayThree;
      }
      ```

    - Process the output node data to obtain the return value `person` of the skeleton detection demo.

      From `Conv2D-27`, the `height`, `width`, and `numKeypoints` dimensions stored in `heatmaps` are used to obtain `keypointPosition`, the most likely grid cell for each key point.

      From `Conv2D-28`, `offsets` holds the position coordinate offsets. Its last dimension stores 2 × `numKeypoints` values, the y offsets first and then the x offsets, which is why the code below reads index `i + numKeypoints` for the x coordinate. Combining `keypointPosition` with the offsets yields the pixel coordinates, and applying `sigmoid` to the heatmap values yields `confidenceScores`, which determine the model inference result.

      Use `keypointPosition` and `confidenceScores` to fill in `person.keyPoints` and `person.score`, producing the model's return value `person`.

      ```java
      int height = heatmaps[0].length;             // 9
      int width = heatmaps[0][0].length;           // 9
      int numKeypoints = heatmaps[0][0][0].length; // 17

      // Finds the (row, col) locations of where the keypoints are most likely to be.
      Pair[] keypointPositions = new Pair[numKeypoints];
      for (int i = 0; i < numKeypoints; i++) {
          keypointPositions[i] = new Pair(0, 0);
      }

      for (int keypoint = 0; keypoint < numKeypoints; keypoint++) {
          float maxVal = heatmaps[0][0][0][keypoint];
          int maxRow = 0;
          int maxCol = 0;
          for (int row = 0; row < height; row++) {
              for (int col = 0; col < width; col++) {
                  if (heatmaps[0][row][col][keypoint] > maxVal) {
                      maxVal = heatmaps[0][row][col][keypoint];
                      maxRow = row;
                      maxCol = col;
                  }
              }
          }
          keypointPositions[keypoint] = new Pair(maxRow, maxCol);
      }

      // Calculating the x and y coordinates of the keypoints with offset adjustment.
      int[] xCoords = new int[numKeypoints];
      int[] yCoords = new int[numKeypoints];
      float[] confidenceScores = new float[numKeypoints];
      for (int i = 0; i < keypointPositions.length; i++) {
          Pair position = keypointPositions[i];
          int positionY = (int) position.first;
          int positionX = (int) position.second;
          // Map the grid cell back to image coordinates and add the predicted offset.
          yCoords[i] = (int) ((float) positionY / (float) (height - 1) * bitmap.getHeight() + offsets[0][positionY][positionX][i]);
          xCoords[i] = (int) ((float) positionX / (float) (width - 1) * bitmap.getWidth() + offsets[0][positionY][positionX][i + numKeypoints]);
          confidenceScores[i] = sigmoid(heatmaps[0][positionY][positionX][i]);
      }

      Person person = new Person();
      KeyPoint[] keypointList = new KeyPoint[numKeypoints];
      for (int i = 0; i < numKeypoints; i++) {
          keypointList[i] = new KeyPoint();
      }

      float totalScore = 0.0f;
      for (int i = 0; i < keypointList.length; i++) {
          keypointList[i].position.x = xCoords[i];
          keypointList[i].position.y = yCoords[i];
          keypointList[i].score = confidenceScores[i];
          totalScore += confidenceScores[i];
      }
      person.keyPoints = Arrays.asList(keypointList);
      person.score = totalScore / numKeypoints;
      return person;
      ```
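
      The `sigmoid` helper used above converts a raw heatmap logit into a (0, 1) confidence score. Its definition is not shown in this excerpt; a standard logistic sigmoid consistent with its use here is:

      ```java
      // Logistic sigmoid: maps a raw heatmap logit to a (0, 1) confidence score.
      private float sigmoid(float x) {
          return (float) (1.0f / (1.0f + Math.exp(-x)));
      }
      ```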