Fashion-MNIST recognition based on a machine learning model, powered by the MCXN947.
The model is trained on the Fashion-MNIST dataset, which can recognize 10 classes of fashion products:
"T-shirt", "trouser", "pullover", "dress", "coat", "sandal", "shirt", "sneaker", "bag", "ankle boot".
The model is accelerated by the MCXN947's on-chip NPU, and inference takes less than 10 ms per frame. This demo can be used in toy and doll recognition products.
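As a reference for the recognition flow, the sketch below illustrates (in Python, for explanation only; the on-device code is C) how a cropped grayscale camera frame could be normalized to the 28x28 input that Fashion-MNIST models expect, and how the 10 output scores map back to class labels. The function names and tensor shapes are assumptions for illustration, not the demo's actual API.

```python
import numpy as np

# The 10 Fashion-MNIST class labels, in dataset order.
FASHION_MNIST_LABELS = [
    "T-shirt", "trouser", "pullover", "dress", "coat",
    "sandal", "shirt", "sneaker", "bag", "ankle boot",
]

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Normalize a 28x28 uint8 grayscale crop to a float32 model input.

    The shape (1, 28, 28, 1) is an assumption: batch of 1, single channel.
    """
    assert frame.shape == (28, 28), "expected a 28x28 grayscale crop"
    return (frame.astype(np.float32) / 255.0).reshape(1, 28, 28, 1)

def decode(scores: np.ndarray) -> str:
    """Map the model's 10 output scores to the most likely class label."""
    return FASHION_MNIST_LABELS[int(np.argmax(scores))]
```

For example, a score vector that peaks at index 9 decodes to "ankle boot".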
- Download SDK_2_14_0_FRDM-MCXN947
- Download and install MCUXpresso IDE V11.9.0 or later.
- MCUXpresso for Visual Studio Code: This example supports MCUXpresso for Visual Studio Code; for more information about how to use Visual Studio Code, please refer here.
- 3.5" TFT LCD module by NXP (P/N PAR-LCD-S035)
- Camera module: OV7670
- FRDM-MCXN947(SCH-90818_REV B) board
- Type-C USB cable
Rework the camera pins on the FRDM-MCXN947, because the camera is not the default pin function. Change solder jumpers SJ16, SJ26, and SJ27 from the A side to the B side.
Board before rework.
Board after rework.
Here is the detail.
Attach the LCD shield (J1: pins 5-28; skip the first 4 pins) to the FRDM board (J8). Attach the camera shield to the FRDM board (J9: pins 5-23; skip the first 4 pins), as shown below:
Connect the debug port on the board to the computer with the Type-C USB cable.
Import the project into MCUXpresso IDE: click 'Import project from Application Code Hub', search for the 'fashion mnist recognition on mcxn947' example, and clone it to the local workspace.
Build the project. After compilation completes, use the GUI Flash Tool (marked 2 in the figure below) to write the program to the board.
In VS Code, open the MCUXpresso for VS Code extension, click 'Application Code Hub' in the QUICKSTART PANEL, search for the 'fashion mnist recognition on mcxn947' example, and clone it to the local workspace.
Build the project; after compilation completes, flash the board.
The model was trained on the Fashion-MNIST dataset to recognize 10 classes of fashion products: "T-shirt", "trouser", "pullover", "dress", "coat", "sandal", "shirt", "sneaker", "bag", "ankle boot".
Search for images such as "fashion mnist ankle boot" on Google, Bing, or Baidu, print the images on A4 paper, and cut them into cards as shown:
Alternatively, download the test images to a mobile device.
Reset the board; the camera preview appears at the top of the LCD. (If the preview is blank, it is caused by a voltage mismatch between the camera module and the FRDM board; reset the board again.)
Present the cards or your mobile device to the camera, ensuring that the image is centered in the preview window. The type of object will then be displayed at the bottom of the LCD screen.
Model training is not included in this project. If you want to train on a custom dataset, please refer to the eIQ Toolkit for model training.
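For orientation, here is a minimal, hypothetical Keras sketch of the kind of training and conversion flow that the eIQ Toolkit wraps. The architecture, hyperparameters, and conversion settings below are illustrative assumptions; the demo's actual model and the eIQ-specific NPU compilation steps are not documented here.

```python
# Hypothetical training sketch; NOT the demo's actual model or settings.
import tensorflow as tf

def build_model() -> tf.keras.Model:
    """A small CNN sized for 28x28x1 Fashion-MNIST inputs, 10 classes."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

if __name__ == "__main__":
    # Load and normalize the public Fashion-MNIST dataset.
    (x_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
    x_train = x_train[..., None].astype("float32") / 255.0

    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, batch_size=128)

    # Convert to TFLite for deployment; eIQ handles the NPU-specific steps.
    tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
    with open("fashion_mnist.tflite", "wb") as f:
        f.write(tflite_model)
```

In practice the eIQ Toolkit automates this flow (including quantization for the NPU), so the sketch is only meant to show the general shape of the pipeline.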
For the eIQ user guide, please refer to https://community.nxp.com/t5/eIQ-Machine-Learning-Software/tkb-p/eiq%40tkb
Please contact NXP for additional support.
Questions regarding the content/correctness of this example can be entered as Issues within this GitHub repository.
Warning: For more general technical questions regarding NXP Microcontrollers and differences in expected functionality, enter your questions on the NXP Community Forum
Version | Description / Update | Date |
---|---|---|
1.0 | Initial release on Application Code Hub | January 30th 2024 |