This example runs a machine learning image-classification algorithm on the MCXN947: it labels the images captured by the camera and shows the detected object type at the bottom of the LCD.
The model is trained on the CIFAR-10 dataset and supports 10 classes of images:
"airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck".
- Download SDK_2_14_0_FRDM-MCXN947
- Download and install MCUXpresso IDE V11.9.0 or later.
- MCUXpresso for Visual Studio Code: this example supports MCUXpresso for Visual Studio Code. For more information about how to use Visual Studio Code, please refer here.
- 3.5" TFT LCD module by NXP (P/N PAR-LCD-S035)
- Camera module: OV7670
- FRDM-MCXN947(SCH-90818_REV B) board
- Type-C USB cable
Rework the camera pins on the FRDM-MCXN947, because the camera is not the default function of these pins: change solder jumpers SJ16, SJ26, and SJ27 from the A side to the B side.
Board before rework.
Board after rework.
The rework details are shown below.
Attach the LCD shield (J1: pins 5-28, skipping the first 4 pins) to the FRDM board (J8). Attach the camera module to the FRDM board (J9: pins 5-23, skipping the first 4 pins), as shown below:
Connect the debug port on the board to the laptop with the Type-C USB cable.
Import the project into MCUXpresso IDE: click 'Import project from Application Code Hub', search for the 'label cifar10 images on mcxn947' example, and clone it to the local workspace.
Build the project. After compilation completes, use the GUI Flash Tool (2 in the following figure) to write the program to the board.
In VS Code, select the 'MCUXpresso For VScode' plugin and click 'Application Code Hub' in the QUICKSTART PANEL. Search for the 'label cifar10 images on mcxn947' example and clone it to the local workspace. After a while, the project appears under 'Projects'.
Build the project, and after compilation completes, flash the board.
Reset the board; the camera preview is shown at the top of the LCD. (If the preview is blank, it is caused by a voltage mismatch between the camera module and the FRDM board; reset the board again.)
Show a bird picture to the camera (print the image on paper or display the test image on your mobile phone); the detected object type is shown at the bottom of the LCD.
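For orientation, a rough sketch of the per-frame flow (capture a frame, downscale it to the 32x32 RGB input a CIFAR-10 model expects, run inference, draw the winning label at the bottom of the screen) is shown below. It assumes a QVGA RGB565 camera frame, and every helper it calls (`camera_read_frame`, `model_run_int8`, `lcd_draw_text`) is a hypothetical placeholder rather than the project's real driver or eIQ/TensorFlow Lite Micro API:

```c
#include <stdint.h>

#define CAM_W 320   /* assumed QVGA camera frame */
#define CAM_H 240
#define NET_W 32    /* CIFAR-10 model input size */
#define NET_H 32

extern const char *kCifar10Labels[10];   /* the 10 class strings listed above */

/* Hypothetical board/driver hooks (assumed for this sketch, not from the SDK). */
int  camera_read_frame(uint16_t rgb565[CAM_H][CAM_W]);   /* returns 0 on success */
void model_run_int8(uint8_t rgb888[NET_H][NET_W][3],
                    int8_t scores_out[10]);               /* inference wrapper */
void lcd_draw_text(int x, int y, const char *text);

/* Nearest-neighbour downscale from the camera frame to the 32x32 RGB888
 * buffer that a CIFAR-10 model expects, expanding RGB565 to 8-bit channels. */
static void frame_to_input(uint16_t cam[CAM_H][CAM_W],
                           uint8_t net[NET_H][NET_W][3])
{
    for (int y = 0; y < NET_H; y++) {
        for (int x = 0; x < NET_W; x++) {
            uint16_t p = cam[y * CAM_H / NET_H][x * CAM_W / NET_W];
            net[y][x][0] = (uint8_t)(((p >> 11) & 0x1F) << 3);   /* R */
            net[y][x][1] = (uint8_t)(((p >> 5)  & 0x3F) << 2);   /* G */
            net[y][x][2] = (uint8_t)(( p        & 0x1F) << 3);   /* B */
        }
    }
}

void classify_loop(void)
{
    static uint16_t frame[CAM_H][CAM_W];
    static uint8_t  input[NET_H][NET_W][3];
    int8_t scores[10];

    for (;;) {
        if (camera_read_frame(frame) != 0) {
            continue;                             /* skip bad frames */
        }
        frame_to_input(frame, input);
        model_run_int8(input, scores);

        int best = 0;                             /* argmax over the 10 classes */
        for (int i = 1; i < 10; i++) {
            if (scores[i] > scores[best]) {
                best = i;
            }
        }
        /* Coordinates are illustrative: draw the label near the bottom of the LCD. */
        lcd_draw_text(0, 300, kCifar10Labels[best]);
    }
}
```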
Model training is not included in this project. If you want to train a model on your own dataset, please refer to the eIQ Toolkit.
For the eIQ user guide, please refer to https://community.nxp.com/t5/eIQ-Machine-Learning-Software/tkb-p/eiq%40tkb
Please contact NXP for additional support.
Questions regarding the content/correctness of this example can be entered as Issues within this GitHub repository.
Warning: For more general technical questions regarding NXP microcontrollers and differences in expected functionality, enter your questions on the NXP Community Forum.
| Version | Description / Update | Date |
|---|---|---|
| 1.0 | Initial release on Application Code Hub | January 30th 2024 |