This contains the code needed to run inference with a model on the ZCU104 board with image enhancement.

Note: the only image enhancement tested was histogram equalization, so in this project "image enhancement" and "histogram equalization" are used interchangeably.
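For reference, histogram equalization itself is only a few lines: it remaps pixel values through the image's cumulative histogram so the output levels spread over the full 0..255 range. The NumPy sketch below is illustrative only (the project's own implementation lives in `hist_eq.py`):

```python
import numpy as np

def hist_eq(img: np.ndarray) -> np.ndarray:
    """Equalize an 8-bit grayscale image via its cumulative histogram.

    Assumes the image is not a single constant value.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()      # first nonzero CDF value
    # Map each input level so the present levels spread over 0..255.
    # Levels not present in the image are never looked up, so only
    # entries with cdf >= cdf_min matter.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# A dark image (values crowded near 0) gets stretched to the full range
dark = np.array([[10, 10, 20], [20, 30, 30]], dtype=np.uint8)
bright = hist_eq(dark)               # min becomes 0, max becomes 255
```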
There are three main command line options:

- `--evaluate`: Runs the model on the validation split of the dataset, outputs the accuracy, and saves the raw predictions to `results/some_name.json`. The accuracy can be calculated using `eval.py`.
- `--predict`: Runs the model on the images in the given directory and saves additional images for visual comparison.
- Not specifying the `--evaluate` or `--predict` option will run the model on the HDMI input and output the results to the HDMI output in real time.
The rest of the listed options may apply to one or more of the modes above.

- `--model`: The model to run inference with.
- `--method`: How processing should be done, either sequentially or in parallel (multiprocessing).
- `--overlay`: The overlay to use. This should be the name of the overlay file without the extension.
- `--class_file`: The file containing the class names. This can be left at the default for ImageNet and COCO.
- `--frame_size`: The size of the HDMI input source. Best to leave this at the default.
- `--fps`: The frames per second to run the model at. Best to leave this at the default.
- `--max_queue_size`: Only applies when running LEAP in multiprocessing mode. This determines how many frames are buffered at each step of the pipeline.
- `--save_dir`: The directory to save the results to when running the evaluation.
- `--disable_dpu`: Disables the DPU for real-time HDMI in/out.
- `--disable_ie`: Disables image enhancement for real-time HDMI in/out.
Note: These instructions assume you are using Ubuntu on your host machine. If you're not using some flavor of Linux, good luck, you're on your own.
1. Download the custom PYNQ image from here and flash it to an SD card. This custom image has the CMA expanded to 1GB instead of 512MB.
2. Boot up the ZCU104 and connect to it via the USB JTAG port. Download and install `minicom` by running `apt install minicom`, then run `sudo minicom -D /dev/ttyUSB1` to connect to the ZCU104. You may need to press enter to pull up a login screen.

   Note: You can connect to the ZCU104 via minicom even when the main power is off. This is useful for debugging the boot process.
3. Log in with the username `xilinx` and password `xilinx`, then run `sudo su` to log in as root. All the runtime code must be run as root, as it needs to directly access the hardware. SSH does not allow logging in as root as configured, but this can be changed by setting a root password and editing a config file. To do this, run the following commands as root:

   ```
   passwd
   sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
   systemctl restart sshd
   ```
4. Connect the ZCU104 to the internet. This can be done either by connecting the ZCU104 to a router via an ethernet cable or by sharing your computer's connection over ethernet. To share your computer's connection, go to Settings -> Network -> click the gear icon next to your connection -> click the IPv4 tab -> change the method to "Shared to other computers". Please note that the ZCU104 IP will change. To find the ZCU104 IP, run `ifconfig` and look for the IP address of the `eth0` interface. For an Ubuntu host the IP is usually `10.42.0.x`.
5. SSH into the ZCU104 by running `ssh root@<ZCU104 IP>`. The password is whatever you set it to in step 3.
6. Add the following to `/root/.bashrc`:

   ```
   echo ". /etc/profile.d/xrt_setup.sh" >> /root/.bashrc
   echo ". /etc/profile.d/pynq_venv.sh" >> /root/.bashrc
   echo "cd /home/xilinx/jupyter_notebooks/" >> /root/.bashrc
   ```
7. Make sure your PYNQ board is up to date:

   ```
   apt update
   apt upgrade
   ```
8. Make sure the OpenCV-Python library (via apt, not pip) is installed:

   ```
   apt install python3-opencv
   ```
9. Restart and reconnect to the ZCU104 by running `reboot`.
10. Install the PYNQ-DPU Python library:

    ```
    cd $PYNQ_JUPYTER_NOTEBOOKS
    pip3 install pynq-dpu --no-build-isolation
    pynq get-notebooks pynq-dpu -p .
    ```
11. Clone LEAP and cd into it:

    ```
    git clone https://github.com/jjsuperpower/LEAP
    cd LEAP/runtime
    ```
12. Install the required Python libraries:

    ```
    pip3 install -r requirements.txt
    ```
13. Place your xmodel file in the `models/` directory. The code was tested with ResNet50 and YOLOv3. Several more models can be found here. Not all models will work out of the box; some may require modifications to the code in `models/model_wrapper.py`, especially if they were trained on datasets other than ImageNet or COCO.
14. Copy the overlay files to the `overlays/` folder. You should have three key files: `overlay.bit`, `overlay.hwh`, and `overlay.xclbin`. These files are generated by Vitis/Vivado; see `LEAP/HW_Design/README.md` for more information. An example overlay is provided in a release of this repo.
15. Test on an image (file):

    ```
    mkdir testing
    cp ../doc/imgs/000000397133.jpg testing/
    python3 main.py --model yolov3 --predict testing/
    ```

    You should see several images added to the `testing/` directory. The `000000397133.jpg` image is the original image, `000000397133_trfm.png` is the image after it has been darkened, and `000000397133_ie.png` is the darkened image after image enhancement (histogram equalization) has been applied.
16. Test on HDMI in/out: Connect the ZCU104's bottom HDMI port to a video source and the top HDMI port to a monitor, then run:

    ```
    python3 main.py --model resnet50
    ```
17. (Optional) Add datasets to the `datasets/` directory. The code was tested with ImageNet and COCO. These should each be put in individual folders, `datasets/imagenet/` and `datasets/coco/` respectively. Other types of datasets will require modifications to this project. Only the validation splits of the datasets are needed for the `--evaluate` option; the training splits are not needed.
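To give a feel for what adapting a model involves (step 13), the sketch below shows a hypothetical minimal wrapper shape: pre-processing, post-processing, and an on-screen-display hook. The class and method names here are assumptions for illustration, not the actual interface in `models/model_wrapper.py`:

```python
import numpy as np

class ModelWrapper:
    """Hypothetical base class: a subclass adapts one xmodel to the pipeline."""
    input_size = (224, 224)           # example classifier input shape

    def preprocess(self, frame: np.ndarray) -> np.ndarray:
        # Scale 8-bit pixels to [0, 1]; a real wrapper would also resize
        # to self.input_size and match the DPU's quantization scheme.
        return frame.astype(np.float32) / 255.0

    def postprocess(self, raw: np.ndarray) -> int:
        # For a classifier: raw DPU output -> predicted class index.
        # A detector like YOLOv3 would instead decode boxes and scores.
        return int(np.argmax(raw))

    def draw_osd(self, frame: np.ndarray, label: str) -> np.ndarray:
        # On-screen-display hook; real code would draw text with OpenCV.
        return frame

w = ModelWrapper()
pred = w.postprocess(np.array([0.1, 0.7, 0.2]))   # -> 1
```

A model trained on a dataset other than ImageNet or COCO would mainly need its own `postprocess` (different label set and output decoding).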
📦runtime -- LEAP runtime code
┣ 📂datasets -- Where datasets are stored
┃ ┣ 📜README.md
┃ ┣ 📜base.py -- Contains dataset abstract class
┃ ┣ 📜coco_ds.py -- COCO dataset wrapper
┃ ┣ 📜imgnet_ds.py -- ImageNet dataset wrapper
┣ 📂models
┃ ┣ 📜README.md
┃ ┣ 📜model_wrappers.py -- Contains wrapper for models that include preprocessing,
┃ ┃ postprocessing, and on-screen display (OSD)
┣ 📂overlays -- Where FPGA images are stored
┣ 📂results -- Default directory for saving raw results
┣ 📜README.md -- This file
┣ 📜eval.py -- Calculates accuracy from raw results
┣ 📜hdmi.py -- HDMI API
┣ 📜hist_eq.py -- API for histogram equalization (image enhancement)
┣ 📜leap.py -- LEAP API
┣ 📜main.py -- Command line parsing
┗ 📜requirements.txt -- Required dependencies
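As a rough illustration of what `eval.py` does with the saved predictions, top-1 accuracy reduces to comparing predicted and true labels. The JSON layout used below (a list of `pred`/`label` pairs) is an assumption for this sketch, not the project's actual output format:

```python
import json

def top1_accuracy(json_text: str) -> float:
    """Fraction of samples whose predicted label matches the ground truth."""
    preds = json.loads(json_text)
    correct = sum(1 for p in preds if p["pred"] == p["label"])
    return correct / len(preds)

# Tiny stand-in for a results/*.json file (field names are hypothetical)
raw = json.dumps([
    {"pred": 207, "label": 207},
    {"pred": 13,  "label": 12},
    {"pred": 281, "label": 281},
    {"pred": 90,  "label": 90},
])
acc = top1_accuracy(raw)   # 3 of 4 correct -> 0.75
```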