
Request: Walkthrough Installation on a Raspberry Pi 4 #9

Open · nilspupils opened this issue Sep 9, 2021 · 28 comments

Comments

@nilspupils

nilspupils commented Sep 9, 2021

First of all, BirdNET is a wonderful project and I am really thankful that Stefan Kahl @kahst made this open source. As a project open to citizen science, it should be approachable for the "average citizen scientist" who comes from a field other than computer science. The setup of BirdNET and TensorFlow is currently too complex for most of us.
So could anyone link to or post a detailed walkthrough of the installation? If successful, a system image could be produced for even easier access to BirdNET. I would also ask Stefan Kahl to kindly support this idea.
Thanks! /Nils

@nilspupils

nilspupils commented Sep 9, 2021

The idea is to use a precompiled binary from:
https://github.com/PINTO0309/TensorflowLite-bin

@ghost

ghost commented Sep 9, 2021

I had tried to install TF through these binaries and got this error message:

pi@raspberrypi:~/Development/BirdNET-Lite $ python3 analyze.py --i example/RBnuthatch.wav 
LOADING TF LITE MODEL... DONE!
READING AUDIO DATA... DONE! READ 2 CHUNKS.
ANALYZING AUDIO... Traceback (most recent call last):
  File "analyze.py", line 223, in <module>
    main()
  File "analyze.py", line 215, in main
    detections = analyzeAudioData(audioData, args.lat, args.lon, week, sensitivity, args.overlap, interpreter)
  File "analyze.py", line 157, in analyzeAudioData
    p = predict([sig, mdata], interpreter, sensitivity)
  File "analyze.py", line 120, in predict
    interpreter.invoke()
  File "/home/pi/.local/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 506, in invoke
    self._interpreter.Invoke()
  File "/home/pi/.local/lib/python3.7/site-packages/tflite_runtime/interpreter_wrapper.py", line 118, in Invoke
    return _interpreter_wrapper.InterpreterWrapper_Invoke(self)
RuntimeError: Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference.Node number 29 (FlexRFFT) failed to prepare.

@nilspupils

nilspupils commented Sep 9, 2021

As @kahst pointed out in another issue:
"BirdNET-Lite uses a non-standard TFLite function (RFFT) to compute spectrograms. This function is only available for certain platforms (Android, iOS, x86) and custom TFLite builds (which you can use e.g. on the Raspberry Pi) that also include the so-called "Special Ops"."
I guess this is the problem here...?
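
As a minimal illustration of that explanation, the sketch below assumes the full tensorflow pip package is installed (on x86 it bundles the Select TF ops / Flex delegate that the plain tflite_runtime wheel lacks); the model path is a placeholder for whichever .tflite file the BirdNET-Lite repo ships:

# Sketch: load the BirdNET-Lite model with the interpreter from the full
# TensorFlow package. On x86 that package links the Flex delegate, so the
# FlexRFFT node should prepare; with a plain tflite_runtime wheel the same
# model fails at allocate_tensors() or invoke() with the error shown above.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model/BirdNET_6K_GLOBAL_MODEL.tflite")  # placeholder path
interpreter.allocate_tensors()
print(interpreter.get_input_details())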

@ghost

ghost commented Sep 9, 2021

I also got another error:

RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
Traceback (most recent call last):
  File "analyze.py", line 13, in <module>
    import librosa
  File "/usr/local/lib/python3.7/dist-packages/librosa/__init__.py", line 211, in <module>
    from . import core
  File "/usr/local/lib/python3.7/dist-packages/librosa/core/__init__.py", line 5, in <module>
    from .convert import *  # pylint: disable=wildcard-import
  File "/usr/local/lib/python3.7/dist-packages/librosa/core/convert.py", line 7, in <module>
    from . import notation
  File "/usr/local/lib/python3.7/dist-packages/librosa/core/notation.py", line 8, in <module>
    from ..util.exceptions import ParameterError
  File "/usr/local/lib/python3.7/dist-packages/librosa/util/__init__.py", line 83, in <module>
    from .utils import *  # pylint: disable=wildcard-import
  File "/usr/local/lib/python3.7/dist-packages/librosa/util/utils.py", line 6, in <module>
    import scipy.ndimage
  File "/home/pi/.local/lib/python3.7/site-packages/scipy/ndimage/__init__.py", line 151, in <module>
    from .filters import *
  File "/home/pi/.local/lib/python3.7/site-packages/scipy/ndimage/filters.py", line 37, in <module>
    from . import _nd_image
ImportError: numpy.core.multiarray failed to import

@nilspupils

I am also stuck with a librosa error...

@ghost

ghost commented Sep 9, 2021

Although numpy 1.2 is correctly installed, I don't understand it...
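
For what it's worth, the "module compiled against API version 0xe but this version of numpy is 0xd" message usually means a compiled extension (scipy's _nd_image in the traceback above) was built against a newer numpy C API than the numpy that is imported first, often because a ~/.local copy shadows the system one or vice versa. A small diagnostic sketch:

# Show which numpy actually gets imported and from where; if the location or
# version is not the one you expect, reinstalling numpy and scipy into the
# same place (e.g. pip3 install --upgrade --force-reinstall numpy scipy)
# usually clears the ABI mismatch.
import numpy
print(numpy.__version__, numpy.__file__)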

@ghost

ghost commented Sep 13, 2021

I tried to build TensorFlow on the RPi 4, but the problem was the Bazel installation, which also fails.

@nilspupils

nilspupils commented Sep 13, 2021

There is a detailed installation guide for BirdNET (the original, not -Lite); see the issues there. Have you tried it? I wonder what the differences from BirdNET-Lite are...

@ghost

ghost commented Sep 13, 2021

All this runs on macOS M1, but not on the RasPi. BirdNET (without Lite) doesn't use TensorFlow, which is needed here and is (my) main problem with the installation on the RasPi 4. Has anyone succeeded in installing and running TF on a Raspberry Pi? I would be highly interested in shared installation instructions for TF Lite.

@nilspupils

Good news! mcguirepr89 is working on the BirdNET-Lite installer!
https://github.com/mcguirepr89/BirdNET-system/discussions/23#discussion-3595354
Many thanks!

@mcguirepr89

Yes, wish me luck -- the tflite pre-built binaries are new to me and I don't have a laptop with enough storage to cross compile the libraries (I only have a little Chromebook with 5GB storage 😞 and the Bazel build process is not ready for aarch64 😞 ). I have been corresponding with a fellow who has gotten this up and working before, so I will likely use him as a resource and will certainly share helpful information I learn.
Kind regards,
Patrick

@DD4WH

DD4WH commented Sep 28, 2021

@Christoph-Lauer: try out the latest installation instructions by @mcguirepr89 for installing a BirdNET stand-alone system on the RPi 4. I can confirm that it installs neatly and runs perfectly! Nice job, Patrick, @mcguirepr89!

@mcguirepr89: Really looking forward to a BirdNET-Lite version! Just a dumb question: would it make sense to prepare the Lite version for a "lighter" (32-bit) OS? Would that enable us to run the Lite version on less power-hungry RPis (RPi 3) and especially a less power-hungry OS? But I may be on the totally wrong path with that question... I noticed that the RPi running BirdNET-system on AArch64 gets really, really hot...

@mcguirepr89

Hello, all -- I have BirdNET-Lite up and running now and will be setting up a "system" to accompany it. In the meantime, if you want to just get this project installed on a Raspberry Pi 4B running an AArch64 OS, this script does it:

#!/usr/bin/env bash
# Installs BirdNET-Lite on Raspberry Pi 4B running AArch64 OS
cd ~

sudo apt install swig libjpeg-dev zlib1g-dev python3-dev unzip wget python3-pip curl git cmake make

sudo pip3 install --upgrade pip wheel setuptools

curl -sc /tmp/cookie "https://drive.google.com/uc?export=download&id=1dlEbugFDJXs-YDBCUC6WjADVtIttWxZA" > /dev/null

CODE="$(awk '/_warning_/ {print $NF}' /tmp/cookie)"

curl -Lb /tmp/cookie "https://drive.google.com/uc?export=download&confirm=${CODE}&id=1dlEbugFDJXs-YDBCUC6WjADVtIttWxZA" -o tflite_runtime-2.6.0-cp37-none-linux_aarch64.whl

sudo pip3 install --upgrade tflite_runtime-2.6.0-cp37-none-linux_aarch64.whl 

sudo pip3 install colorama==0.4.4

sudo pip3 install librosa

sudo apt-get install ffmpeg

git clone https://github.com/kahst/BirdNET-Lite.git

cd BirdNET-Lite/

python3 analyze.py --i 'example/XC558716 - Soundscape.mp3' --lat 35.4244 --lon -120.7463 --week 18

You should be able to make that executable or copy and paste each line in the terminal.
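
If that last step fails, it can help to separate a tflite_runtime problem from a librosa/ffmpeg problem by loading the model on its own first. A minimal sketch, assuming the aarch64 wheel above installed cleanly; the model filename is a placeholder, so use whichever .tflite file is in the repo's model/ directory:

# Sanity check: confirms the tflite_runtime wheel imports and the BirdNET-Lite
# model allocates its tensors, without touching librosa or ffmpeg at all.
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model/BirdNET_6K_GLOBAL_MODEL.tflite")  # placeholder path
interpreter.allocate_tensors()
print("Model loaded:", [d["shape"] for d in interpreter.get_input_details()])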

I'll be sure to let you know how the "system" is coming.

Best regards,
Patrick

@nilspupils

Excellent Patrick! Awesome you got this!

@mcguirepr89

mcguirepr89 commented Oct 10, 2021

Hello again --

I wanted to update that I have put together an all-in-one recording and detection extraction system for Raspberry Pi 4B built on a fork of this repo. Please check it out here

I hope folks interested in using the BirdNET-Lite platform on the Raspberry Pi will find this project helpful and easy to use.
My best regards,
Patrick

@PINTO0309

It may have already been resolved, so if it's not useful, please ignore this. I have optimized FlexRFFT away by replacing it with standard TFLite operations. I only optimized the model, so the license needs to follow this repository.

birdnet

@PINTO0309

Quoted with Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License.

I haven't checked the operation yet.

@nilspupils

@PINTO0309: Thanks for this! We have been using your binaries for this project; thanks so much for providing them!
@mcguirepr89: Have you seen this? (Way above my head ;)

@mcguirepr89

@nilspupils I did see this, but it is way above my head, too ;) I didn't know what to do with it, so I just made sure to star the repo and started following @PINTO0309 (SUPER active and very prolific person -- a veritable genius from my vantage)

@nilspupils

Proper legend in the bin field 😉

@nilspupils

@PINTO0309: First of all, many thanks for your contribution and for putting so much effort into including BirdNET in your projects!
As most of us don't know TensorFlow well enough to understand the advantage of the new binary, may I ask you to briefly explain what it is for? Perhaps @kahst would like to comment as well?
Best, Nils

@PINTO0309

PINTO0309 commented Dec 1, 2021

I'm a geek, so I enjoy making various models from around the world usable across frameworks. Therefore, for models that contain special layers, such as this model, I use my own tool to automatically replace the special layers with harmless layers. Models consisting of only standard layers can be inferred on most deep learning frameworks. By doing so, it is possible to generate models that are independent of specific hardware or frameworks. Android, iPhone, MacOS, RaspberryPi, NVIDIA GPUs, Jetson Nano, Google Chrome, Safari, OAK(OpenCV AI Kit), WASM, etc...

Unfortunately, the RFFT2D layer had not yet been implemented outside of TensorFlow, so BirdNet could not be converted for other frameworks such as ONNX or OpenVINO.
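
To make "replacing a special layer with harmless layers" concrete: one generic way an RFFT can be expressed with standard ops is as a matrix multiplication against a precomputed DFT basis. This is only an illustration of why such a rewrite is possible at all, not necessarily how the tools listed below perform it:

# Illustrative sketch: real-FFT magnitude of a 1-D frame using only matmul,
# i.e. an operation every framework supports, instead of a dedicated RFFT op.
import numpy as np

def rfft_magnitude_as_matmul(frame):
    n = frame.shape[-1]
    k = np.arange(n // 2 + 1)[:, None]        # output frequency bins
    t = np.arange(n)[None, :]                 # input sample indices
    real_basis = np.cos(-2.0 * np.pi * k * t / n)
    imag_basis = np.sin(-2.0 * np.pi * k * t / n)
    return np.sqrt((real_basis @ frame) ** 2 + (imag_basis @ frame) ** 2)

frame = np.random.randn(512).astype(np.float32)
print(np.allclose(rfft_magnitude_as_matmul(frame), np.abs(np.fft.rfft(frame)), atol=1e-3))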

For reference, here are the conversion patterns I can handle

  1. PyTorch -> ONNX (NCHW) -> OpenVINO (NCHW)
  2. PyTorch -> ONNX (NCHW) -> OpenVINO (NCHW) -> TensorFlow (NHWC)
  3. PyTorch -> ONNX (NCHW) -> OpenVINO (NCHW) -> TensorFlowLite (NHWC)
  4. PyTorch -> ONNX (NCHW) -> OpenVINO (NCHW) -> TensorRT (NCHW/NHWC)
  5. PyTorch -> ONNX (NCHW) -> OpenVINO (NCHW) -> TF-TRT (NCHW/NHWC)
  6. PyTorch -> ONNX (NCHW) -> OpenVINO (NCHW) -> TensorFlow.js (NHWC)
  7. PyTorch -> ONNX (NCHW) -> OpenVINO (NCHW) -> CoreML (NCHW/NHWC)
  8. PyTorch -> ONNX (NCHW) -> OpenVINO (NCHW) -> Myriad Inference Engine Blob (OpenCV AI Kit, NCHW)
  9. TensorFlowLite (NHWC) -> TensorFlow (NHWC)
  10. TensorFlowLite (NHWC) -> ONNX (NCHW/NHWC)
  11. TensorFlowLite (NHWC) -> CoreML (NCHW/NHWC)
  12. TensorFlowLite (NHWC) -> TensorFlow.js (NHWC)
  13. TensorFlowLite (NHWC) -> TensorRT (NCHW/NHWC)
  14. TensorFlowLite (NHWC) -> TF-TRT (NCHW/NHWC)
  15. TensorFlowLite (NHWC) -> OpenVINO (NCHW)
  16. TensorFlowLite (NHWC) -> Myriad Inference Engine Blob (OpenCV AI Kit, NCHW)

My tools.

  1. https://github.com/PINTO0309/openvino2tensorflow
  2. https://github.com/PINTO0309/tflite2tensorflow

These home-grown tools understand the characteristics of the deep learning framework they are converting to and can optimize the model to the limit. The output is more optimized than what the official model conversion tools provided by Microsoft, Facebook, Intel, and Google produce. They also avoid some bugs in the official model conversion tools.

e.g.
tensorflow-onnx
onnx-tensorflow
coremltools
openvino-model-optimizer
tflite-converter

However, my tools are always a WIP because they are always improving.

PINTO_model_zoo, which I maintain, contains a large number of models whose training code or datasets are not publicly available, converted for various frameworks. This is possible because only the binary file of the model is needed for conversion.

@mcguirepr89

mcguirepr89 commented Dec 1, 2021

@PINTO0309 I regret not reaching out sooner -- thanks for the nudge, @nilspupils! my buddy :) -- thank you, @PINTO0309 so much for laying that out and for your prolific contributions AND for modeling (pun intended) such a great ethos towards shared information.

@robinsandfort

Quoted with Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License.

I haven't checked the operation yet.

* TFLite non-flex (Float32, Float16, INT8, Dynamic Range Quant), TF-TRT
  https://github.com/PINTO0309/PINTO_model_zoo/tree/main/177_BirdNET-Lite

Thanks for your great work @PINTO0309! Would it be possible to use the Edge TPU Compiler on your BirdNET-Lite version? I run a Coral USB Accelerator on my Raspberry Pi, and this might increase the speed quite a lot.
Greetings from Austria.

@PINTO0309

PINTO0309 commented Dec 7, 2021

@robinsandfort

The RFFT2D, Log, Exp, and Pow operations cannot be converted for the EdgeTPU. Because not all operations are mapped to the EdgeTPU, operations are frequently offloaded back and forth between the CPU and the TPU, which degrades performance significantly. This is not only a problem with BirdNET-Lite; it happens with other models as well. I have found that when this happens, it is much faster to run inference on the CPU instead of using the EdgeTPU. You need to use an aarch64 (64-bit) OS instead of armv7l (32-bit), but it is then faster than inference on x86_64 (64-bit).

The conditions for fast inference using the CPU are as follows:

  1. Install RaspberryPi OS aarch64 or Ubuntu 18.04/20.04/21.04 aarch64 or Debian aarch64
  2. Use an Integer Quantized (INT8) model or a Dynamic Range Quantized model

INT8 quantized: model_integer_quant.tflite
Dynamic range quantized: model_dynamic_range_quant.tflite

These quantize Float32 to 8-bit integers, which degrades the accuracy of the inference. There is another way to run inference faster while maintaining accuracy:

  1. Install RaspberryPi OS aarch64 or Ubuntu 18.04/20.04/21.04 aarch64 or Debian aarch64
  2. Use the TensorFlowLite runtime with the XNNPACK delegate enabled: https://github.com/PINTO0309/TensorflowLite-bin
  3. Use a Float32 model

Float32 model: model_float32.tflite
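
To make the Float32 + XNNPACK route concrete, here is a minimal loading sketch; it assumes the XNNPACK-enabled tflite_runtime wheel from PINTO0309/TensorflowLite-bin is installed and that model_float32.tflite has been downloaded from the 177_BirdNET-Lite folder of PINTO_model_zoo:

# With the TensorflowLite-bin wheel, XNNPACK is built into the runtime, so the
# main Python-side knob is the thread count; four threads match the Pi 4's cores.
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model_float32.tflite", num_threads=4)
interpreter.allocate_tensors()
print(interpreter.get_input_details())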

There is too little information about the environment you are using, so I have presented the information assuming that you are using Python.

Supplementary information: I am constantly trying to convert all models to EdgeTPU. However, even if I can convert a model to EdgeTPU, I do not commit it to PINTO_model_zoo if the result is not clean enough to have meaningful performance; successfully converting a model is not the same as committing it. In fact, the conversion of BirdNET-Lite to an EdgeTPU model succeeds, but, as mentioned earlier, it produces a model that is completely unusable.
https://github.com/PINTO0309/PINTO_model_zoo#12-sound-classifier

Although it is not BirdNET-Lite, I have committed the results of benchmarking the speed of inference with Float32 + TFLite + XNNPACK here.
https://github.com/PINTO0309/PINTO_model_zoo#3-tflite-model-benchmark

@summelon

I am working on converting a model that has an FFT layer from TensorFlow 2.x to OpenVINO IR.
For those who need a workaround to successfully convert the fft/ifft op, this white paper may be helpful.
@PINTO0309 Have you ever tried this method?

@PINTO0309

PINTO0309 commented Jun 20, 2022

RFFT2D is supported as a standard feature of TensorFlow Lite as of June 20, 2022. However, this would not be supported by EdgeTPU.
https://github.com/tensorflow/tensorflow/blob/a1d43e94a3cf271bdfa69e46a59794871697de07/tensorflow/lite/schema/schema.fbs#L368


My tool. tflite2tensorflow
https://github.com/PINTO0309/tflite2tensorflow/blob/493927470a7f45c04f8f548bd748d5197e17b642/tflite2tensorflow/tflite2tensorflow.py#L4056-L4093


OpenVINO does not have an RFFT implementation. Therefore, a custom layer must be coded in C++. I am not really interested in implementing custom layers.
https://docs.openvino.ai/latest/openvino_docs_operations_specifications.html
