This project is a proof of concept for integrating multi-modal LLMs with NVRs to add narration and explanation on top of object detection and tracking.
- Install Python 3, then create and activate a virtual environment:
python3 -m venv venv
source venv/bin/activate
- Install requirements:
pip install -r requirements.txt
- Edit the configs in the frigate/ subdirectory, then bring up Frigate with Docker Compose:
cd frigate && docker-compose up -d frigate nvr
- Run Docker Compose to bring up the frontend and backend:
docker-compose up -d
- Run the Jupyter notebook analyze.ipynb (a rough sketch of the analysis flow is shown below).
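
For orientation, the sketch below shows one way the analysis loop in the notebook could look: it pulls recent tracked-object events from Frigate's HTTP API and asks a multi-modal LLM to narrate each snapshot. The Frigate host/port (`http://localhost:5000`), the use of the OpenAI Python client, and the `gpt-4o` model name are assumptions for illustration, not necessarily what analyze.ipynb actually does.

```python
# Sketch of the narration flow (assumptions: Frigate's HTTP API is reachable at
# http://localhost:5000 and an OpenAI multimodal model is available).
import base64

import requests
from openai import OpenAI

FRIGATE_URL = "http://localhost:5000"  # assumed host/port for the Frigate API
client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Fetch the most recent detection events from Frigate.
events = requests.get(f"{FRIGATE_URL}/api/events", params={"limit": 3}, timeout=10).json()

for event in events:
    # Grab the snapshot Frigate saved for this tracked object.
    snapshot = requests.get(
        f"{FRIGATE_URL}/api/events/{event['id']}/snapshot.jpg", timeout=10
    ).content
    image_b64 = base64.b64encode(snapshot).decode()

    # Ask the multi-modal LLM to narrate what the detector saw.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": f"Frigate detected a '{event['label']}' on camera "
                                f"'{event['camera']}'. Describe what is happening.",
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                    },
                ],
            }
        ],
    )
    print(event["id"], response.choices[0].message.content)
```

Any OpenAI-compatible multimodal endpoint (or another provider's client) could be swapped in; the point is only that the LLM receives the event metadata plus the snapshot image and returns a natural-language narration.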