TattooTrace integrates advanced machine learning models like YOLOv5 and CLIP to analyze and identify tattoos from images and videos. Leveraging the robust backend of Immich, this tool aims to support efforts to combat and understand human trafficking dynamics.
- Docker Desktop
- npm
- `make` (for UNIX-like operating systems)
- Clone the YOLOv5 repository:

  ```shell
  git clone https://github.com/ultralytics/yolov5
  cd yolov5
  ```
- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```
- Download pre-trained weights: download the YOLOv5s model weights (Link).
- Inference
  - To perform inference on images:

    ```shell
    python detect.py --source /path/to/images --weights best.pt --conf 0.25
    ```

  - To perform inference on video:

    ```shell
    python detect.py --source /path/to/video.mp4 --weights best.pt --conf 0.25
    ```
- Output: detection results are saved under `Tattoo Recognition/detected/exp6` in the `yolov5` directory.
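The `--conf 0.25` flag sets the minimum confidence score: detections scoring below it are discarded. A rough sketch of that filtering step (the detection tuples and their format here are illustrative, not YOLOv5's actual output structure):

```python
# Hypothetical sketch of what a confidence threshold does: keep only
# detections whose score meets the minimum set by --conf.
# Each detection is (label, confidence, bounding_box) -- an illustrative
# format, not YOLOv5's real output.

def filter_detections(detections, min_conf=0.25):
    """Return only detections at or above the confidence threshold."""
    return [d for d in detections if d[1] >= min_conf]

detections = [
    ("tattoo", 0.91, (34, 50, 120, 200)),
    ("tattoo", 0.18, (300, 40, 360, 90)),   # below threshold, dropped
    ("tattoo", 0.42, (150, 220, 210, 310)),
]
kept = filter_detections(detections, min_conf=0.25)
print(len(kept))  # 2
```

Raising the threshold trades recall for precision: fewer false positives, but faint or partially occluded tattoos may be missed.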
- Frontend Setup

  ```shell
  cd clip/frontend/clip-app
  npm install
  npm start
  ```

  This will start the frontend application at http://localhost:3000.
- Backend Setup

  ```shell
  cd clip/backend/
  python app.py
  ```
- Environment Setup: ensure Docker Desktop is installed and running on your system, then navigate to the main Immich directory:

  ```shell
  cd immich
  ```
- Configuration: modify `docker/.env` to set the `UPLOAD_LOCATION` environment variable.
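For example, the relevant line in `docker/.env` might look like the following (the path shown is only a placeholder; point it at a directory on your own system):

```shell
# docker/.env -- example value only; choose your own upload directory
UPLOAD_LOCATION=./library
```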
- Model Weights: download the model weights and place them in `/immich/machine-learning/app/models/weight/`.
  - Download link for the weight file: https://drive.google.com/file/d/14VpsSrTkOxp0MTzN7uaVJp8uAZ167l-z/view?usp=sharing
- Build and Run: from the Immich directory, execute:

  ```shell
  make dev
  ```

  Access the development instance at http://localhost:3000.
- Tattoo Detection: Utilizes YOLOv5 to detect tattoos with a configurable minimum confidence score.
- Tattoo Recognition: Uses the CLIP model to recognize a tattoo and interpret its meaning.
- Tattoo Clustering: Clusters similar tattoos based on the CLIP output for a given prompt.
- User Interaction: Simple user interface to trigger tattoo detection.
  - The user presses a button to trigger the tattoo detection function.
  - YOLOv5 detects tattoos in the photos.
  - The minimum confidence score for the model can be adjusted.
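The project's `app.py` is not shown here, but the button-to-backend flow can be sketched with only the standard library. The `/detect` route, port handling, and JSON response shape below are assumptions for illustration, not the project's real API:

```python
# Minimal sketch of a backend endpoint a frontend button could call to
# trigger tattoo detection. The /detect route and JSON shape are
# hypothetical; the real project serves this from clip/backend/app.py.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_detection():
    """Stand-in for invoking YOLOv5; returns a fake detection count."""
    return {"status": "ok", "detections": 2}

class DetectHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/detect":
            body = json.dumps(run_detection()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), DetectHandler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/detect", data=b"{}", method="POST"
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)
server.shutdown()
print(result)  # {'status': 'ok', 'detections': 2}
```

In the real app, `run_detection` would invoke the YOLOv5 model on the selected photos and return the detections to the frontend.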
- NTU Dataset: Contains 32,145 images of tattoos without labels.
- Labeled Tattoo Images: Collected 1,000 images with bounding box labels.
- Utilized for tattoo detection with bounding box regression.
- Model trained on the labeled tattoo dataset.
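As an illustration of how a small labeled set like this is typically prepared for training, the 1,000 annotated images might be split into training and validation subsets. The 80/20 ratio, fixed seed, and synthetic filenames below are all assumptions, not details from the project:

```python
# Hypothetical train/validation split for a small labeled dataset.
# The 80/20 ratio and synthetic filenames are illustrative only.
import random

filenames = [f"tattoo_{i:04d}.jpg" for i in range(1000)]  # stand-in for the labeled set

rng = random.Random(42)  # fixed seed so the split is reproducible
shuffled = filenames[:]
rng.shuffle(shuffled)

split = int(0.8 * len(shuffled))
train, val = shuffled[:split], shuffled[split:]
print(len(train), len(val))  # 800 200
```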
- YOLOv5 GitHub Repository
- Used for recognizing and interpreting the context and significance of tattoos.
- Demonstrates impressive zero-shot performance.
- CLIP Model Information
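CLIP's zero-shot matching amounts to comparing an image embedding against text-prompt embeddings by cosine similarity and taking the best-scoring prompt. The 3-dimensional vectors below are fabricated for illustration; real CLIP embeddings have hundreds of dimensions:

```python
# Sketch of CLIP-style zero-shot matching: score an image embedding
# against text-prompt embeddings by cosine similarity, pick the best.
# All vectors here are made up for illustration.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

prompts = {
    "a rose tattoo": [0.9, 0.1, 0.2],
    "a dragon tattoo": [0.1, 0.8, 0.3],
    "a tribal tattoo": [0.2, 0.2, 0.9],
}
image_embedding = [0.85, 0.15, 0.25]  # made-up image vector

best = max(prompts, key=lambda p: cosine(image_embedding, prompts[p]))
print(best)  # a rose tattoo
```

Grouping images by their best-matching prompt gives a simple version of the clustering feature described above.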
- Expand Object Detection: Plan to upgrade from YOLOv5 to a state-of-the-art model for improved accuracy and speed.
- CLIP Integration: Improve tattoo recognition and contextual understanding by integrating CLIP into the Immich framework.
- Enhanced Clustering Algorithms: Develop sophisticated algorithms to categorize and analyze tattoos more effectively.
- Video Timestamps Display: Implement functionality to display video timestamps post-clustering analysis.
Contributions are welcome! Please refer to the CONTRIBUTING.md for guidelines on how to contribute effectively to this project.