Facerecognition #645

Closed · wants to merge 9 commits into from
47 changes: 47 additions & 0 deletions face-recognition/Readme.md
@@ -0,0 +1,47 @@
# Face Recognition Project

## Overview
This repository is part of the ML-CaPsule project, specifically focusing on face recognition using machine learning techniques.

## Features
- **Face Detection**: Identifies faces in images and video streams.
- **Face Recognition**: Matches detected faces with known faces.
- **Real-time Processing**: Capable of processing video feeds for real-time face recognition.
- **Facial Attribute Analysis**: Labels each detected face with race, gender, age, and emotion (a real-time sketch follows this list).
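
The real-time and attribute-analysis features map onto a short webcam loop. Below is a minimal sketch, assuming a recent `deepface` release in which `DeepFace.analyze` accepts a NumPy frame and returns a list of per-face dictionaries; the script name and the drawing details are illustrative, not part of this project:

```python
# realtime_attributes.py -- illustrative sketch
import cv2
from deepface import DeepFace

cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # enforce_detection=False keeps the loop running on frames without a face
    results = DeepFace.analyze(
        img_path=frame,
        actions=["age", "gender", "race", "emotion"],
        enforce_detection=False,
    )

    for face in results:
        region = face.get("region", {})
        x, y = region.get("x", 0), region.get("y", 0)
        w, h = region.get("w", 0), region.get("h", 0)
        label = f"{face.get('dominant_emotion', '?')}, age {face.get('age', '?')}"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, max(y - 10, 10)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    cv2.imshow("facial attribute analysis", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```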

## Requirements
- Python 3.9 or newer
- deepface
- Explicit dependencies of `deepface`:
- requests>=2.27.1
- numpy>=1.14.0
- pandas>=0.23.4
- gdown>=3.10.1
- tqdm>=4.30.0
- Pillow>=5.2.0
- opencv-python>=4.5.5.64
- tensorflow>=1.9.0
- keras>=2.2.0
- Flask>=1.1.2
- mtcnn>=0.1.0
- retina-face>=0.0.1
- fire>=0.4.0
- gunicorn>=20.1.0

## Installation
1. Clone the repository:
```bash
git clone https://github.com/Raghucharan16/ML-CaPsule.git
cd ML-CaPsule/face-recognition
```
2. Install the required packages (a quick sanity check follows the commands):
```bash
pip install -r requirements.txt
```
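
To confirm the environment is ready, a quick check along these lines should load the default recognition model and download its weights on first use (this snippet is illustrative and not part of the notebook):

```python
# quick post-install sanity check (illustrative)
from deepface import DeepFace

model = DeepFace.build_model("VGG-Face")  # downloads pretrained weights on the first call
print("deepface is ready:", type(model).__name__)
```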

## Usage
1. Run the cells in `deepface.ipynb` for the operation you need: face detection, verification, database search (`find`), facial attribute analysis, or embeddings. A standalone script variant is sketched below.
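
The same operations can also be run outside Jupyter. Here is a minimal standalone sketch using the sample images in this folder; the chosen model, detector backend, and result keys assume a recent `deepface` release and are illustrative rather than prescriptive:

```python
# verify_pair.py -- illustrative standalone version of the notebook's verify cell
from deepface import DeepFace

result = DeepFace.verify(
    img1_path="img1.jpg",
    img2_path="img2.jpg",
    model_name="Facenet512",   # any model from the list in the notebook
    detector_backend="mtcnn",  # any supported detector backend
)

print("verified:", result["verified"])
print("distance:", result["distance"], "vs threshold:", result["threshold"])
```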


## Contributing
Contributions are welcome! Please open an issue or submit a pull request.
111 changes: 111 additions & 0 deletions face-recognition/deepface.ipynb
@@ -0,0 +1,111 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "68d0e7bb",
"metadata": {},
"outputs": [],
"source": [
"from deepface import DeepFace\n",
"detector_backends = [ 'opencv', 'retinaface',\n",
" 'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8', 'centerface'] # or 'skip' (default is opencv)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "74c9e27e",
"metadata": {},
"outputs": [],
"source": [
"# Face detection\n",
"faces = DeepFace.extract_faces(\n",
"    img_path=\"img.jpg\",        # input image; it may contain multiple faces\n",
"    detector_backend='mtcnn',   # choose among mtcnn, opencv, ssd, dlib, retinaface, ...\n",
"    enforce_detection=False,    # do not raise an error when no face is detected\n",
"    align=True,                 # align detected faces before returning them\n",
"    expand_percentage=10,       # expand the detected facial area by 10% (recent deepface releases)\n",
")\n",
"print(faces)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3c46a578",
"metadata": {},
"outputs": [],
"source": [
"#Facial Recognition\n",
"models = [\n",
" \"VGG-Face\", \n",
" \"Facenet\", \n",
" \"Facenet512\", \n",
" \"OpenFace\", \n",
" \"DeepFace\", \n",
" \"DeepID\", \n",
" \"ArcFace\", \n",
" \"Dlib\", \n",
" \"SFace\",\n",
" \"GhostFaceNet\",\n",
"]\n",
"#You can adjust the threshold according to your use case. Print the result and see the distance values. Then, you can decide the optimal threshold for your project.\n",
"#you can use any of the these models for verify and find methods for recognition\n",
"fr_result = DeepFace.verify(\n",
" img1_path = \"img1.jpg\",\n",
" img2_path = \"img2.jpg\",\n",
")\n",
"print(fr_result)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e0cd963f",
"metadata": {},
"outputs": [],
"source": [
"#Deepface's find method\n",
"dfs = DeepFace.find(\n",
" img_path = \"img1.jpg\",\n",
" db_path = \"PATH_TO_YOUR_DB\"\n",
")\n",
"print(dfs) #you can print the result to see the distance values\n",
"\n",
"#Facial Analysis\n",
"objs = DeepFace.analyze(\n",
" img_path = \"img1.jpg\", \n",
" actions = ['age', 'gender', 'race', 'emotion'],\n",
")\n",
"print(objs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8ab050cb",
"metadata": {},
"outputs": [],
"source": [
"#Facial Embeddings\n",
"embedding_objs = DeepFace.represent(\n",
" img_path = \"img1.jpg\",\n",
" model_name = models[2],\n",
")\n",
"#These can be used for clustering, finding similarity between faces, vector operations, by storing in a vector database for faster retrieval, etc."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "env",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.11.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
Binary file added face-recognition/img1.jpg
Binary file added face-recognition/img2.jpg
1 change: 1 addition & 0 deletions face-recognition/requirements.txt
@@ -0,0 +1 @@
deepface