The purpose of this project is to build a deep learning model for Indian Sign Language (ISL) detection. ISL detection technology uses computer vision and artificial intelligence to recognize and interpret sign language gestures made by individuals who are deaf or hard of hearing.
The dataset consists of 36 classes:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z
- In total, 2,244 training images and 213 validation images are present across these classes.
- Annotate bounding boxes in YOLOv5 format using labelImg and the makesense.ai website.
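Both labelImg and makesense.ai can export labels in YOLO format directly. For reference, each label file contains one `class x_center y_center width height` line per object, with coordinates normalized by the image size. A minimal conversion sketch from pixel-coordinate (Pascal VOC style) boxes, with an illustrative function name:

```python
def voc_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a pixel-coordinate box (xmin, ymin, xmax, ymax)
    to YOLO's normalized (x_center, y_center, width, height)."""
    x_c = (xmin + xmax) / 2.0 / img_w
    y_c = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return x_c, y_c, w, h
```

For example, a box covering the top-left quarter of a 640x480 image maps to `(0.25, 0.25, 0.5, 0.5)`.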
- Prepare the folder structure expected by YOLOv5.
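One layout YOLOv5 accepts is shown below (`dataset` is a placeholder name; YOLOv5 locates labels by swapping `images` for `labels` in each path, so the two trees must mirror each other):

```
dataset/
├── images/
│   ├── train/
│   └── val/
└── labels/
    ├── train/
    └── val/
```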
- Clone YOLOv5 from the official repository.
- Change into the yolov5 directory.
- Install the dependencies.
- Download the pre-trained weights for the model versions you want to use.
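The setup steps above can be run as follows. Note that pre-trained weights such as `yolov5s.pt` are also downloaded automatically by `train.py` when passed via `--weights`:

```shell
# Clone the official YOLOv5 repository and enter it
git clone https://github.com/ultralytics/yolov5
cd yolov5

# Install the dependencies
pip install -r requirements.txt
```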
- Go to `yolov5/data/`.
- Open `data.yaml`.
- Edit the following inside it:
  - Training and validation file paths
  - Number of classes and class names
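An edited `data.yaml` for this dataset could look like the sketch below; the `train` and `val` paths are assumptions that must match your actual folder layout, while `nc: 36` matches the 36 classes listed above:

```yaml
# data.yaml -- paths are relative to the yolov5 directory (adjust to your layout)
train: ../dataset/images/train
val: ../dataset/images/val

nc: 36
names: ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
        'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M',
        'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']
```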
- Set the image size to 640 with a batch size of 8.
- Train the model for around 50 epochs.
- Visualise the training metrics with TensorBoard.
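The training and visualisation steps map to the YOLOv5 CLI as follows (run from the `yolov5` directory; `yolov5s.pt` is one choice of pre-trained weights, and the `data.yaml` path assumes the file sits in that directory):

```shell
# Train for 50 epochs at image size 640 with batch size 8,
# starting from pre-trained weights
python train.py --img 640 --batch 8 --epochs 50 --data data.yaml --weights yolov5s.pt

# Launch TensorBoard on the training logs
tensorboard --logdir runs/train
```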