A project to recognize sign language using OpenCV and a convolutional neural network.
- OpenCV
- NumPy
- Keras
- sklearn
- Google Colab
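To run the scripts locally, the dependencies can usually be installed with pip. The package names below (including TensorFlow as the Keras backend) are assumptions, not a pinned requirements list from this repository:

pip install opencv-python numpy keras tensorflow scikit-learn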
You can create your own dataset and train your own model, or use our pretrained model to recognize letters. The dataset consists of 19,200 images, i.e. 800 images for each letter except 'J' and 'Z'.
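For reference, the letter set and image count work out as follows; this is an illustrative snippet, not code from the repository ('J' and 'Z' are presumably excluded because those letters are signed with motion):

```python
import string

# 'J' and 'Z' are skipped, presumably because those letters are signed with
# motion and cannot be captured as a single static image.
LETTERS = [c for c in string.ascii_uppercase if c not in ("J", "Z")]

print(len(LETTERS))        # 24 letters
print(len(LETTERS) * 800)  # 24 * 800 = 19200 images in total
```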
- Specify the path for storing images in capture.py
- Execute the following command
python capture.py
- Enter the letter for which you want to capture the images.
- Place your hand inside the green rectangle
- Press 'C' to start the capturing process
- Repeat steps 2-5 for all letters except 'J' and 'Z' (a rough sketch of the capture loop is shown below)
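The capture step boils down to an OpenCV webcam loop that draws a fixed region of interest and saves cropped frames once 'C' is pressed. The sketch below is only an approximation of capture.py; the save path, ROI coordinates, and file names are assumptions:

```python
import os
import cv2

SAVE_DIR = "dataset"                     # assumed path; set the real one in capture.py
letter = input("Letter to capture: ").strip().upper()
os.makedirs(os.path.join(SAVE_DIR, letter), exist_ok=True)

cap = cv2.VideoCapture(0)
capturing, count = False, 0

while count < 800:                       # 800 images per letter
    ret, frame = cap.read()
    if not ret:
        break
    x, y, w, h = 100, 100, 300, 300      # assumed ROI: place your hand here
    roi = frame[y:y + h, x:x + w].copy() # crop before the rectangle is drawn
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("capture", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord('c'):                  # 'C' starts saving frames
        capturing = True
    elif key == ord('q'):
        break

    if capturing:
        cv2.imwrite(os.path.join(SAVE_DIR, letter, f"{count}.jpg"), roi)
        count += 1

cap.release()
cv2.destroyAllWindows()
```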
- Specify the path of the parent folder of the images in upload_array.py (a sketch of the array conversion follows these steps)
- Execute the following command
python upload_array.py
- Upload the generated .npy files to Google Drive
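upload_array.py converts the captured images into NumPy arrays that the notebook can load. A minimal sketch of that conversion, assuming a grayscale 64x64 input size and the output names images.npy / labels.npy (the actual script may use different values):

```python
import os
import cv2
import numpy as np

PARENT_DIR = "dataset"   # assumed parent folder containing one sub-folder per letter
IMG_SIZE = 64            # assumed training resolution

images, labels = [], []
for idx, letter in enumerate(sorted(os.listdir(PARENT_DIR))):
    folder = os.path.join(PARENT_DIR, letter)
    for name in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue
        images.append(cv2.resize(img, (IMG_SIZE, IMG_SIZE)))
        labels.append(idx)

# Assumed output names; upload whatever .npy files the script actually produces.
np.save("images.npy", np.array(images))
np.save("labels.npy", np.array(labels))
```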
- Run recogModel.ipynb in Google Colab to train the model (an illustrative model sketch follows these steps)
- Download the .h5 file from your Google Drive
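The notebook trains the CNN on the uploaded arrays and saves the weights as an .h5 file. The exact architecture lives in recogModel.ipynb; the Keras model below is only an illustrative sketch with assumed layer sizes, input resolution, and file names:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.utils import to_categorical
from sklearn.model_selection import train_test_split

IMG_SIZE, NUM_CLASSES = 64, 24                      # 24 letters (no 'J' or 'Z')

X = np.load("images.npy").reshape(-1, IMG_SIZE, IMG_SIZE, 1) / 255.0
y = to_categorical(np.load("labels.npy"), NUM_CLASSES)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(IMG_SIZE, IMG_SIZE, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
model.save("sign_model.h5")                         # the file to download from Drive
```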
- Specify the path of the .h5 file in recog.py (a rough sketch of the recognition loop follows these steps)
- Execute the following command
python recog.py
- Press 'C' to start recognition
- Press 'Q' to quit
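recog.py ties everything together: it loads the trained model, crops the same region of interest from the live feed, and overlays the predicted letter. Again a hedged sketch, assuming the ROI, input size, and label order used in the earlier snippets:

```python
import string
import cv2
import numpy as np
from keras.models import load_model

MODEL_PATH = "sign_model.h5"            # assumed path; set the real one in recog.py
IMG_SIZE = 64
LETTERS = [c for c in string.ascii_uppercase if c not in ("J", "Z")]

model = load_model(MODEL_PATH)
cap = cv2.VideoCapture(0)
recognizing = False

while True:
    ret, frame = cap.read()
    if not ret:
        break
    x, y, w, h = 100, 100, 300, 300     # assumed ROI, matching capture.py
    roi = frame[y:y + h, x:x + w]
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    if recognizing:
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (IMG_SIZE, IMG_SIZE)) / 255.0
        pred = model.predict(gray.reshape(1, IMG_SIZE, IMG_SIZE, 1), verbose=0)
        letter = LETTERS[int(np.argmax(pred))]
        cv2.putText(frame, letter, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

    cv2.imshow("recognition", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord('c'):                 # 'C' starts recognition
        recognizing = True
    elif key == ord('q'):               # 'Q' quits
        break

cap.release()
cv2.destroyAllWindows()
```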