This project develops and implements deep learning models to detect tooth decay from smartphone microphotographs, with the goal of making dental diagnostics more accessible, particularly for populations with limited access to traditional diagnostic tools.
The dataset consists of detailed images of tooth tissue captured via smartphone microphotography. The models were trained and evaluated on these images to detect signs of decay, providing a cost-effective route to early detection for underserved populations.
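Before training, a collected image set is typically divided into training, validation, and test splits. The sketch below shows one minimal way to do this; the filenames and split ratios are illustrative assumptions, not details from the project:

```python
import random

def split_dataset(filenames, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle image filenames and split into train/val/test lists.

    The remaining (1 - train_frac - val_frac) fraction becomes the test set.
    """
    files = list(filenames)
    random.Random(seed).shuffle(files)  # deterministic shuffle for reproducibility
    n_train = int(len(files) * train_frac)
    n_val = int(len(files) * val_frac)
    return (files[:n_train],
            files[n_train:n_train + n_val],
            files[n_train + n_val:])

# Example with hypothetical filenames:
train, val, test = split_dataset([f"tooth_{i:03d}.jpg" for i in range(100)])
```

A fixed seed keeps the splits reproducible across runs, which matters when comparing several models on the same data.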
State-of-the-art deep learning models were applied, including:
- Detectron2
- YOLOv5
- NanoDet
- FCOS
- MobileNetV3
A comparative analysis of these models was conducted, assessing each one's accuracy and effectiveness in detecting tooth decay to identify the most promising approach.
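Object detectors like these are commonly compared with overlap-based metrics such as intersection-over-union (IoU). The source does not specify which metrics were used, so the following is an illustrative sketch of IoU and a simple match-rate score:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def detection_accuracy(predictions, ground_truths, threshold=0.5):
    """Fraction of ground-truth boxes matched by any prediction with IoU >= threshold."""
    matched = sum(
        any(iou(p, gt) >= threshold for p in predictions)
        for gt in ground_truths
    )
    return matched / len(ground_truths) if ground_truths else 0.0
```

Full benchmarks would normally use mean average precision (mAP) over IoU thresholds, as reported by the evaluation scripts shipped with Detectron2 and YOLOv5.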
The following tools and frameworks supported the implementation:
- PyTorch: For deep learning model implementation.
- TensorFlow: For experimenting with different models and training.
- Google Colab: Used as the primary environment for running the models and conducting experiments.
This project provided hands-on experience in computer vision, deep learning model optimization, and performance evaluation. The insights gained contribute to the development of innovative solutions in medical imaging, particularly in improving accessibility to dental diagnostics.