SolarSense is an IoT and machine learning (ML) solution designed to detect and notify users when solar panels require cleaning. By leveraging a Raspberry Pi camera, an onboard TensorFlow Lite model, and AWS cloud services, SolarSense provides an automated system for maintaining optimal solar panel efficiency.
- Overview
- System Architecture
- API Service
- IoT Client
- Model Training & Deployment
- Deployment
- Future Enhancements
- Contributing
- License
## Overview

Solar panels accumulate dirt over time, reducing efficiency. SolarSense automates the detection process by:
- Capturing images using a Raspberry Pi camera
- Running a TensorFlow Lite ML model locally to classify images as clean or dirty
- Automatically deploying updated models via AWS S3 and MQTT
- Sending alerts via AWS SNS when cleaning is required
## System Architecture

The system consists of three key components:
- IoT Client: Runs on a Raspberry Pi, captures images, and performs inference.
- API Service: Handles notifications and integrates with AWS IoT and SNS.
- Model Training & Deployment: Uses AWS S3 and MQTT to update IoT devices automatically.
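The MQTT topic names and payload shapes are not pinned down above; a minimal convention might look like the following sketch, where the topic names and field names are assumptions to adjust to your AWS IoT Core setup:

```python
import json

# Assumed topic names -- align these with your AWS IoT Core rule configuration.
DETECTION_TOPIC = "solarsense/panel-status"    # device -> cloud: classification results
MODEL_UPDATE_TOPIC = "solarsense/model-update" # cloud -> device: new model available

def detection_payload(device_id: str, status: str, confidence: float) -> str:
    """Message a device publishes after classifying a captured image."""
    return json.dumps({
        "deviceId": device_id,
        "status": status,  # "clean" or "dirty"
        "confidence": round(confidence, 3),
    })

def model_update_payload(model_key: str, version: str) -> str:
    """Message the cloud publishes when a new model lands in S3."""
    return json.dumps({"s3Key": model_key, "version": version})
```

Keeping both payloads as small, flat JSON objects makes them easy to match in AWS IoT rule SQL and cheap to parse on the Raspberry Pi.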
## API Service

The API is implemented as an AWS Lambda function triggered by AWS IoT. It sends notifications via AWS SNS when a dirty solar panel is detected.
- The IoT device publishes an MQTT message to AWS IoT Core.
- AWS IoT triggers the Lambda function.
- The Lambda function sends a notification via AWS SNS.
```typescript
import { SNS } from "aws-sdk";
import dotenv from "dotenv";

dotenv.config();
const sns = new SNS();

// The AWS IoT rule invokes this handler with the MQTT message payload itself,
// not an API Gateway event. The payload field names and the SNS_TOPIC_ARN
// environment variable are illustrative.
export const handler = async (event: { deviceId: string; status: string }): Promise<void> => {
  if (event.status !== "dirty") return; // only notify when cleaning is needed
  await sns
    .publish({
      TopicArn: process.env.SNS_TOPIC_ARN,
      Message: `Solar panel on device ${event.deviceId} requires cleaning.`,
    })
    .promise();
};
```
## IoT Client

The IoT client runs on a Raspberry Pi and is responsible for:
- Capturing images using the Raspberry Pi camera.
- Running inference using TensorFlow Lite.
- Publishing results to AWS IoT Core via MQTT.
The client is organized into the following modules:

- CameraService: Captures images using the Raspberry Pi camera.
- ImageProcessor: Preprocesses images for inference.
- ModelService: Loads and runs the TensorFlow Lite model for classification.
- MQTTClient: Handles MQTT connectivity and message publishing.
- predict.py: Orchestrates image capture, inference, and MQTT messaging.
predict.py executes the following steps:

- Capture an image using OpenCV.
- Preprocess the image (resize, normalize, etc.).
- Run inference using TensorFlow Lite.
- If classified as dirty, publish an MQTT message to AWS IoT Core.
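The capture → preprocess → infer → publish loop above can be sketched as follows. The function name, the 0.5 decision threshold, and the topic name are assumptions, and the camera, model, and MQTT dependencies are passed in as callables so the control flow is visible without hardware:

```python
import json

DIRTY_THRESHOLD = 0.5  # assumed sigmoid cutoff; tune per deployment

def run_once(capture_image, preprocess, predict, publish, device_id="rpi-01"):
    """One capture -> preprocess -> infer -> (maybe) publish cycle.

    capture_image() -> raw frame (e.g. an OpenCV BGR array)
    preprocess(frame) -> model input tensor (resized, normalized)
    predict(tensor) -> float in [0, 1], probability the panel is dirty
    publish(topic, payload) -> sends an MQTT message to AWS IoT Core
    """
    frame = capture_image()
    prob_dirty = predict(preprocess(frame))
    if prob_dirty > DIRTY_THRESHOLD:  # only dirty panels trigger a message
        publish("solarsense/panel-status", json.dumps({
            "deviceId": device_id,
            "status": "dirty",
            "confidence": round(prob_dirty, 3),
        }))
    return prob_dirty
```

In the real client, `capture_image` would wrap CameraService/OpenCV, `preprocess` would delegate to ImageProcessor, `predict` to the TFLite interpreter in ModelService, and `publish` to MQTTClient.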
## Model Training & Deployment

The computer vision model is based on MobileNetV2 and trained using TensorFlow & Keras.
It is designed to classify images of solar panels as clean or dirty.
The model leverages transfer learning, using MobileNetV2 as the feature extractor:
- Base Model: MobileNetV2 (pretrained on ImageNet, frozen weights).
- Global Average Pooling Layer: Reduces feature map size.
- Dense Layer (1024 neurons, ReLU activation): Learns solar panel-specific patterns.
- Dropout Layer: Prevents overfitting (value set via configuration).
- Output Layer (1 neuron, Sigmoid activation): Performs binary classification.
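The layer stack described above translates roughly to the following Keras builder. The function name and the 224×224 input size (MobileNetV2's default) are assumptions, and the dropout rate is left as a parameter since the text says it comes from configuration; the TensorFlow import is deferred into the function so the sketch can be loaded without TensorFlow installed:

```python
def build_model(dropout_rate: float, input_shape=(224, 224, 3)):
    """Transfer-learning classifier: frozen MobileNetV2 plus a small dense head."""
    import tensorflow as tf  # deferred import: only needed when building

    # Pretrained feature extractor with frozen ImageNet weights.
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")
    base.trainable = False

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),        # shrink feature maps
        tf.keras.layers.Dense(1024, activation="relu"),  # panel-specific patterns
        tf.keras.layers.Dropout(dropout_rate),           # value from configuration
        tf.keras.layers.Dense(1, activation="sigmoid"),  # clean vs. dirty
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```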
Training proceeds as follows:

- Dataset Preparation: Images are preprocessed and split into training, validation, and test sets.
- Model Training: The model is trained using an Adam optimizer with a binary cross-entropy loss function.
- Performance Monitoring: Training metrics (accuracy, loss) are logged using Weights & Biases (WandB).
- Early Stopping & Checkpointing: Ensures the best-performing model is saved.
- Evaluation: The model is tested on unseen data to measure accuracy, precision, recall, and F1-score.
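The evaluation metrics listed above reduce to simple counts over the test set. As a concrete reference, a minimal pure-Python implementation, assuming label 1 (dirty) is the positive class:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (1 = dirty)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # dirty, flagged
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # clean, flagged
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # dirty, missed
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # clean, passed
    accuracy = (tp + tn) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

For this system, recall matters most: a false negative means a dirty panel keeps losing energy unnoticed, while a false positive only costs an unnecessary alert.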
Deployment works as follows:

- The trained model is converted to TensorFlow Lite (TFLite) for efficient edge deployment.
- If the optimized model outperforms the current champion model on the test set, it is uploaded to AWS S3 for centralized distribution.
- IoT devices receive automatic updates via MQTT messaging when a new model is available.
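The release step can be sketched as below, under these assumptions: the "champion" rule is a plain test-accuracy comparison, and the S3 key scheme and topic name are illustrative. The upload and publish actions are injected as callables so the gating logic is visible without AWS credentials:

```python
import json

def release_if_champion(candidate_acc, champion_acc, version, upload, publish):
    """Upload the TFLite model and announce it over MQTT only when the
    candidate beats the current champion on the test set."""
    if candidate_acc <= champion_acc:
        return False  # keep the existing champion; devices stay as they are
    key = f"models/solarsense-{version}.tflite"  # illustrative S3 key scheme
    upload(key)  # e.g. wraps boto3 s3.upload_file(local_path, bucket, key)
    publish("solarsense/model-update",
            json.dumps({"s3Key": key, "version": version}))
    return True
```

Publishing only after a successful upload means devices never receive an announcement for a model they cannot fetch from S3.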