5. Program Architecture
```
           +---------------+
Input ---> | Functionality | ---> Output
           +---------------+
```
Modules are "modular", meaning they can be strung together and rearranged (one module's output becomes another's input) and/or swapped seamlessly as desired without breaking the program.
The Capture and Control modules can each be swapped between a Realtime and a Simulation version (derived classes) that implement the same interface and produce the same output (see the sketch after this list).
- CaptureSim -> Vision -> ControlSim: for pure software testing/development.
- Capture -> Vision -> ControlSim: for testing the Camera (no hardware glove required, but feedback is still provided).
- CaptureSim -> Vision -> Control: for testing the Glove/Audible feedback (images/video loaded from disk, no camera required).
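As a minimal sketch of how this swapping could look in C++ (the class and struct names are illustrative, not the project's actual declarations), both Capture variants derive from a common interface and return the same Capture Results, so downstream modules never need to know which one is running:

```cpp
#include <memory>

// Hypothetical result type shared by both capture variants.
struct CaptureResults { /* color image, depth image, ... */ };

// Common interface: every capture variant produces the same output.
class ICapture {
public:
    virtual ~ICapture() = default;
    virtual CaptureResults grabFrame() = 0;
};

// Realtime version: talks to the physical camera through its SDK.
class Capture : public ICapture {
public:
    CaptureResults grabFrame() override { /* query the camera SDK */ return {}; }
};

// Simulation version: loads pre-recorded images/video from disk instead.
class CaptureSim : public ICapture {
public:
    CaptureResults grabFrame() override { /* read frames from disk */ return {}; }
};

// Swapping variants is just a matter of which derived class gets constructed.
std::unique_ptr<ICapture> makeCapture(bool simulate) {
    if (simulate) return std::make_unique<CaptureSim>();
    return std::make_unique<Capture>();
}
```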
Peripheral modules such as Display, as well as individual Vision submodules, can be enabled or disabled at will depending on the scenario/needs (see the flags sketch after this list).
- Display disabled: for actual device use/maximal performance.
- Selected Vision submodules disabled: for improving performance or for testing only certain modules.
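One plausible way to express these toggles, purely as an illustration (the flag names are assumptions, not the project's real configuration):

```cpp
// Hypothetical configuration flags read before the pipeline is wired up.
struct ModuleConfig {
    bool useSimCapture       = false;  // CaptureSim instead of the real camera
    bool useSimControl       = false;  // ControlSim instead of the hardware glove
    bool enableDisplay       = false;  // off for actual device use / maximal performance
    bool enableDepthObstacle = true;
    bool enableTrafficLight  = true;
    bool enableStopSign      = false;  // [TODO] submodule
    bool enableFace          = false;  // [TODO] submodule
    bool enableVehicle       = false;  // [TODO] submodule
};
```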
Not a module; simply the entry point for the program, which creates and launches a Menu.
Not really a module either; it contains ALL of the data passed to Core and all subsequent submodules.
Core: Responsible for configuring and launching all modules and for "connecting" their outputs to the appropriate inputs (see the sketch below).
- Input: NULL
- Output: All data/results
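A rough, hypothetical sketch of that wiring, building on the illustrative types above (Vision, Display, IControl, DetectionResults and makeControl() are stand-ins, not the project's real classes): Core constructs the chosen variants and then, each iteration, feeds one module's output into the next module's input.

```cpp
// Hypothetical wiring loop; Vision, Display, IControl, DetectionResults and
// makeControl() are stand-ins for whatever Core really instantiates.
void runPipeline(const ModuleConfig& cfg) {
    std::unique_ptr<ICapture> capture = makeCapture(cfg.useSimCapture);  // Capture or CaptureSim
    std::unique_ptr<IControl> control = makeControl(cfg.useSimControl);  // Control or ControlSim
    Vision vision(cfg);                                                  // enabled submodules only
    Display display;

    while (true) {
        CaptureResults frame   = capture->grabFrame();   // Input: NULL       -> Output: images
        DetectionResults found = vision.process(frame);  // Input: images     -> Output: detections
        control->send(found);                            // Input: detections -> Output: NULL
        if (cfg.enableDisplay)
            display.show(frame, found);                  // optional debug overlay
    }
}
```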
Capture: Interfaces with the hardware camera through its SDK in real time.
- Input: NULL
- Output: Capture Results (color/depth images)
CaptureSim: Simulates camera capture in software by loading images/video from disk.
- Input: NULL
- Output: Capture Results (color/depth images)
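A minimal sketch of what the simulated capture could look like, assuming OpenCV is used for reading recordings from disk (the paired color/depth files and the class shape are illustrative assumptions):

```cpp
#include <opencv2/opencv.hpp>
#include <string>

// Illustrative CaptureSim: loads a pre-recorded color/depth pair from disk
// instead of querying the camera, so no hardware is needed.
class CaptureSim {
public:
    CaptureSim(const std::string& colorPath, const std::string& depthPath)
        : colorVideo_(colorPath), depthVideo_(depthPath) {}

    // Returns false when either recording runs out of frames.
    bool grabFrame(cv::Mat& color, cv::Mat& depth) {
        return colorVideo_.read(color) && depthVideo_.read(depth);
    }

private:
    cv::VideoCapture colorVideo_;
    cv::VideoCapture depthVideo_;
};
```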
Vision: Contains all the Computer Vision detection submodules.
- Input: Capture Results
- Output: Detection Results
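A fragment illustrating how Vision might fan one frame out to whichever detection submodules are enabled; it assumes Vision owns its submodules and enable flags as members, and the result types are placeholders rather than the project's real declarations:

```cpp
// Hypothetical per-submodule result types (contents omitted).
struct DepthObstacleResults { /* distance map ... */ };
struct TrafficLightResults  { /* detected light regions ... */ };

// One container for everything Vision produces per frame.
struct DetectionResults {
    DepthObstacleResults depthObstacle;
    TrafficLightResults  trafficLight;
    // StopSign / Face / Vehicle results would be added as those modules are completed.
};

// Fragment: assumes Vision holds its submodules and their enable flags as members.
DetectionResults Vision::process(const CaptureResults& frame) {
    DetectionResults out;
    if (depthObstacleEnabled_)
        out.depthObstacle = depthObstacle_.detect(frame);  // disabled submodules are simply skipped
    if (trafficLightEnabled_)
        out.trafficLight = trafficLight_.detect(frame);
    return out;
}
```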
DepthObstacle: Measures distance and creates a distance map from the infrared depth image.
- Input: Vision Input (Capture Results)
- Output: DepthObstacle Results
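To make the idea concrete (this is an illustration, not the project's actual algorithm), the depth image can be split into a coarse grid with the nearest valid reading recorded per cell; OpenCV and a 16-bit depth image in millimetres are assumed here:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Illustrative sketch: split a depth image (CV_16U, millimetres assumed) into a
// rows x cols grid and record the nearest valid distance inside each cell.
std::vector<std::vector<double>> buildDistanceMap(const cv::Mat& depth,
                                                  int rows, int cols) {
    std::vector<std::vector<double>> grid(rows, std::vector<double>(cols, 0.0));
    int cellH = depth.rows / rows;
    int cellW = depth.cols / cols;
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            cv::Mat cell = depth(cv::Rect(c * cellW, r * cellH, cellW, cellH));
            double minVal = 0.0, maxVal = 0.0;
            cv::minMaxLoc(cell, &minVal, &maxVal, nullptr, nullptr,
                          cell > 0);               // mask out 0 = "no reading" pixels
            grid[r][c] = minVal;                   // nearest obstacle in this cell
        }
    }
    return grid;
}
```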
TrafficLight: Detects traffic lights by finding and filtering red blobs.
- Input: Vision Input (Capture Results)
- Output: TrafficLight Results
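A rough sketch of the red-blob approach, assuming OpenCV; the HSV thresholds and the size/shape filter values are illustrative, not tuned values from the project:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Illustrative sketch: threshold for red in HSV (red wraps around hue 0, so two
// ranges are combined), then keep small, roughly square blobs as candidates.
std::vector<cv::Rect> findRedBlobs(const cv::Mat& colorBgr) {
    cv::Mat hsv, lowRed, highRed, mask;
    cv::cvtColor(colorBgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 120, 120),   cv::Scalar(10, 255, 255),  lowRed);
    cv::inRange(hsv, cv::Scalar(170, 120, 120), cv::Scalar(180, 255, 255), highRed);
    cv::bitwise_or(lowRed, highRed, mask);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> candidates;
    for (const auto& contour : contours) {
        cv::Rect box = cv::boundingRect(contour);
        double aspect = static_cast<double>(box.width) / box.height;
        // Filter: ignore tiny specks and blobs that are far from round.
        if (box.area() > 50 && aspect > 0.5 && aspect < 2.0)
            candidates.push_back(box);
    }
    return candidates;
}
```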
StopSign: Detects stop signs. [TODO]
- Input: Vision Input (Capture Results)
- Output: StopSign Results
Face: Detects the presence of faces using Machine Learning. [TODO]
- Input: Vision Input (Capture Results)
- Output: Face Results
Vehicle: Detects the presence of vehicles, possibly using Machine Learning. [TODO]
- Input: Vision Input (Capture Results)
- Output: Vehicle Results
Control: Communicates with the hardware glove controller by sending finger actuator voltage values.
- Input: Vision Results
- Output: NULL
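The controller protocol is not described here, so the following is only a hypothetical sketch of the mapping step: each finger gets a voltage that grows as its assigned obstacle distance shrinks. The 0-5 V range, the five-finger layout, the 4 m sensing range, and sendToGlove() are all assumptions.

```cpp
#include <algorithm>
#include <array>

// Hypothetical sketch: map one obstacle distance per finger (metres) to an
// actuator voltage, with closer obstacles producing stronger feedback.
// sendToGlove() stands in for whatever the real controller protocol is.
std::array<double, 5> distancesToVoltages(const std::array<double, 5>& metres) {
    constexpr double maxVolts = 5.0;   // assumed actuator range
    constexpr double maxRange = 4.0;   // assumed sensing range in metres
    std::array<double, 5> volts{};
    for (std::size_t i = 0; i < metres.size(); ++i) {
        double closeness = 1.0 - std::min(metres[i], maxRange) / maxRange;
        volts[i] = maxVolts * std::clamp(closeness, 0.0, 1.0);
    }
    return volts;   // would then be passed to e.g. sendToGlove(volts) by Control
}
```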
ControlSim: Outputs, as text, the values that would have been sent to the glove controller.
- Input: Vision Results
- Output: NULL
Display: Displays the color and depth images with overlaid regions and labels for the Vision detections.
- Input: Vision Results
- Output: NULL
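A minimal sketch of that overlay step, assuming OpenCV windows are used for display (the rectangle/label drawing and window name are illustrative):

```cpp
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

// Illustrative Display step: draw each detection's region and label on top of
// the color image, then show it in a debug window.
struct Overlay { cv::Rect region; std::string label; };

void showDetections(const cv::Mat& colorBgr, const std::vector<Overlay>& overlays) {
    cv::Mat view = colorBgr.clone();
    for (const auto& o : overlays) {
        cv::rectangle(view, o.region, cv::Scalar(0, 255, 0), 2);
        cv::putText(view, o.label, o.region.tl() + cv::Point(0, -5),
                    cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 255, 0), 1);
    }
    cv::imshow("Detections", view);
    cv::waitKey(1);   // brief wait so the window actually refreshes
}
```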