
5. Program Architecture

Patrick Robichaud edited this page Nov 21, 2017 · 1 revision

Basic Module Format

           +---------------+
Input ---> | Functionality | ---> Output
           +---------------+

Modules are, as the name suggests, modular: they can be strung together in different arrangements (one module's output becomes another's input) and swapped seamlessly as desired, without breaking the program.

Typical Use Cases

The Capture and Control modules can each be swapped between Realtime and Simulation versions: derived classes that implement the same interface and produce the same output.

  • CaptureSim -> Vision -> ControlSim: for pure software testing/development.
  • Capture -> Vision -> ControlSim: for testing the Camera (no hardware glove required, but feedback is still provided).
  • CaptureSim -> Vision -> Control: for testing the Glove/Audible feedback (images/video loaded from disk, no camera required).
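The swapping described above can be sketched in a few lines. This is a minimal Python sketch, not the project's real code: the class and method names (`CaptureBase`, `capture`, `send`, `run_pipeline`) are illustrative assumptions, and Vision is reduced to a single function.

```python
from abc import ABC, abstractmethod

class CaptureBase(ABC):
    """Common interface shared by Capture (hardware) and CaptureSim (disk)."""
    @abstractmethod
    def capture(self):
        """Return capture results: (color_image, depth_image)."""

class CaptureSim(CaptureBase):
    """Simulated capture: frames are pre-loaded from disk instead of a camera."""
    def __init__(self, frames):
        self.frames = list(frames)

    def capture(self):
        return self.frames.pop(0)

class ControlSim:
    """Reports the values that would have been sent to the glove controller."""
    def send(self, detections):
        return f"glove <- {detections}"

def run_pipeline(capture, vision, control):
    # Core's job in miniature: one module's output feeds the next's input.
    color, depth = capture.capture()
    return control.send(vision(color, depth))

# Any Capture/Control variant with the same interface can be swapped in:
vision = lambda color, depth: {"obstacle_distance": min(depth)}
out = run_pipeline(CaptureSim([([0, 0], [3, 1, 2])]), vision, ControlSim())
print(out)  # glove <- {'obstacle_distance': 1}
```

Because `run_pipeline` only depends on the shared interface, replacing `CaptureSim` with a hardware-backed `Capture` (or `ControlSim` with `Control`) requires no change to the rest of the program.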

Peripheral modules such as Display, as well as individual Vision modules, can be enabled or disabled at will depending on the scenario/needs.

  • Display disabled: for actual device use/maximal performance.
  • Selected Vision modules disabled: for improving performance, or for testing only certain modules.
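One plausible way to toggle Vision submodules at will is a registry of detectors filtered by an enabled set. This is a hedged sketch; the detector logic and the `enabled` mechanism are assumptions, not the project's implementation.

```python
# Toy stand-ins for the real detection submodules:
def depth_obstacle(color, depth):
    return {"DepthObstacle": min(depth)}

def traffic_light(color, depth):
    return {"TrafficLight": "red" in color}

ALL_DETECTORS = {"DepthObstacle": depth_obstacle, "TrafficLight": traffic_light}

def vision(color, depth, enabled=frozenset(ALL_DETECTORS)):
    """Run only the enabled detectors; disabled modules are simply skipped."""
    results = {}
    for name, detector in ALL_DETECTORS.items():
        if name in enabled:
            results.update(detector(color, depth))
    return results

# Testing only DepthObstacle: disable everything else for performance.
print(vision(["green"], [5, 2], enabled={"DepthObstacle"}))  # {'DepthObstacle': 2}
```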

Modules

Main

Not a module; simply the program's entry point, which creates and launches a Menu.

Menu

Not really a module either; it holds ALL the data passed to Core and to every subsequent submodule.

Core

Responsible for configuring and launching all modules, and "connecting" their inputs/outputs together.

  • Input: NULL
  • Output: All data/results

Capture

Interfaces with the hardware camera through its SDK in realtime.

  • Input: NULL
  • Output: Capture Results (color/depth images)

CaptureSim

Simulates camera capture in software by loading images/video from disk.

  • Input: NULL
  • Output: Capture Results (color/depth images)

Vision

Contains all the Computer Vision detection submodules.

  • Input: Capture Results
  • Output: Detection Results

DepthObstacle

Measures distance and creates a distance map from the infrared depth image.

  • Input: Vision Input (Capture Results)
  • Output: DepthObstacle Results
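The wiki says DepthObstacle "creates a distance map" from the depth image. One plausible scheme, sketched below under assumption (the real cell layout and resolution are not documented here), is a coarse grid where each cell holds the nearest distance in its region, e.g. one cell per finger actuator.

```python
def distance_map(depth, rows, cols):
    """Reduce a depth image to a rows x cols grid of nearest distances."""
    h, w = len(depth), len(depth[0])
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Gather all depth samples falling inside this grid cell.
            cell = [depth[y][x]
                    for y in range(r * h // rows, (r + 1) * h // rows)
                    for x in range(c * w // cols, (c + 1) * w // cols)]
            row.append(min(cell))  # nearest obstacle in this region
        grid.append(row)
    return grid

depth = [[9, 9, 1, 9],
         [9, 9, 9, 9]]
print(distance_map(depth, 1, 2))  # [[9, 1]]
```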

TrafficLight

Detects traffic lights by finding and filtering red blobs.

  • Input: Vision Input (Capture Results)
  • Output: TrafficLight Results
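"Finding and filtering red blobs" can be sketched as: threshold red pixels, group them into 4-connected components, and keep blobs whose area falls in a plausible range. The thresholds and area limits below are illustrative assumptions, not the project's tuned values.

```python
def red_blobs(image, min_area=2, max_area=50):
    """Return the areas of red blobs within [min_area, max_area]."""
    h, w = len(image), len(image[0])
    # Threshold: strongly red pixels only (assumed RGB tuples, 0-255).
    red = [[r > 200 and g < 80 and b < 80 for (r, g, b) in row] for row in image]
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if red[y][x] and not seen[y][x]:
                # Flood-fill one 4-connected component, counting its area.
                stack, area = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and red[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if min_area <= area <= max_area:  # filter out specks and huge regions
                    blobs.append(area)
    return blobs

R, K = (255, 0, 0), (0, 0, 0)
image = [[R, R, K],
         [K, K, R]]  # one 2-pixel blob, one 1-pixel speck
print(red_blobs(image))  # [2]
```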

StopSign

Detects stop signs [TODO]

  • Input: Vision Input (Capture Results)
  • Output: StopSign Results

Face

Detects presence of faces using Machine Learning [TODO]

  • Input: Vision Input (Capture Results)
  • Output: Face Results

Vehicle

Detects presence of vehicles, possibly using Machine Learning [TODO]

  • Input: Vision Input (Capture Results)
  • Output: Vehicle Results

Control

Communicates with the hardware glove controller by sending finger actuator voltage values.

  • Input: Vision Results
  • Output: NULL
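A natural mapping for the "finger actuator voltage values" is nearer obstacle, higher voltage. The voltage range, maximum distance, and one-voltage-per-finger layout below are illustrative assumptions only.

```python
MAX_VOLTAGE = 5.0     # assumed actuator maximum
MAX_DISTANCE_M = 4.0  # assumed sensing range

def finger_voltages(distances):
    """Map one distance per finger to a voltage, clamped to [0, MAX_VOLTAGE]."""
    volts = []
    for d in distances:
        d = min(max(d, 0.0), MAX_DISTANCE_M)
        volts.append(MAX_VOLTAGE * (1 - d / MAX_DISTANCE_M))
    return volts

print(finger_voltages([0.0, 2.0, 4.0]))  # [5.0, 2.5, 0.0]
```

ControlSim could reuse exactly this computation and print the result instead of driving hardware, which is what makes the two variants interchangeable.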

ControlSim

Prints, as text, the values that would have been sent to the glove controller.

  • Input: Vision Results
  • Output: NULL

Display

Displays the color and depth images with overlay regions and labels of Vision detections.

  • Input: Vision Results
  • Output: NULL