
DirectAI Viam Integration (beta)

A module for the Viam vision service that automatically builds, deploys, and runs inference on custom object detectors and image classifiers via DirectAI's cloud offering. Since this is a beta test of the service, latency may be high and availability of the cloud service is not guaranteed. See here for a demo!

Internally, this module calls DirectAI's deploy_detector and deploy_classifier endpoints with the provided configuration, saves the IDs of the deployed models, and then calls the models via their IDs on incoming images. See docs for an overview of DirectAI's publicly available API.
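
As a rough sketch of that flow, a direct deployment call might look like the following. Only the deploy_detector endpoint name and the config shape are taken from this README; the base URL, the bearer-token auth, and the deployed_id response field are assumptions, so consult DirectAI's API docs for the exact contract.

import requests

# Hypothetical values -- replace with the real ones from DirectAI's API docs.
BASE_URL = "https://api.directai.io"  # assumption
token = "YOUR_ACCESS_TOKEN"  # obtained with your client credentials

# Config shape mirrors the deployed_detector attributes shown below.
detector_config = {
    "detector_configs": [
        {
            "name": "nose",
            "examples_to_include": ["nose"],
            "detection_threshold": 0.1,
        }
    ],
    "nms_threshold": 0.1,
}

resp = requests.post(
    f"{BASE_URL}/deploy_detector",
    headers={"Authorization": f"Bearer {token}"},
    json=detector_config,
)
resp.raise_for_status()
deployed_id = resp.json()["deployed_id"]  # response field name is an assumption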

Note that this README closely follows Viam's in-house motion-detector documentation, as described here.

Getting started with Viam & DirectAI

Start by configuring a camera on your robot. Remember the name you give the camera; you will need it later.

Note

Before configuring your vision service, you must create a robot.

Before calling DirectAI's API via Viam, you must obtain your client credentials. Generate them by clicking Get API Access on the DirectAI website, and save them in an Access JSON file on the same machine that's running your Viam server:

{
  "DIRECTAI_CLIENT_ID": "UE9S0AG9KS4F3",
  "DIRECTAI_CLIENT_SECRET": "L23LKkl0d5<M0R3S4F3"
}

Configuration

Navigate to the Config tab of your robot’s page in the Viam app. Click on the Services subtab and click Create service. Select the vision type, then select the directai-beta model. Enter a name for your service and click Create.

On the new service panel, copy and paste the Example Detector or Classifier Attributes below. Note that you can deploy classifier and detector attributes simultaneously if you'd like (a combined example follows the two below). Ensure that the Access JSON path you provide in your config is absolute, not relative (e.g., /Users/janesmith/Downloads/directai_credential.json, not ~/Downloads/directai_credential.json).

Example Detector Attributes

{
  "access_json": "ABSOLUTE_PATH_TO_ACCESS_JSON_FILE",
  "deployed_detector": {
    "detector_configs": [
      {
        "name": "nose",
        "examples_to_include": [
          "nose"
        ],
        "detection_threshold": 0.1
      },
      {
        "name": "mouth",
        "examples_to_include": [
          "mouth"
        ],
        "examples_to_exclude": [
          "mustache"
        ],
        "detection_threshold": 0.1
      },
      {
        "examples_to_include": [
          "eye"
        ],
        "detection_threshold": 0.1,
        "name": "eye"
      }
    ],
    "nms_threshold": 0.1
  }
}

Example Classifier Attributes

{
  "access_json": "ABSOLUTE_PATH_TO_ACCESS_JSON_FILE",
  "deployed_classifier": {
    "classifier_configs": [
      {
        "name": "happy",
        "examples_to_include": [
          "happy person"
        ]
      },
      {
        "name": "sad",
        "examples_to_include": [
          "sad person"
        ]
      }
    ]
  }
}
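
As noted above, detector and classifier attributes can be deployed simultaneously; a combined configuration is simply the union of the two blocks under one attributes object:

{
  "access_json": "ABSOLUTE_PATH_TO_ACCESS_JSON_FILE",
  "deployed_detector": {
    "detector_configs": [
      {
        "name": "nose",
        "examples_to_include": [
          "nose"
        ],
        "detection_threshold": 0.1
      }
    ],
    "nms_threshold": 0.1
  },
  "deployed_classifier": {
    "classifier_configs": [
      {
        "name": "happy",
        "examples_to_include": [
          "happy person"
        ]
      }
    ]
  }
}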

Attributes

The following attributes are available for directai:viamintegration:directai-beta vision services:

| Name | Type | Inclusion | Description |
| ---- | ---- | --------- | ----------- |
| access_json | string | Required | An absolute path on your local machine to a JSON file containing your DirectAI client credentials. See the Access JSON example above. |
| deployed_classifier | json | Optional | A JSON that contains a classifier_configs key and a corresponding list of classifier configurations. Each classifier is defined by a name, a list of text examples_to_include, and a list of text examples_to_exclude. See Example Classifier Attributes. |
| deployed_detector | json | Optional | A JSON that contains detector_configs (a list of detector configurations) and an nms_threshold. Each detector is defined by a name, a list of text examples_to_include, a list of text examples_to_exclude, and a detection_threshold. For more information on NMS and detection thresholds, check out the DirectAI docs. See Example Detector Attributes. |

Note

For more information, see Configure a Robot.

Usage

This module implements the following methods of the vision service API:

  • GetDetections()
  • GetDetectionsFromCamera()
  • GetClassifications()
  • GetClassificationsFromCamera()

The module behavior differs slightly for classifications and detections.

When returning classifications, the module returns a list of dictionaries. The list length equals the length of the classifier_configs list provided in the deployed classifier attributes. Each dictionary includes class_name and confidence key-value pairs, and the list is sorted in decreasing confidence order.
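
For example, with the two-class classifier configured above, a single call might return something like this (confidence values are illustrative):

[
  {"class_name": "happy", "confidence": 0.83},
  {"class_name": "sad", "confidence": 0.17}
]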

When returning detections, the module returns a list of detections with bounding boxes that encapsulate the detected objects. Each detection is of the following form:

{
  "confidence": float,
  "class_name": string,
  "x_min": float,
  "y_min": float,
  "x_max": float,
  "y_max": float
}
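
Here is a minimal sketch of calling these methods with the Viam Python SDK. The service name (directai-vision), camera name (cam), and connection placeholders are assumptions; copy the real connection snippet from your robot's Code sample tab in the Viam app.

import asyncio

from viam.robot.client import RobotClient
from viam.services.vision import VisionClient


async def main():
    # Connect to your robot; replace the address and API key values with
    # the connection snippet from your robot's Code sample tab.
    opts = RobotClient.Options.with_api_key(
        api_key="YOUR_API_KEY", api_key_id="YOUR_API_KEY_ID"
    )
    robot = await RobotClient.at_address("YOUR_ROBOT_ADDRESS", opts)

    # "directai-vision" and "cam" are the names you chose when configuring
    # this vision service and your camera.
    vision = VisionClient.from_robot(robot, "directai-vision")

    detections = await vision.get_detections_from_camera("cam")
    for d in detections:
        print(d.class_name, d.confidence, d.x_min, d.y_min, d.x_max, d.y_max)

    classifications = await vision.get_classifications_from_camera("cam", 2)
    for c in classifications:
        print(c.class_name, c.confidence)

    await robot.close()


asyncio.run(main())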

Visualize

Once the directai:viamintegration:directai-beta modular service is in use, configure a transform camera to see classifications or detections appear in your robot's field of vision. View the transform camera from the Control tab.
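
As a sketch, a transform camera that overlays this service's detections might be configured as follows; cam and directai-vision are placeholders for your camera and this vision service, and the confidence_threshold is illustrative:

{
  "name": "transform_cam",
  "type": "camera",
  "model": "transform",
  "attributes": {
    "source": "cam",
    "pipeline": [
      {
        "type": "detections",
        "attributes": {
          "detector_name": "directai-vision",
          "confidence_threshold": 0.3
        }
      }
    ]
  }
}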

Next Steps

To write code that uses the detector or classifier output, use one of the available Viam SDKs, as in the sketch above.

Please join DirectAI's Discord, contact us at [email protected], or schedule time on our Calendly if you have any questions or feedback!
