Software Library Specifications
The library is implemented as a Python class library and its objectives are as follows:
- To provide a unified interface for multiple agent models.
- To specify joints by joint name and their positions/motions by numerical values normalized from -1 to +1, so that callers need no knowledge of the joint IDs or motion ranges on the agent model (see the sketch below).
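As an illustration of this normalization scheme, the sketch below maps a command value in [-1, +1] linearly onto a joint's actual motion range. It is an explanatory example only, not the library's internal code; the function name and the sample limits are invented for the illustration.

```python
def denormalize(value, lower_limit, upper_limit):
    """Map a normalized command in [-1, +1] to a position inside the joint's motion range."""
    value = max(-1.0, min(1.0, value))  # clamp to the documented range
    return lower_limit + (value + 1.0) * 0.5 * (upper_limit - lower_limit)

# Example: a joint whose physical range is [-1.57, +1.57] rad.
print(denormalize(-1.0, -1.57, 1.57))  # -1.57 (lower limit)
print(denormalize(0.0, -1.57, 1.57))   #  0.0  (mid-range)
print(denormalize(1.0, -1.57, 1.57))   #  1.57 (upper limit)
```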
APIs are shown as class.method names. Methods whose class name is Robot can be used with both HSR and R2D2 (see the sketch after the class list below).
The classes are defined in PyLIS/gym-foodhunting/gym_foodhunting/foodhunting/gym_foodhunting.py:
- Robot Class
- HSR Class
- R2D2 Class
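The note that Robot methods apply to both HSR and R2D2 suggests a simple inheritance structure, sketched below. This is a simplified illustration of the idea, not the actual contents of gym_foodhunting.py.

```python
class Robot:
    """Base class: methods defined here are shared by every agent model."""
    def setJointPosition(self, joint_name, value):
        # Common implementation that resolves the joint name and maps the
        # normalized value onto the joint's motion range (details omitted).
        ...

class HSR(Robot):
    """HSR-specific setters/getters (e.g. setArmPosition) are added here."""

class R2D2(Robot):
    """R2D2-specific setters/getters (e.g. setHeadPosition) are added here."""
```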
Locomotion is implemented by rotating the left and right wheels with values between -1.0 and 1.0. For example, the robot moves forward when 1.0 is given to both the left and right wheels, turns left with -1.0 to the left and 1.0 to the right, and turns right with 1.0 to the left and -1.0 to the right (see the sketch after the list below).
- API for setting wheel velocity
  - HSR.setWheelVelocity
  - R2D2.setWheelVelocity
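A minimal usage sketch of the wheel commands described above. It assumes setWheelVelocity takes the left and right wheel values as two arguments and that robot is an already-constructed HSR or R2D2 instance; the exact signature is an assumption, not confirmed by this specification.

```python
# Hypothetical helpers; `robot` is assumed to be an HSR or R2D2 instance and
# setWheelVelocity(left, right) is assumed to take the two normalized values.

def move_forward(robot):
    robot.setWheelVelocity(1.0, 1.0)   # both wheels forward: drive straight ahead

def turn_left(robot):
    robot.setWheelVelocity(-1.0, 1.0)  # left wheel backward, right wheel forward

def turn_right(robot):
    robot.setWheelVelocity(1.0, -1.0)  # left wheel forward, right wheel backward
```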
- API for setting the velocity for a joint
  - Robot.setJointVelocity
- API for setting the position for a joint
  - Robot.setJointPosition
- API for setting the torque for a joint (not supported)
  - Robot.scaleJointForce
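A hypothetical usage sketch of the two supported joint APIs. Per the design objective above, joints are assumed to be addressed by joint name and driven with values normalized to [-1, +1]; the argument order and the joint name used below are assumptions for illustration only.

```python
# `robot` is assumed to be an HSR or R2D2 instance; 'head_tilt_joint' is a
# hypothetical joint name used purely for illustration.
robot.setJointPosition('head_tilt_joint', 0.5)    # move to the upper half of the range
robot.setJointVelocity('head_tilt_joint', -0.25)  # drive slowly toward the lower limit
# Robot.scaleJointForce (torque scaling) is listed above but not supported.
```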
- HSR
  - Setting the base roll position
    - HSR.setBaseRollPosition
  - Setting the torso lift position
    - HSR.setTorsoLiftPosition
  - Setting the head position
    - HSR.setHeadPosition
  - Setting the arm position
    - HSR.setArmPosition
  - Setting the wrist position
    - HSR.setWristPosition
  - Setting the gripper position
    - HSR.setGripperPosition
- R2D2
  - Setting the head position
    - R2D2.setHeadPosition
  - Setting the gripper position
    - R2D2.setGripperPosition
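A hypothetical sketch of calling these setters. Each is assumed to take values normalized to [-1.0, +1.0] as described above; the number of arguments per method (and whether multi-joint parts such as the head take more than one value) is an assumption.

```python
# `hsr` and `r2d2` are assumed to be already-constructed robot instances; all
# argument counts below are assumptions made for illustration.
hsr.setTorsoLiftPosition(1.0)    # raise the torso lift to its upper limit
hsr.setArmPosition(-0.5)         # move the arm toward the lower half of its range
hsr.setGripperPosition(1.0)      # drive the gripper to one end of its range

r2d2.setHeadPosition(0.0)        # center the head
r2d2.setGripperPosition(-1.0)    # drive the gripper to the other end of its range
```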
- Variables for color images and depth images from the wide-angle camera on the head
  - Image arrays:
    - RGB color image
    - Depth image
    - Segmentation mask buffer (not supported on PyLIS)
  - Variables for image width and height
    - Robot.CAMERA_PIXEL_WIDTH
    - Robot.CAMERA_PIXEL_HEIGHT
  - API for camera images
    - Robot.getCameraImage
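A minimal sketch of reading the camera. It assumes Robot.getCameraImage follows PyBullet's getCameraImage convention and returns width, height, color pixels, depth pixels, and the (unsupported) segmentation mask buffer; this return layout is an assumption, not confirmed by the specification.

```python
import numpy as np

# `robot` is assumed to be an HSR or R2D2 instance; the tuple layout below is
# an assumption modeled on pybullet.getCameraImage, which this method presumably wraps.
width, height, rgb, depth, seg = robot.getCameraImage()

rgb = np.asarray(rgb).reshape(height, width, -1)   # color image (RGBA in PyBullet)
depth = np.asarray(depth).reshape(height, width)   # depth buffer

print(rgb.shape, depth.shape)
print(robot.CAMERA_PIXEL_WIDTH, robot.CAMERA_PIXEL_HEIGHT)  # class-level image size constants
```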
APIs for getting sensory information
- Camera input API
  - Renders the information obtained from Robot.getCameraImage (the camera input method) into tensors to be processed as observations in deep reinforcement learning.
  - Robot.getObservation
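A minimal sketch of reading an observation, assuming getObservation() returns an image-shaped array whose size is tied to CAMERA_PIXEL_HEIGHT and CAMERA_PIXEL_WIDTH; the exact shape, channel count, and dtype are assumptions.

```python
import numpy as np

# `robot` is assumed to be an already-constructed HSR or R2D2 instance.
obs = np.asarray(robot.getObservation())

# The observation is an image-shaped tensor derived from the camera, so it can
# be fed directly to a convolutional policy network in a DRL training loop.
print(obs.shape, obs.dtype)
print(robot.CAMERA_PIXEL_HEIGHT, robot.CAMERA_PIXEL_WIDTH)
```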
- APIs for getting the absolute position/direction
  - Acquires the absolute position and absolute direction of the robot. The absolute direction is returned as a quaternion, which can be converted to Euler form with the getEulerFromQuaternion PyBullet API (see the sketch below).
  - Robot.getPositionAndOrientation
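A minimal sketch of the quaternion-to-Euler conversion mentioned above. It assumes getPositionAndOrientation returns a (position, orientation) pair in PyBullet's convention; getEulerFromQuaternion is the actual PyBullet conversion function.

```python
import pybullet as p

# `robot` is assumed to be an HSR or R2D2 instance; the (position, quaternion)
# return pair is an assumption modeled on PyBullet's own conventions.
position, orientation = robot.getPositionAndOrientation()

# Convert the quaternion (x, y, z, w) to Euler angles (roll, pitch, yaw) in radians.
roll, pitch, yaw = p.getEulerFromQuaternion(orientation)
print(position, (roll, pitch, yaw))
```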
- APIs for getting joint positions
  - HSR
    - Getting the base roll position
      - HSR.getBaseRollPosition
    - Getting the torso lift position
      - HSR.getTorsoLiftPosition
    - Getting the head position
      - HSR.getHeadPosition
    - Getting the arm position
      - HSR.getArmPosition
    - Getting the wrist position
      - HSR.getWristPosition
    - Getting the gripper position
      - HSR.getGripperPosition
  - R2D2
    - Getting the head position
      - R2D2.getHeadPosition
    - Getting the gripper position
      - R2D2.getGripperPosition
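A hypothetical sketch of reading joint positions back. The getters are assumed to mirror the setters and to return values normalized to [-1, +1]; return shapes for multi-joint parts are assumptions.

```python
# `hsr` and `r2d2` are assumed to be already-constructed robot instances; the
# return values are assumed to be normalized to [-1.0, +1.0].
print(hsr.getTorsoLiftPosition())
print(hsr.getArmPosition())
print(hsr.getGripperPosition())
print(r2d2.getHeadPosition())
```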
Collision detection APIs
- Determines whether the robot collides with (contacts) another object (see the sketch below).
- Robot.isContact
- HSR.isContact
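A minimal sketch of a contact check. It assumes isContact can be called without arguments and returns a boolean; the actual signature (for example, whether a target body must be passed) is an assumption.

```python
# `robot` is assumed to be an HSR or R2D2 instance; the argument-free call and
# boolean return value are assumptions made for illustration.
if robot.isContact():
    print('The robot is in contact with another object.')
else:
    print('No contact detected.')
```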