

What is RapidSense?

RapidSense is RTR’s premier perception toolkit. With it, users can extend RTR’s state-of-the-art motion planning solution into unstructured environments. This document walks a new user through setting up a RapidSense project and the hardware components in a cell.

Value proposition: Enable robots to understand and adjust to an unstructured, dynamically changing environment through the use of sensors. When RapidSense is paired with RapidPlan, unmodeled obstacles can be avoided and goal-directed motions computed at runtime, allowing the system to autonomously manage process variation and environmental changes.

...

RapidSense Platform

RapidSense is a platform upon which perception applications are built. At its core, RapidSense detects objects in the scene and updates the dynamic scene model (DSM), providing collision avoidance in an unstructured environment. RapidSense fuses information from multiple sensors and analyzes the resulting scene data, interacting directly with RapidPlan’s motion planning software to enable the robot(s) to react to the environment in real time.

...

System Requirements

Basic RapidSense hardware and software requirements are listed below:

Hardware Requirements:

1. Sensors

Supported Sensors

  • Lower-resolution/budget applications: Intel RealSense D455

  • High-resolution applications: Photoneo (coming soon)

  • Safety-applications: SICK safeVisionary2 (coming soon)

Note: The architecture currently supports two camera families: Photoneo for high-resolution applications that require tight tolerances (~2-10 mm) and can accommodate a higher price point, or Intel RealSense cameras as a budget-friendly solution where resolution requirements can be relaxed.

Camera Mounting

Occlusions are the parts of the environment that are not in view of the camera due to an obstruction. The goal in placing your cameras, therefore, is to minimize occlusions. What positions will give the cameras as complete a view as possible of the robot setup at all times?

Camera placement also depends on the robot application. For example, for a pick and place application, ask yourself, where will the robot move? Where will the occlusions be, and for which cameras? 

You should expect this process to be somewhat trial-and-error. 

To get the optimal performance out of the RapidSense system, keep the following guidelines in mind when mounting your cameras:

  1. Position the cameras in your workspace to maximize the likelihood that obstacles appear within the rated Field of View (FOV) of the selected sensor. Most camera providers supply CAD models of their sensors that include the FOV, which can be used to visualize and verify the cameras’ ability to see the volume of interest.

  2. The Intel RealSense specification is +/- 2% depth accuracy within 2 meters of an obstacle, so we recommend distributing cameras across the workspace so that obstacles fall within this distance.

  3. Space the cameras apart from one another and at different angles to provide different perspectives of the scene. 

  4. Mount the cameras securely and rigidly. If a camera vibrates as the robot moves, it will return noisy data, resulting in false positives for obstacles. Vibration can also shift cameras and invalidate the calibration data.

  5. Connect the camera cables with strain relief.

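Guidelines 1 and 2 above can be sanity-checked numerically before mounting hardware. The sketch below, a simple pinhole-frustum test, is illustrative only and not part of RapidSense; the function name, test points, and the D455 FOV figures are assumptions (verify against your sensor's datasheet).

```python
import math

def point_in_fov(point_xyz, h_fov_deg, v_fov_deg, max_range_m=2.0):
    """Return True if a point (camera frame: X right, Y down, Z forward,
    metres) lies inside the camera's rated FOV and the recommended
    depth range from guideline 2 (within 2 m of the camera)."""
    x, y, z = point_xyz
    if z <= 0 or z > max_range_m:      # behind the camera or too far away
        return False
    # Angular offset of the point from the optical axis, per axis
    h_angle = math.degrees(math.atan2(abs(x), z))
    v_angle = math.degrees(math.atan2(abs(y), z))
    return h_angle <= h_fov_deg / 2 and v_angle <= v_fov_deg / 2

# Assumed example: Intel RealSense D455 depth FOV is roughly 87 x 58
# degrees; check the datasheet for your unit.
print(point_in_fov((0.3, 0.1, 1.5), 87, 58))   # well inside the frustum
print(point_in_fov((0.3, 0.1, 2.5), 87, 58))   # beyond the 2 m guideline
```

Running this check for the corners of your volume of interest, from each candidate camera pose, is a quick first pass before verifying coverage with the CAD models.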

...

2. Calibration ArUco Tags

ArUco tags must be acquired in order to calibrate the sensors to the RTR system.

At least two (2) different ArUco tag patterns are required per application:

  1. One mounted on the robot end effector (the tag’s TCP location will need to be provided)

  2. One mounted static in the scene

Additional ArUco tags may be required depending on the system configuration.

See section "Calibration Tag Information" for further information.
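To illustrate why the tag's TCP offset must be provided: extrinsic calibration composes rigid transforms from the camera, through the end-effector tag, back to the robot base. The sketch below is a minimal, translation-only example with assumed values; the function names are illustrative and real calibration also involves rotations and many tag observations.

```python
def compose(a, b):
    """Multiply two 4x4 homogeneous transforms (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_rigid(t):
    """Invert a rigid 4x4 transform: R' = R^T, p' = -R^T p."""
    r = [[t[j][i] for j in range(3)] for i in range(3)]
    p = [-sum(r[i][j] * t[j][3] for j in range(3)) for i in range(3)]
    return [r[0] + [p[0]], r[1] + [p[1]], r[2] + [p[2]], [0, 0, 0, 1]]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

# Assumed example values (translation-only for clarity):
cam_T_tag  = translation(0.5, 0.0, 1.2)   # tag pose observed by the camera
tcp_T_tag  = translation(0.0, 0.0, 0.05)  # tag offset from the TCP (user-provided)
base_T_tcp = translation(0.4, 0.2, 0.8)   # robot forward kinematics

# camera_T_base = camera_T_tag * tag_T_tcp * tcp_T_base
cam_T_base = compose(compose(cam_T_tag, invert_rigid(tcp_T_tag)),
                     invert_rigid(base_T_tcp))
print([row[3] for row in cam_T_base[:3]])  # camera-frame position of the robot base
```

Without an accurate `tcp_T_tag`, the middle link in this chain is wrong and every camera extrinsic inherits the error, which is why the TCP location of the end-effector tag must be supplied.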

3.

...
