What is RapidSense?
Value proposition: RapidSense enables robots to understand and adapt to unstructured, dynamically changing environments through the use of sensors. When RapidSense is paired with RapidPlan, unmodeled obstacles can be avoided and goal-directed motions computed at runtime, allowing the system to autonomously manage process variation and environmental changes.
System Architecture
RapidSense is a platform upon which perception applications can be built. At its core, RapidSense detects objects in the scene and updates the dynamic scene model (DSM), providing collision avoidance in an unstructured environment. RapidSense fuses information from multiple sensors and interacts directly with the RapidPlan robot motion planning software, enabling the robot to react to its environment.
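To illustrate the fusion step conceptually, here is a minimal sketch of combining depth points from multiple calibrated cameras into a shared occupancy representation. This is not RapidSense's actual implementation; the function name, voxel representation, and transforms are illustrative assumptions only.

```python
import numpy as np

def fuse_point_clouds(clouds_in_camera, extrinsics, voxel_size=0.05):
    """Fuse per-camera point clouds (each Nx3, in that camera's frame)
    into one set of occupied voxel indices in a shared world frame.

    clouds_in_camera: list of (N_i, 3) arrays of XYZ points per camera.
    extrinsics: list of 4x4 camera-to-world transforms (from calibration).
    """
    occupied = set()
    for pts, T in zip(clouds_in_camera, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # Nx4 homogeneous
        world = (T @ homo.T).T[:, :3]                    # transform to world
        idx = np.floor(world / voxel_size).astype(int)   # quantize to voxels
        occupied.update(map(tuple, idx))
    return occupied

# Two cameras observe the same world point from different poses.
cam0 = np.array([[0.0, 0.0, 1.0]])   # 1 m in front of camera 0
T0 = np.eye(4)                        # camera 0 at the world origin
T1 = np.eye(4); T1[0, 3] = 0.5        # camera 1 offset 0.5 m along x
cam1 = np.array([[-0.5, 0.0, 1.0]])   # same world point, camera 1 frame
voxels = fuse_point_clouds([cam0, cam1], [T0, T1], voxel_size=0.1)
print(voxels)  # both observations land in the same occupied voxel
```

Because both observations quantize to the same voxel, downstream collision checking sees one consistent obstacle rather than two.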
System Requirements
Basic RapidSense hardware and software requirements are listed below:
Hardware Requirements:
1. Sensors
Supported Sensors
Lower-resolution/budget applications: Intel RealSense D455
High-resolution applications: Photoneo (coming soon)
Safety applications: SICK safeVisionary2 (coming soon)
Note: The architecture currently supports two camera options: Photoneo, for high-resolution applications that require tight tolerances (~2–10 mm) and justify the higher cost, or Intel RealSense, for a budget-friendly solution where resolution requirements can be relaxed.
Camera Mounting
Occlusions are the parts of the environment that are not in view of the camera due to an obstruction. The goal in placing your cameras, therefore, is to minimize occlusions. What positions will give the cameras as complete a view as possible of the robot setup at all times?
Camera placement also depends on the robot application. For example, for a pick and place application, ask yourself, where will the robot move? Where will the occlusions be, and for which cameras?
You should expect this process to be somewhat trial-and-error.
To get the optimal performance out of the RapidSense system, keep the following guidelines in mind when mounting your cameras:
Position the cameras in your workspace to maximize the likelihood that obstacles will appear within the rated Field of View (FOV) of the selected sensor. Most camera providers supply CAD models of their sensors that include the FOV; these can be used to visualize and verify the camera's ability to see the volume of interest.
The Intel RealSense D455 is specified for ±2% depth accuracy within 2 meters of an obstacle, so we recommend distributing cameras across the workspace so that obstacles stay within this distance.
Space the cameras apart from one another and at different angles to provide different perspectives of the scene.
Mount the cameras securely and rigidly. A camera that vibrates in response to robot motion will return noisy data, producing false-positive obstacles. Vibration can also shift the camera and invalidate its calibration.
Connect the camera cables with strain relievers.
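The FOV and range guidelines above can be sanity-checked numerically before mounting anything. The sketch below is an illustrative helper (not part of RapidSense) that approximates a camera's FOV as a cone, checks whether a world point falls inside it and within the 2 m accuracy range, and estimates the worst-case depth error from the ±2% specification. The default 87° FOV is an assumption based on the D455's horizontal field of view.

```python
import numpy as np

def check_coverage(point, cam_pos, cam_axis, fov_deg=87.0, max_range=2.0):
    """Return (visible, est_error_m) for a single camera.

    point, cam_pos: 3D world coordinates; cam_axis: camera optical axis.
    fov_deg: cone approximation of the sensor FOV (D455 horizontal ~87 deg).
    max_range: keep obstacles within the rated +/-2% accuracy range.
    """
    v = np.asarray(point, float) - np.asarray(cam_pos, float)
    dist = np.linalg.norm(v)
    axis = np.asarray(cam_axis, float)
    axis = axis / np.linalg.norm(axis)
    # Angle between the optical axis and the line of sight to the point.
    angle = np.degrees(np.arccos(np.clip(v @ axis / dist, -1.0, 1.0)))
    visible = bool(angle <= fov_deg / 2.0 and dist <= max_range)
    return visible, 0.02 * dist  # +/-2% of distance ~= worst-case depth error

# Obstacle at 0.5 m height, camera mounted 2 m up, looking straight down.
ok, err = check_coverage([0.0, 0.0, 0.5], cam_pos=[0.0, 0.0, 2.0],
                         cam_axis=[0.0, 0.0, -1.0])
print(ok, round(err * 1000))  # True 30  -> ~30 mm worst-case error at 1.5 m
```

Running this for each candidate mount position and each corner of the volume of interest gives a quick coverage estimate before committing to hardware placement.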
2. Calibration Aruco Tags
ArUco tags must be acquired in order to calibrate the sensors to the RTR system.
At least two (2) different ArUco tag patterns are required per application:
One mounted on the robot end effector (the TCP location of the tag must be provided)
One mounted static in the scene
Additional ArUco tags may be required depending on the system configuration.
See section "Calibration Tag Information" for further information.