Info |
---|
This page describes the calibration method for the RapidSense Beta release, which requires some manual work. A future RapidSense release will add a GUI and make the process much more user friendly. |
Overview
The current method uses the robot to calibrate the sensors. The robot must hold a calibration tag and move through a set of user-defined positions that keep the tag in each sensor's field of view during the calibration process. A second calibration tag must also be mounted stationary in the scene, within the sensors' field of view, so that the system can detect if the sensors have shifted out of calibration.
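The drift check described above amounts to comparing the currently detected pose of the static tag against the pose saved at calibration time. A minimal sketch of such a comparison, using stdlib-only Python; the pose format, function names, and thresholds are hypothetical illustrations, not the RapidSense implementation:

```python
import math

def pose_drift(saved, current):
    """Return (translation_m, rotation_rad) between two poses.

    Each pose is (tx, ty, tz, qw, qx, qy, qz): translation in meters
    plus a unit quaternion. This format is assumed for illustration.
    """
    dt = math.dist(saved[:3], current[:3])
    # Angle between two unit quaternions: theta = 2 * acos(|q1 . q2|)
    dot = abs(sum(a * b for a, b in zip(saved[3:], current[3:])))
    dq = 2.0 * math.acos(min(1.0, dot))
    return dt, dq

def needs_recalibration(saved, current, max_t=0.005, max_r=math.radians(1.0)):
    """Flag a sensor as shifted if the static tag appears to have moved
    more than 5 mm or 1 degree (example thresholds only)."""
    dt, dq = pose_drift(saved, current)
    return dt > max_t or dq > max_r
```

In practice the saved pose would come from the values stored in cal.json at calibration time, and the current pose from a fresh detection of the static Aruco tag.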
Calibration Requirements
...
Aruco tag mounted off of the robot faceplate
Static Aruco tag placed in the scene which is easily visible to the sensor
Calibration preset with an accurate TCP located at the center of the Aruco tag. More details are in the cal.json file linked below
cal.json file which has all the camera serial_numbers and their corresponding robot target positions for calibration, each of which places the Aruco tag on the robot in the sensor's field of view
...
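Based on the description above, cal.json pairs each camera's serial number with the robot targets used for its calibration, plus fields that the calibration process fills in afterward. The real schema is defined by calibration_generator.py and is not reproduced here; the sketch below is a hypothetical illustration, with all field names and values invented:

```json
{
  "cameras": [
    {
      "serial_number": "0123456789",
      "cal_targets": [
        {"joints": [0.0, -1.57, 1.57, 0.0, 1.57, 0.0]},
        {"joints": [0.3, -1.40, 1.45, 0.1, 1.50, 0.2]}
      ],
      "extrinsics": null,
      "static_marker_pose": null
    }
  ],
  "home_target": {"joints": [0.0, -1.57, 0.0, -1.57, 0.0, 0.0]}
}
```

The null fields stand in for values that a successful calibration run would write back into the file.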
To easily generate the cal.json file and save it to /etc/rapidsense/cal.json, there is a calibration_generator.py script which prompts the user for all the necessary information. After the cal.json file has been generated, and with rapidsense_app and the proxy running, run the calibration_service.py script and go to localhost:9000/calibration to calibrate all the cameras. This script pulls information from the cal.json file and moves the robot to the cal_targets sequentially, extrinsically calibrating each corresponding camera. The robot then returns to the home location and all the cameras save the pose of the static marker placed in the scene. After a successful calibration, all the values are automatically updated in the cal.json file.
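The generation step above can be sketched as a small helper that assembles the per-camera structure and writes it to the expected location. This is not the calibration_generator.py implementation; the function names and config fields are assumptions for illustration only:

```python
import json
from pathlib import Path

def build_cal_config(cameras, home_target):
    """Assemble a calibration config of the shape described above.

    `cameras` maps each camera serial number to the list of robot
    targets that put the faceplate Aruco tag in that camera's view.
    All field names are hypothetical; the real cal.json schema is
    defined by calibration_generator.py.
    """
    return {
        "cameras": [
            {
                "serial_number": serial,
                "cal_targets": targets,
                # Filled in by the calibration run, not by the generator.
                "extrinsics": None,
                "static_marker_pose": None,
            }
            for serial, targets in cameras.items()
        ],
        "home_target": home_target,
    }

def write_cal_config(config, path="/etc/rapidsense/cal.json"):
    """Serialize the config to the location the calibration service reads."""
    Path(path).write_text(json.dumps(config, indent=2))

# Example: one camera with a single calibration target.
cfg = build_cal_config(
    {"0123456789": [{"joints": [0.0, -1.57, 1.57, 0.0, 1.57, 0.0]}]},
    home_target={"joints": [0.0, -1.57, 0.0, -1.57, 0.0, 0.0]},
)
```

Separating the pure config-building step from the file write keeps the structure easy to inspect before committing it to /etc/rapidsense/cal.json.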
...