This page describes the calibration method for the Beta RapidSense release. The process requires some manual work; a future RapidSense release will include a GUI and be much more user friendly.
Overview
The current method uses the robot to extrinsically calibrate the cameras: an Aruco tag mounted on the robot faceplate is moved through a set of preset targets so that each camera can observe it.
Calibration Requirements
The requirements for calibration are:
Aruco tag mounted on the robot faceplate
Static Aruco tag placed in the scene, easily visible to the cameras
Calibration preset which places the TCP at the center of the Aruco tag. More details are in the cal.json file linked below.
cal.json file which contains all the camera serial_numbers and their corresponding robot target positions for calibration, each of which places the Aruco tag on the robot in that sensor's field of view (a sketch of the file follows this list)
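
For reference, here is a minimal sketch of what a cal.json file could contain, written as the Python dictionary that a script like calibration_generator.py might serialize. The field names (serial_number, cal_target, static_marker_pose, home_target) are illustrative assumptions, not the authoritative schema; the real format is whatever calibration_generator.py produces.

    import json

    # Hypothetical cal.json layout -- field names are assumptions, not the official schema.
    cal = {
        "home_target": "home",                    # assumed: target the robot returns to after calibration
        "cameras": [
            {
                "serial_number": "0123456789",    # camera serial number (see requirements above)
                "cal_target": "cal_target_cam_0", # robot target placing the faceplate Aruco tag in this camera's view
                "static_marker_pose": None,       # assumed: filled in after calibration with the static tag's pose
            },
        ],
    }

    # Save to the location used by the Beta release (requires write access to /etc/rapidsense).
    with open("/etc/rapidsense/cal.json", "w") as f:
        json.dump(cal, f, indent=2)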
Setup
To generate the cal.json file and save it to /etc/rapidsense/cal.json, run the calibration_generator.py script, which prompts the user for all the necessary information. Once the cal.json file has been generated, and with rapidsense_app and the proxy running, run the calibration_service.py script and go to localhost:9000/calibration to calibrate all the cameras. The service pulls information from the cal.json file and moves the robot to the cal_targets sequentially, extrinsically calibrating each corresponding camera. The robot then returns to the home location and all the cameras save the pose of the static marker placed in the scene. After a successful calibration, all the values are automatically updated in the cal.json file. A rough sketch of this sequence is shown below.
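
The loop below sketches the sequence just described; it is not the actual calibration_service.py implementation. The robot and camera helper calls (move_to_target, calibrate_extrinsics, get_static_marker_pose) are hypothetical placeholders standing in for whatever RapidSense and robot APIs the service actually uses.

    import json

    CAL_PATH = "/etc/rapidsense/cal.json"

    def run_calibration(robot, cameras):
        """Sketch of the calibration sequence; helper methods are hypothetical placeholders."""
        with open(CAL_PATH) as f:
            cal = json.load(f)

        # Visit each camera's calibration target and compute its extrinsics.
        for entry in cal["cameras"]:
            camera = cameras[entry["serial_number"]]
            robot.move_to_target(entry["cal_target"])            # move the faceplate Aruco tag into view
            entry["extrinsics"] = camera.calibrate_extrinsics()  # assumed helper: solve camera pose from the tag

        # Return home, then record the static scene marker from every camera.
        robot.move_to_target(cal["home_target"])
        for entry in cal["cameras"]:
            camera = cameras[entry["serial_number"]]
            entry["static_marker_pose"] = camera.get_static_marker_pose()

        # Persist the updated values, mirroring the automatic cal.json update described above.
        with open(CAL_PATH, "w") as f:
            json.dump(cal, f, indent=2)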
Architecture
The diagram below gives a quick overview of the architecture of the calibration sequence:
...