This page describes the calibration method as of 09.08.2023.

Overview

The current method uses the robot to calibrate the cameras. The requirements for calibration are:

  • An ArUco tag mounted on the robot

  • A static tag placed in the scene that is easily visible to the camera

  • A calibration preset with the TCP located at the center of the ArUco tag; more details are in the cal.json file linked below

  • A cal.json file that lists all the camera serial_numbers and their corresponding calibration targets (a possible structure is sketched after this list)

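The exact schema of cal.json is not documented on this page, so the snippet below is only a rough illustration of the kind of information it holds (camera serial numbers, their calibration targets, and the resulting poses), expressed as a Python dict. Every key name and value here is an assumption, not the real format.

```python
# Hypothetical cal.json contents, shown as a Python dict for illustration only.
# Key names, preset names, and pose representation are assumptions.
import json

example_cal = {
    "static_marker_id": 42,                    # assumed: ID of the static tag in the scene
    "cameras": [
        {
            "serial_number": "012345678901",   # assumed camera serial number
            "cal_target": "cal_target_cam_1",  # assumed robot preset for this camera
            "extrinsics": None,                # filled in after a successful calibration
            "static_marker_pose": None,        # pose of the static marker, saved at the end
        },
    ],
}

print(json.dumps(example_cal, indent=2))
```
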
Setup

To generate the cal.json file and save it to /etc/rapidsense/cal.json, use the calibration_generator.py script, which prompts the user for all the necessary information. After cal.json has been generated, run the calibration_service.py script and go to localhost:9000/calibration to calibrate all the cameras (this assumes rapidsense_app and the proxy are running). The script pulls information from cal.json and moves the robot to each of the cal_targets sequentially, extrinsically calibrating the corresponding cameras. The robot then returns to the home location, and each camera saves the pose of the static marker placed in the scene. After a successful calibration, all values are automatically updated in cal.json.
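
The prompts and fields used by calibration_generator.py are not described here, so the sketch below only illustrates the general idea (ask for serial numbers and targets, then write /etc/rapidsense/cal.json). The prompt wording, field names, and output structure are assumptions, not the actual script.

```python
# Minimal sketch of a generator-style script, assuming the cal.json structure above.
# Writing to /etc/rapidsense/ typically requires elevated permissions.
import json

def generate_cal(path="/etc/rapidsense/cal.json"):
    cameras = []
    while True:
        serial = input("Camera serial number (blank to finish): ").strip()
        if not serial:
            break
        target = input(f"Calibration target preset for {serial}: ").strip()
        cameras.append({"serial_number": serial, "cal_target": target})
    with open(path, "w") as f:
        json.dump({"cameras": cameras}, f, indent=2)

if __name__ == "__main__":
    generate_cal()
```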

Architecture

The diagram below gives a quick overview of the architecture of the calibration sequence:
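
As a rough complement to the diagram, the pseudocode below sketches the sequence described in the Setup section: move the robot to each cal_target, extrinsically calibrate the mapped cameras, return home, record the static marker pose, and write the results back to cal.json. All robot and camera function names are placeholders, not the real API.

```python
# Hedged sketch of the calibration sequence; robot and camera calls are placeholders.
import json

def run_calibration_sequence(cal_path="/etc/rapidsense/cal.json",
                             robot=None, cameras=None):
    with open(cal_path) as f:
        cal = json.load(f)

    # Visit each calibration target and extrinsically calibrate its camera.
    for entry in cal["cameras"]:
        robot.move_to_preset(entry["cal_target"])              # placeholder robot API
        cam = cameras[entry["serial_number"]]
        entry["extrinsics"] = cam.calibrate_extrinsics()       # placeholder camera API

    # Return home, then have every camera record the static marker pose.
    robot.move_to_preset("home")
    for entry in cal["cameras"]:
        cam = cameras[entry["serial_number"]]
        entry["static_marker_pose"] = cam.detect_marker_pose() # placeholder camera API

    # Write the updated values back to cal.json.
    with open(cal_path, "w") as f:
        json.dump(cal, f, indent=2)
```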

Further details on the individual components can be found in the pages below:
