Bartosz Tworek1, Alexandre Bernardino2 and José Santos-Victor2
1 Faculty of Electrical Engineering, Automatics, Computer Science and Electronics, AGH University of Science and Technology, Krakow, Poland, bartosz.tworek@gmail.com
2 Institute for Systems and Robotics, Instituto Superior Técnico, Lisboa, Portugal, {alex,jasv}@isr.ist.utl.pt
VISUAL HOMING OF ROBOT HEADS WITHOUT ABSOLUTE SENSORS

ABSTRACT

With the increasing miniaturization of robotic devices, many actuators lack absolute position sensing. In these cases the initial position of the joints is unknown at startup. In this paper we present a vision-based method for the automatic homing of serial kinematic structures composed of rotational joints and having a perspective camera on the end-effector. Examples of such systems are pan-tilt surveillance cameras and most kinds of humanoid robot heads. The method is based on producing motions with known amplitude in one of the joints of the kinematic chain to induce image motion in the camera. The analysis of the induced homography allows the computation of the unknown angle of the other joint. The method can be iterated over more axes to calibrate longer serial chains. It requires calibrated cameras, static objects while homing, and short links in the kinematic structure (or, equivalently, far away objects). We have implemented and validated the method on a small humanoid robot head.
1. INTRODUCTION

Many off-the-shelf DC motors are equipped with incremental encoders as the main feedback sensor and lack absolute position sensing. These types of motors are often used in the construction of robots and other automated devices. In such systems it is not possible to know the initial position of the joints at startup, and a procedure is necessary to set the robot to a known state, denoted as the home or zero position.

To address this problem, it is usual to equip the robot with limit switches, or homing switches, that detect when the axes are in particular angular positions. However, due to miniaturization constraints, it may not be possible to install such sensors in the robot. Another possibility is to drive the axes to a mechanical stop and monitor the motor current: when the current exceeds a certain value, the motor has reached the mechanical limit, whose angle can be known a priori. However, this procedure adds a source of physical stress to the system and may damage the mechanical components in the long term. Even when the above strategies are feasible, they require the careful placement of limit and home switches and a precise measurement of the mechanical limits. Additionally, when attaching the cameras to the end-effector, there are always some misalignments that may degrade the initial calibration procedure.

In this paper we propose a solution to this problem for certain types of kinematic structures having cameras at the last joint of the chain. We present a self-homing procedure for system start-up that does not require absolute sensors or home/limit switches, nor driving the system to mechanical hard stops. Instead, it performs small prospective motions in the robot joints and observes the image motion induced in the camera. It is assumed that the scene is static and that the axes almost intersect (or, equivalently, that objects are distant enough from the camera with respect to the length of the kinematic links). In these circumstances the induced image motion depends only on the given motion and the angle between the camera's optical axis and the rotation axis. By iterating this procedure over the several robot axes, it is therefore possible to automatically determine the wake-up state of the system.
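To make the procedure concrete, the following sketch outlines the homing step for one joint in Python. It is only an illustrative sketch under the stated assumptions (static scene, calibrated camera, almost intersecting axes); move_joint, grab_image and estimate_homography are hypothetical interfaces passed in as arguments, not the API of any particular robot or library.

import numpy as np

def self_home_step(move_joint, grab_image, estimate_homography, K, probe_deg=5.0):
    # Illustrative sketch only. move_joint, grab_image and estimate_homography
    # are hypothetical callables standing in for the robot and vision interfaces:
    #   move_joint(delta_deg)           - commands a joint motion of known amplitude
    #   grab_image()                    - returns the current camera image
    #   estimate_homography(img0, img1) - 3x3 homography estimated from point matches
    # K is the 3x3 intrinsic matrix of the (calibrated) camera.
    img0 = grab_image()
    move_joint(probe_deg)                    # small prospective motion of known amplitude
    img1 = grab_image()
    H = estimate_homography(img0, img1)      # image motion induced by the joint motion
    # With a static scene and almost intersecting axes the camera undergoes a pure
    # rotation, so H is proportional to K R K^-1; recover R up to scale.
    R = np.linalg.inv(K) @ H @ K
    R /= np.cbrt(np.linalg.det(R))           # normalize so that det(R) = 1
    return R                                 # the unknown joint angle is then extracted from R

How the unknown joint angle is extracted from the recovered rotation follows from the formulation in the next sections.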
There are very few works addressing the problem of vision-based homing. Sometimes visual homing denotes the process of driving a system to some known position in the environment (see for instance [9]). In our case we drive the robot to a known kinematic configuration rather than to a known position in space. To the best of our knowledge, the only work related to ours is [10], where the home configuration of a robot arm is achieved using images taken from external cameras. In our case the cameras are “inside” the robotic system being calibrated. The types of kinematic structures we consider are very common both in surveillance cameras and in robot heads. We implement the method and present results on a small humanoid robot head, calibrating its eyes and neck. Notwithstanding, the principle can be easily extended to other serial kinematic structures.

This paper is organized as follows. In Section 2 we formulate the problem in terms of the system kinematics and a homography estimation procedure. Then, in Section 3, we present a methodology to estimate the particular homographies arising in this problem. Section 4 is devoted to the presentation of experimental results of the application of the proposed method to the automatic homing of a small robot head. Finally, Section 5 presents the conclusions of the work and directions for future developments.
2. PROBLEM FORMULATION

In this section we formulate our problem in terms of a homography estimation problem. A homography is a transformation that explains the relationship between the points observed in an image before and after a rotation of the camera. From the homography it is often possible to recover the rotation angles. Therefore, we will analyze the homography arising from the prospective motions applied to the robot as a function of the initial, unknown, joint angles. Let us first consider the tilt-pan kinematic structure presented in Fig. 1: a camera is attached to a pan unit, and the pan unit is in turn attached to a tilt unit. A similar analysis can be made for other kinematic structures.
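As a minimal numerical illustration of this relation (the intrinsic parameters and the angle below are arbitrary values, not taken from the experiments), a pure camera rotation R induces on the image points the homography H = K R K^-1, where K is the intrinsic matrix of the calibrated camera; the rotation, and hence its angle, can be recovered from an estimate of H:

import numpy as np

# Arbitrary pinhole intrinsics (focal lengths and principal point in pixels).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def rot_y(theta):
    # Rotation by theta (rad) about the camera y axis, i.e. a pan-like motion.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

theta_true = np.deg2rad(7.0)                  # a prospective motion of known amplitude
H = K @ rot_y(theta_true) @ np.linalg.inv(K)  # homography induced by the rotation

p_before = np.array([100.0, 150.0, 1.0])      # a pixel in homogeneous coordinates
p_after = H @ p_before                        # the same scene point after the rotation,
p_after /= p_after[2]                         # since p' ~ H p

R = np.linalg.inv(K) @ H @ K                  # recover the rotation from the homography
theta_recovered = np.arctan2(R[0, 2], R[0, 0])
print(np.rad2deg(theta_recovered))            # prints approximately 7.0

In the homing problem, the rotation observed by the camera combines the known prospective motion with the unknown joint angles of the chain, which is what makes the latter computable from the induced homography.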