MONTE CARLO LOCALIZATION BASED ON GYRODOMETRY AND LINE-DETECTION

Instituto Superior Técnico
{jmessias, jsantos, jestilita, pal}@isr.ist.utl.pt

ABSTRACT

This paper introduces an application of the probabilistic Monte Carlo Localization algorithm to environments which are structured, but whose large dimensions imply that relevant features for the update step are not always available, and whose geometric symmetries lead to ambiguities in determining the robot's location. The prediction step adds rate-gyro information to the traditional odometry information, so as to handle the ambiguity problem, while the update step measures the distance to the closest ground lines (whose locations are known a priori), incorporating a new “points-on-lines” detection algorithm to provide reliable sensor data. Results of applying the method to real robots from the RoboCup Middle Size League are presented. The robots use a novel dioptric system, based on a camera facing down and a fish-eye lens.
I. INTRODUCTION

Certain tasks given to autonomous robots require that an estimate of the robot's posture be available during execution. The process of obtaining such an estimate, assuming that only a map of the robot's environment is available, is defined as self-localization. In the RoboCup Middle Size League (MSL), self-localization is necessary at various levels, such as team coordination and behaviour execution. Since the visual aids available to these robots are progressively being removed (the goals, for example, are now identical for both teams and completely white), it is necessary to develop a system that allows for reliable self-localization, using for that purpose no explicit references other than those present in the current RoboCup regulations (such as the field markings). An adequate solution must be computationally efficient, and should be able to deal both with the kidnapping problem and with the existence of ambiguous positions due to field symmetry.

Existing solutions to this problem rely on the basic Bayes filter algorithm. Some of these Bayesian algorithms, such as the Kalman filter, approximate the probability distribution associated with the posture of the robot over the space of all possible postures (also called the belief) as a Gaussian, and propagate its first two moments to represent it analytically. This approximation, however, is inadequate in some practical applications, where the system's nonlinearities make this distribution clearly non-Gaussian. The Extended Kalman Filter tries to deal with this problem by linearizing the system's model. In contrast, Monte Carlo localization (MCL) is a particle-filter-based implementation that avoids maintaining an analytical description of the belief, and instead represents it by a set of samples [1]. This accommodates the system's nonlinearities, and the arbitrary shapes of the belief function that are consequently generated. It can be shown that, given a sufficiently large number of samples, the algorithm always converges to the true posture [2]. A large number of particles, however, can make the algorithm computationally intractable. The main problem then lies in the trade-off between the number of particles and the overall speed of the self-localization algorithm.
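As a minimal illustration of this sample-based representation (a sketch in Python, not the ISocRob implementation; the Particle structure, the function name and the field dimensions x_max and y_max are hypothetical), a possibly multi-modal belief over postures (x, y, theta) can be stored as a set of weighted samples:

```python
import math
import random
from dataclasses import dataclass


@dataclass
class Particle:
    x: float      # position along the field length [m]
    y: float      # position along the field width [m]
    theta: float  # heading [rad]
    w: float      # importance weight


def uniform_particles(n, x_max, y_max):
    """Spread n equally weighted particles over the whole field, as one
    would do when the posture is completely unknown (e.g. after kidnapping)."""
    return [Particle(x=random.uniform(0.0, x_max),
                     y=random.uniform(0.0, y_max),
                     theta=random.uniform(-math.pi, math.pi),
                     w=1.0 / n)
            for _ in range(n)]
```

Since every particle must be propagated and weighted at each iteration, the cost of the filter grows linearly with the number of such samples, which is precisely the trade-off mentioned above.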
In this paper, we present a Monte Carlo based approach to self-localization that addresses these problems, inspired by previous MCL approaches for RoboCup [3][5] and combined with gyrodometry [7] and line detection. This version of the algorithm was implemented on the omnidirectional robots of the ISR/IST RoboCup MSL team, ISocRob [12]. Until now, self-localization of the omnidirectional robots was based on a Kalman filter, using odometry information and a triangulation technique on the goals, which could be detected by their colours (yellow and blue). Since the new rules came into force, the robots are no longer able to distinguish the opponent's goal from their own, and will rarely be able to detect both goals simultaneously. The current approach is based on all the information that can be extracted at any given location, and a gyroscope is used to resolve ambiguous localization issues, as sketched below. This implementation also presents a novel approach to line detection based on a dioptric system using a fish-eye lens.
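For reference, the following sketch illustrates the basic gyrodometry rule proposed in [7]: heading increments from odometry are used by default, but whenever they disagree with the rate-gyro by more than a small threshold (as happens during wheel slippage or collisions), the gyro reading is used instead. The function name and the threshold value are illustrative assumptions, not taken from this paper.

```python
import math


def fused_heading_increment(d_theta_odo: float,
                            d_theta_gyro: float,
                            threshold: float = math.radians(0.3)) -> float:
    """Gyrodometry-style fusion of one sampling interval's heading increments.

    d_theta_odo  : heading change estimated from wheel odometry [rad]
    d_theta_gyro : heading change integrated from the rate-gyro [rad]
    When the two sources disagree by more than `threshold`, odometry is
    assumed to be corrupted (slippage, bumps) and the gyro value is used.
    """
    if abs(d_theta_gyro - d_theta_odo) > threshold:
        return d_theta_gyro
    return d_theta_odo
```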
II. THE MONTE CARLO LOCALIZATION ALGORITHM

Monte Carlo localization follows the basic recursive Markov localization algorithm, which, in turn, derives from the Bayes filter algorithm. Consider an autonomous robot at time t with posture l_t, taking measurements z_t from its sensors and accepting controls u_t at each instant. In this algorithm, the posterior probability distribution p(l_t | z_1:t, u_1:t), represented by the belief bel(l_t) over the space of all possible postures, is obtained recursively from bel(l_{t-1}) in a two-step process. In the first step, also called the prediction step, the controls u_t applied to the robot between instants t-1 and t are taken into account, and through the robot's motion equations (described in the following sections) an estimate of the posterior, the predicted belief, is generated. In the update step, the measurements z_t are checked against the predictions: the predicted belief is multiplied by the probability of obtaining that measurement at l_t, as given by the observation model p(z_t | l_t), to obtain bel(l_t). It is evident that for this algorithm to work some mechanism must exist to describe bel(l_t), which may be irregular or multi-modal, at each iteration. This is accomplished by a set of samples (the so-called particles), taken