A new three object triangulation algorithm based on the power center of three circles
INTELSIG Laboratory, Montefiore Institute, University of Liège, Belgium
Abstract
Positioning is a fundamental issue in mobile robot applications that can be achieved in multiple ways. Among these methods, triangulation is a proven technique. Because it has existed for a long time, many variants of triangulation have been proposed. Which variant is most appropriate depends on the application, because some methods ignore the beacon ordering while others have blind spots. Some methods are reliable, but at the price of increased complexity or the study of special cases. In this paper, we present a simple and new three object triangulation algorithm. Our algorithm works in the whole plane (except when the beacons and the robot are concyclic or collinear), and for any beacon ordering. Moreover, it does not require the study of special cases and has a strong geometrical meaning. Benchmarks show that our algorithm is faster than existing comparable algorithms. Finally, a quality measure of the triangulation result is intrinsically derived for the whole plane; it can be used to identify the pathological cases, or as a validation gate in Kalman filters.
1 Introduction
Positioning is a fundamental issue in mobile robot applications. Indeed, in most cases, a mobile robot that moves in its environment has to position itself before it can execute its actions correctly. Therefore the robot has to be equipped with hardware and software capable of providing sensory feedback related to its environment [16]. Positioning methods can be classified into two main groups [7]: (1) relative positioning, and (2) global or absolute positioning. The first group (also called dead reckoning) achieves positioning by odometry, which consists in counting the number of wheel revolutions to compute the offset relative to a known position. Odometry is very accurate for small offsets but is not sufficient on its own, because of the unbounded accumulation of errors over time (due to wheel slippage, imprecision in the wheel circumference, or the wheel inter-axis distance) [7]. Furthermore, odometry needs an initial position and fails when the robot is "woken up" (after a forced reset, for example) or is raised and dropped somewhere, since the reference position is then unknown or modified. A global positioning system is thus required to recalibrate the robot position periodically.
Relative and global positioning are complementary [16, 9] and are typically merged together by using a Kalman filter [14, 6]. In many cases, global positioning is ensured by beacon-based triangulation or trilateration. Triangulation is the process of determining the location of a point by measuring angles to it from known points, while trilateration determines absolute or relative locations of points by measuring distances. Because of the large variety of angle measurement systems, triangulation has emerged as a widely used, robust, accurate, and flexible technique [10]. Another advantage of triangulation over trilateration is that the robot can compute its orientation (or heading) in addition to its position, so that the complete pose of the robot can be found. The process of determining the robot pose from three beacon angle measurements is termed Three Object Triangulation [1]. Fig. 1 illustrates the process of triangulation. In the remainder of this paper, we concentrate on three object triangulation methods.
1.1 Related work
Various triangulation algorithms may be found in [16, 15, 5, 1, 4, 10, 11, 14, 13, 6, 2, 3]. These algorithms can be classified into four groups: (1) Geometric Triangulation, (2) Geometric Circle Intersection, (3) Iterative methods (Iterative Search, Newton-Raphson, etc.), and (4) Multiple Beacons Triangulation. The first group could be named Trigonometric Triangulation because it makes intensive use of trigonometric computations. The second group computes the parameters (radius and center) of two of the three circles passing through the beacons and the robot, then computes the intersection between these two circles. The first and second groups are typically used as a solution of the three object triangulation problem. The third group linearizes the trigonometric relations to converge to the robot position after some iterations, from a starting point (usually the last known robot position). The fourth group addresses the more general problem of finding the robot pose from multiple angle measurements (usually corrupted by errors). It appears that the second group (Geometric Circle Intersection) is the most popular for solving the three object triangulation problem [13, 3].
These algorithms all have advantages and drawbacks, and the chosen method has to take the requirements of a particular application into account, which leads to some compromises. For example, if the setup contains only three beacons or if the robot platform has low computing power, methods of the first and second groups are the best candidates. Methods of the third and fourth groups are appropriate if the application must handle multiple beacons and can accommodate a higher computational cost. The main drawback of the third group is the convergence issue (existence or uniqueness of the solution) [1]. The main drawback of the fourth group is the computational cost [16, 15].
The drawbacks of the first and second groups usually lie in a lack of precision regarding the following points: (1) the beacon ordering needed to get the correct solution, (2) the consistency of the method when the robot is located outside the triangle defined by the three beacons, (3) the strategy to follow in some particular geometrical cases (typically mathematical indeterminacies when solving trigonometric equations with an argument equal to 0 or π, divisions by 0, etc.), and (4) the quality measure of the computed position. Simple methods of the first and second groups usually fail to address all of these issues. To work in the whole plane and for any beacon ordering (for instance [11]), they have to consider a set of special geometrical cases, resulting in a lack of clarity in the method. Finally, none of these algorithms gives a realistic quality measure of the computed position.
1.2 Overview
Our paper presents a new three object triangulation algorithm that works in the whole plane (except when the beacons and the robot are concyclic or collinear), and for any beacon ordering. Moreover, it uses a minimal number of trigonometric computations and, finally, it leads to a natural and quantitative quality measure of the computed position.
The paper is organized as follows. Our triangulation algorithm is described in Section 2. Section 3 presents simulation results. Then, we conclude the paper in Section 4.
2 Description of a new three object triangulation algorithm
Our algorithm belongs to the second group, that is, Geometric Circle Intersection. It first computes the parameters of the three circles passing through the robot and the three pairs of beacons. Then it computes the intersection of these three circles, using all three circles, not only two of them.
Our algorithm relies on two assumptions: (1) the beacons are distinguishable (a measured angle can be associated with a given beacon), and (2) the angle measurements from the beacons are taken separately, relative to some reference angle θ, usually the robot heading (see Fig. 1). Note that the second hypothesis simply states that angles are given by a rotating angular sensor (goniometer). Such sensors are common in mobile robot positioning using triangulation [8, 7, 5, 2, 3, 17]. By convention, in the following, we consider that angles are measured counterclockwise (CCW), like angles on the trigonometric circle. Changing the rotation direction to clockwise (CW) requires only minimal changes to our algorithm.
2.1 First part of the algorithm: the circle parameters
In a first step, we have to find the locus of points R that see two fixed points B_{1} and B_{2} with a constant angle α_{12}, in the 2D plane. It is a well-known result that this locus is an arc of the circle passing through B_{1} and B_{2}, whose radius depends on the distance between B_{1} and B_{2}, and on α_{12} (Proposition 21 of Book III of Euclid's Elements). More precisely, this locus is composed of two arcs of circle, which are the reflection of each other through the line joining B_{1} and B_{2} (see the left drawing of Fig. 2).
A robot that measures an angle α_{12} between two beacons without any further care could stand on either of these two arcs. This would be the case if the beacons were not distinguishable or if the angular sensor were not capable of measuring angles larger than π. To remove this ambiguity, we impose that, as shown in the right-hand drawing of Fig. 2, the measured angle between two beacons B_{1} and B_{2}, denoted α_{12}, is always computed as α_{12} = α_{2} − α_{1} (this choice is natural for a CCW rotating sensor). This is consistent with our measurement considerations and it removes the ambiguity about the locus. The locus is now a single circle passing through R, B_{1}, and B_{2}. But it requires that the beacons are indexed and that the robot is capable of identifying the index of any beacon.
The circle equation may be derived by using the complex representation of 2D points (Argand diagram). The idea consists in expressing that the complex argument of (B_{2} − R) is equal to the complex argument of (B_{1} − R), plus α, or:

arg{(B_{2} − R)/(B_{1} − R)} = α ⇒ arg{(B_{2} − R)(B_{1} − R)^{*}} = α

where (.)^{*} denotes the complex conjugate.
Then replacing R by (x + iy), B_{1} by (x_{1} + iy_{1}), and B_{2} by (x_{2} + iy_{2}), we have that

arg{(x_{2} + iy_{2} − x − iy)(x_{1} − iy_{1} − x + iy)e^{−iα}} = 0
⇒ Im{[(x_{2} − x) + i(y_{2} − y)][(x_{1} − x) + i(y − y_{1})][cosα − i sinα]} = 0
⇒ −sinα(x_{2} − x)(x_{1} − x) + sinα(y_{2} − y)(y − y_{1}) + cosα(x_{2} − x)(y − y_{1}) + cosα(y_{2} − y)(x_{1} − x) = 0
where i = √(−1). After many simplifications, we find the locus

(x − x_{12})^{2} + (y − y_{12})^{2} = R^{2}_{12}

which is a circle whose center {x_{12}, y_{12}} is located at

x_{12} = ((x_{1} + x_{2}) + cotα(y_{1} − y_{2}))/(2), y_{12} = ((y_{1} + y_{2}) − cotα(x_{1} − x_{2}))/(2)

and whose squared radius equals

R^{2}_{12} = ((x_{1} − x_{2})^{2} + (y_{1} − y_{2})^{2})/(4 sin^{2}α)
These equations may also be found in [13]. Replacing α by π + α in the above equations yields the same circle parameters (Fig. 2, right), which is consistent with our measurement considerations. For an angular sensor turning in the CW direction, one has to change the sign of cotα in the center coordinate equations. In the following, we use these notations:

- B_{i} is the beacon #i, with coordinates {x_{i}, y_{i}},
- R is the robot position, with coordinates {x_{R}, y_{R}},
- α_{i} is the measured angle for beacon B_{i} with respect to the robot orientation,
- α_{ij} = α_{j} − α_{i} is the bearing angle between beacons B_{i} and B_{j},
- T_{ij} = cot(α_{ij}),
- C_{ij} is the circle passing through B_{i}, B_{j}, and R,
- c_{ij} is the center of C_{ij}, with coordinates {x_{ij}, y_{ij}}:

x_{ij} = ((x_{i} + x_{j}) + T_{ij}(y_{i} − y_{j}))/(2), y_{ij} = ((y_{i} + y_{j}) − T_{ij}(x_{i} − x_{j}))/(2)    (Equ. 1)

- R_{ij} is the radius of C_{ij}, derived from:

R^{2}_{ij} = ((x_{i} − x_{j})^{2} + (y_{i} − y_{j})^{2})/(4 sin^{2}α_{ij})    (Equ. 2)

All the previous quantities are valid for i = 1, 2, 3 and j = (i mod 3) + 1. The special cases (α_{ij} = 0 or α_{ij} = π) are discussed later.
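For illustration (this code is ours, not part of the original method description), Equ. 1 and 2 transcribe directly into a short function:

```python
import math

def circle_through(beacon_i, beacon_j, alpha_ij):
    """Center {x_ij, y_ij} (Equ. 1) and squared radius R^2_ij (Equ. 2) of the
    circle containing the two beacons and every robot position that sees
    them under the bearing angle alpha_ij."""
    (xi, yi), (xj, yj) = beacon_i, beacon_j
    t_ij = 1.0 / math.tan(alpha_ij)  # T_ij = cot(alpha_ij)
    x_c = ((xi + xj) + t_ij * (yi - yj)) / 2.0
    y_c = ((yi + yj) - t_ij * (xi - xj)) / 2.0
    r2 = ((xi - xj) ** 2 + (yi - yj) ** 2) / (4.0 * math.sin(alpha_ij) ** 2)
    return (x_c, y_c), r2
```

For B_{1} = (1, 0), B_{2} = (0, 1) and α_{12} = π/2, for example, the function returns the circle of center (0.5, 0.5) and squared radius 0.5, which indeed passes through both beacons and the origin.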
2.2 Second part of the algorithm: the circles intersection
From the previous section, each bearing angle α_{ij} between beacons B_{i} and B_{j} constrains the robot to be on a circle C_{ij} passing through B_{i}, B_{j}, and R (Fig. 3). The parameters of the circles are given by Equ. 1 and 2. Note that we are in the case of a trilateration problem with virtual beacons (the circle centers) and virtual range measurements (the circle radii). Common methods use two of the three circles to compute the intersections (when they exist), one of which is the robot position, the other being the common beacon of the two circles. This requires solving a quadratic system and choosing the correct solution for the robot position [13]. Moreover, the choice of the two circles is arbitrary and usually static, whereas this choice should depend on the measured angles and on the beacon configuration.
Hereafter, we propose a novel method to compute this intersection, using all three circles and reducing the problem to a linear one. To understand this simple method, we first have to recall the notion of the power center (or radical center) of three circles. The power center of three circles is the unique point of equal power with respect to these circles [12]. The power of a point p relative to a circle C is defined as

P_{C, p} = (x − x_{c})^{2} + (y − y_{c})^{2} − R^{2}    (Equ. 3)

where {x, y} are the coordinates of point p, {x_{c}, y_{c}} are the circle center coordinates, and R is the circle radius. The power of a point is null on the circle, negative inside the circle, and positive outside the circle. It defines a sort of distance of a point relative to a circle. The power line (or radical axis) of two circles is the locus of points having the same power with respect to both circles [12]. It is perpendicular to the line joining the circle centers and passes through the circle intersections, when they exist. When considering three circles, the three power lines defined by the three pairs of circles are concurrent in a unique point: the power center [12]. Fig. 4 shows the power center of three circles for various configurations. The power center is always defined, except when at least two of the three circle centers are equal, or when the circle centers are collinear (parallel power lines).
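The power of a point (Equ. 3) is a one-line computation; the sketch below (ours, with an illustrative interface) makes the sign convention explicit:

```python
def power(point, center, radius):
    """Power of a point w.r.t. a circle (Equ. 3): negative inside the
    circle, null on the circle, positive outside the circle."""
    (x, y), (xc, yc) = point, center
    return (x - xc) ** 2 + (y - yc) ** 2 - radius ** 2
```

For the unit circle centered at the origin, the power is −1 at the center, 0 on the circle, and 3 at distance 2 from the center.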
The third case of Fig. 4 (right-hand drawing) is remarkable, as it perfectly matches our triangulation problem (Fig. 3). Indeed, the power center of three concurrent circles corresponds to their unique intersection. In our case, we are sure that the circles are concurrent since α_{31} = −(α_{12} + α_{23}) by construction (only two of the three bearing angles are independent). This has the advantage that the intersection may be computed by intersecting the power lines, which is a linear problem. The power line of two circles is obtained by equating the powers of a point relative to each circle (Equ. 3). In our problem, the power line of C_{12} and C_{23} is given by:

(x − x_{12})^{2} + (y − y_{12})^{2} − R^{2}_{12} = (x − x_{23})^{2} + (y − y_{23})^{2} − R^{2}_{23}
⇒ x (x_{12} − x_{23}) + y (y_{12} − y_{23}) = (x^{2}_{12} + y^{2}_{12} − R^{2}_{12})/(2) − (x^{2}_{23} + y^{2}_{23} − R^{2}_{23})/(2)
⇒ x (x_{12} − x_{23}) + y (y_{12} − y_{23}) = k_{12} − k_{23}

where we introduce a new quantity k_{ij} which only depends on the parameters of C_{ij} (k_{ij} is the power of the origin relative to C_{ij}, divided by two):

k_{ij} = (x^{2}_{ij} + y^{2}_{ij} − R^{2}_{ij})/(2)    (Equ. 4)
In our triangulation problem, we have to intersect the three power lines, that is, to solve this linear system:

x (x_{12} − x_{23}) + y (y_{12} − y_{23}) = k_{12} − k_{23}
x (x_{23} − x_{31}) + y (y_{23} − y_{31}) = k_{23} − k_{31}
x (x_{31} − x_{12}) + y (y_{31} − y_{12}) = k_{31} − k_{12}

As can be seen, any of these equations may be obtained by adding the two others, which is a way to prove that the three power lines concur in a unique point: the power center. The coordinates of the power center, that is the robot position, are given by:

x_{R} = ((k_{12} − k_{23})(y_{23} − y_{31}) − (k_{23} − k_{31})(y_{12} − y_{23}))/(D), y_{R} = ((k_{23} − k_{31})(x_{12} − x_{23}) − (k_{12} − k_{23})(x_{23} − x_{31}))/(D)    (Equ. 5)

The denominator D, common to x_{R} and y_{R}, is equal to

D = (x_{12} − x_{23})(y_{23} − y_{31}) − (y_{12} − y_{23})(x_{23} − x_{31})    (Equ. 6)

which is the signed area between the circle centers, multiplied by 2. This result confirms that the power center exists, that is D ≠ 0, if the circle centers are not collinear. The special case (D = 0) is discussed later.
2.3 First (naive) version of the algorithm
A first, but naive, version of our algorithm is described by Algorithm 1.

- compute the three cot(.): T_{ij} = cot(α_{ij}),
- compute the six circle center coordinates {x_{ij}, y_{ij}} by Equ. 1,
- compute the three squared radii R^{2}_{ij} by Equ. 2,
- compute the three parameters k_{ij} by Equ. 4,
- compute the denominator D by Equ. 6 (return with an error if D = 0),
- compute the robot position {x_{R}, y_{R}} by Equ. 5 and return.

Algorithm 1 First version of the algorithm.
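A possible transcription of Algorithm 1 (our sketch; names are illustrative) follows the steps above, using Equ. 1, 2, 4, 5, and 6:

```python
import math

def triangulation_v1(beacons, angles):
    """Sketch of Algorithm 1. beacons = [(x1, y1), (x2, y2), (x3, y3)],
    angles = [a1, a2, a3] measured CCW from the robot heading."""
    b1, b2, b3 = beacons
    a1, a2, a3 = angles
    # the three bearing angles; note alpha_31 = -(alpha_12 + alpha_23)
    pairs = [(b1, b2, a2 - a1), (b2, b3, a3 - a2), (b3, b1, a1 - a3)]
    centers, ks = [], []
    for (xi, yi), (xj, yj), alpha in pairs:
        t = 1.0 / math.tan(alpha)                       # T_ij = cot(alpha_ij)
        xc = ((xi + xj) + t * (yi - yj)) / 2.0          # Equ. 1
        yc = ((yi + yj) - t * (xi - xj)) / 2.0
        r2 = ((xi - xj) ** 2 + (yi - yj) ** 2) / (4.0 * math.sin(alpha) ** 2)  # Equ. 2
        centers.append((xc, yc))
        ks.append((xc ** 2 + yc ** 2 - r2) / 2.0)       # Equ. 4
    (x12, y12), (x23, y23), (x31, y31) = centers
    k12, k23, k31 = ks
    d = (x12 - x23) * (y23 - y31) - (y12 - y23) * (x23 - x31)  # Equ. 6
    if d == 0.0:
        raise ValueError("beacons and robot are concyclic or collinear")
    x_r = ((k12 - k23) * (y23 - y31) - (k23 - k31) * (y12 - y23)) / d  # Equ. 5
    y_r = ((k23 - k31) * (x12 - x23) - (k12 - k23) * (x23 - x31)) / d
    return x_r, y_r
```

For instance, with beacons at (0, 0), (4, 0), (2, 3), a robot at (2, 1) with heading 0 measures α_{i} = atan2(y_{i} − 1, x_{i} − 2), and the function recovers the position (2, 1).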
This method is correct, but it can be simplified with some considerations about the involved equations. First, note that the squared radii R^{2}_{ij} only appear in the parameters k_{ij}. If we replace the expression of R^{2}_{ij} (Equ. 2) in the expression of k_{ij} (Equ. 4), we find, after many simplifications, that

k_{ij} = (x_{i}x_{j} + y_{i}y_{j} + T_{ij}(x_{j}y_{i} − x_{i}y_{j}))/(2)    (Equ. 7)

which is much simpler than Equ. 2 and 4 (no squared terms anymore). In addition, the 1/2 factor involved in the circle center coordinates (Equ. 1) as well as in the parameters k_{ij} (Equ. 7) cancels out in the robot position coordinates (Equ. 5). This factor can thus be omitted. From now on, we use these modified circle center coordinates {x’_{ij}, y’_{ij}}:

x’_{ij} = (x_{i} + x_{j}) + T_{ij}(y_{i} − y_{j}), y’_{ij} = (y_{i} + y_{j}) − T_{ij}(x_{i} − x_{j})    (Equ. 8)

and modified parameters k’_{ij}:

k’_{ij} = x_{i}x_{j} + y_{i}y_{j} + T_{ij}(x_{j}y_{i} − x_{i}y_{j})    (Equ. 9)
The algorithm can be rewritten as follows:

- compute the three cot(.): T_{ij} = cot(α_{ij}),
- compute the modified circle center coordinates {x’_{ij}, y’_{ij}} by Equ. 8,
- compute the modified parameters k’_{ij} by Equ. 9,
- compute the denominator D by Equ. 6 (return with an error if D = 0),
- compute the robot position {x_{R}, y_{R}} by Equ. 5 and return.

Algorithm 2 Second version of the algorithm.
2.4 Final version of the algorithm
The most important simplification consists in translating the world coordinate frame to one of the beacons, that is, solving the problem relative to one beacon and then adding the beacon coordinates to the computed robot position. In the following, we arbitrarily choose B_{2} as the origin (B_{2} = {0, 0}). The other beacon coordinates become B_{1} = {x_{1} − x_{2}, y_{1} − y_{2}} = {x’_{1}, y’_{1}} and B_{3} = {x_{3} − x_{2}, y_{3} − y_{2}} = {x’_{3}, y’_{3}}. Since x’_{2} = 0 and y’_{2} = 0, we have k’_{12} = 0 and k’_{23} = 0. Also, we can compute the value of one cot(.) from the two others, because the three bearing angles are not independent (α_{31} = −(α_{12} + α_{23})):

T_{31} = (1 − T_{12}T_{23})/(T_{12} + T_{23})
The final algorithm is given in Algorithm 3.

Given the three beacon coordinates {x_{i}, y_{i}} and the three angle measurements α_{i}:

- compute the modified beacon coordinates:
  x’_{1} = x_{1} − x_{2}, y’_{1} = y_{1} − y_{2}, x’_{3} = x_{3} − x_{2}, y’_{3} = y_{3} − y_{2}
- compute the three cot(.):
  T_{12} = cot(α_{2} − α_{1}), T_{23} = cot(α_{3} − α_{2}), T_{31} = (1 − T_{12}T_{23})/(T_{12} + T_{23})
- compute the modified circle center coordinates {x’_{ij}, y’_{ij}}:
  x’_{12} = x’_{1} + T_{12} y’_{1}, y’_{12} = y’_{1} − T_{12} x’_{1}
  x’_{23} = x’_{3} − T_{23} y’_{3}, y’_{23} = y’_{3} + T_{23} x’_{3}
  x’_{31} = (x’_{3} + x’_{1}) + T_{31} (y’_{3} − y’_{1}), y’_{31} = (y’_{3} + y’_{1}) − T_{31} (x’_{3} − x’_{1})
- compute k’_{31}:
  k’_{31} = x’_{1}x’_{3} + y’_{1}y’_{3} + T_{31}(x’_{1}y’_{3} − x’_{3}y’_{1})
- compute the denominator D (if D = 0, return with an error):
  D = (x’_{12} − x’_{23})(y’_{23} − y’_{31}) − (y’_{12} − y’_{23})(x’_{23} − x’_{31})
- compute the robot position {x_{R}, y_{R}} and return:
  x_{R} = x_{2} + (k’_{31}(y’_{12} − y’_{23}))/(D), y_{R} = y_{2} + (k’_{31}(x’_{23} − x’_{12}))/(D)

Algorithm 3 Final version of the algorithm.
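Algorithm 3 translates almost line for line into code. The following sketch is ours (the naming is illustrative; the clamping of the cot(.) values anticipates the discussion in Section 2.5), and it also returns D so that it can serve as a quality measure:

```python
import math

def triangulate_final(beacons, angles, cot_limit=1e8):
    """Sketch of Algorithm 3. beacons = [(x1, y1), (x2, y2), (x3, y3)],
    angles = [a1, a2, a3] measured CCW from the robot heading.
    Returns ((x_R, y_R), D)."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    a1, a2, a3 = angles

    def cot(a):
        # clamped cot(.) to avoid Inf/NaN near alpha_ij = 0 or pi
        t = math.tan(a)
        if t == 0.0:
            return cot_limit
        return max(-cot_limit, min(cot_limit, 1.0 / t))

    # modified beacon coordinates (B_2 taken as the origin)
    x1p, y1p = x1 - x2, y1 - y2
    x3p, y3p = x3 - x2, y3 - y2
    # the three cot(.); T_31 follows from alpha_31 = -(alpha_12 + alpha_23)
    t12, t23 = cot(a2 - a1), cot(a3 - a2)
    t31 = (1.0 - t12 * t23) / (t12 + t23)
    # modified circle center coordinates
    x12p, y12p = x1p + t12 * y1p, y1p - t12 * x1p
    x23p, y23p = x3p - t23 * y3p, y3p + t23 * x3p
    x31p = (x3p + x1p) + t31 * (y3p - y1p)
    y31p = (y3p + y1p) - t31 * (x3p - x1p)
    # k'_31 (k'_12 = k'_23 = 0 since B_2 is the origin)
    k31p = x1p * x3p + y1p * y3p + t31 * (x1p * y3p - x3p * y1p)
    # denominator D: 8 times the signed area between the circle centers
    d = (x12p - x23p) * (y23p - y31p) - (y12p - y23p) * (x23p - x31p)
    if d == 0.0:
        raise ValueError("beacons and robot are concyclic or collinear")
    x_r = x2 + k31p * (y12p - y23p) / d
    y_r = y2 + k31p * (x23p - x12p) / d
    return (x_r, y_r), d
```

With beacons at (0, 0), (4, 0), (2, 3) and a robot at (2, 1) with heading 0, the exact angles α_{i} = atan2(y_{i} − 1, x_{i} − 2) are mapped back to the position (2, 1), with D = 49.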
2.5 Discussion
This algorithm is very simple, while keeping a strong geometrical meaning (each involved quantity has a physical meaning). Moreover, it does not contain any conditional statement, which increases its readability and eases its implementation. Furthermore, the computations are limited to basic arithmetic and two cot(.) computations. Among these computations, we have to take care of the cot(.) evaluations and of the division by D. If a bearing angle α_{ij} between two beacons is equal to 0 or π, that is, if the robot stands on the line B_{i}B_{j}, then cot(α_{ij}) is infinite. The corresponding circle degenerates into the line B_{i}B_{j} (infinite radius and center coordinates). The robot position is then the intersection of the remaining power line and the line B_{i}B_{j}. It can be shown that the mathematical limit lim_{T_{ij} → ±∞}{x_{R}, y_{R}} exists and corresponds to this situation. The algorithm could deal with these special cases explicitly, but it is not necessary. In practice, we only have to avoid Inf or NaN values in floating point computations. This can be done by limiting the cot(.) value to a minimum or maximum value, corresponding to a small angle far below the measurement precision. In practice, we limit the value of the cot(.) to ±10^{8}, which corresponds to an angle of about ±10^{−8} [rad]; this is indeed far below existing angular sensor precisions.
The denominator D is equal to 0 when the circle centers are collinear. For non-collinear beacons, this situation occurs when the beacons and the robot are concyclic (they all stand on the same circumference, termed the critical circumference). In that case, the three circles are identical, as well as their centers, which causes D = 0. For collinear beacons, this situation is encountered when the beacons and the robot all stand on this line. In these cases, it is impossible to compute the robot position. These are the restrictions common to all three object triangulation algorithms, whatever the method used [10, 13, 3]. The value of D, computed in the final algorithm, is the signed area delimited by the circle centers, multiplied by 8. It is quite natural to use the absolute value of D as a quality measure of the computed position. Indeed, D decreases to 0 when approaching the critical circumference (almost collinear circle centers, almost parallel power lines). In the next section, we show that 1/D is a good approximation of the position error. In practice, D can be used as a validation gate after the triangulation algorithm, or when using a Kalman filter with triangulation. Finally, it should be noted that the robot orientation may be determined by using any beacon B_{i} and its corresponding angle measurement α_{i}, once the robot position is known.
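The orientation recovery mentioned above amounts to a single atan2(.): since α_{i} is measured CCW from the heading θ, the world-frame bearing of B_{i} seen from the computed position equals θ + α_{i}. A possible sketch (ours; the interface is hypothetical):

```python
import math

def robot_heading(beacon, alpha_i, robot_pos):
    """Robot orientation theta from one beacon B_i and its measured angle
    alpha_i, once the position {x_R, y_R} is known: the world-frame
    bearing of B_i equals theta + alpha_i."""
    (xi, yi), (xr, yr) = beacon, robot_pos
    theta = math.atan2(yi - yr, xi - xr) - alpha_i
    return math.atan2(math.sin(theta), math.cos(theta))  # wrap to (-pi, pi]
```

Any of the three beacons can be used; averaging the three estimates is a natural refinement when the angle measurements are noisy.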
3 Simulations
In order to validate our algorithm, we have performed simulations in a square-shaped area (4 × 4 [m²]), with three non-collinear beacons. For each point in this area, we compute the exact angles α_{i} seen by the robot (the robot orientation is arbitrarily set to 0°). Then we add Gaussian noise to these angles, with zero mean and two different standard deviations (σ = 0.1° and σ = 1°). The noisy angles are then used as inputs of our algorithm to compute the estimated position. The position error Δd_{R} is the Euclidean distance between the exact and estimated positions. The orientation error Δθ_{R} is the difference between the exact and estimated orientations. The experiment is repeated several times to compute the mean position error E{Δd_{R}} and the mean orientation error E{Δθ_{R}}. The means of the position and orientation errors are drawn in Fig. 5. The beacon positions are represented by white circles. The left column shows the results for σ = 0.1°, the right column the results for σ = 1°. The first, second, and third rows show the position error, the orientation error, and the quality measure 1/D, respectively.
Our simulation results are consistent with those of common three object triangulation algorithms [11, 13]. In particular, we can easily spot the critical circumference, where errors are large. From these graphics, one can see that 1/D has a shape similar to that of the position or orientation error, up to a constant multiplicative factor. It can be proven (but this is beyond the scope of this paper), by a detailed sensitivity analysis of the robot position with respect to the angles, that

Δd_{R} ≃ (1)/(D) Δα f(.)

where Δd_{R} is the position error, Δα is the angle error, and f(.) is some function of all the other parameters. This explains why 1/D can be used as an approximation of the position error, up to a constant multiplicative factor.
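The relation between 1/D and the position error can also be probed numerically. The following self-contained sketch (our own; the beacon layout, test points, and trial count are arbitrary) triangulates noisy angles at a point close to the critical circumference and at a point well inside the beacon triangle:

```python
import math
import random

def triangulate(beacons, angles):
    """Compact form of Algorithm 3; returns the position and D."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    a1, a2, a3 = angles
    cot = lambda a: 1.0 / math.tan(a)
    x1p, y1p, x3p, y3p = x1 - x2, y1 - y2, x3 - x2, y3 - y2
    t12, t23 = cot(a2 - a1), cot(a3 - a2)
    t31 = (1.0 - t12 * t23) / (t12 + t23)
    x12p, y12p = x1p + t12 * y1p, y1p - t12 * x1p
    x23p, y23p = x3p - t23 * y3p, y3p + t23 * x3p
    x31p = (x3p + x1p) + t31 * (y3p - y1p)
    y31p = (y3p + y1p) - t31 * (x3p - x1p)
    k31p = x1p * x3p + y1p * y3p + t31 * (x1p * y3p - x3p * y1p)
    d = (x12p - x23p) * (y23p - y31p) - (y12p - y23p) * (x23p - x31p)
    return (x2 + k31p * (y12p - y23p) / d, y2 + k31p * (x23p - x12p) / d), d

beacons = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
# circumcircle of these beacons: center (2, 5/6), radius 13/6
near = (2.0, 5.0 / 6.0 - 13.0 / 6.0 + 0.05)  # close to the critical circumference
far = (2.0, 1.0)                             # well inside the beacon triangle

random.seed(0)
sigma = math.radians(0.1)
for robot in (near, far):
    exact = [math.atan2(b[1] - robot[1], b[0] - robot[0]) for b in beacons]
    _, d = triangulate(beacons, exact)
    errs = []
    for _ in range(300):
        noisy = [a + random.gauss(0.0, sigma) for a in exact]
        (xr, yr), _ = triangulate(beacons, noisy)
        errs.append(math.hypot(xr - robot[0], yr - robot[1]))
    print(f"robot={robot}  |D|={abs(d):.3f}  mean error={sum(errs)/len(errs):.4f} m")
```

With these settings, |D| is markedly smaller at the first point and the mean position error correspondingly larger, in line with Δd_{R} ≃ (1/D) Δα f(.).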
We have also compared the computation time of our algorithm against two other algorithms. The first is the Generalized Geometric Triangulation from Esteves et al. [11]. The second is the Geometric Circle Intersection from Font-Llagunes et al. [13]. We chose these algorithms because, like ours, they work in the whole plane and for any beacon ordering. The simulations were performed in the same square-shaped area, with a resolution of 0.5 [mm]. It appears that our algorithm is about 40% faster than that of Esteves et al. [11], and 20% faster than that of Font-Llagunes et al. [13], for exactly the same precision.
4 Conclusions
This paper presents a new and simple three object triangulation algorithm based on the power center of three circles. Many variants of triangulation have been proposed over time; which variant is most appropriate depends on the application, because some methods ignore the beacon ordering while others have blind spots, and some methods are reliable but at the price of increased complexity or the study of special cases. Our algorithm works in the whole plane (except when the beacons and the robot are concyclic or collinear), and for any beacon ordering. It does not need any special-case handling (no conditional statement), which makes it clear. Moreover, it has a strong geometrical meaning (each involved quantity has a physical meaning), while remaining simple. Furthermore, it uses only basic arithmetic computations and two cot(.) computations. Benchmarks show that our algorithm is faster than existing comparable algorithms. Finally, it naturally provides a quality measure of the triangulation result in the whole plane. Simulations show that 1/D is a natural and efficient criterion to estimate the precision of the positioning. To our knowledge, algorithms of the same family do not provide such a criterion. This quality measure can be used to identify the pathological cases (critical circumference), or as a validation gate in Kalman filters based on triangulation.
References
[1] C. Cohen, F. Koss. A Comprehensive Study of Three Object Triangulation. Mobile Robots VII, 1831:95-106, 1993.
[2] C. Lee, Y. Chang, G. Park, J. Ryu, S.-G. Jeong, S. Park, J.-W. Park, H.-C. Lee, K.-S. Hong, M.-H. Lee. Indoor positioning system based on incident angles of infrared emitters. Conference of the IEEE Industrial Electronics Society (IECON), 3:2218-2222, 2004.
[3] C.D. McGillem, T.S. Rappaport. A Beacon Navigation Method for Autonomous Vehicles. IEEE Transactions on Vehicular Technology, 38(3):132-139, 1989.
[4] E.D. Demaine, A. López-Ortiz, J.I. Munro. Robot Localization without Depth Perception. Proceedings of the Scandinavian Workshop on Algorithm Theory (SWAT), Lecture Notes in Computer Science 2368, Springer:177-194, 2002.
[5] E.Z. Casanova, S.D. Quijada, J.G. Garcia-Bermejo, J.R. González. A New Beacon-based System for the Localization of Moving Objects. IEEE International Conference on Mechatronics and Machine Vision in Practice, 2002.
[6] H. Hu, D. Gu. Landmark-based Navigation of Industrial Mobile Robots. International Journal of Industry Robot, 27(6):458-467, 2000.
[7] J. Borenstein, H. Everett, L. Feng, D. Wehe. Mobile Robot Positioning - Sensors and Techniques. Journal of Robotic Systems, 14(4):231-249, 1997.
[8] J. Borenstein, H. Everett, L. Feng. Where am I? Systems and Methods for Mobile Robot Positioning. 1996.
[9] J. Borenstein, L. Feng. UMBmark: A Benchmark Test for Measuring Odometry Errors in Mobile Robots. SPIE, 1995.
[10] J. Esteves, A. Carvalho, C. Couto. Generalized Geometric Triangulation Algorithm for Mobile Robot Absolute Self-Localization. International Symposium on Industrial Electronics (ISIE), 1:346-351, 2003.
[11] J. Esteves, A. Carvalho, C. Couto. Position and Orientation Errors in Mobile Robot Absolute Self-Localization Using an Improved Version of the Generalized Geometric Triangulation Algorithm. IEEE International Conference on Industrial Technology (ICIT):830-835, 2006.
[12] J.D. Eiden. Géométrie analytique classique. Calvage & Mounet, 2009.
[13] J.M. Font-Llagunes, J.A. Batlle. Consistent triangulation for mobile robot localization using discontinuous angular measurements. Robotics and Autonomous Systems, 57(9):931-942, 2009.
[14] J.M. Font-Llagunes, J.A. Batlle. Mobile Robot Localization. Revisiting the Triangulation Methods. International Federation of Automatic Control (IFAC) Symposium on Robot Control, 8, 2006.
[15] K. Briechle, U. Hanebeck. Localization of a Mobile Robot Using Relative Bearing Measurements. IEEE Transactions on Robotics and Automation, 20(1):36-44, 2004.
[16] M. Betke, L. Gurvits. Mobile Robot Localization Using Landmarks. IEEE Transactions on Robotics and Automation, 13(2):251-263, 1997.
[17] V. Pierlot, M. Van Droogenbroeck. A simple and low cost angle measurement system for mobile robot positioning. Workshop on Circuits, Systems and Signal Processing (ProRISC):251-254, 2009.