
### Lane Detection

This work is the result of a collaboration between ROVIS and Elektrobit Automotive.

In this work, a road structure estimation system has been developed, capable of robustly segmenting and tracking lanes imaged by a camera mounted on a vehicle. The algorithm is based on a camera calibration procedure that computes the roll, pitch and yaw of the vehicle in real time. Furthermore, the method computes not only the ego-lane, on which the car is driving, but also the neighbouring lanes.

Fig. 1. Mathematical model of the road.

Mathematically, the road model depicted in Fig. 1 can be represented as a 5-element vector:

$[\phi, \theta, p, w, c]^T$

where:

- $\phi$ [rad] is the pitch angle, measuring the rotation around the X axis,
- $\theta$ [rad] is the yaw angle, measuring the rotation around the Y axis,
- $p$ [m] is the lateral offset between the middle of the ego-lane and the middle of the ego-vehicle,
- $w$ [m] is the lane width, defined as the distance between the left and right lines of a lane,
- $c$ [1/m] is the curvature of the lane, with $c = 1/R$ for a curve of radius $R$.

First of all, a correspondence between real-world $(X, Y, Z)$ coordinates and image $(u, v)$ coordinates needs to be determined.
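As a minimal sketch of how this state vector could be carried around in code (the class name and the numeric values below are illustrative assumptions, not part of the original system):

```python
from dataclasses import dataclass

@dataclass
class RoadState:
    """Road model state vector [phi, theta, p, w, c]^T (illustrative sketch)."""
    phi: float    # pitch angle [rad], rotation around the X axis
    theta: float  # yaw angle [rad], rotation around the Y axis
    p: float      # lateral offset between lane centre and vehicle centre [m]
    w: float      # lane width [m]
    c: float      # lane curvature [1/m], c = 1/R

# assumed example state: slight pitch, driving straight, 3.5 m lane, gentle curve
state = RoadState(phi=0.01, theta=0.0, p=0.2, w=3.5, c=1.0 / 500.0)
```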

From the two similar right triangles that have a red hypotenuse, blue and dashed black sides, the following equation can be deduced:

$\frac{(v+f_v \sin \phi) \cos \phi}{(f_v - v \tan \phi) \cos \phi} = \frac{h_0}{l-l_0}$


Since the pitch angle $\phi$ is considered small, the following simplifying assumptions can be made:

$v + f_v \sin \phi \approx v + f_v \phi, \qquad f_v - v \tan \phi \approx f_v$

Thus, a simple vertical coordinate transformation is obtained:

$v = f_v \bigg[ \frac{h_0}{l-l_0} - \phi \bigg]$
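This vertical mapping can be sanity-checked numerically; the focal length $f_v$, camera height $h_0$, offset $l_0$ and pitch angle below are made-up example values, not parameters from the original work:

```python
def vertical_image_coordinate(l, h0, l0, f_v, phi):
    """Vertical image coordinate v of a ground point at look-ahead distance l [m],
    for camera height h0 [m], mounting offset l0 [m], vertical focal length
    f_v [px] and pitch angle phi [rad], using the small-angle model v = f_v*(h0/(l-l0) - phi)."""
    return f_v * (h0 / (l - l0) - phi)

# assumed example parameters, for illustration only
v = vertical_image_coordinate(l=20.0, h0=1.2, l0=2.0, f_v=700.0, phi=0.01)
```

As expected from the model, points farther down the road project to smaller $v$ (higher in the image).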

Switching to the horizontal (road) plane, the lateral distance $d$ to the lane line at look-ahead distance $l$ can be expressed through the curve radius $R$ and the chord distance $d_c$:

$d = R - d_c - \frac{w}{2} + p, \qquad d_c = \sqrt{R^2 - l^2} = R \sqrt{1 - \bigg(\frac{l}{R}\bigg)^2}$

Under the assumption that the look-ahead distance is much smaller than the radius of curvature of the lane, $l \ll R$, a first-order Taylor expansion of the square root gives, with the curvature $c = 1/R$:

$d_c = R \sqrt{1 - \bigg(\frac{l}{R}\bigg)^2} \approx R - \frac{l^2}{2R} = R - \frac{1}{2} c l^2$

Therefore the approximate lateral distance is:

$d = \frac{1}{2} cl^2 - \frac{w}{2} + p$
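The quality of this small-curvature approximation can be checked against the exact expression; the radius, lane width, offset and look-ahead distance below are assumed example numbers:

```python
import math

def lateral_distance_exact(l, R, w, p):
    """Exact lateral distance d = R - sqrt(R^2 - l^2) - w/2 + p."""
    return R - math.sqrt(R**2 - l**2) - w / 2 + p

def lateral_distance_approx(l, c, w, p):
    """Approximate lateral distance d = c*l^2/2 - w/2 + p, with c = 1/R."""
    return 0.5 * c * l**2 - w / 2 + p

# assumed example: R = 500 m, lane width 3.5 m, offset 0.2 m, look-ahead 20 m
R, w, p, l = 500.0, 3.5, 0.2, 20.0
exact = lateral_distance_exact(l, R, w, p)
approx = lateral_distance_approx(l, 1.0 / R, w, p)
```

For these values the two expressions agree to within a fraction of a millimetre, which is well below lane-marking measurement noise.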

Next, the two similar right triangles in the image that have red hypotenuses and blue opposite sides are considered, obtaining:

$\frac{(u+f_u \sin \theta) \cos \theta}{(f_u - u \tan \theta) \cos \theta} = \frac{d}{l-l_0}$

Similarly to the vertical projection case, the yaw angle $\theta$ is considered to be small, leading to:

$\frac{u}{f_u} + \theta = \frac{d}{l-l_0}$

$u = -f_u \bigg[ -\frac{d}{l-l_0} + \theta \bigg]$

Thus:

$u = -f_u \bigg[ -\frac{\frac{1}{2} cl^2 - \frac{w}{2} + p}{l-l_0} + \theta \bigg]$
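Putting the pieces together, a forward projection of a lane-line point into image coordinates could be sketched as follows; the function name and all numeric parameters are invented for illustration and are not taken from the original system:

```python
def project_lane_line(l, phi, theta, p, w, c, h0, l0, f_u, f_v):
    """Project the lane-line point at look-ahead distance l [m] into image
    coordinates (u, v) [px], using the small-angle road model from the text."""
    d = 0.5 * c * l**2 - w / 2 + p          # lateral distance in the road plane
    u = -f_u * (-d / (l - l0) + theta)      # horizontal image coordinate
    v = f_v * (h0 / (l - l0) - phi)         # vertical image coordinate
    return u, v

# assumed camera/road parameters, illustrative only
u, v = project_lane_line(l=20.0, phi=0.01, theta=0.005, p=0.2, w=3.5,
                         c=1.0 / 500.0, h0=1.2, l0=2.0, f_u=700.0, f_v=700.0)
```

Fitting the five state-vector parameters so that such projected points match detected lane markings is exactly the estimation problem the tracking stage solves.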

### References

Ernst D. Dickmanns and Birger D. Mysliwetz, "Recursive 3-D Road and Relative Ego-State Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, February 1992.