Lucas-Kanade tracker
Revision as of 15:39, 15 February 2008
The Lucas-Kanade tracking algorithm iteratively minimises the difference between the image and a warped template. The technique can be used for image alignment, tracking, optic-flow analysis, and motion estimation. In this example a texture patch in a Space Shuttle video is tracked over 324 frames, using a 2-D affine transform as the motion model.
For the mathematical background, see the web page of the CMU project "Lucas-Kanade 20 years on" and the publication by Baker and Matthews.
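The quantity being minimised can be written down compactly (notation as in Baker and Matthews, with W the warp, p its parameters, I the image, and T the template):

```latex
\min_{\mathbf{p}} \sum_{\mathbf{x}} \left[ I(\mathbf{W}(\mathbf{x};\mathbf{p})) - T(\mathbf{x}) \right]^2
```

In the inverse compositional variant the parameter increment is solved on the template side, so the gradients and the Hessian can be precomputed once, and the warp is updated by composition: W(x; p) ← W(x; p) ∘ W(x; Δp)⁻¹.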
Implementation
The crucial parts of the implementation (here: an isometric model with three degrees of freedom) are only a few lines of code. An initial parameter vector p, an image img, and a template tpl are required. The tracking algorithm (inverse compositional Lucas-Kanade) is initialised as follows:
 p = Vector[ xshift, yshift, rotation ]
 w, h = *tpl.shape
 x, y = xramp( w, h ), yramp( w, h )
 sigma = 5.0
 gx = tpl.gauss_gradient_x( sigma )
 gy = tpl.gauss_gradient_y( sigma )
 c = Matrix[ [ 1, 0 ], [ 0, 1 ], [ -y, x ] ] * Vector[ gx, gy ]
 hs = ( c * c.covector ).collect { |e| e.sum }
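The precomputed quantities of this initialisation, the steepest-descent images c = (gx, gy, -y·gx + x·gy) and the 3×3 Hessian, can be illustrated without HornetsEye. The following sketch uses only Ruby's standard matrix library; the tiny synthetic template and the central-difference gradients are stand-ins for tpl and for gauss_gradient_x/gauss_gradient_y, not the original data:

```ruby
require 'matrix'

# Tiny synthetic template (4x4) as a stand-in for tpl.
tpl = Array.new( 4 ) { |j| Array.new( 4 ) { |i| ( i + 2 * j ).to_f } }
w = tpl[ 0 ].size
h = tpl.size

# Central-difference gradients (borders clamped), replacing the
# Gaussian gradient filters of the original code.
gx = Array.new( h ) { |j| Array.new( w ) { |i|
  ( tpl[ j ][ [ i + 1, w - 1 ].min ] - tpl[ j ][ [ i - 1, 0 ].max ] ) / 2.0
} }
gy = Array.new( h ) { |j| Array.new( w ) { |i|
  ( tpl[ [ j + 1, h - 1 ].min ][ i ] - tpl[ [ j - 1, 0 ].max ][ i ] ) / 2.0
} }

# Steepest-descent image c = ( gx, gy, -y * gx + x * gy ) at every pixel,
# and the Hessian hs as the sum over all pixels of c * c^T.
hs = Matrix.zero( 3 )
h.times do |y|
  w.times do |x|
    c = Vector[ gx[ y ][ x ], gy[ y ][ x ],
                -y * gx[ y ][ x ] + x * gy[ y ][ x ] ]
    hs += Matrix.column_vector( c.to_a ) * c.covector
  end
end
```

Since each summand c·cᵀ is symmetric, hs is a symmetric 3×3 matrix; its top-left entries are simply the sums of the squared gradients.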
A tracking step is then performed by applying the following piece of code to each image img. Usually the tracking step is performed multiple times on each image to improve the tracking estimate.
 field = MultiArray.new( MultiArray::LINT, w, h, 2 )
 field[ 0...w, 0...h, 0 ] = x * cos( p[2] ) - y * sin( p[2] ) + p[0]
 field[ 0...w, 0...h, 1 ] = x * sin( p[2] ) + y * cos( p[2] ) + p[1]
 diff = img.warp_clipped( field ).to_type( MultiArray::SFLOAT ) - tpl
 s = c.collect { |e| ( e * diff ).sum }
 d = hs.inverse * s
 p += Matrix[ [ cos(p[2]), -sin(p[2]), 0 ], [ sin(p[2]), cos(p[2]), 0 ], [ 0, 0, 1 ] ] * d
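The structure of such a tracking step can be exercised end to end on synthetic data. For clarity the sketch below uses the simpler forwards-additive Lucas-Kanade variant with a translation-only warp (not the inverse compositional isometric scheme above), and an analytic image function instead of pixel interpolation, so it needs nothing beyond Ruby's standard library; all names in it are made up for the example:

```ruby
require 'matrix'

# Analytic "image" so it can be sampled at real-valued coordinates
# (a real tracker would interpolate pixel data instead).
i_f  = ->( x, y ) { x * x + y * y }   # image intensity
i_gx = ->( x, y ) { 2.0 * x }         # d/dx of the image
i_gy = ->( x, y ) { 2.0 * y }         # d/dy of the image

# Template: the image shifted by the true translation ( 3, 2 ).
tw, th = 6, 6
tpl = Array.new( th ) { |y| Array.new( tw ) { |x| i_f.call( x + 3.0, y + 2.0 ) } }

p = Vector[ 0.0, 0.0 ]                # initial translation estimate
20.times do
  h = Matrix.zero( 2 )
  s = Vector[ 0.0, 0.0 ]
  th.times do |y|
    tw.times do |x|
      u, v = x + p[ 0 ], y + p[ 1 ]   # warped coordinate
      g = Vector[ i_gx.call( u, v ), i_gy.call( u, v ) ]
      r = tpl[ y ][ x ] - i_f.call( u, v )   # residual T - I( W( x; p ) )
      h += Matrix.column_vector( g.to_a ) * g.covector
      s += g * r
    end
  end
  p += h.inverse * s                  # Gauss-Newton update
end
```

After a handful of iterations the estimate p converges to the true shift ( 3, 2 ), illustrating why the real tracker repeats the step several times per frame.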
A more sophisticated, full implementation is available as an example application with HornetsEye. You can find a listing of the source code here.
See Also
External Links
- Hornetseye implementation
- CMU project: "Lucas-Kanade 20 years on"
- S. Baker, I. Matthews: Lucas-Kanade 20 Years On: A Unifying Framework, International Journal of Computer Vision, Vol. 56, No. 3, March, 2004, pp. 221-255.
- NASA high definition videos