Lucas-Kanade tracker

Revision as of 18:33, 5 June 2008

Tracking of a texture patch in a NASA HD-video with the Lucas-Kanade tracker (using a 2-D affine model)
Visualisation of Lucas-Kanade template tracking (2-D affine model, 2-D homography). Note that the algorithm is sensitive to illumination changes, which are not modelled in this implementation
Tracking of a nano-indenter in a TEM-video (using an isometric model) with high magnification and low magnification. The indenter is lost when it moves too fast for the tracking algorithm

The Lucas-Kanade tracking algorithm iteratively tries to minimise the difference between the image and a warped template. The technique can be used for image alignment, tracking, optical flow analysis, and motion estimation. In this example a texture patch in a Space Shuttle video is tracked over 324 frames. A 2-D affine transform was used as a model.
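The minimisation can be illustrated with a small plain-Ruby sketch (a hypothetical example, independent of HornetsEye): a one-dimensional, translation-only tracker that aligns a shifted copy of a signal with its template using the template gradient and an inverse-compositional update.

```ruby
# One-dimensional Lucas-Kanade sketch (translation-only, plain Ruby, not
# HornetsEye): estimate the shift p that aligns img with tpl by repeatedly
# solving the Gauss-Newton step d = sum(g * diff) / sum(g * g).
n = 64
tpl = (0...n).map { |x| Math.exp( -( ( x - 32 ) / 6.0 )**2 ) }  # template bump
img = (0...n).map { |x| Math.exp( -( ( x - 35 ) / 6.0 )**2 ) }  # bump shifted by 3

# Central-difference gradient of the template.
g = (0...n).map do |x|
  ( tpl[ [ x + 1, n - 1 ].min ] - tpl[ [ x - 1, 0 ].max ] ) / 2.0
end

# Linear interpolation with clipping, as a stand-in for warped sampling.
sample = lambda do |a, x|
  x = [ [ x, 0.0 ].max, n - 1.001 ].min
  i = x.floor
  a[ i ] * ( 1.0 - ( x - i ) ) + a[ i + 1 ] * ( x - i )
end

p = 0.0
20.times do
  num = den = 0.0
  (0...n).each do |x|
    diff = sample.call( img, x + p ) - tpl[ x ]  # residual image term
    num += g[ x ] * diff
    den += g[ x ] * g[ x ]
  end
  p -= num / den  # inverse compositional update
end
# p converges towards the true shift of 3
```

Because the gradient of the fixed template is used instead of the gradient of the moving image, it only needs to be computed once; this is the idea behind the inverse compositional formulation used below.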

For the mathematical background, see the web page of the CMU project "Lucas-Kanade 20 Years On" and the publication by Baker and Matthews.

Implementation

The crucial parts of the implementation (here: an isometric model with three degrees of freedom) amount to only a few lines of code. An initial parameter vector p, an image img, and a template tpl are required. The tracking algorithm (inverse compositional Lucas-Kanade) is initialised as follows:

p = Vector[ xshift, yshift, rotation ]  # initial pose estimate
w, h = *tpl.shape                       # size of the template
x, y = xramp( w, h ), yramp( w, h )     # pixel coordinate grids
sigma = 5.0
gx = tpl.gauss_gradient_x( sigma )      # Gaussian gradients of the template
gy = tpl.gauss_gradient_y( sigma )
c = Matrix[ [ 1, 0 ], [ 0, 1 ], [ -y, x ] ] * Vector[ gx, gy ]  # steepest-descent images
hs = ( c * c.covector ).collect { |e| e.sum }  # Gauss-Newton Hessian
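The last line forms the Gauss-Newton Hessian by summing the outer products of the steepest-descent images over all pixels. The same computation can be sketched in plain Ruby with the standard matrix library (using a hypothetical synthetic ramp template, not the HornetsEye API):

```ruby
require 'matrix'

# Plain-Ruby sketch of the Hessian assembly: hs above corresponds to the
# per-pixel sum of the outer products c * c^T of the steepest-descent images.
w, h = 8, 8
# Hypothetical template: a linear ramp, so its finite-difference gradients
# are constant (gx = 0.5, gy = 0.25).
tpl = Array.new( h ) { |y| Array.new( w ) { |x| 0.5 * x + 0.25 * y } }

hess = Matrix.zero( 3 )
(1...h - 1).each do |y|
  (1...w - 1).each do |x|
    gx = ( tpl[ y ][ x + 1 ] - tpl[ y ][ x - 1 ] ) / 2.0  # horizontal gradient
    gy = ( tpl[ y + 1 ][ x ] - tpl[ y - 1 ][ x ] ) / 2.0  # vertical gradient
    # Steepest-descent entry for the parameters [ xshift, yshift, rotation ].
    c = Vector[ gx, gy, -y * gx + x * gy ]
    hess += c.covector.transpose * c.covector  # accumulate outer product c * c^T
  end
end
# hess is the symmetric 3x3 Gauss-Newton Hessian of the isometric model
```

Since the template is fixed, this Hessian is computed once during initialisation and only inverted during tracking.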

A tracking step is then performed by applying the following piece of code to each image img. Usually the tracking step is repeated several times per image to refine the tracking estimate.

field = MultiArray.new( MultiArray::SFLOAT, w, h, 2 )
field[ 0...w, 0...h, 0 ] = x * cos( p[2] ) - y * sin( p[2] ) + p[0]  # warped x-coordinates
field[ 0...w, 0...h, 1 ] = x * sin( p[2] ) + y * cos( p[2] ) + p[1]  # warped y-coordinates
diff = img.warp_clipped_interpolate( field ) - tpl  # residual image
s = c.collect { |e| ( e * diff ).sum }              # steepest-descent parameter sums
d = hs.inverse * s                                  # Gauss-Newton increment
p += Matrix[ [  cos(p[2]), -sin(p[2]), 0 ],
             [  sin(p[2]),  cos(p[2]), 0 ],
             [          0,          0, 1 ] ] * d    # compose increment with current pose
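The final line rotates the increment d, which is expressed in template coordinates, into the image frame before adding it to p. A small plain-Ruby check with the standard matrix library (hypothetical numbers) makes the effect visible: at a current rotation of 90 degrees, a step along the template's x-axis becomes a step along the image's y-axis.

```ruby
require 'matrix'
include Math

# Hypothetical values: current estimate rotated by 90 degrees, and an
# increment d of one pixel along the template's x-axis.
p = Vector[ 10.0, 20.0, PI / 2 ]
d = Vector[ 1.0, 0.0, 0.0 ]

# Turn the translational part of d by the current rotation angle p[2]
# before adding it, as in the update step above.
p += Matrix[ [ cos( p[2] ), -sin( p[2] ), 0 ],
             [ sin( p[2] ),  cos( p[2] ), 0 ],
             [           0,            0, 1 ] ] * d
# p is now approximately [ 10, 21, PI / 2 ]: the step along the template's
# x-axis has become a step along the image's y-axis
```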

A full implementation is available as an example application of HornetsEye. The implementation uses interpolation, which is very important for the stability of the Lucas-Kanade tracker. Furthermore, the gradient is computed using the surroundings of the initial template to avoid boundary effects. You can find a listing of the source code here.
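The kind of bilinear interpolation that warped sampling relies on can be sketched as follows (a hypothetical plain-Ruby helper, not the HornetsEye implementation of warp_clipped_interpolate); coordinates outside the image are clipped to the border:

```ruby
# Hypothetical bilinear interpolation helper: sample img (an array of rows)
# at a fractional position (x, y), clipping out-of-range coordinates.
def interpolate( img, x, y )
  h, w = img.size, img[ 0 ].size
  # Pixel access with clipping to the image border.
  get = lambda do |yy, xx|
    img[ [ [ yy, 0 ].max, h - 1 ].min ][ [ [ xx, 0 ].max, w - 1 ].min ]
  end
  x0, y0 = x.floor, y.floor  # integer part of the position
  fx, fy = x - x0, y - y0    # fractional part: the interpolation weights
  ( 1 - fy ) * ( ( 1 - fx ) * get.call( y0,     x0 ) + fx * get.call( y0,     x0 + 1 ) ) +
        fy   * ( ( 1 - fx ) * get.call( y0 + 1, x0 ) + fx * get.call( y0 + 1, x0 + 1 ) )
end

img = [ [ 0.0, 1.0 ],
        [ 2.0, 3.0 ] ]
v = interpolate( img, 0.5, 0.5 )  # the mean of the four neighbours: 1.5
```

Without such sub-pixel sampling, the warped image could only be evaluated at integer positions, and the fractional parameter updates computed by the tracker would be rounded away.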
