TEM vision software
Revision as of 19:45, 26 June 2009
As part of the Nanorobotics project, TEM vision software was developed. The software uses a JEOL 3010 transmission electron microscope with a TVIPS FastScan-F114 camera, an IIDC/DCAM-compatible FireWire camera. The nano-indenter is controlled by a Nanomagnetics SPM controller (the old version of the controller can be accessed with a PCI-DIO24 card).
The software runs under GNU/Linux and makes use of Damien Douxchamps' libdc1394 to access the camera and Warren Jasper's PCI-DIO24 driver to access the PCI card which interfaces with the SPM controller.
The software was implemented in Ruby using Qt4-QtRuby, HornetsEye, and a custom Ruby extension which accesses the SPM controller via the PCI-DIO24 card. Distributed Ruby (DRb) and multiple processes were used to work around the fact that Ruby 1.8 does not offer native threads.
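The multi-process workaround can be illustrated with Ruby's standard DRb library: one process owns an object (in the real software, the process talking to the SPM hardware) and serves it over a local socket, while another process calls its methods remotely. The `Stage` class, the URI, and the method names below are purely illustrative, not the actual API of the software.

```ruby
require 'drb/drb'

# Hypothetical stand-in for the process that owns the SPM hardware.
class Stage
  def initialize
    @position = [0, 0]
  end

  # Move the (simulated) stage to the given coordinates.
  def move_to(x, y)
    @position = [x, y]
  end

  attr_reader :position
end

SERVER_URI = 'druby://localhost:8787'

# Server process: exposes the Stage object via DRb.
server = fork do
  DRb.start_service(SERVER_URI, Stage.new)
  DRb.thread.join
end

sleep 1  # give the server process time to start

# Client process: obtains a remote reference and calls methods on it
# as if the object were local.
DRb.start_service
stage = DRbObject.new_with_uri(SERVER_URI)
stage.move_to(3, 4)
final = stage.position
p final  # => [3, 4]

Process.kill('TERM', server)
Process.wait(server)
```

Because each DRb server runs in its own OS process, a blocking hardware call in one process does not stall the GUI or the vision pipeline in another, which is exactly the limitation of Ruby 1.8's green threads this design sidesteps.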
The vision algorithms are configured using a separate program, and the configuration is saved to a file using Ruby marshalling. A plugin-based architecture accepts plugins for recognition and tracking, allowing the user to select and configure Normalised Cross-Correlation, Lucas-Kanade tracking, or Connected Component Analysis.
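Saving and restoring a configuration with Ruby marshalling can be sketched as follows. The `PluginConfig` structure and its fields are hypothetical; the actual software stores its own configuration objects, but the `Marshal.dump`/`Marshal.load` round-trip is the same.

```ruby
# Hypothetical configuration record for a vision plugin.
PluginConfig = Struct.new(:algorithm, :parameters)

config = PluginConfig.new('Normalised Cross-Correlation',
                          { 'template_size' => 32 })

# Serialise the configuration to a file ...
File.open('vision.cfg', 'wb') { |f| Marshal.dump(config, f) }

# ... and load it back later (e.g. when the tracking program starts).
restored = File.open('vision.cfg', 'rb') { |f| Marshal.load(f) }
p restored.algorithm  # => "Normalised Cross-Correlation"
```

Marshalling keeps the configuration file format trivially in sync with the Ruby objects, at the cost of the file being Ruby-specific and version-sensitive.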
Demonstration

[Figure: 1. The vision algorithms are configured. 2. The SPM axes are calibrated against the camera image. 3. Using closed-loop control the nano-indenter is moved along a linear path.]

[Figure: Moving the tip using "drag-and-drop" does not require vision feedback; the circle marks the initial position of the mouse cursor. With vision-based closed-loop control of the tip position, the cross-and-circle marks the last known position of the nano-indenter and the cross marks the current nominal position.]
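The closed-loop behaviour shown in the demonstration can be sketched as a simple proportional controller: the tracker reports the tip position in camera coordinates, and each control step commands the stage a fraction of the way towards the nominal target. The gain, the coordinate representation, and the `control_step` helper are illustrative assumptions, not the software's actual control law.

```ruby
# Proportional gain (illustrative value).
GAIN = 0.5

# One control step: move the measured position a fraction GAIN of the
# remaining error towards the target position.
def control_step(measured, target)
  ex = target[0] - measured[0]
  ey = target[1] - measured[1]
  [measured[0] + GAIN * ex, measured[1] + GAIN * ey]
end

position = [0.0, 0.0]   # last known tip position (from the tracker)
target   = [10.0, 10.0] # current nominal position

# Iterating the loop drives the tip towards the nominal position.
5.times { position = control_step(position, target) }
p position  # converges towards the target
```

In the real system the "measured" position comes from the selected tracking plugin rather than from the previous command, which is what makes the loop robust against drift and hysteresis of the stage.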
Future Work
Possible future work includes:

- port to Ruby 1.9, which has native threads
- integrate the serial-port interface of the JEOL TEM
- feature-based recognition and tracking (less sensitive to brightness changes)
See Also
External Links
- Hardware
- Software
- Related publications
- Jung-Me Park, C. G. Looney, Hui-Chuan Chen: Fast connected component labeling algorithm using a divide and conquer technique, 15th International Conference on Computers and their Applications, March 2000, pp. 373-376
- J. P. Lewis: Fast Normalized Cross-Correlation, Industrial Light & Magic
- S. Baker, I. Matthews: Lucas-Kanade 20 Years On: A Unifying Framework, International Journal of Computer Vision, Vol. 56, No. 3, March 2004, pp. 221-255