VIEW-FINDER

Revision as of 16:26, 28 January 2009

[Image: ViewFinder impression]
[Image: A prototype of the ViewFinder SLAM procedure based on an SIR-RB (sampling importance resampling, Rao-Blackwellised) particle filter implementation. Ladar and odometry data were collected with a Pioneer robot using the URG-04LX laser range finder and Player software.]
[Image: Nitrogen gas evolution in a room-fire scenario, simulated with NIST's FDS-SMV]
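The SIR-RB filter named in the caption above combines sequential importance resampling with Rao-Blackwellisation of part of the state. As a rough illustration of the SIR half only, here is a minimal sketch of one predict-weight-resample cycle on a 1-D toy state; the state, noise levels, and motion model are illustrative assumptions, not the ViewFinder implementation.

```python
import math
import random

def sir_step(particles, control, measurement,
             motion_noise=0.1, meas_noise=0.5):
    """One SIR (sampling importance resampling) cycle on a 1-D state.

    Toy sketch only: the real ViewFinder filter is Rao-Blackwellised
    and operates on robot poses and ladar maps, not a scalar state.
    """
    # Predict: propagate each particle through a noisy motion model.
    particles = [p + control + random.gauss(0.0, motion_noise)
                 for p in particles]
    # Weight: score each particle by the Gaussian measurement likelihood.
    weights = [math.exp(-0.5 * ((measurement - p) / meas_noise) ** 2)
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new set with probability proportional to weight.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(1)
n = 500
particles = [random.gauss(0.0, 1.0) for _ in range(n)]
true_state = 0.0
for _ in range(20):
    true_state += 1.0                          # robot moves one unit
    z = true_state + random.gauss(0.0, 0.5)    # noisy range measurement
    particles = sir_step(particles, control=1.0, measurement=z)

estimate = sum(particles) / n  # posterior mean, close to true_state
```

In a Rao-Blackwellised variant, each particle would additionally carry an analytically updated map estimate (as in FastSLAM-style filters), so only the pose is sampled.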

The VIEW-FINDER project

Official VIEW-FINDER page: "Vision and Chemi-resistor Equipped Web-connected Finding Robots"

In the event of an emergency due to a fire or other crisis, a necessary but time-consuming prerequisite, one that can delay the actual rescue operation, is establishing whether the ground can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots whose primary task is to gather data. The robots are equipped with sensors that detect the presence of chemicals while, in parallel, image and ladar data are collected and forwarded to an advanced base station.


Objective

[Images: ViewFinder logo; EU flag, CORDIS logo, IST logo]

The VIEW-FINDER project is a European Union Framework VI funded program (project no. 045541).

The VIEW-FINDER project is an Advanced Robotics project, consisting of 9 European partners, including South Yorkshire Fire and Rescue Services, that seeks to use an autonomous robotic system to establish ground safety in the event of a fire. Its primary aim is to gather visual and chemical data to assist fire-rescue personnel. A base station will combine the gathered information with information retrieved from large-scale GMES information bases.

This project will address key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces and autonomous robot navigation. The project is coordinated by MERI at Sheffield Hallam University.

The developed VIEW-FINDER system will be a complete semi-autonomous system: the individual robot sensors operate autonomously within the limits of the tasks assigned to them, that is, they autonomously navigate through and inspect an area. Central Operation Control (CoC) assigns tasks to the robots and monitors their execution; however, the CoC can renew task assignments or refine the tasks of an individual robot. System-human interaction at the CoC will be facilitated through multimodal interfaces, in which graphical displays play an important but not exclusive role.
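The assign-monitor-reassign relationship between the CoC and the robots can be sketched as a small data structure. This is a hypothetical illustration of the described architecture, not project code; the class and field names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A high-level task the CoC hands to a robot (hypothetical schema)."""
    action: str                              # e.g. "inspect_area", "navigate_to"
    params: dict = field(default_factory=dict)

class CentralOperationControl:
    """Sketch of the CoC role: assign tasks, monitor, and reassign.

    The robots themselves execute each task autonomously; the CoC
    only tracks the current assignment and can replace it at any time.
    """
    def __init__(self):
        self.assignments = {}                # robot_id -> current Task

    def assign(self, robot_id, task):
        # Hand a task to a robot; execution is left to the robot.
        self.assignments[robot_id] = task

    def reassign(self, robot_id, task):
        # Renew or refine an individual robot's task.
        self.assignments[robot_id] = task

    def current_task(self, robot_id):
        return self.assignments.get(robot_id)

coc = CentralOperationControl()
coc.assign("robot-1", Task("inspect_area", {"zone": "A"}))
coc.reassign("robot-1", Task("navigate_to", {"x": 3.0, "y": 1.5}))
```

The point of the sketch is the division of responsibility: the CoC holds only the assignment state, while navigation and inspection logic live on the robots.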

Although the robots are able to operate autonomously, human operators can monitor their operations and send high-level task requests, as well as low-level commands, through the interface to any node in the system. The human interface must provide the human supervisor and human interveners with a reduced but relevant overview of the ground and of the robots and human rescue workers therein.

The project will end in November 2009.

Partners

Coordinator

Academic Research Partners

Industrial partners

See Also

External Links
