VIEW-FINDER
{|
|[[Image:VF__indoor.JPG|thumb|320px| ATRV-Jr of PIAP roaming free in SHU labs; successful acquisition and communication tests with at least four different streams of data]]
|-
|[[Image:Vf basestation.jpeg|thumb|220px| The View Finder base station as presented at the final review.]]
|-
|[[Image:Small ViewFinder29.JPG|thumb|220px| Janusz (PIAP) checking remote operation of the robot and image compression / acquisition.]]
|-
|[[Image:VF added.JPG|thumb|220px| Lazaros (DUTH), Giovanni (IES), Andrea (UoR) and Janusz (PIAP): in preparation for first launch.]]
|-
|[[Image:Small ViewFinder33.JPG|thumb|220px| George (SHU) and Andrea (UoR) checking SLAM, Laser and tilt processes: acquisition and communication...]]
|}
The VIEW-FINDER project
In the aftermath of a fire or other crisis event, a necessary but time-consuming prerequisite, one that can delay the actual rescue operation, is to establish whether the ground can be accessed or entered safely by human emergency workers. This was the context in which the project was initiated.
Notifications and Announcements
- Successful completion - 19th Jan 2010: The project was successfully completed. The EU review panel congratulated the project for the integration efforts and the presented overall solution.
- Advisory: You are advised to visit the official VIEW-FINDER page: "Vision and Chemi-resistor Equipped Web-connected Finding Robots".
- Disclaimer: This page covers predominantly the indoor scenario, although we have tried to draw most of the information provided herein from the project as a whole. To the best of our knowledge, the information provided herein was correct at the time of publication.
- Final dissemination event: IARP workshop RISE 2010 at Sheffield Hallam University on 20-21 January 2010. Further details were made available here.
General Description
VIEW-FINDER was a field (mobile) robotics project (European Union, Framework VI, project number 045541) carried out by nine European partners, which investigated the use of semi-autonomous mobile robot platforms to establish ground safety in the aftermath of fire incidents. The project was coordinated by the Materials and Engineering Research Institute at Sheffield Hallam University and officially ended on 30 November 2009; the final reports and demonstrations were presented on 18 January 2010, and the review held on 19 January 2010 judged the project 'successful, with praise for the integration work and the overall solution'.
The objective of the VIEW-FINDER project was to develop robots whose primary task is to gather data that help human interveners take informed decisions before entering an indoor or outdoor area. The primary aim was thus to gather visual, environmental and chemical data to assist fire rescue personnel after a disaster has occurred. A base station combined the gathered information with information retrieved from the large-scale GMES information bases. The issues addressed related to: 2.5D map building, localisation and reconstruction; interfacing local command information with external sources; autonomous robot navigation; and human-robot interfaces (base station).
Partners PIAP, UoR, SHU and IES were predominantly involved in the indoor scenario, and RMA and DUTH predominantly in the outdoor scenario, with SAS and SyFire involved in both.
System description
The developed VIEW-FINDER system was semi-autonomous: the individual robot-sensors operated autonomously within the limits of the tasks assigned to them. That is, they navigated autonomously between two assigned points, planning their path and avoiding obstacles while inspecting the area. The remote central operations control unit assigned tasks to the robots and monitored their execution, with the ability to intervene at any time; it could also renew task assignments or provide further details on the tasks of a ground robot. System-human interaction at the central operations control was facilitated through multimodal interfaces, in which graphical displays played an important but not exclusive role.
Although the robots could operate autonomously, human operators monitored the robots' processes and sent high-level task requests, as well as low-level commands, through the human-computer interface to some nodes of the ground system. The human-computer interface (base station) had to ensure that a human supervisor, and human interveners on the ground, were provided with a reduced yet relevant overview of the area under investigation, including the robots and human rescue workers therein.
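As a concrete illustration of such a high-level task request, the following is a minimal sketch that sends a single navigation goal to a robot through the Player client library (Player 2.1.2 is part of the indoor software platform described below). It assumes that a planner or obstacle-avoidance driver exposes a position2d interface on the default port 6665; the host name, port, device index and goal coordinates are illustrative assumptions, not project values.

<pre>
// Minimal sketch: assign one navigation goal via the Player client library.
// Host, port, device index and goal coordinates are assumptions for
// illustration only; actual VIEW-FINDER task assignment went through the
// base station and its multimodal interface.
#include <libplayerc++/playerc++.h>
#include <cmath>
#include <iostream>

int main()
{
    PlayerCc::PlayerClient robot("localhost", 6665); // assumed host and port
    PlayerCc::Position2dProxy pos(&robot, 0);        // planner-backed position2d

    pos.SetMotorEnable(true);
    const double gx = 2.0, gy = 1.0;                 // example goal in metres
    pos.GoTo(gx, gy, 0.0);                           // x, y, heading

    for (;;) {
        robot.Read();                                // pull a fresh pose estimate
        double dx = gx - pos.GetXPos();
        double dy = gy - pos.GetYPos();
        if (std::sqrt(dx * dx + dy * dy) < 0.1) {    // within 10 cm of the goal
            std::cout << "Goal reached" << std::endl;
            break;
        }
    }
    return 0;
}
</pre>

In a setup like this the path planning and obstacle avoidance stay on the robot side, inside the Player drivers, so the operator or base station only issues goal points and monitors progress, which matches the semi-autonomous division of labour described above.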
The project comprised two scenarios, indoor and outdoor, with a corresponding robot platform for each. The indoor scenario used a heavily modified ATRV-Jr robot platform, formerly available from iRobot, while the outdoor scenario used a purpose-built robot based on RoboSoft mobile platforms.
The indoor robot was equipped with two laser range finders (one mounted on a tilt unit to provide 3D acquisition), a front sonar array, a pan-tilt-zoom camera, a chemical sensor array and a long-range wireless communication device. In addition to the existing robot processing unit (Ubuntu 8.04), two further processing units were added (one running Windows XP and one running Ubuntu 8.10). Low-level control of the robot and its behaviours (e.g. obstacle avoidance and navigation) remained on the pre-existing processing unit, while data processing for mapping and localisation was placed on the additional Linux unit. The Windows XP unit was used whenever Windows-specific (proprietary) software items had to be deployed. All robot-collected data were forwarded to an ergonomically designed base station.
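As a rough sketch of how the added processing units could poll the on-board sensors, the fragment below reads the laser range finder and the sonar array through the Player client library mentioned in the next paragraph. Host, port and device indexes are assumptions for illustration; the tilt unit, camera, chemical sensors and the CORBA side of the acquisition chain are omitted.

<pre>
// Minimal sketch: poll the laser range finder and sonar array via Player.
// Host, port and device indexes are assumptions; the project's full
// acquisition chain (tilt unit, PTZ camera, chemical sensors, CORBA) is
// not shown.
#include <libplayerc++/playerc++.h>
#include <algorithm>
#include <cstdint>
#include <iostream>

int main()
{
    PlayerCc::PlayerClient robot("localhost", 6665);
    PlayerCc::LaserProxy laser(&robot, 0);   // front laser (index assumed)
    PlayerCc::SonarProxy sonar(&robot, 0);   // front sonar array

    for (int cycle = 0; cycle < 10; ++cycle) {
        robot.Read();                        // block until new data arrive

        // Find the closest return in the current laser scan.
        double min_range = 1e9;
        for (uint32_t i = 0; i < laser.GetCount(); ++i)
            min_range = std::min(min_range, laser.GetRange(i));

        std::cout << "laser points: " << laser.GetCount()
                  << "  closest obstacle: " << min_range << " m"
                  << "  sonar[0]: " << sonar.GetScan(0) << " m" << std::endl;
    }
    return 0;
}
</pre>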
The software platform for the indoor robot (on both the Windows XP and the Ubuntu 8.10 units) was a hybrid that used Player 2.1.2 and CORBA as the robot hardware communication layer, and the IES Mailman as the wireless data-transmission layer (UDP/IP; packing and fragmentation) to the base station. The outdoor robot used Windows XP only, with CORBA accessed through the RMA partner's middleware layer known as CoRoBa.
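The IES Mailman itself is proprietary, but the packing and fragmentation role it played can be illustrated with a small, self-contained sketch: the sender below splits an arbitrary payload (for example a compressed image) into UDP datagrams with a tiny fragment header, which is the kind of job such a transmission layer performs. The header layout, base-station address, port and fragment size are all assumptions and not the Mailman wire format.

<pre>
// Minimal sketch of UDP packing/fragmentation toward a base station.
// This is NOT the IES Mailman protocol; it only illustrates the idea of
// splitting a large payload into datagrams.  Address, port, header layout
// and fragment size are assumptions.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

struct FragmentHeader {          // prepended to every datagram
    uint32_t message_id;         // identifies the original message
    uint16_t fragment_index;     // position of this fragment
    uint16_t fragment_count;     // total number of fragments
};

// Split 'payload' into datagrams of at most 'max_payload' bytes and send
// them to the base station over UDP (no retransmission, like plain UDP/IP).
void send_fragmented(int sock, const sockaddr_in& dest, uint32_t message_id,
                     const std::vector<uint8_t>& payload,
                     std::size_t max_payload = 1400)
{
    const uint16_t count = static_cast<uint16_t>(
        (payload.size() + max_payload - 1) / max_payload);

    for (uint16_t i = 0; i < count; ++i) {
        const std::size_t offset = static_cast<std::size_t>(i) * max_payload;
        const std::size_t len = std::min(max_payload, payload.size() - offset);

        std::vector<uint8_t> datagram(sizeof(FragmentHeader) + len);
        const FragmentHeader hdr = {message_id, i, count};
        std::memcpy(datagram.data(), &hdr, sizeof hdr);
        std::memcpy(datagram.data() + sizeof hdr, payload.data() + offset, len);

        sendto(sock, datagram.data(), datagram.size(), 0,
               reinterpret_cast<const sockaddr*>(&dest), sizeof dest);
    }
}

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dest = {};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9000);                          // assumed port
    inet_pton(AF_INET, "192.168.0.10", &dest.sin_addr);   // assumed base-station IP

    std::vector<uint8_t> image(200000, 0);                // stand-in for a compressed image
    send_fragmented(sock, dest, /*message_id=*/1, image);
    close(sock);
    return 0;
}
</pre>

The receiving side would reassemble fragments by message_id and fragment_index; a production layer such as the Mailman also has to deal with lost and reordered datagrams, which this sketch deliberately ignores.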
Project Partners
Coordinator
- SHU: Sheffield Hallam University, Materials and Engineering Research Institute (MERI, MMVL), Sheffield, United Kingdom
Academic Research Partners
- RMA: Royal Military Academy - Patrimony, Belgium
- DUTH: Democritus University of Thrace - Xanthi, Greece
- UoR: Sapienza University of Rome, Italy
- PIAP: Industrial Research Institute for Automation and Measurements, Poland
Industrial partners
- SAS: Space Applications Services, Belgium
- IES: Intelligence for Environment and Security SRL - IES Solutions SRL, Italy
- SyFire: South Yorkshire Fire and Rescue Service, United Kingdom
- GA: Galileo Avionica S.p.A., Italy
Project outputs and dependencies
Selected Public Reports
- coming soon
Videos and demonstrations
- coming soon
Selected Publications
- L. Nalpantidis and A. Gasteratos, "Stereo Vision for Robotic Applications in the Presence of Non-ideal Lighting Conditions", Image and Vision Computing (2009), doi:10.1016/j.imavis.2009.11.011, in press.
- L. Nalpantidis, G.C. Sirakoulis, and A. Gasteratos. Review of Stereo Vision Algorithms: from Software to Hardware. International Journal of Optomechatronics, 2:435-462, 2008.
- A. Gasteratos, "Active Camera Stabilization with a Fuzzy-grey Controller", European Journal of Mechanical and Environmental Engineering, Vol 2009-2, pp 18-20, 2009.
- J. Bedkowski and A. Maslowski (2009). An NVIDIA CUDA application in the Cognitive Supervision and Control of a mobile robot system. IARP/EURON Workshop on Robots for Risky Interventions and Environmental Surveillance (Brussels, Belgium): 1-12
- A. Carbone, A. Finzi, A. Orlandini and F. Pirri (2008). Model-based control architecture for attentive robots in rescue scenarios. Autonomous Robots 24: 87-120
- L. Alboul, B. Amavasai, G. Chliveros and J. Penders (2007). Mobile robots for information gathering in a large-scale fire incident. IEEE 6th (SMC UK-RI) Conference on Cybernetic Systems (Dublin, Ireland): 122-127
Software that has proven useful
- System architecture/integration
- Mapping and Localisation
See Also
- Project publicity (UK)
- Other projects
- Source of Inspiration and Robot platforms
- Journal of Field Robotics
- Earthquake Rescue Robots
- CHRYSOR
- PIAP Military INSPECTOR
- Mobile Robots Inc. Seekur Jr [1][2]
- Other related links
- EU Cordis project entry
- NASA Three-Robot System for Traversing Steep Slopes
- BBC news: robotic firefighting
- Channel 4 News: Rise of the Machines?
- Robot mining trucks
- Robots can jump...
- DEFACTO fire-training software
- NIST Christmas tree fire video
- NIST's FDS-SMV fire dynamics simulator (also see comparison of real and simulated fire)