VIEW-FINDER

{|align="right"
|-
|[[Image:Robots 2 small.jpg|thumb|320px|Trials of indoor / outdoor robots at the Royal Military Academy, Belgium (Dec. 2009): on the foreground the Robudem outdoor robot of partner RMA can be seen and in the background (far-end) the ATRV-Jr iRobot indoor platform (close-ups in next pictures) of partner PIAP can also be seen entering a hangar.]]
 
|-
|[[Image:SmallViewFinder1.JPG|thumb|320px|Indoor scenario: the ATRV-Jr iRobot platform of partner PIAP with integrated sensors from partners UoR, SHU and IES. The final system employs two processing units (on-board robot PC and dual-core laptop) managing the following sensory information: sonar array, monocular images from a pan-tilt camera, a tilting laser range finder, odometry and chemical readings, in semi-autonomous modes (navigation and remote operation). At any given time there were at least 4 streams of wirelessly transmitted data.]]
 
|-
|[[Image:Vf steam.jpeg|thumb|320px|Some of the members of the VF team: pictured here are those present at the final demonstrations (Jan. 2010) and review at SyFire training centre.]]
 
|-
|[[Image:VFindoortrial.jpg|thumb|220px| ATRV-Jr at review trials (Jan 2010); successful acquisition and communication tests with at least four different streams of data]]
 
|-
| [[Image:Vf basestation.jpeg|thumb|220px]] [[Image:Basestation.jpg|thumb|220px|The View Finder base station as presented at the final review (Jan 2010).]]  
 
|-
|[[Image:Small ViewFinder29.JPG|thumb|220px| Janusz (PIAP) checking remote operation of the robot and image compression / acquisition (Oct 2009).]]
|-
|[[Image:VF added.JPG|thumb|220px| Lazaros (DUTH), Giovanni (IES), Andrea (UoR) and Janusz (PIAP): in preparation for first integrated test launch (Oct 2009).]]
|-
|[[Image:Boss.jpg|thumb|220px| Project coordinator ''Dr J. Penders'' (SHU) and Neil Bough (SyFire) with the indoor robot platform (Oct 2009).]]
|-
|[[Image:Small ViewFinder33.JPG|thumb|220px| George (SHU) and Andrea (UoR) on SLAM, Laser and tilt processes: acquisition and communication.]]
 
|}
=The VIEW-FINDER project=
The View-Finder project was a successfully completed EU-FP6 project (grant number [http://cordis.europa.eu/fetch?CALLER=PROJ_ICT&ACTION=D&CAT=PROJ&RCN=80499 045541]) in Advanced Robotics (IST-2005-2.6.1), executed from 2006-12-01 to 2009-11-30. The final demonstrations took place in January 2010 at [http://www.syfire.gov.uk/432.asp South Yorkshire Fire and Rescue training] grounds (Sheffield, UK). The final project report is available [http://shura.shu.ac.uk/2171/2/VFFinal_ReportJune.pdf here].
==Summary==

In the event of an emergency, after a fire or other crisis event has occurred, a necessary but time-consuming prerequisite, which can delay the actual rescue operation, is to establish whether the ground can be entered safely by human emergency workers. This was the context in which the project was initiated.
* '''[[MMVLWiki:General_disclaimer|Disclaimer]]''': This page predominantly covers the indoor scenario, although we have tried to draw most of the information provided herein from the project as a whole. To the best of our knowledge the information provided herein is correct at the time of publication. However, the views or claims expressed in external links provided herein are not directly endorsed by Sheffield Hallam University. If you find any mistakes or omissions please contact MMVL (details at the main MMVL wiki-page).
* '''Advisory''': You are advised to visit the '''official''' [https://www.view-finder-project.eu/project-results/results VIEW-FINDER] page: "Vision and Chemi-resistor Equipped Web-connected Finding Robots".
* '''Pictures''': all pictures on this page were taken by members of the MMVL and the consortium as a whole. Pictures from members of the consortium may carry different copyright notices from those provided by the MMVL. External sources of project pictures are available either [https://www.view-finder-project.eu/project-results/results here] or [http://www.dis.uniroma1.it/~carbone/pages/Pictures/Pages/Viewfinder_2007-2011.html#grid here].
==Notifications and Announcements==
* '''Final project report - 15 July, 2010''': the final project report, amended and updated, is available [http://shura.shu.ac.uk/2171/2/VFFinal_ReportJune.pdf here].

* '''Successful completion - 05 July, 2010''': According to the European Union panel's Review Report, the project contribution is 'sufficient' and the overall testing, experimentation and results dissemination are 'good'. The work on depth recovery and point-cloud rendering was deemed 'promising' and 'novel', though 'ongoing'. Minor changes to the final report were recommended.

* '''Successful demonstrations - 19th Jan 2010''': The project demonstrations were deemed 'sufficient' and overall 'successful'. The EU review panel commended the project's integration efforts and the presented solution.
* '''Final dissemination event''': IARP workshop RISE 2010 at Sheffield Hallam University on 20-21 January 2010.
==General Description==
  
{| align="left"
|-
|}
The [https://view-finder-project.eu/ VIEW-FINDER] project was a European-Union Framework-VI funded programme ([http://cordis.europa.eu/fetch?CALLER=PROJ_IST&ACTION=D&RCN=80499 project no: 045541]).
 
[[VIEW-FINDER]] was a field (mobile) robotics project (European-Union Framework-VI, project number 045541), consisting of 9 European partners, that investigated the use of semi-autonomous mobile robot platforms to establish ground safety in the aftermath of fire incidents. The project was coordinated by the [http://www.shu.ac.uk/research/meri/ Materials and Engineering Research Institute] at Sheffield Hallam University and officially ended on 30th November 2009; the final reports and demonstrations took place on 18th January 2010, and the review of 19th January 2010 judged the project '''successful'''.
  
The objective of the VIEW-FINDER project was to develop robots whose primary task is to gather data that assist human interveners in taking informed decisions prior to entering an indoor or outdoor area. The primary aim was thus to gather data (visual, environmental and chemical) to assist fire rescue personnel after a disaster has occurred. A base station combined the gathered information with information retrieved from large-scale GMES information bases. The issues addressed related to: 2.5D map building, localisation and reconstruction; interfacing local command information with external sources; autonomous robot navigation; and human-robot interfaces (base station).
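The 2.5D representation mentioned above can be pictured as a 2D grid that additionally stores a height per cell. The project's own mapping code is not reproduced here; the following sketch (the cell size and the keep-the-tallest-return rule are illustrative assumptions, not the project's data structures) shows the basic idea of collapsing 3D range returns onto such a grid:

```python
from collections import defaultdict

def build_25d_map(points, cell=0.5):
    """Quantise 3D points (x, y, z) onto a 2D grid, keeping the maximum
    height seen in each cell -- a minimal illustration of a 2.5D
    elevation map. Cell size and max-height rule are illustrative only."""
    grid = defaultdict(float)
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        grid[key] = max(grid[key], z)  # tallest return wins in each cell
    return dict(grid)

# Three toy returns: two fall in cell (0, 0), one in cell (3, 0).
pts = [(0.1, 0.2, 0.0), (0.3, 0.1, 1.2), (1.6, 0.2, 0.4)]
m = build_25d_map(pts, cell=0.5)
```

A real system would fuse many scans using the robot's localisation estimate before quantising; the sketch only shows the grid step itself.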
  
Partners PIAP, UoR, SHU and IES were predominantly involved in the indoor scenario, and RMA and DUTH predominantly in the outdoor scenario, with SAS and SyFire involved in both.
  
==System description==
The developed VIEW-FINDER system was a semi-autonomous system; the individual robot-sensors operate autonomously within the limits of the task assigned to them. That is, they autonomously navigate between two assigned points by planning their path and avoiding obstacles whilst inspecting the area. The remote central operations control unit assigns tasks to the robots and monitors their execution, with the ability to intervene at any given time. Moreover, central operations control has the means to renew task assignments or provide further details on the tasks of the ground robot. System-human interactions at the central operations control were facilitated through multi-modal interfaces, in which graphical displays play an important but not exclusive role.
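Navigating between two assigned points while avoiding obstacles is classically posed as a search over an occupancy grid. The toy breadth-first planner below is not the project's actual navigation stack (which is not documented here); the grid layout and cell coding are illustrative assumptions:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first shortest path on an occupancy grid (1 = obstacle,
    0 = free); a stand-in sketch for planning between two assigned
    points. Returns a list of (row, col) cells or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:                      # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                queue.append((nr, nc))
    return None  # goal unreachable

# A wall in column 1 forces a detour around its open end.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (0, 2))
```

Field systems typically replace breadth-first search with A* on a costmap and replan as new obstacles are sensed; the structure of the search is the same.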

Although the robots had the ability to operate autonomously, human operators could monitor the robots' processes and send high-level task requests, as well as low-level commands, through the human-computer interface to some nodes of the ground system. The human-computer interface (base station) had to ensure that a human supervisor and human interveners on the ground are provided with a reduced yet relevant overview of the area under investigation, including the robots and human rescue workers therein.

The project comprised two scenarios, indoor and outdoor, with a corresponding robot platform for each. The indoor scenario used a heavily modified ATRV-Jr robot platform, formerly available from [http://www.irobot.com/uk/government_industrial.cfm iRobot], and a purpose-built outdoor robot based on the [http://www.robosoft.com/eng/categorie.php?id=1009 RoboSoft] mobile platforms.

The indoor robot was equipped with two laser range finders (one attached to a tilt unit for 3D acquisition), a front sonar array, a pan-tilt-zoom camera, a chemical sensor array and a long-range wireless communication device. In addition to the robot's existing processing unit (Ubuntu 8.04), two further processing units were added (one running WinXP and one Ubuntu 8.10). Low-level control of the robot and its behaviours (e.g. obstacle avoidance and navigation) was achieved via the pre-existing processing unit, whilst data processing for mapping and localisation was placed on the additional Linux unit. The WinXP unit was used whenever Windows-specific (proprietary) software items had to be deployed. All robot-collected data were forwarded to an ergonomically designed base station.

The software platform for the robots was as follows:

* the indoor robot (both WinXP and Ubuntu 8.10) used a hybrid of Player 2.1.2 and CORBA as the robot hardware communication layer, and IES Mailman as the wireless data transmission layer (UDP/IP; packing, fragmentation) to the base station

* the outdoor robot used solely WinXP, with CORBA accessed through the RMA partner's modification layer known as CoRoBA

In both cases, great effort was dedicated to the integration of hardware and software modules.
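The Mailman layer's wire format is proprietary, so the following is only a generic sketch of the packing/fragmentation technique the bullets above describe: a small header (these particular fields are hypothetical) lets payloads larger than one UDP datagram be split, sent, and reassembled even when fragments arrive out of order:

```python
import struct

HEADER = struct.Struct("!HHH")  # message id, fragment index, fragment count

def fragment(msg_id, payload, mtu=1400):
    """Split a payload into UDP-sized datagrams, each prefixed with a
    6-byte header. Generic sketch only: the real Mailman wire format
    is proprietary and these header fields are hypothetical."""
    chunk = mtu - HEADER.size
    parts = [payload[i:i + chunk] for i in range(0, len(payload), chunk)] or [b""]
    return [HEADER.pack(msg_id, i, len(parts)) + p for i, p in enumerate(parts)]

def reassemble(datagrams):
    """Rebuild the original payload; tolerates out-of-order arrival
    and reports loss by returning None."""
    frags, total = {}, 0
    for dgram in datagrams:
        msg_id, idx, total = HEADER.unpack(dgram[:HEADER.size])
        frags[idx] = dgram[HEADER.size:]
    if len(frags) != total:
        return None  # at least one fragment missing
    return b"".join(frags[i] for i in range(total))

data = bytes(3000)                       # payload larger than one datagram
dgrams = fragment(7, data, mtu=1400)     # split into 1400-byte datagrams
restored = reassemble(reversed(dgrams))  # arrival order does not matter
```

A production layer would add acknowledgements or retransmission on top of this, since plain UDP gives no delivery guarantee.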
==Project Partners==

'''Coordinator'''
* SHU: Sheffield Hallam University, Materials and Engineering Research Institute (MERI, [[MMVL]]), Sheffield, United Kingdom  
** Work predominantly in the indoor scenario (management, integration, mapping, chemical sensors)

'''Academic Research Partners'''
* RMA: [http://www.rma.ac.be/ Royal Military Academy] - Patrimony, Belgium
** Work predominantly on the outdoor scenario (robot platform, architecture, navigation, localisation)
* DUTH: [http://robotics.pme.duth.gr/ Democritus University of Thrace] - Xanthi, Greece
** Work predominantly in the outdoor scenario (stereo vision, compression, navigation)
* UoR: [http://www.dis.uniroma1.it/~alcor/ Sapienza University of Rome], Italy
** Work predominantly in the indoor scenario (localisation and mapping, communication, integration)
* PIAP: [http://www.antiterrorism.eu/ Industrial Research Institute for Automation and Measurements], Poland
** Work predominantly in the indoor scenario (robot platform, architecture, integration, navigation)

'''Industrial partners'''
* SAS: [http://www.spaceapplications.com/ Space Applications Services], Belgium
** Work on both indoor/outdoor scenarios (management, integration, base station)
* IES: [http://www.i4es.it/ Intelligence for Environment and Security SRL] - IES Solutions SRL, Italy
** Work predominantly in the indoor scenario (communications protocol software and wireless transmission hardware)
* SyFire: [http://www.syfire.gov.uk/ South Yorkshire Fire and Rescue Service], United Kingdom
** Work on both indoor/outdoor scenarios (logistics)
* GA: Galileo Avionica S.p.A., Italy
** Work predominantly in the indoor scenario (management)
=Project outputs and dependencies=
* Project deliverables: the public deliverables can be accessed [https://www.view-finder-project.eu/project-results/list-of-public-deliverables here].
* [http://www.woodheadpublishing.com/en/book.aspx?bookID=2041 Using robots in hazardous environments] [[Image:vf-g-book.gif|40px|Book]]
==Selected Publications==

* L. Alboul and G. Chliveros (2010). A System for Reconstruction from Point Clouds in 3D: Simplification and Mesh Representation, Proceedings of the International Conference on Control, Automation, Robotics and Vision (ICARCV 2010), Dec 2010, Singapore.

* G. De Cubber, D. Doroftei, S.A. Berrabah and H. Sahli (2010). Combining Dense Structure from Motion and Visual SLAM in a Behavior-based Robot Control Architecture, International Journal of Advanced Robotic Systems 6(1): 11-23.

* L. Nalpantidis and A. Gasteratos (2010). Stereo vision for robotic applications in the presence of non-ideal lighting conditions, Image and Vision Computing 28: 940-951.

* L. Nalpantidis and A. Gasteratos (2010). Biologically and Psychophysically Inspired Adaptive Support Weights Algorithm for Stereo Correspondence, Robotics and Autonomous Systems 58: 457-464.

* G. Echeverria and L. Alboul (2009). Shape-preserving mesh simplification based on curvature measures from the Polyhedral Gauss Map, International Journal for Computational Vision and Biomechanics 1(2): 55-68.

* J. Bedkowski and A. Maslowski (2009). NVIDIA CUDA Application in the Cognitive Supervision and Control of the Multi-robot System, in: Emerging Sensor and Robotics Technologies for Risky Interventions and Humanitarian De-mining (book series: Mobile Service Robotics, Woodhead Publishing).

* U. Delprato, M. Cristaldi and G. Tusa (2009). A Light-weight Communication Protocol for Tele-operated Robots in Risky Emergency Operations, in: Emerging Sensor and Robotics Technologies for Risky Interventions and Humanitarian De-mining (book series: Mobile Service Robotics, Woodhead Publishing).

* G. De Cubber (2008). Dense 3D structure and motion estimation as an aid for robot navigation, Journal of Automation, Mobile Robotics & Intelligent Systems 2(4): 14-18.

* D. Doroftei, E. Colon and G. De Cubber (2008). A behaviour-based control and software architecture for the visually guided Robudem outdoor mobile robot, Journal of Automation, Mobile Robotics & Intelligent Systems 2(4): 19-24.

* S. A. Berrabah and E. Colon (2008). Vision-based mobile robot navigation, Journal of Automation, Mobile Robotics & Intelligent Systems 2(4): 7-13.

* A. Carbone, A. Finzi, A. Orlandini and F. Pirri (2008). Model-based control architecture for attentive robots in rescue scenarios, Autonomous Robots 24: 87-120.

* L. Alboul, B. Amavasai, G. Chliveros and J. Penders (2007). Mobile robots for information gathering in a large-scale fire incident, IEEE 6th (SMC UK-RI) Conference on Cybernetic Systems (Dublin, Ireland): 122-127.

* A. Carbone, D. Ciacelli, A. Finzi and F. Pirri (2007). Autonomous Attentive Exploration in Search and Rescue Scenarios, in: Attention in Cognitive Systems: theories and systems from an interdisciplinary viewpoint, pp. 431-446. <!-- http://dx.doi.org/10.1007/978-3-540-77343-6_28 -->

<!--
==Selected Public Reports==

* coming soon: final report and more (selected) deliverables
** [http://vision.eng.shu.ac.uk/penders/Viewfinder/Viewfinder_Deliverables_All_10012010/D3%203%20-%20Report%20on%20Communication%20Devices.pdf Communication capabilities]
** [http://vision.eng.shu.ac.uk/penders/Viewfinder/Viewfinder_Deliverables_All_10012010/D6%206%20and%20D6%207.pdf 2D-SLAM: upgrading and evaluation]
** [http://vision.eng.shu.ac.uk/penders/Viewfinder/Viewfinder_Deliverables_All_10012010/D6%205%20text.pdf Mapping description]
http://vision.eng.shu.ac.uk/penders/Viewfinder/
-->
==Software that has proven useful==

* System architecture/integration
** [http://viewfinder.i4es.it/index.php/Main_Page IES Wireless Solution Mailman (PWD protected)]
** [http://www.rawmaterialsoftware.com/juce/ JUCE: Framework for writing user interfaces]
** [http://www.swig.org/ SWIG: The Simplified Wrapper and Interface Generator]

<!-- add here more from Robotnik -->

* Mapping / Localisation and Reconstruction
** [http://www.openslam.org A plethora of resources on the Simultaneous Localisation and Mapping Problem]
<!-- [http://www.jstatsoft.org/ Journal of Statistical Software] -->
<!-- ** [http://kos.informatik.uni-osnabrueck.de/3Dscans/ The Robotic 3D Scan Repository] -->
** [http://babel.isa.uma.es/mrpt/index.php/Main_Page Mobile Robot (C++) Programming Toolkit]
** [http://www.cgal.org/ Computational Geometry Algorithms Library]
** [http://opencv.willowgarage.com/wiki/ OpenCV: Computer Vision Library at Willow Garage]
** [http://wiki.octave.org/wiki.pl?CodaStandalone GNU Octave: integrating m-functions in C++]
** [http://www.imagemagick.org/Magick++/Documentation.html Magick++ Library: Image manipulation and compression]

==Videos and demonstrations==
  
{| style="background-color:;" border="2"
         pluginspage="http://www.macromedia.com/go/getflashplayer"/>
       <div class="thumbcaption" >
         A prototype of the ViewFinder SLAM procedure based on an SIR-RB particle filter implementation: ladar and odometry data collected via the <a href="http://playerstage.sourceforge.net/">Player</a> software platform.
       </div>
     </div>
</html>

|<html> <div style="width:320px;"> <object width="320" height="242"><param name="movie" value="http://www.youtube.com/v/jtI6uJfv0W0&color1=0xb1b1b1&color2=0xcfcfcf&hl=en&feature=player_embedded&fs=1"></param><param name="allowFullScreen" value="true"></param><param name="allowScriptAccess" value="always"></param><embed src="http://www.youtube.com/v/jtI6uJfv0W0&color1=0xb1b1b1&color2=0xcfcfcf&hl=en&feature=player_embedded&fs=1" type="application/x-shockwave-flash" allowfullscreen="true" allowScriptAccess="always" width="320" height="242"></embed></object>
<div class="thumbcaption" >
A video demonstration of the base-station controlling the ATRV-Jr robot through the Human-Machine interface. Different views and control means are shown.  
</div>
</div>
</html>
|-
|coming soon
|<html>
      <div style="width:320px;">
          <object style="height: 242px; width: 320px"><param name="movie" value="http://www.youtube.com/v/YVj7_yKCznY"><param name="allowFullScreen" value="true"><param name="allowScriptAccess" value="always"><embed src="http://www.youtube.com/v/YVj7_yKCznY" type="application/x-shockwave-flash" allowfullscreen="true" allowScriptAccess="always" width="320" height="242"></object>
<div class="thumbcaption" >
        Final demonstration for the outdoor scenario: general views and description of the full rescue scenario. A glimpse of the indoor ATRV-Jr robot (PIAP) can also be seen.
      </div>
    </div>
</html>

|-
         pluginspage="http://www.macromedia.com/go/getflashplayer"/>
       <div class="thumbcaption" >
         Nitrogen gas evolution in a room-fire scenario (simulated with NIST's <a href="http://www.fire.nist.gov/fds/">FDS-SMV</a>); only a vertical and a horizontal plane are shown, with the start of the fire indicated by a yellow patch.
      </div>
     </div>
</html>

|<html>
      <div style="width:320px;">
          <object width="320" height="242"><param name="movie" value="http://www.youtube.com/v/WKd1_14OEbw?fs=1&amp;hl=en_GB"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/WKd1_14OEbw?fs=1&amp;hl=en_GB" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="320" height="242"></embed></object>
<div class="thumbcaption" >
         South Yorkshire Fire & Rescue: the Realistic Fire Training Building (RFTB), a new state of the art training facility located at SYFR's Training and Development Centre.

       </div>
     </div>
</html>
|}
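The SLAM prototype in the first video above is described as an SIR-RB (sampling importance resampling, Rao-Blackwellised) particle filter. Its actual implementation is not reproduced here; the sketch below shows only the generic systematic-resampling step that SIR-family filters share, with illustrative particles and weights:

```python
import random

def systematic_resample(particles, weights, rng=random):
    """Systematic resampling, the resampling step common to SIR
    particle filters: draw one uniform offset, then pick particles at
    evenly spaced positions along the cumulative weight distribution.
    Generic textbook sketch, not the project's SLAM code."""
    n = len(particles)
    step = sum(weights) / n
    u = rng.uniform(0.0, step)      # single random offset
    out, cum, i = [], weights[0], 0
    for k in range(n):
        target = u + k * step       # evenly spaced sampling positions
        while cum < target:         # advance to the particle covering target
            i += 1
            cum += weights[i]
        out.append(particles[i])
    return out

# A heavily weighted particle ("c") should dominate the resampled set.
parts = ["a", "b", "c", "d"]
w = [0.1, 0.1, 0.7, 0.1]
sample = systematic_resample(parts, w)
```

Compared with naive multinomial resampling, the single shared offset lowers the variance of the particle counts, which is why it is the usual choice in SLAM filters.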

<!-- TODO: add the OpenGL material -->

=See Also=

* Project publicity (UK)
** [http://www.sciencemuseum.org.uk/antenna/firefightingrobots/ GUARDIANS and VIEW-FINDER exhibit in London science museum]
** [http://www.thestar.co.uk/news/Robot-feeling-the-heat.5502210.jp VIEW-FINDER project on local newspaper]
** [http://vision.eng.shu.ac.uk/mmvlwiki/index.php/Image:ProEngineering.jpg Article on Professional Engineering]

* Other projects
** [[GUARDIANS]]
** [[DHRS-CIM]]
** [http://www.wedesoft.demon.co.uk/hornetseye-api/files/HornetsEye-txt.html HornetsEye: Computer Vision for the Robotic Age]
** [http://www.ros.org/wiki/ ROS: Robot Operating System by Willow Garage]

* Robot platforms
** [http://news.bbc.co.uk/1/hi/sci/tech/8069435.stm Earthquake Rescue Robots]
** [http://www.robowatch.de/index.php?id=303 '''CHRYSOR''']
** [http://www.antiterrorism.eu/remote_operated_vehicle.php PIAP Military '''INSPECTOR''']
** Mobile Robots Inc. '''Seekur Jr''' [http://www.youtube.com/watch?v=e-q9kU6wN9c&feature=player_embedded][http://www.youtube.com/watch?v=GNuPPkRXHxE]
** Robotnik [http://www.robotnik.es/en/products/mobile-robots/guardian Guardian rover robot] platform
** RoboSoft [http://www.robosoft.com/eng/popup_video.php?video=1016 RobuROC-4]

* Other related links
** [http://cordis.europa.eu/search/index.cfm?fuseaction=proj.document&PJ_RCN=9059633 EU Cordis project entry]
** [http://www.techbriefs.com/component/content/article/5779 NASA Three-Robot System for Traversing Steep Slopes]
** [http://news.bbc.co.uk/1/hi/technology/8172739.stm BBC news: robotic firefighting]
** [http://www.channel4.com/news/articles/science_technology/rise+of+the+machines/3286757 Channel 4 News: Rise of the Machines?]
** [http://dsc.discovery.com/news/2008/11/06/monster-robot-truck.html Robot mining trucks]
** [http://www.examiner.com/x-8134-SF-Gadgets-Examiner~y2009m9d15-Military-robot-can-jump-over-25-foot-walls Robots can jump...]
** [http://viterbi.usc.edu/news/news/2007/computer-science-thesis.htm DEFACTO fire-training software]
** [http://www.youtube.com/watch?v=2hleYtLNIOw NIST Christmas tree fire video]
** NIST's [http://www.fire.nist.gov/fds/ FDS-SMV fire dynamics simulator] (also see [http://www.fire.nist.gov/fds/comparison_examples.html comparison of real and simulated fire])
  
 
[[Category:Projects]]

[[Category:Viewfinder|*VIEW-FINDER]]

Latest revision as of 19:40, 3 November 2010

Trials of indoor / outdoor robots at the Royal Military Academy, Belgium (Dec. 2009): in the foreground the Robudem outdoor robot of partner RMA can be seen, and in the background (far end) the ATRV-Jr iRobot indoor platform of partner PIAP (close-ups in the next pictures) can be seen entering a hangar.
Indoor scenario: the ATRV-Jr iRobot platform of partner PIAP with integrated sensors from partners UoR, SHU and IES. The final system employed two processing units (on-board robot PC and dual-core laptop) managing the following sensory information: sonar array, monocular image and pan-tilt camera, tilt and laser range finder, odometry and chemical readings, in semi-autonomous modes (navigation and remote operation). At any given time there were at least 4 streams of wirelessly transmitted data.
Some of the members of the VF team: pictured here are those present at the final demonstrations (Jan. 2010) and review at SyFire training centre.
ATRV-Jr at review trials (Jan 2010); successful acquisition and communication tests with at least four different streams of data
The View Finder base station as presented at the final review (Jan 2010).
Janusz (PIAP) checking remote operation of the robot and image compression / acquisition (Oct 2009).
Lazaros (DUTH), Giovanni (IES), Andrea (UoR) and Janusz (PIAP): in preparation for first integrated test launch (Oct 2009).
Project coordinator Dr J. Penders(SHU) and Neil Bough (SyFire) with indoor robot platform (Oct 2009).
George (SHU) and Andrea (UoR) on SLAM, Laser and tilt processes: acquisition and communication.




The VIEW-FINDER project

The VIEW-FINDER project was a successfully completed EU FP6 project (grant number 045541) in Advanced Robotics (IST-2005-2.6.1), running from 2006-12-01 to 2009-11-30. The final demonstrations took place in January 2010 at the South Yorkshire Fire and Rescue training grounds (Sheffield, UK). The final project report is available here.

Summary

In the aftermath of a fire or other crisis event, a necessary but time-consuming prerequisite, which can delay the actual rescue operation, is to establish whether the ground can be accessed and entered safely by human emergency workers. This was the context in which the project was initiated.

  • Disclaimer: This page predominantly presents the indoor scenario, although we have tried to draw most of the information provided herein from the project as a whole. To the best of our knowledge, the information provided herein is correct at the time of publication. However, the views or claims expressed in external links provided herein are not directly endorsed by Sheffield Hallam University. If you find any mistakes or omissions, please contact MMVL (details on the main MMVL wiki page).
  • Advisory: You are advised to visit the official VIEW-FINDER page: "Vision and Chemi-resistor Equipped Web-connected Finding Robots".
  • Pictures: all pictures on this page were taken by members of the MMVL and the consortium as a whole. Pictures from members of the consortium may carry different copyright notices than those provided by the MMVL. External sources of project pictures are available either here or here.

Notifications and Announcements

  • Final project report - 15 July 2010: the final project report, amended and updated, is available here.
  • Successful completion - 5 July 2010: according to the European Union panel's Review Report, the project's contribution is 'sufficient' and the overall testing, experimentation and results dissemination are 'good'. The works on depth recovery and point-cloud rendering were deemed 'promising' and 'novel', though 'ongoing'. Minor changes to the final report were recommended.
  • Successful demonstrations - 19 Jan 2010: the project demonstrations were deemed 'sufficient' and overall 'successful'. The EU review panel commended the project's integration efforts and the presented solution.
  • Final dissemination event: IARP workshop RISE 2010 at Sheffield Hallam University, 20-21 January 2010.


General Description


VIEW-FINDER was a field (mobile) robotics project (European Union, Framework VI: Project Number 045541), consisting of 9 European partners, that investigated the use of semi-autonomous mobile robot platforms to establish ground safety in the aftermath of fire incidents. The project was coordinated by the Materials and Engineering Research Institute at Sheffield Hallam University and officially ended on 30th November 2009; the final review, reports and demonstrations took place on 18th Jan 2010. The review that took place on 19th Jan 2010 judged the project 'successful'.

The objective of the VIEW-FINDER project was to develop robots whose primary task is to gather data that assist human interveners in making informed decisions prior to entering either an indoor or outdoor area. The primary aim was thus to gather data (visual, environmental and chemical) to assist fire rescue personnel after a disaster has occurred. A base station combined the gathered information with information retrieved from the large-scale GMES information bases. The issues addressed related to: 2.5D map building, localisation and reconstruction; interfacing local command information with external sources; autonomous robot navigation; and human-robot interfaces (base station).
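The project documentation here does not detail the internal map representation, but a 2.5D map can be understood as a planar grid that stores one elevation value per cell, accumulated from 3D sensor points. The following is a purely illustrative sketch (names and cell size are our own assumptions, not the project's code):

```python
import math

def build_height_map(points, cell_size=0.1):
    """Accumulate 3D points (x, y, z) into a 2.5D height map:
    each grid cell keeps the maximum height observed inside it."""
    grid = {}
    for x, y, z in points:
        cell = (int(math.floor(x / cell_size)), int(math.floor(y / cell_size)))
        if cell not in grid or z > grid[cell]:
            grid[cell] = z
    return grid

# Two points fall into the same 10 cm cell; the higher one wins.
grid = build_height_map([(0.02, 0.03, 0.1), (0.04, 0.08, 0.5), (1.0, 1.0, 0.2)])
print(grid[(0, 0)])  # 0.5
```

Such a representation is far cheaper than a full 3D voxel map while still letting a planner distinguish traversable ground from obstacles by height.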

Partners PIAP, UoR, SHU and IES were predominantly involved in the indoor scenario, and RMA and DUTH in the outdoor scenario, with SAS and SyFire being involved in both.


System description

The developed VIEW-FINDER system was semi-autonomous; the individual robot-sensors operate autonomously within the limits of the task assigned to them. That is, they autonomously navigate between two assigned points by planning their path and avoiding obstacles whilst inspecting the area. Meanwhile, the remote central operations control unit assigns tasks to the robots and monitors their execution, with the ability to intervene at any given time. In addition, central operations control has the means to renew task assignments or provide further details on the tasks of the ground robot. System-human interactions at central operations control were facilitated through multi-modal interfaces, in which graphical displays play an important but not exclusive role.
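Planning a path between two assigned points while avoiding obstacles, as described above, is commonly done with a grid search such as A*. The sketch below is a minimal, hypothetical planner on a 4-connected occupancy grid, not the project's actual navigation code:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
    Returns the cell path from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]            # (priority, cell)
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:                # reconstruct by walking back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost[cur] + 1
                if (nr, nc) not in cost or new_cost < cost[(nr, nc)]:
                    cost[(nr, nc)] = new_cost
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan heuristic
                    heapq.heappush(frontier, (new_cost + h, (nr, nc)))
                    came_from[(nr, nc)] = cur
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

The only route around the obstacle row is found; in practice the grid would come from the robot's map and cells near obstacles would carry extra cost.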

Although the robots had the ability to operate autonomously, human operators monitored the robots' processes and sent high-level task requests, as well as low-level commands, through the human-computer interface to some nodes of the ground system. The human-computer interface (base station) had to ensure that a human supervisor and a human intervener on the ground are provided with a reduced yet relevant overview of the area under investigation, including the robots and human rescue workers therein.

The project comprised two scenarios, indoor and outdoor, with a corresponding robot platform for each. The indoor scenario used a heavily modified ATRV-Jr robot platform (formerly available from iRobot), while the outdoor scenario used a purpose-built robot based on the RoboSoft mobile platforms.

The indoor robot was equipped with two laser range finders (one of which was attached to a tilt unit to provide 3D acquisition), a front sonar array, a pan-tilt-zoom camera, a chemical sensor array and a long-range wireless communication device. Apart from the existing robot processing unit (Ubuntu 8.04), another two processing units were added (one with WinXP and one with Ubuntu 8.10). The low-level control of the robot and its behaviour (e.g. obstacle avoidance and navigation) was achieved via the pre-existing processing unit, whilst the data processing for mapping and localisation was placed on the additional Linux unit. The WinXP unit was used whenever Windows-specific (proprietary) software items had to be deployed. All robot-collected data were forwarded to an ergonomically designed base station.
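To illustrate how a tilt-mounted 2D laser yields 3D acquisition: each planar scan, taken at a known tilt angle, can be rotated into the robot frame. The geometry below is a simplified sketch (mounting offsets and calibration omitted; all names and the sensor height are our own assumptions):

```python
import math

def scan_to_points(ranges, angle_min, angle_step, tilt, sensor_height=0.5):
    """Project a planar laser scan taken at a given tilt angle (radians,
    positive = pitched down) into 3D points in the robot frame."""
    points = []
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_step        # bearing within the scan plane
        x_s, y_s = r * math.cos(a), r * math.sin(a)
        # rotate the scan plane about the lateral (y) axis by the tilt angle
        x = x_s * math.cos(tilt)
        y = y_s
        z = sensor_height - x_s * math.sin(tilt)
        points.append((x, y, z))
    return points

# A beam straight ahead at 2 m with the laser pitched 30 degrees down
# lands at x ~ 1.73 m, half a metre below the floor-level origin.
pts = scan_to_points([2.0], angle_min=0.0, angle_step=math.radians(0.5),
                     tilt=math.radians(30), sensor_height=0.5)
```

Sweeping the tilt unit through a range of angles and concatenating the projected scans produces the point cloud used for 3D reconstruction.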

The software platform for the robots was as follows:

  • the indoor robot (both WinXP and Ubuntu 8.10) was a hybrid that used Player 2.1.2 and CORBA as the robot hardware communication layer, and IES Mailman as the wireless data transmission layer (UDP/IP; packing, fragmentation) to the base station
  • the outdoor robot used solely WinXP and CORBA, through the RMA partner's modification layer known as CoRoBA

In both cases, great effort was dedicated to the integration of hardware and software modules.
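The Mailman wire format itself is not documented here; the sketch below merely illustrates the generic packing/fragmentation idea mentioned above, splitting a payload larger than one datagram into numbered fragments and reassembling them on arrival (the header layout and all names are hypothetical, not IES Mailman's):

```python
import struct

MTU = 1400  # payload bytes per datagram (illustrative value)

def fragment(msg_id, payload, mtu=MTU):
    """Split a payload into fragments, each prefixed with a 6-byte header:
    message id, fragment index, total fragment count (network byte order)."""
    chunks = [payload[i:i + mtu] for i in range(0, len(payload), mtu)] or [b""]
    total = len(chunks)
    return [struct.pack("!HHH", msg_id, idx, total) + c
            for idx, c in enumerate(chunks)]

def reassemble(packets):
    """Reassemble fragments (possibly arriving out of order) into the
    original payload; returns None while fragments are still missing."""
    frags, total = {}, None
    for pkt in packets:
        msg_id, idx, total = struct.unpack("!HHH", pkt[:6])
        frags[idx] = pkt[6:]
    if total is not None and len(frags) == total:
        return b"".join(frags[i] for i in range(total))
    return None

data = bytes(3000)
pkts = fragment(7, data)                       # 3000 bytes -> 3 fragments
restored = reassemble(list(reversed(pkts)))    # tolerates reordering
```

Since UDP gives no delivery guarantees, a real transmission layer would add sequence acknowledgement or expiry on top of this.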


Project Partners

Coordinator

  • SHU: Sheffield Hallam University, Materials and Engineering Research Institute (MERI, MMVL), Sheffield, United Kingdom
    • Work predominantly in the indoor scenario (management, integration, mapping, chemical sensors)

Academic Research Partners

  • RMA (Outdoor scenario): Royal Military Academy - Patrimony, Belgium
    • Work predominantly on the outdoor scenario (robot platform, architecture, navigation, localisation)
  • UoR: Sapienza University of Rome, Italy
    • Work predominantly in the indoor scenario (localisation and mapping, communication, integration)

Industrial partners

  • GA: Galileo Avionica S.p.A., Italy
    • Work predominantly in the indoor scenario (management)

Project outputs and dependencies

  • Project deliverables are available here; public deliverables can be accessed therein.


Selected Publications

  • L. Alboul and G. Chliveros (2010). A System for Reconstruction from Point Clouds in 3D: Simplification and Mesh Representation, Proceedings of the International Conference on Control, Automation, Robotics and Vision (ICARCV 2010), Dec 2010, Singapore.
  • G. De Cubber, D. Doroftei, S.A. Berrabah and H. Sahli (2010). Combining Dense Structure from Motion and Visual SLAM in a Behavior-based Robot Control Architecture, International Journal of Advanced Robotic Systems 6(1): 11-23.
  • L. Nalpantidis and A. Gasteratos (2010). Stereo vision for robotic applications in the presence of non-ideal lighting conditions, Image and Vision Computing 28: 940-951.
  • L. Nalpantidis and A. Gasteratos (2010). Biologically and Psychophysically Inspired Adaptive Support Weights Algorithm for Stereo Correspondence, Robotics and Autonomous Systems 58: 457-464.
  • G. Echeverria and L. Alboul (2009). Shape-preserving mesh simplification based on curvature measures from the Polyhedral Gauss Map, International Journal for Computational Vision and Biomechanics 1(2): 55-68.
  • J. Bedkowski and A. Maslowski (2009). NVIDIA CUDA Application in the Cognitive Supervision and Control of the Multi Robot System, Handbook on Emerging Sensor and Robotics Technologies for Risky Interventions and Humanitarian De-mining (book series: Mobile Service Robotics, Woodhead Publishing).
  • U. Delprato, M. Cristaldi and G. Tusa (2009). A light-weight communication protocol for tele-operated robots in risky emergency operations, Handbook on Emerging Sensor and Robotics Technologies for Risky Interventions and Humanitarian De-mining (book series: Mobile Service Robotics, Woodhead Publishing).
  • G. De Cubber (2008). Dense 3D structure and motion estimation as an aid for robot navigation, Journal of Automation, Mobile Robotics & Intelligent Systems 2(4): 14-18.
  • D. Doroftei, E. Colon and G. De Cubber (2008). A behaviour-based control and software architecture for the visually guided Robudem outdoor mobile robot, Journal of Automation, Mobile Robotics & Intelligent Systems 2(4): 19-24.
  • S.A. Berrabah and E. Colon (2008). Vision-based mobile robot navigation, Journal of Automation, Mobile Robotics & Intelligent Systems 2(4): 7-13.
  • A. Carbone, A. Finzi, A. Orlandini and F. Pirri (2008). Model-based control architecture for attentive robots in rescue scenarios, Autonomous Robots 24: 87-120.
  • L. Alboul, B. Amavasai, G. Chliveros and J. Penders (2007). Mobile robots for information gathering in a large-scale fire incident, IEEE 6th (SMC UK-RI) Conference on Cybernetic Systems (Dublin, Ireland): 122-127.
  • A. Carbone, D. Ciacelli, A. Finzi and F. Pirri (2007). Autonomous Attentive Exploration in Search and Rescue Scenarios. In: Attention in Cognitive Systems: theories and systems from an interdisciplinary viewpoint, pp. 431-446.


Software that has proven useful



Videos and demonstrations

A prototype of the ViewFinder SLAM procedure based on an SIR-RB particle filter implementation: ladar and odometry data collected via the Player software platform.
A video demonstration of the base-station controlling the ATRV-Jr robot through the Human-Machine interface. Different views and control means are shown.
coming soon


Final demonstration for the outdoor scenario: general views and description of the full rescue scenario. A glimpse of the indoor ATRV-jr robot (PIAP) can also be seen.
Nitrogen gas evolution in a room-fire scenario (simulated with NIST's FDS-SMV); only a vertical and a horizontal plane are shown, with the fire's origin indicated by a yellow patch.
South Yorkshire Fire & Rescue: the Realistic Fire Training Building (RFTB), a new state of the art training facility located at SYFR's Training and Development Centre.
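The SLAM prototype in the first video is based on a SIR-RB particle filter. The full Rao-Blackwellized filter also carries a map estimate per particle; the sketch below shows only the SIR (sampling-importance-resampling) core, reduced to 1D localisation with hypothetical noise parameters, as a minimal illustration rather than the project's implementation:

```python
import math
import random

def sir_step(particles, control, measurement, meas_std=0.5, motion_std=0.1):
    """One sampling-importance-resampling step for 1D localisation."""
    # predict: apply the odometry control with additive motion noise
    moved = [p + control + random.gauss(0, motion_std) for p in particles]
    # weight: Gaussian likelihood of the position measurement
    weights = [math.exp(-(p - measurement) ** 2 / (2 * meas_std ** 2))
               for p in moved]
    total = sum(weights) or 1.0
    # resample proportionally to the normalised weights
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(particles))

random.seed(0)
particles = [random.uniform(-10.0, 10.0) for _ in range(500)]  # uniform prior
true_pos = 2.0
for _ in range(10):
    true_pos += 0.5                       # the robot moves 0.5 m per step
    particles = sir_step(particles, 0.5, true_pos)
estimate = sum(particles) / len(particles)   # converges near true_pos (7.0)
```

In the real system the control comes from odometry and the likelihood from matching ladar scans against the map, but the predict-weight-resample cycle is the same.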


See Also



