VIEW-FINDER


Image gallery (captions):

  • Trials of the indoor and outdoor robots at the Royal Military Academy, Belgium: in the foreground the Robudem outdoor robot of partner RMA; in the background (far end) the ATRV-Jr iRobot indoor platform of partner PIAP (close-ups in the next pictures) entering a hangar.
  • Indoor scenario: the ATRV-Jr iRobot platform of partner PIAP with integrated sensors from partners UoR, SHU and IES. The final system employs two processing units (on-board robot PC and dual-core laptop) managing the following sensory information: sonar array, monocular and pan-tilt-zoom cameras, tilt unit and laser range finder, odometry and chemical readings, in semi-autonomous modes (navigation and remote operation). At any given time there are at least four streams of wirelessly transmitted data.
  • ATRV-Jr of PIAP roaming free in the SHU labs; successful acquisition and communication tests with at least four different data streams.
  • George (SHU) and Andrea (UoR) checking the SLAM, laser and tilt processes: acquisition and communication.
  • Janusz (PIAP) checking remote operation of the robot and image compression/acquisition.
  • Lazaros (DUTH), Giovanni (IES), Andrea (UoR) and Janusz (PIAP): in preparation for the first launch.
  • Observing resource-hungry processes and network traffic on the indoor robot's second processing unit.




The VIEW-FINDER project

In the event of an emergency due to a fire or other crisis, a necessary but time-consuming prerequisite, which can delay the actual rescue operation, is to establish whether the ground can be entered safely by human emergency workers. The objective of the VIEW-FINDER project was to develop robots whose primary task is to gather data that assist the human interveners in taking informed decisions prior to entering the area.

VIEW-FINDER was a field (mobile) robotics project (European Union, Framework VI: Project Number 045541) with nine European partners that investigated the use of semi-autonomous mobile robot platforms to establish ground safety in the aftermath of fire incidents. The project was coordinated by the Materials and Engineering Research Institute (http://www.shu.ac.uk/research/meri/) at Sheffield Hallam University and officially ended on 30 November 2009, with the final review, reports and demos scheduled for 18 and 19 January 2010.

The primary aim was to gather data (visual, environmental and chemical) to assist fire-rescue personnel after a disaster has occurred. A base station combined the gathered information with information retrieved from the large-scale GMES information bases. Issues addressed related to: 2.5D map building, localisation and reconstruction; interfacing local command information with external sources; autonomous robot navigation; and human-robot interfaces (base station). Partners PIAP, UoR, SHU and IES were predominantly involved in the indoor scenario and RMA and DUTH predominantly in the outdoor scenario, with SAS and SyFire involved in both.
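
On the localisation side (the SLAM video under Videos and results mentions an SIR-RB particle filter over ladar and odometry data), the core idea can be pictured as one predict-weight-resample cycle. The Python fragment below is a minimal sketch under assumed noise models; the map raycast expected_range and all parameters are hypothetical stand-ins, not the project's implementation.

    import math
    import random

    class Particle:
        """One pose hypothesis (x, y, heading) with an importance weight."""
        def __init__(self, x, y, theta, w=1.0):
            self.x, self.y, self.theta, self.w = x, y, theta, w

    def predict(p, d_trans, d_rot):
        """Propagate a particle with odometry plus assumed Gaussian noise."""
        theta = p.theta + d_rot + random.gauss(0.0, 0.02)
        x = p.x + (d_trans + random.gauss(0.0, 0.05)) * math.cos(theta)
        y = p.y + (d_trans + random.gauss(0.0, 0.05)) * math.sin(theta)
        return Particle(x, y, theta, p.w)

    def weight(p, scan, expected_range, sigma=0.2):
        """Score a particle by comparing measured ranges with map-predicted ones."""
        w = 1.0
        for bearing, measured in scan:  # scan: [(bearing, range), ...]
            expected = expected_range(p.x, p.y, p.theta + bearing)  # map raycast stand-in
            w *= math.exp(-((measured - expected) ** 2) / (2 * sigma ** 2))
        return w

    def resample(particles):
        """Systematic resampling: redraw the set in proportion to the weights."""
        total = sum(p.w for p in particles)
        step = total / len(particles)
        u = random.uniform(0.0, step)
        out, i, acc = [], 0, particles[0].w
        for _ in particles:
            while u > acc:
                i += 1
                acc += particles[i].w
            out.append(Particle(particles[i].x, particles[i].y, particles[i].theta))
            u += step
        return out

    def sir_update(particles, d_trans, d_rot, scan, expected_range):
        """One SIR cycle: predict from odometry, weight against the scan, resample."""
        moved = [predict(p, d_trans, d_rot) for p in particles]
        for p in moved:
            p.w = weight(p, scan, expected_range)
        return resample(moved)

    # Hypothetical usage: a flat wall five metres ahead along the x axis.
    pts = [Particle(random.uniform(-1, 1), random.uniform(-1, 1), 0.0) for _ in range(100)]
    pts = sir_update(pts, 0.1, 0.0, [(0.0, 4.9)], lambda x, y, th: 5.0 - x)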

Notifications and Announcements

  • Advisory: please visit the official VIEW-FINDER project page: "Vision and Chemi-resistor Equipped Web-connected Finding Robots".
  • Disclaimer: this page predominantly covers the indoor scenario, although we have tried to draw most of the information from the project as a whole. To the best of our knowledge, the information provided herein was correct at the time of publication.
  • Final dissemination event: IARP workshop RISE 2010 at Sheffield Hallam University, 20-21 January 2010. Further details available at http://www.falconguard.info/rise10/.

System description

The developed VIEW-FINDER system was semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them. That is, they navigate between two assigned points by planning their path and avoiding obstacles whilst inspecting the area. The remote central operations control unit assigns tasks to the robots and monitors their execution, with the ability to intervene at any time; it can likewise renew task assignments or provide further details on the tasks of the ground robot. System-human interactions at central operations control were facilitated through multi-modal interfaces, in which graphical displays play an important but not exclusive role.
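
As a minimal sketch of this execution pattern, and not the project's actual controller, the following Python fragment drives an assigned go-to-point task autonomously while yielding to operator commands on every cycle; the Robot interface, gains and thresholds are hypothetical stand-ins.

    import math

    class Robot:
        """Hypothetical minimal robot interface; real drivers (e.g. Player) differ."""
        def pose(self):                return (0.0, 0.0, 0.0)  # x (m), y (m), heading (rad)
        def min_obstacle_range(self):  return 5.0              # nearest laser/sonar return (m)
        def set_speed(self, v, w):     pass                    # forward (m/s), turn (rad/s)

    def execute_goto(robot, goal, operator_override, tol=0.3, safe_range=0.5):
        """Drive towards goal=(x, y) autonomously, yielding to the operator at any time."""
        while True:
            cmd = operator_override()       # None, or a (v, w) pair from the base station
            if cmd is not None:
                robot.set_speed(*cmd)       # a low-level operator command wins immediately
                continue
            x, y, th = robot.pose()
            dx, dy = goal[0] - x, goal[1] - y
            if math.hypot(dx, dy) < tol:    # task complete: stop and report back
                robot.set_speed(0.0, 0.0)
                return "reached"
            if robot.min_obstacle_range() < safe_range:
                robot.set_speed(0.0, 0.5)   # crude avoidance: stop and turn away
                continue
            err = math.atan2(dy, dx) - th   # bearing error towards the goal
            err = math.atan2(math.sin(err), math.cos(err))
            robot.set_speed(0.4, 1.5 * err) # simple proportional go-to-goal law

    # With the stub robot already at the goal, the task completes at once:
    print(execute_goto(Robot(), (0.0, 0.0), lambda: None))  # -> reached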



Although the robots can operate autonomously, human operators monitor the robots' processes and send high-level task requests, as well as low-level commands, through the human-computer interface to some nodes of the ground system. The human-computer interface (base station) had to ensure that a human supervisor, and a human intervener on the ground, are provided with a reduced yet relevant overview of the area under investigation, including the robots and human rescue workers therein.
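
The split between high-level task requests and low-level commands can be pictured as two message types routed from the base station to nodes of the ground system. The sketch below is one hedged way such messages might be encoded; the wire format, class names and fields are assumptions, not the project's documented protocol.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class HighLevelTask:
        """E.g. 'go to this point'; the robot plans the rest autonomously."""
        kind: str        # "goto", "inspect_area", ...
        target: list     # [x, y] in the shared map frame
        robot_id: str

    @dataclass
    class LowLevelCommand:
        """Direct teleoperation from the base station, bypassing robot autonomy."""
        v: float         # forward speed (m/s)
        w: float         # turn rate (rad/s)
        robot_id: str

    def encode(msg):
        """Serialise a message for the wireless link, tagged with its type."""
        return json.dumps({"type": type(msg).__name__, "body": asdict(msg)}).encode()

    def decode(raw):
        """Recover a message on the receiving node and dispatch by its tag."""
        frame = json.loads(raw.decode())
        kinds = {"HighLevelTask": HighLevelTask, "LowLevelCommand": LowLevelCommand}
        return kinds[frame["type"]](**frame["body"])

    # Example round trip from base station to robot:
    task = HighLevelTask(kind="goto", target=[12.5, 3.0], robot_id="atrv-jr")
    assert decode(encode(task)) == task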

The project comprised two scenarios, indoor and outdoor, with a corresponding robot platform for each. The indoor scenario used a heavily modified ATRV-Jr robot platform, formerly available from iRobot (http://www.irobot.com/uk/government_industrial.cfm), and the outdoor scenario a purpose-built robot based on the RoboSoft mobile platforms (http://www.robosoft.com/eng/categorie.php?id=1009).

The indoor robot was equipped with two laser range finders (one of which was mounted on a tilt unit to provide 3D acquisition), a front sonar array, a pan-tilt-zoom camera, a chemical sensor array and a long-range wireless communication device. In addition to the robot's existing processing unit (Ubuntu 8.04), two further processing units were added, one running Windows XP and one running Ubuntu 8.10. Low-level control of the robot and its behaviours (e.g. obstacle avoidance and navigation) was handled by the pre-existing unit, whilst the data processing for mapping and localisation was placed on the additional Linux unit. The Windows XP unit was used whenever Windows-specific (proprietary) software had to be deployed.
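
As an assumption-laden sketch of how sensor frames might travel between the control unit and the mapping unit (the project's actual transport and message format are not specified here), the fragment below pushes timestamped readings over UDP; the address and field names are hypothetical:

    import json
    import socket
    import time

    MAPPING_UNIT = ("127.0.0.1", 9000)  # hypothetical address of the Linux mapping unit

    def forward_readings(frames):
        """Send timestamped sensor frames from the control unit to the mapping unit."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for frame in frames:
            frame["t"] = time.time()                       # stamp for later fusion
            sock.sendto(json.dumps(frame).encode(), MAPPING_UNIT)
        sock.close()

    # Laser, odometry and chemical readings travel as separate streams that
    # the mapping/localisation unit can fuse:
    forward_readings([
        {"sensor": "laser", "ranges": [1.2, 1.3, 1.1]},
        {"sensor": "odometry", "x": 0.5, "y": 0.0, "theta": 0.01},
        {"sensor": "chemical", "co_ppm": 3.2},
    ])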

All robot-collected data were forwarded to an ergonomically designed base station.


Project Partners

Coordinator

  • SHU: Sheffield Hallam University, Materials and Engineering Research Institute (MERI, MMVL), Sheffield, United Kingdom

Academic Research Partners

Industrial partners

Project outputs and dependencies

Selected Publications

  • J. Bedkowski and A. Maslowski (2009). An nVIDIA CUDA application in the Cognitive Supervision and Control of a mobile robot system. IARP/EURON Workshop on Robots for Risky Interventions and Environmental Surveillance (Brussels, Belgium): 1-12
  • A. Carbone, A. Finzi, A. Orlandini and F. Pirri (2008). Model-based control architecture for attentive robots in rescue scenarios. Autonomous Robots 24: 87-120
  • L. Alboul, B. Amavasai, G. Chliveros and J. Penders (2007). Mobile robots for information gathering in a large-scale fire incident. IEEE 6th (SMC UK-RI) Conference on Cybernetic Systems (Dublin, Ireland): 122-127


Selected Public Reports

  • coming soon

Software that has proven useful


See Also




Videos and results

  • A prototype of the VIEW-FINDER SLAM procedure based on an SIR-RB particle filter implementation: ladar and odometry data collected via the Player software platform.
  • A video demonstration of the base station controlling the ATRV-Jr robot through the human-machine interface; different views and control means are shown.
  • Nitrogen gas evolution in a room-fire scenario (simulated with NIST's FDS-SMV); only a vertical and a horizontal plane are shown, with the fire start indicated by a yellow patch.
  • Outdoor trials at the Royal Military Academy: the outdoor robot (RMA) and base station (SAS); a glimpse of the indoor ATRV-Jr robot (PIAP) can also be seen.
  • More to be added here.