Workshop at IEEE/RSJ IROS 2008 (Nice, France, 26 September, 2008).
Organizers
Motivation
Realizing purposive goals in complex environments, especially in applied robotics (e.g. service, manufacturing, and unmanned vehicle applications), imposes stringent demands on robots: they must cope with and react to dynamic situations using limited on-board computational resources and incomplete, often corrupted sensor data. High-level cognitive competencies encompassing knowledge representation, perception, control, and learning are considered essential elements that will allow robots to perform complex tasks in highly uncertain environments. New, more successful implementations of concepts already presented in the literature, but not yet validated with exhaustive experimental methodology, risk being ignored unless appropriate benchmarking procedures are in place that allow practical results to be compared against standard, accepted reference procedures.
It is well known that the current state of robotics research makes it difficult not only to compare the results of different approaches, but also to assess the quality of individual research work.
Some steps have been taken to address this problem by studying the ways in which research results in robotics can be assessed and compared.
In this context, the European Robotics Research Network EURON has as one of its major goals the definition and promotion of benchmarks for robotics research. As part of this goal, it has funded a Special Interest Group on Good Experimental Methodology and Benchmarking in Robotics.
Similarly, the Performance Metrics for Intelligent Systems (PerMIS) workshop series has been dealing with related issues in the context of intelligent systems. The main purpose of this workshop is to contribute to the progress of performance evaluation and benchmarking, focusing on intelligent robots and systems, by providing a forum for participants to exchange on-going work and ideas in this regard. These objectives are pursued in a context where there is no wide agreement on concepts such as ‘autonomy’, ‘cognition’ and ‘intelligence’.
The emphasis of the workshop will therefore be on cognitive solutions to practical problems. These cognitive approaches should enable an “intelligent” system to behave appropriately in real-world scenarios across various unstructured application domains. In the context of this workshop, we define intelligence as “the ability to act appropriately in an uncertain environment, where appropriate action is that which increases the probability of success, and success is the achievement of behavioral goals” (J. Albus, “Outline for a Theory of Intelligence”, IEEE Trans. on Systems, Man, and Cybernetics, Vol. 21, No. 3, May/June 1991). Another key issue will be a capability-led understanding of cognitive robots: how to define shared ontologies or dictionaries for discussing robotic cognitive systems in terms of their performance; the relationships between different cognitive robotics capabilities, requirements, theories, architectures, models and methods that can be applied across multiple engineering and application domains; and a better understanding of the requirements for robots in terms of performance, the approaches to meeting these requirements, and the associated trade-offs. The proper definition of benchmarks is tied to the problem of measuring robot capabilities in a context in which, in many cases, the ‘robotics experiments’ themselves are difficult to ‘replicate’.
Program
09:15
Introduction
Angel P. del Pobil, Raj Madhavan, Fabio Bonsignorio
Characterizing Mobile Robot Localization and Mapping
Raj Madhavan
NIST
10:00
Benchmarking dexterous dual-arm/hand robotic manipulation
Gerhard Grunwald, Christoph Borst, J. Marius Zöllner
DLR and FZI, Universität Karlsruhe
10:30
Coffee Break
On evaluating performance of exploration strategies for autonomous mobile robots
Nicola Basilico and Francesco Amigoni
DEI - Politecnico di Milano
Test Bed for Unmanned Helicopters' Performance Evaluation and Benchmarking
Nikos I. Vitzilaios and Nikos C. Tsourveloudis
Technical University of Crete
Collecting outdoor datasets for benchmarking vision based robot localization
Emanuele Frontoni, Andrea Ascani, Adriano Mancini, Primo Zingaretti
Università Politecnica delle Marche
12:30-13:00
Open Discussion
13:00
Lunch
14:00
Benchmarking activities in EURON
Angel P. del Pobil
University Jaume I
14:30
Requirements for Benchmarking in Mobile Service Robotics
Kai Pfeiffer
Fraunhofer IPA
On the collection of robot-pose Ground-Truth for indoor scenarios in the RAWSEEDS project
D. Marzorati, M. Matteucci, D. Migliore, D. G. Sorrenti
Politecnico di Milano and University of Milano Bicocca
15:30
Coffee Break
On The Design of Experiments in Robotics
Monica Reggiani Enrico Pagello
University of Padua
The EURON GEM review guidelines: current state and perspectives
Fabio Bonsignorio, John Hallam, Angel P. del Pobil
17:00
Open discussion