Workshop at IEEE/RSJ IROS 2010 (Taipei, Taiwan, 22 October, 2010)
Organizers
Workshop Highlights
Objectives and contents
Robust and adaptive pursuit of complex goals in complex, open-ended, changing environments, especially in applied robotics (e.g. service, manufacturing, and unmanned vehicle applications), imposes stringent demands on robots: they must cope with and react to dynamic situations using limited on-board computational resources and incomplete, often corrupted sensor data, while maintaining a high level of dependability. High-level cognitive capabilities, including knowledge representation, perception, control, and learning, are considered essential for robots to perform complex tasks in highly uncertain environments with an appropriate level of autonomy, and these capabilities need to be measured in terms of bare performance, adaptivity, and dependability. As the complexity and variety of the required tasks and targeted environments increase, opening up new application capabilities and domains, it becomes ever more necessary to develop well-principled procedures for comparing quantitatively the solutions provided by robotics research, easing the exchange of methods and solutions between research groups and the assessment of the state of the art. The cumulative progress offered by new, more successful (or simply more robust) implementations of concepts already presented in the literature, but not originally implemented with an exhaustive experimental methodology, runs the risk of being ignored if appropriate benchmarking procedures, allowing actual practical results to be compared against accepted standard procedures, are not in place.
It is well known that current robotics research practice makes it difficult not only to compare the results of different approaches, but also to assess the quality of individual research work. This is even more apparent when analyzing cognitive capabilities or levels of autonomy. In recent years a number of initiatives have addressed this problem by studying the ways in which research results in robotics can be assessed and compared. In this context, the European Robotics Research Network (EURON) and the EU robotics research funding offices have adopted, as one of their major goals, the definition and promotion of benchmarks for robotics research. Motivated by similar concerns, the Performance Metrics for Intelligent Systems (PerMIS) workshop series has been dealing with the same issues in the context of intelligent systems, and the recently constituted IEEE TC-PEBRAS was created to pursue these objectives. The main purpose of this workshop is to contribute to the progress of performance evaluation and benchmarking, focusing on intelligent robots and systems, including those with high-level cognitive capabilities and some degree of autonomy, by providing a forum for participants to exchange their ongoing work and ideas. A further purpose of the workshop is to reach better, shared views on how to define and measure system-level characteristics such as 'autonomy', 'cognition', and 'intelligence'.
The emphasis of the workshop will be on principles, methods, and applications that push beyond the current limits of robotics in terms of cognitive capabilities and autonomy. Another key issue will be a capability-led understanding of cognitive robots: how to define shared ontologies or dictionaries for discussing robotic cognitive systems in terms of their performance; the relationships between different cognitive robotics capabilities; and the requirements, theories, architectures, models, and methods that can be applied across multiple engineering and application domains, detailing and better understanding the performance requirements for robots, the approaches to meeting those requirements, and the associated trade-offs. The proper definition of benchmarks is closely related to the problem of measuring robot capabilities in a context in which, in many cases, the 'robotics experiments' themselves are difficult to replicate.
List of topics
We welcome any topic relevant to benchmarking and performance evaluation in the context of cognitive solutions to practical problems, such as:
- Knowledge representation, perception (sensing), and learning
- Uncertainty management in robot navigation, path-planning and control
- Cognitive manipulation
- Metrics for sensory-motor coordination and visual servoing effectiveness/efficiency
- Benchmarking autonomy and robustness to changes in the environment/task
- Capability-led understanding of cognitive robots
- Scalable autonomy measurements
- Shared ontologies to discuss robotic cognitive systems in terms of their performance capabilities
- Relationships between different cognitive robotics capabilities
- Requirements, theories, architectures, models and methods that can be applied across multiple engineering and application domains
- Detailing and better understanding the requirements for robots in terms of performance, the approaches to meeting these requirements, and the associated trade-offs
- The development of experimental scenarios to evaluate performance, demonstrate generality, and measure robustness
- Performance modeling of the relationship between a task and the environment where it is performed
- Benchmarking of sensory-motor coordination
- Relationship between benchmarking and replication of experiments with robots
- Robotics experiment reporting
Intended audience:
The proposed workshop is the fifth in the series, after four successful and well-attended workshops at IROS'06 through IROS'09; related workshops have also been organized at RSS and ICRA. It is part of a general effort to improve the effectiveness of robotics research carried out within the EURON network and at US NIST. The primary audience is researchers and practitioners, from both academia and industry, with an interest in cognitive robotics: how such approaches can be used to generate intelligent behavior under uncertainty for robots in the service and commercial sectors, and how robotics technology can find new applications in everyday life. The workshop is equally concerned with benchmarking and objectively evaluating the performance of such robots. Accordingly, it should be useful to anyone with an interest in the quantitative performance evaluation of robots and/or robot algorithms.
Format:
The workshop will consist of invited presentations and regular presentations. Interaction is highly encouraged.
Proceedings:
The papers are included in the IROS 2010 Tutorials and Workshops DVD.
You can register for the workshop via the IROS '10 website at http://www.iros2010.org.tw/.
Previous workshops:
Previous workshops on the topic of benchmarking have been organized successfully at IROS (four in a row) and at RSS.
Backing:
This workshop is under the auspices of the IEEE-TC on Performance Evaluation and Benchmarking of Robotic and Automation Systems (TC-PEBRAS) and of the EURON Good Experimental Methodology and Benchmarking SIG.
Workshop Program
22 October 2010
08:20-08:30
Introduction: Towards Benchmarking and Replicable Robotics: Progress and Future Directions
Experiments on grasp acquisition
Thomas Wimböck*, Berthold Bäuml, Gerd Hirzinger
DLR (German Aerospace Research Center)
Germany
Jeff Trinkle
Rensselaer Polytechnic Institute
USA
08:55-09:20
Hierarchical structuring of manipulation benchmarks in service robotics
Rainer Jäkel, Sven R. Schmidt-Rohr*, Martin Lösch, Rüdiger Dillmann
Karlsruhe Institute of Technology
Germany
09:20-09:45
Benchmarking for grasping in human environments
Aaron Dollar
Yale University
USA
Evaluating Social Robots: Insights from the Diagnosis of Autism Spectrum Disorders
Bill Smart
Willow Garage
USA
10:10-10:20
Coffee Break
Standardization of Cleaning Robot Performance Measurement in IEC
Sungsoo Rhim
Kyung Hee University
Korea
Decision-Theoretic Probabilistic Methods for Benchmarking
Matthijs T. J. Spaan, Pedro U. Lima
Institute for Systems and Robotics, Instituto Superior Técnico
Portugal
11:10-11:35
Metrics for Assistive Robotics Brain-Computer Interface Evaluation
M. Stolen, A. Jardon, C. Balaguer
Universidad Carlos III de Madrid
Spain
F. Bonsignorio
Heron Robots and Universidad Carlos III de Madrid
Italy
11:35-12:00
RoboCup@Home and RoboCup@Work: From Benchmarking Algorithms to Benchmarking Service Robots in Residential and Industrial Scenarios
Gerhard K. Kraetzschmar
Bonn-Rhein-Sieg University
Germany
12:00
Final Discussion