Workshop at ICRA 2014 (Hong Kong, China - June 5, 2014, PM)
Proposers:
UPDATED PROGRAM
14:00 Welcome
14:05 Why Do Experimental Methodology and Epistemology Matter in Robotics and Cognition/AI?
F. Bonsignorio, IEEE RAS TC PEBRAS co-chair, coordinator of the euRobotics TG on Experimental Methodology, Benchmarking, Competitions and Challenges
Institute of Biorobotics, SSSA, Pisa, Italy and Heron Robots
Abstract:
What could still be improved in robotics research (or what 'sucks', borrowing from the title of Friday's workshop) may depend on some of the experimental-methodology and epistemological issues we will discuss in this workshop.
14:35 Robotics Applications Based on Merged Physical and Virtual Reality
P. Galambos*†, T. Haidegger*, P. Zentay*, J. K. Tar*, P. Pausits*, I. J. Rudas*
*Antal Bejczy Center for Intelligent Robotics, Obuda University, Budapest, Hungary
†Institute for Computer Science and Control, Hungarian Academy of Sciences, Budapest, Hungary
Abstract: We focus on various industry-related aspects and new possibilities of virtual-environment-based benchmarking. A brief introduction to augmented virtual reality simulators is given, focusing on the basic concepts and features that make them well suited for collaborative work and benchmarking in mixed virtual and physical reality. Through the concrete example of the VirCA (Virtual Collaboration Arena) system, developed at our centers, we discuss how real industrial robots can be involved in a remote collaboration scenario. Typical uses of a shared infrastructure are reviewed, considering the relationship between the virtual and real entities.
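To make the shared-infrastructure idea concrete, the following Python sketch mirrors a physical robot's pose into a shared virtual scene so that remote virtual entities can react to it. This is only an illustration of the concept under our own assumptions; the names (Pose, VirtualScene, read_robot_pose) are invented and are not VirCA's actual API.

    import time
    from dataclasses import dataclass

    @dataclass
    class Pose:
        x: float
        y: float
        theta: float  # heading in radians

    class VirtualScene:
        """Minimal stand-in for a shared virtual world state."""
        def __init__(self):
            self.entities = {}  # entity name -> latest Pose

        def update(self, name, pose):
            # Remote collaborators would observe this shared state.
            self.entities[name] = pose

    def read_robot_pose():
        # Placeholder for a driver call to the physical robot's localization.
        return Pose(0.0, 0.0, 0.0)

    def sync_loop(scene, hz=10.0, steps=100):
        """Mirror the real robot into the virtual scene at a fixed rate."""
        for _ in range(steps):
            scene.update("industrial_robot_1", read_robot_pose())
            time.sleep(1.0 / hz)

    sync_loop(VirtualScene(), steps=20)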
14:55 To What Extent Are Competitions Experiments? A Critical View
F. Amigoni, A. Bonarini, G. Fontana, M. Matteucci, V. Schiaffonati
Politecnico di Milano, Milano, Italy
Abstract: In recent years, a point of view that considers robotic competitions as experiments has emerged; it lies at the core of at least two EU projects, RoCKIn and euRathlon. There are obvious differences between competitions and experiments: most notably, an experiment evaluates a specific hypothesis while a competition usually evaluates the general abilities of robotic systems, and competitions push the development of solutions while experiments aim at exploring phenomena and sharing results. Nonetheless, there are a number of reasons for recasting robotics competitions as experiments, considering traditional experimental principles (comparison, repeatability, and reproducibility).
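As a concrete illustration of how the comparison and repeatability principles could carry over to a competition setting, here is a minimal Python sketch that runs two placeholder planners on the same seeded trials and reports summary statistics. The planners and scores are invented for illustration and are not taken from RoCKIn or euRathlon.

    import random
    import statistics

    def run_trial(planner, seed):
        rng = random.Random(seed)  # fixed seed -> repeatable trial
        return planner(rng)        # returns a task score

    def planner_a(rng):
        return 0.70 + 0.05 * rng.random()  # invented score model

    def planner_b(rng):
        return 0.65 + 0.10 * rng.random()  # invented score model

    def compare(planners, seeds):
        # Same seeded scenarios for every system -> a fair comparison.
        for name, planner in planners.items():
            scores = [run_trial(planner, s) for s in seeds]
            print(f"{name}: mean={statistics.mean(scores):.3f} "
                  f"stdev={statistics.stdev(scores):.3f}")

    compare({"planner_a": planner_a, "planner_b": planner_b}, range(30))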
15:15 Coffee Break
15:45 Subjective and objective measures in human-robot interaction research (invited)
M. Scheutz
Tufts University, MA, USA
Abstract: Results from a small number of human-robot interaction experiments are often used to make general claims about human-robot interaction that are supposed to hold across many interaction contexts. Such experiments frequently employ either subjective measures, such as subject surveys, or objective measures, such as task performance, even though single measures are often insufficient for getting at the underlying effects, thus limiting the inferences that can reasonably be drawn from experimental results. Restricted tasks, narrow subject populations, and a lack of robotic variation further limit the generalizability of experimental results and typically lead to replication failures in other contexts.
In this talk, we will present various examples of HRI experiments that fall short on one or more of these fronts, raising doubts about the validity of the proposed results. For some of those experiments, including our own, we will then discuss ways to remedy the shortcomings.
We will end with a general set of guidelines for sound experimental design in human-robot interaction research, with the potential to produce valid results that hold across a variety of tasks, robots, and interaction contexts.
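As a small worked example of combining a subjective and an objective measure rather than relying on either alone, the Python sketch below pairs per-participant Likert ratings with task completion times and checks whether they agree. All data and names are hypothetical.

    import statistics

    # participant -> (mean Likert rating on a 1-7 scale, completion time in s)
    trials = {
        "p01": (6.0, 42.0), "p02": (4.5, 61.0),
        "p03": (5.5, 48.0), "p04": (3.0, 75.0),
    }

    ratings = [r for r, _ in trials.values()]
    times = [t for _, t in trials.values()]

    def pearson(xs, ys):
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # A strong negative correlation would suggest the two measures capture
    # a common effect; a weak one warns against relying on either alone.
    print(f"r = {pearson(ratings, times):.2f}")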
16:30 Panel and Discussion:
- F. Amigoni, Politecnico di Milano, Italy
- W. Burgard, University of Freiburg, Germany
- R. Dillmann, Karlsruhe Institute of Technology, Germany
- O. Khatib, Stanford University, CA, USA
- M. Scheutz, Tufts University, MA, USA
And, last but not least, the workshop attendees!
We aim for a highly interactive session (please feel free to book a 3-minute flash talk in advance).
We will probably not fix everything, but hopefully we will take home a couple of inspiring ideas.
17:30 Wrap up
Statement of objectives
This workshop addresses a number of epistemological issues in robotics related to performance measurement, methods for the objective comparison of different algorithms and systems, and the very possibility of replicating published results:
- How should research be performed (and reported) so that results can be replicated?
- While the practical utility of challenges and competitions is widely recognized, what is their epistemological status? How should they be designed to maximize their contribution to robotics research (and to the industrial exploitation of results)?
- To what extent, and for measuring which functionalities and capabilities, are benchmarks useful and possible?
- What balance of result replication, benchmarking, challenges, and competitions would allow a more objective evaluation of results, and hence an objective assessment of the state of the art in the various subfields of our domain of investigation?
The workshop will provide a forum for the discussion of these issues and will show a number of examples of replicable experiments in robotics research. Moreover, we believe that the ideas developed within the community and the tools now available allow us to discuss, and possibly to implement, a robotics research publishing thread based on the publication of fully replicable experiments. This would likely also be very beneficial for the education of undergraduate and graduate students.
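As one possible starting point for such a publishing thread, here is a minimal Python sketch of the metadata that a fully replicable experiment could be required to publish alongside the paper. The field set, names, and all values (including the placeholder URLs) are our assumptions, not an established standard.

    from dataclasses import dataclass, field

    @dataclass
    class ExperimentManifest:
        title: str
        code_repository: str  # where all source code is archived
        code_revision: str    # exact revision used for the reported runs
        dataset_uri: str      # raw data, logs, and ground truth
        hardware: str         # robot platform and sensors used
        random_seeds: list = field(default_factory=list)
        metrics: dict = field(default_factory=dict)  # name -> reported value

    # Hypothetical example entry; example.org URLs are placeholders.
    manifest = ExperimentManifest(
        title="Example replicable SLAM evaluation",
        code_repository="https://example.org/lab/slam-eval",
        code_revision="rev-1234",
        dataset_uri="https://example.org/datasets/corridor-runs",
        hardware="differential-drive base, 2D lidar",
        random_seeds=[0, 1, 2],
        metrics={"ATE_rmse_m": 0.12},
    )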
List of topics
- Experiments in Robotics
- Epistemological issues
- Examples of Good Practice
- Evaluation of Experimental Robotics Work
- Proposals for Promotion of Good Experimental Work
- Performance modeling of the relationship between a task and the environment where it is performed
- Relationship between benchmarking and replication of experiments with robots
- Reporting experiments in Robotics
- Comparison of experimental methodology in neurosciences, neurophysiology and in Robotics
- Comparison of experimental methodology in neurosciences/neurophysiology and in AI
- Comparison of experimental methodology in Biology and in Robotics
- Comparison of experimental methodology in Biology and in AI
- Experimental scenarios to evaluate performance, demonstrate generality, and measure robustness in robots and animals
- Well-grounded experimental methods to compare artificial and natural cognitive systems
- Benchmarking in Robotics/AI/Cognition
- Robotics/AI/Cognition competitions
- Design of Robotics/AI/Cognition competitions
- Robotics/AI/Cognition Challenges
- Design of Robotics/AI/Cognition Challenges
- How to integrate Experimental Methods, Benchmarking, Challenges and Competitions for a better evaluation of results
Primary/secondary audience
Primary: Robotics researchers from any subfield of the discipline, from both academia and industry. Secondary: Industry members interested in the exploitation of research results, and others interested in methodologies in scientific and engineering disciplines.
Relation to previous ICRA or IROS workshops
This workshop is a joint initiative of the IEEE TC on Performance Evaluation and Benchmarking of Robotic and Automation Systems (PEBRAS) and the EURON Special Interest Group on Good Experimental Methodology for Robotics (SIG GEM). The proposers are co-chairs of IEEE TC PEBRAS and/or EURON SIG GEM and have jointly co-organized about 20 successful related events in recent years, including several workshops on Benchmarking and Performance Metrics at IROS (2006-2010, 2012, 2013), five workshops at RSS (2008-2010, 2012, 2013), the Performance Metrics for Intelligent Systems (PerMIS) workshop series (2000-2012), and three workshops on the replication of experiments in robotics research at ICRA (2010, 2011, 2012). John Hallam has been involved in the organization of many conferences on the Simulation of Adaptive Behavior and is the president of the International Society for the Simulation of Adaptive Behaviour.
Support
This workshop is supported by IEEE RAS TC PEBRAS and EURON SIG GEM.