Taking robots out of the factories and into everyday life is not a free lunch.
The main issue is whether we have the science, or even the conceptual framework, for dealing with open-ended, unstructured environments.

In nature there are many kinds of loosely coupled networks of intelligent agents, varying widely in the number of agents and in the cognitive capacity (i.e. the computational needs) of each one.
In nature's domain, the most widespread substrate for 'intelligence', computation and 'cognition' is the 'embodied' biological neural network.

Could robotics become the science of embodied cognition?

And, as such, would it require a proper experimental methodology? (On this issue, see the Euron GEM Sig.)

A few ideas about these issues are discussed here.

A number of empirical and theoretical research efforts are investigating, on one side, the aspects and implications of 'embodiment' and, on the other, the 'emergence' of cognition from the networked interaction of physical agents.
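As a toy illustration of this second theme (a sketch of our own, not taken from any cited work): a global pattern can emerge from purely local interactions among networked agents. Below, each agent repeatedly averages its heading with its two neighbors on a ring; no agent sees the whole network and there is no central controller, yet the population converges toward a shared heading.

```python
import random

def step(headings, rate=0.5):
    """One round of local interaction: each agent nudges its heading
    toward the average of its two ring neighbors."""
    n = len(headings)
    return [
        h + rate * ((headings[(i - 1) % n] + headings[(i + 1) % n]) / 2 - h)
        for i, h in enumerate(headings)
    ]

def spread(headings):
    """A simple disorder measure: range of headings across the group."""
    return max(headings) - min(headings)

random.seed(0)
headings = [random.uniform(0.0, 360.0) for _ in range(20)]

before = spread(headings)
for _ in range(400):
    headings = step(headings)
after = spread(headings)

print(f"disorder before: {before:.1f} deg, after: {after:.4f} deg")
```

The point of the sketch is only the qualitative behavior: a coherent collective state self-organizes from simple, strictly local rules, with no symbolic planning anywhere in the loop.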

What we probably need in order to build 'real' artificial cognitive systems is a deep interchange of concepts, methods and insights between fields so far considered quite distinct, such as information and control theory, nonlinear dynamics, general AI, psychology and neuroscience.

Interaction with the physical world implies extending computational theory: a 'paradigm change' is needed, from top-down symbolic processing to self-organized cognitive behaviors emerging from complex adaptive dynamical systems.