Robotics is a paradigm of science-based innovation. The term robot is not a new one; it was introduced around 1920 in the context of fiction. Nowadays there are in use about 1,000 surgical robots, 10,000 military robots in the US Army, and 5,000,000 household robots such as vacuuming robots.
The Institute for the Future (IFTF), based in Palo Alto, California, USA, organized in November last year an interesting conference about robotics: Robot Renaissance – The Future of Human-Machine Interaction. Like all IFTF conferences, this one was well prepared and well organized. I was invited to participate, and this was a happy occasion for me, because I worked with artificial intelligence at the Nokia Research Center in the mid-1980s. Robotics has been a central theme in AI research for decades.
One of the keynote speakers was professor Ken Goldberg from UC Berkeley. He emphasized the change in thinking about robotics. In the earlier years the belief, and even the intention, was to replace human beings with robots (Robotics I). Then people started to realize that we have to understand in which jobs or tasks human beings are better than robots, and vice versa. Now, in Robotics II, robots are seen as co-operating with people. Robots may be quick, make complex calculations, repeat things, make sharp observations, and be patient and robust. The best use of robotics is to develop robots that complement human beings, so that the new composite "people + robots" is as effective as possible.
Cognitive science is quite relevant to robotics. One interesting line of research is to study the mental models of robots. Somehow one must construct the model senses – thinks – acts for robots. It is also a confusing area, because people do not know how to communicate with robots. Philosophically, we have to ask what kind of status must be given to intelligent robots. Are they living beings, or perhaps even human? In any case they are "co-inhabitants" acting in our lives: in factories, hospitals, schools, on battlefields, and in homes. In the terms of the German philosopher Martin Heidegger, technology is a way of being. Robots "are available"; they are open for us to use.
The conference presented many real cases of robots and developments in robotics. Some of the most sophisticated developments aim to make robots intelligent and "human". I see huge possibilities here, but also some dangers. One issue is this: if robots make complex decisions based on complex calculations, how can we trust them? I believe that under standard conditions artificial intelligence works well. But in strange situations any algorithm fails, say in conditions like a tsunami or a catastrophe in a nuclear power plant. People are at their best in this kind of situation, because they can change the algorithm. This is based on an essential phenomenon of human beings: they give meanings to things. People interpret their environment based on their history, experiences, communities, knowledge, and values. Things have no meaning to robots, so they mechanically follow their algorithms. I proposed to Ken Goldberg that we have to study how to embed meaning structures in robots. I sent the following email to Ken after the conference. I hope he takes my proposal seriously.
From: Antti Hautamäki
Date: 14.11.2010 18:41:28
Topic: meaning and robots
Thank you for your excellent presentation. You also mentioned Heidegger.
My point about meaning is that human beings are not only acting. They have and give meanings to their actions. Meanings are not only the intentions of actions; they are larger interpretations of what is happening. People make sense of things. They embed things in the context of their life experience.
It is possible that robots imitate human behavior quite well, but if they lack meaning, their actions might differ in certain circumstances. The meanings human beings attach to their actions contain an emotional dimension and values. This is important if we want to understand human decision making. The formula is:
(*) Targets + circumstances + meanings -> deliberation -> decisions (to act)
Deliberation here is partly rational and partly emotional, related to what is important to people.
Some 80% or even 90% of all actions are quite routine, following rules, so deliberation is bypassed. But in special cases people break the rules and go back to values, and then deliberation might change even the targets.
If we want to develop robots whose actions and decisions we can trust, we must somehow codify the meaning structure of human life. So it must be something like the formula (*). Perhaps robots must become anthropologists who try to construct the meaning structure of this strange tribe we call the human race. Did you get the point?
Ph.D. in philosophy
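As a rough illustration, the formula (*) in the email could be sketched in code. This is only a toy sketch of the idea, not a real architecture: the example rules, the value weights standing in for "meanings", and the deliberation step are all hypothetical, invented here for illustration.

```python
# Toy sketch of formula (*): targets + circumstances + meanings -> deliberation -> decisions.
# All rules, values, and situations below are hypothetical illustrations.

def decide(target, circumstances, meanings, rules):
    """Return an action for the given situation."""
    # Routine case (the email's 80-90%): a known rule covers the
    # circumstances, so deliberation is bypassed entirely.
    if circumstances in rules:
        return rules[circumstances]
    # Special case: deliberation, which weighs the situation against
    # the values carried by the agent's meanings. As the email notes,
    # deliberation may even revise the target itself.
    if meanings.get("protect_life", 0) > meanings.get(target, 0):
        target = "protect_life"
    return f"improvise toward {target}"

rules = {"assembly_line": "follow standard procedure"}
meanings = {"protect_life": 10, "finish_shift": 1}

# Routine circumstances: the rule fires, no deliberation.
print(decide("finish_shift", "assembly_line", meanings, rules))
# -> follow standard procedure

# Unforeseen circumstances (e.g. a tsunami): deliberation revises the target.
print(decide("finish_shift", "tsunami", meanings, rules))
# -> improvise toward protect_life
```

The point of the sketch is only the branching structure: a rule table handles routine cases, and meaning-laden values take over precisely when the rules run out, which is where the email argues algorithms fail and humans do not.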