Pioneers in artificial intelligence and robotics for 50 years

Entering the austere building that bears the inscription “Istituzione Elettrotecnica Carlo Erba” (Carlo Erba Electrotechnical Institution), we would not have expected it. This grey shell encloses a colourful, sonorous, multimedia and technological world: that of AIRLab. From the shelves, chairs, floor and ceiling, puppets, bins and objects of every kind gaze serenely at strangers as soon as they enter.

Guiding us through what might at first glance seem a fascinating theatrical warehouse, but which is actually the Artificial Intelligence and Robotics Laboratory of the Politecnico di Milano, is Andrea Bonarini of the Department of Electronics, Information and Bioengineering.

Together with him, we will retrace the 50 years of the laboratory, trying to understand where the future will take us.

Professor Bonarini, thank you for welcoming us to AIRLab. You recently celebrated the laboratory’s fiftieth anniversary, is that right?

I can confirm that AIRLab was founded in 1973. It is not always easy to pin down exact dates, because Professor Marco Somalvico, the unforgettable founder of this place and its historical memory, passed away in 2002.

It was one of the first research groups on artificial intelligence and robotics in Italy. So, yes, we can proudly say that studies on artificial intelligence at the Politecnico di Milano were born fifty years ago.

The building that houses you is historic, and for some years now it has hosted more than just AIRLab…

Indeed, in 2019 different multidisciplinary robotics groups came together in the POLIMI Leonardo Robotics Labs, in this space of over 500 square metres. Our “neighbours” are MERLIN and NEARLab.

What is done in AIRLab, specifically?

We are among the longest-running research groups in Italy on AI, robotics and machine perception. Our researchers are world-class experts in many fields of artificial intelligence, autonomous robotics, human-robot interaction, computer vision, machine learning, and the philosophy of artificial intelligence and robotics. We produce both methodological results and effective solutions for practical problems in many different fields.

Can you introduce us to your robots? How are they used?

What you see are largely mobile robots with a good number of sensors on board: since they have to know where they are going, they must keep the situation under control. There are two main fields of application: agricultural robotics and inspection robotics.

In the first case, their purpose is to detect whether the plants have problems of various kinds. It is a matter of managing targeted interventions on crops, mainly vineyards, within the framework of several European and industrial projects. Here outside the laboratory, you can see a simulation of apple harvesting by one of our mechanical arms mounted on a mobile robot. For these projects we collaborate with the Faculty of Agricultural and Food Sciences of the University of Milan.

In the second, it is a matter of inspecting pipelines to detect problems. Self-localisation is done with different sensors: GPS, lasers, cameras. We have ongoing projects with important companies.

Indeed, much of what is in here has been supplied or purchased thanks to collaborations with companies. But many things are made in the laboratory itself, in an artisanal way.

What does your team do, in particular?

All of these playful-looking creatures you see around here are the robots that we make.

Most of them are robots that help people with disabilities communicate, especially cognitive disabilities, cerebral palsy and autism.

We don’t do therapy; we make games that use robots. I also talked about it in a book on the subject.

What is different about this approach of yours?

First of all, play is a basic element, fundamental in the education of any child. We should add that many of these children with disabilities often cannot play, or are not allowed to play, because they may have relationship problems, or because the toy is not adequate.

On the other hand, the relationship with the robot is an exchange, because the robot is autonomous: it is not like a doll that requires the user to invent a story to bring it to life. The robot reacts on its own.

And you too, sometimes, are faced with unexpected developments in the playful activity, right?

Sure. Once, there was a little girl with Down syndrome who played with a robot, helping it find coloured patches on the floor. At one point she found herself moving the black piece, realizing that the robot was following her, and thus changing the course of the game. When her teammates arrived, she explained the game to them. At that moment, finally, she related to them for the first time.

Another time, a child who noticed that a robot was remote-controlled went to get the remote control, and through it created a relationship with another child he had not approached before.

I will tell you one more episode. A robot had started crying after being pushed to the ground, to make it clear that the action towards it had been violent. The child involved was amazed, and then brought it a yellow piece, the colour of its body, to console it.

What makes robots so special?

The robot allows an action-reaction mechanism, and in our case the reaction is controlled. It’s like when children want to listen to the same fairy tale over and over: for them, it means always experiencing a known reaction, which confirms that what they are doing “is right”.

When the robot is remote-controlled, the therapist can stage actions aimed at facilitating specific situations and reactions.

Can you explain any of your specific projects?

The FROB project, for example, is very interesting. The goal is to develop a methodology and tools to create autonomous robots capable of supporting the play activities of children with physical and cognitive disabilities, helping them to overcome individual limitations and environmental barriers that prevent them from playing independently.

While most of the available toys and robotic prototypes support only one type of interaction, with little playful potential, the characteristic of FROBs is that they are extremely modular.

This is the prototype. As you can see, it moves on wheels and has different modules that you can insert, which make things “happen”. The children themselves can build their own robot with speakers, lights, an accelerometer, and many other modules.
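
To give a flavour of what “extremely modular” can mean in software terms, here is a hypothetical sketch (names and interfaces are invented for illustration, not FROB’s actual code) of a base robot that dispatches play events to whatever modules a child has plugged in:

```python
# Hypothetical sketch of a modular play robot: every plug-in module exposes
# the same minimal interface, so the mobile base can drive whatever the
# child has attached and make things "happen".

class Module:
    """Base interface every plug-in module implements."""
    name = "module"

    def on_event(self, event: str) -> None:
        """React to a play event, e.g. 'bump', 'petted', 'goal'."""
        raise NotImplementedError

class SpeakerModule(Module):
    name = "speaker"
    def on_event(self, event: str) -> None:
        print(f"[speaker] playing a sound for '{event}'")

class LightModule(Module):
    name = "lights"
    def on_event(self, event: str) -> None:
        print(f"[lights] flashing a pattern for '{event}'")

class PlayRobot:
    """Mobile base that forwards play events to all inserted modules."""
    def __init__(self) -> None:
        self.modules: list[Module] = []

    def insert(self, module: Module) -> None:
        self.modules.append(module)

    def handle(self, event: str) -> None:
        for m in self.modules:
            m.on_event(event)

robot = PlayRobot()
robot.insert(SpeakerModule())
robot.insert(LightModule())
robot.handle("bump")  # every inserted module reacts in its own way
```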

We are about to start experimentation in kindergartens, to promote the inclusion of children with disabilities.

What about other projects?

We are working on even more technological projects.

For example, a robot that integrates artificial intelligence with a camera to detect what people are doing, analysing their skeleton and movements. We use it for shared attention with autistic children: I make it clear that I am focusing on an object, and you look at that same object. Being equipped with a camera, the robot can collect objective data for evaluation, which is normally done only afterwards by humans watching videos of the session.
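
To make the idea concrete, here is a toy sketch of how shared attention might be tested from a detected skeleton; the keypoints, geometry and threshold are invented assumptions, not the lab’s actual pipeline:

```python
# Toy joint-attention check: estimate the child's facing direction from two
# skeleton keypoints and test whether it points at the robot's target object.

import numpy as np

def facing_direction(head: np.ndarray, nose: np.ndarray) -> np.ndarray:
    """Approximate facing direction as the head-to-nose vector, normalised."""
    v = nose - head
    return v / np.linalg.norm(v)

def shares_attention(head, nose, target, max_angle_deg: float = 20.0) -> bool:
    """True if the child's facing direction points at the target object."""
    gaze = facing_direction(np.asarray(head, float), np.asarray(nose, float))
    to_target = np.asarray(target, float) - np.asarray(head, float)
    to_target /= np.linalg.norm(to_target)
    angle = np.degrees(np.arccos(np.clip(gaze @ to_target, -1.0, 1.0)))
    return angle <= max_angle_deg

# Toy 3D keypoints (metres): the child faces roughly towards the toy.
print(shares_attention(head=[0, 0, 1.2], nose=[0.1, 0, 1.2],
                       target=[1.0, 0.1, 1.0]))  # True
```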

And that wheelchair?

It is a wheelchair with shared autonomy. It is equipped with a laser that lets it know where it is going. The person using it controls it through interfaces suited to what they can do: operating a joystick, clenching the jaw, breathing. And if they run into difficulty, the wheelchair takes care of safety by avoiding collisions with objects.
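
As an illustration only, a minimal sketch of what such a shared-autonomy safety layer could look like, reduced to a single forward-speed command and invented distance thresholds; the real controller is certainly richer:

```python
# Shared autonomy, toy version: the user commands a speed through whatever
# interface they can use; a safety layer fed by the laser scanner scales or
# vetoes the command before it reaches the motors.

def safe_velocity(user_v: float, laser_ranges: list[float],
                  stop_dist: float = 0.4, slow_dist: float = 1.2) -> float:
    """Scale the user's forward speed by the nearest obstacle ahead."""
    nearest = min(laser_ranges)
    if nearest <= stop_dist:   # obstacle too close: the chair takes over
        return 0.0
    if nearest < slow_dist:    # obstacle nearby: slow down smoothly
        return user_v * (nearest - stop_dist) / (slow_dist - stop_dist)
    return user_v              # path clear: obey the user fully

# Joystick, jaw switch or breath control all reduce to a desired speed:
print(safe_velocity(user_v=0.8, laser_ranges=[2.5, 1.9, 0.9]))  # slowed: 0.5
print(safe_velocity(user_v=0.8, laser_ranges=[2.5, 0.3]))       # stop: 0.0
```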

With that, we also did human-robot interface work with people with very advanced ALS, who cannot move their muscles but have a fully functioning brain. By examining the user’s electroencephalogram, the wheelchair knows where to go. How? If the person focuses on a place, the brain generates a recognition wave, the P300. The system cyclically flashes the names of places, and when we recognize this wave, we learn where the person wants to go.
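
To illustrate the protocol (not the lab’s actual system), here is a toy sketch in which place names are flashed cyclically and a stand-in P300 scorer accumulates evidence over repetitions until one destination wins; the scoring function and the simulated EEG are hypothetical placeholders:

```python
# Toy P300 destination selector: flash each place name repeatedly, score the
# EEG epoch around each flash for a P300-like response, pick the best place.

import random
from collections import defaultdict

PLACES = ["kitchen", "bedroom", "bathroom", "living room"]

def p300_score(epoch: list[float]) -> float:
    """Stand-in for a real P300 detector: higher = more P300-like."""
    return sum(epoch) / len(epoch)

def record_epoch(place: str, intended: str) -> list[float]:
    """Fake EEG epoch: the intended place evokes a slightly larger response."""
    base = 1.0 if place == intended else 0.0
    return [base + random.gauss(0, 0.8) for _ in range(32)]

def select_destination(intended: str, repetitions: int = 15) -> str:
    scores = defaultdict(float)
    for _ in range(repetitions):      # cyclic flashing of all place names
        for place in PLACES:
            scores[place] += p300_score(record_epoch(place, intended))
    return max(scores, key=scores.get)

print(select_destination("kitchen"))  # converges to "kitchen" as reps grow
```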

Although we had won two competitions for the creation of the product, the project found no subsequent practical application: the necessary investment was high, and no one took the risk on such an innovative product.

Unfortunately, it is not uncommon for a project with great potential not to find a practical application…

With the National Cancer Institute, we had developed a lung cancer diagnostic system based on breath, which is collected in a bag and then analysed with an artificial nose and artificial intelligence. On a sample of 200 people, it had given better results than CT scanning; it was also much cheaper and much faster. But to move on to the next phase, at least 2,000 cases were needed, and since we did not have them, the project sank.

You also have a long history of collaboration with the School of Design

Yes, for over 13 years we have been offering a course in “Robotics and Design”, the result of collaboration between our School of Industrial and Information Engineering and the School of Design. Half of the students come from one school and half from the other. Every year we give them a brief, and they build robots, always very original ones.

In this way, we are able to bring together two different worlds: design, characterized by problem setting and many prototypes, and engineering, with the analysis of specifications and the creation of the prototype.

Some products of this collaboration, in addition to proposing new solutions, are also very fun to see

The robots in question do a little bit of everything. Some had to represent a certain type of music and move to its rhythm: for example, one dances to funk, while another does grunge and, at the end of the song, takes its own “life” and loses its head.

There you see the Mexican robot that was meant to invite customers into a Mexican restaurant by offering nachos and then pulling them away, saying “you eat better inside”. Or the beer mug that has 46 different ways of interacting.

Then there is another strand of research, related to the interaction between human beings and robots

Yes, it is based on the robot showing emotions: it tries to convince, it tries to understand. When we approach these robots, they react and express emotions. Some are happy if we pet them; others get scared if we get closer. These robots are also used with children with disabilities, precisely because of these peculiar characteristics of interaction.

Tell us about the robot actor

Our prototype robot actor stands on stage and follows the director’s instructions. With it we also staged a piece from Pinocchio, in which he finds himself not in the whale but in a spaceship. We also tried some improvisation, a preparatory theatrical activity for creating robots that live in people’s homes and find themselves in unexpected situations.

We placed the robots precisely in the theatrical context: the robot enters the scene and understands what kind of movements the other actor makes; it responds within the stage action with gestures and natural language. Even the understanding of sarcasm worked very well. We had to stop, however, at the stumbling block of expressive speech: we have not yet managed to achieve a perfect correspondence between what is said and the way it is said.

You also created an escape room at the X-Cities exhibition. Can you explain how the experiment worked?

For the X-Cities exhibition at the School of Architecture, we created a kind of labyrinth, an escape room: the person taking part in the experience knew where the keys to get out were, but had to convince the robot to find them. The robot was not autonomous: there was another person outside, wearing a headset, who lived the same experience in a virtual world without knowing that their avatar was a real robot. It was a minimal form of telepresence through a robot: two people actually speaking to each other while living in two different worlds.

We also did the opposite, in which the robot knew where the right keys were, and the person had to find them, guided and convinced by the robot’s gestures.

The point was to experiment with different strategies of expressiveness: arms and cameras that must move expressively; robots that become collaborative only if petted, but that first have to convince people to stroke them, using movement alone.

We have created art installations on the same theme. For example, a furry blob, almost repulsive, which nevertheless wanted to be hugged, and had to make people understand this desire of its own.

What is this kind of cardboard room here in the middle of the laboratory?

We built it ourselves. Inside there are several objects popping up from all directions, from above, from the floor. The room perceives reality through these objects. There is also a more secluded spot inside. If a person enters the room gently, it sprays perfume; if they enter with arrogance or violence, it sprays a disgustingly smelly substance.
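
As a playful illustration of this rule (the actual installation is of course far more elaborate), the room’s olfactory reaction could be reduced to a simple threshold on how roughly its objects are handled; the sensor model and threshold here are invented:

```python
# Toy version of the room's rule: sensors on the objects report how roughly
# a visitor handles them; an intensity threshold picks the room's reaction.

def room_response(impact_readings: list[float], threshold: float = 0.6) -> str:
    """Average sensor intensity decides the room's olfactory reaction."""
    intensity = sum(impact_readings) / len(impact_readings)
    return "spray perfume" if intensity < threshold else "spray stink"

print(room_response([0.1, 0.2, 0.15]))  # gentle visitor -> perfume
print(room_response([0.9, 0.8, 0.95]))  # violent visitor -> stink
```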

The room is a distributed place: it does not have a single position in space like a robot, which moves much as we do. There is a person with a headset who perceives what the room perceives; that is, whatever or whoever “enters” them.

It is a theme that also concerns brain plasticity: we are all different, and through our bodies each of us experiences the world in a different way.

Have you carried out other experiments on the experience of space?

Yes. In one case, for example, some people were placed in a real room, wearing headsets, while another person positioned outside lived the experience of the room on their own body, through a series of haptic sensors that vibrated whenever someone squeezed parts or furnishings of the room.

Also with this in mind, in a workshop we equipped artists with sensors: pumps, microphones, accelerometers, distance sensors, and so on. They went on stage with the ability to control robots that moved according to the data from the sensors they wore. We wanted to see what interesting things would emerge from the interaction between person and robot.

Is the goal for robots to be able to transmit expressions at the level of humans?

We don’t want robots to be humanoid, and we are experimenting with very different forms. We are studying basic elements and physical configurations that a person recognizes as emotional expression. This is why we are exploring mechanisms borrowed both from dance, starting from the coding of dancers’ emotional expression introduced by Rudolf Laban, and from cartoons. In the thirties, for example, one of the exercises for Disney animators was to practise with a half-empty sack of flour, conveying emotions to it through various positions and movements. Or think of the Pixar lamp and its expressive capabilities.

Look at those two garbage cans: the one on the right is an emotional garbage can, which uses the movement of its lid to invite people to feed it garbage. The other invites people to enter a museum hosting an exhibition on recycling: when someone approaches, the lid opens and a small radio comes out, moving its antenna expressively.

There are also ethical and philosophical implications, we imagine, regarding the emotions of robots…

I once took part in a Pint of Science event, during which, over a beer with several young people, including a philosophy student, we spent a long time discussing whether a robot can “have” emotions or merely “show” them.

What is an emotion? A physical reaction? If so, then if I can reproduce a physical reaction to an external event, that is an emotion. Others say it is something different, “higher”, ineffable.

But we are not philosophers in here; we are engineers. And we need rules: if you tell me what you want to achieve, if you give me the specifications, we will try to do it. If you tell me what an emotion is for you, we will make it happen.

What is the added value of addressing cognitive disabilities with robots?

The educator proposes games, activities that presuppose interactions between people. The school classes we visit try to educate through these tools. But the autistic child, perhaps, while taking part in the game, goes off on their own.

The robot is an additional tool. It is an autonomous entity: neither a person nor an authority. The autistic child has difficulty interacting when other people are present. The robot shows emotions and helps the child recognize their own.

For example, this puppet with the big red mouth is used to involve people in wheelchairs. The game is to throw balls into the robot’s mouth.

Following the theory of optimal experience, play makes it possible to achieve flow, the balance between challenge and skill. When I’m really involved, there’s only the activity I’m doing. And I can do that only if it’s not too difficult, yet still challenging enough. This applies to everyone, but especially to the child with autism. You have to find a communication channel that conveys the message: “Look: you can do it”.

When was artificial intelligence born?

Artificial intelligence was born in the legendary summer of 1956, at a seminar organized at Dartmouth College, in New Hampshire. Eleven people from all kinds of backgrounds, from computer science to medicine to philosophy, came together to think about how a machine endowed with intelligence could be created.

The early years were dominated by reasoning: putting logical chains together. But there was also much more: managing uncertainty, learning, alternative worlds, recognizing the nuances of reality.

Tell us what happened in that fateful 1973, the date of birth of AIRLab

Many of the people who brought the experience of that time to the Politecnico are now retired. They were young people who had been to Stanford in those seventies, where artificial intelligence was developing: studying during the day and, in the evening, at the pond doing all sorts of things. It was the world where everything we now see in AI was born.

Marco Somalvico was an extremely fascinating character: unpredictable, surprising, brilliant. At Stanford, he had worked with John McCarthy, one of the founders of artificial intelligence, and his collaborators. From that journey in the “machine of the future”, as he called the United States, he had returned full of skills and ideas. Seeing him in a suit and tie, you would never have imagined that during those three years at Stanford he had also led psychological theatre groups and brought home books on Krishna.

Thanks to him, AIRLab was one of the first research groups on AI and Robotics in Italy. “Robotics is everything,” he said. And in fact, it is a discipline that requires many skills: programming, artificial intelligence, electronics, communication and so on. Each of us has specialized in a different area, making AIRLab a very valuable reservoir of skills. This is the beauty of the laboratory.

In 1987 Somalvico even succeeded in bringing the world conference on AI to Milan: IJCAI-87. One of the founders of the Como and Cremona campuses of the Politecnico, he unfortunately died too soon.

Subsequently, how did reflection on artificial intelligence evolve?

Reflection on neural networks was there from the start, precisely because doctors and biologists were always present in the first working groups. And with them cognitive scientists, because it was important to understand how the mind worked.

The job of computer scientists, as I said before, is to create the model that achieves the set goal: others proposed models, and computer scientists implemented them with the computers of the time.

The eighties were the years of behaviourism: robots no longer had to reason, but had to have basic behaviours and reactions, putting together many simple behaviours to arrive at complex ones. People also reflected on whether it was possible to create models of emotions.

Where are we today?

At the beginning, researchers studied natural language: sentence structure and the meaning of words. But this approach failed because the complexity was too high, and the costs did not repay the results.

Today the approach is different: we no longer “feed” the artificial intelligence system knowledge directly; we feed it many examples, counting on the fact that the mechanism we have programmed will learn, building its model on its own.
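
A minimal illustration of this shift, using toy data and an off-the-shelf learner (scikit-learn), purely for the sake of example:

```python
# Instead of hand-coding rules, we give the system labelled examples and let
# it build its own model, which then generalises to cases it has never seen.

from sklearn.tree import DecisionTreeClassifier

# Toy examples: [hours of sun, millimetres of rain] -> vine is healthy? (1/0)
X = [[8, 10], [7, 20], [2, 80], [3, 90], [9, 5], [1, 100]]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier().fit(X, y)  # the model is learned, not written
print(model.predict([[6, 30], [2, 95]]))    # predictions for unseen cases
```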

However, in this way we are losing the people “who know things”. AI always reacts in the same way, impoverishing thinking. It is the mechanism behind the algorithms: they always look for the same things; they assume you always have the same interests. The algorithm, today, uses nothing more than statistics: that is why we still manage to fool it.

Not to mention the mechanisms being implemented by the generative AI chatbots now in vogue, which embed the values of the ruling class: male, American, white, politically correct.

So artificial intelligence is not just “computer science stuff”?

Absolutely not, as I said when recalling the dawn of the discipline.

AI is now a commodity for many. What we do here is invent new methods to obtain results: building models that try to understand what is going on. To do this, skills are needed that go beyond the current models.

One of the negative aspects of the moment we are experiencing is that we are always very focused on the technical side, looking for incremental improvements. What is missing is the “leap”, the paradigm shift. The current paradigm works well in some areas, less so in others. For example, having a robot learn on the fly how to deal with an unexpected situation cannot be achieved with algorithms that require millions of interactions, such as the current ones.

Taking up a cue you gave us earlier, do you find that artificial intelligence is based on a model that is far too human-centred?

A lot of technology is inspired by us. We may wonder whether that is necessary.

The neural network is inspired by the human brain. The arms through which a robot interacts are inspired by our arms. Cameras and light sensors are inspired by the receptors of the human eye. Is there a reason why they should be? Maybe everything would work better in different forms.

Can artificial intelligence no longer be turned off?

Exactly. There are so many actors involved that they would all have to be turned off, and that is not possible. The Chinese will certainly not turn anything off. The Americans are trying to keep everything at home, for as long as they can. But no one stops.

What does the future hold?

I am always very cautious about the future. Two or three years ago we did not imagine what we have today. Ten-year forecasts are always useless.

It seems to me that this is the phase of “let’s do things that can be useful”. Which then means making money, winning the war, and so on.

Here at AIRLab we are moving forward with our work on the interaction of robots with people. We are not working much on natural language, because there are already so many more powerful organizations doing it around the world. What we do in our laboratory is a bit niche.

As per tradition, can you recommend any books on the subject?

For those who want to know the scientific basis, the textbook we use with our students is Russell and Norvig, “Artificial Intelligence: A Modern Approach” (Pearson).

Ted Chiang’s science fiction stories are very plausible and raise interesting sociological and philosophical questions to reflect on. I recommend two collections: “Exhalation” and “Stories of Your Life and Others” (Vintage).

On the (more or less physical) Metaverse, the reference is “Snow Crash” by Neal Stephenson (Del Rey), where the term “avatar” was first used to indicate the virtual being through which an individual impersonates himself. An enjoyable and compelling novel.

On robots and play for children with disabilities, there is “Robot Play for All” (Springer), which I wrote together with Serenella Besio. Not a novel, but a rigorous guide resulting from the activities of our laboratory and the experience of our colleague: another example of multidisciplinary collaboration.