Angelica Sirotin

What Do We Do When AI & Robots Break the Law? Q&A With Renowned AI Researcher and Expert Maria Gini



Maria Gini is a Professor in the Department of Computer Science and Engineering at the University of Minnesota. She studies decision making by autonomous agents for robot exploration, distributed methods for task allocation, teamwork, and AI in healthcare. She is Editor-in-Chief of Robotics and Autonomous Systems and serves on the editorial boards of numerous journals in artificial intelligence and robotics. She is a Fellow of the Association for the Advancement of Artificial Intelligence, a Fellow of the IEEE, a Distinguished Professor of the College of Science and Engineering at the University of Minnesota, and the winner of numerous awards.

I sat down with Professor Gini some time ago for an exclusive interview where I asked her thought-provoking questions on the subject of AI.

If AI reaches the level of human intelligence and consciousness, do you think it should be considered human and thereby subject to our legal system? What do we do when a robot breaks the law?


There is discussion right now over whether or not robots should have rights. If you think about AI by itself, it is just a brain, just a computer. However, when you attach an AI to the body of a robot, and the robot looks like a human, then people all of a sudden wonder whether or not it should have rights. There was a world forum a year ago where a list of problems was discussed, and one set of those problems had to do with AI: ethics, killer robots, and robot rights.


A great example of the robot rights question is Sony's AIBO dog. It was very interesting to read all of these blogs about AIBO where people wrote that you should never mistreat an AIBO because it is a dog, even though it is actually a robot. In this sense, humans are more sensitive toward AIBO because it is a cute AI. But then again, would people feel bad if there was a factory filled with working robots that looked like humans, and somebody came and shot all of them? I don't think we are used to the idea of entities that are not living things having rights. Perhaps we'll decide not to give robots any rights at all. The question is still very much open.


On the topic of killer robots, the discussion right now is really centered on whether or not drones should be able to kill humans autonomously, without a human controlling them. Right now, drones have the ability to drop bombs without any human control, but they don't. There is a big movement in the AI community to go to Geneva and push for a discussion and a resolution on killer robots, because the Geneva Convention clearly states that only a human can kill another human in the event of war. Thus, if a robot kills a human in this situation, the robot violates the Geneva Convention. The issue is that we should not allow robots to make independent decisions when it means killing somebody. Maybe a robot cannot kill a person, but can a robot harm a person? Right now the focus is on the extreme cases of dropping bombs and killing people; however, there is a big spectrum of legal issues that will come into the picture. For instance, if a robot intentionally or unintentionally hurts a person, what do you do to the robot? Put it in prison? Shut down the power? The problem is that our legal system and its punishments are designed for people, not robots. This topic will open up many interesting issues, and will definitely have to be addressed in the future.
