European Parliament wants designers to integrate a kill switch in robots


The European Parliament has passed a motion calling on the European Commission to draw up rules for a mandatory ‘kill switch’ for robots. Designers would be required to build such a switch into their designs for advanced robots and artificial intelligence.

The kill switch is part of a concept designer’s licence appended to the motion. This licence, which European robot designers may have to adhere to in the future, states that designers must integrate clear interruption mechanisms, or ‘kill switches’, that are consistent with the robot’s design objectives. The motion gives no further explanation of how and under what conditions the kill switch should be applied.

The main reason for the motion is the rapid development of robots and artificial intelligence and the resulting need for rules that clarify who is liable for damage caused by autonomous robots. In the long term, the European Parliament proposes creating a specific legal status for the most advanced robots. Such a special ‘electronic’ legal personality would mean that robots are regarded, to a certain extent, as persons before the law, including the rights that come with it.

Particularly with regard to self-driving cars, the motion argues for a mandatory insurance scheme: producers or owners of such robots would be obliged to take out insurance for the damage they cause. For damage not covered by insurance, a compensation fund should be established so that victims are still compensated. The motion also proposes that manufacturers, programmers, owners or users who contribute financially to this fund could have their liability limited.

The autonomy of the robot that caused the damage may in future become a decisive factor in determining the designer’s degree of liability. The motion considers that liability should be proportionate to the actual level of instructions given to the robot and its degree of autonomy: the greater a robot’s learning capacity or autonomy, and the longer its ‘training’ has lasted, the greater the legal liability of its ‘trainer’.

The European Parliament wants to make a clear distinction between skills the robot has acquired through training and skills attributable purely to its self-learning abilities. This makes clear that, according to the motion, responsibility must for the time being lie with humans and not with the robot.

The motion, which opens playfully with a short reflection on Frankenstein, the Golem of Prague and Karel Čapek’s robot, was passed with 396 votes in favour, 123 against and 85 abstentions. The European Commission is not obliged to follow the motion’s recommendations, but if it declines to act on them it must explain its reasons.
