Scientists pledge not to help develop AI weapons
At the International Joint Conference on Artificial Intelligence (IJCAI), thousands of scientists signed a pledge stating that they will not participate in the development of lethal autonomous weapons. Tech leaders Elon Musk of SpaceX and Demis Hassabis of Google DeepMind also signed the pledge.
Artificial Intelligence weapons
Weapons that use AI to select and attack targets without human intervention can be dangerous. These weapons are a sensitive issue both morally and in practice. On the moral side, a human life should not depend on a machine's decision. In practice, such AI weapons can destabilize countries and their populations.
This step is the latest in a series of efforts to address the danger of AI systems that decide over life and death. The pledge comes from the Future of Life Institute (FLI), an organization that aims to reduce the risks and challenges posed by technology. The FLI advocates standards and laws to stop the development of killer robots. These do not yet exist, but the signatories have pledged to "neither participate in nor support the development, creation, sale, and use of lethal autonomous weapons." More than 150 AI companies and organizations put their names under the pledge at the conference in Stockholm.
The danger of autonomous weapons
Developments in AI are moving fast. Weapons are becoming increasingly sophisticated, so AI weapons are now within reach. That is why this is the right moment to respond to the looming danger. FLI thinks so too: with the signatures it wants, at the very least, to establish the norm that autonomous weapons are not acceptable.
The pledge is also a way to publicly shame organizations that produce autonomous weapons, which may help reduce the production of such weapons. Much AI technology is already used by the military: aerial, ground, and underwater robots are already deployed.
Fight against AI weapons
This is not the only step taken against the dangerous side of AI. Last month, Google employees protested against the company's support for the development of AI for military drones. Amazon also came under pressure after sharing its facial recognition technology.
AI offers many opportunities, but it also brings hazards. It is important to draw global attention to these dangers. The FLI pledge is a good first step toward stopping organizations from producing autonomous weapons.