Experts, professors and tech employees are calling for a temporary halt to AI development


In an open letter, a group of experts, philosophers, investors and developers calls on AI companies to temporarily halt the development of artificial intelligence. Apple co-founder Steve Wozniak, writer Yuval Noah Harari and politician Andrew Yang are among the signatories.

The open letter has now been signed by more than 1,100 people. The signatories include founders of tech companies, professors, developers and members of consultancy organizations and think tanks, including several that focus specifically on artificial intelligence and its ethical implications. However, the list also contains obvious joke entries, such as "John Wick of The Continental," so the total number of signatories is not fully representative. Moreover, anyone can add their name to the list without verification, and at least one listed signatory has already said that he did not sign the manifesto. The list also includes signatories who run AI start-ups themselves or conduct AI research.

The letter writers argue that the development of artificial intelligence can have major consequences for how society is organized, and that these consequences should be thought through before development continues. They point, for example, to the automation of jobs and to growing amounts of "propaganda and untruths," although they do not give specific examples of the latter. The letter writers also ask whether "we risk loss of control of our civilization," but they do not elaborate on this point either.

The authors refer to a manifesto from 2017 that sets out principles which advanced AI should adhere to. They argue that AI developers must first understand exactly how artificial intelligence works and what its possible risks are.

In the letter, the group specifically calls on all AI companies to take an immediate six-month pause "on training all AI systems more powerful than GPT-4." That pause should be "public and verifiable." "If such a pause cannot be enacted quickly, governments should step in and institute a moratorium," the authors write.

During the pause, developers should establish shared rules for AI development: safety protocols that can be audited by external experts, according to the letter writers. They also argue that before even more powerful artificial intelligence is built, current systems such as GPT-4 should first be refined, becoming more accurate, safer, more interpretable and more transparent.
