Experts, professors and tech workers are calling for a temporary halt to AI development
A group of experts, philosophers, investors and developers has issued an open letter calling on AI companies to temporarily halt the development of artificial intelligence. Apple co-founder Steve Wozniak, writer Yuval Noah Harari and politician Andrew Yang, among others, signed the letter.
The open letter has now been signed by more than 1,100 people. The signatories include founders of tech companies, professors, developers and members of advisory organizations and think tanks, including some that focus specifically on artificial intelligence and its ethical implications. However, the list also contains joke entries, such as John Wick of The Continental, so the total number of signatories is not fully reliable. Moreover, anyone could add their name to the list without verification; at least one listed signatory has since said that he did not sign the letter. The list also includes signatories who own AI start-ups or conduct AI research themselves.
The letter writers state that the development of artificial intelligence can have major consequences for how society is structured, and that these should be considered before development continues. They point, for example, to the automation of jobs and to growing amounts of ‘propaganda and untruths’, although they do not cite specific examples for the latter. The writers also ask whether ‘we don’t run the risk of losing our society’, but do not elaborate on that point either.
The writers refer to a manifesto from 2017 that prescribes rules advanced AI should comply with. They believe that AI developers should first understand how artificial intelligence works and what its potential risks are.
In the letter, the group specifically calls on all AI companies to take an immediate six-month hiatus “from training all AI systems more powerful than GPT-4.” That pause should be “public and verifiable.” “If such a pause cannot be put in place quickly, politicians should step in and enforce a moratorium,” the letter states.
During the pause, developers should establish common rules for AI development. According to the letter writers, these must be safety protocols that can be audited by external experts. The writers also argue that current systems such as GPT-4 should be thought through more carefully before even more powerful artificial intelligence is built: existing AI should first become more accurate, safer, more understandable and more transparent.