Google, Microsoft, OpenAI and others pledge to make AI safer


Seven AI makers have pledged to the US government to take measures to make their artificial intelligence safer. The companies say they will implement the commitments immediately. The pledges are voluntary, with no consequences if they are broken.

According to the US government, the commitments are meant to make the companies' AI safer and more reliable. For example, the companies promise to test their AI both internally and externally, with the tests partly carried out by independent experts and focused on the greatest risks. They also pledge to share information about AI risks within the industry and with governments and scientists, and to continue researching such risks.

The companies further pledge to invest in cybersecurity to protect key AI components and to prevent unsafe AI from being released. They also promise to enable third-party research and to let outside researchers report vulnerabilities in their AI.

In terms of reliability, the AI makers promise to use techniques such as watermarks to indicate when content was created by AI. They also promise to clearly state what their AI can do and where it falls short. Finally, the companies commit to developing AI systems that serve societal goals, such as research into cancer and climate change.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI have made the commitments to the US government. The government is also working on an executive order to codify AI measures, although it has not yet provided any details about this.
