‘Microsoft is working on its own AI chip for training large language models’
Microsoft may be working on its own AI chip that could be used to train large language models, insiders have told The Information. The company reportedly wants to become less dependent on expensive chips from other suppliers.
According to The Information's anonymous sources, Microsoft has been working on its own AI chip, codenamed Athena, since 2019. A small group of Microsoft and OpenAI employees reportedly already has access to the chip. According to The Information, the company hopes that Project Athena will perform better than the chips it currently buys from other suppliers, which should save time and money on its AI work. Other major tech companies, such as Amazon and Google, are also working on their own AI chips.
According to the sources, the chip is intended to train large language models and support inference, for example for AI models such as GPT-4, which the company uses in Bing and Office. Microsoft may make the chips more widely available 'as early as next year'. However, it is not known whether the chips will become available to customers using Microsoft Azure or whether they are intended solely for internal use. Microsoft is also said to have an internal roadmap with future generations of the chip.
Chips for training AI models are currently produced mainly by Nvidia, which offers its A100 and H100 accelerators for this purpose. With its own chips, Microsoft hopes to cut the cost of purchasing AI hardware. However, according to the sources, the tech giant does not see its chips as a 'broad replacement for Nvidia's products'.
There have been reports for some time that Microsoft is working on its own chips. In 2020, Bloomberg reported that the company was developing Arm chips for use in servers and consumer Surface devices. Those rumoured chips have not yet appeared, although the company has previously collaborated with Qualcomm on Arm chips for its Surface lineup.