How Apple Trained Its AI Models With Assistance From Google
- Technology
- June 12, 2024
Apple (AAPL.O), led by CEO Tim Cook, announced on stage on Monday a major partnership with OpenAI to integrate OpenAI's powerful AI model into Siri, the company's voice assistant.
But Apple made clear in the fine print of a technical document published after the event that Alphabet's Google (GOOGL.O) has emerged as another winner in the Cupertino, California, company's race to catch up in artificial intelligence.
To build its foundation AI models, Apple's engineers used the company's own framework software with a range of hardware, including its own on-premise graphics processing units (GPUs) and chips called tensor processing units (TPUs) that are available only on Google's cloud.
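The document does not spell out the software stack here, but training on Google's TPUs is commonly done through a framework such as JAX, which runs the same program on TPUs or GPUs. The following is a minimal, hypothetical sketch of data-parallel training across accelerator cores; the model, sizes, and learning rate are invented for illustration and are not Apple's.

```python
import functools
import jax
import jax.numpy as jnp

# On a Cloud TPU VM this lists TPU cores; on a GPU machine it lists GPUs.
# The same training code below runs unchanged on either backend.
devices = jax.local_devices()

def init_params(key):
    # A toy two-layer network standing in for a real foundation model.
    k1, k2 = jax.random.split(key)
    return {
        "w1": 0.02 * jax.random.normal(k1, (512, 1024)),
        "w2": 0.02 * jax.random.normal(k2, (1024, 512)),
    }

def loss_fn(params, x, y):
    h = jax.nn.relu(x @ params["w1"])
    return jnp.mean((h @ params["w2"] - y) ** 2)

@functools.partial(jax.pmap, axis_name="cores")
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    # Average gradients across all cores: plain data-parallel training.
    grads = jax.lax.pmean(grads, axis_name="cores")
    return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)

# Replicate the parameters onto every core and shard a dummy batch,
# one slice per core (leading axis = number of local devices).
n = len(devices)
params = jax.device_put_replicated(init_params(jax.random.PRNGKey(0)), devices)
x = jnp.zeros((n, 8, 512))
y = jnp.zeros((n, 8, 512))
params = train_step(params, x, y)
```

The point of the sketch is the portability: a framework that abstracts the accelerator lets the same training loop target on-premise GPUs and cloud TPUs alike, which is consistent with Apple using both kinds of hardware.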
Google has been building TPUs for roughly a decade and has made two flavors of its fifth-generation chips, which can be used for AI training, publicly available. The performance version of the fifth generation delivers performance competitive with Nvidia's (NVDA.O) H100 AI chips, according to Google.
Google announced at its annual developer conference that a sixth generation will launch this year.
Google designed the processors specifically for running AI applications and training models, and it has built a cloud computing platform of hardware and software around them.
Apple and Google did not immediately respond to requests for comment.
Apple did not disclose how heavily it relied on Google's chips and software compared with hardware from Nvidia or other AI vendors.
But using Google's chips typically requires a client to purchase access through its cloud division, much as customers buy computing time from Microsoft's (MSFT.O) Azure or Amazon.com's (AMZN.O) AWS.