The startup, MatX, will build processors designed specifically for large language models (LLMs). It has already raised $25 million, and AI investors Daniel Gross and Nat Friedman recently joined its funding round.
Co-founders Reiner Pope and Mike Gunter told Bloomberg in an interview that while Google had made strides toward making LLMs run faster, the company’s goals were too broad. That left them feeling they had to strike out on their own to focus on building chips for the data processing that powers LLMs.
At Google, Pope wrote AI software, while Gunter designed the chips and other hardware it ran on.
The two are now betting that MatX-designed processors will outperform Nvidia’s GPUs by at least ten times at training LLMs and generating output, according to Bloomberg. In theory, this will be achieved by stripping out what the founders described as the “extra real estate” GPUs carry in order to support a wide range of processing tasks.
Rather than building multipurpose circuits, MatX will design chips around a single large computing core. With “dozens of employees already hired,” the company aims to finalize the first version of its product by 2025.
“Nvidia is a really strong product and clearly the right product for most companies… but we think we can do a lot better,” Pope told Bloomberg.