While most AI companies eagerly reveal their latest models through blog posts and press tours, some appear content to simply dump their offerings into the digital void, like a pirate ship offloading dead weight. Mistral, a French AI startup, falls into the latter category. Over the weekend, the company posted a nondescript torrent link to X containing its most recent large language model, with no explanation attached.
Mistral has been making waves with its fast, capable LLMs and its lighthearted, carefree hacker mentality. It recently raised $415 million in a Series A funding round and is currently valued at $2 billion. The company’s abrupt announcement of its newest model drew memes and praise on X. One commenter wrote: “No blog, no sizzle, no description — just a torrent with the model files…Mistral understands their primary audience to be engineers and knows their cultural erogenous zones.”
The model, known simply as Mixtral-8x7B, got a fuller introduction in a blog post the company published on Monday, the day after its surprise release. Benchmarks in that post show Mistral’s model outperforming offerings from some of its American rivals, including OpenAI’s GPT-3.5 and Meta’s Llama 2 family. Early users appear to agree that the new model is genuinely good, with many praising how fast and enjoyable it is to use.
Mixtral-8x7B has the added benefit of being open source, in contrast to the ironically named OpenAI, which has kept its most recent LLMs closed source and drawn some backlash as a result. Mistral is, in fact, committed to open-sourcing all of its AI software, placing it squarely on one side of the culture war developing within the AI sector. Arthur Mensch, Mistral’s CEO and co-founder, recently commented on that choice, saying his company is dedicated to “an open, responsible and decentralised approach to technology.”
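Because the weights are openly distributed, developers can try the model themselves. The snippet below is a minimal sketch of loading and prompting Mixtral-8x7B with the Hugging Face transformers library; the "mistralai/Mixtral-8x7B-v0.1" repository ID, and the assumption that you have a recent transformers release plus enough GPU memory (or offloading) for the full weights, are illustrative assumptions rather than details from Mistral's announcement.

```python
# Minimal sketch: loading and prompting Mixtral-8x7B via Hugging Face transformers.
# Assumes the weights are mirrored at "mistralai/Mixtral-8x7B-v0.1" (assumed repo ID)
# and that enough GPU memory (or CPU/disk offloading) is available for the full model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers across available devices (requires accelerate)
    torch_dtype="auto",  # load in the dtype the checkpoint was saved in
)

prompt = "Explain mixture-of-experts language models in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```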