Microsoft (MSFT.O) introduced a lightweight artificial intelligence model on Tuesday, aiming to attract a wider customer base with more affordable options.
The company launched Phi-3-mini, the first of three small language models (SLMs) it plans to release, as it bets its future on a technology expected to have a broad impact on the world and the way people work.
“Phi-3 is not slightly cheaper, it’s dramatically cheaper, we’re talking about a 10x cost difference compared to the other models out there with similar capabilities,” said Sébastien Bubeck, Microsoft’s vice president of GenAI research.
According to the company, SLMs are designed to perform simpler tasks, which makes them easier to adopt for businesses with limited resources.
Microsoft said Phi-3-mini is available immediately in the AI model catalog of its Azure cloud platform, on the machine learning model platform Hugging Face, and through Ollama, a framework for running models on a local machine.
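For readers who want to try the model, here is a minimal sketch of how it could be loaded through Hugging Face's transformers library; the model identifier microsoft/Phi-3-mini-4k-instruct and the example prompt are assumptions for illustration, not details drawn from the announcement.

```python
# Minimal sketch of pulling a Phi-3-mini checkpoint from Hugging Face.
# The model id "microsoft/Phi-3-mini-4k-instruct" is assumed for illustration;
# check the Hugging Face hub for the exact identifier Microsoft publishes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion to confirm the model loads and runs.
prompt = "Explain in one sentence what a small language model is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Ollama users could instead run the model locally with a single command (for example, `ollama run phi3`, assuming that is the tag Ollama publishes), which downloads the weights and opens an interactive prompt.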
The SLM has been optimized for Nvidia's (NVDA.O) graphics processing units (GPUs) and will also be accessible through Nvidia Inference Microservices (NIM), the chipmaker's software product for deploying models.
Microsoft invested $1.5 billion in the UAE-based AI firm G42 last week, and has previously partnered with the French startup Mistral AI to make its models available through the Azure cloud computing platform.