Magic AI Inc., a startup developing generative artificial intelligence-powered coding tools, is reportedly looking to raise more than $200 million in a new investment round that would bring its valuation to $1.5 billion.
The conversations are taking place just months after the company raised $23 million in Series A funding. According to three unnamed sources, Jane Street is in talks to lead the next round, which would triple Magic AI’s valuation even though the company has yet to generate revenue or release a product.
Magic AI was most recently valued at $500 million following its previous investment round in February. Since its founding in 2022, the company has raised more than $140 million from investors including Alphabet Inc.’s CapitalG and Nat Friedman and Daniel Gross’s NFDG Ventures.
The funding conversations show the immense promise that investors perceive in generative AI coding solutions. Enterprises pay a lot of money to hire software engineers, and many struggle to find enough of them, so AI technologies that can produce code or aid developers are quite appealing.
Generative AI underpins existing tools such as GitHub Inc.’s Copilot and OpenAI’s ChatGPT, both of which can suggest improvements to lines of code written by engineers. But while those tools assist with code production, companies like Magic AI aim to go further by automatically generating the code for entire applications.
Magic AI’s tool, details of which have not been made public, is believed to let software engineers use natural language to describe the type of application or function they are trying to build. Magic AI then writes all of the code required to create the app. The startup’s founders describe its tool as “software that builds software,” allowing engineers to use AI to find code, analyze its usefulness, reuse it and collaborate on code improvements.
In other words, Magic AI describes its tool as a “colleague inside the computer,” an intelligent agent capable of doing the heavy lifting involved in writing and editing code.
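Magic AI has not released its tool or published an API, but the workflow described above, in which a developer states what the application should do in plain language and the system returns working code, can be sketched with any general-purpose LLM. The snippet below is a purely illustrative example using OpenAI’s public Python SDK; the model name, prompt and output file are assumptions made for the example, not details of Magic AI’s product.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Natural-language specification of the desired application (hypothetical example).
spec = (
    "Write a command-line to-do list app in Python that stores tasks "
    "in a local JSON file and supports add, list and complete commands."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; any capable code-generation model could be used
    messages=[
        {"role": "system", "content": "You are a coding agent. Reply with runnable Python code only."},
        {"role": "user", "content": spec},
    ],
)

# Save the generated program so the developer can review and run it.
generated_code = response.choices[0].message.content
with open("todo_app.py", "w") as f:
    f.write(generated_code)
print("Generated", len(generated_code), "characters of code")
```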
Magic AI is one of several startups pursuing the goal of AI-generated software. Encouraged by the success of GitHub’s Copilot, investors have poured millions of dollars into such startups in recent months.
One of the most recent investment rounds occurred in April, when the AI coding assistant startup Augment Inc. raised a $227 million Series B round, increasing its valuation to $977 million. Meanwhile, Cognition AI Inc., the company behind Devin, received a $175 million investment round led by Founders Fund in March, raising its valuation to $2 billion.
There are many other well-funded competitors in the market, including Amazon Web Services Inc. and Google LLC, which offer their CodeWhisperer and Gemini Code Assist tools that compete with Microsoft Corp.-owned GitHub’s Copilot, as well as various startups that have not yet secured multimillion-dollar funding rounds, such as Tabnine Ltd., Codegen Inc., Refact, Laredo Labs Inc., and TabbyML Inc.
Others, like the French AI coding startup Poolside AI, are aiming to raise millions from investors.
Investors regard the success of GitHub’s Copilot as evidence of how large the generative AI coding market could become. Over the past year, GitHub’s revenue climbed 40%, with its AI coding service, which now has more than 1.3 million paying users, accounting for the majority of that growth.
“The success of Microsoft has validated the commercial market for AI code assistants, leading everyone to believe there is clear market demand and a customer willingness to pay for the right product,” Brian Dudley, a partner at venture capital firm Adams Street Partners, told Reuters.
However, building a generative AI coding assistant is costly. Startups must assemble massive datasets to train their underlying large language models, and they need large amounts of energy-intensive computing power to carry out the training.
According to Reuters, Magic AI will use the new funding to develop its coding assistant models, which are designed to handle long context windows, allowing them to process more data in a single query.
The company has previously said that its models’ ability to interpret and absorb lengthy prompts and large amounts of context stems from a novel LLM architecture that goes beyond the standard transformer design underpinning models such as OpenAI’s ChatGPT.
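Magic AI has not disclosed its architecture or the size of its context windows, but the practical significance of a long context window is straightforward: the larger the window, the more of a codebase a model can take in as part of a single query. The rough sketch below uses OpenAI’s open-source tiktoken tokenizer to estimate how many tokens a repository occupies and whether it would fit in two hypothetical window sizes; the directory name and window sizes are assumptions for illustration only, not figures from Magic AI.

```python
import pathlib
import tiktoken  # OpenAI's open-source tokenizer, used here only to estimate token counts

enc = tiktoken.get_encoding("cl100k_base")

# Rough token count for every Python file in a hypothetical repository.
repo = pathlib.Path("my_project")
total_tokens = sum(
    len(enc.encode(p.read_text(errors="ignore")))
    for p in repo.rglob("*.py")
)

# Illustrative context window sizes in tokens (not Magic AI's actual figures).
for name, window in [("8K-token model", 8_192), ("128K-token model", 131_072)]:
    verdict = "fits" if total_tokens <= window else "does not fit"
    print(f"{name}: repository of ~{total_tokens} tokens {verdict} in a single prompt")
```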