Anthropic and Menlo Ventures have established the Anthology Fund, a $100 million fund for artificial intelligence (AI) startups.
“We created this fund to fuel the next generation of AI startups through the powerful combination of Menlo’s extensive company-building experience and Anthropic’s pioneering AI technology and deep research expertise,” Menlo said in a news release Wednesday (July 17).
“Through this collaboration, we aim to catalyze innovation and shape the future of artificial intelligence in the startup ecosystem.”
According to the companies, the name "Anthology" is a nod to "Anthropic," but it also symbolizes their shared goal of assembling a "curated collection" of AI pioneers who will collaborate.
“Just as an anthology represents a collection of diverse works of art that form a masterpiece, our fund connects visionary entrepreneurs with Anthropic’s groundbreaking technology and Menlo’s venture expertise to fuel revolutionary advancements,” the announcement said.
Menlo, one of Silicon Valley’s original venture capital firms, is already an investor in Anthropic, which is backed by Amazon. The firm announced last year that it had raised $1.3 billion for investments in emerging AI companies.
With investments starting at $100,000, The Anthology Fund will back companies at every stage of development, from seed to growth.
Meanwhile, PYMNTS reported earlier this month on Anthropic’s new funding initiative for advanced AI assessments, noting that industry observers believe it could accelerate the adoption of AI across a variety of commercial domains.
The program fills a critical gap in a rapidly developing field by helping third-party companies create innovative techniques for evaluating AI risks and capabilities.
The project aims to provide more reliable standards for sophisticated AI applications, an effort that could have a billion-dollar economic impact. Businesses looking to implement AI technologies have struggled to achieve broad adoption due to a lack of thorough evaluation tools.
“We’re seeking evaluations that help us measure the AI Safety Levels (ASLs) defined in our Responsible Scaling Policy,” Anthropic said in its announcement.