Meta, IBM & Startup Collaborating to Assess the Safety of AI Models

To assess generative AI models used in high-risk industries, IBM and Meta have partnered with HydroX AI, a startup that builds tools to secure AI models and services.

The San Jose, California-based startup, founded in 2023, has developed an assessment tool that lets companies evaluate the safety and security of their language models.

HydroX will collaborate with IBM and Meta to assess language models across a range of industries, including the legal, financial, and healthcare sectors.

Together, the three will develop benchmark tests and toolkits to help enterprise developers ensure their language models are secure and compliant before they are deployed in specific industries.

“Each domain presents unique challenges and requirements, including the need for precision/safety, adherence to strict regulatory standards and ethical considerations,” said Victor Bian, HydroX’s chief of staff. “Evaluating large language models within these contexts ensures they are safe, effective, and ethical for domain-specific applications, ultimately fostering trust and facilitating broader adoption in industries where errors can have significant consequences.”

Benchmarks and similar tools can be used to assess a language model’s performance, giving model owners a measure of how well their model handles particular tasks.

According to HydroX, model owners currently lack the tests and resources needed to verify that their systems are safe for use in high-risk industries.

The startup is now collaborating with two major tech firms that have previously worked on AI safety.

Meta previously created Purple Llama, a collection of tools intended to support the safe deployment of its Llama family of AI models. At the recent AI Safety Summit in South Korea, IBM was among the tech companies that pledged to disclose the safety precautions they take when building foundation models.

Meta and IBM were founding members of the AI Alliance, an industry group that promotes ethical and transparent AI research. HydroX has also joined the alliance and will collaborate with other member organizations while providing its assessment resources.

“Through our work and conversations with the rest of the industry, we recognize that addressing AI safety and security concerns is a complex challenge while collaboration is key in unlocking the true power of this transformative technology,” Bian said. “It is a proud moment for all of us at HydroX AI and we are hyper-energized for what is to come.”

The AI Alliance also includes universities such as Yale and Cornell, as well as companies like Hugging Face, AMD, and Intel.