Security Startup Pillar Secures $9 Million to Address AI-Specific Risks

Pillar Security, a startup focused on artificial intelligence (AI) security, has secured $9 million in seed funding to enhance its research and development (R&D) initiatives and go-to-market strategies.

Pillar Security CEO and Co-founder Dor Sarig stated in a press release on Wednesday (April 16) that the company’s solution is tailored for an era where “software has gained agency and data itself has become executable.”

“We are redefining application security to adapt to the agentic and autonomous software of the Intelligence Age,” Sarig said. “Pillar’s technology is built with this understanding, backed by real-world AI threat intelligence, delivering a new class of protection designed explicitly for AI-related security risks.”

According to the announcement, the company’s security platform is built specifically for software systems that incorporate AI and covers AI-specific risk areas such as data poisoning, evasion attacks, data privacy, and intellectual property leaks.

The platform validates AI models, enforces guardrails that proactively prevent failures, automatically maps all AI-related assets across the business, and integrates with an organization’s existing code repositories, data infrastructure, and AI/ML platforms.

“At a time when businesses are deploying more agentic AI solutions and the threat surface is expanding, Pillar understands that securing software takes more than incremental improvements,” Elias Manousos, partner at Shield Capital, which led the funding round, said in the release.

“Their innovative approach sets a new benchmark for how enterprises secure and govern intelligent systems,” Manousos said.

According to a March report, AI agents differ from rules-based bots, which follow preset actions. Built on top of generative AI models, AI agents have the autonomy and decision-making ability to accomplish a task assigned by the user, along with the capacity to understand, adapt, and learn.

Accordingly, chief financial officers should approach agentic AI the same way they have approached previous automations: analyzing which processes stand to gain, determining which expenses can be eliminated, recognizing the potential benefits of faster work, and weighing the financial and reputational risks, MIT Sloan School of Management senior lecturer George Westerman said in a March report.

According to the report “COOs Leverage GenAI to Reduce Data Security Losses,” 55% of chief operating officers say their organizations have already put AI-based automated cybersecurity management systems in place, a threefold increase from earlier in the year.