Google’s AI-powered healthcare search engine is now accessible to the entire public

Vertex AI Search for Healthcare is designed to rapidly query a patient’s medical history. Google says the technology will ease payers’ and providers’ administrative workloads.

Dive Brief:

  • On Thursday, Google Cloud launched a generative AI search engine that lets payers and providers search through patient notes, scanned documents, and other data for clinical information, expanding the company’s collection of AI products for healthcare professionals.
  • Vertex AI Search for Healthcare was first made available in a limited capacity in March. Google also expanded its Healthcare Data Engine, which offers a longitudinal patient record in an industry-standard format.
  • Google says the tool is designed to guard against AI “hallucinations,” or false results.

Dive Insight:
According to a survey from The Harris Poll and Google Cloud, claims employees and clinicians spend 27 and 36 hours a week, respectively, on administrative duties such as documentation.

Google says Vertex AI Search aims to reduce that time by helping staff query different parts of a patient’s medical record. The tool integrates search with Google’s large language models, including MedLM and Gemini 1.5 Flash.
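To make the workflow concrete, here is a minimal sketch of what a clinical query could look like through Google Cloud’s Discovery Engine Python client, which underlies Vertex AI Search. The project, location, and data store identifiers below are placeholders, and a healthcare-specific deployment may require configuration beyond what is shown.

```python
# A minimal sketch of querying a patient-record data store through
# Vertex AI Search, via the google-cloud-discoveryengine client library.
from google.cloud import discoveryengine_v1 as discoveryengine

# Placeholder identifiers; a real deployment supplies its own values.
PROJECT_ID = "my-health-project"
LOCATION = "us"
DATA_STORE_ID = "patient-records-store"

serving_config = (
    f"projects/{PROJECT_ID}/locations/{LOCATION}"
    f"/collections/default_collection/dataStores/{DATA_STORE_ID}"
    f"/servingConfigs/default_search"
)

client = discoveryengine.SearchServiceClient()

request = discoveryengine.SearchRequest(
    serving_config=serving_config,
    query="history of hypertension medication changes",
    page_size=5,
)

# Each result points back to a source document, such as a clinical note.
for result in client.search(request=request):
    print(result.document.id)
```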

Vertex AI Search for Healthcare was first previewed at the HIMSS conference in March, and organizations including Community Health Systems and Highmark Health have already put it to use.

According to the announcement, Google also says Vertex AI will link its responses to internal data sources and cite them, a process known as “grounding,” to boost providers’ trust in the tool’s answers.

The safeguard responds to growing concern that generative AI can produce inaccurate or misleading information.
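To illustrate what grounding looks like in practice, the sketch below extends the earlier example by requesting a generated summary with citations attached. The summary-related fields follow the public Discovery Engine API, but the query and the data store path are illustrative placeholders.

```python
# A hedged sketch of requesting a grounded, citation-bearing summary
# from Vertex AI Search via the google-cloud-discoveryengine client.
from google.cloud import discoveryengine_v1 as discoveryengine

# Placeholder serving-config path, as in the earlier sketch.
serving_config = (
    "projects/my-health-project/locations/us"
    "/collections/default_collection/dataStores/patient-records-store"
    "/servingConfigs/default_search"
)

client = discoveryengine.SearchServiceClient()

# Ask the engine to summarize the top results and attach citations
# pointing back to the source documents (the "grounding" step).
content_spec = discoveryengine.SearchRequest.ContentSearchSpec(
    summary_spec=discoveryengine.SearchRequest.ContentSearchSpec.SummarySpec(
        summary_result_count=5,
        include_citations=True,
    )
)

request = discoveryengine.SearchRequest(
    serving_config=serving_config,
    query="most recent HbA1c results and related clinician notes",
    content_search_spec=content_spec,
)

response = client.search(request=request)

# The generated summary and its citation metadata ride along with the
# search results, so an answer can be traced back to its sources.
print(response.summary.summary_text)
```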

In August, researchers at the University of Massachusetts Amherst and scientists at the clinical AI startup Mendel found that generative AI hallucinated when answering medical questions. The AI-generated medical record summaries they analyzed contained inaccurate or misleading information, which could lead doctors to misdiagnose patients or recommend unsuitable courses of therapy.

According to Jeffrey Cribbs, a distinguished vice president analyst on consulting firm Gartner’s healthcare team, Google’s use of grounding is a “meaningful step” toward addressing hallucinations. Requiring users to check cited sources, however, could undercut the tool’s ability to reduce administrative burden in the first place.

“What I’ve just given you is a big research assignment,” Cribbs said. “Our goal is to be able to save people time; that is our entire value proposition. We are not saving them time if we expect them to review every page of the medical record. We therefore want the citations to be utilized sometimes rather than consistently.”

According to Cribbs, the industry will want to adopt a “higher level of validation” for generative AI so that providers can be confident the AI answered their query correctly without manual review.

Google has launched a number of generative AI projects to date, including large language models designed to search medical information for patients.

Cribbs said the company has become a pioneer in generative AI testing. Amid concerns about algorithmic bias, Google is known in the industry for its thorough validity testing and health equity verification procedures.

However, Cribbs said generative AI validation is still “an emerging practice,” and some prospective customers are wary of search capabilities similar to the one Google debuted on Thursday. Especially concerning is “automation bias,” the possibility that doctors will default to trusting AI output over their own clinical judgment.

Healthcare organizations considering AI in clinical workflows are “extremely concerned about how that tool may expose [them to] new kinds of risks,” Cribbs said.

Still, Cribbs emphasized that perfection is not the aim when introducing a new AI tool.

“We are not comparing against perfect information extraction that exists today,” Cribbs said. “The fact of the matter is that information is not extracted from clinical records in healthcare in an efficient or an accurate way in the market today, and so we’re looking for substantially better.”