During its annual Google Cloud Next conference, Google expanded the capabilities of its AI and machine learning platform Vertex AI by adding new large language models (LLMs) and an agent-building tool.
Among the new LLMs is a public preview of the Gemini 1.5 Pro model, which supports a context window of one million tokens.
This expanded context support enables native reasoning over massive amounts of data specific to an input request. Google said enterprise feedback indicates it can eliminate the need to fine-tune models or use retrieval-augmented generation (RAG) to ground model responses.
Vertex AI’s Gemini 1.5 Pro will also be able to process audio streams, including voice and audio from videos.
According to Google, the ability to interpret audio allows users to perform cross-modal analysis, yielding insights across text, images, videos, and audio.
Google also said the Pro model will include transcription, which can be used to search audio and video content.
The cloud provider has also added new features to its Imagen 2 family of text-to-image models, including the ability to edit images and to produce “live images,” or four-second videos generated from text prompts.
The image-editing features are now generally available, along with a digital watermarking capability that lets users tag AI-generated images; the text-to-live-images feature is still in preview.
Another LLM addition to Vertex AI is CodeGemma, a new lightweight model from Google’s Gemma family of open models.
Google will enable enterprise teams to ground LLMs in Google Search as well as in their own data via Vertex AI, helping businesses obtain more accurate responses from their models.
The company stated that grounding foundation models in Google Search can greatly increase answer accuracy. “Foundation models are limited by their training data, which can quickly become outdated and may not include information that the models need for enterprise use cases,” the statement read.
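Conceptually, grounding works by retrieving fresh, relevant material at query time and injecting it into the prompt so the model answers from that material rather than from possibly stale training data. The sketch below illustrates the pattern in plain Python; the snippet list is a stand-in for results from Google Search or an enterprise corpus, not the actual Vertex AI grounding API.

```python
# Conceptual sketch of grounding: retrieved snippets are prepended to the
# user's question so the model can reason over fresh, request-specific data.
# The snippets here are hypothetical stand-ins for real search results.

def build_grounded_prompt(question: str, snippets: list[str]) -> str:
    """Combine retrieved context and the user question into one prompt."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

snippets = ["Vertex AI added Gemini 1.5 Pro in public preview."]
prompt = build_grounded_prompt("Which model did Vertex AI add?", snippets)
```

The grounded prompt would then be sent to the model in place of the bare question; the managed Vertex AI feature performs the retrieval and injection steps on the platform side.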
Vertex AI’s enhanced MLOps capabilities
The cloud service provider has enhanced Vertex AI’s MLOps capabilities to assist businesses with machine learning projects.
One of the enhanced features is Vertex AI Prompt Management, which helps enterprise teams experiment with, migrate, and track prompts along with their parameters.
“Vertex AI Prompt Management offers a library of prompts used among teams, versioning, the ability to restore previous prompts, and AI-generated suggestions to improve prompt performance,” the company stated.
In addition, it said that the prompt management functionality lets teams take notes while comparing prompt iterations side by side to evaluate how minor adjustments affect outcomes.
Other enhanced features include evaluation tools such as Rapid Evaluation, which can assess model performance during fast design iterations; Rapid Evaluation is currently in preview.
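The kind of quick, iteration-time evaluation described here can be as simple as scoring candidate outputs against reference answers with a cheap metric. The exact-match scorer below is a hedged illustration of that workflow, not the metrics the managed Rapid Evaluation feature actually ships.

```python
# Hedged sketch of rapid evaluation: score a batch of model outputs
# against references with a simple exact-match metric, so prompt or
# model tweaks can be compared quickly between iterations.

def exact_match_score(outputs: list[str], references: list[str]) -> float:
    """Fraction of outputs that exactly match their reference answer."""
    matches = sum(o.strip() == r.strip() for o, r in zip(outputs, references))
    return matches / len(references)

# One of the two outputs matches its reference, so the score is 0.5.
score = exact_match_score(["Paris", "Berlin"], ["Paris", "Rome"])
```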
In addition to enhancing the models’ capabilities, the company has extended data residency to 11 additional countries—Australia, Brazil, Finland, Hong Kong, India, Israel, Italy, Poland, Spain, Switzerland, and Taiwan—for data at rest for the Gemini, Imagen, and Embeddings APIs on Vertex AI.
Vertex AI gains a new agent builder
A new generative-AI-based agent builder solution from Google Cloud aims to take on competitors like Microsoft and AWS.
The no-code solution, called Vertex AI Agent Builder, combines Vertex AI Search with the company’s Conversation product line. It provides a number of tools to help developers quickly create virtual agents that are supported by Google’s Gemini LLMs.
The benefit of the no-code solution is its built-in RAG system, Vertex AI Search, which can ground agents more quickly than traditional RAG methods, which are more laborious and complex.
In a statement, the firm said, “With pre-built components, the platform makes it simple to create, maintain, and manage more complicated implementations,” adding that developers can be up and running in just a few clicks.
It further said that RAG APIs integrated into the product help developers quickly run checks on grounding inputs.
For more intricate implementations, Vertex AI Agent Builder also provides vector search for building custom embeddings-based RAG systems.
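An embeddings-based RAG system retrieves documents by comparing their embedding vectors to the query's embedding, typically by cosine similarity. The sketch below shows that retrieval step with tiny hand-made vectors standing in for real embedding-model outputs; it is a conceptual illustration, not the Vertex AI vector search service.

```python
# Conceptual sketch of embeddings-based retrieval: rank documents by
# cosine similarity to the query embedding. The 3-dimensional vectors
# below are toy stand-ins for real embedding-model outputs.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

docs = {
    "pricing_faq": [0.9, 0.1, 0.0],
    "support_guide": [0.1, 0.8, 0.2],
}
query = [0.85, 0.15, 0.0]  # embedding of a pricing-related question

# Pick the document whose embedding is most similar to the query.
best = max(docs, key=lambda name: cosine(query, docs[name]))
```

The retrieved document would then be fed into the prompt as grounding context, the same pattern the built-in RAG system automates.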
In order to further enhance results, developers can also choose to ground model outputs in Google Search.
The no-code solution offers a variety of tools, such as data connections, functions, and extensions for Vertex AI.
Vertex AI functions help developers describe a set of functions or APIs and have Gemini intelligently select, for a given query, the right API or function to call, along with the appropriate API parameters, according to the company. Vertex AI extensions are pre-built, reusable modules to connect an LLM to a specific API or tool.