Vectara Portal helps non-developers build AI apps to chat with data: Here’s how to use it




Vectara just made generative AI development a piece of cake. The Palo Alto, Calif.-based company, an early pioneer in the retrieval augmented generation (RAG) space, has announced Vectara Portal, an open-source environment that allows anyone to build AI applications to talk to their data.

While there are plenty of commercial offerings that help users get instant answers from documents, what sets Vectara Portal apart is its ease of access and use. Just a few basic steps and anyone, regardless of their technical skills or knowledge, can have a search, summarization or chat app at their disposal, grounded in their datasets. No need to write even a single line of code.

The offering has the potential to let non-developers power a range of use cases within their organization, from policy search to invoice lookup. However, the jury is still out on performance: the tool is very new, and only a handful of customers are testing it in beta.

Ofer Mendelevitch, Vectara’s head of developer relations, tells VentureBeat that because Portal is powered by Vectara’s proprietary RAG-as-a-service platform, the company expects massive adoption by non-developers, which in turn should drive traction for its full-blown enterprise-grade offerings.

“We are eagerly watching what users will build with Vectara Portal. We hope that the level of accuracy and relevance enriched by their documents will showcase the complete power of (Vectara’s) enterprise RAG systems,” he said.

How does Vectara Portal work?

The portal is available both as an app hosted by Vectara and as an open-source offering under the Apache 2.0 license. Vectara Portal revolves around users creating portals (custom applications) and then sharing them with their target audience.

First, the user creates a Portal account with their main Vectara account credentials and sets up that profile with their Vectara ID, API key and OAuth client ID. Once the profile is ready, the user heads over to the “create a portal” button and fills in basic details: the name of the planned app, its description and whether it should work as a semantic search tool, a summarization app or a conversational chat assistant. Hitting the create button then adds the new portal to the tool’s Portal management page.

Vectara Portal creation. Credit: Vectara.

From the Portal management screen, the user opens the created portal, heads into its settings and adds any number of documents to ground and customize the app in their data. As these files are uploaded, they are indexed by Vectara’s RAG-as-a-service platform, which powers the portal’s backend, to provide accurate and hallucination-free answers.
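Portal itself is no-code, but for readers curious what happens behind the upload button, here is a minimal sketch of pushing a document into a Vectara corpus over the company’s REST API. The endpoint path, header and field names are assumptions for illustration and should be checked against Vectara’s API documentation; the API key and corpus key are hypothetical.

```python
# Minimal sketch of the document upload Portal performs behind the scenes.
# Endpoint path and field names are assumed, not confirmed by the article.
import requests

VECTARA_API_KEY = "zut_..."         # hypothetical API key with write access
CORPUS_KEY = "my-portal-corpus"     # hypothetical corpus backing the portal

def upload_document(path: str) -> dict:
    """Send a local file to the corpus so Vectara can index it for RAG."""
    url = f"https://api.vectara.io/v2/corpora/{CORPUS_KEY}/upload_file"
    with open(path, "rb") as f:
        resp = requests.post(
            url,
            headers={"x-api-key": VECTARA_API_KEY},
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(upload_document("employee_handbook.pdf"))
```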

“This (platform) means a strong retrieval engine, our state-of-the-art Boomerang embedding model, multi-lingual reranker, reduced hallucinations and overall much higher quality of responses to users’ questions in Portal. Being a no-code product, builders can just use a few clicks to quickly create gen AI products,” Mendelevitch said.

The developer relations head noted that when a user creates a portal and adds documents, the backend of the tool builds a “corpus” specific to that data in the user’s main Vectara account. This corpus acts as a place to hold all the portal-associated documents. So, when a user asks a question on the portal, Vectara’s RAG API runs that query against the associated corpus to come up with the most relevant answer. 
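As a rough illustration of that query flow, the sketch below sends a question to Vectara’s RAG API scoped to the portal’s corpus and reads back a generated answer. Again, the endpoint, request fields and response field are assumptions made for clarity, not the documented Portal internals.

```python
# Illustrative sketch of a question being run against the portal's corpus.
# Endpoint path, body fields and the "summary" response field are assumed.
import requests

VECTARA_API_KEY = "zut_..."         # hypothetical API key
CORPUS_KEY = "my-portal-corpus"     # hypothetical corpus behind the portal

def ask_portal(question: str) -> str:
    url = f"https://api.vectara.io/v2/corpora/{CORPUS_KEY}/query"
    body = {
        "query": question,
        "search": {"limit": 10},                      # retrieval step
        "generation": {"max_used_search_results": 5}  # summarization step
    }
    resp = requests.post(url, json=body, headers={"x-api-key": VECTARA_API_KEY})
    resp.raise_for_status()
    return resp.json().get("summary", "")

print(ask_portal("What is our travel reimbursement policy?"))
```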

Vectara Portal demo. Credit: Vectara.

The platform first retrieves the most relevant parts of the documents (the retrieval step) needed to answer the user’s question and then feeds them into a large language model (LLM). Vectara lets users pick from different LLMs, including the company’s own Mockingbird LLM as well as models from OpenAI.
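To make that two-step pattern concrete, here is a generic, self-contained sketch of retrieval followed by generation. The word-overlap scoring and the stubbed LLM call are simplified stand-ins for illustration only; they are not Vectara’s actual retrieval engine or the Mockingbird model.

```python
# Generic sketch of the RAG pattern described above: retrieve relevant
# passages, then hand them to an LLM. Both steps are toy stand-ins.

def retrieve(question: str, passages: list[str], k: int = 3) -> list[str]:
    """Toy relevance ranking by word overlap; real systems use embeddings."""
    q_words = set(question.lower().split())
    ranked = sorted(passages, key=lambda p: -len(q_words & set(p.lower().split())))
    return ranked[:k]

def generate_answer(question: str, context: list[str]) -> str:
    """Stand-in for the LLM call (e.g. Mockingbird or an OpenAI model)."""
    prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQ: {question}"
    return prompt  # a real implementation would send this prompt to an LLM

docs = [
    "Invoices are paid within 30 days.",
    "Travel must be pre-approved.",
    "Offices close on public holidays.",
]
print(generate_answer("When are invoices paid?", retrieve("When are invoices paid?", docs)))
```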

“For Vectara Scale (company’s bigger plan) customers, Portal uses the best Vectara features, including the most performant LLMs,” Mendelevitch added. The apps are public by default and shareable via links, but users can also restrict them to a select group of users.

Goal: increase enterprise customers

With this no-code offering, both as a hosted and an open-source product, Vectara is looking to give more enterprise users the ability to build powerful generative AI apps targeting different use cases. The company hopes it will increase sign-ups as well as create a buzz for its main RAG-as-a-service offering, ultimately leading to better conversion.

“RAG is a very strong use case for many enterprise developers and we wanted to open this up to no-code builders so they can understand the power of Vectara’s end-to-end platform. Portal does just that, and we believe will be a valuable tool to product managers, general managers and other C-level executives to understand how Vectara can help with their gen AI use cases,” Mendelevitch said.

The company has raised more than $50 million in funding so far and has approximately 50 production customers, including Obeikan Group, Juniper Networks, Sonosim and Qumulo.


