Generative AI has become a driving force in the business world, and organizations are eagerly trying to leverage its benefits while mitigating the associated risks. To address this, IBM Watson Assistant has introduced Conversational Search, powered by the Granite large language model and Watson Discovery. This beta release enables AI Assistants to deliver faster and more accurate answers by scaling conversational interactions grounded in business content.
Paving the way: Large language models
The focus in generative AI has been on Large Language Models (LLMs), which are revolutionizing knowledge discovery and interaction. Traditionally, enterprises relied on keyword-based search engines to support customers and employees. The introduction of chatbots, however, created the need to answer questions that follow no predefined path and have no scripted answer. IBM Watson Assistant has been successfully meeting this need for several years. With LLMs and generative AI, IBM is taking this capability even further.
Introducing Conversational Search for Watson Assistant
IBM is pleased to announce the beta release of Conversational Search for Watson Assistant. This feature combines the power of the Granite LLM and Watson Discovery to provide comprehensive, contextually grounded answers, enabling outcome-oriented interactions. Conversational Search integrates seamlessly with the augmented conversation builder, allowing customers and employees to automate both answers and actions.
Users of the Plus or Enterprise plans of Watson Assistant can now request early access to Conversational Search.
How does Conversational Search work behind the scenes?
When a user asks a question, Watson Assistant determines the best way to help – whether to trigger a prebuilt conversation, employ conversational search, or escalate to a human agent. The new transformer model in Watson Assistant achieves higher accuracy while requiring less training data. Conversational Search relies on two key steps: retrieval and generation. Retrieval finds the most relevant information using search capabilities from Watson Discovery, while generation structures that information into a conversational response. By leveraging the Retrieval-Augmented Generation (RAG) framework, Watson Assistant minimizes the need to retrain the underlying LLM.
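The retrieve-then-generate pattern described above can be sketched in a few lines. This is an illustrative toy, not the Watson Discovery or Granite API: the document list, the keyword-overlap scoring, and the `generate` stub (which stands in for the LLM call) are all assumptions for demonstration.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG).
# retrieve() plays the role of the search component (Watson Discovery
# in the article); generate() is a stand-in for the LLM call (Granite).

def retrieve(query, documents, top_k=1):
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(doc.lower().split())), doc)
        for doc in documents
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Keep only documents that share at least one term with the query.
    return [doc for score, doc in scored[:top_k] if score > 0]

def generate(query, passages):
    """Stand-in for the LLM: ground the answer in the retrieved text."""
    if not passages:
        return "I could not find an answer in the knowledge base."
    return f"Based on our documents: {passages[0]}"

# Hypothetical knowledge-base content for illustration.
docs = [
    "Premium credit cards include travel insurance and lounge access.",
    "Savings accounts earn annual interest, compounded monthly.",
]
question = "What does the premium credit card include?"
answer = generate(question, retrieve(question, docs))
```

Because the answer is assembled from retrieved business content rather than from the model's parameters alone, updating the knowledge base changes the answers without retraining the model, which is the point the RAG framework makes above.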
Conversational AI that drives open innovation
IBM Watson Assistant Conversational Search is a flexible platform that delivers accurate answers across various channels and touchpoints. IBM offers deployment options on IBM Cloud as well as a self-managed Cloud Pak for Data deployment option for semantic search with Watson Discovery. Semantic search will be available as a configurable option for Conversational Search in the future. Additionally, organizations can bring their proprietary data to customize Watson LLM models through watsonx.ai, or leverage third-party models for conversational search and other use cases.
Conversational Search in action
Let’s look at a real-life scenario: a bank customer uses Watson Assistant to apply for a credit card. The assistant extracts information from the user’s messages to gather the necessary details, calls the appropriate backend service, and returns the offer details to the user. If no suitable prebuilt conversation exists, Conversational Search looks through the bank’s knowledge documents and answers the user’s question. If a special topic is recognized, the assistant escalates to a human agent.
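The three-way routing in this scenario can be sketched as a simple decision function. The topic and action lists below are hypothetical placeholders, not actual product configuration; in Watson Assistant these decisions are learned by the transformer model rather than hard-coded.

```python
# Illustrative sketch of the routing described in the banking scenario:
# escalate on sensitive topics, run a prebuilt conversation when one
# matches, otherwise fall back to conversational search.

# Hypothetical configuration for demonstration purposes only.
PREBUILT_ACTIONS = {"apply for a credit card", "check balance"}
ESCALATION_TOPICS = {"fraud", "complaint"}

def route(user_message):
    """Return which handler should take the user's message."""
    text = user_message.lower()
    if any(topic in text for topic in ESCALATION_TOPICS):
        return "human_agent"
    if any(action in text for action in PREBUILT_ACTIONS):
        return "prebuilt_conversation"
    return "conversational_search"
```

Checking escalation topics first mirrors the responsible-AI behavior noted later in this article: recognized trigger topics go straight to a human agent before any automated answering is attempted.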
What is Conversational Search?
Conversational Search is a feature in IBM Watson Assistant that utilizes the Granite large language model and Watson Discovery to provide fast and accurate answers to user queries in a conversational manner.
How does Conversational Search work?
Conversational Search involves two steps: retrieval and generation. Retrieval finds relevant information using search capabilities from Watson Discovery, while generation structures that information into a conversational response using the Granite LLM. The aim is to deliver rich, contextually grounded answers.
Can Conversational Search be customized?
Yes, organizations can bring their proprietary data to customize Watson LLM models using watsonx.ai. Additionally, third-party models from the Hugging Face community can be leveraged for conversational search and other use cases.
How can I access Conversational Search?
Users of the Plus or Enterprise plans of Watson Assistant can request early access to Conversational Search. They can contact their IBM Representative to gain exclusive access or schedule a demo with an expert.
Will semantic search be available for Conversational Search?
Yes, semantic search will be available as a configurable option for Conversational Search in the future, allowing enterprises to run and deploy the feature according to their specific needs and preferences.
Is Watson Assistant committed to responsible AI usage?
IBM understands the importance of using AI responsibly. Watson Assistant allows organizations to enable Conversational Search selectively for recognized topics and supports trigger words that automatically escalate to a human agent when necessary, ensuring responsible and effective usage.