This article guides you through the process of creating a travel assistant chatbot using powerful AI technologies. The chatbot, called "Yatra Sevak.AI," acts as a personalized travel assistant that makes trip planning easier and more enjoyable.
Travel assistance chatbots like Yatra Sevak.AI can revolutionize the travel industry by enhancing the travel planning experience, offering advantages such as instant answers to travel questions, personalized recommendations, and round-the-clock availability.
Hugging Face is an open-source platform for machine learning and natural language processing. It offers tools for creating, training, and deploying models. It hosts thousands of pre-trained models for tasks like computer vision, audio analysis, and text summarization.
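To make that concrete, pulling a pre-trained model from the Hub takes only a couple of lines with the transformers pipeline API. A minimal sketch is below; the checkpoint name is just one summarization model among many on the Hub.

```python
from transformers import pipeline

# Load a pre-trained summarization model from the Hugging Face Hub
# (the checkpoint is an example; any summarization model on the Hub works).
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

text = (
    "Hugging Face hosts thousands of pre-trained models that can be downloaded "
    "with a single line of code and used for tasks such as translation, "
    "image classification, and text summarization."
)
print(summarizer(text, max_length=30, min_length=10)[0]["summary_text"])
```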
LangChain is an open-source framework for building applications based on large language models. It provides modular components for creating complex workflows, tools for efficient data handling, and supports integrating additional tools and libraries.
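Here is a minimal sketch of what those modular components look like when chained together. It uses FakeListLLM as a stand-in model so the example runs without an API key, and the prompt wording is purely illustrative; the travel chatbot later in this article uses the same prompt | llm | parser pattern with a real model.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.llms.fake import FakeListLLM

# Stand-in LLM so the example runs offline; swap in a real model later.
llm = FakeListLLM(responses=["Paris, Rome, and Kyoto are great picks for spring."])

prompt = ChatPromptTemplate.from_template("Suggest destinations for: {trip_type}")
chain = prompt | llm | StrOutputParser()  # prompt -> model -> plain-string output

print(chain.invoke({"trip_type": "a spring city break"}))
```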
Mistral AI is a cutting-edge platform specializing in large language models (LLMs). Its models perform strongly across multiple languages and are robust at handling code, and they offer large context windows, native function calling, and JSON output.
Mistral AI offers a range of models optimized for different tasks:
| Model | Availability | Highlights |
|---|---|---|
| Mistral 7B | Open source | 7B dense transformer; fast to deploy and easy to customize |
| Mixtral 8x7B | Open source | Sparse Mixture-of-Experts; 12.9B active parameters (45B total) |
| Mixtral 8x22B | Open source | Sparse Mixture-of-Experts; 39B active parameters (141B total) |
| Mistral Small | Optimized | Cost-efficient reasoning for low-latency workloads |
| Mistral Large | Optimized | Top-tier reasoning for high-complexity tasks |
| Mistral Embed | Optimized | State-of-the-art semantic text representations for embedding and extraction |
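If you want to call the open-weight models from this table through Hugging Face, as this article does, these are the kinds of Hub repo IDs to look for. The exact version suffixes below are assumptions, so check the Hub for the latest releases.

```python
# Hugging Face Hub repo IDs for the open-weight Mistral models (version tags may change)
MISTRAL_REPOS = {
    "Mistral 7B": "mistralai/Mistral-7B-Instruct-v0.2",
    "Mixtral 8x7B": "mistralai/Mixtral-8x7B-Instruct-v0.1",
    "Mixtral 8x22B": "mistralai/Mixtral-8x22B-Instruct-v0.1",
}
```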
The travel assistant chatbot, Yatra Sevak.AI, operates through a streamlined workflow: the user's message and the chat history are combined into a prompt, sent to a Mistral model hosted on Hugging Face, and the parsed response is displayed back in the Streamlit chat interface.
Building the travel assistant chatbot, Yatra Sevak.AI, involves several steps:
pip install -r requirements.txt
streamlit
python-dotenv
langchain-core
langchain-community
huggingface-hub
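The app also needs a Hugging Face access token. Alongside requirements.txt, create a .env file in the project root; a minimal sketch is below, assuming the HUGGINGFACEHUB_API_TOKEN variable name that the code reads (the token value is a placeholder).

```
HUGGINGFACEHUB_API_TOKEN=hf_xxxxxxxxxxxxxxxx
```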
# After installing the libraries and setting up the environment, add the following lines to app.py.
import os

import streamlit as st
from dotenv import load_dotenv
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.llms import HuggingFaceEndpoint

load_dotenv()  # Load environment variables from the .env file

# Hugging Face access token read from the .env file
api_token = os.getenv("HUGGINGFACEHUB_API_TOKEN")
# Define the repository ID and task
repo_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
task = "text-generation"
# App config
st.set_page_config(page_title="Yatra Sevak.AI", page_icon="🌍")
st.title("Yatra Sevak.AI ✈️")
Utilize the prompt template available on the GitHub repository to create robust prompts for your travel assistant chatbot.
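The exact template lives in the repository; below is a minimal illustrative sketch so the code here is self-contained. Its only hard requirement is that it exposes the same variables the chain fills in later, chat_history and user_question; the wording itself is an assumption.

```python
# Illustrative prompt template only; the full version is in the GitHub repository.
template = """
You are Yatra Sevak.AI, a friendly travel assistant.
Use the conversation so far to answer the user's travel questions.

Chat history:
{chat_history}

User question:
{user_question}
"""
```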
prompt = ChatPromptTemplate.from_template(template)
# Function to get a response from the model
def get_response(user_query, chat_history):
    # Initialize the Hugging Face Endpoint
    llm = HuggingFaceEndpoint(
        huggingfacehub_api_token=api_token,
        repo_id=repo_id,
        task=task
    )
    chain = prompt | llm | StrOutputParser()
    response = chain.invoke({
        "chat_history": chat_history,
        "user_question": user_query,
    })
    return response
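As a quick sanity check, the helper can be called outside Streamlit with an empty history; the question below is just a sample input.

```python
# Standalone check (e.g., in a Python shell); in the app the history comes from st.session_state.
sample_reply = get_response("Plan a 3-day itinerary for Jaipur.", chat_history=[])
print(sample_reply)
```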
# Initialize session state with a greeting from the assistant.
if "chat_history" not in st.session_state:
    st.session_state.chat_history = [
        AIMessage(content="Hello, I am Yatra Sevak.AI. How can I help you?"),
    ]
# Display chat history.
for message in st.session_state.chat_history:
    if isinstance(message, AIMessage):
        with st.chat_message("AI"):
            st.write(message.content)
    elif isinstance(message, HumanMessage):
        with st.chat_message("Human"):
            st.write(message.content)
# User input
user_query = st.chat_input("Type your message here...")
if user_query is not None and user_query != "":
    st.session_state.chat_history.append(HumanMessage(content=user_query))

    with st.chat_message("Human"):
        st.markdown(user_query)

    response = get_response(user_query, st.session_state.chat_history)

    # Strip any prefixes the model sometimes prepends to its reply
    response = (
        response.replace("AI response:", "")
        .replace("chat response:", "")
        .replace("bot response:", "")
        .strip()
    )

    with st.chat_message("AI"):
        st.write(response)

    st.session_state.chat_history.append(AIMessage(content=response))
The full code for the Yatra Sevak.AI application is available on GitHub; feel free to explore it and adapt it to your needs. To try it locally, run `streamlit run app.py` from the project directory.
Deploying your travel assistant chatbot on Hugging Face Spaces makes it accessible to a wider audience. In brief: create a new Space with the Streamlit SDK, upload app.py and requirements.txt, and add your Hugging Face API token as a Space secret so the deployed app can reach the model.
This article demonstrated how to build a travel assistant chatbot using Hugging Face, LangChain, Mistral AI, and Streamlit. The chatbot, Yatra Sevak.AI, offers personalized travel assistance, enhancing the travel planning experience.
A. Integrating Mistral AI’s models with LangChain boosts the chatbot’s performance by utilizing advanced functionalities like extensive context windows and optimized attention mechanisms. This integration accelerates response times and enhances the accuracy of handling intricate travel inquiries, thereby elevating user satisfaction and interaction quality.
A. LangChain provides a framework for building applications with large language models (LLMs). It offers tools like ChatPromptTemplate for crafting prompts and StrOutputParser for processing model outputs. LangChain simplifies the integration of Hugging Face models into your chatbot, enhancing its functionality and performance.
A. Hugging Face Spaces provides a collaborative platform where developers can deploy, share, and iterate on chatbot applications, fostering innovation and community-driven improvements.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.