Summary of Build a Travel Assistant Chatbot with HuggingFace, LangChain, and MistralAI


    Building a Travel Assistant Chatbot with AI

    This article guides you through creating a travel assistant chatbot with powerful AI technologies. The chatbot, called "Yatra Sevak.AI," acts as a personalized assistant that helps plan trips and makes travel planning easier and more enjoyable.

    • The chatbot uses advanced AI, specifically Mistral AI, a cutting-edge platform specializing in large language models (LLMs).
    • The chatbot utilizes LangChain, an open source framework for building applications based on large language models (LLMs).
    • It seamlessly integrates with Hugging Face, an open-source platform for machine learning and natural language processing.
    • The chatbot leverages Streamlit to create interactive user experiences.

    Why Travel Assistant Chatbots are Revolutionizing the Travel Industry

    Travel assistance chatbots like Yatra Sevak.AI can revolutionize the travel industry by enhancing the travel planning experience. These chatbots offer various advantages, including:

    • Weather-based Recommendations: AI chatbots provide alternative plans in case of adverse weather conditions at the destination.
    • Gamification and Engagement: Chatbots incorporate travel quizzes, loyalty rewards, and interactive guides to enhance the travel experience with enjoyable and engaging elements.
    • Crisis Management and Real-Time Updates: Chatbots offer immediate assistance during travel disruptions and provide timely updates.
    • Multilingual Support and Cultural Sensitivity: Chatbots communicate in multiple languages and provide culturally relevant advice, catering effectively to international travelers.
    • Instant Trip Adjustment: Users can instantly change their trip itinerary based on their requirements, facilitated by AI chatbots' dynamic response capabilities.
    • Continuous Advisor Presence: Chatbots ensure an always-on advisory presence throughout the trip, offering guidance and support whenever needed.

    What is Hugging Face?

    Hugging Face is an open-source platform for machine learning and natural language processing. It offers tools for creating, training, and deploying models. It hosts thousands of pre-trained models for tasks like computer vision, audio analysis, and text summarization.

    • With over 30,000 datasets available, developers can train AI models and share their code within the community.
    • Users can showcase their projects through ML demo apps called Spaces, promoting collaboration and sharing in the AI community.

    What is LangChain?

    LangChain is an open-source framework for building applications based on large language models. It provides modular components for creating complex workflows, tools for efficient data handling, and supports integrating additional tools and libraries.

    • LangChain makes it easy for developers to build, customize, and deploy LLM-powered applications.
    • LangChain connects and uses models from platforms like Hugging Face for the travel assistant chatbot.
    • It helps handle user questions about booking flights, hotels, and rental cars, as well as requests for travel tips.
    • LangChain speeds up development by making effective use of pre-trained models; a minimal sketch of such a chain follows this list.
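
    As a rough illustration of the pattern, a LangChain pipeline chains a prompt template, a model, and an output parser. The snippet below is a sketch only: the repo ID matches the one used later in this article, the prompt is made up, and a Hugging Face API token is assumed to be available in the HUGGINGFACEHUB_API_TOKEN environment variable.

    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser
    from langchain_community.llms import HuggingFaceEndpoint

    # Hosted Mistral model on Hugging Face (same repo ID the app uses later)
    llm = HuggingFaceEndpoint(
        repo_id="mistralai/Mixtral-8x7B-Instruct-v0.1",
        task="text-generation",
    )

    # Prompt template -> model -> plain-string output
    prompt = ChatPromptTemplate.from_template("Suggest three things to do in {city}.")
    chain = prompt | llm | StrOutputParser()

    print(chain.invoke({"city": "Jaipur"}))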

    What is Mistral AI?

    Mistral AI is a cutting-edge platform specializing in large language models (LLMs). Its models perform well across multiple languages and demonstrate robust capabilities in handling code. They offer large context windows, native function-calling capabilities, and JSON output.

    • Mistral AI models are versatile and suitable for various applications.

    Types of Mistral AI Models

    Mistral AI offers a range of models optimized for different tasks:

    • Mistral 7B (open source): 7B transformer; fast to deploy and easily customizable.
    • Mixtral 8x7B (open source): sparse Mixture-of-Experts built from 7B experts; 12.9B active parameters (45B total).
    • Mixtral 8x22B (open source): sparse Mixture-of-Experts built from 22B experts; 39B active parameters (141B total).
    • Mistral Small (optimized model): cost-efficient reasoning for low-latency workloads.
    • Mistral Large (optimized model): top-tier reasoning for high-complexity tasks.
    • Mistral Embed (optimized model): state-of-the-art semantic text representation extraction.

    Workflow of the Travel Assistant Chatbot

    The travel assistant chatbot, Yatra Sevak.AI, operates through a streamlined workflow:

    • User Interaction: The user interacts with the Streamlit frontend to input queries.
    • Chat Handling Logic: The application captures the user’s input, updates the session state, and adds the input to the chat history.
    • Response Generation (LangChain Integration):
      • The get_response function sets up the Hugging Face endpoint and uses LangChain tools to format and interpret the responses.
      • LangChain’s ChatPromptTemplate and StrOutputParser are used to format the prompt and parse the output.
    • API Interaction: The application retrieves the API token from environment variables and interacts with Hugging Face’s API to generate text responses with the Mistral AI model.
    • Generate Response: The response is generated using the Hugging Face model invoked through LangChain.
    • Send Response Back: The generated response is appended to the chat history and displayed on the frontend.
    • Streamlit Frontend: The frontend is updated to show the AI’s response, completing the interaction cycle.

    Steps to Build a Travel Assistant Chatbot

    Building the travel assistant chatbot, Yatra Sevak.AI, involves several steps:

    Step 1: Importing Required Libraries

    • Create a requirements.txt file and install the required libraries with: pip install -r requirements.txt
    • Create an app.py file in your project directory and import the necessary libraries (a sketch of these imports follows the requirements listing below).
    
    streamlit
    python-dotenv
    langchain-core
    langchain-community
    huggingface-hub
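
    Based on the code in the later steps, app.py would need roughly the following imports. This is a sketch; exact import paths can vary slightly between LangChain versions.

    import os

    import streamlit as st
    from dotenv import load_dotenv
    from langchain_core.messages import AIMessage, HumanMessage
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_community.llms import HuggingFaceEndpoint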
                    

    Step 2: Setting Up Environment and API Token

    • Access Hugging Face API:
      • Log in to your Hugging Face account.
      • Navigate to your account settings.
    • Generate API Token: Generate an API token from your Hugging Face account settings.
    • Set Up .env File: Create a .env file in your project directory to store sensitive information such as API tokens.
    
    # After importing the libraries and setting up the environment, add this line to app.py:
    load_dotenv()  ## Load environment variables from .env file
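
    # (Sketch) A .env file in the project root would contain a line such as:
    #   HUGGINGFACEHUB_API_TOKEN=hf_xxxxxxxxxx   (your Hugging Face API token)
    # Read the token here so it can be passed to the Hugging Face endpoint later on.
    api_token = os.getenv("HUGGINGFACEHUB_API_TOKEN")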
                    

    Step 3: Configuring Model and Task

    
    # Define the repository ID and task
    repo_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
    task = "text-generation"
                    

    Step 4: Streamlit Configuration

    
    # App config
    st.set_page_config(page_title="Yatra Sevak.AI", page_icon="🌍")
    st.title("Yatra Sevak.AI ✈️")
                    

    Step 5: Defining Chatbot Template

    Utilize the prompt template available on the GitHub repository to create robust prompts for your travel assistant chatbot.
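
    The exact template lives in the repository; purely as an illustration, a template for this chatbot needs placeholders matching the keys passed to the chain in the next step ({chat_history} and {user_question}), for example:

    # Illustrative template only; use the full one from the project's GitHub repository.
    template = """
    You are Yatra Sevak.AI, a helpful travel assistant. Answer questions about
    flights, hotels, car rentals, destinations, and general travel tips.

    Chat history: {chat_history}

    User question: {user_question}
    """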

    Step 6: Implementing Response Handling

    
    prompt = ChatPromptTemplate.from_template(template)
    
    # Function to get a response from the model
    def get_response(user_query, chat_history):
        # Initialize the Hugging Face Endpoint
        llm = HuggingFaceEndpoint(
            huggingfacehub_api_token=api_token,
            repo_id=repo_id,
            task=task
        )
        chain = prompt | llm | StrOutputParser()
        response = chain.invoke({
            "chat_history": chat_history,
            "user_question": user_query,
        })
        return response
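
    # Quick sanity check (hypothetical example call; not part of the app itself):
    # print(get_response("Suggest a 3-day itinerary for Goa.", chat_history=[]))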
                    

    Step 7: Managing Chat History

    
    # Initialize session state.
    if "chat_history" not in st.session_state:
        st.session_state.chat_history = [
            AIMessage(content="Hello, I am Yatra Sevak.AI. How can I help you?"),
        ]
                    
    
    # Display chat history.
    for message in st.session_state.chat_history:
        if isinstance(message, AIMessage):
            with st.chat_message("AI"):
                st.write(message.content)
        elif isinstance(message, HumanMessage):
            with st.chat_message("Human"):
                st.write(message.content)
                    

    Step 8: Handling User Input and Displaying Responses

    
    # User input
    user_query = st.chat_input("Type your message here...")
    if user_query is not None and user_query != "":
        st.session_state.chat_history.append(HumanMessage(content=user_query))
    
        with st.chat_message("Human"):
            st.markdown(user_query)
    
        response = get_response(user_query, st.session_state.chat_history)
    
        # Remove any unwanted prefixes the model may prepend to its answer.
        response = response.replace("AI response:", "").replace("chat response:", "").replace("bot response:", "").strip()
    
        with st.chat_message("AI"):
            st.write(response)
    
        st.session_state.chat_history.append(AIMessage(content=response)) 
                    

    Complete Code Repository

    The full code for the Yatra Sevak.AI application is available on GitHub. Feel free to explore it and adapt it as needed.

    Deploying the Travel Assistant Chatbot

    Deploying your travel assistant chatbot on Hugging Face Spaces makes it accessible to a wider audience. Here are the steps for deployment:

    • Step 1: Navigate to Hugging Face Spaces Dashboard.
    • Step 2: Create a New Space.
    • Step 3: Configure Environment Variables:
      • Click on Settings.
      • Click New Secret, set the name to HUGGINGFACEHUB_API_TOKEN, and paste your Hugging Face API token as the value.
    • Step 4: Upload Your Project Files:
      • Upload all project files in the Files section of the Space.
      • Commit the changes to trigger deployment.
    • Step 5: The travel assistant chatbot is now deployed on Hugging Face Spaces.

    Conclusion

    This article demonstrated how to build a travel assistant chatbot using Hugging Face, LangChain, Mistral AI, and Streamlit. The chatbot, Yatra Sevak.AI, offers personalized travel assistance, enhancing the travel planning experience.

    Key Takeaways

    • Learn to build a powerful language model chatbot using Hugging Face endpoints without relying on costly APIs.
    • Learn how to integrate Hugging Face endpoints to effortlessly incorporate their diverse range of pre-trained models into your applications.
    • Mastering the art of crafting effective prompts using templates empowers you to build versatile chatbot applications across different domains.

    Frequently Asked Questions

    Q1. How does integrating Mistral AI’s models with LangChain benefit the performance of a travel assistant chatbot?

    A. Integrating Mistral AI’s models with LangChain boosts the chatbot’s performance by utilizing advanced functionalities like extensive context windows and optimized attention mechanisms. This integration accelerates response times and enhances the accuracy of handling intricate travel inquiries, thereby elevating user satisfaction and interaction quality.

    Q2. What role does LangChain play in developing a travel assistant chatbot?

    A. LangChain provides a framework for building applications with large language models (LLMs). It offers tools like ChatPromptTemplate for crafting prompts and StrOutputParser for processing model outputs. LangChain simplifies the integration of Hugging Face models into your chatbot, enhancing its functionality and performance.

    Q3. Why is it beneficial to deploy chatbots on Hugging Face Spaces?

    A. Hugging Face Spaces provides a collaborative platform where developers can deploy, share, and iterate on chatbot applications, fostering innovation and community-driven improvements.

