Hi, I'm Lylia!

Full-stack web developer and web designer

Build a chatbot using LangChain, OpenAI and Panel


A walkthrough of my journey building a chatbot using LangChain and OpenAI

Aug 4, 2024


In this project, I've built a chatbot using LangChain that talks to a PDF file.

Retrieval Augmented Generation (RAG)

The first thing I learned about is a process called Retrieval Augmented Generation (RAG). In a RAG pipeline, an LLM is given a set of specific documents as the knowledge base to draw from when generating responses, in preference to its own large, static, and unspecialized training data.

Langchain

LangChain is an open-source framework for building, deploying, and managing LLM applications. It has both Python and JavaScript libraries. For this project, I chose Python.

What I'm building

An AI app that allows a user to upload a PDF file and then talk to it.

Steps of building a chatbot using LangChain

1. Document loading.
2. Document splitting.
3. Embedding and Vectorstores.
4. Retrieval.
5. Generation.
6. Adding a user interface and putting it all together.

Let's walk through building a chatbot that we can train on our own documents.

Document loading

LangChain allows you to load data from many different sources. For this project, I need to load a PDF.


from langchain.document_loaders import PyPDFLoader

loader = PyPDFLoader("filepath")  # path to the PDF you want to chat with
doc = loader.load()
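
loader.load() returns a list of Document objects, one per page of the PDF. Each Document holds the page text in page_content, along with metadata such as the source path and the page number.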

Document splitting

A PDF file can be rather large, and you need a way to make the data more digestible to the AI. Enter document splitting: the document is broken into smaller, overlapping chunks that fit comfortably in the model's context window. It's a crucial step, because the quality of the chunks determines the quality of retrieval later on.

In the LangChain docs, you can find an article that lists all the splitters and their use cases.


from langchain.text_splitter import RecursiveCharacterTextSplitter

# ~1000-character chunks, with 150 characters of overlap so that text
# cut at a chunk boundary still appears intact in a neighboring chunk
r_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=150
)
splits = r_splitter.split_documents(doc)

Embedding and Vectorstores

In this step, I've learned about embeddings.

Embeddings are a way to store data in number arrays that represent complex information in a more compact and machine-readable format. These arrays capture semantic meaning and relationships between different pieces of data.

Vectorstores are databases designed to efficiently store and retrieve vector embeddings. They are specialized data structures that enable fast similarity searches among high-dimensional vectors: given the embedding of a question, the store quickly finds the stored chunks whose embeddings are closest to it.
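
To make this concrete, here's a minimal sketch (the sentences are made up for illustration) using the same OpenAIEmbeddings class that the full script below relies on. OpenAI's embedding vectors are unit-length, so a plain dot product works as a similarity score:


from langchain.embeddings.openai import OpenAIEmbeddings
import numpy as np

embedding = OpenAIEmbeddings()

# embed three sentences: two related in meaning, one unrelated
dogs = embedding.embed_query("i like dogs")
canines = embedding.embed_query("i like canines")
weather = embedding.embed_query("the weather is ugly outside")

# unit-length vectors make the dot product act as cosine similarity
print(np.dot(dogs, canines))   # high: the sentences mean nearly the same thing
print(np.dot(dogs, weather))   # noticeably lower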

Retrieval

Once the data is in the database, we need to retrieve the splits that are relevant to a given question. LangChain supports many retrieval algorithms, among them similarity search, MMR (Maximum Marginal Relevance), LLM-aided retrieval, and compression.
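
To illustrate the two search styles I ended up using, here's a minimal, self-contained sketch with a throwaway in-memory Chroma store; the texts are invented for illustration:


from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings

# a toy in-memory store with made-up sentences
texts = [
    "The report's main conclusion is that revenue grew 20% last year.",
    "Revenue was up by a fifth compared to the previous year.",
    "The appendix lists all regional offices.",
]
smalldb = Chroma.from_texts(texts, embedding=OpenAIEmbeddings())

# plain similarity search: the k chunks whose embeddings are closest to the question
docs = smalldb.similarity_search("How did revenue change?", k=2)

# MMR also rewards diversity, so near-duplicate chunks
# don't crowd out other relevant ones
docs_mmr = smalldb.max_marginal_relevance_search("How did revenue change?", k=2, fetch_k=3)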

Generation

In this step, the LLM produces an answer from a prompt that includes the question together with the retrieved data.
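
Before wiring this into a chain, here's a minimal sketch of what generation boils down to, reusing the smalldb store from the retrieval sketch above; the prompt wording is my own:


from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

question = "How did revenue change?"
# retrieve the most relevant chunks and stuff them into the prompt
context = "\n\n".join(d.page_content for d in smalldb.similarity_search(question, k=2))
answer = llm.predict(
    f"Use the following context to answer the question.\n\n"
    f"{context}\n\nQuestion: {question}"
)
print(answer)

In practice, RetrievalQA's default "stuff" chain type does essentially this concatenation for you, as the full script below shows.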

Adding a user interface and putting it all together


import os
import openai
import panel as pn
from dotenv import load_dotenv, find_dotenv
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
import uuid
import chromadb
from chromadb.utils.batch_utils import create_batches
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.vectorstores import DocArrayInMemorySearch
import datetime

# account for model deprecation: use the older 0301 snapshot only before Sept 2, 2023
current_date = datetime.datetime.now().date()
if current_date < datetime.date(2023, 9, 2):
    llm_name = "gpt-3.5-turbo-0301"
else:
    llm_name = "gpt-3.5-turbo"
print(llm_name)

pn.extension()

# read the OpenAI API key from a local .env file into the environment
_ = load_dotenv(find_dotenv())
openai.api_key = os.environ['OPENAI_API_KEY']

## load pdf
loader = PyPDFLoader("filepath")
doc = loader.load()

## split the document into chunks
r_splitter = RecursiveCharacterTextSplitter(
    chunk_size = 1000,
    chunk_overlap = 150
)

splits = r_splitter.split_documents(doc)

## Embedding and Vectorstore
persist_directory = '../docs/chroma'

# adapter that exposes LangChain's OpenAIEmbeddings through the callable
# interface the chromadb client expects of an embedding function
class CustomOpenAIEmbeddings(OpenAIEmbeddings):
    def __init__(self, openai_api_key, *args, **kwargs):
        super().__init__(openai_api_key=openai_api_key, *args, **kwargs)

    def _embed_documents(self, texts):
        return super().embed_documents(texts)

    def __call__(self, input):
        # chromadb may pass a single string or a list of strings
        if isinstance(input, str):
            return self._embed_documents([input])[0]
        return self._embed_documents(input)

client = chromadb.PersistentClient(path=persist_directory)

embedding_function = CustomOpenAIEmbeddings(openai_api_key=os.environ['OPENAI_API_KEY'])

collection = client.get_or_create_collection(
    name='chroma', 
    embedding_function=embedding_function
)

# Chroma limits how many records can be added per call, so create_batches
# splits the inserts into acceptable batch sizes
for batch in create_batches(
        api=client,
        ids=[str(uuid.uuid4()) for _ in range(len(splits))],
        metadatas=[t.metadata for t in splits],
        documents=[t.page_content for t in splits],
):
    collection.add(*batch)

db = Chroma(client=client, collection_name=collection.name, embedding_function=embedding_function)


## Retrieval
llm = ChatOpenAI(model_name=llm_name, temperature=0)
template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Use three sentences maximum. Keep the answer as concise as possible. Always say "thanks for asking!" at the end of the answer. 
{context}
Question: {question}
Helpful Answer:"""
QA_CHAIN_PROMPT = PromptTemplate.from_template(template)

qa_chain = RetrievalQA.from_chain_type(
    llm,
    retriever=db.as_retriever(),
    return_source_documents=True,
    chain_type_kwargs={"prompt": QA_CHAIN_PROMPT}
)

# quick sanity check: ask a question through the QA chain
result = qa_chain("What is the main idea of the document?")


# Memory
memory = ConversationBufferMemory(
    memory_key='chat_history',
    return_messages=True,
)
retriever = db.as_retriever()

qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    memory=memory,
)

# same question, now through the conversational chain with memory
a = qa("What is the main idea of the document?")


def load_db(file, chain_type, k):
    # load documents
    loader = PyPDFLoader(file)
    documents = loader.load()
    # split documents
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)
    docs = text_splitter.split_documents(documents)
    # define embedding
    embeddings = OpenAIEmbeddings()
    # create vector database from data
    db = DocArrayInMemorySearch.from_documents(docs, embeddings)
    # define retriever
    retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": k})
    # create a chatbot chain. Memory is managed externally.
    qa = ConversationalRetrievalChain.from_llm(
        llm=ChatOpenAI(model_name=llm_name, temperature=0), 
        chain_type=chain_type, 
        retriever=retriever, 
        return_source_documents=True,
        return_generated_question=True,
    )
    return qa 

import param

class cbfs(param.Parameterized):
    chat_history = param.List([])
    answer = param.String("")
    db_query  = param.String("")
    db_response = param.List([])
    
    def __init__(self,  **params):
        super(cbfs, self).__init__( **params)
        self.panels = []
        self.loaded_file = "filepath"
        self.qa = load_db(self.loaded_file,"stuff", 4)
    
    def call_load_db(self, count):
        if count == 0 or file_input.value is None:  # init or no file specified
            return pn.pane.Markdown(f"Loaded File: {self.loaded_file}")
        else:
            file_input.save("temp.pdf")  # local copy
            self.loaded_file = file_input.filename
            button_load.button_style="outline"
            self.qa = load_db("temp.pdf", "stuff", 4)
            button_load.button_style="solid"
        self.clr_history()
        return pn.pane.Markdown(f"Loaded File: {self.loaded_file}")

    def convchain(self, query):
        if not query:
            return pn.WidgetBox(pn.Row('User:', pn.pane.Markdown("", width=600)), scroll=True)
        result = self.qa({"question": query, "chat_history": self.chat_history})
        self.chat_history.extend([(query, result["answer"])])
        self.db_query = result["generated_question"]
        self.db_response = result["source_documents"]
        self.answer = result['answer'] 
        self.panels.extend([
            pn.Row('User:', pn.pane.Markdown(query, width=600)),
            pn.Row('ChatBot:', pn.pane.Markdown(self.answer, width=600))
        ])
        inp.value = ''  #clears loading indicator when cleared
        return pn.WidgetBox(*self.panels,scroll=True)

    @param.depends('db_query')
    def get_lquest(self):
        if not self.db_query:
            return pn.Column(
                pn.Row(pn.pane.Markdown("Last question to DB:", styles={'background-color': '#F6F6F6'})),
                pn.Row(pn.pane.Str("no DB accesses so far"))
            )
        return pn.Column(
            pn.Row(pn.pane.Markdown("DB query:", styles={'background-color': '#F6F6F6'})),
            pn.pane.Str(self.db_query)
        )

    @param.depends('db_response')
    def get_sources(self):
        if not self.db_response:
            return
        rlist = [pn.Row(pn.pane.Markdown("Result of DB lookup:", styles={'background-color': '#F6F6F6'}))]
        for doc in self.db_response:
            rlist.append(pn.Row(pn.pane.Str(doc)))
        return pn.WidgetBox(*rlist, width=600, scroll=True)

    @param.depends('convchain', 'clr_history')
    def get_chats(self):
        if not self.chat_history:
            return pn.WidgetBox(pn.Row(pn.pane.Str("No History Yet")), width=600, scroll=True)
        rlist = [pn.Row(pn.pane.Markdown("Current chat history", styles={'background-color': '#F6F6F6'}))]
        for exchange in self.chat_history:
            rlist.append(pn.Row(pn.pane.Str(exchange)))
        return pn.WidgetBox(*rlist, width=600, scroll=True)

    def clr_history(self, count=0):
        self.chat_history = []
        return


cb = cbfs()

file_input = pn.widgets.FileInput(accept='.pdf')
button_load = pn.widgets.Button(name="Load DB", button_type='primary')
button_clearhistory = pn.widgets.Button(name="Clear History", button_type='warning')
button_clearhistory.on_click(cb.clr_history)
inp = pn.widgets.TextInput(placeholder='Enter text here…')

bound_button_load = pn.bind(cb.call_load_db, button_load.param.clicks)
conversation = pn.bind(cb.convchain, inp) 

jpg_pane = pn.pane.Image('./img/convchain.jpg')

tab1 = pn.Column(
    pn.Row(inp),
    pn.layout.Divider(),
    pn.panel(conversation,  loading_indicator=True, height=300),
    pn.layout.Divider(),
)
tab2 = pn.Column(
    pn.panel(cb.get_lquest),
    pn.layout.Divider(),
    pn.panel(cb.get_sources),
)
tab3 = pn.Column(
    pn.panel(cb.get_chats),
    pn.layout.Divider(),
)
tab4 = pn.Column(
    pn.Row(file_input, button_load, bound_button_load),
    pn.Row(button_clearhistory, pn.pane.Markdown("Clears chat history. Can use to start a new topic")),
    pn.layout.Divider(),
    pn.Row(jpg_pane.clone(width=400))
)
dashboard = pn.Column(
    pn.Row(pn.pane.Markdown('# ChatWithYourData_Bot')),
    pn.Tabs(('Conversation', tab1), ('Database', tab2), ('Chat History', tab3),('Configure', tab4))
)


# in a notebook, `dashboard` as the last expression of a cell renders the app inline
dashboard


pn.serve(dashboard, show=True)
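
Running the script starts a local Panel server and opens the dashboard in the browser. The Conversation tab takes questions, the Database and Chat History tabs expose what the chain retrieved, and the Configure tab lets you upload a new PDF and clear the history.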

Say Hi

I'm a freelance full-stack developer, working with Vue, Svelte, TypeScript, and headless CMSes.
I'm interested in AI, web development, and creative coding. I love helping my clients achieve their goals by focusing on accessibility and using technology to empower all people.
Feel free to reach out to me here.