LangChain agents and their state in Dash

Hello,

Building agents and chatbots has become easier with LangChain, and I saw some astonishing apps built in the Dash-LangChain App Building Challenge - #11 by adamschroeder

Currently, I am working on a chatbot for a Dash application myself and checking out some ways to use it in the app. During that process, I came across a question and wanted to ask: how do you handle the agent with callbacks?

I am creating the agent fresh on every callback call. I save the chat history in a cache and provide it via the prompt.
LangChain does offer conversation buffer memory (e.g. ConversationBufferMemory), but that would not work when the agent is created anew each time. I am not sure if that is the best way.

In the app challenge, I saw some examples using global variables as a way to store the agent. What is the best approach to working with the agent, which we might want to be stateful, in Dash, which is stateless?

LangChain uses pydantic, doesn’t it? So maybe you can convert the objects to JSON and use a dcc.Store().

You could also use the serverside output from dash-extensions, which keeps the objects on the server instead of sending them to the browser.
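
For example, a minimal sketch assuming the dash-extensions 0.x API with ServersideOutput (newer releases renamed this to Serverside plus a plain Output, so check your version):

from dash import dcc, html
from dash_extensions.enrich import (
    DashProxy, ServersideOutput, ServersideOutputTransform, Input, State
)

# DashProxy with the transform enables server-side stores
app = DashProxy(transforms=[ServersideOutputTransform()])
app.layout = html.Div([
    dcc.Input(id="user-msg", type="text", debounce=True),
    dcc.Store(id="chat-store"),  # this data never leaves the server
    html.Div(id="answer"),
])

@app.callback(
    ServersideOutput("chat-store", "data"),  # return value is cached on the server
    Input("user-msg", "value"),
    State("chat-store", "data"),
    prevent_initial_call=True,
)
def update_history(msg, history):
    history = history or []
    history.append(msg)  # any picklable Python object works here, no JSON required
    return history

Since the value stays in a server-side cache, you are not limited to JSON-serializable objects.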


That would be a good approach. Thanks @AIMPED
You would need to convert the objects to JSON using LangChain’s own serializer:
from langchain.load import dumps, loads
Regular json.dumps does not work on the message objects.
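
For example, a quick round trip (a small sketch using the message classes from langchain_core):

from langchain.load import dumps, loads
from langchain_core.messages import AIMessage, HumanMessage

chat_history = [
    HumanMessage(content="What is Dash?"),
    AIMessage(content="A Python framework for building data apps."),
]

serialized = dumps(chat_history)  # JSON string, safe to keep in a dcc.Store
restored = loads(serialized)      # back to HumanMessage / AIMessage objects

assert restored[0].content == chat_history[0].content
# json.dumps(chat_history) raises a TypeError instead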

@simon-u,
Your question is a nice coincidence. It’s a bit outside the scope of common usage of a Plotly Dash app, so I’ll be creating a video about it this weekend.

Here’s how a LangChain agent with Dash as the frontend could look. I incorporated the Tavily search tool so that the agent has access to the web.

from dotenv import find_dotenv, load_dotenv
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import AIMessage, HumanMessage
from langchain.load import dumps, loads
from dash import Dash, dcc, html, callback, Output, Input, State, no_update


# make sure you create a .env file with the following:
# TAVILY_API_KEY="insert-your-tavily-key"
# OPENAI_API_KEY="insert-your-openAI-key"

# load the API keys from the .env file into the environment
dotenv_path = find_dotenv()
load_dotenv(dotenv_path)

llm = ChatOpenAI(temperature=0)

tavily_tool = TavilySearchResults()
tools = [tavily_tool]


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an assistant. Make sure to use the tavily_search_results_json tool for information"),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)

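# build the agent and executor once at import time, so every callback reuses them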
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
)

def process_chat(agent_executor, user_input, chat_history):
    response = agent_executor.invoke({
        "input": user_input,
        "chat_history": chat_history
    })
    return response["output"]


app = Dash()
app.layout = html.Div([
    html.H2("Ask me anything. I'm your personal assistant that can search the web"),
    dcc.Input(id="my-input", type="text", debounce=True, style={"width":500, "height":30}),
    html.Br(),
    html.Button("Submit", id="submit-query", style={"backgroundColor":"blue", "color":"white"}),
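    # store-it holds the (serialized) chat history between callbacks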
    dcc.Store(id="store-it", data=[]),
    html.P(),
    html.Div(id="response-space")
])

@callback(
    Output("response-space", "children"),
    Output("store-it","data"),
    Input("submit-query", "n_clicks"),
    State("my-input", "value"),
    State("store-it","data"),
    prevent_initial_call=True
)
def interact_with_agent(n, user_input, chat_history):
    # the first time you submit a query to the agent, your chat_history is an empty list, so 
    # no need to convert the json back to Langchain objects
    if len(chat_history) > 0:
        chat_history = loads(chat_history) # de-serialize the chat_history
    
    print(chat_history)
    response = process_chat(agent_executor, user_input, chat_history)
    chat_history.append(HumanMessage(content=user_input))
    chat_history.append(AIMessage(content=response))

    history = dumps(chat_history)  # serialize the chat_history (convert object to json)

    return f"Assistant: {response}", history



if __name__ == '__main__':
    app.run(debug=True)

@AIMPED and @adamschroeder, thank you for the input!

It seems I was thinking more complicated than I needed to. :sweat_smile:
I did not realise I could put the setup of the executor on the page itself, outside the callback, like you did.

I have a multi-page app, and the idea is to integrate the chatbot on each page and register different data sources for it.
With your approach, @adamschroeder, I will be able to do that without creating the executor anew every time, and keep some memory as well. I wasn’t sure how to move as much as possible out of the callback. I want to have something similar to an AIO component, with some register functionality.
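
One hypothetical way to wire that up, as a sketch only: sales_db_tool and the page paths below are made-up names, and llm, prompt, and tavily_tool are the objects from your example above.

from langchain.agents import AgentExecutor, create_tool_calling_agent

def build_executor(tools):
    # one-time setup per page, reusing the llm and prompt defined above
    agent = create_tool_calling_agent(llm, tools, prompt)
    return AgentExecutor(agent=agent, tools=tools)

# built once at import time, then shared by all callbacks
executors = {
    "/": build_executor([tavily_tool]),         # web search on the home page
    "/sales": build_executor([sales_db_tool]),  # hypothetical page-specific data source
}

# in a page's callback:
# response = executors["/sales"].invoke({"input": user_input, "chat_history": chat_history})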

For the chat history, I use a similar approach. I am still trying to figure out whether there is a way to use LangChain’s built-in memory buffers, which summarise the conversation, and maybe tie that to the user session or something.
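
A rough sketch of what that could look like, assuming LangChain’s ConversationSummaryMemory and a per-user session id that the app generates itself (e.g. a uuid kept in a dcc.Store):

from langchain.memory import ConversationSummaryMemory
from langchain_openai import ChatOpenAI

# hypothetical: one summarising memory per session, held in the worker process
_memories: dict[str, ConversationSummaryMemory] = {}

def get_memory(session_id: str) -> ConversationSummaryMemory:
    if session_id not in _memories:
        _memories[session_id] = ConversationSummaryMemory(llm=ChatOpenAI(temperature=0))
    return _memories[session_id]

# inside the callback:
# memory = get_memory(session_id)
# memory.save_context({"input": user_input}, {"output": response})
# summary = memory.load_memory_variables({})["history"]  # running summary text

With multiple workers, each process would keep its own copy, so a shared server-side cache would be safer in production.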

@adamschroeder Thank you as well for the videos on YouTube :smiley: They got me hooked on this topic in the first place! :+1:
