Databricks LLM (DBRX) with Dash App Building Challenge

App submission deadline extended to end of the day Sunday, July 7, 2024.

Hey Everyone,
Recently, Databricks announced a new standard for efficient open source LLMs – DBRX. This LLM can be used together with Dash to create remarkable AI data apps.

Starting today, we challenge the LLM builder community to build apps powered by DBRX and to create front-end app interfaces for them with Dash.

The winning apps will be judged according to the following criteria:

  • App must integrate DBRX
  • Impact the app could have (great business idea, societal impact, etc.)
  • Integration of AI agents (LangChain, CrewAI, etc.)
  • App UI/UX Design

Please submit your app as a new post in this thread. Include a link to the app (not mandatory, but encouraged), a link to the code on GitHub, and a short description of the app.

Submission deadline is the end of the day Sunday, June 30, 2024 (since extended to July 7, 2024; see the note at the top of this post).

The winners will be announced in July and will receive a reward of:

:1st_place_medal: $125

:2nd_place_medal: $75

:3rd_place_medal: $50

The lucky app challenge participants will be featured in the next edition of Dash Club and have a chance to showcase their app at a community event.

For any questions, please email Adam at adam@plot.ly.

DBRX Pricing Note:

:stop_sign: You can open your first Databricks account with a 14-day free trial. All Databricks usage is free during this time, but Databricks uses compute and S3 storage resources in your AWS account, so please be aware of usage and expenses.

Create Free Trial Databricks Account:

  • To work with the DBRX LLM, you will need to create a Databricks account and a Databricks workspace. We’ve recorded a video tutorial to support you through the process.
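
Before wiring the model into a Dash app, it's worth confirming that your token and serving endpoint work with a one-off request. Below is a minimal sanity-check sketch; it assumes the same OpenAI-compatible serving endpoint and databricks-dbrx-instruct model used in the examples further down, with the workspace placeholder replaced by your own URL.

import os
from dotenv import find_dotenv, load_dotenv
from openai import OpenAI

load_dotenv(find_dotenv())

# Assumes DATABRICKS_TOKEN is set in a .env file and that your workspace
# exposes the OpenAI-compatible serving endpoint used in the examples below
client = OpenAI(
    api_key=os.getenv("DATABRICKS_TOKEN"),
    base_url="https://***your-workspace***.cloud.databricks.com/serving-endpoints",
)

reply = client.chat.completions.create(
    model="databricks-dbrx-instruct",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=32,
)
print(reply.choices[0].message.content)  # a printed greeting means the token and endpoint are working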

App Ideas:

  • Question-Answering app
  • RAG app that can summarize and answer questions from links, PDFs, videos, etc.
  • App that summarizes risks and opportunities in a financial report
  • App that incorporates a chatbot - customer support

Minimal App Examples:

:point_right: Question-Answering Dash app with DBRX:

from dash import Dash, html, dcc, callback, Output, Input, no_update
from openai import OpenAI
from dotenv import find_dotenv, load_dotenv
import os

dotenv_path = find_dotenv()
load_dotenv(dotenv_path)

# Once you have your workspace, create your token by going to your databricks avatar -> settings -> developer -> Access tokens
DATABRICKS_TOKEN = os.getenv("DATABRICKS_TOKEN")  # Create a .env file and write: DATABRICKS_TOKEN="insert-your-token"

client = OpenAI(
    api_key=DATABRICKS_TOKEN,
    base_url="https://***your-workspace***.cloud.databricks.com/serving-endpoints"
    # the base_url will be sent to your email once your workspace is created. It's also in the 1st part of the url when you're in your workspace
)

app = Dash()
app.layout = [
    dcc.Markdown("# Minimal example of a no-memory Chat Dash app"),
    html.Label("Type your question to activate the DBRX LLM"),
    html.P(),
    dcc.Input(id='user-input', type='text', debounce=True),  # debounce will delay the Input Processing until after you hit Enter
    html.Div(id='response-space', children='')
]


@callback(
    Output('response-space', 'children'),
    Input('user-input', 'value'),
    prevent_initial_call=True
)
def activate_chat(input_value):
    if not input_value:  # don't update the Output if the input value is empty (no text)
        return no_update
    else:
        chat_completion = client.chat.completions.create(
            messages=[
                {
                    "role": "system",
                    "content": "You are an AI assistant"
                },
                {
                    "role": "user",
                    "content": input_value
                }
            ],
            model="databricks-dbrx-instruct",  # this is the DBRX model
            max_tokens=256
        )
        print(chat_completion)
        response = chat_completion.choices[0].message.content
        return response



if __name__ == '__main__':
    app.run(debug=True)

[GIF: chat-app-demo]
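
The example above is intentionally memory-free: each question is sent on its own. If your idea needs the model to remember earlier turns (for example, the customer-support chatbot from the ideas list), one common pattern is to keep the running message history in a dcc.Store and resend it with every request. The sketch below is an illustrative variation on the example above, not an official challenge example.

# Rough sketch: chat with conversation memory kept in dcc.Store
from dash import Dash, html, dcc, callback, Output, Input, State, no_update
from openai import OpenAI
from dotenv import find_dotenv, load_dotenv
import os

load_dotenv(find_dotenv())
client = OpenAI(api_key=os.getenv("DATABRICKS_TOKEN"),
                base_url="https://***your-workspace***.cloud.databricks.com/serving-endpoints")

app = Dash()
app.layout = [
    dcc.Markdown("# Chat Dash app with simple memory"),
    dcc.Store(id='history', data=[{"role": "system", "content": "You are an AI assistant"}]),
    dcc.Input(id='user-input', type='text', debounce=True),
    html.Div(id='response-space')
]


@callback(
    Output('response-space', 'children'),
    Output('history', 'data'),
    Input('user-input', 'value'),
    State('history', 'data'),
    prevent_initial_call=True
)
def chat_with_memory(user_text, history):
    if not user_text:
        return no_update, no_update
    history = history + [{"role": "user", "content": user_text}]
    completion = client.chat.completions.create(
        model="databricks-dbrx-instruct",
        messages=history,  # the whole conversation is resent each turn
        max_tokens=256
    )
    answer = completion.choices[0].message.content
    history = history + [{"role": "assistant", "content": answer}]
    return answer, history


if __name__ == '__main__':
    app.run(debug=True)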


:point_right: RAG App with DBRX:

from dash import Dash, dcc, html, callback, Output, Input, State, no_update
import dash_mantine_components as dmc  # pip install dash-mantine-components==0.12.0
from langchain_community.document_loaders import WebBaseLoader, PyPDFLoader
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
from dotenv import find_dotenv, load_dotenv
import re
import os

dotenv_path = find_dotenv()
load_dotenv(dotenv_path)
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")  # Create a .env file and write: OPENAI_API_KEY="insert-your-openai-token"

# Once you have your workspace, create your token by going to your databricks avatar -> settings -> developer -> Access tokens
DATABRICKS_TOKEN = os.getenv("DATABRICKS_TOKEN")  # Add to your .env file: DATABRICKS_TOKEN="insert-your-token"
llm = ChatOpenAI(model_name="databricks-dbrx-instruct",  # this is the DBRX model
                 openai_api_key=DATABRICKS_TOKEN,
                 openai_api_base="https://***your-workspace***.cloud.databricks.com/serving-endpoints")
                 # the base_url will be sent to your email once your workspace is created. It's also the first part of the URL when you're in your workspace


prompt = ChatPromptTemplate.from_template("""Answer the following question based only on the provided context:

<context>
{context}
</context>

Question: {input}""")

document_chain = create_stuff_documents_chain(llm, prompt)  # chain the LLM to the prompt

# Initialize the Dash app and define the layout
app = Dash()
app.layout = html.Div(
    [
        dmc.Container(  # dash-mantine-components==0.12.0
            children=[
                dmc.Title(order=1, children="Online Document Summarizer"),
                dmc.TextInput(label="Summarize Doc", placeholder="Enter the webpage or pdf...", id="input-1"),
                dmc.TextInput(label="Ask your question", placeholder="Ask away...", id="input-2"),
                dcc.Loading(html.Div(id='answer-space')),
                dmc.Button(children="Submit", id="submit-btn", mt="md")
            ],
            style={"maxWidth": "500px", "margin": "0 auto"},
        )
    ]
)

@callback(
    Output('answer-space', 'children'),
    Input('submit-btn', 'n_clicks'),
    State('input-1', 'value'),
    State('input-2', 'value'),
    prevent_initial_call=True
)
def update_output(n_clicks, input1, input2):
    if not input1 or not input2:  # don't run the chain until both inputs have a value
        return no_update
    if re.search(r'\.pdf$', input1, re.IGNORECASE):  # check whether the link points to a pdf
        # https://image-us.samsung.com/SamsungUS/tv-ci-resources/2018-user-manuals/2018_UserManual_Q9FNSeries.pdf
        # https://arxiv.org/pdf/2304.03271.pdf
        loader = PyPDFLoader(input1)
        docs = loader.load_and_split()
    else:
        # load HTML pages and parse them
        # https://en.wikipedia.org/wiki/Paris
        loader = WebBaseLoader(input1)
        docs = loader.load()


    embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
    text_splitter = RecursiveCharacterTextSplitter()
    documents = text_splitter.split_documents(docs)
    vector = FAISS.from_documents(documents, embeddings)
    retriever = vector.as_retriever()
    retrieval_chain = create_retrieval_chain(retriever, document_chain)
    # Sample questions for testing the app:
    #   What was Paris architecture like in the 19th century?
    #   How can I fix my remote control?
    #   How many liters of water did Google consume in 2022?
    response = retrieval_chain.invoke(
        {"input": input2})

    return response["answer"]


if __name__ == '__main__':
    app.run(debug=True)

[GIF: rag-app-demo]
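
A note on the example above: DBRX handles the answer generation, but the embeddings come from OpenAI, which is why an OPENAI_API_KEY is also needed. If you would rather stay entirely on Databricks, your workspace can also serve embedding models, and LangChain has a wrapper for them; the sketch below assumes the databricks-bge-large-en endpoint is available in your workspace and that DATABRICKS_HOST and DATABRICKS_TOKEN are set in your environment.

# Optional swap: use a Databricks-served embedding model instead of OpenAI embeddings
# (assumes DATABRICKS_HOST and DATABRICKS_TOKEN are set and the endpoint exists in your workspace)
from langchain_community.embeddings import DatabricksEmbeddings

embeddings = DatabricksEmbeddings(endpoint="databricks-bge-large-en")
# then build the vector store exactly as before:
# vector = FAISS.from_documents(documents, embeddings)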


:point_right: Pandas Agent Dash App with DBRX and LangChain:

from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
from langchain_openai import ChatOpenAI
from dash import Dash, html, dcc, callback, Input, Output
import plotly.express as px
import pandas as pd
from dotenv import find_dotenv, load_dotenv
import os

dotenv_path = find_dotenv()
load_dotenv(dotenv_path)
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")  # Create a .env file and write: OPENAI_API_KEY="insert-your-openai-token"

df = px.data.stocks()
df['date'] = pd.to_datetime(df['date'])
dropdown_options = list(df.columns)
dropdown_options.remove('date')



DATABRICKS_TOKEN = os.getenv("DATABRICKS_TOKEN")  # Add to your .env file: DATABRICKS_TOKEN="insert-your-token"
llm = ChatOpenAI(model_name="databricks-dbrx-instruct",
                 openai_api_key=DATABRICKS_TOKEN,
                 openai_api_base="https://***your-workspace***.cloud.databricks.com/serving-endpoints")
# Quick standalone test of the pandas agent (uncomment these lines to try it before running the Dash app):
# agent = create_pandas_dataframe_agent(llm, df, verbose=True)
# response = agent.invoke("what is the data telling us about AAPL stock performance?")
# print(response)
# exit()

app = Dash()
app.layout = [
    dcc.Markdown("# Demo App using a Langchain Pandas Agent"),
    dcc.Dropdown(id='stock-picker', options=dropdown_options, value=['AAPL','FB'], multi=True),
    dcc.Graph(id='line-chart', figure={}),
    dcc.Loading(html.Div(id='answer-space'))
]


@callback(
    Output('line-chart', 'figure'),
    Output('answer-space', 'children'),
    Input('stock-picker', 'value')
)
def activate_agent(stocks_chosen):
    # Create the figure: plot the chosen stock columns against the date column
    df_filtered = df[['date'] + stocks_chosen]
    print(df_filtered)
    fig = px.line(df_filtered, x='date', y=stocks_chosen)

    # Use pandas agent to analyze the dataset
    agent = create_pandas_dataframe_agent(llm, df_filtered, verbose=True, handle_parsing_errors=True)
    response = agent.invoke("The dataset includes data for 2018 and for 2019. what is the dataset telling us about the performance of the stocks in both years")
    print(response)
    return fig, response["output"]



if __name__ == '__main__':
    app.run(debug=True)

[GIF: pandas-agent2]


Good luck with the challenge. If you have any questions while working on it, please use the forum and support each other.


Hi Plotly Community! :smiley:

I’m excited to share an app I’ve developed for this competition called Quizdash, a minimalist AI-powered quiz generation app. With Quizdash, powered by DBRX, you can effortlessly generate quizzes on any topic/text/pdf, take the quizzes, and receive assistance from a personal “tutor” through an integrated chat interface.


  • Quiz Generation: Users can generate a customized quiz by selecting a topic, uploading text or a PDF, and specifying the number of questions, options, and difficulty level. DBRX will do the rest! (A rough sketch of how this step could be prompted appears after this list.)

  • Quiz Overview: The Quiz page provides an overview of created quizzes, displaying basic information and allowing users to play any quiz.

  • Play a Quiz: Users can play the quiz, check answers, receive explanations, and view results on the final slide using interactive dash components.

  • Personal AI Tutor: A chat interface allows users to ask questions and receive guidance from a personal AI tutor.
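
For anyone curious how the quiz-generation step could be prompted, here is a rough illustrative sketch built on the same OpenAI-compatible DBRX client as the challenge examples above. The prompt wording, the generate_quiz helper, and the JSON shape are assumptions for illustration and are not taken from the Quizdash repo.

# Illustrative only: one way to ask DBRX for a quiz as JSON (not the Quizdash implementation)
import json
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DATABRICKS_TOKEN"),
    base_url="https://***your-workspace***.cloud.databricks.com/serving-endpoints",
)


def generate_quiz(topic, n_questions=5, n_options=4, difficulty="medium"):
    # hypothetical helper: ask DBRX to return a quiz as a JSON list
    prompt = (
        f"Write a {difficulty} quiz about {topic} with {n_questions} questions. "
        f"Each question must have {n_options} options and exactly one correct answer. "
        'Respond with JSON only: a list of objects with "question", "options", and "answer" keys.'
    )
    completion = client.chat.completions.create(
        model="databricks-dbrx-instruct",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1024,
    )
    return json.loads(completion.choices[0].message.content)  # may need extra cleanup if the model adds prose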

The app is live: quizdash.onrender.com (I used the free tier, so it will take about 1 min to load)
In this live version, you are able to view the app and take an example quiz.
To generate quizzes, clone the repo: GitHub - ceeskaan/quizdash and follow the instructions.

I had a lot of fun building this app! Many thanks to @CNFeffery (feffery-antd-components), as I mainly used the Ant Design UI components for the app!

Cheers,

Cees Kaandorp (LinkedIn)


What a cool app, @ceeskaan. Thanks for submitting it.
A quiz generator is a creative subject for an app; I don't recall having seen this type of app before.

I’m going to try to clone the repo and add my DBRX credentials to see if it works. I’ll let you know if I encounter any errors.

Thank you.

P.S. For some reason, the live app stopped working. Do you know what might have happened?


Thank you! Whoops, I made a typo in the live app URL; it's fixed now: https://quizdash.onrender.com/


Excited to share DoomBerg, a multi-agent-powered dashboard generation app. DoomBerg pulls in S&P 500 data and recent news and runs a negative analysis on the stock. Based on the EBITDA and recent news, a Senior Doom Research Analyst orchestrates a team of agents, such as an Analyst, a Financial Model Builder, and a Dash Programmer, to build a Dash app that measures "Gloom" and explains its reasoning for why the ticker/company will be guided negatively based on current events.
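
For readers who want to experiment with a similar multi-agent setup, here is a rough sketch of how a small CrewAI crew could be pointed at the DBRX serving endpoint. The agent roles and task wording are illustrative and not taken from the DoomBerg code, and depending on your CrewAI version the llm argument may expect a LangChain chat model (as shown here) or a model string.

# Illustrative multi-agent sketch (not the DoomBerg implementation)
import os
from crewai import Agent, Task, Crew  # pip install crewai
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name="databricks-dbrx-instruct",
                 openai_api_key=os.getenv("DATABRICKS_TOKEN"),
                 openai_api_base="https://***your-workspace***.cloud.databricks.com/serving-endpoints")

analyst = Agent(role="Equity Analyst",
                goal="Identify risks in a company's recent results and news",
                backstory="You focus on downside scenarios.",
                llm=llm)

writer = Agent(role="Report Writer",
               goal="Summarize the analyst's findings as a short 'Gloom' report",
               backstory="You write concise, skeptical summaries.",
               llm=llm)

analyze = Task(description="List the main risks for the chosen ticker based on the provided context.",
               expected_output="A bullet list of risks.",
               agent=analyst)

report = Task(description="Turn the risk list into a three-paragraph Gloom report.",
              expected_output="A short report.",
              agent=writer)

crew = Crew(agents=[analyst, writer], tasks=[analyze, report])
print(crew.kickoff())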


Thank you @MarketMaker for submitting your app. Can you please provide instructions on how to run your app locally?

Thank you everyone for submitting your apps :tada:

This challenge is now closed. We’ll review the submitted apps and get back to the winners directly within a week.