How to dynamically add pages that load in the background

I’m relatively new to Dash, and my objective is to develop a user interface for an application where users can upload multiple files. I have a machine learning algorithm that processes each uploaded file and displays the output for the user to review. My goal is to create an app where a new page is dynamically generated for each uploaded file, and the corresponding file is processed in the background.

While I’ve successfully created a single-page app where users can analyze one file, I’m struggling to find examples or documentation on how to dynamically create pages for each uploaded file. I’m wondering if this is achievable in Dash, or if you’d recommend a different approach for this functionality? Also, if you could help with integrating the background processing into this, that would be great, as processing each file takes quite long and I’d like to let the user interact with the first file while the others are being processed.


Really looking for any help. Another approach would be to create elements dynamically and then process their input in the background in some kind of a queue. Would that be doable, or a better approach?

Hey @hemmlin welcome to the forums.

As far as creating pages goes (I’m assuming you are referring to multi-page apps), I don’t think this is possible, because pages have to be registered at app startup.

What you could do is add tabs dynamically and show the file in the tab. Here is something I did in the past concerning tabs.

Here is some more information on how to create content dynamically:


Thank you, that looks really promising! Do you know of any resources on how to process the tabs in a queue in the background? The processing takes quite some time, and it would be great to let the user see the first results before every file is processed.


Not really. You would want to upload the files as a batch, right? Like dcc.Upload(multiple=True). I think with just using this you would have to wait until every file has been uploaded. Does the upload take a lot of time? I guess you could trigger the processing of each file individually once the upload has finished.

The uploading is fast, but the processing, for which I use an AI model, is the bottleneck. How should I do this triggering according to best practice?
Thank you so much for your help!

Hello @hemmlin,

My recommendation, from what this sounds like, is this:

For each file a user uploads:

Create a tab with the filename as the value and name (this happens immediately) → this triggers a background callback to load the content into the children of the tab

During the background callback:

  • have a loading spinner added to the tab name
  • have progress updates to the children of the tab to give the user an update on the processing of the file (could place this in the name of the tab too if you wanted)

^ This will work better if you can break your ML process into sub-steps to give nice updates

Finally from the callback:

  • update the loading spinner to a checkmark
  • remove any extra names from the tab name
  • place your finished data into the children of the tab

This sounds like exactly what I need. Do you know of any examples of triggering a background callback with the loading spinners? I already got a nice overview of how to dynamically create tabs from the previous response. Thank you for your help!


Sure, here is an example with a progress bar; it should be similar in concept:

I’d start with a simple process and a time delay to make sure things are working properly before hooking up your ML processing.

You may also need to use dmc.Tabs instead of dcc.Tabs:

Thanks a lot, this will help me get much further with my project! For triggering the processing callbacks, would you recommend using the dash-renderer as described here: Advanced Callbacks | Dash for Python Documentation | Plotly? Or is there a more obvious way of triggering other callbacks from the initial callback?

Yeah, the initial callback could be triggered by the tab being added.

Like I suggested, you should start with something small and make sure it is working like you are thinking.

I got off to a nice start with your help, thank you for that! However, I am struggling to find a way to trigger the background processes in a queue rather than in parallel, i.e. I want only one file at a time to be processed by the AI model, as otherwise it is too computationally heavy. Also, in my use case it does not matter if the other tabs are loading while the first one is being checked by the user.

Here is my current simple version for the app, if you have suggestions please let me know!

import dash
from dash import dcc, html, MATCH, DiskcacheManager, Input, Output, State
from dash.exceptions import PreventUpdate
import time
import dash_mantine_components as dmc
from dash_iconify import DashIconify
import diskcache
import backend.constants

cache = diskcache.Cache("./cache")
background_callback_manager = DiskcacheManager(cache)

app = dash.Dash(__name__, background_callback_manager=background_callback_manager)
server = app.server
poppler_path = backend.constants.poppler_path

colors = {"graphBackground": "#F5F5F5", "background": "#ffffff", "text": "#000000"}

app.layout = html.Div(
    [
        dcc.Upload(
            id="upload-data",
            children=html.Div(["Drag and Drop or ", html.A("Select Files")]),
            style={
                "width": "100%",
                "height": "60px",
                "lineHeight": "60px",
                "borderWidth": "1px",
                "borderStyle": "dashed",
                "borderRadius": "5px",
                "textAlign": "center",
                "margin": "10px",
            },
            # Allow multiple files to be uploaded
            multiple=True,
        ),
        html.Div(id="app_placeholder"),
        dcc.Store(id="page-image-paths"),
    ]
)


def create_file_panel(file_name, file_nro):
    return dmc.Tab(
        file_name,
        value=file_name,
        id={"type": "file_tab", "index": file_nro},
    )


def create_file_tab(file_name, file_nro):
    return dmc.TabsPanel(
        children="Loading the data",
        value=file_name,
        id={"type": "file_card", "index": file_nro},
    )


def parse_file_tab(page_paths):
    """Heavy lifting of detecting the tables,
    finding the text and putting it into a df."""
    return html.Div("Loaded")


@app.callback(
    Output("app_placeholder", "children"),
    Output("page-image-paths", "data"),
    Input("upload-data", "contents"),
    State("upload-data", "filename"),
    State("upload-data", "last_modified"),
)
def update_output(list_of_contents, list_of_names, list_of_dates):
    if list_of_names is None:
        raise PreventUpdate
    list_of_page_panels = [
        create_file_panel(file_name, file_nro)
        for file_nro, file_name in enumerate(list_of_names)
    ]
    list_of_page_tabs = [
        create_file_tab(file_name, file_nro)
        for file_nro, (file_name, file_content) in enumerate(
            zip(list_of_names, list_of_contents)
        )
    ]
    # panels go inside a TabsList, the content panels follow as siblings
    list_of_all = [dmc.TabsList(list_of_page_panels)] + list_of_page_tabs
    # save the images to temp folder and save the filepaths for later
    list_of_imagepaths = dict({"some dict": "to be saved"})
    return (
        dmc.Tabs(list_of_all, value=list_of_names[0]),
        list_of_imagepaths,
    )


# background callback to fill in the content of each file's tab
@app.callback(
    Output({"type": "file_tab", "index": MATCH}, "icon"),
    Output({"type": "file_card", "index": MATCH}, "children"),
    Input({"type": "file_card", "index": MATCH}, "value"),
    background=True,  # runs via the DiskcacheManager set on the app
)
def update_tab(file_name):
    if file_name is None:
        raise PreventUpdate
    data_blocks = parse_file_tab(file_name)
    page_inside = html.Div(data_blocks)
    return (
        DashIconify(icon="mdi:check"),  # swap the spinner for a checkmark
        page_inside,
    )


if __name__ == "__main__":
    app.run(debug=True)
Also, this occasionally throws an error:
ImportError: cannot import name 'Popen' from partially initialized module 'multiprocess.popen_spawn_win32' (most likely due to a circular import)

But it is only occasional, and somehow caused by the background process.

I would appreciate any input you have, or if this is the wrong way/channel to ask for help, also let me know!