Problem with dcc.Interval taking too much time to update the data

Hello everyone,

I’ve just developed an app with two containers:

Worker container fetches data from physical equipment and stores it in a CSV file.
Api container is a Dash app which reads the CSV and displays it in a Dash DataTable.

There is a scheduler which runs the Worker every 10 minutes, so I get an updated CSV file every 10 minutes.

I’m using an Interval component to read the CSV every second and update the table, so the user always has up-to-date data. I also read the modification date of the CSV file, so I can show the user the time of the last update. I also tell them when the Worker is busy fetching data (thanks to the Docker API).

It worked very well for three weeks. I also deployed the app last week with Gunicorn and it worked fine; the data was displayed instantly. But today there is a bug whose origin I can’t trace:
Each time I open the app, the data in the table and the time of the last update take between 15 and 60 seconds to be displayed.
I saw in the logs that everything is working; the bug is only in the display.
Also, once the data and the time are displayed, the same story repeats 10 minutes later, after the new data arrives: the new data takes a long time to be displayed.

Here is the part of my code that deals with the display of data in the table and the time:

from dash import Dash, dash_table, dcc, html, State
from dash.dependencies import Input, Output
import dash_bootstrap_components as dbc
import pandas as pd
import docker
from datetime import datetime
import os
import pytz

app = Dash(external_stylesheets=[dbc.themes.BOOTSTRAP])

server = app.server

df = pd.read_csv("./data.csv")
df = df.fillna("NaN")
app.title = "Host Tracer"

# Layout
app.layout = html.Div(children=[
    html.Div(id='time-infos'),
    dash_table.DataTable(
        id='datatable-interactivity',
        columns=[{'name': i, 'id': i} for i in df.columns],
        data=df.to_dict('records'),
        page_current=0,
        page_size=40,
    ),
    dcc.Interval(
        id='interval-component',
        interval=1*1000,  # in milliseconds
        n_intervals=0,
    ),
])

def last_modification_time_of_csv(file):
    modTimesinceEpoc = os.path.getmtime(file)
    return datetime.fromtimestamp(modTimesinceEpoc).astimezone(pytz.timezone('Europe/Paris')).strftime("%d/%m/%Y at %H:%M:%S")

# Display data in table every second
@app.callback(
    Output('time-infos', 'children'),
    Output('datatable-interactivity', 'data'),
    Input('interval-component', 'n_intervals'))

def update_table(n):
    df = pd.read_csv("./data.csv")
    df = df.fillna("NaN")
    date_time = last_modification_time_of_csv("./data.csv")
    infos = ""
    client = docker.from_env()
    container = client.containers.get('container_worker')
    if container.attrs["State"]["Status"] == "running":
        infos = '⚠️ Worker in process...'
    else:
        infos = 'last updated data: ' + date_time
    return infos, df.to_dict('records')

if __name__ == '__main__':
    app.run_server(debug=True, host='')

First, I thought it was a problem with Gunicorn. I replaced it with the Flask development server, but the problem is still there.

Maybe someone has an idea of where the issue is coming from?
I should mention that my CSV has 15,000 rows.
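For scale: those 15,000 rows get re-serialized and pushed to the browser on every interval tick. A rough sketch of the payload size (the column count and dtypes are assumptions, just for an order of magnitude):

```python
import json

import numpy as np
import pandas as pd

# Assumed shape: 15,000 rows x 8 numeric columns, roughly like the CSV.
df = pd.DataFrame(np.random.rand(15000, 8), columns=list("abcdefgh"))

# This is what to_dict('records') + JSON serialization produces on
# every single tick of the interval.
payload = json.dumps(df.to_dict("records"))
print(f"~{len(payload) / 1e6:.1f} MB serialized per tick")
```

At one tick per second that is a lot of repeated work for data that only changes every 10 minutes.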

Thank you,


I just changed the interval from 1*1000 to 1*2000 and it’s working better. I really don’t understand why.

I think I should rethink my mechanism: rewriting the data in my table every 1 or 2 seconds is too much. The thing is that I don’t know exactly when the data is updated, because I also let the user fetch the data by clicking a button. That’s why I’m refreshing every second.



Even with the 2-second interval, it sometimes takes a long time to load the data. I really don’t understand what the matter is.

Thank you

Hi, you may need this below.


Thank you so much, it helped a lot.
So you think the problem was that the callback was called again before the previous call had finished, and that blocked the whole mechanism?

Thanks @stu

Yes, that is what the transform protects against :blush: