Multiple Callbacks for single inputs - not all are firing

I may have stumbled on a callback chaining issue.
I closely followed the example written here: Pattern for avoiding circular dependency in Dash callbacks - #2 by ebosi

When running the app below, clicking the button should show a string below the button, “got start_signal.children”, which after 3 s should change to “got end_signal.children”.

What happens instead:
Upon clicking the button, nothing happens; only after 3 s does one of the two messages appear, depending on the server configuration (e.g. Flask/gunicorn).
So it seems the callbacks are not executed in parallel but rather chained, with the long-running process given priority (for whatever reason).

Tried with:
kubuntu 20.04
Python 3.7, 3.8
dash 1.6, 1.9, 1.10, 1.12
Flask, waitress, gunicorn (multiple workers)
firefox, chrome
changed the order of objects, functions and arguments in the code (no effect)

I would be happy if the callbacks (at least within one user session) didn’t run in parallel but were chained instead; I would then need a way to control the execution order, though. Is this a bug or intended behavior?

import dash
from dash.dependencies import Input, Output
import dash_html_components as html
import time

app = dash.Dash(__name__)
app.layout = html.Div([
    html.Button("start job (3s)", id="start"),
    html.Div("Not run", id="div"),
    html.Div(id="start_signal", hidden=True),
    html.Div(id="end_signal", hidden=True),
])
server = app.server

@app.callback(Output("start_signal", "children"),
              [Input("start", "n_clicks")])
def start(_button):
    return True

@app.callback(Output("div", "children"),
              [Input("start_signal", "children"), Input("end_signal", "children")])
def show(_start_signal, _end_signal):
    prop = dash.callback_context.triggered[0]['prop_id']
    return f"got {prop}"

@app.callback(Output("end_signal", "children"),
              [Input("start_signal", "children")])
def job(_signal):
    time.sleep(3)  # the long-running job
    return True

if __name__ == "__main__":
    app.run_server(debug=True, threaded=True)

With the code you have written, I would expect job and show to both be triggered at the same time, and since the job takes long, the GUI would freeze. It sounds like this is also what you see?

If you need to run a long job, it would probably be better to run it asynchronously; this way the GUI will remain responsive.

Hm, makes sense.
Interestingly, if I replace the one-input/two-callbacks logic with one input per callback (two outputs from start), the whole thing works much better, but only up to dash 1.10. In dash 1.12 it’s the same result as in my first example.

But this one should work in any case, right?!

To solve the problem of the arbitrary order of callback firing, would it make sense to introduce a callback_delay per output object? It would add a configurable delay in seconds before the signal is actually passed to that output. This should be easy in pure JS, but I am not sure how much the React API offers here. In contrast to pure ordering, this would even allow for much more complex apps, where the output of a function is expected within a certain time before any other associated functions run (this concept vs. callback chains).
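Server-side, the proposed behaviour could be sketched as a plain Python decorator; note that `callback_delay` is purely hypothetical here (it is not a Dash feature), and it only delays the return value rather than the output update on the client:

```python
import functools
import time

def callback_delay(seconds):
    """Hypothetical decorator: wait before forwarding a callback's
    return value toward its output. NOT a real Dash API, just a
    sketch of the proposed callback_delay idea."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            time.sleep(seconds)  # configurable delay before the output is updated
            return result
        return wrapper
    return decorator

@callback_delay(0.1)
def delayed_job(_signal):
    return True
```

A real implementation would likely have to live in the client-side dispatch logic instead, since sleeping on the server just blocks a worker.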

In general, my experience is that after 1.11.0, callbacks are executed more coherently.

Introducing delays is often a hack to work around other problems; I would generally prefer to address the root cause. In your example, one such solution would be to execute the long job asynchronously.

Thanks for your suggestion, @Emil. However, because of the nature of my app’s mechanics, I’d prefer to keep the long-running job in its own callback, which can fire an event upon completion. I use caching and the Flask session context quite heavily in the app, and these crash when using async threads. For now I have used that hack to enforce a callback execution order: I introduced another interval which triggers the long-running job after a few moments, to make sure all other callbacks have executed beforehand.
Note: this is all in a gunicorn/Flask context with multiple workers, so callbacks should be executed in parallel regardless of their execution time.
It still feels like the GUI shouldn’t freeze while waiting for the long-running job, but should instead only update (and therefore briefly freeze) once the job has finished.

It should be possible to run the task asynchronously without crashes. Not knowing the complete architecture of your application, I can’t comment on the best possible approach, but one option could be to use a task queue such as Celery to run the long process and signal when it is complete. Here is a small example,
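For readers without a Celery broker at hand, the same pattern can be sketched with a plain background thread: the button callback starts the job and returns immediately, and a polling callback (e.g. driven by a dcc.Interval) checks a shared status flag. This is a framework-agnostic sketch with the Dash wiring omitted; the function and variable names are illustrative, not from any library:

```python
import threading
import time

# shared state the polling callback inspects (per-process; a real
# multi-worker deployment would need a shared store such as Redis)
job_state = {"done": False, "result": None}

def long_job():
    """Stands in for the 3 s task; runs off the request thread."""
    time.sleep(0.2)
    job_state["result"] = 42
    job_state["done"] = True

def start_job():
    """What the button callback would do: fire the job and return at once."""
    threading.Thread(target=long_job, daemon=True).start()
    return "started"

def poll_job():
    """What an Interval-driven callback would do on each tick."""
    if job_state["done"]:
        return f"got result {job_state['result']}"
    return "still running"
```

Because `start_job` returns immediately, the GUI stays responsive while `long_job` runs; a task queue like Celery adds durability and works across multiple gunicorn workers, which this in-process sketch does not.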

I am having a similar problem: where callbacks share an input, only the first callback is firing.

code abstraction:

# submit callback (output elided in this abstraction)
def some_function(n):
    return ''

# callback that fires if submit executes
@app.callback(Output('table', 'data'),
              [Input('submit', 'n_clicks')])
def update_sql_data(n):
    # update sql script
    print('ran first callback')
    return data

# another callback that fires if submit executes
@app.callback(Output('another-table', 'data'),
              [Input('submit', 'n_clicks')])
def update_other_sql_data(n):
    # update another sql script
    print('ran second callback')
    return data

I know that the code inherently works, and I know that the first callback is working based on printing to the terminal alone, but the second callback with the same input just won’t trigger. Is this a bug (potentially similar to what is seen above), or is there something I am missing? I wanted to keep the code cleaner by separating the functions, but perhaps the fix is just to have one larger callback with more outputs and do it all together.

Any help would be much appreciated.