
@long_callback with Celery & Redis - how to get the example app to work?

Hello,

Today, I am trying to update my app to Dash 2.0, mainly to take advantage of the @long_callback decorator.

I'm having trouble getting the Celery/Redis example app from the Dash documentation to work.

I run Redis in docker using the following command:

$ docker run -p 6379:6379 redis

My Dash app is in a module called playground.py and its content is copy-pasted from the documentation:

import time
import dash
from dash import html
from dash.long_callback import CeleryLongCallbackManager
from dash.dependencies import Input, Output
from celery import Celery

celery_app = Celery(
    __name__, broker="redis://localhost:6379/0", backend="redis://localhost:6379/1"
)
long_callback_manager = CeleryLongCallbackManager(celery_app)

app = dash.Dash(__name__, long_callback_manager=long_callback_manager)

app.layout = html.Div(
    [
        html.Div([html.P(id="paragraph_id", children=["Button not clicked"])]),
        html.Button(id="button_id", children="Run Job!"),
        html.Button(id="cancel_button_id", children="Cancel Running Job!"),
    ]
)

@app.long_callback(
    output=Output("paragraph_id", "children"),
    inputs=Input("button_id", "n_clicks"),
    running=[
        (Output("button_id", "disabled"), True, False),
        (Output("cancel_button_id", "disabled"), False, True),
    ],
    cancel=[Input("cancel_button_id", "n_clicks")],
)
def callback(n_clicks):
    time.sleep(2.0)
    return [f"Clicked {n_clicks} times"]


if __name__ == "__main__":
    app.run_server(debug=True)

I run the app by executing:

$ python playground.py

The app starts and I get no errors. I open the app in the browser and see this:

My assumption is that callback() is triggered on page load, which is why the ‘Run Job!’ button is disabled. The button never gets re-enabled, so I assume the job never finishes. The app does not throw any errors.

What am I missing? Perhaps I need a separate process for the Celery worker? I tried to follow the Celery tutorial but couldn’t adapt it for use with Dash. Thanks for any advice!


Same issue for me.

As I suspected, a separate process for the Celery worker is needed. In fact, two processes: one for the worker and another for something called beat.

How to run the example app:

1) Start the redis server

Either with docker:

$ docker run -p 6379:6379 redis

Or without docker:

$ redis-server

2) Start Celery beat and worker processes

(two separate processes in separate terminal windows):

$ celery -A playground.celery_app beat --loglevel=INFO

and

$ celery -A playground.celery_app worker --loglevel=INFO

3) Start the Dash app

$ python playground.py

edit:

The beat process from point 2) is not necessary as @amarv pointed out.


Great solution! My Dash app is defined in application.py, and long callbacks started working once I also ran celery -A application.celery_app worker. I didn’t need to run Celery beat.

You are right. It also works without the beat process. I’ll need to do some more reading about the purpose of the beat process.

Anyway, I’m happy it works for you. I’m now struggling to make long callbacks work in a multi-page app. Have you tried that?


Nice! Sorry, I haven’t tried it with multi-page Dash apps. You may run into circular imports, which are common in multi-page Dash apps, so you’ll have to use the suggested approach of splitting the app declaration and the server declaration into two Python files (see URL Routing and Multiple Apps | Dash for Python Documentation | Plotly, section “Structuring a Multi-Page App”).

So to make sure I understand the solution, even though the playground.py file creates a Celery object that is hooked into the dash app which is being run, the celery app also needs to be started on its own?

I am not sure exactly how it all works, nor am I sure about the correct terminology, but I’ll try to formulate it the way I understand it…

The Celery app is created in playground.py as an object of type Celery. The Celery app on its own does not run the asynchronous tasks. It probably only sends information about the tasks to be executed to Redis, and also keeps an eye on finished tasks, whose results are likewise collected in Redis.

There must be a separate worker process whose job is to look for tasks submitted to Redis, execute them (asynchronously from the main Dash process), and, once finished, submit the results back to Redis. This is the process that is started with the command:

$ celery -A playground.celery_app worker --loglevel=INFO

Please anyone correct me if I’m wrong somewhere…
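If it helps, the division of labour described above can be sketched with the standard library alone. This is purely illustrative, not Celery’s actual API: a queue.Queue stands in for Redis as the broker, a dict stands in for the result backend, and a thread stands in for the Celery worker process.

import queue
import threading

broker = queue.Queue()   # plays the role of Redis DB 0 (task broker)
results = {}             # plays the role of Redis DB 1 (result backend)

def worker():
    # The "Celery worker": pull tasks, execute them, store the results.
    while True:
        task_id, func, args = broker.get()
        if func is None:        # sentinel telling the worker to stop
            break
        results[task_id] = func(*args)

t = threading.Thread(target=worker)
t.start()

# The "app" side only *submits* tasks; it never executes them itself.
broker.put(("job-1", lambda n: f"Clicked {n} times", (3,)))
broker.put((None, None, None))  # shut the worker down
t.join()
print(results["job-1"])         # Clicked 3 times

The key point the sketch illustrates: if you never start the worker process, tasks sit in the broker forever and no result ever appears, which matches the “button never gets re-enabled” symptom above.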


Enabling @long_callback for a multi-page app

Making the @long_callback work in a multi-page app is quite simple. Assuming a project structure as follows:

- app.py
- index.py
- apps
   |-- __init__.py
   |-- app1.py
   |-- app2.py

In app.py:

import dash
from dash.long_callback import CeleryLongCallbackManager
from celery import Celery

celery_app = Celery(
    __name__, broker="redis://localhost:6379/0", backend="redis://localhost:6379/1",
    include=['apps.app1', 'apps.app2']
)

long_callback_manager = CeleryLongCallbackManager(celery_app)

app = dash.Dash(__name__, long_callback_manager=long_callback_manager)

server = app.server

Note the include parameter of the Celery object: here, you have to list the paths of all Python modules in which you define functions decorated with @long_callback.

For the rest of the app, see the instructions in the official documentation (Structuring a Multi-Page App).

The Celery worker is started by executing:
$ celery -A app.celery_app worker --loglevel=INFO


I found that I could avoid this, and have the Celery worker automatically detect all long callbacks, by including this line in index.py:

from app import celery_app  # noqa: F401

and then running the Celery worker with

celery -A index.celery_app worker --loglevel=INFO

I just hit a wall with @long_callback, realising that it currently does NOT support pattern-matching dependencies.

I created a feature request for this functionality. You can add your comment there if you also find this feature important.

As a workaround, you can use a normal callback with pattern matching and store the inputs as a JSON string in a hidden div. Then add a long_callback that takes the hidden div as its input. Not pretty, but it works for me.
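For anyone reading later, the data hand-off in that workaround looks roughly like this (stdlib-only sketch; the component ids are invented, and in a real app each step lives inside its own Dash callback):

import json

# Step 1: a normal pattern-matching callback collects its inputs and
# returns them as one JSON string, which Dash stores in the hidden div.
pattern_inputs = {"filter-1": "apples", "filter-2": "pears"}
hidden_div_children = json.dumps(pattern_inputs)

# Step 2: the long_callback takes the hidden div as its single plain
# Input and decodes the values before doing the slow work.
decoded = json.loads(hidden_div_children)
print(decoded["filter-2"])  # pears

Because the hidden div is a plain (non-pattern-matching) component, the long_callback only ever sees one ordinary Input, which sidesteps the limitation.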

That’s a smart solution! I wonder if something like this could be implemented behind the scenes to enable pattern-matching dependencies for @long_callback out of the box? @jmmease