Progress bar from tqdm

I am planning to add an option to the dashboard I am working on that triggers a very long computation, anywhere from a few seconds to over an hour. In a Jupyter notebook I can use Python’s tqdm package for a progress bar, so I would like to know whether there is a way to capture tqdm’s output and display it on the dashboard as a progress bar.
If this is currently not possible, is there another option I could consider?
Thanks in advance for any help.

3 Likes

Hey,

I am personally using dash-bootstrap-components and have been very satisfied with it so far 🙂

1 - Install: pip install dash-bootstrap-components

2 - Add: app = dash.Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])

3 - Add your dbc component.

import dash_bootstrap_components as dbc
import dash_core_components as dcc
import dash_html_components as html

progress = html.Div(
    [
        dbc.Progress(id="progress", value=0, striped=True, animated=True),
        dcc.Interval(id="interval", interval=250, n_intervals=0),
    ]
)
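
Then hook it into your layout (assuming the app from step 2):

app.layout = progress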

More information here:

https://dash-bootstrap-components.opensource.asidatascience.com/l/components/progress

Hope that helps,

Quentin.

4 Likes

Thanks! This is really cool!

2 Likes

How do you update the progress?

With a callback: you link whatever should drive the progress bar to it through its ‘id’ property:

import dash
import dash_bootstrap_components as dbc
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output

app = dash.Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])

progress = html.Div(
    [
        dbc.Progress(id="progress", value=0, striped=True, animated=True),
        dcc.Interval(id="interval", interval=250, n_intervals=0),
    ]
)
app.layout = progress


# The interval fires every 250 ms; each tick advances the bar by one,
# holding at 100 for a few ticks before wrapping back to zero.
@app.callback(Output("progress", "value"), [Input("interval", "n_intervals")])
def advance_progress(n):
    return min(n % 110, 100)


if __name__ == "__main__":
    app.run_server(debug=True)

How would this implementation work?

In my situation, I click a button which runs a callback function. The callback function does a long process with a big for loop. How can I simultaneously extract the loop number for the progress bar given that my function is running inside a callback?

5 Likes

One possible pattern is to run the long job as a background task which records its progress to a shared file system or database, and then have the app periodically check that progress to update the progress bar. Here’s a little demo of that.

Ideally the background process would notify the app directly rather than the app polling for updates, but I think for that type of thing we’d need websocket support in Dash? Not actually totally sure.
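
To make the pattern concrete, here’s a stripped-down sketch of the polling side. A thread and a module-level dict stand in for the task queue and the shared file system / database (so this is a single-process demo only; names like long_job are illustrative):

import threading
import time

import dash
import dash_bootstrap_components as dbc
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output

# Stand-in for shared storage; with multiple processes you would use
# a file or Redis instead of an in-memory dict.
progress_state = {"value": 0}


def long_job():
    # Stand-in for the big for loop; records progress after each step.
    for i in range(100):
        time.sleep(0.5)
        progress_state["value"] = i + 1


app = dash.Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])
app.layout = html.Div(
    [
        html.Button("Start", id="start"),
        dbc.Progress(id="progress", value=0, striped=True, animated=True),
        dcc.Interval(id="interval", interval=250, n_intervals=0),
    ]
)


@app.callback(Output("interval", "disabled"), [Input("start", "n_clicks")])
def start_job(n_clicks):
    # Kick off the background job when the button is clicked and enable polling.
    if n_clicks:
        threading.Thread(target=long_job, daemon=True).start()
        return False
    return True


@app.callback(Output("progress", "value"), [Input("interval", "n_intervals")])
def poll_progress(_):
    # Each interval tick just reads the latest recorded progress.
    return progress_state["value"]


if __name__ == "__main__":
    app.run_server(debug=True)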

4 Likes

Has anyone managed to pipe the tqdm output to a dbc progress bar in Dash yet?

3 Likes

Hmm, how can I stop the interval after the progress bar has finished?

I’m not sure what stopping the interval will gain you, but you can check whether progress has reached 100% and disable the interval.

Could you provide an example of how to disable the interval in the callback?

The example I posted above does this. See here.
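
In outline, it’s something like this (a sketch; here the tick count stands in for the real progress value):

@app.callback(
    [Output("progress", "value"), Output("interval", "disabled")],
    [Input("interval", "n_intervals")],
)
def update_progress(n):
    value = min(n, 100)  # stand-in for the real progress value
    # Once the bar is full, disable the interval so the app stops polling.
    return value, value >= 100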

2 Likes

Hi @tcbegley ,

I really appreciate your example implementation. Especially the way you documented it - it was really easy to follow. Thank you for that!

I adapted your example to my use case, and got it to work. Kind of…

Around half of my RQ jobs end up failing:

job.get_status() == 'failed'

When I look at job.exc_info, I find this:

Traceback (most recent call last):
  File "/Users/user/opt/anaconda3/envs/my_env/lib/python3.7/site-packages/rq/worker.py", line 1003, in perform_job
    self.prepare_job_execution(job, heartbeat_ttl)
  File "/Users/user/opt/anaconda3/envs/my_env/lib/python3.7/site-packages/rq/worker.py", line 893, in prepare_job_execution
    self.procline(msg.format(job.func_name, job.origin, time.time()))
  File "/Users/user/opt/anaconda3/envs/my_env/lib/python3.7/site-packages/rq/job.py", line 254, in func_name
    self._deserialize_data()
  File "/Users/user/opt/anaconda3/envs/my_env/lib/python3.7/site-packages/rq/job.py", line 222, in _deserialize_data
    self._func_name, self._instance, self._args, self._kwargs = self.serializer.loads(self.data)
ModuleNotFoundError: No module named 'my_module'

my_module is the module from which I load the long-running function my_function(), which I pass to queue.enqueue().
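
For context, the enqueueing side looks roughly like this (connection details are illustrative):

from redis import Redis
from rq import Queue

from my_module import my_function  # the long-running function

queue = Queue(connection=Redis())
# The worker process must also be able to import my_module,
# or deserializing the job fails with ModuleNotFoundError.
job = queue.enqueue(my_function)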

Apparently, the RQ worker cannot see my_module.

The thing is, the other half of the same jobs is processed just fine! I can’t figure out what is going on. This is the first time I’m using RQ and Redis, so I don’t know where to look when debugging.

Have you ever encountered anything like that? Or do you have any suggestions where I should look?

Glad you found it useful!

I’ve not come across that error myself, but perhaps this link is helpful? Failing that, I’d be happy to take a look at your code and try to figure out what might be going on.

I tried what the link suggests and it made no difference.

However, I tried running the app inside Docker (adapting your compose setup) instead of using three separate terminal instances as before (one for Redis, one for the RQ worker, and one for the app itself). Now, when running in Docker, I don’t get the ModuleNotFoundError anymore!

So I guess problem solved. ¯\_(ツ)_/¯

Thanks again for your tutorial repo! It made my life so much easier. 🙂

1 Like

Interesting! Not sure what the problem might have been exactly, but I’m glad you’ve got it working.