Using multiple processors for a single callback with gunicorn

I have an application hosted on a web server with gunicorn. One of the callbacks is far more demanding than the others (it’s a semi-empirical quantum chemistry calculation). On my M1 Mac it runs quite fast in my development environment (1 second), but the hosted version is severely slowed down.

I’ve been changing the workers and threads options on gunicorn with no luck. I’m using the free tier of onrender.com, installing the packages from a requirements.txt and then starting the server with:

gunicorn --workers=2 --worker-class=gthread --threads=16 --timeout=120 --chdir sec index:server

I’m new to web hosting, so maybe this request isn’t reasonable (essentially multithreading/multiprocessing each callback). Any help would be greatly appreciated!!
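
To make the ask concrete, here’s a rough sketch of the kind of thing I have in mind: fanning the calculation out over several processes from inside the callback. The layout, ids, and the dummy calculation below are placeholders, not my real code.

from concurrent.futures import ProcessPoolExecutor

import dash
from dash import Input, Output, html

app = dash.Dash(__name__)
app.layout = html.Div([html.Button("Run", id="run-button"), html.Div(id="result")])


def run_fragment(fragment):
    # Stand-in for one chunk of the semi-empirical calculation.
    return sum(x * x for x in fragment)


@app.callback(Output("result", "children"), Input("run-button", "n_clicks"))
def heavy_callback(n_clicks):
    if not n_clicks:
        return "Not run yet"
    fragments = [range(0, 1000), range(1000, 2000), range(2000, 3000)]
    # Fan the chunks out over several worker processes.
    with ProcessPoolExecutor(max_workers=4) as pool:
        return f"Total: {sum(pool.map(run_fragment, fragments))}"


if __name__ == "__main__":
    app.run_server(debug=True)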

Hello @venturellac,

What load balancer do you use?

Gunicorn in and of itself wouldn’t be able to differentiate this type of need.


I am using diskcache as the load balancer and start the app like this:

import dash
import dash_bootstrap_components as dbc

from flask import Flask
import dash_auth

from dash.long_callback import DiskcacheLongCallbackManager
import diskcache


# Diskcache-backed manager for the long-running callback
cache = diskcache.Cache("./cache")
long_callback_manager = DiskcacheLongCallbackManager(cache)

VALID_USERNAME_PASSWORD_PAIRS = {}  # credentials omitted here

app = dash.Dash(__name__, 
                external_stylesheets=[dbc.themes.BOOTSTRAP], 
                meta_tags=[{"name": "viewport", "content": "width=device-width"}],
                suppress_callback_exceptions=True,
                long_callback_manager=long_callback_manager)
server = app.server  # the WSGI entry point that gunicorn serves (index:server)
auth = dash_auth.BasicAuth(
    app,
    VALID_USERNAME_PASSWORD_PAIRS
)
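
The heavy calculation is wired up to that manager roughly like this (continuing from the snippet above; the layout, ids, and the dummy calculation are simplified stand-ins for my real code):

from dash import Input, Output, html

app.layout = html.Div([html.Button("Run", id="run-btn"), html.Div(id="output")])


@app.long_callback(
    output=Output("output", "children"),
    inputs=Input("run-btn", "n_clicks"),
)
def run_calculation(n_clicks):
    if not n_clicks:
        return "Not run yet"
    # Stand-in for the semi-empirical quantum chemistry step.
    return f"Done: {sum(i * i for i in range(10**6))}"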

If this type of app can’t easily be scaled on a simple free-tier web host, what options are there for running a callback on multiple cores, like in my development environment?

@venturellac,

That’s not really a load balancer; it’s a background process manager.

A load balancer would be something like nginx, which is also a reverse proxy.

But thinking about it, you could potentially start a Python process that isn’t confined to run inside gunicorn, and then just check for updates in a data store of some sort. Maybe something like a SQLite db.
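
Very roughly, something like this for the external process (the file name, table, and the job itself are all just illustrative):

# worker.py -- started outside gunicorn, e.g. `python worker.py` next to the web service
import sqlite3
import time


def run_heavy_job():
    # Stand-in for the expensive calculation.
    return sum(x * x for x in range(10**6))


def main():
    conn = sqlite3.connect("jobs.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS jobs"
        " (id INTEGER PRIMARY KEY, status TEXT, result REAL)"
    )
    conn.commit()
    while True:
        # Look for work queued up by the web app.
        row = conn.execute(
            "SELECT id FROM jobs WHERE status = 'pending' LIMIT 1"
        ).fetchone()
        if row is None:
            time.sleep(1)
            continue
        result = run_heavy_job()
        conn.execute(
            "UPDATE jobs SET status = 'done', result = ? WHERE id = ?",
            (result, row[0]),
        )
        conn.commit()


if __name__ == "__main__":
    main()

The Dash side would then just insert a 'pending' row when the user kicks off a run and poll the table until the status flips to 'done'.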

@venturellac I wonder if the massive delta in runtime you’re seeing is due to the change in CPU architecture. I hear those M1s can be quite snappy indeed for some workloads. Perhaps moving to whatever virtualised CPU cores Render provides had a bigger impact than you were expecting from jumping to the cloud?

Is there another hosted-compute service you could deploy to, so as to use as an independent baseline?


I would like to leverage any available free hosting (hence Render), but it seems I’m getting what I pay for speed-wise. Which cloud hosts might be suitable for running heavier-duty callbacks?

Also, one thing I have in my app is a dcc.Interval that updates a timer every second. Is there a way to stop the dcc.Interval from triggering a page refresh in Chrome, but still update the timer in my app?

Hi @venturellac,
Have you tried PythonAnywhere for hosting your apps? I believe they also have a free tier.

Thank you for the suggestion. Unfortunately, I have over 512 MB of dependencies, which exceeds PythonAnywhere’s free tier.