I have an application hosted on a web server with gunicorn. One of the callbacks is far more demanding than the others (it’s a semi-empirical quantum chemistry calculation). On my M1 Mac it runs quite fast in my development environment (1 second), but when hosted it slows down severely.
I’ve been changing the workers and threads options on gunicorn with no luck. I’m using the free tier of onrender.com, installing the packages with a requirements.txt and then starting the server with:
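(For illustration only, since the exact start command isn’t shown above: gunicorn’s worker/thread settings can also live in a Python config file, started with something like `gunicorn -c gunicorn.conf.py app:server`, where `app:server` is a placeholder module path, not the real one.)

```python
# gunicorn.conf.py — illustrative values only, not the settings from the original post
workers = 2               # separate worker processes
threads = 4               # threads per worker
worker_class = "gthread"  # threaded workers (gunicorn switches to this automatically when threads > 1)
timeout = 120             # give the slow quantum-chemistry callback time to finish
```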
I’m new to web hosting, so maybe this request isn’t reasonable (essentially multithreading/multiprocessing each callback). Any help would be greatly appreciated!!
If this type of app can’t easily be scaled on simple/free-tier web hosting, what options are there for running a callback on multiple cores, like in my development environment?
That’s not really a load balancer; that’s a background process manager.
A load balancer would be something like nginx, which is also a reverse proxy.
But thinking about it, you could potentially start a Python process that isn’t confined to run inside gunicorn, and then just check for updates in a data store of some sort. Maybe something like a SQLite db — a sketch of what I mean is below.
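A minimal sketch of that pattern, with entirely hypothetical names (a `jobs` table in a `jobs.db` file, a placeholder `heavy_calculation`), just to make the idea concrete:

```python
# worker.py — runs outside gunicorn, started as its own process on the host.
# Polls a SQLite table for pending jobs, runs the heavy calculation,
# and writes the result back so the web app can pick it up later.
import sqlite3
import time

DB_PATH = "jobs.db"  # hypothetical path


def heavy_calculation(payload: str) -> str:
    # Placeholder for the semi-empirical quantum chemistry call
    return f"result for {payload}"


def main():
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS jobs ("
        "id INTEGER PRIMARY KEY, payload TEXT, status TEXT, result TEXT)"
    )
    conn.commit()
    while True:
        row = conn.execute(
            "SELECT id, payload FROM jobs WHERE status = 'pending' LIMIT 1"
        ).fetchone()
        if row:
            job_id, payload = row
            result = heavy_calculation(payload)
            conn.execute(
                "UPDATE jobs SET status = 'done', result = ? WHERE id = ?",
                (result, job_id),
            )
            conn.commit()
        else:
            time.sleep(1)  # nothing queued; poll again shortly


if __name__ == "__main__":
    main()
```

The web-side callback would then only INSERT a 'pending' row and return immediately, and a separate polling callback (e.g. driven by dcc.Interval) could read the 'done' result once the worker has written it.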
@venturellac I wonder if the massive delta in runtime you’re seeing is due to the change in CPU architecture. I hear those M1s can be quite snappy for some workloads. Perhaps jumping to whatever virtualised CPU cores Render offers had a bigger impact than you were expecting from moving to the cloud?
Is there another hosted-compute service you could deploy to, to use as an independent baseline?
I would like to leverage any available free hosting (hence Render), but it seems I’m getting what I pay for speed-wise. Which cloud hosting might be suitable for running more heavy-duty callbacks?
Also, one thing I have in my app is a dcc.Interval that updates a timer every second. Is there a way to stop the dcc.Interval from triggering a page refresh in Chrome, but still update the timer in my app?
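For reference, the timer is wired up roughly like the standard dcc.Interval pattern below (component ids here are illustrative, not my real ones):

```python
# Minimal timer sketch — ids and layout are illustrative
from dash import Dash, dcc, html
from dash.dependencies import Input, Output

app = Dash(__name__)
app.layout = html.Div([
    dcc.Interval(id="tick", interval=1000, n_intervals=0),  # fires every second
    html.Div(id="timer-display"),
])


@app.callback(Output("timer-display", "children"), Input("tick", "n_intervals"))
def update_timer(n_intervals):
    # Each tick should only update this one div's children, not reload the page
    return f"{n_intervals} s elapsed"


if __name__ == "__main__":
    app.run(debug=True)
```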