I have a few Dash apps I run locally on a PC, and a few coworkers view them on our internal network. When too many users connect, it starts to lag. Is there a way to spin up a new process to handle multiple users, other than just running multiple instances on different ports? And related to that, is that what it means when people mention deploying an app? I've never really had a clear understanding of what all that entails.
What's happening is that your hardware setup is buckling under the demand and has not been configured for scale.
There are a few ways you could tackle it:
1.) Review your code and ask whether you are using any memory- or computationally-expensive functions unnecessarily. Are you loading multiple or large copies of the same data? Could that be resolved with better coding practices? Are you making your computer/server do all the heavy lifting when you could push some of that work to your coworkers' computers? If you're working with larger sets of data, is your database set up optimally, or are you loading all of your data from disjointed Excel/CSV files?
2.) Upgrade your hardware to take on the demand, or:
3.) Deploy a decentralised version in the Cloud, where it can scale up and down based on resource demand. This will cost some money, but likely less than option 2.
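To illustrate point 1, a minimal sketch of loading expensive data once instead of on every callback — `load_big_table` is a hypothetical loader standing in for your CSV parse or database query:

```python
# Minimal sketch: cache the expensive load so every callback reuses it,
# rather than re-reading the file/database on each request.
# `load_big_table` is a hypothetical stand-in for pd.read_csv / a SQL query.
from functools import lru_cache

@lru_cache(maxsize=1)
def load_big_table():
    # Stand-in for an expensive read (CSV parse, SQL query, ...)
    return [{"id": i, "value": i * 2} for i in range(100_000)]

# First call does the work; subsequent callback invocations reuse the result.
df1 = load_big_table()
df2 = load_big_table()
assert df1 is df2  # same object, loaded only once
```

The same idea applies to loading at module import time, or using a shared cache like `flask_caching` for data that changes periodically.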
Welcome to the community!
@MrMadium is right if you are running these local servers via multiple workers.
If you aren't using multiple workers, requests pile up faster than they can be served. Running multiple workers lets requests be handled by worker processes that are independent of each other.
Depending on your OS, you can use gunicorn, uvicorn, or waitress.
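For example, on Windows (where gunicorn isn't available), a waitress invocation might look like this — assuming your Dash file is app.py and it exposes `server = app.server`:

```shell
# pip install waitress
# app:server assumes app.py contains `server = app.server`
waitress-serve --listen=*:8050 --threads=8 app:server
```

This is a sketch; adjust the port and thread count to your machine.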
Also, I'd like to point out that if you have an older computer, you should look into Proxmox, a virtualization platform you can install on it. It lets you split resources into containers and then run nginx and gunicorn inside those containers.
Thanks for your response. Do you have a good resource on how to get Dash to work with gunicorn?
What operating system are you running the server on?
Alright, just glancing at a walkthrough for running Flask and gunicorn with nginx on CentOS, it looks similar in format.
You should be able to follow this guide; obviously you won't need to create the new Flask application.
Here is a dated guide, but I think it's pretty close:
The one difference between regular Flask and Dash is that you need to pass Dash's underlying Flask server to gunicorn as the application it runs.
To do this, you need to expose the app's server like this:
server = app.server
Then, when you run gunicorn, assuming your Dash app lives in app.py:
gunicorn -w 3 app:server
If you can't get this to work, you might be able to adapt the CentOS-specific steps from this:
This one is also Linux, but still similar. Just keep in mind the bit about gunicorn needing to be pointed at the Flask server, not the Dash app object.
As a byproduct of putting nginx in front, you will no longer need the port number in the URL, and you can add multiple backend servers. You might even be able to split heavier processes out to different servers, giving user interactions higher priority than background processing.
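A minimal nginx reverse-proxy sketch to illustrate — the file path, upstream name, and port 8000 are assumptions; point them at wherever your gunicorn workers actually listen:

```nginx
# /etc/nginx/conf.d/dash.conf -- minimal sketch, names and ports are assumptions
upstream dash_backend {
    server 127.0.0.1:8000;    # gunicorn worker pool
    # server 127.0.0.1:8001;  # add more backends here to spread load
}

server {
    listen 80;                # users browse to http://your-host/ with no port
    server_name _;

    location / {
        proxy_pass http://dash_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this in place, gunicorn only needs to bind to localhost (e.g. `gunicorn -w 3 -b 127.0.0.1:8000 app:server`) and nginx handles the public-facing traffic.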
Perfect, thank you. I'll give this a shot.