Excuse me for reanimating this zombie from 11 months ago.
I feel a need to expand the conversation to meet new challenges.
Currently, I have two separate EC2 t2.micro instances, each running four Docker containers: Nginx plus three single-app Gunicorn/Flask/Dash containers.
I built each as a multi-app container set, and it uses almost all of the 8 GB available on the free-tier t2.micro. I have built two of these, with three apps each.
They both serve https/443 (using Let's Encrypt) and http/80, and are working fine.
The graphs are interactive and can be embedded in a WordPress blog easily.
Now I can see that it would make sense to run more than three apps per server, and I am fairly sure that just adding the small .py Dash-powered apps would not exhaust the remaining memory.
Adding more containers, each carrying the whole stack (Dash, Plotly, Python, pandas, Gunicorn, etc.), would eat memory.
So, to scale up in Dash apps, it makes much more sense to expand the number of apps rather than the number of containers or servers.
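One way to share that stack (a minimal sketch, not my exact setup) is to mount several Flask servers behind a single WSGI entry point, so one Gunicorn worker serves all of them instead of each app needing its own container. The app names and URL prefixes below are invented for illustration; a real Dash app would be mounted the same way via its `.server` attribute:

```python
# Sketch: several apps, one Gunicorn process. Names/paths are hypothetical.
from flask import Flask
from werkzeug.middleware.dispatcher import DispatcherMiddleware


def make_app(name):
    """Stand-in for a real Dash app; a Dash app exposes its underlying
    Flask server as `dash_app.server`, which can be mounted identically."""
    app = Flask(name)

    @app.route("/")
    def index():
        return f"hello from {name}"

    return app


# One WSGI entry point, many apps, each under its own URL prefix.
application = DispatcherMiddleware(
    make_app("root"),
    {
        "/charts": make_app("charts"),
        "/tables": make_app("tables"),
    },
)
# Run with: gunicorn module:application
```

Because everything lives in one Python process, the interpreter, pandas, Plotly, etc. are loaded once rather than once per container.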
The multi-page app approach could be the answer if each page were separately addressable via a unique port number as part of a sub-domain. I use sub-domains with an IGW (Internet Gateway) and Route 53 on AWS, which lets a single domain, tsworker.com, route publicly to all the sub-domain instances. A simple 'A' record for each sub-domain allows something like charts.tsworker.com to reach the other instances.
If it were possible to link to individual pages in the multi-page app approach, that would optimize instance count and storage, which is where the charges eventually get you. I could merrily create hundreds of dashboard apps across tens of instances (or maybe fewer) and treat all of them as uniquely addressable endpoints.
What do you think?
See some of the dash apps live on the blog below.