Hey guys,
I already know this will be too open a question with too little information, but I'll try my luck anyway.
I've created a multipage Dash app that uses basic auth. I've changed basic_auth.py so it uses LDAP authentication. Once I receive data from the LDAP server, I save it as a variable in flask.session:
flask.session["something"] = "something"
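For context, a minimal sketch of what that check looks like (the server URI and DN pattern are placeholders, and I'm showing it through dash-auth's auth_func hook rather than my edited basic_auth.py):

import ldap
from dash import Dash
from dash_auth import BasicAuth
from flask import session

app = Dash(__name__)

def ldap_auth(username, password):
    conn = ldap.initialize("ldap://ldap.example.com")  # placeholder server URI
    try:
        # bind as the user; a successful bind means the credentials are valid
        conn.simple_bind_s(f"uid={username},ou=people,dc=example,dc=com", password)
        session["something"] = "something"  # stash per-user data for later
        return True
    except ldap.INVALID_CREDENTIALS:
        return False
    finally:
        conn.unbind_s()

BasicAuth(app, auth_func=ldap_auth)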
In app.py I define SECRET_KEY and SESSION_TYPE for Flask sessions:
server.config["SECRET_KEY"] = "some key"
server.config["SESSION_TYPE"] = "filesystem"
so I can later access that data in the app:
from flask import request, session
something = session.get("something", "")
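For completeness: as far as I understand, SESSION_TYPE only takes effect through the Flask-Session extension (it's not in my package list in the EDIT below, so treat this as a sketch of the intended setup):

from dash import Dash
from flask_session import Session

app = Dash(__name__)
server = app.server  # the underlying Flask instance
server.config["SECRET_KEY"] = "some key"
server.config["SESSION_TYPE"] = "filesystem"  # keep session data on disk, not in the cookie
Session(server)  # without this call, Flask falls back to client-side signed cookies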
In my understanding flask.session is unique for each user, and no matter how many times the basic auth class fires, it will always rewrite the key in the session dictionary. However, I believe once the number of users reaches 150-200, the app starts to crash.
My app is hosted in a Docker container. For about a week it works like a charm: every MySQL and S3 query is quick and pages load instantly. Then something happens and the app is no longer reachable. Best case, I get the loading… white screen, but mostly not even that. No requests (POST, GET) are fired. I've tried debugging with the Docker container logs, but with no luck. Am I missing something crucial? Does anyone see any major problems with my approach?
I've checked and I'm closing all connections (SQL, S3) properly, so I don't think that is the problem.
All input and comments will be really appreciated.
Thanks in advance.
EDIT:
dash==2.17.1
dash-auth==2.2.1
Flask==3.0.3
Flask-Caching==2.3.0
Flask-Compress==1.15
Flask-SeaSurf==1.1.1
python-ldap==3.4.3
gunicorn==20.1.0
Hey @dashamateur!
Do you use some kind of backend to store your session data? If not, I think the standard Flask session behaviour is to keep everything in memory (or in the client cookie) rather than in an external store. Depending on the size of the whole session and the resources allocated to your Docker container, I could imagine you bloated your memory if you don't use a backend like Redis to store the sessions.
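Something along these lines, for example (assuming Flask-Session, and a Redis service under a placeholder hostname):

import redis
from dash import Dash
from flask_session import Session

app = Dash(__name__)
server = app.server
server.config["SECRET_KEY"] = "some key"
server.config["SESSION_TYPE"] = "redis"
server.config["SESSION_REDIS"] = redis.from_url("redis://redis:6379")  # placeholder URL
Session(server)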
I hope this helps, but it's a bit obsolete if you already have Redis in place, haha.
Hi @Datenschubse and thank you very much for your reply.
I don't use a backend; I use the "filesystem" session type, which to my understanding stores data in a temp directory on the Docker container's filesystem.
From what you mentioned, I would assume two scenarios are possible:
- either the container exhausts its resources,
- or the SESSION_FILE_THRESHOLD, which is 500 items by default, fills up and the app can't write new items (see the sketch just below).
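If it turns out to be the threshold, I guess the fix would be along these lines (both keys are Flask-Session config options; the values are just examples):

server.config["SESSION_FILE_DIR"] = "/tmp/flask_session"  # put the session files somewhere I can watch
server.config["SESSION_FILE_THRESHOLD"] = 2000  # raise the default of 500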
Either way you gave me some new pointers to check out!
Will try to check these things and will update the post. Thanks again!
Alright, nice. Do you deploy multiple containers? If so, you are more or less forced to use some kind of centralised storage to manage your sessions and keep your replicas in sync.
I do deploy multiple containers, but I only use flask.session in one of them. In the other apps I use a static file with valid users.
So in all apps except one, basic_auth.py is only modified to check whether the user is in Active Directory, and those apps work fine. But in this one I also store some data from Active Directory in flask.session, and this is where the problems occur.
That's why I don't use centralised storage like Redis. Or I guess my centralised storage is Active Directory?
Okay, when only one machine needs/has access, then it's fine.
Do you have something that will auto spin up your app if it crashes? If not, you should.
Also, if you update your dash version, you will have access to on_error, which you can use to log errors in callbacks or the app in general.
I use supervisor and have errors and outputs sent to log files. You can also have supervisor spin up a website to access this info from a URL, making it easy to view the logs.
First off, thanks for the reply.
I do have auto reboot (restart) set on the container, but the thing is there is no actual problem with the container. Its status remains running, but the app won't load any pages; there is only a white screen or, on rare occasions, the default Dash loading… message. Also, there is no message in the browser console.
Also worth mentioning: when I manually restart the container, it starts working as expected again. That's why I think it has something to do with the session or maybe some other requests, because it seems like the restart clears whatever is clogged up.
I will look into on_error, thanks for the suggestion. Could you point me towards some docs on your supervisor setup or some links that could get me started on that?
Sure, here are the docs for on_error:
You can configure it to send you an email (once you hook up your email, that is).
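A rough sketch of the global handler, using logging instead of email (I'm going from memory of the 2.17 docs, so double-check the exact signature):

import logging
from dash import Dash

logging.basicConfig(filename="dash_errors.log", level=logging.ERROR)

def log_error(err):
    # called with the exception instance whenever a callback raises
    logging.error("Callback failed: %s", err, exc_info=err)

app = Dash(__name__, on_error=log_error)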
As far as Supervisor goes, I use it on Ubuntu, but you can use it in Docker too (it's essentially how Azure Web Apps work).
If you want to see these things while they are running, you can just go into the logs for the supervisor.
For a Dash app, you'll have to put the steps you'd normally run into the supervisor config.
Here is how I use it:
cat <<EOF > /etc/supervisor/conf.d/main.conf
[program:main]
directory=${BASEDIR}
command=antenv/bin/gunicorn -b 0.0.0.0:7000 -w ${BIG_WORKERS} --timeout 1200 "dashApp:run_app()"
autostart=true
autorestart=true
stdout_logfile=/home/var/logs/main.out.log
stderr_logfile=/home/var/logs/main.err.log
EOF
BASEDIR is the app's base directory; mine is dynamic, so you probably won't need to do this. BIG_WORKERS is how many workers, minus a few, because I don't take up all the cores. gunicorn recommends 2*cores+1 for the number of workers.
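If you'd rather keep that math out of the shell, gunicorn can also read its settings from a Python config file (loaded with -c gunicorn.conf.py); a sketch:

import multiprocessing

# gunicorn's suggested formula for the worker count
workers = multiprocessing.cpu_count() * 2 + 1
bind = "0.0.0.0:7000"
timeout = 1200
accesslog = "-"  # "-" routes the access log to stdout
errorlog = "-"   # and the error log to stderr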
To have things print to the out log, you need to make sure you write to sys.stdout and flush. Otherwise, that stuff will sit in memory in your app until it crashes.
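For example:

import sys

print("health check ok", flush=True)  # flush=True pushes the line to the log immediately
sys.stdout.flush()                    # or flush explicitly after a batch of prints

Setting PYTHONUNBUFFERED=1 in the container environment disables the buffering globally, which achieves the same thing.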