How to diagnose a memory leak in a Dash application?

When I deploy my Dash app to Heroku I keep getting either an R15 or R14 error, i.e. memory usage is too high.

I've tried to diagnose the problem locally using the Dash diagnostics: one page's callbacks take 3.2s to run, and at most a different page transfers 2.7MB of data. That hardly seems enough to push memory usage over the 512MB limit Heroku offers.
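For context, by "the Dash diagnostics" I mean the dev tools callback graph you get when running locally with debug turned on, roughly like this:

```python
import dash

app = dash.Dash(__name__)

# ... layout and callbacks go here ...

if __name__ == "__main__":
    # debug=True turns on the Dash dev tools, including the callback graph
    # that reports how long each callback takes to run
    app.run_server(debug=True)
```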

I've tried using scattergl plots instead of scatter, as per the Dash documentation, and I've tried to trigger garbage collection manually but couldn't get it to work.
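For anyone interested, this is roughly what those two attempts look like (the dataframe here is just a stand-in for my real data):

```python
import gc

import pandas as pd
import plotly.graph_objects as go


def make_figure(df: pd.DataFrame) -> go.Figure:
    # Scattergl renders with WebGL instead of SVG, which is what the Dash
    # documentation suggests for scatter plots with lots of points
    return go.Figure(go.Scattergl(x=df["x"], y=df["y"], mode="markers"))


if __name__ == "__main__":
    df = pd.DataFrame({"x": range(100_000), "y": range(100_000)})
    fig = make_figure(df)
    del df        # drop the local reference once the figure is built
    gc.collect()  # the manual garbage collection I tried in my callbacks
```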

I've uploaded multi-page Dash apps to Heroku before without a problem, and my current one isn't any more complicated than they were.

I'm aware one option would be to use a memory profiler, but from my understanding I would need to test every function in my application, which would take a very long time.
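To illustrate, the kind of thing I had in mind was wrapping each callback with something like Python's built-in tracemalloc, which already feels like a lot of plumbing (this is just a sketch, not code from my app):

```python
import functools
import tracemalloc

tracemalloc.start()


def log_memory(func):
    """Print the biggest allocation differences each time func is called."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        before = tracemalloc.take_snapshot()
        result = func(*args, **kwargs)
        after = tracemalloc.take_snapshot()
        # Show the five lines of code responsible for the largest growth
        for stat in after.compare_to(before, "lineno")[:5]:
            print(stat)
        return result
    return wrapper


@log_memory
def expensive_callback(n):
    # Stand-in for a real Dash callback body
    return list(range(n))


if __name__ == "__main__":
    expensive_callback(1_000_000)
```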

Does anyone have any advice on how to diagnose memory leaks in Dash applications, please? I'd like to locate where the problem is coming from.

I'm struggling with a similar issue myself, but I came across this in case you have to take the hard way: http://www.philblog.com/OptimizingHerokuApps.pdf

Also, here’s my requirements.txt – maybe there’s something on the list that’s blowing up and it’s on yours too:

appdirs==1.4.4
appnope==0.1.2
argon2-cffi==20.1.0
async-generator==1.10
attrs==21.2.0
backcall==0.2.0
bleach==3.3.0
branca==0.4.2
Brotli==1.0.9
certifi==2021.5.30
cffi==1.14.6
charset-normalizer==2.0.3
click==8.0.1
click-plugins==1.1.1
cligj==0.7.2
cycler==0.10.0
dash==1.21.0
dash-bootstrap-components==0.13.0
dash-core-components==1.17.1
dash-design-kit==0.0.1
dash-html-components==1.1.4
dash-table==4.12.0
debugpy==1.3.0
decorator==5.0.9
defusedxml==0.7.1
distlib==0.3.2
entrypoints==0.3
filelock==3.0.12
Fiona==1.8.20
Flask==2.0.1
Flask-Caching==1.10.1
Flask-Compress==1.10.1
folium==0.12.1
future==0.18.2
geocoder==1.38.1
geographiclib==1.52
geopandas==0.9.0
geopy==2.2.0
greenlet==1.1.0
idna==3.2
ipykernel==6.0.1
ipython==7.25.0
ipython-genutils==0.2.0
ipywidgets==7.6.3
itsdangerous==2.0.1
jedi==0.18.0
Jinja2==3.0.1
joblib==1.0.1
jsonschema==3.2.0
jupyter==1.0.0
jupyter-client==6.1.12
jupyter-console==6.4.0
jupyter-core==4.7.1
jupyterlab-pygments==0.1.2
jupyterlab-widgets==1.0.0
kiwisolver==1.3.1
MarkupSafe==2.0.1
matplotlib==3.4.3
matplotlib-inline==0.1.2
mistune==0.8.4
munch==2.5.0
nbclient==0.5.3
nbconvert==6.1.0
nbformat==5.1.3
nest-asyncio==1.5.1
nltk==3.6.2
notebook==6.4.0
numpy==1.21.0
packaging==21.0
pandas==1.3.0
pandasql==0.7.3
pandocfilters==1.4.3
parso==0.8.2
pexpect==4.8.0
pickleshare==0.7.5
Pillow==8.3.1
pipenv==2021.5.29
plotly==5.1.0
plotly-geo==1.0.0
prometheus-client==0.11.0
prompt-toolkit==3.0.19
ptyprocess==0.7.0
pycparser==2.20
Pygments==2.9.0
pyparsing==2.4.7
pyproj==3.1.0
pyrsistent==0.18.0
python-dateutil==2.8.1
pytz==2021.1
pyzmq==22.1.0
qtconsole==5.1.1
QtPy==1.9.0
ratelim==0.1.6
regex==2021.4.4
requests==2.26.0
scikit-learn==0.24.2
scipy==1.7.1
Send2Trash==1.7.1
Shapely==1.7.1
six==1.16.0
sklearn==0.0
SQLAlchemy==1.4.21
tenacity==8.0.1
terminado==0.10.1
testpath==0.5.0
threadpoolctl==2.2.0
tornado==6.1
tqdm==4.61.1
traitlets==5.0.5
urllib3==1.26.6
virtualenv==20.4.7
virtualenv-clone==0.5.4
wcwidth==0.2.5
webencodings==0.5.1
Werkzeug==2.0.1
widgetsnbextension==3.5.1

Wow, that sure is a long list of dependencies for the 4 packages I had to install.

I’ve decided to ditch Heroku and all things related to Salesforce.com, so I’ll be scouting alternatives going forward.

Thanks for the tips. I've not been able to locate a memory leak (still not sure exactly how to do it), but I think I'm just pulling too much data into the Heroku app for the tier I'm using.
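If it helps anyone else, the sort of change I'm looking at now is simply loading less into memory in the first place, e.g. only the columns a page actually plots and smaller dtypes (the file and column names here are made up):

```python
import pandas as pd

# Read only the columns the page actually uses and downcast to smaller dtypes,
# so the app holds far less in memory on the 512MB dyno
df = pd.read_csv(
    "data.csv",                    # placeholder path
    usecols=["date", "value"],     # example column names
    dtype={"value": "float32"},
    parse_dates=["date"],
)
```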

I've been advised to try putting the app in a Docker container and hosting it with NGINX on EC2, so that'll be the next port of call.