I am new to Dash and I don't have a CS education, so I hope my question isn't too stupid. First of all, I want to thank the Dash developers for providing us such a great tool. I searched this forum for similar questions but couldn't find an answer to mine.
For my project I am trying to build a real-time dashboard. My app reads data from a file every 5 seconds, performs data analysis, and presents the results in multiple plots: scatter, parallel-coordinates, heatmap, density plots, etc. Some of the plots have drop-down menus and sliders, so my callbacks have multiple inputs: each plot updates every 5 seconds and also whenever I use a slider or drop-down menu.

Everything works great; however, as I add more and more plots, everything slows down significantly. I timed one of the functions that returns a plot: it takes about 0.4 seconds when I have only two plots, but rises to about 3 seconds after I add 12 more plots (each with its own function and callback). As a result, when I move a slider or change a drop-down menu, I have to wait 3-6 seconds to see any update on the plot. I also noticed that CPU usage is only about 2-9% while the app runs. My machine has 12 cores, and I was wondering whether I could parallelize the app so it stays fast when I have lots of plots. Is it possible to assign one CPU core to each callback?
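To make the idea concrete, here is a rough sketch of the kind of per-plot parallelism I have in mind (all names here are made up, not from my real app). It builds a dozen dummy "figures" one after another and then with a thread pool; I assume that for truly CPU-bound figure code a ProcessPoolExecutor would be the process-based equivalent:

```python
# Hypothetical sketch -- make_figure and PLOT_IDS are placeholders,
# not my real callback code.
from concurrent.futures import ThreadPoolExecutor
import time

PLOT_IDS = [f"plot-{i}" for i in range(12)]

def make_figure(plot_id):
    # stand-in for the real data analysis + figure construction
    time.sleep(0.05)  # simulate ~50 ms of work per plot
    return {"id": plot_id, "data": []}

# sequential: total time is roughly the sum of the per-plot times
t0 = time.perf_counter()
sequential = [make_figure(p) for p in PLOT_IDS]
t_seq = time.perf_counter() - t0

# parallel: figures are built concurrently in worker threads
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=12) as pool:
    parallel = list(pool.map(make_figure, PLOT_IDS))
t_par = time.perf_counter() - t0

print(f"sequential: {t_seq:.2f}s, threaded: {t_par:.2f}s")
```

Is something along these lines possible with Dash callbacks, or does the framework handle them strictly one at a time?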
I don't think I can share my code at this time, but here is some information that may be useful:
The app runs locally on a 12-core machine.
I run the app from either Spyder or Jupyter.
The data file is about 100 KB and is updated every 5 seconds; I read it with pandas.
Plotly graphs used: go.Scatter and go.Scattergl, px.parallel_coordinates, go.Heatmap, px.histogram and go.Histogram, go.Bar, go.Indicator
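Since I can't post the real code, here is the kind of timing check I did on the data-loading step, with a made-up in-memory CSV of roughly the same size as my real file (the real columns differ). The read itself seems to take only a few milliseconds, so I suspect file I/O is not the bottleneck:

```python
# Hypothetical timing check: build ~100 KB of CSV text in memory,
# similar in size to my real data file, and time the pandas read.
import io
import time
import pandas as pd

rows = 6000  # ~100 KB of CSV text
csv_text = "a,b,c\n" + "\n".join(f"{i},{i * 2},{i * 3}" for i in range(rows))

t0 = time.perf_counter()
df = pd.read_csv(io.StringIO(csv_text))
elapsed = time.perf_counter() - t0

print(f"read {len(df)} rows in {elapsed * 1000:.1f} ms")
```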
I would appreciate any suggestions on how to improve the speed of my app.