Are there any best practices to apply in this situation? My code takes ~20 minutes to run with 150k+ records to plot. It manages about 126 plots per second, and that rate seems to be the same across machines with different processor speeds.
I’m curious what the data types look like. What does this return?
The data-viz best-practice side of me, however, wants to find you a better human-computer interaction design solution than scanning 150k+ plots! If you’ll indulge me with a little more about the use case and what led you to doing it this way: is it a comparative situation, or are you looking for something else? Other solutions might also be less compute- and memory-intensive.
Other ideas would be tuning and comparing your timing logs for performance:

- Can you make use of subplots, and would that be faster or slower than distinct figures? The low-level code there might yield some ideas.
- Is being explicit with attributes slower or faster than letting Plotly render the defaults (on all those figures…)?
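For questions like these, a quick micro-benchmark is usually more reliable than intuition. Here's a minimal sketch of how you might time two figure-building strategies against each other; the `ValidatedFigure` class is a hypothetical stand-in for a constructor that validates every attribute (as `go.Figure()` does), not Plotly itself, so the sketch runs without Plotly installed:

```python
import timeit

class ValidatedFigure:
    """Hypothetical stand-in for a validating figure constructor."""
    ALLOWED = {"data", "layout"}

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            if key not in self.ALLOWED:  # per-attribute validation cost
                raise ValueError(f"unknown attribute {key!r}")
            setattr(self, key, value)

def make_validated():
    return ValidatedFigure(data=[{"type": "scatter", "y": [1, 2, 3]}], layout={})

def make_dict():
    # A plain dict with the same shape skips validation entirely.
    return {"data": [{"type": "scatter", "y": [1, 2, 3]}], "layout": {}}

validated_s = timeit.timeit(make_validated, number=10_000)
dict_s = timeit.timeit(make_dict, number=10_000)
print(f"validated: {validated_s:.4f}s  plain dict: {dict_s:.4f}s")
```

Swapping the stand-in for your real construction code (and running on a realistic slice of the 150k records) would tell you which variant actually dominates your runtime.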
Thanks for the response. The solution was in your first paragraph. I was needlessly calling go.Figure() on each plot when I just needed the dict created! Removing the call took out all that extra overhead — exactly what I needed.
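For anyone landing here later, a minimal sketch of the fix being described: build each figure as a plain dict in Plotly's figure schema rather than wrapping it in go.Figure(), which validates every attribute at construction time. The field names and sample records here are illustrative, not the original code:

```python
def make_figure_dict(x, y, title):
    # Same {"data": [...], "layout": {...}} shape go.Figure() would hold,
    # but without the per-attribute validation overhead.
    return {
        "data": [{"type": "scatter", "mode": "lines", "x": x, "y": y}],
        "layout": {"title": {"text": title}},
    }

# Hypothetical stand-in for the 150k+ records being plotted.
records = [
    {"x": [0, 1, 2], "y": [3, 1, 4], "name": f"series {i}"}
    for i in range(3)
]

figures = [
    make_figure_dict(r["x"], r["y"], r["name"])
    for r in records
]
```

Plotly's rendering functions generally accept dicts in this schema, so the validated go.Figure() wrapper can be deferred (or skipped) until a figure is actually displayed.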
So glad my questions helped you track down a faster way to make this many plots! I’m always surprised by how many different ways there can be to get the same result, each with its own tradeoffs in different situations. It can be so helpful just to hear how someone else would approach the same problem. Happy plotting!