I have a very large dataset (tens of GB) that I wish to show in a scrollable graph. My final goal is to read in new chunks of data as the user scrolls from left to right, so as not to swamp the RAM.
I have some code written with Plotly Graph-Objects that produces a range slider. I can adjust the width of the range slider, and also grab it and move it like a window spanning the desired data range.
What I can’t do is get the range slider’s current min/max values, which I want to use to read in the next chunk of data.
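For illustration, the “read a chunk on demand” idea can be sketched with the standard library alone, assuming a line-oriented data file where each line is one sample (the file path, row offsets, and chunk size here are made up):

```python
from itertools import islice

def read_chunk(path, start_row, n_rows):
    """Read rows [start_row, start_row + n_rows) from a line-oriented
    data file without loading the whole file into memory."""
    with open(path) as f:
        # islice advances the file iterator lazily, so only the
        # requested window of lines is ever materialized.
        return list(islice(f, start_row, start_row + n_rows))
```

For tens of GB, an indexed format (HDF5, Parquet, or a memory-mapped array) would avoid the linear scan that `islice` implies, but the pattern is the same: translate the visible window into a row range and read only that range.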
So I started looking at the range slider in Dash. I know I can get the value property from this slider. However, all the examples show only moving the two handles independently, so I would lose the vital ability to drag the range window (both handles) together.
This is the example I worked from with my current rangeslider.
Can something like this be achieved in Dash?
It is definitely possible to achieve this behaviour in Dash. The main question is how efficient it will be. How fast are you able to query the data?
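One route that keeps the drag-together behaviour is to stay with the Graph-Objects rangeslider and listen to the figure’s relayout events in Dash: dragging the slider window updates `dcc.Graph`’s `relayoutData` property, which carries the new x-axis range. A minimal sketch of the parsing step, assuming the key names Plotly emits (the callback wiring is shown only as comments and is untested):

```python
def extract_x_range(relayout_data):
    """Pull the visible x-range out of a dcc.Graph relayoutData dict.

    Plotly reports the range either as a single "xaxis.range" list or
    as separate "xaxis.range[0]" / "xaxis.range[1]" keys, depending on
    how the view was changed. Returns (xmin, xmax) or None.
    """
    if not relayout_data:
        return None
    if "xaxis.range" in relayout_data:
        lo, hi = relayout_data["xaxis.range"]
        return lo, hi
    if "xaxis.range[0]" in relayout_data:
        return relayout_data["xaxis.range[0]"], relayout_data["xaxis.range[1]"]
    return None

# Hypothetical Dash wiring (assumes a dcc.Graph with id "graph"):
#
# @app.callback(Output("graph", "figure"), Input("graph", "relayoutData"))
# def on_relayout(relayout_data):
#     rng = extract_x_range(relayout_data)
#     if rng is None:
#         raise dash.exceptions.PreventUpdate
#     xmin, xmax = rng
#     # ...query the chunk covering [xmin, xmax] and rebuild the figure...
```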
Apart from the fact that you don’t want to bring the data into memory, your needs seem very closely aligned with the (very cool!) plotly-resampler project. You might also want to take a look at Holoviews.
The plotly-resampler project does look very interesting, and I may use it to help display the large number of data points. I still won’t be able to read all the full-resolution data into memory, but maybe I can decimate it in a similar way to the resampler. Thanks for sharing.
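The decimation idea can be sketched with a simple min/max downsampler, a crude stand-in for plotly-resampler’s aggregators (which use smarter algorithms such as LTTB): split the series into bins and keep each bin’s extremes, so spikes survive downsampling where plain striding would drop them.

```python
def minmax_decimate(y, n_bins):
    """Downsample y to roughly 2 * n_bins points by keeping the min and
    max of each bin, so narrow spikes remain visible in the plot."""
    bin_size = max(1, len(y) // n_bins)
    out = []
    for i in range(0, len(y), bin_size):
        bin_ = y[i:i + bin_size]
        out.append(min(bin_))
        out.append(max(bin_))
    return out
```

A real implementation would also carry the x-coordinates of the kept points along with the y-values; this sketch only shows the reduction step.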
Regarding the range slider, could you elaborate on how I would obtain the moving window behaviour in Dash?
EDIT: plotly-resampler is fantastic! Testing on my 420 MB data file, it dramatically speeds up the time taken to display the data. It seems that a large portion of the time was spent on display rendering rather than on reading the data.