Sharing big data between callbacks


I have a use case where I read about 180 million records from Azure Blob Storage using Databricks Connect and bind the first 10 records to a Dash table as a preview. There is another feature that runs aggregate and pivot functions on the same dataset, driven by a few user inputs, so I need to share the 180-million-record dataset from one callback to another, since re-reading it from blob storage every time is too slow. Could you please suggest the most efficient way of sharing big data between callbacks?
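The pattern I have in mind is roughly server-side memoization: load the dataset once, then let every callback reuse the cached object. A minimal sketch without Dash (function names and the toy data are placeholders for the real blob read):

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def load_dataset():
    # Placeholder for the expensive Databricks Connect / blob read;
    # a small list stands in for the real 180M-record dataset.
    return list(range(10))

def preview_callback():
    # First call pays the load cost; later callbacks hit the cache.
    return load_dataset()[:5]

def aggregate_callback():
    # Reuses the same cached object, no re-read from blob storage.
    return sum(load_dataset())
```

Both callbacks call `load_dataset()`, but the blob is only read once per process; whether this scales to multiple workers is part of my question.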