What would be the best practice regarding application of callbacks in Dash?
Is it preferable to use a long list of Inputs and Outputs within a small number of callbacks, or vice versa?
To clarify: my question is not about code readability but about the efficiency of code execution and the inner workings of Dash (threads, workloads, memory usage, etc.).
I think it is best to have fewer callbacks with more inputs and outputs, but at the same time enough callbacks to take advantage of the fact that in Dash each callback runs independently of the others.
But I’m not an expert.
It depends on the computations and how much data is shared between the outputs:
- If each output has a completely independent set of data fetching or processing commands, then use separate callbacks so that they can run in parallel when deployed (e.g. `gunicorn app:server --workers 4`)
- If the outputs share data fetching or processing, then combine them into a single callback with multiple outputs so that the data fetching is done only once (to avoid overwhelming e.g. the database) and to prevent unnecessary computations.
- If there is a combination of the two - the same data fetching for the original dataset and then independent computations based on that dataset - then you can use the strategies described in Part 5. Sharing Data Between Callbacks | Dash for Python Documentation | Plotly: save the data to a cache with a single callback, then fire independent callbacks once that cache is filled to update the individual outputs.
Thanks. That was my line of thinking as well. I use the third option, where I store data in the browser. Do I understand correctly that in that case it makes no difference to the app's performance whether I use a few long callbacks or plenty of short ones? If so, readability of the code is the main concern.