My solution for a multi-user Dash app whose data updates on page refresh

Hi! First I want to say a few words about the project.

I wrote a single-page Dash app that gets data from a database with an SQL query and then builds 2 dropdowns, 1 slider, 2 radio buttons and 3 graphs, all linked to each other. The only problem I ran into: making the data in the graphs update when the page is refreshed in the browser.

I read the user guide and made app.layout a function, but when you have multiple linked graphs you need the data stored somewhere every @callback can reach. Using global variables isn't recommended for multi-user apps (I checked this; really, don't do it, it's a bad idea). On this forum I found the advice to store the result of the SQL query as JSON in a hidden html.Div in the layout, so every @callback can read it and it stays immutable. I checked this and it works, but it is very slow. Then I added @lru_cache to the function that converts the jsoned data from the hidden div into a pd.DataFrame, but if your jsoned data is really big, @lru_cache doesn't help. I think this is because comparing the huge string keys to find the cached value takes a long time, but I'm not sure. So I decided to do the following: delete the hidden div and use something much shorter than a JSON string as the input for @lru_cache. Here is what I have:
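For context, the hidden-div approach serializes the dataframe to JSON for the layout and parses it back in every callback; a minimal sketch (the function names are mine, not from the original app):

```python
import io
import pandas as pd

def to_hidden_div(df):
    # This string goes into the hidden html.Div's children
    return df.to_json(orient='split')

def from_hidden_div(json_str):
    # Every callback re-parses the JSON into a dataframe;
    # this is the slow step when the data is large
    return pd.read_json(io.StringIO(json_str), orient='split')
```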
before layout:

import pymssql
import pandas as pd

def get_df(date):
    # the `date` argument is only used as the cache key
    conn = pymssql.connect(server="", user="user", password="password", port=1433, charset='utf8')
    stmt = "SELECT * FROM svodka..analitic_app_main_data_frame"
    df = pd.read_sql(stmt, conn).fillna(0)  # replace NA with 0
    conn.close()
    return df

in layout:

def layout():
   . . .

in the @app.callback decorator:

@app.callback(
    dash.dependencies.Output('vp_motivation', 'figure'),
    [dash.dependencies.Input('stored_data', 'children'),
     dash.dependencies.Input('drop_year', 'value'),
     dash.dependencies.Input('drop_otdel', 'value'),
     dash.dependencies.Input('quarter_slider', 'value'),
     dash.dependencies.Input('quarter_type', 'value')])
def update_main_graph(load_time, year, drop_otdel, quarter, qtype):

So when a user refreshes the page in the browser, we store the load time in the hidden div instead of the jsoned data. Now @lru_cache finds the cached query result much faster, because the key is a short string that is unique per page load. The result: every user gets fast linked graphs on his page, just like in single-user mode.
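Sketched end to end, it might look like this (the dataframe is a stand-in for the real SQL query, and the hypothetical CALLS list only exists to show how rarely the query actually runs):

```python
import functools
import pandas as pd

CALLS = []  # demo only: records each time the "query" really executes

@functools.lru_cache(maxsize=32)
def get_df(load_time):
    # Keyed by the short load_time string instead of a huge JSON blob,
    # so cache lookups are cheap. A real app would run the SQL query here.
    CALLS.append(load_time)
    return pd.DataFrame({'year': [2016, 2017], 'value': [1.0, 2.0]})

def update_main_graph(load_time, year, drop_otdel, quarter, qtype):
    # Every linked callback passes the same load_time, so only the first
    # call per page load actually hits the database
    df = get_df(load_time)
    filtered = df[df['year'] == year]
    # ... build and return the figure from `filtered` ...
    return filtered
```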

Going further with load_time = str(...), we can get one more benefit. For example:

def layout():
   . . .
The output of load_time = load_time[:15] is something like '2017-09-16 15:5': the timestamp cut down to 'YYYY-MM-DD HH:M', which only changes when the tens digit of the minutes changes, i.e. once every 10 minutes. If we do that, the cached query result is shared by all users and refreshed only every 10 minutes. Once per 10 minutes, one brave user's page refresh waits the full time get_df needs; all other users get their pages immediately for the next 10 minutes.
So by slicing load_time[:15] we can control how often we need updated data from our database.
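The 10-minute bucketing is easy to check; cache_key is a hypothetical helper name, but the slicing is exactly the [:15] from above:

```python
from datetime import datetime

def cache_key(now):
    # Truncate 'YYYY-MM-DD HH:MM:SS' to 'YYYY-MM-DD HH:M', keeping only
    # the tens digit of the minutes, so the key changes every 10 minutes
    return str(now)[:15]
```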
Phew… that's what I wanted to share with the community. I am not a professional programmer, but I hope my solution can be useful for someone.

P.S. Sorry for the bad English.

Regarding caching, I’ve had a lot of success with DiskCache. I cache expensive calls (KDE on 20,000+ plots) to disk and cache for a week, with auto-expiry. It’s really impressive.

Hi @roman! You can try creating the dataframe inside the callback function. It won't be a global variable (so different users won't affect each other) and your data will be updated on page refresh. Works for me, at least.
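A minimal sketch of that suggestion (the inline dataframe stands in for whatever query or file read produces your data):

```python
import pandas as pd

def update_graph(selected_year):
    # The dataframe is built inside the callback, so it is a local
    # variable: concurrent users never share state, and the data is
    # re-read every time the callback fires, including after a refresh
    df = pd.DataFrame({'year': [2016, 2017], 'value': [10, 20]})
    return df[df['year'] == selected_year]
```

The trade-off is that the query runs on every callback invocation, which is exactly what the caching tricks above try to avoid for expensive queries.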