I currently have an AG Grid with about 30,000 rows of data. The data is broken out into three years (about 10,000 rows per year). Rather than loading everything into the AG Grid at once, I have a dropdown that lets users select the year they want to view. I am storing the pandas.DataFrame in a dcc.Store. Users can select a year from the dropdown to update the rowData of the AG Grid, but shipping the df.to_dict('records') output for those 10,000 rows from the server to the client can be slow.
@app.callback(Output('datatable', 'rowData'),
              Input('year-dropdown', 'value'),
              Input('data-all', 'data'),
              prevent_initial_call=True)
def year_dropdown(year, records):
    # data from a dcc.Store arrives as JSON (a list of dicts), not a DataFrame
    df = pd.DataFrame(records)
    df = df[df['Year'].eq(year)]
    return df.to_dict('records')
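For context, one alternative I have been sketching (hypothetical names, and assuming the data can live in server memory rather than in the dcc.Store) is to pre-split the DataFrame by year at startup, so the callback only ever serializes the 10,000 rows that are actually needed and no store payload has to travel from the browser to the server:

```python
import pandas as pd

# Hypothetical sketch: keep the full DataFrame in server memory instead of
# a dcc.Store, pre-split by year so each callback serializes one slice only.
df_all = pd.DataFrame({
    'Year': [2021, 2021, 2022, 2022, 2023],
    'Value': [1, 2, 3, 4, 5],
})

# Split once at startup; groupby yields one sub-DataFrame per year.
DATA_BY_YEAR = {year: grp for year, grp in df_all.groupby('Year')}

def rows_for_year(year):
    """Return rowData for the selected year only."""
    df = DATA_BY_YEAR.get(year)
    return [] if df is None else df.to_dict('records')
```

The callback would then call rows_for_year(year) and drop the Input('data-all', 'data') entirely. (This assumes a single-process server or a shared cache; with multiple workers something like flask-caching would be needed instead of a module-level dict.)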
Any tips to increase performance? I am not using the enterprise version of AG Grid, but should I consider reconfiguring everything to use the infinite row model? The downside of the infinite row model is sorting and filtering, since the table does not hold all the data client side, and I do have graphs that are updated when the table is filtered.
I am not opposed to using the infinite row model if that is going to be the best option, but I wanted to ask for some advice here before I reconfigure my update_graphs callbacks for the infinite row model.
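For reference, this is the kind of server-side slicing I imagine the infinite row model would need (a minimal sketch; in dash-ag-grid the request and response would be wired to the grid's getRowsRequest and getRowsResponse properties, and the field names here are assumptions based on that model, with sortModel/filterModel handling omitted):

```python
import pandas as pd

df_all = pd.DataFrame({'Year': [2021, 2022, 2023], 'Value': [1, 2, 3]})

def get_rows(request, df=df_all):
    """Slice the DataFrame for one infinite-row-model block request.

    `request` is assumed to carry startRow/endRow like the grid's
    getRowsRequest payload.
    """
    block = df.iloc[request['startRow']:request['endRow']]
    # rowCount tells the grid the total size so it knows when to stop
    # requesting further blocks
    return {'rowData': block.to_dict('records'), 'rowCount': len(df)}
```

A callback would pass the grid's getRowsRequest payload into get_rows and return the result as getRowsResponse; the filtered-graph callbacks would then have to consume the same server-side filtered frame rather than the grid's client-side data.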