AG Grid "virtualRowData" preventing access to full table

Hi everyone!

In my web app, I pull data from SQL and transform it into an AG Grid table (first callback in the code below).
Then, if necessary, the user makes changes to the table (in the browser), clicks the “Save to SQL” button, and the changes are uploaded back to SQL via the .to_sql line (second callback in the code below).

#Take data from SQL and transform into Dash AG Grid Table

    @callback(
        Output('table-from-sql', 'children'),
        Input('interval_pg', 'n_intervals')
    )
    def populate_datatable(n_intervals):
        df = pd.read_sql_table(table, con)
        df.index = np.arange(1, len(df) + 1)
        # note: named grid_component so it doesn't shadow the SQL table name
        grid_component = grid.AgGrid(
            id='sql-table',
            rowData=df.to_dict('records'),
            columnDefs=[{'field': col} for col in df.columns],
            defaultColDef={
                "resizable": True,
                "sortable": True,
                "filter": True,
                "floatingFilter": True,
                "editable": True,
            },
            dashGridOptions={
                "undoRedoCellEditing": True,
                "rowDragManaged": True,
                "animateRows": True,
                "rowDragMultiRow": True,
                "rowSelection": "multiple",
                "rowDragEntireRow": True,
            },
        )
        return grid_component

#Save edited data back to SQL

    @callback(
        [Output('placeholder', 'children'),
         Output("store", "data")],
        [Input('save-to-sql', 'n_clicks'),
         Input("interval", "n_intervals")],
        [State('sql-table', 'virtualRowData'),  # <=== causes issues if filters are used
         State('store', 'data')],
    )
    def df_to_sql(n_clicks, n_intervals, dataset, s):
        pg = pd.DataFrame(dataset)
        pg.to_sql(table, con, if_exists='replace', index=False)
    #some unnecessary lines are removed

Because the user can sometimes change the order of the data in the table (lines added/deleted/dragged), and I want exactly that new order to be saved into SQL, in the callback I am using virtualRowData instead of rowData.

But using virtualRowData has backfired: if the table is filtered at the moment of editing and the “Save to SQL” button is clicked, the whole existing dataset in SQL is overwritten by the filtered rows only, so all rows that were not part of the filter are deleted/lost.
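To make the failure mode concrete, here is a minimal stdlib sketch (no real database involved, the table is just an in-memory list): saving only the filtered view with replace semantics drops every row outside the filter, exactly like .to_sql with if_exists='replace'.

```python
# Hypothetical in-memory stand-in for the SQL table; the real app uses
# pd.DataFrame(...).to_sql(..., if_exists='replace'), which behaves the same
# way: the target table is rebuilt from whatever rows are passed in.
sql_table = [
    {"id": 1, "city": "Berlin", "qty": 10},
    {"id": 2, "city": "Paris",  "qty": 20},
    {"id": 3, "city": "Berlin", "qty": 30},
]

# virtualRowData while a filter city == "Berlin" is active:
virtual_row_data = [row for row in sql_table if row["city"] == "Berlin"]

# "Save to SQL" with replace semantics overwrites the whole table...
sql_table = list(virtual_row_data)

# ...so the Paris row is gone.
print([row["id"] for row in sql_table])  # → [1, 3]
```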

Right now the only way to avoid this is to manually remove all filters from the table after editing the data in it, and only then upload it to SQL.

What I’d like is to avoid having to MANUALLY remove all filters from the table.

A few options come to my mind:

  • automatically remove all filters before saving to SQL
  • keep the filters visually in place (still visible to the user), but tell Python/SQL to ignore them when uploading
  • create a unique key for each row and have Python/SQL upload only the edited rows while ignoring all filters (but still taking into account a possible change in the order of rows, even if the data itself has not changed)
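The third option can be sketched in plain Python. This is a hypothetical helper (not a dash-ag-grid API): every row carries a unique "id" key, and the grid's full rowData is diffed against the last snapshot saved to SQL, so active filters never hide rows from the save step. A row counts as changed if its values OR its position differ.

```python
def rows_changed_since_save(current_rows, saved_rows, key="id"):
    """Return the rows to upsert: edited rows plus rows whose order moved."""
    saved_by_key = {row[key]: (pos, row) for pos, row in enumerate(saved_rows)}
    changes = []
    for pos, row in enumerate(current_rows):
        saved = saved_by_key.get(row[key])
        if saved is None or saved[0] != pos or saved[1] != row:
            # new row, moved row, or edited row -> keep its position for ordering
            changes.append({**row, "position": pos})
    return changes

saved = [{"id": 1, "qty": 10}, {"id": 2, "qty": 20}, {"id": 3, "qty": 30}]
# user dragged row 3 above row 2 and edited row 1's qty:
current = [{"id": 1, "qty": 15}, {"id": 3, "qty": 30}, {"id": 2, "qty": 20}]

print(rows_changed_since_save(current, saved))
```

On the SQL side these rows could then be upserted by id (with a position column to persist the ordering) instead of replacing the whole table.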

Unfortunately, I lack the technical knowledge and experience to figure out how to implement any of those options.

I’d appreciate any tips and guidance.

Thanks in advance!

Hello @mrel,

Yes, what you are doing with the whole table is dangerous. For a case like this, I always recommend keeping a changelog and then pushing the changelog up using keys.
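A minimal sketch of the changelog idea, assuming change events shaped roughly like dash-ag-grid's cellValueChanged payload (rowId/colId/value; the exact shape may differ between versions): instead of rewriting the whole table, each edit is recorded and later applied to the rows by their unique key, so active filters cannot cause data loss.

```python
def apply_changelog(rows_by_id, changelog):
    """Apply recorded cell edits to a dict of rows keyed by their unique id."""
    for event in changelog:
        rows_by_id[event["rowId"]][event["colId"]] = event["value"]
    return rows_by_id

rows_by_id = {
    "1": {"id": "1", "city": "Berlin", "qty": 10},
    "2": {"id": "2", "city": "Paris",  "qty": 20},
}
changelog = [
    {"rowId": "1", "colId": "qty", "value": 15},  # user edited one cell
]
apply_changelog(rows_by_id, changelog)
print(rows_by_id["1"]["qty"])  # → 15 (edited row updated)
print(rows_by_id["2"]["qty"])  # → 20 (untouched/filtered-out row survives)
```

In the real app, the changelog would accumulate in a dcc.Store and the save callback would issue per-key UPDATEs instead of a full-table replace.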

Why exactly are you trying to push the data in the order that it is represented?

Hi @jinnyzor,

I am building something like an interactive dashboard. I’d like the rows to be uploaded to SQL and then pulled back from SQL in a certain order (per the user’s choice).
That’s why I’ve been trying to figure out a way to keep that feature while still being able to filter the table, telling SQL to ignore the filters and upload the unfiltered (but ordered/sorted) data.

Yes, a changelog with identifiers (keys) is what I use.

Along with getRowId, so it stays clean.
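For reference, getRowId can be set on the grid so AG Grid identifies rows by a stable key rather than by index. A sketch of the relevant props as a plain dict (the "id" column name and the other columns are illustrative assumptions, not from the original app):

```python
# Illustrative grid props; getRowId is a JS expression AG Grid evaluates
# to pick a stable row key, here reading the hypothetical "id" field.
grid_props = {
    "id": "sql-table",
    "getRowId": "params.data.id",
    "columnDefs": [
        {"field": "id", "hide": True},   # stable key, hidden from the user
        {"field": "city"},
        {"field": "qty", "editable": True},
    ],
}
print(grid_props["getRowId"])  # → params.data.id
```

These props would be passed straight into the AgGrid component, so cellValueChanged events carry that stable rowId.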