Hi everyone!
In my web app, I pull data from SQL and transform it into an AG Grid table (first callback in the code below).
Then, if necessary, the user edits the table (in the browser), clicks the “Save to SQL” button, and the changes are uploaded back to SQL via the .to_sql
line (second callback in the code below).
# Take data from SQL and transform it into a Dash AG Grid table
@callback(
    Output('table-from-sql', 'children'),
    Input('interval_pg', 'n_intervals')
)
def populate_datatable(n_intervals):
    df = pd.read_sql_table(table, con)
    df.index = np.arange(1, len(df) + 1)
    df["entry"] = df.index
    # Note: the grid must not be assigned to a variable named `table`,
    # or it would shadow the global table name read by read_sql_table above
    # and raise UnboundLocalError.
    ag_table = grid.AgGrid(
        id="sql-table",
        rowData=df.to_dict("records"),
        columnDefs=columnDefs,
        defaultColDef={
            "resizable": True,
            "sortable": True,
            "filter": True,
            "floatingFilter": True,
            "editable": True,
            "minWidth": 125,
        },
        columnSize="sizeToFit",
        dashGridOptions={
            "rowSelection": "multiple",  # duplicate key removed
            "undoRedoCellEditing": True,
            "rowDragManaged": True,
            "animateRows": True,
            "rowDragMultiRow": True,
            "rowDragEntireRow": True,
        },
    )
    return ag_table
# Save edited data back to SQL
@callback(
    [Output('placeholder', 'children'),
     Output("store", "data")],
    [Input('save-to-sql', 'n_clicks'),
     Input("interval", "n_intervals")],
    [State('sql-table', 'virtualRowData'),  # <=== causes issues if filters are used
     State('store', 'data')],
    prevent_initial_call=True)
def df_to_sql(n_clicks, n_intervals, dataset, s):
    pg = pd.DataFrame(dataset)
    pg.to_sql(table, con, if_exists='replace', index=False)
    # some unnecessary lines are removed
Because the user can sometimes change the order of the data in the table (lines added/deleted/dragged), and I want that exact new order saved to SQL, in the callback I use virtualRowData
instead of rowData.
But using virtualRowData
has backfired: if the table is filtered at the moment of editing and “Save to SQL” is clicked, the entire existing data in SQL is overwritten by the filtered data only (so all rows hidden by the filter are deleted/lost).
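The data loss can be reproduced outside Dash in a few lines (the in-memory sqlite3 database and the toy table name "t" below are assumptions for the demo, not my real setup):

import sqlite3
import pandas as pd

con = sqlite3.connect(":memory:")
full = pd.DataFrame({"entry": [1, 2, 3], "val": ["a", "b", "c"]})
full.to_sql("t", con, if_exists="replace", index=False)

# virtualRowData only contains the rows visible under the active filter,
# e.g. a filter matching "b" leaves a single row:
filtered = full[full["val"] == "b"]
filtered.to_sql("t", con, if_exists="replace", index=False)

back = pd.read_sql("SELECT * FROM t", con)
print(len(back))  # 1 -- rows "a" and "c" are gone

Because if_exists='replace' drops and recreates the table, whatever subset happens to be in virtualRowData becomes the entire table.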
Right now the only way to avoid this is to manually remove all filters from the table after editing the data in it and only then upload it to SQL.
What I’d like is to avoid that step of MANUALLY removing all filters from the table.
A few options come to my mind:
- automatically remove all filters before saving to SQL
- keep the filters visually in place (still visible to the user), but tell Python/SQL to ignore them when uploading
- create a unique key for each row and tell Python/SQL to upload only the edited rows while ignoring all filters (but still accounting for possible changes in row order, even if the data itself is not changed)
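For the third option, the merge step on the Python side could look roughly like this pure-pandas sketch (the merge_edited_rows helper is my own invention, it assumes the "entry" column is a stable unique key, and it only handles cell edits, not row reordering or added/deleted rows):

import pandas as pd

def merge_edited_rows(full_df, edited_rows, key="entry"):
    """Merge the (possibly filtered) edited rows back into the full dataset,
    matching on a unique key column, so rows hidden by a filter survive."""
    edited = pd.DataFrame(edited_rows).set_index(key)
    merged = full_df.set_index(key)
    merged.update(edited)  # overwrites only the cells present in `edited`
    return merged.reset_index()

full = pd.DataFrame({"entry": [1, 2, 3], "val": ["a", "b", "c"]})
# virtualRowData delivered only the filtered row, with an edit applied:
edited = [{"entry": 2, "val": "B!"}]
result = merge_edited_rows(full, edited)
print(result["val"].tolist())  # ['a', 'B!', 'c']

The full, merged frame would then be written with to_sql as before. Capturing the new row order would need something extra, e.g. persisting an explicit order column and saving it only when no filter is active.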
Unfortunately, I lack the technical knowledge and experience to figure out how to implement any of these options.
I’d appreciate any tips and guidance.
Thanks in advance!