Maintaining the specific layout/order of a pandas dataframe in dcc.Store

TLDR: I am using OrderedDict and json.dumps/json.loads to maintain the layout/order of specific pandas dataframes when passing them to dcc.Store. Is there a better way?

I have a use case where I need to store a number of distinct pandas dataframes in dcc.Store and maintain their layout/order so that I can display them as datatables in a multi-page app (not using the multi-page feature yet). I quickly noticed that my original method of storing multiple dataframes (a dictionary of dictionaries) did not preserve the order of the individual dataframes:

# df.to_dict() gives {column: {index: value}} for each dataframe
data1_dict = data1_df.to_dict()
data2_dict = data2_df.to_dict()

all_data_dict = {}

all_data_dict[0] = data1_dict
all_data_dict[1] = data2_dict
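For reference, that combined dict is what goes into dcc.Store. The component id below is just a placeholder, and in practice the store could just as well be filled from a callback; either way, the dict gets serialized to JSON on its way to the browser:

import dash
from dash import dcc, html

app = dash.Dash(__name__)

app.layout = html.Div([
    # placeholder id; all_data_dict is the nested dict built above
    dcc.Store(id='all-data-store', data=all_data_dict),
])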

I then tried creating ordered dictionaries:

from collections import OrderedDict

data1_dict = data1_df.to_dict(into=OrderedDict)
data2_dict = data2_df.to_dict(into=OrderedDict)

# probably unnecessary to make the outer container an OrderedDict
all_data_dict = OrderedDict()

all_data_dict[0] = data1_dict
all_data_dict[1] = data2_dict

This also did not preserve the order of the individual dictionaries.

After thinking about it (and googling) for a while, I determined that it was likely the conversion to JSON (whether from an ordered or a plain dict) that was causing the individual dictionaries to lose their order.
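A quick sanity check with a throwaway dataframe (not my real data) suggests the order is still intact on the Python side after to_dict, which is what pointed me at the serialization step:

import pandas as pd
from collections import OrderedDict

# throwaway example with columns deliberately not in alphabetical order
check_df = pd.DataFrame({'zebra': [1, 2], 'apple': [3, 4], 'mango': [5, 6]})
check_dict = check_df.to_dict(into=OrderedDict)

# the column order survives the to_dict() call itself
print(list(check_dict.keys()))  # ['zebra', 'apple', 'mango']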

So I decided to try converting each dict to a JSON string with json.dumps before storing it in dcc.Store, and then using json.loads and pd.DataFrame.from_dict to convert it back into a dataframe. To my surprise (I am a JSON noob), it worked:

import json

data1_dict = data1_df.to_dict(into=OrderedDict)
data1_json = json.dumps(data1_dict)

data2_dict = data2_df.to_dict(into=OrderedDict)
data2_json = json.dumps(data2_dict)

# again, probably unnecessary to make the outer container an OrderedDict
all_data_dict = OrderedDict()

all_data_dict[0] = data1_json
all_data_dict[1] = data2_json
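As another throwaway check, a local dumps/loads round trip comes back with the columns in the original order, so the JSON string step itself doesn't appear to reorder anything (although the integer index does come back as strings):

import json
import pandas as pd
from collections import OrderedDict

check_df = pd.DataFrame({'zebra': [1, 2], 'apple': [3, 4], 'mango': [5, 6]})

# dict -> JSON string -> dict -> dataframe, the same round trip as above
round_trip = pd.DataFrame.from_dict(json.loads(json.dumps(check_df.to_dict(into=OrderedDict))))

print(list(round_trip.columns))  # ['zebra', 'apple', 'mango']
print(list(round_trip.index))    # ['0', '1'] -- integer index comes back as strings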

and then in each respective page:

# `data` is the dcc.Store contents received by the page's callback;
# the integer keys come back as strings ('0', '1', ...) after the JSON round trip
if data and data.get('0'):
    data_json = json.loads(data['0'])
    data_df = pd.DataFrame.from_dict(data_json)

This gives me a dataframe with the same layout/order in which it was created.
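For completeness, this is roughly how the recovered dataframe ends up on each page; the table id is just a placeholder:

from dash import dash_table

# placeholder id; data_df is the dataframe recovered above
table = dash_table.DataTable(
    id='page-1-table',
    columns=[{'name': col, 'id': col} for col in data_df.columns],
    data=data_df.to_dict('records'),
)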

I have no idea if this is the right way to do this. In my case, I have 16 small (under 150k) distinct dataframes that I need to store. I suspect this process could be quite slow with large datasets.

I guess my question is: is there a better way to do this? Are there any pitfalls to this method that I should know about?