Live-plotting ~15 subplots of 20-50k points each

So I am live-plotting several charts, all loaded with pandas read_csv. The CSV file is updated every second by another process. The file is small at the start of the day, but grows to 15-20 variables, each with up to 50k points. The goal is a process that can generate all 15-20 subplots in under 1 second, the faster the better. At the moment I force the process to plot only the last 1,000 points of each variable so that it stays under 1 second.
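For context, this is roughly what I mean by "last 1,000 points": a hypothetical frame of the same shape as my data, trimmed with tail() before any traces are built, so the renderer only ever sees the most recent rows.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the live CSV: 20 variables x 50k points.
df = pd.DataFrame(np.random.randn(50_000, 20),
                  columns=[f'factor_{i}' for i in range(20)])

# Keep only the most recent 1,000 rows before building any traces;
# plot time scales with the number of points handed to the renderer.
recent = df.tail(1_000)
print(recent.shape)  # (1000, 20)
```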

Below is the code:

import time
import pandas as pd
import plotly.graph_objs as go
from dash import dcc, html

def update_charts(factor_names, jon_dataset):
    print('live chart data loading time')
    t0 = time.time()
    fact_mid_db = pd.read_csv(fact_mid_chart_path, index_col=0, parse_dates=True)
    fact_mid_db = fact_mid_db[pricing.index.tolist()]
    t1 = time.time()
    print(t1 - t0)
    t0 = time.time()
    Charts = []
    for factor in factor_names:
        Charts.append(html.Div([
            dcc.Graph(
                figure=go.Figure(
                    data=[go.Scatter(
                        x=fact_mid_db.index.tolist()[1::10],
                        y=fact_mid_db[factor].iloc[1::10],
                        mode='lines',
                        name=factor)],
                    layout=go.Layout(
                        margin=dict(l=20, r=0, t=30, b=0))))
        ], className='three columns',
           style={'margin-left': 15, 'margin-right': 15,
                  'margin-top': 5, 'margin-bottom': 5}))
    t1 = time.time()
    print(t1 - t0)
    return Charts

I have tried Scattergl (simply replacing Scatter with Scattergl), but it seems to be even worse: Dash struggles to generate all the charts.

How can I speed up the process?

In addition, and this is probably related, depending on the size of the dataset Dash will sometimes only partially render the charts: say it should be plotting data from 9am to 11am; on some update cycles it plots 9am-10am, then 9am-11am, then 9am-9:30am.
In case people ask: I am not passing the data through a JSON file because the decompression is far too slow. It is much faster to read the CSV file directly.
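This is how I compared the two transports; a hypothetical in-memory version of the benchmark, with a frame of the same rough size round-tripped through CSV text and through JSON text:

```python
import io
import time

import numpy as np
import pandas as pd

# Hypothetical 50k x 15 frame, similar in size to the end-of-day file.
df = pd.DataFrame(np.random.randn(50_000, 15),
                  columns=[f'factor_{i}' for i in range(15)])

# Round-trip through CSV text.
t0 = time.time()
df_csv = pd.read_csv(io.StringIO(df.to_csv()), index_col=0)
csv_secs = time.time() - t0

# Round-trip through JSON text.
t0 = time.time()
df_json = pd.read_json(io.StringIO(df.to_json()))
json_secs = time.time() - t0

print(f'csv: {csv_secs:.3f}s  json: {json_secs:.3f}s')
```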

Thank you