Optimizing plotly's make_subplots() for large numbers of subplots

Hi all,

My use case for plotly involves frequently making figures with very large numbers of scatter plots as subplots, sometimes as many as 800 total subplots arranged in 3-5 columns.

The problem I have is that a ton of time is spent in the make_subplots() function of plotly’s subplots.py. Based on the timing script below, make_subplots() appears to scale on the order of O(n^2) for n subplots, even though it seems like the type of function that should be able to run in O(n) or even O(1) time. It runs in negligible time for small numbers of subplots but blows up quickly.

import time
from plotly import __version__ as plotlyVersion
from plotly.subplots import make_subplots

numCols = 3

print(f"Plotly version: {plotlyVersion}")
# Time only the make_subplots() call for an increasing number of rows
for numRows in range(1, 200):
    tic = time.perf_counter()
    fig = make_subplots(rows=numRows, cols=numCols)
    toc = time.perf_counter()
    print(f"{numCols*numRows},{toc-tic:0.4f}")
    del fig

Does anyone know if there’s any low-hanging fruit for optimization in make_subplots()? (Perhaps there are optimizations that would only work for my use case, where every subplot is the same kind of xy scatter?) Even if these optimizations wouldn’t make it into a plotly release, it’d be great to find some I could apply to my own local copy of plotly.
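As an example of what I mean by use-case-specific optimizations, here is a rough sketch of skipping make_subplots() entirely and computing the axis domains directly, since a regular grid of xy scatters only needs one x/y axis pair per cell. The helper name and gap values are placeholders I made up, and I haven't verified that this actually avoids the slow path:

import plotly.graph_objects as go

def grid_layout(rows, cols, h_gap=0.02, v_gap=0.005):
    # Hypothetical helper (not part of plotly): builds a layout dict with one
    # x/y axis pair per grid cell; gaps are given as fractions of the figure.
    cell_w = (1.0 - h_gap * (cols - 1)) / cols
    cell_h = (1.0 - v_gap * (rows - 1)) / rows
    layout = {}
    for r in range(rows):  # r = 0 is the top row
        for c in range(cols):
            idx = r * cols + c + 1  # 1-based axis index
            suffix = "" if idx == 1 else str(idx)
            x0 = c * (cell_w + h_gap)
            y1 = 1.0 - r * (cell_h + v_gap)
            # min/max guard against float round-off pushing domains past [0, 1]
            layout[f"xaxis{suffix}"] = dict(domain=[x0, min(1.0, x0 + cell_w)],
                                            anchor=f"y{suffix}")
            layout[f"yaxis{suffix}"] = dict(domain=[max(0.0, y1 - cell_h), y1],
                                            anchor=f"x{suffix}")
    return layout

fig = go.Figure(layout=grid_layout(rows=50, cols=3))
# Traces then target a cell by axis id, e.g. the second cell:
fig.add_trace(go.Scatter(x=[1, 2, 3], y=[3, 1, 2], xaxis="x2", yaxis="y2"))

No idea yet whether building the Figure this way is actually faster, since the layout still has to pass through plotly's validation when the Figure is constructed.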

At first glance it seems like list_of_domains = [] may be optimizable (preallocate with list_of_domains = [None] * rows * cols?) since it gets .append()'d to a lot, but I assume that’s not where the bulk of the time is spent. Beyond that, I assume I’d need much more intimate knowledge of plotly’s internals to come up with anything more.
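If it would help anyone dig in, a quick cProfile run over a single large call (sketch below; the row count and output filename are arbitrary) should show exactly which internal calls dominate the time:

import cProfile
import pstats
from plotly.subplots import make_subplots

# Profile one large call and print the 20 most expensive functions by cumulative time
cProfile.run("make_subplots(rows=150, cols=3)", "make_subplots.prof")
pstats.Stats("make_subplots.prof").sort_stats("cumulative").print_stats(20)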