Python Dash SQLAlchemy

I am using Microsoft SQL Server and SQLAlchemy

I have a callback in my Dash app that connects to a SQL database, but it does not close the SPID connections in Activity Monitor in SQL Server, so it just accumulates a ton of connections. It works locally, where I see the connection open and then close a few seconds later, but it does not in production. I have it deployed on Azure App Services. Any idea why?

# imports assumed by this snippet (ServersideOutput and Input come from dash-extensions)
import pandas as pd
import pyodbc
from sqlalchemy import create_engine
from dash_extensions.enrich import Input, ServersideOutput

# note: pyodbc.pooling only takes effect if set before the first connection is made
pyodbc.pooling = False


@app.callback(ServersideOutput("store", "data"),
              Input(component_id='my_interval', component_property='n_intervals'))
def query(n_interval):
    try:
        if n_interval > 0:
            SERVER = 'DEVSQL'
            DATABASE = 'mydb'
            USERNAME = 'user'
            PASSWORD = 'password'
            DATA_CONNECTION = f"mssql+pyodbc://{USERNAME}:{PASSWORD}@{SERVER}/{DATABASE}?driver=ODBC+Driver+17+for+SQL+Server"
            engine = create_engine(DATA_CONNECTION)

            with engine.connect() as connection:

                chosen_date_instance = pd.read_sql_query('''
                    SELECT *
                    FROM inspector.AUDIT_CHECK_INSTANCE
                    ORDER BY RUN_DT DESC
                    ''', connection)

                chosen_date_instance.to_csv('././datasets/audit_check_instance_data.csv')

                audit_item_instance = pd.read_sql_query('''
                    SELECT *
                    FROM inspector.audit_item
                    ''', connection)

                audit_item_instance.to_csv('././datasets/audit_item_instance.csv')

                audit_checklet_instance = pd.read_sql_query('''
                    SELECT *
                    FROM inspector.audit_checklet_instance
                    ''', connection)

                audit_checklet_instance.to_csv('././datasets/audit_checklet_instance.csv')

                # redundant: the with-block already closes the connection on exit
                connection.close()

            # throw away the pool (and its checked-in connections) entirely
            engine.dispose()

            stored_df = chosen_date_instance.copy()

            return stored_df.to_dict('records')
    finally:
        pass

Can you elaborate a bit more on what exactly isn’t working?

As a side note, I don't think you need to explicitly close the connection when using the context manager (at least not in the latest versions of SQLAlchemy).
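
For example, a minimal sketch (assuming an existing SQLAlchemy engine named engine): the context manager closes the connection when the block exits, so an explicit close inside it is a no-op at best:

import pandas as pd

with engine.connect() as connection:
    df = pd.read_sql_query('SELECT 1 AS x', connection)
# the connection is already closed / returned to the pool here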

My Dash app in production has an app.callback which connects to Microsoft SQL Server.

I have tried connection.close() and engine.dispose(), but the SPID connections in my SQL Server remain open, when I expected it to open > get the data via SQL query > then close the connection until the next interval is reached for my callback.

It seems to work locally but not when I deploy it to production on Azure App Services; I see multiple connections and they don't clear until I shut the app down.

My database admins do not want a ton of open connections.

Any help would be appreciated!

Got it now! Your issue is that the connections are not returned to the pool in Azure App Service, but they are locally.

I imagine this has more to do with your engine setup and/or Azure than with the Dash app itself, so you might have a better chance googling similar errors in Flask applications. Have you tried, e.g., limiting the pool_size of the deployed engine?

Maybe someone else can jump in and help you out, but to me it doesn’t seem to be related to Dash… I have a few applications deployed in other services with similar callbacks and I never saw the open connections increase like this…

EDIT: on a second note, there is a chance that the interval is short enough to trigger the callback while it is still executing, potentially before the connection is released… Not sure how Dash would handle that. I would try to move the engine init to the global scope of the app (setting up a sensible pool size), create a connection without the dispose, and close it in the finally statement.
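
Roughly like this (a sketch only, untested against your app; DATA_CONNECTION is the same URL from your snippet, and the pool numbers are just examples):

import pandas as pd
from dash.exceptions import PreventUpdate
from sqlalchemy import create_engine

# engine created once, at the global scope of the app, with a bounded pool
engine = create_engine(
    DATA_CONNECTION,   # same mssql+pyodbc URL as in your snippet
    pool_size=5,       # keep at most 5 persistent connections
    max_overflow=0,    # never open extra connections beyond the pool
)

@app.callback(ServersideOutput("store", "data"),
              Input("my_interval", "n_intervals"))
def query(n_intervals):
    if not n_intervals:
        raise PreventUpdate
    connection = engine.connect()
    try:
        df = pd.read_sql_query(
            'SELECT * FROM inspector.AUDIT_CHECK_INSTANCE ORDER BY RUN_DT DESC',
            connection)
        return df.to_dict('records')
    finally:
        # always return the connection to the pool; no engine.dispose()
        connection.close()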


Thank you for the response!

I will try your recommendations and provide an update.

Update:

I tried the below:

poolclass=NullPool
pool_size
pool_recycle

Nothing worked! Will keep investigating.
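
For reference, a sketch of how those options are passed (NullPool lives in sqlalchemy.pool; DATA_CONNECTION stands in for the connection URL):

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# NullPool disables pooling: every connect() opens a real connection
# and close() actually closes it on the server
engine = create_engine(DATA_CONNECTION, poolclass=NullPool)

# pool_size / pool_recycle configure the default QueuePool instead,
# so they are used without poolclass=NullPool:
# engine = create_engine(DATA_CONNECTION, pool_size=5, pool_recycle=1800)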


So far no solution; I don't think it's Dash.

It has to do with Azure App Services or SQL Server not executing the close from a remote query.

So I think we can move on! Thanks for the help!


Too bad… 🙁 Thanks for the update, and please keep us posted in case you get it sorted out!


Thanks for your input! I learned a good chunk from your hints.

I ended up leveraging Azure Data Factory and just reading it from there into my Dash app. It's a bit cleaner too!


Just out of curiosity, did you ask the SQLAlchemy folks if they had any suggestions? Someone out there has to have encountered this before.

I have a similar setup in MySQL, but my database is serverless, so it's hard to tell if it's releasing the connections.


It's because Dash is not asynchronous, so it will leave it hanging.

Dash devs need to get it going! LFG!