Dash not updating graph on DB insert via an imported connection

Some background information:
Base system: Ubuntu 16.04
Docker: Docker version 20.10.6, build 370c289
Docker image information:
OS: Ubuntu 16.04
Python version: 3.6
Dash: 1.12.0
dash-core-components (dcc): 1.10.0
dash-html-components: 1.0.3
dash-renderer: 1.4.1

Hey guys, I’ve got an odd problem that I’ve been trying to sort out for several days. My Dash graph won’t update even though new data is being inserted into the DB.

The Dash app is in the multi-page format with:

- server.py
- index.py
- an apps directory with a module.py for every app
- a DB_as_df module (the problem in question)

Due to some dependencies, the data I am working with is stored entirely in an SQLite DB file, and every day I download an updated version of this file. To speed up queries and updates, I wrote a class that loads the DB file into an in-memory database and keeps that connection as an attribute. The class is instantiated in server.py, so the DB file is loaded into memory at server start.
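
For context, the wiring looks roughly like this (a minimal sketch; the file, path, and variable names are illustrative, not my exact code):

```python
# server.py (sketch -- file and variable names are illustrative)
import dash
from db_as_df import DB_as_df   # the class shown at the end of this post

app = dash.Dash(__name__)
server = app.server

# Runs once at server start: dumps the SQLite file into the in-memory DB
db = DB_as_df("/path/to/data.sqlite")

# apps/some_page.py then does:
#     from server import app, db   # every page imports the same db instance
```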

I have a watchdog that checks for file changes in this DB file. On a file-change, it initializes an update process, where it connects to this DB file, queries it for new the new dates associated with new data, then adds those new entries to the in-memory DB. Each pages imports this DB_as_df object, which I have verified as the original object by checking the memory addresses associated with the objects.
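
The watcher piece looks roughly like this (a sketch using the watchdog package; the handler name and paths are placeholders):

```python
# Sketch of the file watcher that triggers the update (names/paths are placeholders)
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

from server import db   # the shared DB_as_df instance


class DBFileChangedHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # When the downloaded DB file changes, pull the new rows into the in-memory DB
        if event.src_path == db.directory:
            db.update()


observer = Observer()
observer.schedule(DBFileChangedHandler(), path="/path/to/db/folder")
observer.start()
```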

The class also has a get_df method, which I use in callbacks to pull the data for the requested date range into a pandas DataFrame, which is then graphed.
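
A typical callback looks something like this (a simplified sketch; the component IDs and column names are placeholders):

```python
# Sketch of a page callback (component IDs and column names are placeholders)
import pandas as pd
import plotly.express as px
from dash.dependencies import Input, Output

from server import app, db


@app.callback(
    Output("mw-graph", "figure"),
    [Input("date-picker", "start_date"), Input("date-picker", "end_date")],
)
def update_graph(start_date, end_date):
    # Convert the picker strings to datetimes and query the in-memory DB
    start = pd.to_datetime(start_date).to_pydatetime()
    end = pd.to_datetime(end_date).to_pydatetime()
    df = db.get_df(start, end, table="MW")
    return px.line(df, x="timestamp", y="value")
```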

My problem: when the DB is updated via the update method triggered by the watchdog, callbacks that use the get_df method do not return any of the new data, even though I have verified separately that the module works as intended when run independently of the Dash application. I have read several Stack Overflow questions and Plotly forum posts, including this one, which is the closest to my issue: SQL not updating in callback - #3 by will

Their problem is the same; the difference is that they were able to solve it by closing and reopening the connection. However, SQLite in-memory databases are destroyed when the connection is closed, so I cannot do that.
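
To illustrate why reopening is not an option here:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.close()                          # the in-memory database is gone at this point

con = sqlite3.connect(":memory:")    # a brand new, empty database
# "SELECT * FROM t" would now fail with "no such table: t"
```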

I would like to keep the performance advantage of having the DB in memory, so I would like to understand the caching behaviour described in the answer to the linked post so I can work around it, or else switch to a different in-memory component such as dash-tables or Redis.

```python
import datetime
import sqlite3
from io import StringIO


class DB_as_df(object):

    def __init__(self, directory):
        self.directory = directory
        self.db_in_mem = None  # in-memory DB connection object
        self.connect()

    def connect(self, update=False):
        # Dump the on-disk DB to a SQL script held in memory
        static_DB_con = sqlite3.connect(self.directory)
        tempfile = StringIO()
        for line in static_DB_con.iterdump():
            tempfile.write('%s\n' % line)
        static_DB_con.close()
        tempfile.seek(0)

        # Replay the dump into a fresh in-memory DB on first call (or when forced)
        if not self.db_in_mem or update:
            self.db_in_mem = sqlite3.connect(':memory:', check_same_thread=False)
            self.db_in_mem.cursor().executescript(tempfile.read())
            self.db_in_mem.commit()
            self.db_in_mem.row_factory = sqlite3.Row
            print("Populated in-memory DB")
        else:
            print("In-memory DB already exists, ignoring initial connection call")

    def get_df(self, start: datetime.datetime, end: datetime.datetime, table: str = "MW"):
        # removed for brevity
        # builds a query from the given dates and returns a pandas DataFrame
        pass

    def update(self):
        # removed for brevity
        # connects to the DB file, queries the new dates, adds them to the in-memory DB
        pass
```