@ngaunguyens if you are deploying worker processes and using cron jobs, I’d say you are doing pretty well for a new coder!
I’ve provided a little extra detail below on exactly how I would approach your problem, and kept it in this thread in case others encounter a similar challenge with accessing data between workers and web processes, using Azure as an example.
Understanding cloud storage
This is the tricky part if you’ve never used AWS S3 or Azure Blob cloud stores before. It’s quite a bit to take in, but worth the time to learn. You can think of them like a Google Drive or Dropbox for files, except they are specifically designed to be connected to from code, such as your Python app. They can store any type of file and provide very fast access to it from anywhere on the internet. I’d watch a few YouTube tutorials that show how it’s done, search for “connect to Azure Blob with Python” or something like that, and follow the Azure guide I linked to earlier.
The critical thing to understand is that once you’ve set up your Azure storage account, your Python code needs special credentials to authenticate and securely connect to your cloud store in an automated fashion. Once that’s done you’re on easy street and can read/write any files you like to your cloud store, directly from your Python code. Step-by-step instructions are in the Azure guide linked above, but essentially you copy your Azure credentials from the Azure web portal (the one you log in to as a human) and paste them into an environment variable that your Python code will use at run-time to connect to Azure.
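To make the idea concrete, here is a minimal sketch using the azure-storage-blob package, assuming you store the connection string from the portal in an environment variable (the variable, container and file names here are just placeholders, not from the guide):

```python
# Rough sketch only; the env var, container and file names below are placeholders.
import os

from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

# Read the connection string you pasted from the Azure portal into an env var
conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]

# Create a client for your storage account
service = BlobServiceClient.from_connection_string(conn_str)

# Get a handle to a specific blob (file) inside a container
blob = service.get_blob_client(container="my-container", blob="data.csv")
```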
This is what I would do.
Step 1: Get your csv file into cloud storage
Before even starting on the code, get an example .csv data file into Azure Blob. Once you’ve set up your storage account, you can use a desktop application called Azure Storage Explorer, which lets you manage everything and works much like Windows Explorer. Connect it to your cloud store, then manually upload your .csv to a new storage container and confirm it’s there! Check out this guide to help.
Step 2: Get your web process (Dash) app to READ the .csv stored in Azure (not locally)
Now you want to modify your Dash app to read the .csv file directly from Azure at run-time. This is where you follow the guide to set up Python to talk to Azure (and create the special environment variable with your Azure credentials etc.). If you are determined, it’s not too hard to brute force this and learn. Instead of reading the .csv from your local disk, you replace that code with a connection to Azure that reads the .csv directly from the cloud storage container. Connecting and reading files on Azure Blob from your Python app is usually very fast (under 500 ms).
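As a rough illustration of what the read side might look like (again, the names are placeholders and the official guide is the authority), you could pull the .csv straight into pandas:

```python
import io
import os

import pandas as pd
from azure.storage.blob import BlobServiceClient

# Connect using the credentials stored in your environment variable
service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
blob = service.get_blob_client(container="my-container", blob="data.csv")

# Download the blob contents as bytes and load them into a DataFrame
csv_bytes = blob.download_blob().readall()
df = pd.read_csv(io.BytesIO(csv_bytes))
```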
Step 3: Get your worker Python app to WRITE to the .csv file on Azure Blob
The final step is to overwrite the existing .csv, or write a new one, in your Azure storage container. You can reuse the connection code from the step above in your worker Python app, noting that you now need to upload/overwrite the file rather than read it. You’d also need to set up the environment variable in your worker Python app.
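A sketch of the write side, with the same caveats about placeholder names (and placeholder data standing in for whatever your worker actually produces):

```python
import os

import pandas as pd
from azure.storage.blob import BlobServiceClient

# Same connection pattern as the read example
service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
blob = service.get_blob_client(container="my-container", blob="data.csv")

# Placeholder DataFrame standing in for your worker's real output
df = pd.DataFrame({"example": [1, 2, 3]})

# Serialise to CSV in memory, then overwrite the existing blob
csv_bytes = df.to_csv(index=False).encode("utf-8")
blob.upload_blob(csv_bytes, overwrite=True)
```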
Setting environment variables in Heroku
The one thing that might stump you is setting environment variables (containing your Azure credentials) so your Python code can connect to Azure. In Heroku you can do this through the web portal or via the Heroku CLI. Guide here. You might need to do some testing to ensure you can successfully access the environment variable in your Python app (e.g. printing it to the console). Once you’re sure it’s working, you can use it in the code snippets that connect to Azure that you will see in the tutorials.
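For example, assuming your variable is called AZURE_STORAGE_CONNECTION_STRING (pick whatever name you like), a quick sanity check might look like:

```python
# Set the variable once from your terminal with the Heroku CLI, e.g.:
#   heroku config:set AZURE_STORAGE_CONNECTION_STRING="<paste from the Azure portal>"
import os

conn_str = os.environ.get("AZURE_STORAGE_CONNECTION_STRING")
if conn_str:
    # Print only a prefix so the full secret never ends up in your logs
    print("Found Azure connection string:", conn_str[:12] + "...")
else:
    print("AZURE_STORAGE_CONNECTION_STRING is not set!")
```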
That’s it! In theory this should allow you to read/write any file to a remote cloud store from any of your running Python apps!