Multiprocessing of dash python callbacks on Cloud Run?

Hello people. I hope you have a great day.

I have a question regarding processing and parallelism:
In the beginning, my Dash app ran locally on every user's machine. Everything was okay, but we decided to move to a Cloud deployment in order to apply some data solutions and avoid making users install things or download files on their machines.
So, now the app is deployed on Cloud Run. It works fine, some issues here and there with the code, nothing big. What we noticed is that, when multiple users use the app at the same time, the callbacks take noticeably longer. More precisely, we tried launching the same process with 3 users; it should take 10 to 20 seconds for each (that was the time on the local machines). With one user, in the cloud it takes around 30 seconds, but with 3 concurrent users the time roughly quadruples.
What I have also noticed in the Cloud Run metrics is that the service never goes beyond 1 active container instance, no matter how many users are connected.
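For anyone wanting to check this on their own service: you can inspect and adjust how many requests Cloud Run packs into a single instance with `gcloud`. This is just a sketch of the commands I believe are relevant; `my-dash-app` and `us-central1` are placeholders for your actual service name and region, and the concrete values are examples, not recommendations.

```shell
# Show the current concurrency / CPU / instance settings of the service
gcloud run services describe my-dash-app --region us-central1

# Lower per-instance concurrency so Cloud Run spins up more instances
# under load, and allow it to scale out further
gcloud run services update my-dash-app \
  --region us-central1 \
  --concurrency 10 \
  --cpu 2 \
  --max-instances 5
```

With the default concurrency (80 requests per instance), a single instance will happily absorb 3 users, which would explain seeing only 1 active instance while the callbacks queue up behind each other.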
My question here is: what could be the problem?
I understand that it is probably related to how containers are deployed and how Cloud Run distributes requests between instances. The Dash documentation states that, for a multi-user app, you should scale using Gunicorn. I guess I should change the Docker configuration (the container's start command) in Cloud Run, right?
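In case it helps to make the question concrete, this is the kind of change I mean, based on my understanding of the Dash docs: instead of running the app with the built-in development server (`python app.py`), the Dockerfile would start Gunicorn with several workers. This assumes `app.py` exposes the underlying Flask server as `server = app.server`, which is the pattern the Dash docs describe; the file names and worker counts here are placeholders.

```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Cloud Run sends traffic to $PORT (8080 by default).
# "app:server" means: in app.py, use the Flask instance
# exposed as `server = app.server`.
# Multiple workers let one instance handle callbacks in parallel.
CMD exec gunicorn --bind :$PORT --workers 4 --threads 2 --timeout 120 app:server
```

My understanding is that workers should roughly match the CPUs allocated to the Cloud Run instance, otherwise the workers just compete for the same core, but I would appreciate confirmation on that.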