Orca Server Not Parallelizing Effectively

Hi,

I’m running a local Plotly Orca server (e.g., localhost on port 7768).

I wanted to test Orca’s ability to use multiple CPU cores when a surge of requests arrives, using the Python test script below.

The file I’m testing takes about 1s to process (“processingTime”).

When every request goes to the same server, each one takes roughly 4 s instead.
Output (time for each request, in ms):

[3976, 4051, 4071, 4008, 4089, 4109, 4130]

But if I run one server per port across [7768-7773] (setting ap = bp + a in the code below), I see all cores under load and a notable time reduction for each request:

[1262, 1115, 1073, 1167, 1214, 1199, 1102]

Is there a reason for this? And is there any way I can get faster performance?

import json
import os
import time
import urllib.request

toRead = '9514_GPU0_PLOT1.json'
with open(toRead, 'r') as f:
    toSend = json.load(f)
jsondatabytes = json.dumps(toSend).encode('utf-8')

# Wall-clock time in milliseconds.
now_time = lambda: int(round(time.time() * 1000))


def sendRequest(a):
    # POST the figure JSON to the Orca server and time the round trip.
    start = now_time()
    bp = 7768
    ap = bp  # single server; for the multi-server test, set ap = bp + a
    newLoc = 'http://localhost:' + str(ap)
    req = urllib.request.Request(newLoc)
    req.add_header('Content-Type', 'application/json; charset=utf-8')
    req.add_header('Content-Length', len(jsondatabytes))
    urllib.request.urlopen(req, jsondatabytes)
    return now_time() - start  # elapsed time in ms
    
def useProcess(a):
    # Render the same file by spawning a separate orca CLI process instead.
    start = now_time()
    os.system('orca graph 9514_GPU0_PLOT1.json -o ' + str(a) + '.png')
    return now_time() - start  # elapsed time in ms
    
from multiprocessing.dummy import Pool as ThreadPool

n = 7
pool = ThreadPool(n)
print(pool.map(sendRequest, range(n)))
# print(pool.map(useProcess, range(n)))
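For completeness, the multi-server variant looks roughly like the sketch below. It assumes one Orca server is already listening on each of ports 7768-7773; the modulo wrap in port_for is my addition so that any number of request indices maps onto the six ports (the run above simply used ap = bp + a).

```python
import urllib.request

BASE_PORT = 7768   # first server's port
NUM_PORTS = 6      # servers assumed on ports 7768-7773

def port_for(a):
    # Map request index a onto the available ports round-robin.
    # (Modulo wrap is an assumption; the original test used ap = bp + a.)
    return BASE_PORT + (a % NUM_PORTS)

def send_to_pool(a, payload):
    # Send the JSON payload to the server chosen for this request index.
    url = 'http://localhost:' + str(port_for(a))
    req = urllib.request.Request(url)
    req.add_header('Content-Type', 'application/json; charset=utf-8')
    req.add_header('Content-Length', len(payload))
    return urllib.request.urlopen(req, payload)
```

With this, pool.map(lambda a: send_to_pool(a, jsondatabytes), range(n)) spreads the surge across all six servers.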