As mentioned here, I’ve been using Plasma (the shared-memory object store from Apache Arrow) to solve what is still one of Dash’s biggest problems: sharing large data between callbacks, apps, and pages.
Now I’ve roughly formalized some of that functionality in the PyPI package brain-plasma. It’s a simple, easy-to-use way to store Python objects, even very large Pandas dataframes or dictionaries, in a shared memory space. This method offers thread safety that is imperfect but much better than before, blazing speed relative to reading from disk or Redis, and a super simple, if corny, API. Basically, it uses Plasma as the “brain” of your app or other Python project by creating an indexed object namespace on top of the Plasma store.
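To make the “indexed object namespace” idea concrete, here is a rough sketch of the pattern using Python’s standard-library shared memory in place of Plasma. The `MiniBrain` class and everything inside it are my own illustration, not brain-plasma’s actual implementation; the real package delegates storage to the Plasma object store rather than `multiprocessing.shared_memory`.

```python
# Illustration only: mimic an indexed object namespace over shared memory.
# MiniBrain is a hypothetical stand-in for brain-plasma's Brain, which
# uses Apache Arrow's Plasma store instead of the stdlib shown here.
import pickle
from multiprocessing import shared_memory

class MiniBrain:
    def __init__(self):
        # name -> (shared memory block name, payload size in bytes)
        self._index = {}

    def learn(self, thing, name):
        # serialize the object and copy it into a new shared memory block
        payload = pickle.dumps(thing)
        shm = shared_memory.SharedMemory(create=True, size=len(payload))
        shm.buf[:len(payload)] = payload
        self._index[name] = (shm.name, len(payload))
        shm.close()

    def recall(self, name):
        # reattach to the block by name and deserialize the object
        shm_name, size = self._index[name]
        shm = shared_memory.SharedMemory(name=shm_name)
        thing = pickle.loads(bytes(shm.buf[:size]))
        shm.close()
        return thing

    def forget(self, name):
        # drop the name from the index and free the shared memory block
        shm_name, _ = self._index.pop(name)
        shm = shared_memory.SharedMemory(name=shm_name)
        shm.close()
        shm.unlink()

    def names(self):
        return list(self._index)
```

The key design point is the same either way: objects live in shared memory, and the “brain” only maintains a small name-to-location index, so lookups are cheap and the data itself is never copied through the Python process doing the bookkeeping.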
brain.learn() new things,
brain.recall() old factoids, and can
brain.forget() just like I too often do; I can tell my
brain.wake_up() if it’s been
brain.sleep()ing; sadly, sometimes it’s just
brain.dead(). But it can store quite a bit of
brain.knowledge(), and it’s very good at remembering.
Full basic docs are at https://github.com/russellromney/brain-plasma
Basic usage is:
import numpy
import pandas as pd
from brain_plasma import Brain

brain = Brain()

df = pd.DataFrame(numpy.random.randint(0, 100, size=(1000000, 4)))
txt = 'my text string'

# store the data
brain.learn(df, 'df')
brain.learn(txt, 'txt')

# get the data again
txt == brain.recall('txt')
# > True

# delete a name's value
brain.forget('df')

# get all variable names currently available to brain
vars = brain.names()
This is still a work in progress in EXTREME ALPHA (i.e., I built it today), and it is only tested enough to confirm that the functionality works and is better than what I was using before. So please don’t use this in your production apps until a) the Apache Plasma API is more stable (it’s not), b) this API is more stable, and c) the functionality is hammered out a bit more (probably in
I’d love any help, requests, or critiques you have!