Recommended way to handle large dataframes? #1342
I am reading a large dataframe into my Wave app and want to load it only once, when the client opens the app. I want to keep access to it throughout the app, since I will need it to create graphs. Is this the recommended way to do it?

```python
@app('/demo')
async def serve(q: Q):
    if not q.client.initialized:
        q.client.df = pd.read_csv("my_data.csv")
        q.client.initialized = True
    # Other code here
    await q.page.save()
```

Then in the rest of the app, I can use `q.client.df`. Are there any instances when this will be reloaded? For example, what happens if I call something with `trigger=True`?
Replies: 1 comment 1 reply
Kind of. If you want to keep a separate df for each browser (client), then yes. But if the dataset is large, you can easily end up using too much memory and eventually crashing the app (in case of insufficient RAM). If you need a single df shared by all users, you can use `q.app`, which is initialized once per app.

Yes, `q.client` will always be in your control and available. It will be reloaded only if you specify it within your code.
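The difference in lifetime between app-level and client-level state can be sketched without Wave itself. This is a minimal illustration of the pattern, not Wave's implementation: the `Expando` class, `serve` signature, and `load_dataset` helper are stand-ins invented for the example (in a real app you would use Wave's `q.app` / `q.client` and `pd.read_csv`).

```python
class Expando:
    """Minimal stand-in for Wave's q.app / q.client attribute bags:
    missing attributes read as None instead of raising."""
    def __init__(self):
        self.__dict__["_d"] = {}
    def __getattr__(self, name):
        return self._d.get(name)
    def __setattr__(self, name, value):
        self._d[name] = value

app_state = Expando()   # one per app process, shared by all clients
client_states = {}      # one Expando per connected browser

LOAD_COUNT = 0

def load_dataset():
    # Stand-in for pd.read_csv("my_data.csv"); counts how often it runs.
    global LOAD_COUNT
    LOAD_COUNT += 1
    return [1, 2, 3]

def serve(client_id):
    # App-scoped: the expensive dataset is loaded exactly once.
    if not app_state.initialized:
        app_state.df = load_dataset()
        app_state.initialized = True
    # Client-scoped: cheap per-browser state (e.g. the current filter).
    client = client_states.setdefault(client_id, Expando())
    if not client.initialized:
        client.row_filter = None
        client.initialized = True
    return app_state.df

serve("browser-a")
serve("browser-b")
serve("browser-a")
print(LOAD_COUNT)  # dataset loaded once despite three requests
```

Storing the df on the app scope keeps memory usage flat no matter how many browsers connect; storing it per client multiplies the footprint by the number of connections.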
`trigger=True` simply submits data to the server…