Implement proper logic for issuing tasks to different nodes within the network, relying on "our" network of GPUs, the Vast.ai network, and other providers. The algorithm should prioritize our network first and fall back to other resources only under excessive load (which can be measured as the expected delay for receiving an image for a submitted request, e.g. longer than 1-2 minutes).
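The dispatch policy above could be sketched roughly as follows. This is a minimal illustration, not the implementation: the `Pool` fields, the delay estimate, and the 90-second threshold (the middle of the 1-2 minute range mentioned above) are all assumptions.

```python
# Hypothetical sketch: prefer "our" network, spill over to external
# providers (Vast.ai, etc.) only when the expected delay for an image
# exceeds a threshold. All names and numbers are illustrative.
from dataclasses import dataclass

MAX_EXPECTED_DELAY_S = 90  # assumed cutoff inside the 1-2 minute range


@dataclass
class Pool:
    name: str
    priority: int          # lower = preferred; our network gets 0
    queue_len: int         # requests currently waiting
    avg_job_time_s: float  # average time to render one image
    workers: int           # concurrent GPU workers in the pool

    def expected_delay_s(self) -> float:
        # Rough estimate: queued work divided by parallel capacity.
        return (self.queue_len * self.avg_job_time_s) / max(self.workers, 1)


def pick_pool(pools: list[Pool]) -> Pool:
    # Walk pools in priority order; take the first whose expected delay
    # is acceptable, otherwise fall back to the fastest one overall.
    for pool in sorted(pools, key=lambda p: p.priority):
        if pool.expected_delay_s() <= MAX_EXPECTED_DELAY_S:
            return pool
    return min(pools, key=lambda p: p.expected_delay_s())


pools = [
    Pool("our-network", 0, queue_len=40, avg_job_time_s=20, workers=4),  # ~200 s
    Pool("vast-ai", 1, queue_len=2, avg_job_time_s=25, workers=2),       # ~25 s
]
print(pick_pool(pools).name)  # our network is overloaded, so: vast-ai
```

A real version would refresh queue lengths and job-time estimates from node telemetry rather than static fields, but the priority-then-threshold shape is the point.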
After sync on Friday:
It looks more like Uber's problem of finding available cab drivers.
The server must orchestrate the models.
Problem:
Different models are optimized for the different video cards the miners run; if we do batch queries, we need to take this into account.
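One way to respect the GPU/model mismatch above is to never mix requests for incompatible cards in one batch. A minimal sketch, where the model names, GPU classes, and the mapping between them are all made-up assumptions:

```python
# Hypothetical sketch: group pending requests by the GPU class each model
# variant is optimized for, so a batch only goes to miners whose cards
# match. The model/GPU names below are illustrative, not real builds.
from collections import defaultdict

# Assumed mapping: which model build runs best on which miner GPU class.
MODEL_GPU_CLASS = {
    "sd15-fp16": "rtx30xx",
    "sd15-fp32": "gtx10xx",
    "sdxl-fp16": "rtx40xx",
}


def batch_by_gpu_class(requests: list[dict]) -> dict[str, list[dict]]:
    # One batch per GPU class; requests for incompatible cards never mix.
    batches: dict[str, list[dict]] = defaultdict(list)
    for req in requests:
        batches[MODEL_GPU_CLASS[req["model"]]].append(req)
    return dict(batches)


reqs = [
    {"id": 1, "model": "sd15-fp16"},
    {"id": 2, "model": "sdxl-fp16"},
    {"id": 3, "model": "sd15-fp16"},
]
print(sorted(batch_by_gpu_class(reqs)))  # ['rtx30xx', 'rtx40xx']
```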
Best solution for now: create a mocked node server that emulates different pools of nodes (paid Vast.ai, Azure cloud, a custom network with mixed GPUs).
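Such a mock could be as simple as the sketch below: one process emulates several pools with configurable latency, so the scheduler can be exercised without real hardware. Pool names, GPU lists, and latencies are invented for illustration.

```python
# Hypothetical sketch of the mocked node server: each MockPool stands in
# for a provider (custom network, Vast.ai, Azure) and fakes render time
# with jitter instead of running a model. All parameters are assumptions.
import random
import time


class MockPool:
    def __init__(self, name: str, gpus: list[str], base_latency_s: float):
        self.name = name
        self.gpus = gpus
        self.base_latency_s = base_latency_s

    def submit(self, prompt: str) -> dict:
        # Simulate render time with +/-20% jitter; sleep is capped so
        # tests against the mock stay fast.
        delay = self.base_latency_s * random.uniform(0.8, 1.2)
        time.sleep(min(delay, 0.01))
        return {
            "pool": self.name,
            "gpu": random.choice(self.gpus),
            "prompt": prompt,
            "simulated_delay_s": round(delay, 2),
        }


pools = [
    MockPool("custom-network", ["gtx1080", "rtx3060", "rtx4090"], 0.5),
    MockPool("vast-ai", ["rtx3090"], 1.5),
    MockPool("azure", ["a100"], 2.0),
]
result = pools[0].submit("a cat in space")
print(result["pool"])  # custom-network
```

Wrapping these pools in a small HTTP service would let the real scheduler talk to them over the same interface it would use for live providers.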