Replies: 2 comments
-
Thanks. We actually have plans to do this, but we don't know how many people need it. Could you please tell us why you have these requirements and where your remote host is located?
-
I want to be able to run inference with my own custom-trained LoRAs/LyCORIS/embeddings, and also use some very specific models from Civitai such as "Objective Reality". Google Colab and Kaggle aren't very reliable because of their limitations and bans. There are other free GPU options out there like Paperspace, but they are also plagued by lack of availability. Then we have spot paid services like RunPod and vast.ai, but they don't always have guaranteed availability at the price we want. So we might be jumping around from platform to platform looking for the best deal, and they all differ in how easy their storage is to use (Google Colab is the best, since it syncs to Google Drive; Kaggle doesn't sync with Google Drive well; etc.). Which leads to having instances of automatic1111 spread around.

We mitigate that by at least having the images synced to our local automatic1111. That way we can use local plugins on CPU to visualize images and do other tasks locally, keeping our UI preference settings, etc., without having to boot up a remote just to see the UI and our files. This matters especially for image viewers that let us see the prompts, such as https://github.com/AlUlkesh/stable-diffusion-webui-images-browser/, and for managing files for other extensions.

There is a free distributed-computing project for inference called "Stable Horde", but the queues are 4 minutes long, the list of models is limited, and there is no support for embeddings, let alone LoRAs or checkpoints.

In short: improve the user experience with unreliable or expensive remote computing by at least bringing the images back right away to a safe local copy. And, if the budget allows, enable other kinds of syncing, so we keep state across many providers by choosing one (in my case the local one) as the client/main.
So here are some reasonable priorities:

**Exposing the available resources & retrieving generated results from a remote**

Display what resources the remote provides (models, LoRAs, embeddings, etc.), use them to fulfill the request, and deliver the images to the user as they are generated in a batch, with live preview if possible. To keep it streamlined, it would be better to detect the resources the user is requesting in their prompt rather than selecting from a list, if possible. Automatic1111 already does this when a resource is available to it; the same could apply to a remote/host instance I manage. For example, in a workflow where someone trains on Google Colab or Kaggle and their time is up, they can resume training somewhere else, or keep doing inference with those checkpoints as they train them, because the checkpoints were being synced somewhere else.

**Uploading/downloading resources**

This extension already does part of the job: https://github.com/etherealxx/batchlinks-webui. Of course I could use a script to do that, but an extension lets me avoid closing and reopening automatic1111 on the remote while it is running. It makes the experience more user-friendly, like comfortable usage at a desktop computer, as opposed to a limited Python kernel with complex setups for even the most basic things. A script also doesn't let me easily sync anything I trained elsewhere, the source images for the training, and so on. Keeping a local automatic1111 lets me keep everything centralized, make sure I always own it even if the remote goes down, do some basic local prototyping, manage the files locally (deleting and moving files around), and sync with a remote one-way or two-way.
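As a rough illustration of the "expose the available resources" idea: when a remote automatic1111 instance is started with `--api`, it serves JSON endpoints listing its checkpoints, LoRAs, and embeddings. Below is a minimal sketch of a client that aggregates those lists; the endpoint paths are the ones the web UI's API exposes, but the base URL and the exact response fields should be treated as assumptions to verify against your instance's `/docs` page.

```python
import json
from urllib.request import urlopen

# Endpoints the automatic1111 --api server exposes for listing resources
# (verify against your instance's /docs page; shapes may differ by version).
RESOURCE_ENDPOINTS = {
    "checkpoints": "/sdapi/v1/sd-models",
    "loras": "/sdapi/v1/loras",
    "embeddings": "/sdapi/v1/embeddings",
}

def extract_names(kind: str, payload) -> list:
    """Pull human-readable names out of each endpoint's JSON response."""
    if kind == "embeddings":
        # /sdapi/v1/embeddings returns {"loaded": {name: {...}}, "skipped": {...}}
        return sorted(payload.get("loaded", {}))
    # sd-models items carry "model_name"; lora items carry "name"
    return [item.get("model_name") or item.get("name") for item in payload]

def list_remote_resources(base_url: str) -> dict:
    """Query every resource endpoint on the remote and collect the names."""
    resources = {}
    for kind, path in RESOURCE_ENDPOINTS.items():
        with urlopen(base_url.rstrip("/") + path) as resp:
            resources[kind] = extract_names(kind, json.load(resp))
    return resources

# Usage against a live remote started with --api (placeholder URL):
#   list_remote_resources("https://my-tunnel.example.com")
```

With a listing like this, an extension could match the LoRA/embedding names mentioned in the prompt against what the remote actually has, instead of making the user pick from a dropdown.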
**Using embeddings**

I'll leave this part as a suggestion for omniinfer. Because embeddings are really small, they can be uploaded easily to the remote to fulfill a request/batch. They can be a good cheap/fast alternative to LoRAs on a remote inference provider. The creator behind Stable Horde mentioned that in a Reddit post.
-
I'd like to have a host running remotely and be able to connect to it through a tunneling service.
The "--api" flag already seems to expose this, but I'm not aware of any extension that lets me connect to another instance of automatic1111. The main purpose is to have the images sent directly to my local automatic1111 instance, as opposed to having to download them manually from the host, etc.
Basically, the current extension already does exactly that, so an additional provider of that kind would be neat. :-)
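For what it's worth, the "images sent directly back to local" part can already be sketched against the `--api` endpoints: `POST /sdapi/v1/txt2img` returns the generated images as base64 strings, which a local client can decode straight into its own output folder. This is a minimal sketch, not the extension's actual implementation; the tunnel URL, payload fields, and output directory below are placeholders.

```python
import base64
import json
from pathlib import Path
from urllib.request import Request, urlopen

def txt2img_remote(base_url: str, prompt: str, steps: int = 20) -> list:
    """Ask a remote automatic1111 (--api) to generate; returns base64 images."""
    payload = json.dumps({"prompt": prompt, "steps": steps}).encode()
    req = Request(
        base_url.rstrip("/") + "/sdapi/v1/txt2img",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)["images"]

def save_images(b64_images: list, out_dir: str) -> list:
    """Decode each base64 image and write it locally, returning the paths."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for i, b64 in enumerate(b64_images):
        p = out / f"remote_{i:04d}.png"
        p.write_bytes(base64.b64decode(b64))
        paths.append(p)
    return paths

# Usage against a live remote reachable through a tunnel (placeholder URL):
#   images = txt2img_remote("https://my-tunnel.example.com", "a red apple")
#   save_images(images, "outputs/remote-sync")
```

Pointing `out_dir` at the local webui's output folder would make the remote results show up in local image-browser extensions as if they had been generated locally.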