This repository has been archived by the owner on Aug 30, 2022. It is now read-only.

Is it possible to have Federated Learning on Cloud-Edge? #563

Open
Martiniann opened this issue Oct 14, 2020 · 2 comments

Comments

@Martiniann

Hi everyone,

I am currently working on a school project about federated learning and came across your framework during my exploratory analysis. My project should use federated learning in the following manner: I have an aggregation server (let's say in the cloud). I want this server to provide a model to my 2 Raspberry Pis. These two RPis would then train the model on local data for x epochs and send the trained models/gradients back to the global server. On this server, the results would be combined via federated averaging and the new model would be sent back to the Pis. Is such a workflow possible with your framework? If so, could you give me a hint?
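
To make it concrete, the aggregation step I have in mind is plain federated averaging of the clients' weights, roughly like the minimal, framework-agnostic sketch below (NumPy only; all names are just illustrative, nothing here is tied to your framework):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client model weights (FedAvg-style).

    client_weights: one list of layer arrays per client.
    client_sizes:   number of local training samples per client,
                    used as the aggregation weights.
    """
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    averaged = []
    for layer in range(num_layers):
        layer_avg = sum(
            weights[layer] * (size / total)
            for weights, size in zip(client_weights, client_sizes)
        )
        averaged.append(layer_avg)
    return averaged

# Example: updates from two Raspberry Pis for a tiny two-layer model,
# weighted by how many local samples each device trained on.
rpi_1 = [np.ones((2, 2)), np.zeros(2)]
rpi_2 = [np.zeros((2, 2)), np.ones(2)]
new_global = federated_average([rpi_1, rpi_2], client_sizes=[100, 300])
```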

Thank you,
Best regards

@Robert-Steiner
Contributor

Hi @Martiniann,
In theory it is possible, but unfortunately we don't yet provide the tools to easily support your setup.

We are currently working on building a client that runs on mobile phones. The client itself does not perform any machine learning; it only takes care of receiving/sending the global/local model and handling the PET protocol. To build your setup, you would first need to implement the machine learning part and then integrate the client via FFI. However, this will require a lot of work. If you want to give it a try, let us know; we're happy to help.
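
To make the split concrete: the client would own transport and the PET protocol and call into training code that you provide. Purely as an illustration (plain NumPy, a toy linear model; none of these names are our actual API), that training hook could look roughly like this:

```python
import numpy as np

def train_locally(global_weights, features, targets, epochs=5, lr=0.01):
    """Start from the global model and run `epochs` of local SGD
    on a toy linear model with squared-error loss."""
    w = np.array(global_weights, dtype=np.float64)  # local working copy
    for _ in range(epochs):
        for x, y in zip(features, targets):
            prediction = x @ w
            gradient = (prediction - y) * x  # squared-error gradient
            w -= lr * gradient
    return w

# The transport/PET client would roughly drive it like this:
#   local_weights = train_locally(global_weights, local_x, local_y, epochs=5)
#   ...and then hand local_weights back to be sent to the aggregation server.
```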

In the near future we will provide a desktop client that will also support your use case.

@prathapkumarbaratam

Hi @Robert-Steiner, I am currently working on a project about federated learning and came across your framework during my exploratory analysis. My project should use federated learning in the following manner: I have an aggregation server (let's say in the cloud). I want this server to provide a model to my 2 Raspberry Pis. These two RPis would then train the model on local data for x epochs and send the trained models/gradients back to the global server. On this server, the results would be combined via federated averaging and a new model would be sent to the Pis. Is such a workflow possible with your framework? If so, could you provide me with a hint? Are there any other examples using xaynet, if possible?
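
For context, the per-round loop I picture on each Raspberry Pi looks roughly like the sketch below. The HTTP endpoints, server address, and payload format are invented for illustration and would depend on whatever server you run; this is not a xaynet example:

```python
import numpy as np
import requests  # any HTTP client would do; the endpoints below are invented

SERVER = "http://my-aggregation-server:8000"  # hypothetical address

def run_round(train_locally, local_x, local_y, epochs=5):
    # 1. Fetch the current global model from the cloud server.
    global_weights = np.array(requests.get(f"{SERVER}/model").json())

    # 2. Train on this device's local data for a fixed number of epochs.
    local_weights = train_locally(global_weights, local_x, local_y, epochs=epochs)

    # 3. Send the locally trained weights back, together with the local
    #    sample count so the server can do weighted (federated) averaging.
    requests.post(
        f"{SERVER}/update",
        json={"weights": local_weights.tolist(), "num_samples": len(local_x)},
    )
```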

Thank you,
