Federated learning is a distributed learning method that trains a deep network on user devices without collecting data on a central server; it is useful when the central server cannot collect data directly. However, the absence of data on the central server means that data-driven deep network compression is not possible. Deep network compression is important because it enables inference even on devices with limited compute capacity. In this paper, we propose a new quantization method that significantly reduces the number of floating-point operations (FLOPs) in deep networks under federated learning without leaking user data. The quantization parameters are trained with the ordinary task loss and updated simultaneously with the weights. We call this method OQFL (Optimized Quantization in Federated Learning). OQFL learns the deep network and its quantization jointly while maintaining security in a distributed network environment that includes edge computing. We introduce the OQFL method and evaluate it in simulation on various convolutional deep neural networks. We show that OQFL is feasible for the most representative convolutional deep neural networks. Surprisingly, OQFL at 4 bits preserves the test accuracy of conventional 32-bit federated learning.
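The sketch below illustrates one way the idea described above can be realized: a fake-quantizer whose step size is a learnable parameter trained by the ordinary task loss alongside the weights, with both averaged across clients in a FedAvg-style round. This is a minimal PyTorch sketch under those assumptions, not the paper's OQFL implementation; names such as `LearnedStepQuantizer`, `QuantizedMLP`, `local_update`, and `fed_avg` are illustrative.

```python
# Minimal sketch (not the paper's official code): a learnable-step fake-quantizer
# trained by the task loss, combined with FedAvg-style aggregation of both the
# weights and the quantization parameters.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnedStepQuantizer(nn.Module):
    """Fake-quantize a tensor with a learnable step size (straight-through estimator)."""

    def __init__(self, num_bits=4, init_step=0.05):
        super().__init__()
        self.num_bits = num_bits
        self.step = nn.Parameter(torch.tensor(init_step))  # trained by the task loss

    def forward(self, x):
        qmax = 2 ** (self.num_bits - 1) - 1
        # Scale, clamp to the signed integer range, and round with a straight-through
        # estimator so gradients reach both the weights and the step size.
        scaled = torch.clamp(x / self.step, -qmax - 1, qmax)
        rounded = scaled + (torch.round(scaled) - scaled).detach()
        return rounded * self.step


class QuantizedMLP(nn.Module):
    """Toy model whose weights are fake-quantized during the forward pass."""

    def __init__(self, num_bits=4):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)
        self.q1 = LearnedStepQuantizer(num_bits)
        self.q2 = LearnedStepQuantizer(num_bits)

    def forward(self, x):
        x = F.relu(F.linear(x, self.q1(self.fc1.weight), self.fc1.bias))
        return F.linear(x, self.q2(self.fc2.weight), self.fc2.bias)


def local_update(model, data, target, lr=0.01, epochs=1):
    """One client's local training: weights and step sizes share the same loss."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(data), target)
        loss.backward()
        opt.step()
    return model.state_dict()


def fed_avg(global_model, client_states):
    """Average client parameters (weights *and* quantizer steps) into the global model."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].detach() for s in client_states]).mean(dim=0)
    global_model.load_state_dict(avg)
    return global_model


if __name__ == "__main__":
    torch.manual_seed(0)
    global_model = QuantizedMLP(num_bits=4)
    # Synthetic client data stands in for real, on-device user data.
    clients = [(torch.randn(32, 784), torch.randint(0, 10, (32,))) for _ in range(3)]
    for round_ in range(5):
        states = [local_update(global_model, x, y) for x, y in clients]
        global_model = fed_avg(global_model, states)
        print(f"round {round_}: step sizes",
              global_model.q1.step.item(), global_model.q2.step.item())
```

In this sketch each client updates its local copy of both the weights and the quantizer step sizes with plain SGD, and the server simply averages everything it receives; only model parameters travel over the network, so no raw user data leaves the devices.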