[D] Paper Explained - Federated Learning for Mobile Keyboard Prediction

Ever wondered how your mobile keyboard suggests the next word? How does it give personalised suggestions while at the same time ensuring the privacy of individuals?

Check out my blog post "Federated Learning for Mobile Keyboard Prediction", which explains how this is done in a privacy-preserving manner.

Blog Post - PPML Series #3 - Federated Learning for Mobile Keyboard Prediction

Annotated Paper - Annotated-ML-Papers/Federated Learning for Mobile Keyboard Prediction

πŸ‘︎ 24
πŸ’¬︎
πŸ‘€︎ u/shreyansh26
πŸ“…︎ Dec 27 2021
🚨︎ report
Google AI Introduces 'Federated Reconstruction' Framework That Enables Scalable Partially Local Federated Learning

Federated learning is a machine learning technique in which an algorithm is trained across numerous decentralized edge devices or servers, each keeping its data samples local rather than exchanging them. This prevents the collection of personally identifiable information. Federated learning is frequently done by learning a single global model for all users, even though their data distributions may differ. This variability has motivated algorithms that personalize the global model for each user.

However, privacy concerns may prevent a truly global model from being learned in some cases. Training a fully global federated model requires sending user embedding updates to a central server, which may reveal the preferences encoded in those embeddings. Even for models without user-specific embeddings, keeping some parameters local to user devices reduces server-client communication and allows those parameters to be personalized responsibly for each user.

In their paper "Federated Reconstruction: Partially Local Federated Learning", Google AI introduces an approach that enables scalable partially local federated learning, in which some model parameters are never aggregated on the server. For matrix factorization, this means a recommender model can be trained while user embeddings stay local to each user's device; for other models, part of the model is trained to be fully personal to each user while transmission of those parameters is avoided.
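
To make the idea concrete, here is a minimal NumPy sketch of one Federated Reconstruction round for matrix factorization; the function names, plain SGD updates, and hyperparameters are illustrative assumptions rather than the paper's reference implementation.

```python
# One client's round of Federated Reconstruction for matrix factorization (sketch).
# `item_emb` (the item-embedding matrix) is the globally shared parameter;
# the user embedding is reconstructed on-device every round and never uploaded.
import numpy as np

def client_round(item_emb, ratings, lr=0.05, recon_steps=20, update_steps=20, seed=0):
    """ratings: list of (item_id, rating) pairs that exist only on this device."""
    rng = np.random.default_rng(seed)
    user_emb = rng.normal(scale=0.1, size=item_emb.shape[1])

    # 1) Reconstruction: fit the local user embedding with the global item embeddings frozen.
    for _ in range(recon_steps):
        for item, r in ratings:
            err = user_emb @ item_emb[item] - r
            user_emb -= lr * err * item_emb[item]

    # 2) Update: with the user embedding frozen, take local SGD steps on a copy of the
    #    global item embeddings and return only the delta to the server.
    local_items = item_emb.copy()
    for _ in range(update_steps):
        for item, r in ratings:
            err = user_emb @ local_items[item] - r
            local_items[item] -= lr * err * user_emb
    return local_items - item_emb

def server_aggregate(item_emb, client_deltas):
    # Standard federated averaging of the global-parameter updates only.
    return item_emb + np.mean(client_deltas, axis=0)
```

The key point is that the user embedding is rebuilt on-device each round and never transmitted; only the item-embedding deltas are averaged on the server.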

Quick Read: https://www.marktechpost.com/2021/12/28/google-ai-introduces-federated-reconstruction-framework-that-enables-scalable-partially-local-federated-learning/

Paper: https://arxiv.org/pdf/2102.03448.pdf

Github: https://github.com/google-research/federated/tree/master/reconstruction

πŸ‘︎ 13
πŸ’¬︎
πŸ‘€︎ u/ai-lover
πŸ“…︎ Dec 28 2021
🚨︎ report
AI Researchers Propose An Easy-To-Use Federated Learning Framework Called 'FedCV' For Diverse Computer Vision Tasks

Federated Learning (FL) is a distributed learning paradigm that can learn a global or a personalized model for each user from decentralized data held on edge devices. Since these edge devices never need to share their data, FL can address privacy issues that make centralized solutions unusable in certain domains (e.g., medical). Consider a machine learning model for facial recognition: a centralized approach requires uploading each user's local data externally (e.g., to a server), a solution that cannot ensure data privacy.

In the Computer Vision (CV) domain, FL has so far been evaluated mainly on image classification with small-scale datasets and models, while most recent CV work focuses on large-scale supervised/self-supervised pre-training based on CNNs or Transformers. The research community currently lacks a library that connects different CV tasks with FL algorithms. For this reason, the authors of this paper designed FedCV, a unified federated learning library that connects various FL algorithms with several important CV tasks, including image segmentation and object detection. To reduce the effort required of CV researchers, FedCV provides representative FL algorithms through easy-to-use APIs. Moreover, the framework is flexible enough to explore new distributed-computing protocols (e.g., customizing the information exchanged among clients) and to define specialized training procedures.
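
As a rough illustration of the "one federated loop, many CV tasks" idea, here is a hypothetical sketch; every class and function name below is an illustrative assumption, not FedCV's actual API.

```python
# Hypothetical sketch of a task-agnostic federated loop; none of these names
# come from FedCV itself, they only illustrate keeping the FL round independent
# of whether the task is classification, segmentation, or detection.
from dataclasses import dataclass
from typing import Callable, Dict, List

import numpy as np

Weights = Dict[str, np.ndarray]

@dataclass
class TaskSpec:
    """Bundles whatever is task-specific: model init and the local training routine."""
    init_weights: Callable[[], Weights]
    local_train: Callable[[Weights, object], Weights]  # (global weights, local data) -> new weights

def fedavg_round(global_weights: Weights, client_data: List[object], task: TaskSpec) -> Weights:
    """One synchronous round: clients train locally, the server averages parameters."""
    local_results = [task.local_train({k: v.copy() for k, v in global_weights.items()}, d)
                     for d in client_data]
    return {k: np.mean([w[k] for w in local_results], axis=0) for k in global_weights}

# Switching from classification to segmentation or detection only changes the TaskSpec;
# the federated round itself stays the same.
```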

Paper Summary: https://www.marktechpost.com/2021/12/24/ai-researchers-propose-an-easy-to-use-federated-learning-framework-called-fedcv-for-diverse-computer-vision-tasks/

Paper: https://arxiv.org/pdf/2111.11066.pdf

GitHub: https://github.com/FedML-AI/FedCV

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/ai-lover
πŸ“…︎ Dec 25 2021
🚨︎ report
[Discussion] Federated Learning in practice

Hi!

Does anyone know of any in-detail descriptions/surveys of FL deployments in practice? What types of aggregation do people use, and how do they ensure privacy? Do most deployments rely on tf-federated?

I tried googling around, but am struggling to find much information.

Thanks a lot!

πŸ‘︎ 15
πŸ’¬︎
πŸ‘€︎ u/SuchOccasion457
πŸ“…︎ Nov 26 2021
🚨︎ report
Hierarchical Federated Learning-Based Anomaly Detection Using Digital Twins For Internet of Medical Things (IoMT)

Smart healthcare services can be provided by using Internet of Things (IoT) technologies that monitor patients' health conditions and vital body parameters. Most IoT solutions used to enable such services are wearable devices, such as smartwatches, ECG monitors, and blood pressure monitors. The huge amount of data collected from smart medical devices leads to major security and privacy issues in the IoT domain. In the context of Remote Patient Monitoring (RPM) applications, the focus here is on Anomaly Detection (AD) models, whose purpose is to identify events that deviate from typical user behavior patterns. Designing centralized AD models generally raises security and privacy challenges (e.g., patient data privacy, training-data poisoning).

To overcome these issues, the researchers of this paper propose an Anomaly Detection model based on Federated Learning. FL allows different devices to collaborate and train locally in order to build AD models without sharing patients' data. Specifically, the researchers propose a hierarchical FL scheme that enables collaboration among different organizations by building separate AD models for groups of patients with similar health conditions.
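
To make the hierarchy concrete, here is a minimal sketch of two-level federated averaging, assuming model weights are flat NumPy vectors; the grouping of devices by organization and the helper names are illustrative assumptions, not the paper's implementation.

```python
# Minimal two-level (hierarchical) federated averaging sketch.
# Devices are averaged inside each organization first; organization models are
# then averaged into the shared model, so raw patient data never leaves the devices.
import numpy as np
from typing import Dict, List

def fed_avg(weights: List[np.ndarray]) -> np.ndarray:
    """Unweighted federated averaging of locally trained weight vectors."""
    return np.mean(weights, axis=0)

def hierarchical_round(orgs: Dict[str, List[np.ndarray]]) -> np.ndarray:
    """orgs maps an organization (e.g., a hospital) to its devices' locally trained weights."""
    org_models = [fed_avg(device_weights) for device_weights in orgs.values()]
    return fed_avg(org_models)

# In the paper's setting, a separate model would be maintained per cohort of patients
# with similar health conditions, each updated with rounds like this one.
```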

Continue Reading the Paper Summary: https://www.marktechpost.com/2022/01/01/hierarchical-federated-learning-based-anomaly-detection-using-digital-twins-for-internet-of-medical-things-iomt/

Full Paper: https://arxiv.org/pdf/2111.12241.pdf

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/ai-lover
πŸ“…︎ Jan 01 2022
🚨︎ report
Researchers Propose 'ProxyFL': A Novel Decentralized Federated Learning Scheme For Multi-Institutional Collaborations Without Sacrificing Data Privacy

Tight rules generally govern data sharing in highly regulated industries like finance and healthcare. Institutions in these fields cannot aggregate and exchange their data, which limits progress in research and model development; sharing information between institutions while maintaining individual data privacy would yield more robust and accurate models. Federated learning is a distributed learning approach that enables such multi-institutional collaboration on decentralized data while protecting each collaborator's data privacy.

For example, in the healthcare industry, histopathology has undergone increasing digitization, providing a unique opportunity to improve the objectivity and accuracy of diagnostic interpretations through machine learning. The preparation, fixation, and staining techniques utilized at the preparation site, among other things, cause significant variation in digital photographs of tissue specimens.

Because of this diversity, medical data must be integrated across numerous organizations. At the same time, centralizing medical data involves regulatory constraints as well as workflow and technical challenges, such as managing and distributing the data. The latter is especially demanding in digital pathology, since each histopathology image is typically a gigapixel file, often one or more gigabytes in size.

Quick Read: https://www.marktechpost.com/2021/12/12/researchers-propose-proxyfl-a-novel-decentralized-federated-learning-scheme-for-multi-institutional-collaborations-without-sacrificing-data-privacy/

Paper: https://arxiv.org/pdf/2111.11343v1.pdf

Github: https://github.com/layer6ai-labs/ProxyFL


πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/ai-lover
πŸ“…︎ Dec 12 2021
🚨︎ report
[R] Federated Learning - A decentralised form of Machine Learning

Introduced a few years ago by Google, federated learning is an approach in which each device downloads the current model and computes an updated model on the device itself (a little like edge computing) using its local data. Updates from these locally trained models are then sent from the devices back to a central server, where they are aggregated. Essentially, the weights are averaged, and a single consolidated, improved global model is sent back to the devices.

This allows multiple organizations to collaborate on the development of models, exposing the model to a significantly wider range of data than any single organization possesses in-house, while preserving data security: only model updates, not the actual data, ever leave the devices.
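
For concreteness, here is a minimal sketch of one such round, assuming the model is a flat NumPy weight vector; the `local_sgd` helper stands in for whatever on-device training a real deployment runs.

```python
# Minimal sketch of one federated averaging round as described above.
# Assumes model weights are a flat NumPy vector; `local_sgd` is a caller-supplied
# stand-in for the on-device training step.
import numpy as np
from typing import Callable, List

def federated_round(global_weights: np.ndarray,
                    client_datasets: List[np.ndarray],
                    local_sgd: Callable[[np.ndarray, np.ndarray], np.ndarray]) -> np.ndarray:
    local_weights = []
    for data in client_datasets:
        w = global_weights.copy()      # each device downloads the current model
        w = local_sgd(w, data)         # trains locally; raw data never leaves the device
        local_weights.append(w)        # only the updated weights are sent back

    # The server averages the updates, weighted by each device's data size (FedAvg).
    sizes = [len(d) for d in client_datasets]
    return np.average(local_weights, axis=0, weights=sizes)
```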

Original Article - https://blog.mindkosh.com/what-is-federated-learning/

πŸ‘︎ 20
πŸ’¬︎
πŸ‘€︎ u/ifcarscouldspeak
πŸ“…︎ Oct 14 2021
🚨︎ report
Google AI Improves The Performance Of Smart Text Selection Models By Using Federated Learning

Smart Text Selection is one of Android's most popular features, assisting users in selecting, copying, and using text by anticipating the desired word or combination of words around a user's tap and expanding the selection appropriately. With this feature, selections are automatically expanded, and for selections in defined classification categories, such as addresses and phone numbers, users are offered an app with which to open the selection, saving them even more time.

The Google team worked to improve the performance of Smart Text Selection by using federated learning to train the neural network model on user interactions while preserving personal privacy. Thanks to this effort, which is part of Android's new Private Compute Core secure environment, the team was able to improve the model's selection accuracy by up to 20% on some types of entities.

The model is trained to select only a single word, to reduce the incidence of erroneous multi-word selections. The Smart Text Selection feature was originally trained on proxy data derived from web pages with schema.org annotations attached. While training on schema.org annotations was effective, it had a number of drawbacks: the data was not at all like the text users actually see on their devices.

Quick Read: https://www.marktechpost.com/2021/11/29/google-ai-improves-the-performance-of-smart-text-selection-models-by-using-federated-learning/

Google Blog: https://ai.googleblog.com/2021/11/predicting-text-selections-with.html

πŸ‘︎ 13
πŸ’¬︎
πŸ‘€︎ u/techsucker
πŸ“…︎ Nov 30 2021
🚨︎ report
PPML Series #1 - An introduction to Federated Learning

I started a series on privacy-preserving Machine Learning. I had wanted to do this for quite a long time and finally decided to start. The first post is a short introduction to Federated Learning; in this blog post, I have written a more detailed version of my Twitter thread.

Check it out - PPML Series #1 - An introduction to Federated Learning

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/shreyansh26
πŸ“…︎ Dec 11 2021
🚨︎ report
[D] A high-level microblog on Federated Learning

I wrote a high-level (not too technical) thread on Federated Learning.

Twitter Thread

If you found it informative, do let me know!

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/shreyansh26
πŸ“…︎ Nov 21 2021
🚨︎ report
Federated Learning microblog (Part 2) + Annotated Paper

I wrote another Twitter thread that goes deep into the math behind Federated Learning, how the model is trained, and how well it performs.

Twitter Thread

Annotated Paper - Communication-Efficient Learning of Deep Networks from Decentralized Data
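
For reference, the core update from that paper (FedAvg) is a data-size-weighted average of the clients' locally trained weights; in standard notation:

```latex
% Round t of FedAvg: each selected client k starts from the global weights w_t,
% runs local SGD on its n_k examples to obtain w_{t+1}^k, and the server averages:
w_{t+1} \;=\; \sum_{k \in S_t} \frac{n_k}{n}\, w_{t+1}^{k},
\qquad n = \sum_{k \in S_t} n_k .
```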

If you like it or have any feedback, do let me know!

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/shreyansh26
πŸ“…︎ Nov 24 2021
🚨︎ report
NVIDIA Open-Sources 'FLARE' (Federated Learning Application Runtime Environment), Providing A Common Computing Foundation For Federated Learning

Standard machine learning methods involve storing training data on a single machine or in a data center. Federated learning is a privacy-preserving technique that is especially useful when the training data is sparse, confidential, or lacking in diversity.

NVIDIA has open-sourced NVIDIA FLARE, which stands for Federated Learning Application Runtime Environment. It is a software development kit that enables remote parties to collaborate on developing more generalizable AI models. NVIDIA FLARE is the underlying engine in NVIDIA Clara Train's federated learning software, which has been used for diverse AI applications such as medical imaging, genetic analysis, cancer, and COVID-19 research.

Researchers can use the SDK to customize their method for domain-specific applications by choosing from a variety of federated learning architectures. NVIDIA FLARE can also be used by platform developers to give consumers the distributed infrastructure needed to create a multi-party collaborative application.

Quick Read: https://www.marktechpost.com/2021/11/29/nvidia-open-source-flare-federated-learning-application-runtime-environment-providing-a-common-computing-foundation-for-federated-learning/

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/techsucker
πŸ“…︎ Nov 29 2021
🚨︎ report
Federated Learning implementation

Hi,

I was wondering if anyone has come across an implementation of a federated machine learning system.

I want to build one for the hospital system, and hardware is not my forte.

Could I spin up VMs on the cloud systems of the respective hospitals and make sure they can communicate with each other?

Thanks!

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/fella85
πŸ“…︎ Nov 24 2021
🚨︎ report
Federated Learning

What do you think about federated learning for preserving users' privacy?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Repulsive-Ebb242
πŸ“…︎ Dec 22 2021
🚨︎ report