Federated Machine Learning: Concept and Applications.

ACM Transactions on Intelligent Systems and Technology (TIST) 10, no. 2 (2019): Article 12

Citations: 2098 | Views: 698 | EI

Abstract

Today’s artificial intelligence still faces two major challenges. One is that, in most industries, data exists in the form of isolated islands. The other is the strengthening of data privacy and security. We propose a possible solution to these challenges: secure federated learning. Beyond the federated-learning framework first proposed b...

Introduction
  • 2016 is the year when artificial intelligence (AI) came of age. With AlphaGo [59] defeating the top human Go players, the authors have truly witnessed the huge potential of AI and have begun to expect more complex, cutting-edge AI technology in many applications, including driverless cars, medical care, and finance.
  • Homomorphic encryption [53] is adopted to protect user data privacy: model parameters are exchanged under the encryption mechanism during machine learning [24, 26, 48] (a minimal sketch of such encrypted aggregation follows this list).
  • A secure aggregation scheme is introduced in [9] to protect the privacy of aggregated user updates under a federated learning framework.
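To make the idea of encrypted parameter exchange concrete, here is a minimal sketch of averaging two parties' gradients under additively homomorphic (Paillier) encryption. It assumes the third-party python-paillier package (`phe`); the gradient values, the single shared keypair, and the two-party setup are illustrative simplifications, not the protocols of [9, 24, 26, 48].

```python
# Minimal sketch (not the paper's full protocol): parties encrypt their local
# gradients with an additively homomorphic scheme (Paillier, via the third-party
# `phe` library), a coordinator aggregates ciphertexts without seeing plaintexts,
# and only key holders can decrypt the averaged update.
from phe import paillier

# In a real deployment the private key would be held by the data owners (or
# split among them), never by the coordinator; this toy keeps both for brevity.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical local gradients computed independently by two parties.
grad_party_a = [0.12, -0.40, 0.05]
grad_party_b = [0.08, -0.35, 0.01]

# Each party encrypts its gradient before sending it to the coordinator.
enc_a = [public_key.encrypt(g) for g in grad_party_a]
enc_b = [public_key.encrypt(g) for g in grad_party_b]

# The coordinator adds ciphertexts componentwise and scales by 1/num_parties;
# additive homomorphism means it never observes an individual gradient.
enc_avg = [(ca + cb) * 0.5 for ca, cb in zip(enc_a, enc_b)]

# Decryption (by the key holder) recovers only the aggregated update.
avg_gradient = [private_key.decrypt(c) for c in enc_avg]
print(avg_gradient)  # ≈ [0.10, -0.375, 0.03]
```

Because Paillier ciphertexts support addition and multiplication by plaintext scalars, the coordinator can form the averaged update without ever decrypting any single party's contribution.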
Highlights
  • 2016 is the year when artificial intelligence (AI) came of age
  • Privacy-preserving machine learning algorithms have been proposed for vertically partitioned data, including Cooperative Statistical Analysis [15], association rule mining [65], secure linear regression [22, 32, 55], classification [16] and gradient descent [68]
  • Federated learning is the process of aggregating the different features held by the parties and computing the training loss and gradients in a privacy-preserving manner to build a model with data from both parties collaboratively
  • Federated Transfer Learning applies to scenarios in which the two data sets differ both in samples and in feature space
  • As an innovative modeling mechanism that could train a united model on data from multiple parties without compromising the privacy and security of those data, federated learning has promising applications in sales, finance, and many other industries, in which data cannot be directly aggregated for training machine learning models due to factors such as intellectual property rights, privacy protection, and data security
  • It is expected that in the near future, federated learning would break the barriers between industries and establish a community where data and knowledge could be shared safely and the benefits would be fairly distributed according to the contribution of each participant
Results
  • Privacy-preserving machine learning algorithms have been proposed for vertically partitioned data, including Cooperative Statistical Analysis [15], association rule mining [65], secure linear regression [22, 32, 55], classification [16] and gradient descent [68].
  • Refs. [27, 49] proposed a vertical federated learning scheme to train a privacy-preserving logistic regression model.
  • Federated learning is the process of aggregating the different features held by the parties and computing the training loss and gradients in a privacy-preserving manner to build a model with data from both parties collaboratively (a plaintext simulation of this two-party gradient exchange is sketched after this list).
  • Federated Transfer Learning applies to scenarios in which the two data sets differ both in samples and in feature space.
  • Collaborative learning may nevertheless be subject to attack under another security model, in which a malicious participant trains a Generative Adversarial Network (GAN) during the collaborative learning process [29].
  • Federated learning enables multiple parties to collaboratively construct a machine learning model while keeping their private training data private.
  • Distributed machine learning covers many aspects, including distributed storage of training data, distributed execution of computing tasks, distributed delivery of model results, etc.
  • Federated learning emphasizes the data privacy protection of the data owner during the model training process.
  • As an innovative modeling mechanism that could train a united model on data from multiple parties without compromising the privacy and security of those data, federated learning has promising applications in sales, finance, and many other industries, in which data cannot be directly aggregated for training machine learning models due to factors such as intellectual property rights, privacy protection, and data security.
  • By exploiting the characteristics of federated learning, the authors can build a machine learning model for the three parties without exporting the enterprise data, which fully protects data privacy and data security, provides customers with personalized and targeted services, and thereby achieves mutual benefit.
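The sketch below is a plaintext simulation of the two-party gradient exchange referenced above for vertical federated linear regression: party A holds one block of features, party B holds the other block together with the labels, and only partial predictions and residuals cross the boundary. The data, learning rate, and regularization constant are made up, and the homomorphic encryption and third-party coordinator used in the actual protocol are omitted for clarity.

```python
# Plaintext simulation of vertical federated linear regression: party A holds
# features X_A, party B holds features X_B and the labels y, and they exchange
# only intermediate predictions and residuals (in the real protocol these
# messages would be additively encrypted and routed via a coordinator).
import numpy as np

rng = np.random.default_rng(0)
n = 100                                # samples aligned via a shared ID space
X_A = rng.normal(size=(n, 3))          # party A's private feature columns
X_B = rng.normal(size=(n, 2))          # party B's private feature columns
y = rng.normal(size=n)                 # labels, held only by party B

theta_A = np.zeros(3)
theta_B = np.zeros(2)
lam, lr = 0.1, 0.05                    # regularization strength, learning rate

for step in range(200):
    u_A = X_A @ theta_A                # party A's partial prediction (sent to B)
    u_B = X_B @ theta_B                # party B's partial prediction
    d = u_A + u_B - y                  # residual, computed by B (sent to A)

    # Each party updates its own parameters using only the shared residual
    # and its local features; raw features never leave their owner.
    theta_A -= lr * (X_A.T @ d / n + lam * theta_A)
    theta_B -= lr * (X_B.T @ d / n + lam * theta_B)

loss = 0.5 * np.mean((X_A @ theta_A + X_B @ theta_B - y) ** 2)
print(f"joint training loss after 200 steps: {loss:.4f}")
```

For this linear model the gradient steps decompose exactly across the two parties, so the jointly trained model matches what a single party would obtain on the concatenated feature matrix while each party's raw features stay local.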
Conclusion
  • The business model of federated learning has provided a new paradigm for applications of big data.
  • It is expected that in the near future, federated learning would break the barriers between industries and establish a community where data and knowledge could be shared safely and the benefits would be fairly distributed according to the contribution of each participant.
  • The benefits of artificial intelligence would be brought to every corner of our lives.
Tables
  • Table 1: Training Steps for Vertical Federated Learning: Linear Regression.
  • Table 2: Evaluation Steps for Vertical Federated Learning: Linear Regression.
Related Work
  • Federated learning enables multiple parties to collaboratively construct a machine learning model while keeping their training data private. As a novel technology, federated learning has several threads of originality, some of which are rooted in existing fields. Below we explain the relationship between federated learning and other related concepts from multiple perspectives.

    3.1 Privacy-preserving machine learning

    Federated learning can be considered privacy-preserving decentralized collaborative machine learning; it is therefore tightly related to multi-party privacy-preserving machine learning. Many research efforts have been devoted to this area in the past. For example, Refs. [17, 67] proposed algorithms for secure multi-party decision trees for vertically partitioned data. Vaidya and Clifton proposed secure association rule mining [65], secure k-means [66], and a Naive Bayes classifier [64] for vertically partitioned data. Ref. [31] proposed an algorithm for association rules on horizontally partitioned data. Secure Support Vector Machine algorithms have been developed for vertically partitioned data [74] and horizontally partitioned data [73]. Ref. [16] proposed secure protocols for multi-party linear regression and classification. Ref. [68] proposed secure multi-party gradient descent methods. The above works all used secure multi-party computation (SMC) [25, 72] for privacy guarantees; a toy example of the additive secret sharing that underlies such protocols is sketched below.
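As a rough illustration of the kind of primitive SMC protocols build on, the following is a toy additive secret-sharing example in which three parties jointly compute the sum of their private inputs. The modulus, party count, and input values are arbitrary choices for the sketch, not details taken from any of the cited papers.

```python
# Toy additive secret sharing over a prime field, a basic building block used
# by secure multi-party computation. Each party splits its private value into
# random shares; the sum of all inputs can be reconstructed from the published
# share sums while no individual input is revealed.
import secrets

P = 2**61 - 1  # Mersenne prime used as the modulus (illustrative choice)

def share(value, n_parties):
    """Split `value` into n additive shares that sum to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three parties with private inputs (e.g., local statistics).
inputs = [42, 17, 99]
all_shares = [share(v, 3) for v in inputs]

# Party j receives the j-th share of every input and publishes only the sum
# of the shares it holds.
partial_sums = [sum(all_shares[i][j] for i in range(3)) % P for j in range(3)]

# Anyone can combine the published partial sums to get the total, yet no
# single party ever saw another party's raw input.
total = sum(partial_sums) % P
print(total)  # 158
```

Full SMC frameworks layer multiplication and comparison protocols on top of sharing like this, but the privacy argument is the same: each party only ever observes uniformly random-looking shares.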
References
  • [2] Abbas Acar, Hidayet Aksu, A. Selcuk Uluagac, and Mauro Conti. 2018. A Survey on Homomorphic Encryption Schemes: Theory and Implementation. ACM Comput. Surv. 51, 4, Article 79 (July 2018), 35 pages. https://doi.org/10.1145/3214303
  • [3] Rakesh Agrawal and Ramakrishnan Srikant. 2000. Privacy-preserving Data Mining. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data (SIGMOD '00). ACM, New York, NY, USA, 439–450.
  • [4] Yoshinori Aono, Takuya Hayashi, Le Trieu Phong, and Lihua Wang. 2016. Scalable and Secure Logistic Regression via Homomorphic Encryption. In Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy (CODASPY '16). ACM, New York, NY, USA.
  • [5] Toshinori Araki, Jun Furukawa, Yehuda Lindell, Ariel Nof, and Kazuma Ohara. 2016. High-Throughput Semi-Honest Secure Three-Party Computation with an Honest Majority. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS '16). ACM, New York, NY, USA.
  • [6] Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. 2018. How To Backdoor Federated Learning.
  • [8] Dan Bogdanov, Sven Laur, and Jan Willemson. 2008. Sharemind: A Framework for Fast Privacy-Preserving Computations. In Proceedings of the 13th European Symposium on Research in Computer Security: Computer Security (ESORICS '08). Springer-Verlag, Berlin, Heidelberg.
  • [10] Florian Bourse, Michele Minelli, Matthias Minihold, and Pascal Paillier. 2018. Fast Homomorphic Evaluation of Deep Discretized Neural Networks. In Advances in Cryptology – CRYPTO 2018. Springer.
  • [11] Hervé Chabanne, Amaury de Wargny, Jonathan Milgram, Constance Morel, and Emmanuel Prouff. 2017. Privacy-Preserving Classification on Deep Neural Network. IACR Cryptology ePrint Archive 2017 (2017), 35.
  • [12] Kamalika Chaudhuri and Claire Monteleoni. 2008. Privacy-preserving logistic regression. In Advances in Neural Information Processing Systems 21 (NIPS 2008).
  • [13] Fei Chen, Zhenhua Dong, Zhenguo Li, and Xiuqiang He. 2018. Federated Meta-Learning for Recommendation. CoRR
  • [15] W. Du and M. Atallah. 2001. Privacy-Preserving Cooperative Statistical Analysis. In Proceedings of the 17th Annual Computer Security Applications Conference (ACSAC 2001). IEEE Computer Society.
  • [16] Wenliang Du, Yunghsiang Sam Han, and Shigang Chen. 2004. Privacy-Preserving Multivariate Statistical Analysis: Linear Regression and Classification. In SDM.
  • [17] Wenliang Du and Zhijun Zhan. 2002. Building Decision Tree Classifier on Private Data. In Proceedings of the IEEE International Conference on Privacy, Security and Data Mining.
  • [18] Cynthia Dwork. 2008. Differential Privacy: A Survey of Results. In Proceedings of the 5th International Conference on Theory and Applications of Models of Computation (TAMC’08). Springer-Verlag, Berlin, Heidelberg, 1–19.
  • [20] Boi Faltings, Goran Radanovic, and Ronald Brachman. 2017. Game Theory for Data Science: Eliciting Truthful Information.
  • [21] Jun Furukawa, Yehuda Lindell, Ariel Nof, and Or Weinstein. 2016. High-Throughput Secure Three-Party Computation for Malicious Adversaries and an Honest Majority. Cryptology ePrint Archive, Report 2016/944. https://eprint.iacr.org/2016/944
  • [23] Robin C. Geyer, Tassilo Klein, and Moin Nabi. 2017. Differentially Private Federated Learning: A Client Level Perspective.
  • [24] Irene Giacomelli, Somesh Jha, Marc Joye, C. David Page, and Kyonghwan Yoon. 2017. Privacy-Preserving Ridge Regression with only Linearly-Homomorphic Encryption.
  • [25] O. Goldreich, S. Micali, and A. Wigderson. 1987. How to Play ANY Mental Game. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing (STOC ’87). ACM, New York, NY, USA, 218–229. https://doi.org/10.1145/28395.28420
  • [26] Rob Hall, Stephen E. Fienberg, and Yuval Nardi. 2011. Secure multiple linear regression based on homomorphic encryption. Journal of Official Statistics 27, 4 (2011), 669–691.
  • [27] Stephen Hardy, Wilko Henecka, Hamish Ivey-Law, Richard Nock, Giorgio Patrini, Guillaume Smith, and Brian Thorne. 2017. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption. CoRR abs/1711.10677 (2017).
  • [28] Ehsan Hesamifard, Hassan Takabi, and Mehdi Ghasemi. 2017. CryptoDL: Deep Neural Networks over Encrypted Data. CoRR abs/1711.05189 (2017). arXiv:1711.05189 http://arxiv.org/abs/1711.05189
  • [29] Briland Hitaj, Giuseppe Ateniese, and Fernando Pérez-Cruz. 2017. Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning. CoRR abs/1702.07464 (2017).
  • [30] Qirong Ho, James Cipar, Henggang Cui, Jin Kyu Kim, Seunghak Lee, Phillip B. Gibbons, Garth A. Gibson, Gregory R. Ganger, and Eric P. Xing. 2013. More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 1 (NIPS’13). Curran Associates Inc., USA, 1223–1231. http://dl.acm.org/citation.cfm?id=2999611.2999748
  • [31] Murat Kantarcioglu and Chris Clifton. 2004. Privacy-Preserving Distributed Mining of Association Rules on Horizontally Partitioned Data. IEEE Trans. on Knowl. and Data Eng. 16, 9 (Sept. 2004), 1026–1037. https://doi.org/10.1109/TKDE.2004.45
  • [32] Alan F. Karr, X. Sheldon Lin, Ashish P. Sanil, and Jerome P. Reiter. 2004. Privacy-Preserving Analysis of Vertically Partitioned Data Using Secure Matrix Products.
  • [33] Niki Kilbertus, Adria Gascon, Matt Kusner, Michael Veale, Krishna Gummadi, and Adrian Weller. 2018. Blind Justice: Fairness with Encrypted Sensitive Attributes. In Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research), Jennifer Dy and Andreas Krause (Eds.), Vol. 80. PMLR, Stockholmsmässan, Stockholm Sweden, 2630–2639. http://proceedings.mlr.press/v80/kilbertus18a.html
  • [34] Hyesung Kim, Jihong Park, Mehdi Bennis, and Seong-Lyun Kim. 2018. On-Device Federated Learning via Blockchain and its Latency Analysis. arXiv:cs.IT/1808.03949
  • [35] Miran Kim, Yongsoo Song, Shuang Wang, Yuhou Xia, and Xiaoqian Jiang. 2018. Secure Logistic Regression Based on Homomorphic Encryption: Design and Evaluation. JMIR Med Inform 6, 2 (17 Apr 2018), e19. https://doi.org/10.2196/medinform.8805
  • [36] Jakub Konecný, H. Brendan McMahan, Daniel Ramage, and Peter Richtárik. 2016. Federated Optimization: Distributed Machine Learning for On-Device Intelligence. CoRR abs/1610.02527 (2016). arXiv:1610.02527 http://arxiv.org/abs/1610.02527
  • [37] Jakub Konecný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. 2016. Federated Learning: Strategies for Improving Communication Efficiency. CoRR abs/1610.05492 (2016). arXiv:1610.05492 http://arxiv.org/abs/1610.05492
  • [38] Gang Liang and Sudarshan S Chawathe. 2004. Privacy-preserving inter-database operations. In International Conference on Intelligence and Security Informatics. Springer, 66–82.
  • [39] Yujun Lin, Song Han, Huizi Mao, Yu Wang, and William J. Dally. 2017. Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training. CoRR abs/1712.01887 (2017). arXiv:1712.01887 http://arxiv.org/abs/1712.01887
  • [40] Jian Liu, Mika Juuti, Yao Lu, and N. Asokan. 2017. Oblivious Neural Network Predictions via MiniONN Transformations. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS ’17). ACM, New York, NY, USA, 619–631. https://doi.org/10.1145/3133956.3134056
  • [41] H. Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Agüera y Arcas. 2016. Federated Learning of Deep Networks using Model Averaging. CoRR abs/1602.05629 (2016). arXiv:1602.05629 http://arxiv.org/abs/1602.05629
  • [42] H. Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. 2017. Learning Differentially Private Language Models Without Losing Accuracy. CoRR abs/1710.06963 (2017).
  • [43] Luca Melis, Congzheng Song, Emiliano De Cristofaro, and Vitaly Shmatikov. 2018. Inference Attacks Against Collaborative Learning. CoRR abs/1805.04049 (2018). arXiv:1805.04049 http://arxiv.org/abs/1805.04049
  • [44] Payman Mohassel and Peter Rindal. 2018. ABY3: A Mixed Protocol Framework for Machine Learning. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS ’18). ACM, New York, NY, USA, 35–52. https://doi.org/10.1145/3243734.3243760
  • [45] Payman Mohassel, Mike Rosulek, and Ye Zhang. 2015. Fast and Secure Three-party Computation: The Garbled Circuit Approach. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS ’15). ACM, New York, NY, USA, 591–602. https://doi.org/10.1145/2810103.2813705
  • [46] Payman Mohassel and Yupeng Zhang. 2017. SecureML: A System for Scalable Privacy-Preserving Machine Learning. In IEEE Symposium on Security and Privacy. IEEE Computer Society, 19–38.
  • [47] Payman Mohassel and Yupeng Zhang. 2017. SecureML: A System for Scalable Privacy-Preserving Machine Learning. IACR Cryptology ePrint Archive 2017 (2017), 396.
  • [48] Valeria Nikolaenko, Udi Weinsberg, Stratis Ioannidis, Marc Joye, Dan Boneh, and Nina Taft. 2013. Privacy-Preserving Ridge Regression on Hundreds of Millions of Records. In Proceedings of the 2013 IEEE Symposium on Security and Privacy (SP ’13). IEEE Computer Society, Washington, DC, USA, 334–348. https://doi.org/10.1109/SP.2013.30
  • [49] Richard Nock, Stephen Hardy, Wilko Henecka, Hamish Ivey-Law, Giorgio Patrini, Guillaume Smith, and Brian Thorne. 2018. Entity Resolution and Federated Learning get a Federated Resolution. CoRR abs/1803.04035 (2018). arXiv:1803.04035 http://arxiv.org/abs/1803.04035
  • [50] Sinno Jialin Pan and Qiang Yang. 2010. A Survey on Transfer Learning. IEEE Trans. on Knowl. and Data Eng. 22, 10 (Oct. 2010), 1345–1359. https://doi.org/10.1109/TKDE.2009.191
  • [51] Le Trieu Phong, Yoshinori Aono, Takuya Hayashi, Lihua Wang, and Shiho Moriai. 2018. Privacy-Preserving Deep Learning via Additively Homomorphic Encryption. IEEE Trans. Information Forensics and Security 13, 5 (2018), 1333–1345.
  • [52] M. Sadegh Riazi, Christian Weinert, Oleksandr Tkachenko, Ebrahim M. Songhori, Thomas Schneider, and Farinaz Koushanfar. 2018. Chameleon: A Hybrid Secure Computation Framework for Machine Learning Applications. CoRR abs/1801.03239 (2018).
  • [53] R L Rivest, L Adleman, and M L Dertouzos. 1978. On Data Banks and Privacy Homomorphisms. Foundations of Secure Computation, Academia Press (1978), 169–179.
  • [54] Bita Darvish Rouhani, M. Sadegh Riazi, and Farinaz Koushanfar. 2017. DeepSecure: Scalable Provably-Secure Deep Learning. CoRR abs/1705.08963 (2017). arXiv:1705.08963 http://arxiv.org/abs/1705.08963
  • [55] Ashish P. Sanil, Alan F. Karr, Xiaodong Lin, and Jerome P. Reiter. 2004. Privacy Preserving Regression Modelling via Distributed Computation. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’04). ACM, New York, NY, USA, 677–682. https://doi.org/10.1145/1014052.1014139
  • [56] Monica Scannapieco, Ilya Figotin, Elisa Bertino, and Ahmed K. Elmagarmid. 2007. Privacy Preserving Schema and Data Matching. In Proceedings of the 2007 ACM SIGMOD International Conference on Management of Data (SIGMOD ’07). ACM, New York, NY, USA, 653–664. https://doi.org/10.1145/1247480.1247553
  • [57] Amit P. Sheth and James A. Larson. 1990. Federated Database Systems for Managing Distributed, Heterogeneous, and Autonomous Databases. ACM Comput. Surv. 22, 3 (Sept. 1990), 183–236. https://doi.org/10.1145/96602.96604
  • [58] Reza Shokri and Vitaly Shmatikov. 2015. Privacy-Preserving Deep Learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS ’15). ACM, New York, NY, USA, 1310–1321. https://doi.org/10.1145/2810103.2813687
  • [59] David Silver, Aja Huang, Christopher J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529 (2016), 484–503. http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html
  • [60] Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, and Ameet S Talwalkar. 2017. Federated Multi-Task Learning. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Inc., 4424–4434. http://papers.nips.cc/paper/7029-federated-multi-task-learning.pdf
  • [61] Shuang Song, Kamalika Chaudhuri, and Anand D. Sarwate. 2013. Stochastic gradient descent with differentially private updates. 2013 IEEE Global Conference on Signal and Information Processing (2013), 245–248.
  • [62] Lili Su and Jiaming Xu. 2018. Securing Distributed Machine Learning in High Dimensions. CoRR abs/1804.10140 (2018). arXiv:1804.10140 http://arxiv.org/abs/1804.10140
  • [63] Latanya Sweeney. 2002. K-anonymity: A Model for Protecting Privacy. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 10, 5 (Oct. 2002), 557–570. https://doi.org/10.1142/S0218488502001648
  • [64] Jaideep Vaidya and Chris Clifton. 2004. Privacy Preserving Naive Bayes Classifier for Vertically Partitioned Data. In Proceedings of the Fourth SIAM International Conference on Data Mining (SDM 2004). 330–334.
  • [65] Jaideep Vaidya and Chris Clifton. 2002. Privacy Preserving Association Rule Mining in Vertically Partitioned Data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’02). ACM, New York, NY, USA, 639–644. https://doi.org/10.1145/775047.775142
  • [66] Jaideep Vaidya and Chris Clifton. 2003. Privacy-preserving K-means Clustering over Vertically Partitioned Data. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’03). ACM, New York, NY, USA, 206–215. https://doi.org/10.1145/956750.956776
  • [67] Jaideep Vaidya and Chris Clifton. 2005. Privacy-Preserving Decision Trees over Vertically Partitioned Data. In Data and Applications Security XIX, Sushil Jajodia and Duminda Wijesekera (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 139–152.
  • [68] Li Wan, Wee Keong Ng, Shuguo Han, and Vincent C. S. Lee. 2007. Privacy-preservation for Gradient Descent Methods. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’07). ACM, New York, NY, USA, 775–783. https://doi.org/10.1145/1281192.1281275
  • [69] Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian Makaya, Ting He, and Kevin Chan. 2018. When Edge Meets Learning: Adaptive Control for Resource-Constrained Distributed Machine Learning. CoRR abs/1804.05271 (2018). arXiv:1804.05271 http://arxiv.org/abs/1804.05271
  • [71] Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. 2018. Federated Learning. Communications of The CCF 14, 11 (2018), 49–55.
  • [72] Andrew C. Yao. 1982. Protocols for Secure Computations. In Proceedings of the 23rd Annual Symposium on Foundations of Computer Science (SFCS ’82). IEEE Computer Society, Washington, DC, USA, 160–164. http://dl.acm.org/citation.cfm?id=1382436.1382751
  • [73] Hwanjo Yu, Xiaoqian Jiang, and Jaideep Vaidya. 2006. Privacy-preserving SVM Using Nonlinear Kernels on Horizontally Partitioned Data. In Proceedings of the 2006 ACM Symposium on Applied Computing (SAC ’06). ACM, New York, NY, USA, 603–610. https://doi.org/10.1145/1141277.1141415
  • [74] Hwanjo Yu, Jaideep Vaidya, and Xiaoqian Jiang. 2006. Privacy-Preserving SVM Classification on Vertically Partitioned Data. In Proceedings of the 10th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining (PAKDD’06). Springer-Verlag, Berlin, Heidelberg, 647–656. https://doi.org/10.1007/11731139_74
  • [75] Jiawei Yuan and Shucheng Yu. 2014. Privacy Preserving Back-Propagation Neural Network Learning Made Practical with Cloud Computing. IEEE Trans. Parallel Distrib. Syst. 25, 1 (Jan. 2014), 212–221. https://doi.org/10.1109/TPDS.2013.18
  • [76] Qingchen Zhang, Laurence T. Yang, and Zhikui Chen. 2016. Privacy Preserving Deep Computation Model on Cloud for Big Data Feature Learning. IEEE Trans. Comput. 65, 5 (May 2016), 1351–1362. https://doi.org/10.1109/TC.2015.2470255
  • [77] Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. 2018. Federated Learning with Non-IID Data. arXiv:cs.LG/1806.00582