Research Interests

Publications: 370 in total

Parjanya Vyas, Asim Waheed, Yousra Aafer, N. Asokan
Citations: 0 · Views: 0 · EI citations: 0

Buse G. A. Tekgul, N. Asokan
Citations: 0 · Views: 0 · EI citations: 0

Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Models
Sebastian Szyller, Vasisht Duddu, Tommi Gröndahl, N. Asokan
arXiv:2104.12623, 2023 · https://arxiv.org/abs/2104.12623
Abstract (beginning truncated in the source): …s architecture or any other information about it beyond its intended task. We evaluate the effectiveness of our attacks using three different instances of two popular categories of image translation: (1) Selfie-to-Anime and (2) Monet-to-Photo (image style transfer), and (3) Super-Resolution. Using standard performance metrics for GANs, we show that our attacks are effective. Furthermore, we conducted a large-scale (125 participants) user study on Selfie-to-Anime and Monet-to-Photo to show that human perception of the images produced by $F_V$ and $F_A$ can be considered equivalent, within an equivalence bound of Cohen's d = 0.3. Finally, we show that existing defenses against model extraction attacks (watermarking, adversarial examples, poisoning) do not extend to image translation models.
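
The core of such an extraction attack is easy to sketch: the adversary repeatedly queries the black-box victim $F_V$ with images and trains a local substitute $F_A$ on the resulting input/output pairs. Below is a minimal, illustrative sketch of that loop in PyTorch. The toy models, the plain L1 reconstruction loss, and the random query images are all assumptions made for the example (the paper works with full GAN-based translation models); this is not the authors' implementation.

```python
# Minimal sketch of black-box model extraction against an image
# translation model. All names (Surrogate, victim_query) are
# illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class Surrogate(nn.Module):
    """Toy stand-in for the attacker's translation model F_A."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def victim_query(x):
    """Stand-in for the black-box victim F_V: the attacker sees only
    input/output pairs, never weights or architecture."""
    return x.flip(-1)  # placeholder "translation"

surrogate = Surrogate()
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # simplification; the paper trains GAN-style models

for step in range(100):
    x = torch.rand(8, 3, 32, 32)      # attacker-chosen query images
    with torch.no_grad():
        y = victim_query(x)           # victim's translated outputs
    opt.zero_grad()
    loss = loss_fn(surrogate(x), y)   # fit F_A to mimic F_V
    loss.backward()
    opt.step()
```

The user study in the paper then asks whether, after enough such queries, humans can still tell outputs of $F_A$ apart from those of $F_V$.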
Asokan"}],"create_time":"2023-08-15T05:06:57.112Z","hashs":{"h1":"csus"},"id":"64dafb293fda6d7f064e2b8f","num_citation":0,"pdf":"https:\u002F\u002Fcz5waila03cyo0tux1owpyofgoryroob.aminer.cn\u002F81\u002FAA\u002FD4\u002F81AAD4B9B6089DE3C44166F5B37EF1B0.pdf","title":"Copilot Security: A User Study","update_times":{"u_c_t":"2023-09-27T06:53:18.159Z"},"urls":["db\u002Fjournals\u002Fcorr\u002Fcorr2308.html#abs-2308-06587","https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2308.06587","https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.06587"],"venue":{"info":{"name":"CoRR"},"volume":"abs\u002F2308.06587"},"versions":[{"id":"64dafb293fda6d7f064e2b8f","sid":"2308.06587","src":"arxiv","year":2023},{"id":"64f561823fda6d7f06f27997","sid":"journals\u002Fcorr\u002Fabs-2308-06587","src":"dblp","year":2023}],"year":2023},{"abstract":"We propose FLARE, the first fingerprinting mechanism to verify whether a suspected Deep Reinforcement Learning (DRL) policy is an illegitimate copy of another (victim) policy. We first show that it is possible to find non-transferable, universal adversarial masks, i.e., perturbations, to generate adversarial examples that can successfully transfer from a victim policy to its modified versions but not to independently trained policies. FLARE employs these masks as fingerprints to verify the true ownership of stolen DRL policies by measuring an action agreement value over states perturbed by such masks. Our empirical evaluations show that FLARE is effective (100% action agreement on stolen copies) and does not falsely accuse independent policies (no false positives). FLARE is also robust to model modification attacks and cannot be easily evaded by more informed adversaries without negatively impacting agent performance. We also show that not all universal adversarial masks are suitable candidates for fingerprints due to the inherent characteristics of DRL policies. The spatio-temporal dynamics of DRL problems and sequential decision-making process make characterizing the decision boundary of DRL policies more difficult, as well as searching for universal masks that capture the geometry of it.","authors":[{"name":"Buse G. A. Tekgul","org":"Network Systems and Security Research, Nokia Bell Labs, Finland and Computer Science, Aalto University, Finland","orgs":["Network Systems and Security Research, Nokia Bell Labs, Finland and Computer Science, Aalto University, Finland"]},{"id":"54329aefdabfaeb4c6a924cf","name":"N. Asokan","org":"David R. Cheriton School of Computer Science, University of Waterloo, Canada and Computer Science, Aalto University, Finland","orgs":["David R. 
Cheriton School of Computer Science, University of Waterloo, Canada and Computer Science, Aalto University, Finland"]}],"create_time":"2023-12-05T11:02:13.587Z","hashs":{"h1":"ffdrl","h3":"auam"},"id":"656eb370939a5f4082a577f7","num_citation":0,"pages":{"end":"505","start":"492"},"title":"FLARE: Fingerprinting Deep Reinforcement Learning Agents using Universal Adversarial Masks","urls":["https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3627106.3627128"],"venue":{"info":{"name":"ACSAC '23: Proceedings of the 39th Annual Computer Security Applications Conference"}},"versions":[{"id":"656eb370939a5f4082a577f7","sid":"10.1145\u002F3627106.3627128","src":"acm","year":2023}],"year":2023}],"profilePubsTotal":370,"profilePatentsPage":0,"profilePatents":null,"profilePatentsTotal":null,"profilePatentsEnd":false,"profileProjectsPage":1,"profileProjects":{"success":true,"msg":"","data":null,"log_id":"2ZOGFaheDTGXRRbadw4clHg4zX0"},"profileProjectsTotal":0,"newInfo":null,"checkDelPubs":[]}};
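
FLARE's verification step reduces to an action-agreement score: perturb a set of states with the universal adversarial mask and count how often the suspect policy picks the same action as the victim. The sketch below illustrates only that bookkeeping; the random mask and the linear toy policies are stand-ins for FLARE's learned non-transferable masks and real DRL policies, and are assumptions made for the example.

```python
# Illustrative sketch of FLARE-style ownership verification via
# action agreement. The mask here is random noise standing in for a
# learned non-transferable universal adversarial mask.
import numpy as np

def action_agreement(victim, suspect, states, mask):
    """Fraction of mask-perturbed states on which the suspect policy
    chooses the same action as the victim."""
    hits = sum(victim(s + mask) == suspect(s + mask) for s in states)
    return hits / len(states)

rng = np.random.default_rng(0)

# Toy deterministic policies: argmax over linear logits, 4 actions.
W_victim = rng.normal(size=(8, 4))
victim = lambda s: int(np.argmax(s @ W_victim))

# A "stolen" copy: the victim's weights, slightly fine-tuned.
W_stolen = W_victim + 0.05 * rng.normal(size=(8, 4))
stolen = lambda s: int(np.argmax(s @ W_stolen))

# An independently trained policy with unrelated weights.
W_indep = rng.normal(size=(8, 4))
independent = lambda s: int(np.argmax(s @ W_indep))

states = [rng.normal(size=8) for _ in range(500)]
mask = 0.5 * rng.normal(size=8)  # stand-in for a learned universal mask

print("stolen copy:      ", action_agreement(victim, stolen, states, mask))
print("independent model:", action_agreement(victim, independent, states, mask))
```

In the paper, a real FLARE mask is optimized so that agreement stays high on modified copies of the victim but low on independently trained policies; the accusation decision then compares this score against a threshold calibrated to avoid false positives.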