evaluating-adversarial-robustness/adv-eval-paper: LaTeX source for the paper "On Evaluating Adversarial Robustness"


Open-source project name:

evaluating-adversarial-robustness/adv-eval-paper

Open-source project URL:

https://github.com/evaluating-adversarial-robustness/adv-eval-paper

Open-source project language:

TeX 99.4%

Open-source project introduction:

On Evaluating Adversarial Robustness

This repository contains the LaTeX source for the paper On Evaluating Adversarial Robustness. It is a paper written with the intention of helping everyone---from those designing their own neural networks, to those reviewing defense papers, to those just wondering what goes into a defense evaluation---learn more about methods for evaluating adversarial robustness.

This is a Living Document

We do not intend for this to be a traditional paper where it is written once and never updated. While the fundamentals for how to evaluate adversarial robustness will not change, most of the specific advice we give today on evaluating adversarial robustness may quickly become out of date. We therefore expect to update this document from time to time in order to match the currently accepted best practices in the research community.

Abstract

Correctly evaluating defenses against adversarial examples has proven to be extremely difficult. Despite the significant amount of recent work attempting to design defenses that withstand adaptive attacks, few have succeeded; most papers that propose defenses are quickly shown to be incorrect.

We believe a large contributing factor is the difficulty of performing security evaluations. In this paper, we discuss the methodological foundations, review commonly accepted best practices, and suggest new methods for evaluating defenses to adversarial examples. We hope that both researchers developing defenses as well as readers and reviewers who wish to understand the completeness of an evaluation consider our advice in order to avoid common pitfalls.
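The abstract's point about measuring robustness under attack, rather than only on clean inputs, may be easier to see with a toy example. The sketch below is not taken from the paper or this repository; it assumes a hypothetical NumPy logistic-regression model and synthetic data, and reports accuracy both on clean inputs and under a simple PGD attack, purely to illustrate the general shape of such an evaluation.

import numpy as np

rng = np.random.default_rng(0)

# Toy data and a fixed linear "model" (w, b); both are assumptions for illustration only.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)
w = w_true + 0.1 * rng.normal(size=10)   # an imperfect "trained" classifier
b = 0.0

def predict(x):
    # Hard-label prediction of the toy linear classifier.
    return float(x @ w + b > 0)

def input_gradient(x, label):
    # Gradient of the logistic loss with respect to the input x.
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return (p - label) * w

def pgd_attack(x, label, eps=0.5, step=0.1, iters=20):
    # Projected gradient descent inside an L-infinity ball of radius eps around x.
    x_adv = x.copy()
    for _ in range(iters):
        x_adv = x_adv + step * np.sign(input_gradient(x_adv, label))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the allowed ball
    return x_adv

clean_acc = np.mean([predict(x) == t for x, t in zip(X, y)])
adv_acc = np.mean([predict(pgd_attack(x, t)) == t for x, t in zip(X, y)])
print(f"clean accuracy: {clean_acc:.2f}, accuracy under PGD: {adv_acc:.2f}")

A real evaluation would of course attack the actual defended model and, as the paper emphasizes, adapt the attack to the specific defense being evaluated.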

Contributing

We welcome contributions to the paper through both issues and pull requests. Please use issues for topics that warrant initial discussion (such as suggesting a new item for the checklist) and pull requests for changes that require less discussion (such as fixing typos or writing content for a topic previously discussed in an issue).

Contributors

  • Nicholas Carlini (Google Brain)
  • Anish Athalye (MIT)
  • Nicolas Papernot (Google Brain)
  • Wieland Brendel (University of Tübingen)
  • Jonas Rauber (University of Tübingen)
  • Dimitris Tsipras (MIT)
  • Ian Goodfellow (Google Brain)
  • Aleksander Madry (MIT)
  • Alexey Kurakin (Google Brain)

NOTE: contributors are ordered according to the amount of their contribution to the text of the paper, similar to the CleverHans tech report. The list of contributors may be expanded, and the order may change, with new revisions of the paper.

Changelog

2019-02-20: Explain author order (#5)

2019-02-18: Initial Revision

Citation

If you use this paper in academic research, you may cite the following:

@article{carlini2019evaluating,
  title={On Evaluating Adversarial Robustness},
  author={Carlini, Nicholas and Athalye, Anish and Papernot, Nicolas and Brendel, Wieland and Rauber, Jonas and Tsipras, Dimitris and Goodfellow, Ian and Madry, Aleksander and Kurakin, Alexey},
  journal={arXiv preprint arXiv:1902.06705},
  year={2019}
}


