Ian Goodfellow's Write-up: Google's Fruitful ICLR 2017

By 奕欣 | 2017-04-25 16:14 | Special topic: ICLR 2017
Summary: Ian Goodfellow of the Google Brain team published a post today on the Google Research blog summarizing Google's academic contributions at ICLR 2017.


Leiphone reports: Ian Goodfellow of the Google Brain team published a post on the Google Research blog today summarizing Google's academic contributions at ICLR 2017. Leiphone's full translation follows; reproduction without permission is prohibited.

This week, the fifth International Conference on Learning Representations (ICLR 2017) is being held in Toulon, France. The conference focuses on how machine learning can learn meaningful and useful representations from data. ICLR comprises a conference track and a workshop track, featuring invited oral and poster presentations that cover deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and non-convex optimization.

Standing at the crest of the wave in neural networks and deep learning, Google pursues both theory and practice, and works to develop learning methods that improve understanding and generalization. As a Platinum Sponsor of ICLR 2017, Google has more than 50 researchers attending the conference (most of them members of the Google Brain team and Google's European research offices). By presenting papers and posters on site, they are helping to build a stronger platform for academic exchange, and learning from their peers in the process. Google researchers also form a mainstay of the workshops and the organizing committee.

If you are at ICLR 2017, we hope you will stop by our booth and chat with our researchers about how to solve interesting problems for billions of people.

The following are the papers Google presented at ICLR 2017 (Google researchers were shown in bold in the original post).

Area Chairs

George Dahl, Slav Petrov, Vikas Sindhwani

Program Chairs (previously profiled by Leiphone)

Hugo Larochelle, Tara Sainath

Contributed Talks

  • Understanding Deep Learning Requires Rethinking Generalization (Best Paper Award)

    Chiyuan Zhang*, Samy Bengio, Moritz Hardt, Benjamin Recht*, Oriol Vinyals

  • Semi-Supervised Knowledge Transfer for Deep Learning from Private Training Data (Best Paper Award)

    Nicolas Papernot*, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar

  • Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic

    Shixiang (Shane) Gu*, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine

  • Neural Architecture Search with Reinforcement Learning

    Barret Zoph, Quoc Le

Posters

  • Adversarial Machine Learning at Scale

    Alexey Kurakin, Ian J. Goodfellow†, Samy Bengio

  • Capacity and Trainability in Recurrent Neural Networks

    Jasmine Collins, Jascha Sohl-Dickstein, David Sussillo

  • Improving Policy Gradient by Exploring Under-Appreciated Rewards

    Ofir Nachum, Mohammad Norouzi, Dale Schuurmans

  • Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

    Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean

  • Unrolled Generative Adversarial Networks

    Luke Metz, Ben Poole*, David Pfau, Jascha Sohl-Dickstein

  • Categorical Reparameterization with Gumbel-Softmax

    Eric Jang, Shixiang (Shane) Gu*, Ben Poole*

  • Decomposing Motion and Content for Natural Video Sequence Prediction

    Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee

  • Density Estimation Using Real NVP

    Laurent Dinh*, Jascha Sohl-Dickstein, Samy Bengio

  • Latent Sequence Decompositions

    William Chan*, Yu Zhang*, Quoc Le, Navdeep Jaitly*

  • Learning a Natural Language Interface with Neural Programmer

    Arvind Neelakantan*, Quoc V. Le, Martín Abadi, Andrew McCallum*, Dario Amodei*

  • Deep Information Propagation

    Samuel Schoenholz, Justin Gilmer, Surya Ganguli, Jascha Sohl-Dickstein

  • Identity Matters in Deep Learning

    Moritz Hardt, Tengyu Ma

  • A Learned Representation For Artistic Style

    Vincent Dumoulin*, Jonathon Shlens, Manjunath Kudlur

  • Adversarial Training Methods for Semi-Supervised Text Classification

    Takeru Miyato, Andrew M. Dai, Ian Goodfellow†

  • HyperNetworks

    David Ha, Andrew Dai, Quoc V. Le

  • Learning to Remember Rare Events

    Lukasz Kaiser, Ofir Nachum, Aurko Roy*, Samy Bengio

Workshop Track

  • Particle Value Functions

    Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh

  • Neural Combinatorial Optimization with Reinforcement Learning

    Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, Samy Bengio

  • Short and Deep: Sketching and Neural Networks

    Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar

  • Explaining the Learning Dynamics of Direct Feedback Alignment

    Justin Gilmer, Colin Raffel, Samuel S. Schoenholz, Maithra Raghu, Jascha Sohl-Dickstein

  • Training a Subsampling Mechanism in Expectation

    Colin Raffel, Dieterich Lawson

  • Tuning Recurrent Neural Networks with Reinforcement Learning

    Natasha Jaques*, Shixiang (Shane) Gu*, Richard E. Turner, Douglas Eck

  • REBAR: Low-Variance, Unbiased Gradient Estimates for Discrete Latent Variable Models

    George Tucker, Andriy Mnih, Chris J. Maddison, Jascha Sohl-Dickstein

  • Adversarial Examples in the Physical World

    Alexey Kurakin, Ian Goodfellow†, Samy Bengio

  • Regularizing Neural Networks by Penalizing Confident Output Distributions

    Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, Geoffrey Hinton

  • Unsupervised Perceptual Rewards for Imitation Learning

    Pierre Sermanet, Kelvin Xu, Sergey Levine

  • Changing Model Behavior at Test-time Using Reinforcement Learning

    Augustus Odena, Dieterich Lawson, Christopher Olah

* Work performed while at Google

† Work performed while at OpenAI

For more details, see research.googleblog. Translated by Leiphone.

