

Project name: FluxML/Optimisers.jl

Repository: https://github.com/FluxML/Optimisers.jl

Language: Julia 100.0%

Introduction:

Optimisers.jl

Optimisers.jl defines many standard gradient-based optimisation rules, and tools for applying them to deeply nested models.

This is the future of training for Flux.jl neural networks, and the present for Lux.jl. But it can be used separately on any array, or anything else understood by Functors.jl.

Installation

] add Optimisers

Usage

The core idea is that optimiser state (such as momentum) is explicitly handled. It is initialised by setup, and then at each step, update returns both the new state, and the model with its trainable parameters adjusted:

using Optimisers, Zygote

state = Optimisers.setup(Optimisers.Adam(), model)  # just once

grad = Zygote.gradient(m -> loss(m(x), y), model)[1]

state, model = Optimisers.update(state, model, grad)  # at every step

For models with deeply nested layers containing the parameters (like Flux.jl models), this state is a similarly nested tree. As is the gradient: if using Zygote, you must use the "explicit" style as shown, not the "implicit" one with Params.
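As noted above, the same setup/update API also works on a bare array, with no layers and no automatic differentiation involved. A minimal sketch of this, using the `Descent` rule and a hand-written gradient (both illustrative choices, not from the text above):

```julia
using Optimisers

function demo()
    x = [4.0, 3.0]                      # a plain array is itself a valid "model"
    state = Optimisers.setup(Optimisers.Descent(0.1), x)
    for _ in 1:100
        grad = 2 .* x                   # gradient of sum(abs2, x), written by hand
        state, x = Optimisers.update(state, x, grad)
    end
    return x                            # ends up close to the minimiser [0, 0]
end
```

Each step multiplies `x` by 0.8, so after 100 steps it has shrunk essentially to zero, with all the bookkeeping done through the same two functions used for nested models.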

The function destructure collects all the trainable parameters into one vector, and returns this along with a function to re-build a similar model:

vector, re = Optimisers.destructure(model)

model2 = re(2 .* vector)
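For instance, with a hypothetical two-parameter model built from a NamedTuple (which Functors.jl understands out of the box):

```julia
using Optimisers

model = (weight = [1.0 2.0; 3.0 4.0], bias = [0.0, 0.0])

vector, re = Optimisers.destructure(model)   # a length-6 flat vector, plus a rebuilder

model2 = re(2 .* vector)                     # same structure, every parameter doubled
```

Here `model2.weight == 2 .* model.weight`, so `re` is useful for optimisation methods that want to work on one flat parameter vector.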

The documentation explains usage in more detail, describes all the optimization rules, and shows how to define new ones.



