Name: FluxML/Optimisers.jl
Repository: https://github.com/FluxML/Optimisers.jl
Language: Julia 100.0%

Optimisers.jl

Optimisers.jl defines many standard gradient-based optimisation rules, and tools for applying them to deeply nested models. This is the future of training for Flux.jl neural networks, and the present for Lux.jl. But it can be used separately on any array, or anything else understood by Functors.jl.

Installation

    ] add Optimisers

Usage

The core idea is that optimiser state (such as momentum) is explicitly handled.
It is initialised by setup, and then at every step update returns both the new state and the model with its trainable parameters adjusted:

    state = Optimisers.setup(Optimisers.Adam(), model)  # just once

    grad = Zygote.gradient(m -> loss(m(x), y), model)[1]

    state, model = Optimisers.update(state, model, grad)  # at every step
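To make that concrete outside of Flux, here is a minimal self-contained sketch; the toy model, the hand-written gradient, and the Descent(0.1) rule are illustrative choices, not part of the README:

    using Optimisers

    # A toy "model": any nested structure of arrays is handled,
    # no Flux layers required.
    model = (weight = [1.0 2.0; 3.0 4.0], bias = [0.0, 0.0])

    state = Optimisers.setup(Optimisers.Descent(0.1), model)   # just once

    # Stand-in for Zygote.gradient: a hand-written gradient with
    # the same nested structure as the model.
    grad = (weight = 2 .* model.weight, bias = [1.0, 1.0])

    state, model = Optimisers.update(state, model, grad)       # at every step

The gradient can be any structure matching the model, which is why a hand-written NamedTuple works here in place of Zygote's output.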
For models with deeply nested layers containing the parameters (like Flux.jl models), this state is a similarly nested tree. As is the gradient: if using Zygote, you must use the "explicit" style as shown, not the "implicit" one with Params.

The function destructure collects all the trainable parameters into one flat vector, and returns it along with a function to re-build a similar model:

    vector, re = Optimisers.destructure(model)
    model2 = re(2 .* vector)

The documentation explains usage in more detail, describes all the optimization rules, and shows how to define new ones.
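The rule-definition interface is not spelled out in this README; the sketch below follows the pattern described in the package documentation (a subtype of Optimisers.AbstractRule plus Optimisers.init and Optimisers.apply! methods), with an illustrative decaying-step-size rule:

    using Optimisers

    # An illustrative rule: gradient descent whose step size decays as 1/sqrt(t).
    # The struct holds the hyper-parameters.
    struct DecayDescent <: Optimisers.AbstractRule
        eta::Float64
    end

    # State kept per parameter array: here just a step counter.
    Optimisers.init(o::DecayDescent, x::AbstractArray) = 1

    # Given the state, the parameter array x and its gradient dx,
    # return the new state and the scaled gradient to be subtracted from x.
    function Optimisers.apply!(o::DecayDescent, state, x, dx)
        T = eltype(x)
        newdx = T(o.eta / sqrt(state)) .* dx   # keep the eltype of x
        return state + 1, newdx
    end

    # The new rule then composes with the rest of the machinery:
    # state = Optimisers.setup(DecayDescent(0.1), model)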