
Open source project name:

smichalowski/google_inception_v3_for_caffe

Open source project URL:

https://github.com/smichalowski/google_inception_v3_for_caffe

Open source introduction:

Google Inception V3 for Caffe

revision 2

Introduction

This model is a replication of the model described in the paper Rethinking the Inception Architecture for Computer Vision.

If you wish to train this model on the ILSVRC2012 dataset, remember to prepare the LMDB with 300px images instead of 256px.
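
As a point of reference, the sketch below shows one way to build such a 300px LMDB with pycaffe and OpenCV. The file names, the train.txt list format and the read_image_list helper are assumptions for illustration; Caffe's stock convert_imageset tool with its resize options can do the same job.

import lmdb
import cv2    # assumption: OpenCV is used for decoding/resizing
import caffe

def read_image_list(list_file):
    # hypothetical helper: each line of train.txt is "<image path> <label>"
    with open(list_file) as f:
        for line in f:
            path, label = line.rsplit(None, 1)
            yield path, int(label)

db = lmdb.open('ilsvrc12_train_lmdb_300', map_size=1 << 40)
with db.begin(write=True) as txn:
    for idx, (path, label) in enumerate(read_image_list('train.txt')):
        img = cv2.resize(cv2.imread(path), (300, 300))              # 300px, not 256px
        datum = caffe.io.array_to_datum(img.transpose(2, 0, 1), label)
        txn.put('{:08d}'.format(idx).encode(), datum.SerializeToString())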

Hardware and Training

The original implementation from the paper uses a batch size of 32 for 100 epochs with RMSProp and a learning rate of 0.045. You need a Titan X or K40 with more than 10 GB of RAM. The provided train_val.prototxt uses batch_size = 22 and fits on a Titan X.

Please use NVIDIA/caffe for training. I was UNABLE to achieve good results using regular Caffe (probably because of its different BatchNorm implementation).
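
With the repository files in place, training can be started either with the caffe command-line binary or from pycaffe. The sketch below is a minimal pycaffe variant; it assumes the solver file is the solver.txt mentioned later in this README and that GPU 0 has enough memory for batch_size = 22.

import caffe

caffe.set_device(0)                      # a single Titan X / K40
caffe.set_mode_gpu()

solver = caffe.get_solver('solver.txt')  # solver type (RMSProp) and hyperparameters come from the file
solver.solve()                           # trains until max_iter, snapshotting as configured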

Training on TINY SET

I have trained this model on an ImageNet subset of 18 categories for 11 epochs using NVIDIA/caffe branch 0.15.5:

I0628 08:35:16.726322 26414 solver.cpp:362] Iteration 11704, Testing net (#0)
I0628 08:35:56.465996 26414 solver.cpp:429]     Test net output #0: acc/top-1 = 0.796649
I0628 08:35:56.466174 26414 solver.cpp:429]     Test net output #1: acc/top-5 = 0.962012
I0628 08:35:56.466193 26414 solver.cpp:429]     Test net output #2: loss = 1.17044 (* 1 = 1.17044 loss)

If you want to try it yourself you can find it here. Remember that this link provides a model trained using ONLY 18 categories.
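
If you grab that snapshot, a minimal pycaffe sketch for pushing a single image through it could look like the following. The deploy prototxt, the caffemodel file name and the blob names ('data', 'prob') are assumptions, and mean/scale preprocessing is omitted; check transform_param in the provided train_val.prototxt for the real settings.

import caffe

caffe.set_mode_gpu()
# hypothetical file names: a deploy prototxt derived from train_val.prototxt
# plus the downloaded 18-category snapshot
net = caffe.Net('deploy.prototxt', 'inception_v3_18cat.caffemodel', caffe.TEST)

img = caffe.io.load_image('example.jpg')            # HxWx3 float image in [0, 1]
img = caffe.io.resize_image(img, (299, 299))        # Inception v3 input resolution
net.blobs['data'].data[0] = img.transpose(2, 0, 1)  # assumes an input shape of 1x3x299x299
out = net.forward()
print(out['prob'][0].argsort()[-5:][::-1])          # indices of the top-5 classes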

DIGITS

If you want to train this network using NVIDIA/DIGITS, a compatible train_val.prototxt is provided in the digits folder. Please be advised that the DIGITS web interface currently does not allow you to set the following solver parameters, which allowed me to achieve such good results on the tiny set:

  • rms_decay set to 0.9
  • clip_gradients set to 80
  • weight_decay set to 0.0004 (currently DIGITS calculates it automatically)

You can force DIGITS to use these parameters by hardcoding these values into train_caffe.py.

Just paste this code:

# Override the solver parameters that the DIGITS web UI does not expose
solver.rms_decay = 0.9
solver.clip_gradients = 80
solver.weight_decay = 0.0004

Also, if you use DIGITS to create a "New Image Classification Dataset", be sure to set Image Encoding to None.

Training on full ImageNet

Right now I'm training it on the full ImageNet set using the provided solver.txt. I will publish the model when it's done.

License

This model is released for unrestricted use.



