
vadimkantorov/contextlocnet: ContextLocNet: Context-aware Deep Network Models for Weakly Supervised Localization


Project name: vadimkantorov/contextlocnet

Project URL: https://github.com/vadimkantorov/contextlocnet

Primary language: Lua (81.6%)

Project introduction:

Information & Contact

If you use this code, please cite our work:

@inproceedings{kantorov2016,
      title = {ContextLocNet: Context-aware Deep Network Models for Weakly Supervised Localization},
      author = {Kantorov, V., Oquab, M., Cho, M. and Laptev, I.},
      booktitle = {Proc. European Conference on Computer Vision (ECCV), 2016},
      year = {2016}
}

The results are available on the project website and in the paper (arXiv page). Please submit bugs and ask questions directly on GitHub; for other inquiries, please contact Vadim Kantorov.

This is a joint work of Vadim Kantorov, Maxime Oquab, Minsu Cho, and Ivan Laptev.

Running the code

  1. Install the dependencies: Torch with cuDNN support; HDF5; matio; protobuf; Luarocks packages rapidjson, hdf5, matio, loadcaffe, xml; MATLAB or octave binary in PATH (for computing detection mAP).

We strongly recommend using wigwam for this (fix the paths to nvcc and libcudnn.so before running the command):

wigwam install torch hdf5 matio protobuf octave -DPATH_TO_NVCC="/path/to/cuda/bin/nvcc" -DPATH_TO_CUDNN_SO="/path/to/cudnn/lib64/libcudnn.so"
wigwam install lua-rapidjson lua-hdf5 lua-matio lua-loadcaffe lua-xml
wigwam in # execute this to make the installed libraries available
  2. Clone this repository, change the current directory to contextlocnet, and compile the ROI pooling module:
git clone https://github.com/vadimkantorov/contextlocnet
cd contextlocnet
(cd ./model && luarocks make)
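
Before moving on, it can be worth confirming that the dependencies from step 1 are importable from Torch. Below is a minimal sanity-check sketch (a hypothetical helper, not part of the repository; run it with th check_deps.lua):

-- check_deps.lua: verify that the packages listed in step 1 load correctly (hypothetical helper)
require 'torch'
require 'cutorch'                      -- CUDA backend
require 'cudnn'                        -- cuDNN bindings
local hdf5 = require 'hdf5'            -- torch-hdf5
local matio = require 'matio'
local rapidjson = require 'rapidjson'
local xml = require 'xml'
require 'loadcaffe'
print('all dependencies loaded')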
  3. Download the VOC 2007 dataset, Koen van de Sande's selective search windows for VOC 2007, and the VGG-F model by running the first command. Optionally, download VOC 2012 and Ross Girshick's selective search windows by manually placing the VOC 2012 test data tarball in data/common and then running the second command:
make -f data/common/Makefile download_and_extract_VOC2007 download_VGGF
# make -f data/common/Makefile download_and_extract_VOC2012
  4. Choose a dataset, preprocess it, and convert the VGG-F model to the Torch format:
export DATASET=VOC2007
th preprocess.lua VOC VGGF
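
The scripts are configured through environment variables (DATASET here, CUDA_VISIBLE_DEVICES and SUBSET in the later steps), presumably read via os.getenv. A minimal illustrative sketch of that pattern, not the repository's actual option handling (the defaults below are assumptions for illustration only):

-- env_config.lua: hypothetical illustration of environment-driven configuration
local dataset = os.getenv('DATASET') or 'VOC2007'   -- default assumed for illustration
local subset  = os.getenv('SUBSET')  or 'trainval'  -- default assumed for illustration
print(string.format('dataset = %s, subset = %s', dataset, subset))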
  5. Select a GPU and train a model (our best model is model/contrastive_s.lua; other choices are model/contrastive_a.lua, model/additive.lua, and model/wsddn_repro.lua):
export CUDA_VISIBLE_DEVICES=0
th train.lua model/contrastive_s.lua				# will produce data/model_epoch30.h5 and data/log.json
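
Training progress is logged to data/log.json. A quick way to look at it without assuming anything about its schema is to decode and pretty-print it with the rapidjson rock (a minimal sketch, hypothetical helper; run with th inspect_log.lua):

-- inspect_log.lua: decode and pretty-print the training log (hypothetical helper)
local rapidjson = require 'rapidjson'
local log = rapidjson.load('data/log.json')                       -- parse the file into a Lua table
print(rapidjson.encode(log, {pretty = true, sort_keys = true}))   -- dump it back out, indented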
  6. Test the trained model and compute CorLoc and mAP:
SUBSET=trainval th test.lua data/model_epoch30.h5 # will produce data/scores_trainval.h5
th corloc.lua data/scores_trainval.h5			    # will produce data/corloc.json
SUBSET=test th test.lua data/model_epoch30.h5	    # will produce data/scores_test.h5
th detection_mAP.lua data/scores_test.h5		    # will produce data/detection_mAP.json
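
The produced files can be inspected directly from Torch. The sketch below (a hypothetical helper, not part of the repository) only assumes that the files exist at the paths above: it lists whatever datasets are stored in data/scores_test.h5 and prints data/detection_mAP.json.

-- inspect_outputs.lua: list the contents of the score file and print the mAP summary (hypothetical helper)
require 'torch'
local hdf5 = require 'hdf5'
local rapidjson = require 'rapidjson'

local scores = hdf5.open('data/scores_test.h5', 'r')
for name, value in pairs(scores:all()) do            -- :all() reads every dataset into a Lua table
    if torch.isTensor(value) then
        print(name, value:size())
    else
        print(name, type(value))                     -- nested HDF5 groups come back as tables
    end
end
scores:close()

print(rapidjson.encode(rapidjson.load('data/detection_mAP.json'), {pretty = true}))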

Pretrained models for VOC 2007

Model           model_epoch30.h5   log.json   corloc.json   detection_mAP.json
contrastive_s   link               link       link          link
wsddn_repro     link               link       link          link

Acknowledgements & Notes

We warmly thank Hakan Bilen, Relja Arandjelović, and Soumith Chintala for fruitful discussions and help.

This work would not have been possible without prior work: Hakan Bilen's WSDDN, Spyros Gidaris's LocNet, Sergey Zagoruyko's loadcaffe, and Facebook FAIR's fbnn/Optim.lua.

The code is released under the MIT license.



