skhadem/3D-BoundingBox: PyTorch implementation for 3D Bounding Box Estimation Using Deep Learning and Geometry

Repository: https://github.com/skhadem/3D-BoundingBox

Language: Python 44.6%


3D Bounding Box Estimation Using Deep Learning and Geometry

If interested, join the slack workspace where the paper is discussed, issues are worked through, and more! Click this link to join.

Introduction

PyTorch implementation for this paper.

example-image

At the moment, it takes approximately 0.4s per frame, depending on the number of objects detected. Speed improvements are planned. Here is the current fastest possible: example-video

Requirements

  • PyTorch
  • CUDA
  • OpenCV >= 3.4.3
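
The OpenCV >= 3.4.3 floor matters because the repo runs YOLOv3 through OpenCV's dnn module, which only gained that support in the 3.4.x series. A minimal sketch for checking an installed version string against that minimum (meets_minimum is a hypothetical helper, not part of the repo):

```python
def meets_minimum(installed: str, required: str = "3.4.3") -> bool:
    """Compare dotted version strings numerically (e.g. cv2.__version__).

    Note: assumes purely numeric components; suffixes like "-dev" would
    need extra handling.
    """
    as_tuple = lambda v: tuple(int(part) for part in v.split(".")[:3])
    return as_tuple(installed) >= as_tuple(required)

# Usage (assumes OpenCV is installed):
# import cv2
# assert meets_minimum(cv2.__version__), "OpenCV >= 3.4.3 is required"
```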

Usage

In order to download the weights:

cd weights/
./get_weights.sh

This will download pre-trained weights for the 3D BoundingBox net, as well as YOLOv3 weights from the official YOLO source.

If the script is not working: pre-trained weights and YOLO weights

To see all the options:

python Run.py --help

Run through all images in the default directory (eval/image_2/), optionally also drawing the 2D bounding boxes. Press SPACE to proceed to the next image, and any other key to exit.

python Run.py [--show-yolo]

Note: see the Training section for where to download the data.

There is also a script provided to download the default video from Kitti into ./eval/video. Alternatively, download any Kitti video with its corresponding calibration, and use --image-dir and --cal-dir to specify where to read the frames from.

python Run.py --video [--hide-debug]

Training

First, the data must be downloaded from Kitti. Download the left color images, the training labels, and the camera calibration matrices (~13 GB total). Unzip the downloads into the Kitti/ directory.

python Train.py

By default, the model is saved every 10 epochs in weights/, and the loss is printed every 10 batches. The loss should not converge to 0! The orientation loss is driven toward -1, so a negative total loss is expected. The hyper-parameters to tune are alpha and w (see the paper). I obtained good results after just 10 epochs, but the training script will run until 100.
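
The negative convergence target comes from the paper's orientation loss, which rewards the cosine of the angle error and so bottoms out at -1 for a perfect prediction. A simplified NumPy sketch of that idea (the function name is mine, and the repo's actual MultiBin loss also includes a bin-confidence term):

```python
import numpy as np

def orientation_loss(theta_gt, theta_pred):
    """Negative mean cosine of the angle error: -1 when every predicted
    angle matches its ground truth, approaching +1 when predictions
    are off by pi."""
    return -np.mean(np.cos(np.asarray(theta_gt) - np.asarray(theta_pred)))

# A perfect prediction gives the minimum:
orientation_loss([0.0, 1.2, -0.5], [0.0, 1.2, -0.5])  # -> -1.0
```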

How it works

The PyTorch neural net takes in images of size 224x224 and predicts the orientation and the object's dimensions relative to the class average. Thus, another network must supply the 2D bounding box and object class; I chose to use YOLOv3 through OpenCV. Using the orientation, dimensions, and 2D bounding box, the 3D location is calculated and then back-projected onto the image.
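
The dimension-residual and back-projection steps above can be sketched as follows (the class-average numbers and function names here are illustrative assumptions, not the repo's actual values; the real averages are computed from the Kitti training labels):

```python
import numpy as np

# Hypothetical per-class average dimensions (h, w, l) in meters.
CLASS_AVERAGES = {"car": np.array([1.53, 1.63, 3.88])}

def recover_dimensions(class_name, predicted_residual):
    """The net predicts dimensions as a residual over the class average."""
    return CLASS_AVERAGES[class_name] + predicted_residual

def back_project(K, point_3d):
    """Project a camera-frame 3D point onto the image via intrinsics K."""
    p = K @ point_3d
    return p[:2] / p[2]
```

Drawing the 3D box then amounts to running each computed box corner through back_project and connecting the resulting pixel coordinates.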

There are 2 key assumptions made:

  1. The 2D bounding box fits very tightly around the object
  2. The object has ~0 pitch and ~0 roll (valid for cars on the road)
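
Under these two assumptions, each side of the tight 2D box pins the projection of one 3D box corner, which yields an overdetermined linear system for the object's translation. A least-squares sketch of that geometry (the corner-to-side assignment is given explicitly here; a real implementation, as in the paper, must search over the candidate corner configurations):

```python
import numpy as np

def solve_translation(K, R, corners, box_2d):
    """Solve for T in K @ (R @ X + T) ~ image point, where each 2D box
    side (x_min, y_min, x_max, y_max) is touched by one object-frame
    corner X. Eliminating the projective scale gives one linear equation
    in T per side: 4 equations, 3 unknowns -> least squares."""
    sides = [(0, box_2d[0]), (1, box_2d[1]), (0, box_2d[2]), (1, box_2d[3])]
    A, b = [], []
    for (axis, value), X in zip(sides, corners):
        row = K[axis] - value * K[2]   # enforces (K_axis - value*K_2) . P = 0
        A.append(row)
        b.append(-row @ (R @ X))
    T, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return T
```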

Future Goals

  • Train custom YOLO net on the Kitti dataset
  • Some type of Pose visualization (ROS?)

Credit

I originally started from a fork of this repo, and some of the original code still exists in the training script.



