gestom/whycon-orig: WhyCon localization system - non-ROS version used in the exp ...


Repository: gestom/whycon-orig

URL: https://github.com/gestom/whycon-orig

Language: C 54.3%

Introduction:

Test video: https://uloz.to/file/zF5w5Tw4WbGz/test-avi

The latest version of the system is in the whycon-ros-full branch.

WhyCon

A precise, efficient and low-cost localization system

WhyCon is a vision-based localization system that can be used with low-cost web cameras and achieves millimeter precision with very high performance. The system is capable of efficient real-time detection and precise position estimation of several circular markers in a video stream. It can be used both off-line, as a source of ground truth for robotics experiments, and on-line, as a component of robotic systems that require real-time, precise position estimation. WhyCon is meant as an alternative to widely used but expensive localization systems, and it is fully open-source. WhyCon-orig is WhyCon's original, minimalistic version, designed to be independent of ROS and OpenCV.

WhyCon example applications (each with a video and scenario description):
- precise docking to a charging station (EU project STRANDS),
- fitness evaluation for self-evolving robots (EU project SYMBRION),
- relative localization of UAV-UGV formations (CZ-USA project COLOS),
- energy source localization (EU project REPLICATOR),
- robotic swarm localization (EU project HAZCEPT).

The WhyCon system was developed as a joint project between the University of Buenos Aires, the Czech Technical University and the University of Lincoln, UK. The main contributors were Matias Nitsche, Tom Krajnik, Peter Lightbody and Jan Faigl. Each of these contributors maintains a slightly different version of WhyCon.

WhyCon version  Application  Main features                              Maintainer
WhyCon-orig     general      2D, 3D, ROS, lightweight, autocalibration  Tom Krajnik
WhyCon-ROS      general      2D, ROS                                    Matias Nitsche
SwarmCon        μ-swarms     2D, individual IDs, autocalibration        Tom Krajnik
Caspa-WhyCon    UAVs         embedded, open HW-SW solution              Jan Faigl
Social-card     HRI          ROS, allows commanding a robot             Tom Krajnik

Where is it described?

WhyCon was first presented at the International Conference on Advanced Robotics 2013 [2], later in the Journal of Intelligent and Robotic Systems [1], and finally at the Workshop on Open Source Aerial Robotics during the International Conference on Intelligent Robots and Systems, 2015 [3]. Its early version was also presented at the International Conference on Robotics and Automation, 2013 [4]. An extension of the system, which used a necklace code to add IDs to the tags, received a best paper award at the SAC 2017 conference [5]. If you decide to use this software for your research, please cite WhyCon using one of the references provided in this bibtex file.


Setting up WhyCon

Prepare prerequisites

  1. Make sure your system is up to date: sudo apt-get update.
  2. Install the required libraries: sudo apt-get install libsdl1.2-dev libsdl-ttf2.0-dev libncurses5-dev.
  3. Install git, guvcview etc.: sudo apt install git guvcview.
  4. Run guvcview, check that you can see your camera feed, adjust your camera settings (exposure, brightness etc.) and note the available resolutions. (The commands above are collected in the sketch after this list.)
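
The steps above as a single shell sketch (assuming a Debian/Ubuntu system, which the apt commands imply):

    sudo apt-get update                  # refresh the package lists
    sudo apt-get install libsdl1.2-dev libsdl-ttf2.0-dev libncurses5-dev
    sudo apt install git guvcview        # version control and a camera viewer
    guvcview                             # check the feed, note the supported resolutions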

Compile, run and test

  1. Download the software from GitHub: git clone https://github.com/gestom/whycon-orig.git, then go to the src directory.
  2. Adjust the camera resolution in main/whycon.cpp.
  3. Compile the software - just type make.
  4. Download, resize and print one circular pattern - the pattern is also provided in the whycon-orig/etc/test.pdf file.
  5. Try a test run - you need to run the binary in the bin directory. Type ../bin/whycon /dev/videoX 1, where X is the number of the camera and 1 tells the system to track one pattern.
  6. You should see the image with some numbers below the circle. Pressing D shows the segmentation result.
  7. At this point, you can also change the camera brightness, exposure and contrast by pressing (SHIFT) b, e, c respectively. These settings are stored in etc/camera.cfg and reloaded on restart.
  8. Open localhost:6666 in your browser. You should see the circle position. (A combined command sketch follows this list.)
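
A minimal run-through of the steps above, assuming the camera is /dev/video0 and a single pattern is tracked:

    git clone https://github.com/gestom/whycon-orig.git
    cd whycon-orig/src
    # adjust the camera resolution in main/whycon.cpp before building
    make
    ../bin/whycon /dev/video0 1    # track one pattern from camera 0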

Setting up the coordinate system

  1. Calibrate your camera using the MATLAB (or Octave) calibration toolbox and put the resulting Calib_Results.m in the etc directory.
  2. If you have resized the markers (their default diameter is 122mm), then adjust their diameter in the main/whycon.cpp file.
  3. Print four additional circular markers and place them at the corners of your (rectangular) operational space.
  4. Modify the dimensions of the operational space in main/whycon.cpp and call make to recompile - the system will now assume that the four markers are at positions [0,0], [fieldLength,0], [0,fieldWidth], [fieldLength,fieldWidth].
  5. Position and fixate your camera so that it has all four circles in its field of view.
  6. Go to the bin directory and run ./whycon /dev/videoX Y, where X is the number of your camera and Y is the number of patterns you want to track, i.e. Y=NxM+4 (a worked example follows this list).
  7. Once all the patterns are found, press a and the four outermost patterns will be used to calculate the coordinate system.
  8. Alternatively, you can press r and then click the four circles that define the coordinate system.
  9. Pressing 1 shows the patterns' positions in camera-centric coordinates (the x-axis coincides with the camera's optical axis); pressing 2 and 3 displays marker coordinates in the user-defined 2D or 3D coordinate systems.
  10. Pressing +,- changes the number of localized patterns.
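
For example, localizing a 2x3 grid of robots plus the four field-corner markers gives Y = 2x3+4 = 10 (the camera index is an assumption):

    cd bin
    ./whycon /dev/video0 10    # 6 robot markers + 4 corner markers
    # once all 10 patterns are found, press 'a' (autocalibration)
    # or 'r' and click the four corner circles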

To postprocess the stored videos

  1. To create a log of the robot positions, simply create an output folder in the directory where you run whycon.
  2. If your camera supports the MJPEG format, then the system will create a video in the output folder as well.
  3. If your camera does not support MJPEG, whycon will save the video feed as a series of bitmaps, which you can process later as well.
  4. You can run whycon video_file_name Y to process that video in the same way as when using the camera, i.e. with video_file_name in place of /dev/videoX.
  5. Processing a saved video rather than the live camera feed is likely to provide more precise results.
  6. Running the system with the nogui argument, e.g. ./whycon /dev/video0 1 nogui, produces text-only output - this can speed up postprocessing.
  7. Logs and videos might be large - to prevent saving them, run the system with the nolog or novideo argument. (A combined sketch follows this list.)
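
A postprocessing sketch based on the steps above; the recorded file name output/video.avi is hypothetical, and combining the arguments is an assumption:

    mkdir -p output                      # whycon writes position logs (and video) here
    ./whycon /dev/video0 1               # a live run that records into output/
    ./whycon output/video.avi 1 nogui    # hypothetical file name; reprocess off-line, text-only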

Some additional remarks

  1. At this point, you can start experimenting with the system by adding whatever features you might find useful.
  2. We have tried to comment the code so that an experienced programmer should be able to alter the system accordingly. However, if you have any questions regarding the code, feel free to contact Tom Krajnik or Matias Nitsche.
  3. If you use this localization system for your research, please don't forget to cite at least one relevant paper from these bibtex records.
  4. Have fun!

Dependencies

All of the following libraries should be available in your distribution's standard packages.

  1. libsdl1.2-dev for the graphical user interface.
  2. libsdl-ttf2.0-dev to print stuff in the GUI.
  3. libncurses5-dev to print stuff on the terminal.
  4. guvcview to set up the camera.

References

  1. T. Krajník, M. Nitsche et al.: A Practical Multirobot Localization System. Journal of Intelligent and Robotic Systems (JINT), 2014. [bibtex].
  2. T. Krajník, M. Nitsche et al.: External localization system for mobile robotics. International Conference on Advanced Robotics (ICAR), 2013. [bibtex].
  3. M. Nitsche, T. Krajník et al.: WhyCon: An Efficient, Marker-based Localization System. IROS Workshop on Open Source Aerial Robotics, 2015. [bibtex].
  4. J. Faigl, T. Krajník et al.: Low-cost embedded system for relative localization in robotic swarms. International Conference on Robotics and Automation (ICRA), 2013. [bibtex].
  5. P. Lightbody, T. Krajník et al.: A versatile high-performance visual fiducial marker detection system with scalable identity encoding. Symposium on Applied Computing (SAC), 2017. [bibtex].

Acknowledgements

The development of this work is currently supported by the Czech Science Foundation project 17-27006Y STRoLL. In the past, the work was supported by the EU within its Seventh Framework Programme, project ICT-600623 STRANDS. The Czech Republic and Argentina have also given support through projects 7AMB12AR022, ARC/11/11 and 13-18316P. We sincerely acknowledge Jean Pierre Moreau for his excellent libraries for numerical analysis, which we use in our project.



