Visual odometry

The optical flow vector of a moving object in a video sequence.

In robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images. It has been used in a wide variety of robotic applications, such as on the Mars Exploration Rovers.[1]

Overview

In navigation, odometry is the use of data from the movement of actuators, measured by devices such as rotary encoders that count wheel rotations, to estimate change in position over time. While useful for many wheeled or tracked vehicles, traditional odometry techniques cannot be applied to mobile robots with non-standard locomotion methods, such as legged robots. In addition, odometry universally suffers from precision problems, since wheels tend to slip and slide on the floor, so that the distance traveled does not correspond uniformly to the wheel rotations. The error is compounded when the vehicle operates on non-smooth surfaces. Odometry readings become increasingly unreliable as these errors accumulate and compound over time.

Visual odometry is the process of determining equivalent odometry information using sequential camera images to estimate the distance traveled. Visual odometry allows for enhanced navigational accuracy in robots or vehicles using any type of locomotion on any[citation needed] surface.

Types

There are various types of VO.

Monocular and stereo

Depending on the camera setup, VO can be categorized as Monocular VO (single camera) or Stereo VO (two cameras in a stereo setup).

Feature-based and direct method

Traditional VO obtains its visual information with the feature-based method, which extracts characteristic image feature points and tracks them through the image sequence. Recent developments in VO research have provided an alternative, called the direct method, which uses the pixel intensities in the image sequence directly as visual input. There are also hybrid methods.

Visual inertial odometry

If an inertial measurement unit (IMU) is used within the VO system, it is commonly referred to as Visual Inertial Odometry (VIO). VIO is widely used in commercial quadcopters, where it provides localization in GPS-denied situations.

Algorithm

Most existing approaches to visual odometry are based on the following stages.

  1. Acquire input images: using either single cameras,[2][3] stereo cameras,[3][4] or omnidirectional cameras.[5][6]
  2. Image correction: apply image processing techniques for lens distortion removal, etc.
  3. Feature detection: define interest operators, match features across frames, and construct an optical flow field.
    1. Use correlation, rather than long-term feature tracking, to establish the correspondence of two images.
    2. Feature extraction and correlation.
    3. Construct optical flow field (Lucas–Kanade method).
  4. Check flow field vectors for potential tracking errors and remove outliers.[7]
  5. Estimation of the camera motion from the optical flow.[8][9][10][11]
    1. Choice 1: use a Kalman filter to maintain the state-estimate distribution.
    2. Choice 2: find the geometric and 3D properties of the features that minimize a cost function based on the re-projection error between two adjacent images. This can be done by mathematical minimization or random sampling.
  6. Periodic repopulation of trackpoints to maintain coverage across the image (a code sketch of the full pipeline follows this list).
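
A minimal sketch of the feature-based pipeline above for the monocular case, assuming OpenCV and NumPy, grayscale input frames, and a known camera intrinsic matrix K; the function name and parameter values are illustrative rather than a reference implementation:

    import numpy as np
    import cv2

    def relative_pose(prev_gray, curr_gray, K):
        # Step 3: detect features in the first frame (Shi-Tomasi corners).
        p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=2000,
                                     qualityLevel=0.01, minDistance=7)
        # Step 3.3: track them into the next frame with pyramidal Lucas-Kanade optical flow.
        p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
        # Step 4: discard points that could not be tracked (basic outlier removal).
        ok = status.ravel() == 1
        pts0, pts1 = p0[ok].reshape(-1, 2), p1[ok].reshape(-1, 2)
        # Step 5: estimate the essential matrix with RANSAC (random-sampling outlier
        # rejection) and decompose it into a rotation R and a translation t;
        # with a single camera, t is recovered only up to scale.
        E, mask = cv2.findEssentialMat(pts1, pts0, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts0, K, mask=mask)
        return R, t

Chaining R and t over consecutive frame pairs, and re-detecting features when too few tracks survive (step 6), yields the camera trajectory; in the monocular case the translation scale must come from another source such as a stereo baseline, an IMU, or known scene geometry.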

An alternative to feature-based methods is the "direct" or appearance-based visual odometry technique, which minimizes an error directly in sensor space and thereby avoids feature extraction and matching.[4][12][13]
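
As a toy illustration of what minimizing an error "directly in sensor space" means, the sketch below scores candidate motions by the photometric difference between raw pixel intensities. It is a deliberate simplification, not any of the cited systems: it uses a 2-DoF integer image translation in place of the full 6-DoF camera motion that direct methods actually optimize.

    import numpy as np

    def photometric_error(ref, cur, dx, dy):
        # Hypothesis: cur[y, x] corresponds to ref[y + dy, x + dx].
        h, w = ref.shape
        y0, y1 = max(0, -dy), min(h, h - dy)
        x0, x1 = max(0, -dx), min(w, w - dx)
        diff = (ref[y0 + dy:y1 + dy, x0 + dx:x1 + dx].astype(np.float64)
                - cur[y0:y1, x0:x1].astype(np.float64))
        # Mean squared intensity difference over the overlapping region.
        return np.mean(diff ** 2)

    def best_shift(ref, cur, radius=8):
        # Exhaustive search over small integer shifts; real direct methods
        # minimize this kind of error with gradient-based optimization over pose.
        candidates = [(dx, dy) for dx in range(-radius, radius + 1)
                               for dy in range(-radius, radius + 1)]
        return min(candidates, key=lambda s: photometric_error(ref, cur, *s))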

Another method, coined 'visiodometry', estimates the planar roto-translations between images using phase correlation instead of extracting features.[14][15]
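
A minimal sketch of the phase-correlation idea, assuming OpenCV's cv2.phaseCorrelate and single-channel frames; it recovers only the translational component, and whether the cited visiodometry work uses this exact routine is an assumption.

    import numpy as np
    import cv2

    def planar_shift(frame_a, frame_b):
        # Phase correlation matches the two whole images in the Fourier domain,
        # so no features are extracted or tracked.
        a = np.float32(frame_a)
        b = np.float32(frame_b)
        (dx, dy), response = cv2.phaseCorrelate(a, b)
        return dx, dy, response  # sub-pixel shift and a correlation peak strength

The in-plane rotation component can in principle be recovered by a second phase correlation on log-polar resampled magnitude spectra (the Fourier-Mellin approach), which is omitted here.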

Egomotion

Egomotion estimation using corner detection

Egomotion is defined as the 3D motion of a camera within an environment.[16] In the field of computer vision, egomotion refers to estimating a camera's motion relative to a rigid scene.[17] An example of egomotion estimation would be estimating a car's changing position relative to lines on the road or street signs observed from the car itself. The estimation of egomotion is important in autonomous robot navigation applications.[18]

Overview

The goal of estimating the egomotion of a camera is to determine the 3D motion of that camera within the environment using a sequence of images taken by the camera.[19] The process of estimating a camera's motion within an environment involves the use of visual odometry techniques on a sequence of images captured by the moving camera.[20] This is typically done using feature detection to construct an optical flow from two image frames in a sequence[16] generated from either single cameras or stereo cameras.[20] Using stereo image pairs for each frame helps reduce error and provides additional depth and scale information.[21][22]

Features are detected in the first frame and then matched in the second frame. This information is then used to construct the optical flow field for the detected features in those two images. The optical flow field illustrates how features diverge from a single point, the focus of expansion. The focus of expansion can be detected from the optical flow field, indicating the direction of the camera's motion and thus providing an estimate of the camera motion.
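
A minimal sketch of locating the focus of expansion from such a flow field, assuming pure translational camera motion (rotation already compensated) and NumPy; the array layout and function name are illustrative. Each flow vector defines a line through its feature point, and the focus of expansion is taken as the least-squares intersection of those lines.

    import numpy as np

    def focus_of_expansion(points, flows):
        # points: (N, 2) feature positions (x, y); flows: (N, 2) flow vectors (u, v).
        d = flows / np.linalg.norm(flows, axis=1, keepdims=True)  # unit flow directions
        n = np.stack([-d[:, 1], d[:, 0]], axis=1)                 # normals of the flow lines
        A = n.T @ n                                               # 2x2 normal-equation matrix
        b = n.T @ np.sum(n * points, axis=1)                      # right-hand side
        return np.linalg.solve(A, b)                              # FOE minimizing squared line distances

The recovered point indicates the image-plane direction of the camera's translation.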

There are other methods of extracting egomotion information from images as well, including a method that avoids feature detection and optical flow fields and directly uses the image intensities.[16]

References

  1. Maimone, M.: Two years of Visual Odometry on the Mars Exploration Rovers. In: Journal of Field Robotics. 24, No. 3, 2007, pp. 169–186. doi:10.1002/rob.20184.
  2. Chhaniyara, Savan: Visual Odometry Technique Using Circular Marker Identification For Motion Parameter Estimation. In: The Eleventh International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines. World Scientific, 2008.
  3. Visual Odometry. In: Computer Vision and Pattern Recognition, 2004 (CVPR 2004), pp. I-652–I-659, Vol. 1. doi:10.1109/CVPR.2004.1315094.
  4. Comport, A.I.: Real-time Quadrifocal Visual Odometry. In: International Journal of Robotics Research. 29, 2010, pp. 245–266. doi:10.1177/0278364909356601.
  5. Scaramuzza, D.: Appearance-Guided Monocular Omnidirectional Visual Odometry for Outdoor Ground Vehicles. In: IEEE Transactions on Robotics. 24, No. 5, October 2008, pp. 1015–1026. doi:10.1109/TRO.2008.2004490.
  6. Corke, P.: Omnidirectional visual odometry for a planetary rover. doi:10.1109/IROS.2004.1390041.
  7. Campbell, J.: Techniques for evaluating optical flow for visual odometry in extreme terrain. doi:10.1109/IROS.2004.1389991.
  8. Sunderhauf, N.: Visual odometry using sparse bundle adjustment on an autonomous outdoor vehicle. In: Tagungsband Autonome Mobile Systeme 2005 (Reihe Informatik aktuell). Springer Verlag, 2005, pp. 157–163. Archived from the original on 11 February 2009 (retrieved 10 July 2008).
  9. Konolige, K.: Outdoor mapping and navigation using stereo vision. In: Proc. of the Intl. Symp. on Experimental Robotics (ISER). 39, 2006, pp. 179–190. doi:10.1007/978-3-540-77457-0_17.
  10. Olson, C.F.: Rover navigation using stereo ego-motion. In: Robotics and Autonomous Systems. 43, No. 4, 2002, pp. 215–229. doi:10.1016/s0921-8890(03)00004-6.
  11. Cheng, Y.: Visual Odometry on the Mars Exploration Rovers. In: IEEE Robotics and Automation Magazine. 13, No. 2, 2006, pp. 54–62. doi:10.1109/MRA.2006.1638016.
  12. Thomas Schöps, Daniel Cremers: LSD-SLAM: Large-Scale Direct Monocular SLAM. In: European Conference on Computer Vision 2014. doi:10.1007/978-3-319-10605-2_54.
  13. Jürgen Sturm, Daniel Cremers: Semi-Dense Visual Odometry for a Monocular Camera. doi:10.1109/ICCV.2013.183.
  14. Zaman, M.: High Precision Relative Localization Using a Single Camera. doi:10.1109/ROBOT.2007.364078.
  15. Zaman, M.: High resolution relative localisation using two cameras. In: Journal of Robotics and Autonomous Systems (JRAS). 55, No. 9, 2007, pp. 685–692. doi:10.1016/j.robot.2007.05.008.
  16. Irani, M.: Recovery of Ego-Motion Using Image Stabilization. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition. June 1994, pp. 21–23.
  17. Burger, W.: Estimating 3D egomotion from perspective image sequence. In: IEEE Transactions on Pattern Analysis and Machine Intelligence. 12, No. 11, Nov 1990, pp. 1040–1058. doi:10.1109/34.61704.
  18. Shakernia, O.: Omnidirectional Egomotion Estimation From Back-projection Flow. In: Conference on Computer Vision and Pattern Recognition Workshop. 7, 2003, p. 82. doi:10.1109/CVPRW.2003.10074.
  19. Tian, T.: Comparison of Approaches to Egomotion Computation. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 1996, p. 315.
  20. Milella, A.: Stereo-Based Ego-Motion Estimation Using Pixel Tracking and Iterative Closest Point. In: IEEE International Conference on Computer Vision Systems. January 2006, p. 21.
  21. Olson, C. F.: Rover navigation using stereo ego-motion. In: Robotics and Autonomous Systems. 43, No. 9, June 2003, pp. 215–229. doi:10.1016/s0921-8890(03)00004-6.
  22. Sudin Dinesh, Koteswara Rao, K.; Unnikrishnan, M.; Brinda, V.; Lalithambika, V.R.; Dhekane, M.V.: Improvements in Visual Odometry Algorithm for Planetary Exploration Rovers. In: IEEE International Conference on Emerging Trends in Communication, Control, Signal Processing & Computing Applications (C2SPCA), 2013.