Vision-based positioning and navigation with 3D maps: concepts and analysis

Access & Terms of Use
open access
Copyright: Olesk, Aire
Abstract
Three-dimensional (3D) representations of objects are steadily working their way into the mainstream of the modern navigation world and are increasingly gaining ground due to the number of potential applications. Realistic 3D maps have recently become a common feature on the World Wide Web, in car navigation systems and on mobile phones, as they combine dimensional information with imagery and thus connect people to the world. In addition to providing visual representations of the surrounding environment, 3D navigable maps can also be used for 3D navigation when integrated with a vision sensor. This thesis is an investigation into the integration of a digital camera with 3D maps to determine the user’s 3D position and orientation for vision-based 3D navigation. One focus of this research is the historical development of 3D maps and navigable maps, and the concept of how to use these to enhance navigation. In addition, the principles of using visual measurements for 3D navigation have been defined, tested and analysed. After defining the properties of 3D navigable maps, an in-depth analysis has been carried out to outline the error sources affecting vision-based navigation and to assess the performance of the algorithms. The core of this thesis is a study of the accuracy, sensitivity and reliability of navigation solutions that use a 3D map as a navigation sensor and a single image from the user’s digital camera as information about his or her current position and orientation. Experiments were carried out with PhotoModeler® software to provide camera position estimation, and with Matlab® software for Direct Linear Transformation, Space Resection and Least Squares Estimation. This study shows that the performance of the algorithm is promising and that the user’s camera position can be estimated with decimetre-level accuracy. To achieve this, it is crucial to account for the fact that the space resection algorithm is highly sensitive to the initial orientation parameters.
Given the accuracy of camera position estimation from space resection, it is recommended that vision-based navigation and positioning be reinforced by inertial sensors, which provide more accurate orientation parameters. The accuracy of the camera position and orientation estimation is also influenced by the calibration of the camera, the image quality and resolution, the number and distribution of the ground control points, and the map itself, together with the accuracy of the surveyed ground control points.
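The Direct Linear Transformation mentioned above can be sketched as follows. This is an illustrative outline only, not the thesis implementation (the thesis used Matlab® and PhotoModeler®): the function names are hypothetical, and the sketch assumes at least six well-distributed, non-coplanar ground control points with known 3D coordinates and their measured image coordinates. It estimates the 3×4 projection matrix by homogeneous least squares and recovers the camera position as the null space of that matrix.

```python
import numpy as np

def dlt_camera_matrix(world_pts, image_pts):
    """Estimate the 3x4 projection matrix P by Direct Linear Transformation.

    world_pts: (N, 3) ground control point coordinates, N >= 6, non-coplanar.
    image_pts: (N, 2) corresponding measured image coordinates.
    Each correspondence contributes two linear equations in the 12 entries of P;
    the least-squares solution is the right singular vector of the design
    matrix associated with the smallest singular value.
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def camera_position(P):
    """Recover the camera centre C from P: it satisfies P @ [C; 1] = 0,
    so C is the (dehomogenised) null vector of P."""
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return C[:3] / C[3]

# Synthetic check with a known camera (all values here are made up for
# illustration): K is the calibration matrix, C_true the camera centre.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
C_true = np.array([1.0, 2.0, 10.0])
R = np.eye(3)
P_true = K @ np.hstack([R, (-R @ C_true)[:, None]])

world = np.array([[0, 0, 12], [4, 0, 12], [0, 4, 12], [4, 4, 14],
                  [0, 0, 16], [4, 0, 18], [0, 4, 20], [2, 2, 15]], float)
proj = (P_true @ np.hstack([world, np.ones((len(world), 1))]).T).T
image = proj[:, :2] / proj[:, 2:3]  # perspective division

C_est = camera_position(dlt_camera_matrix(world, image))
```

With noise-free synthetic measurements the recovered centre matches the true one to numerical precision; with real, noisy image measurements the residual least-squares error reflects the combined effect of the error sources listed above (calibration, image resolution and control-point quality).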
Author(s)
Olesk, Aire
Supervisor(s)
Wang, Jinling
Trinder, John
Publication Year
2011
Resource Type
Thesis
Degree Type
Masters Thesis
UNSW Faculty