Title | Real-time Reconstruction of Unstructured Scenes Based on Binocular Vision Depth |
---|---|
Authors | Ying-Gang Xie, 林瑞榮, Guangjun Liu, Jiangyu Lan, 林曉蓉 |
Abstract | Binocular vision simulates the principle of human vision and uses a computer to perceive distance passively. By obtaining the depth-of-field information of an object, the actual distance between the object and the camera can be calculated. Based on an improved SURF (Speeded-Up Robust Features) algorithm, this paper implements image feature extraction from different perspectives. Image fusion and feature-point matching are then achieved with an improved Sobel algorithm. The triangulation principle is used to calculate the offset between pixels and obtain the three-dimensional information of the object; the three-dimensional coordinates are reconstructed and the actual depth is analyzed. A bad-point culling rule is established based on the numerical relationship of the image sequence, and finally the visual depth information is used to construct the unstructured 3D (three-dimensional) scene in real time. Experimental results show that, for targets in an actual unstructured scene, the average error is 2.99% over the 0.1 m to 3 m range and 5.81% over the 3 m to 10 m range; the system achieves high measurement accuracy and a good 3D reconstruction effect. |
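The abstract's triangulation step follows the standard stereo relation Z = f·B/d, where Z is depth, f the focal length in pixels, B the camera baseline, and d the horizontal pixel disparity between matched points. A minimal sketch of this computation, with illustrative calibration values that are not taken from the paper:

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth (metres) of a matched stereo point via triangulation: Z = f * B / d.

    disparity_px -- horizontal pixel offset between the matched feature points
    focal_px     -- camera focal length expressed in pixels
    baseline_m   -- distance between the two camera centres in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

# Assumed example calibration: f = 800 px, baseline B = 0.12 m.
# A 32 px disparity then corresponds to a depth of 3.0 m.
print(depth_from_disparity(32, 800, 0.12))
```

Note the inverse relationship: as disparity shrinks at longer ranges, a fixed ±1 px matching error causes a growing depth error, which is consistent with the larger average error the paper reports for the 3 m to 10 m range.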
Pages | 1611-1623 |
Keywords | Binocular vision, Three-dimensional reconstruction, Visual depth, SURF algorithm |
Journal | Journal of Internet Technology (網際網路技術學刊) |
Issue | September 2019 (Vol. 20, No. 5) |
Publisher | Taiwan Academic Network Management Committee |
DOI | |