
Reproducing yolo-ORBSLAM2

Author: 张小明 (front-end engineer)

This is a classic problem by now. My plan is to reproduce the existing work first and then modify it: I will not be using YOLO as my detector, but I first need to understand how the detection boxes are fed into SLAM, so I am starting by reproducing what others have built. Main references:

https://github.com/JinYoung6/orbslam_addsemantic

https://blog.csdn.net/jiny_yang/article/details/116308845

(This author also implemented Python-to-C++ communication to read detection boxes in real time. I plan to do the same later, but via ROS.)

Many thanks to the original authors for their work!

Before starting, it is best to build and run the official ORB_SLAM2 first so that your environment is known to be good; see my earlier article for details.

My setup: Ubuntu 20.04, ROS Noetic.

I. Building

If the build fails, these guides cover the common errors:

https://blog.csdn.net/weixin_52519143/article/details/127000332

https://github.com/NeSC-IV/cube_slam-on-ubuntu20/blob/master/%E7%BC%96%E8%AF%91%E6%8C%87%E5%8D%97CubeSLAM%20Monocular%203D%20Object%20SLAM.md

The errors mostly come down to OpenCV version mismatches: most people now have OpenCV 4 installed, while ORB_SLAM2 was written against OpenCV 3, so many headers and function names no longer match, and the find_package call in the CMakeLists fails as well. Bump the version there and fix the API calls one by one as the compiler reports them. (This is already my second time doing this port; I previously fixed up a ROS version too, so it gets easier with practice.) Once I finish debugging I will publish the modified code on GitHub.
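As a hedged illustration (the exact edits depend on your error messages and your OpenCV install, and the version number below is an example, not taken from the repo), the find_package section of ORB_SLAM2's CMakeLists.txt typically changes along these lines:

```cmake
# Before (upstream ORB_SLAM2 expects OpenCV 3.x):
#   find_package(OpenCV 3.0 QUIET)

# After: accept the OpenCV 4 that ships with Ubuntu 20.04.
find_package(OpenCV 4 QUIET)
if(NOT OpenCV_FOUND)
    message(FATAL_ERROR "OpenCV >= 4 not found.")
endif()
```

In the C++ sources, the usual companion fixes are renaming removed OpenCV 3 macros to their OpenCV 4 equivalents (for example `CV_LOAD_IMAGE_UNCHANGED` to `cv::IMREAD_UNCHANGED`) as each compile error points them out.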

```shell
cd your path
chmod +x build.sh
./build.sh
```

Build succeeded:

II. Running

1. Associating RGB and depth images

Use the TUM dataset.

The official associate.py script matches depth-image timestamps to RGB-image timestamps. I modified it to fix Python 2 vs Python 3 incompatibilities (print statements, dictionary views); here is the modified version:

```python
#!/usr/bin/python
# Software License Agreement (BSD License)
#
# Copyright (c) 2013, Juergen Sturm, TUM
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of TUM nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# Requirements:
# sudo apt-get install python-argparse

"""
The Kinect provides the color and depth images in an un-synchronized way.
This means that the set of time stamps from the color images do not
intersect with those of the depth images. Therefore, we need some way of
associating color images to depth images.

For this purpose, you can use the ''associate.py'' script. It reads the
time stamps from the rgb.txt file and the depth.txt file, and joins them
by finding the best matches.
"""

import argparse
import sys
import os
import numpy


def read_file_list(filename):
    """
    Reads a trajectory from a text file.

    File format:
    The file format is "stamp d1 d2 d3 ...", where stamp denotes the time
    stamp (to be matched) and "d1 d2 d3.." is arbitary data (e.g., a 3D
    position and 3D orientation) associated to this timestamp.

    Input:
    filename -- File name

    Output:
    dict -- dictionary of (stamp,data) tuples
    """
    file = open(filename)
    data = file.read()
    lines = data.replace(","," ").replace("\t"," ").split("\n")
    list = [[v.strip() for v in line.split(" ") if v.strip()!=""] for line in lines if len(line)>0 and line[0]!="#"]
    list = [(float(l[0]),l[1:]) for l in list if len(l)>1]
    return dict(list)


def associate(first_list, second_list, offset, max_difference):
    """
    Associate two dictionaries of (stamp,data). As the time stamps never
    match exactly, we aim to find the closest match for every input tuple.

    Input:
    first_list -- first dictionary of (stamp,data) tuples
    second_list -- second dictionary of (stamp,data) tuples
    offset -- time offset between both dictionaries (e.g., to model the
              delay between the sensors)
    max_difference -- search radius for candidate generation

    Output:
    matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))
    """
    first_keys = list(first_list.keys())
    second_keys = list(second_list.keys())
    potential_matches = [(abs(a - (b + offset)), a, b)
                         for a in first_keys
                         for b in second_keys
                         if abs(a - (b + offset)) < max_difference]
    potential_matches.sort()
    matches = []
    for diff, a, b in potential_matches:
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))
    matches.sort()
    return matches


if __name__ == '__main__':
    # parse command line
    parser = argparse.ArgumentParser(description='''
    This script takes two data files with timestamps and associates them ''')
    parser.add_argument('first_file', help='first text file (format: timestamp data)')
    parser.add_argument('second_file', help='second text file (format: timestamp data)')
    parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
    parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)', default=0.0)
    parser.add_argument('--max_difference', help='maximally allowed time difference for matching entries (default: 0.02)', default=0.02)
    args = parser.parse_args()

    first_list = read_file_list(args.first_file)
    second_list = read_file_list(args.second_file)

    matches = associate(first_list, second_list, float(args.offset), float(args.max_difference))

    if args.first_only:
        for a, b in matches:
            print("%f %s"%(a," ".join(first_list[a])))
    else:
        for a, b in matches:
            print("%f %s %f %s"%(a," ".join(first_list[a]),b-float(args.offset)," ".join(second_list[b])))
```
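To see what the matching actually does, here is a minimal self-contained sketch of the same greedy nearest-timestamp association, run on toy timestamps rather than the TUM files:

```python
# Minimal re-implementation of associate.py's matching logic on toy data.

def associate(first_keys, second_keys, offset=0.0, max_difference=0.02):
    """Greedily pair timestamps whose difference is below max_difference."""
    first = list(first_keys)
    second = list(second_keys)
    # All candidate pairs within the search radius, smallest gap first.
    potential = sorted(
        (abs(a - (b + offset)), a, b)
        for a in first for b in second
        if abs(a - (b + offset)) < max_difference
    )
    matches = []
    for _, a, b in potential:
        if a in first and b in second:   # each stamp may be used only once
            first.remove(a)
            second.remove(b)
            matches.append((a, b))
    matches.sort()
    return matches

rgb   = [1.000, 1.033, 1.066]   # toy RGB timestamps (seconds)
depth = [1.005, 1.040, 1.100]   # toy depth timestamps

# 1.000 pairs with 1.005 and 1.033 with 1.040; 1.066 has no depth frame
# within 20 ms, so it is dropped -- exactly why some frames disappear
# from the associations file.
print(associate(rgb, depth))
```

Greedy matching on the sorted candidate list guarantees that each timestamp is used at most once and that the globally smallest gaps win first.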

Run it with:

```shell
python associate.py /home/yz/orbslam_addsemantic/dataset/TUM/rgbd_dataset_freiburg3_walking_xyz/rgb.txt /home/yz/orbslam_addsemantic/dataset/TUM/rgbd_dataset_freiburg3_walking_xyz/depth.txt > /home/yz/orbslam_addsemantic/dataset/TUM/rgbd_dataset_freiburg3_walking_xyz/walking_xyz_associations.txt
```

Remember to substitute your own paths.

2. Running the system

```shell
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM3.yaml /home/yz/orbslam_addsemantic/dataset/TUM/rgbd_dataset_freiburg3_walking_xyz /home/yz/orbslam_addsemantic/dataset/TUM/rgbd_dataset_freiburg3_walking_xyz/walking_xyz_associations.txt /home/yz/orbslam_addsemantic/detect_result/TUM_f3xyz_yolov5m/detect_result/
```

Again, substitute your own paths; the extra final argument points at the directory of pre-computed YOLO detection results.
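To make that last argument concrete, here is a hypothetical Python sketch of parsing one per-frame detection file from such a directory. The line format shown ("label confidence x1 y1 x2 y2") is an assumption for illustration only, not taken from the repo; inspect the actual files in detect_result/ before relying on it.

```python
# Hypothetical parser for one per-frame detection file.
# ASSUMED format, one box per line: "label confidence x1 y1 x2 y2".

def load_boxes(lines):
    """Parse detection lines into (label, confidence, (x1, y1, x2, y2))."""
    boxes = []
    for line in lines:
        parts = line.split()
        if len(parts) < 6:          # skip empty or malformed lines
            continue
        label = parts[0]
        conf = float(parts[1])
        x1, y1, x2, y2 = map(float, parts[2:6])
        boxes.append((label, conf, (x1, y1, x2, y2)))
    return boxes

# A file object iterates over its lines, so usage would look like:
#   boxes = load_boxes(open("detect_result/0001.txt"))
print(load_boxes(["person 0.99 10 20 110 220"]))
```

On the C++ side, the SLAM front end would read the file whose name matches the current frame and use these boxes to flag keypoints falling inside "person" regions as dynamic.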

At this point you may hit a classic crash:

```
New map created with 310 points
Segmentation fault (core dumped)
```

Two fixes:

Fix 1: in Frame.cc, around line 1162, change

```cpp
const float d = imDepth.at<float>(v,u);
```

to:

```cpp
if(v<0 || u<0)
    continue;
const float d = imDepth.at<float>(v,u);
```

(If the box coordinates can also exceed the image size, it is worth guarding the upper bounds as well, e.g. `v >= imDepth.rows || u >= imDepth.cols`.)

Fix 2: https://github.com/raulmur/ORB_SLAM2/issues/341

Remove -march=native from ORB_SLAM2's CMakeLists.txt and from g2o's CMakeLists.txt, then rerun ./build.sh in the ORB_SLAM2 directory. I had initially forgotten the g2o CMakeLists.txt, which is why the crash took me so long to track down.

It runs now, and you can see that the dynamic points on the walking pedestrians are culled.

————————————

Thanks again to the original authors; their work saved me a lot of pain.

Next, I will dig into how this code feeds the detection boxes into SLAM, and then make my own modifications.
