Visual SLAM: A Simple Implementation of an ORB Visual Odometry (Part 1)

Overview

  • The foundation of an ORB-SLAM system is its odometry. For a simple ORB visual odometry, the first step would normally be to calibrate the camera in use; to keep things simple, however, we run the experiments on a public dataset, become familiar with the overall pipeline first, and then refine each module in turn.
  • Both monocular and stereo visual odometry have to compute the depth of 3D points, whereas an RGB-D camera measures depth directly, so we use a dataset captured with an RGB-D camera.
  • Following Chapter 9 of 14 Lectures on Visual SLAM (视觉SLAM十四讲), the code is slightly modified and reorganized to build a basic visual odometry on Windows 10 with VS2017.
  • A foundational article for learning SLAM, rich in content and worth a read.

I. Algorithm Steps of a Basic Visual Odometry

  1. Extract feature points, e.g. ORB features, from the image.
  2. Initialize the system, i.e. compute the 3D coordinates of each ORB feature in the initial world frame (the world frame is usually taken to be the camera frame of the first image).
  3. Store the initial 3D points in the map (a class holding all map points is required) for later optimization. Extract features from the next frame in the same way, match them against the previous frame, and obtain the pose transform between the two frames via the PnP algorithm.
  4. Transform the current frame's pose into the world frame, and display the current pose together with all 3D points in the viewer (a minimal end-to-end sketch follows this list).
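  • To make the four steps concrete before walking through the classes, here is a minimal, self-contained sketch that estimates the relative pose between two RGB-D frames with OpenCV alone. The intrinsics, the depth scale of 5000, and the command-line file paths are illustrative assumptions, not values taken from this project's configuration:
// steps 1-4 on a single frame pair: ORB -> match -> back-project -> PnP
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv)
{
	if (argc != 4) { std::cout << "usage: demo rgb1 depth1 rgb2" << std::endl; return 1; }
	cv::Mat rgb1 = cv::imread(argv[1]);
	cv::Mat d1 = cv::imread(argv[2], cv::IMREAD_UNCHANGED); // 16-bit depth map
	cv::Mat rgb2 = cv::imread(argv[3]);

	// step 1: extract ORB features in both frames
	cv::Ptr<cv::ORB> orb = cv::ORB::create(500);
	std::vector<cv::KeyPoint> kp1, kp2;
	cv::Mat des1, des2;
	orb->detectAndCompute(rgb1, cv::Mat(), kp1, des1);
	orb->detectAndCompute(rgb2, cv::Mat(), kp2, des2);

	// match the descriptors (brute force, Hamming distance)
	std::vector<cv::DMatch> matches;
	cv::BFMatcher(cv::NORM_HAMMING).match(des1, des2, matches);

	// step 2: back-project the matched features of frame 1 using the depth map
	const double fx = 520.9, fy = 521.0, cx = 325.1, cy = 249.7; // assumed fr2-like values
	const double depth_scale = 5000.0;                           // TUM convention
	std::vector<cv::Point3f> pts3d;
	std::vector<cv::Point2f> pts2d;
	for (const cv::DMatch& m : matches)
	{
		const cv::Point2f& p = kp1[m.queryIdx].pt;
		int u = cvRound(p.x), v = cvRound(p.y);
		if (u < 0 || v < 0 || u >= d1.cols || v >= d1.rows) continue;
		ushort d = d1.at<ushort>(v, u);
		if (d == 0) continue;                     // no depth measured here
		double z = d / depth_scale;
		pts3d.push_back(cv::Point3f((p.x - cx) * z / fx, (p.y - cy) * z / fy, z));
		pts2d.push_back(kp2[m.trainIdx].pt);      // matched 2D point in frame 2
	}

	// step 3: PnP with RANSAC yields the frame-1 -> frame-2 transform
	cv::Mat K = (cv::Mat_<double>(3, 3) << fx, 0, cx, 0, fy, cy, 0, 0, 1);
	cv::Mat rvec, tvec, inliers;
	cv::solvePnPRansac(pts3d, pts2d, K, cv::Mat(), rvec, tvec, false, 100, 4.0, 0.99, inliers);

	// step 4: taking frame 1 as the world frame, this is already T_c_w of frame 2
	cv::Mat R;
	cv::Rodrigues(rvec, R);
	std::cout << "R =" << std::endl << R << std::endl << "t =" << std::endl << tvec << std::endl;
	return 0;
}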

II. Implementation Details

1. Common Includes

  • common_include.h:
#pragma once

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>
#include <list>
#include <memory>
#include <string>
#include <iostream>
#include <set>
#include <unordered_map>
#include <map>

2. Camera Class

  • Build a camera class that converts between 2D image points and 3D points.
  • camera.h:
#ifndef CAMERA_H
#define CAMERA_H

#include "common_include.h"

namespace myslam
{
	class Camera
	{
	public:
		Camera(std::string strSettingPath);
		Camera(float fx, float fy, float cx, float cy, float depth_scale) :
			fx_(fx), fy_(fy), cx_(cx), cy_(cy), depth_scale_(depth_scale) {}

		/**
		*@brief world2camera  transform a 3D point from the world frame to the camera frame
		*@param p_w    3D point in the world frame
		*@param T_c_w  transform from the world frame to the camera frame
		*@return the transformed 3D point in the camera frame
		*/
		cv::Point3d world2camera(cv::Point3d p_w, cv::Mat T_c_w);

		/**
		*@brief camera2world  transform a 3D point from the camera frame to the world frame
		*@param p_c    3D point in the camera frame
		*@param T_w_c  transform from the camera frame to the world frame
		*@return the transformed 3D point in the world frame
		*/
		cv::Point3d camera2world(cv::Point3d p_c, cv::Mat T_w_c);

		/**
		*@brief camera2pixel  project a 3D point in the camera frame onto the image plane
		*@param p_c  3D point in the camera frame
		*@return the resulting 2D point in pixel coordinates
		*/
		cv::Point2d camera2pixel(cv::Point3d p_c);

		/**
		*@brief pixel2camera  back-project a 2D pixel into the camera frame
		*@param p_p    2D point in pixel coordinates
		*@param depth  depth of the point in the camera frame
		*@return the resulting 3D point in the camera frame
		*/
		cv::Point3d pixel2camera(cv::Point2d p_p, double depth = 1.0);

		/**
		*@brief pixel2world  back-project a 2D pixel into the world frame
		*@param p_p    2D point in pixel coordinates
		*@param T_w_c  transform from the camera frame to the world frame
		*@param depth  depth of the point in the camera frame
		*@return the resulting 3D point in the world frame
		*/
		cv::Point3d pixel2world(cv::Point2d p_p, cv::Mat T_w_c, double depth = 1.0);

		/**
		*@brief world2pixel  project a 3D point in the world frame onto the image plane
		*@param p_w    3D point in the world frame
		*@param T_c_w  transform from the world frame to the camera frame
		*@return the resulting 2D point in pixel coordinates
		*/
		cv::Point2d world2pixel(cv::Point3d p_w, cv::Mat T_c_w);

	public:
		// camera intrinsics
		float fx_, fy_, cx_, cy_;

		// scale factor converting raw depth-map values to metric depth
		float depth_scale_;
	};
}
#endif
  • Note: T_c_w denotes the transform from the world frame to the camera frame, and T_w_c the transform from the camera frame to the world frame; the two are inverses of each other.
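  • Because T_c_w is a rigid-body transform, its inverse has a closed form and never needs a general matrix inversion: with T_c_w = [R | t], the inverse is T_w_c = [R^T | -R^T t], and -R^T t is precisely the camera center in the world frame. A small sketch (assuming 4x4 CV_64F matrices; the project itself mostly stores poses as CV_32F):
#include <opencv2/opencv.hpp>

// invert a 4x4 rigid-body transform using the closed form
cv::Mat invertSE3(const cv::Mat& T_c_w)
{
	cv::Mat R = T_c_w(cv::Rect(0, 0, 3, 3));
	cv::Mat t = T_c_w(cv::Rect(3, 0, 1, 3));
	cv::Mat T_w_c = cv::Mat::eye(4, 4, CV_64FC1);
	cv::Mat Rt = R.t();                      // R^T
	Rt.copyTo(T_w_c(cv::Rect(0, 0, 3, 3)));
	cv::Mat c = -(Rt * t);                   // camera center: -R^T t
	c.copyTo(T_w_c(cv::Rect(3, 0, 1, 3)));
	return T_w_c;
}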
  • camera.cpp:
#include "camera.h"

namespace myslam
{
	Camera::Camera(std::string strSettingPath)
	{
		cv::FileStorage fSettings(strSettingPath, cv::FileStorage::READ);

		fx_ = fSettings["Camera.fx"];
		fy_ = fSettings["Camera.fy"];
		cx_ = fSettings["Camera.cx"];
		cy_ = fSettings["Camera.cy"];
		depth_scale_ = fSettings["DepthMapFactor"];
	}

	cv::Point3d Camera::world2camera(cv::Point3d p_w, cv::Mat T_c_w)
	{
		cv::Mat pts3_w(p_w);
		cv::Mat pts4_w(4, 1, CV_64FC1);
		pts3_w.copyTo(pts4_w(cv::Rect(0, 0, 1, 3)));
		pts4_w.at<double>(3, 0) = 1.0;

		// the pose may be stored as CV_32F (see visual_odometry.cpp); convert so
		// the matrix product with the CV_64F homogeneous point is well defined
		cv::Mat T;
		T_c_w.convertTo(T, CV_64FC1);

		cv::Mat pts_convert = T*pts4_w;
		return cv::Point3d(pts_convert(cv::Rect(0, 0, 1, 3)));
	}

	cv::Point3d Camera::camera2world(cv::Point3d p_c, cv::Mat T_w_c)
	{
		cv::Mat pts3_c(p_c);
		cv::Mat pts4_c(4, 1, CV_64FC1);
		pts3_c.copyTo(pts4_c(cv::Rect(0, 0, 1, 3)));
		pts4_c.at<double>(3, 0) = 1.0;

		// same depth conversion as in world2camera
		cv::Mat T;
		T_w_c.convertTo(T, CV_64FC1);

		cv::Mat pts_convert = T*pts4_c;
		return cv::Point3d(pts_convert(cv::Rect(0, 0, 1, 3)));
	}

	cv::Point2d Camera::camera2pixel(cv::Point3d p_c)
	{
		cv::Mat pts3_c(p_c);
		return cv::Point2d(
			fx_*pts3_c.at<double>(0, 0) / pts3_c.at<double>(2, 0) + cx_,
			fy_*pts3_c.at<double>(1, 0) / pts3_c.at<double>(2, 0) + cy_
		);
	}

	cv::Point3d Camera::pixel2camera(cv::Point2d p_p, double depth)
	{
		cv::Mat pts_p(p_p);
		return cv::Point3d(
			(pts_p.at<double>(0, 0) - cx_)*depth / fx_,
			(pts_p.at<double>(1, 0) - cy_)*depth / fy_,
			depth
		);
	}

	cv::Point2d Camera::world2pixel(cv::Point3d p_w, cv::Mat T_c_w)
	{
		return camera2pixel(world2camera(p_w, T_c_w));
	}

	cv::Point3d Camera::pixel2world(cv::Point2d p_p, cv::Mat T_w_c, double depth)
	{
		return camera2world(pixel2camera(p_p, depth), T_w_c);
	}
}
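  • A quick round-trip check of the class: back-project a pixel to a world point and reproject it. The intrinsics and depth scale below are illustrative TUM-fr2-like values, not necessarily the ones in your TUM2.yaml:
#include "camera.h"
#include <iostream>

int main()
{
	// assumed intrinsics and depth scale, for illustration only
	myslam::Camera cam(520.9f, 521.0f, 325.1f, 249.7f, 5000.0f);

	// identity pose: the camera frame coincides with the world frame
	cv::Mat T_w_c = cv::Mat::eye(4, 4, CV_64FC1);
	cv::Point3d p_w = cam.pixel2world(cv::Point2d(320.0, 240.0), T_w_c, 1.5);

	// projecting back should land on the original pixel
	cv::Mat T_c_w = T_w_c.inv();
	cv::Point2d uv = cam.world2pixel(p_w, T_c_w);
	std::cout << "p_w = " << p_w << ", reprojected = " << uv << std::endl;
	return 0;
}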

3. Image Frame

  • During tracking we need each frame's pose, feature points, and other per-frame data, so we build a frame class to store them.
  • frame.h:
#ifndef FRAME_H
#define FRAME_H

#include "common_include.h"
#include "camera.h"
#include "mappoints.h"

namespace myslam
{
	class MapPoints;
	class Frame
	{
	public:
		unsigned long id_;         // frame id
		double time_stamp_;        // timestamp
		cv::Mat T_c_w_;            // transform from the world frame to the camera frame
		Camera* camera_;           // pointer to the camera model
		cv::Mat color_, depth_;    // color image and depth image
	public:
		// default constructor
		Frame();
		
		// construct a frame
		Frame(long id, double time_stamp = 0, cv::Mat T_c_w = cv::Mat(), Camera* camera = nullptr, cv::Mat color = cv::Mat(), cv::Mat depth = cv::Mat());

		// look up the depth of a keypoint in the depth map
		double findDepth(const cv::KeyPoint& kp);

		// camera center: the translation of the camera-to-world transform
		cv::Mat getCamCenter() const;

		// check whether a 3D point is inside the camera's field of view
		bool isInFrame(const cv::Point3d pt_world);
	};
}

#endif
  • frame.cpp:
#include "frame.h"

namespace myslam
{
	Frame::Frame() :id_(-1), time_stamp_(-1), camera_(nullptr) {}
	Frame::Frame(long id, double time_stamp, cv::Mat T_c_w, Camera* camera, cv::Mat color, cv::Mat depth) :
		id_(id), time_stamp_(time_stamp), T_c_w_(T_c_w), camera_(camera), color_(color), depth_(depth) 
	{

	}

	double Frame::findDepth(const cv::KeyPoint& kp)
	{
		int x = round(kp.pt.x);
		int y = round(kp.pt.y);

		ushort d = depth_.ptr<ushort>(y)[x];
		if (d != 0)
			return d / camera_->depth_scale_;
		else
		{
			// no valid depth at the keypoint: search its 4-neighborhood,
			// staying inside the image bounds
			int dx[4] = { -1,0,1,0 };
			int dy[4] = { 0,-1, 0, 1 };
			for (int i = 0; i < 4; i++)
			{
				int nx = x + dx[i], ny = y + dy[i];
				if (nx < 0 || ny < 0 || nx >= depth_.cols || ny >= depth_.rows)
					continue;
				d = depth_.ptr<ushort>(ny)[nx];
				if (d != 0)
					return d / camera_->depth_scale_;
			}
		}
		return -1.0;
	}

	cv::Mat Frame::getCamCenter() const
	{
		cv::Mat T_w_c = T_c_w_.inv(cv::DecompTypes::DECOMP_SVD);
		return T_w_c(cv::Rect(3, 0, 1, 3));
	}

	bool Frame::isInFrame(const cv::Point3d pt_world)
	{
		cv::Point3d p_cam = camera_->world2camera(pt_world, T_c_w_);
		if (p_cam.z < 0)
			return false;
		cv::Point2d pixel = camera_->camera2pixel(p_cam);
		return (pixel.x > 0 && pixel.y > 0
			&& pixel.x < color_.cols && pixel.y < color_.rows);
	}
}
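  • Note the unit conversion in findDepth: the raw 16-bit value divided by depth_scale_ gives meters, so with a DepthMapFactor of 5000 a raw reading of 7500 means 1.5 m. A tiny usage sketch (the intrinsics and file names are placeholders):
#include "frame.h"
#include <iostream>

int main()
{
	myslam::Camera cam(520.9f, 521.0f, 325.1f, 249.7f, 5000.0f); // assumed values
	myslam::Frame frame(0);
	frame.camera_ = &cam;
	frame.color_ = cv::imread("rgb.png");                        // placeholder paths
	frame.depth_ = cv::imread("depth.png", cv::IMREAD_UNCHANGED);
	if (frame.depth_.empty()) return 1;

	cv::KeyPoint kp(cv::Point2f(320.f, 240.f), 1.f);
	double z = frame.findDepth(kp); // depth in meters, or -1.0 if unavailable
	std::cout << "depth at (320,240): " << z << " m" << std::endl;
	return 0;
}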

4. System Class

  • All class construction and thread start-up is gathered in a system class, which keeps the initialization of the individual classes and the reading of the configuration parameters in one place.
  • system.h:
#ifndef SYSTEM_H
#define SYSTEM_H

#include <fstream>
#include <string>
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>

#include <thread>
#include <iomanip>

#include "viewer.h"
#include "visual_odometry.h"

namespace myslam
{
	using namespace std;
	class System
	{
	public:
		System(const string& strSettingPath);
		
		cv::Mat TrackingRGBD(cv::Mat im, cv::Mat imD, double tframe);
	public:
		Viewer* mpViewer;
		std::thread* mptViewer;

		VisualOdometry* m_pVO;
	};
}

#endif 
  • system.cpp:
#include "system.h"

namespace myslam
{
	System::System(const string& strSettingPath)
	{
		// start the viewer thread
		mpViewer = new Viewer(strSettingPath);
		mptViewer = new thread(&Viewer::Run, mpViewer);

		// create the visual odometry
		m_pVO = new myslam::VisualOdometry(strSettingPath, mpViewer);
	}

	cv::Mat System::TrackingRGBD(cv::Mat im, cv::Mat imD, double tframe)
	{
		// run tracking on this RGB-D pair
		return m_pVO->Tracking(im, imD, tframe);
	}
}
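  • From the caller's point of view, usage is a two-liner; a minimal sketch (the paths are placeholders, and the viewer thread needs pangolin available):
#include "system.h"
#include <iostream>

int main()
{
	myslam::System slam("./cfg/TUM2.yaml");             // also starts the viewer thread
	cv::Mat rgb = cv::imread("rgb.png");                // placeholder images
	cv::Mat depth = cv::imread("depth.png", cv::IMREAD_UNCHANGED);
	cv::Mat T_c_w = slam.TrackingRGBD(rgb, depth, 0.0);
	std::cout << T_c_w << std::endl;
	return 0;
}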

5. Visual Odometry Class

  • From the input images and the configuration file, this class builds the image frames and returns the pose of the current frame.
  • visual_odometry.h:
#ifndef VISUAL_ODOMETRY_H
#define VISUAL_ODOMETRY_H

#include "common_include.h"
#include "map.h"
#include "viewer.h"
#include <thread>
#include <mutex>
#include <opencv2/opencv.hpp>

namespace myslam
{
	class Viewer;
	class VisualOdometry
	{
	public:
		// odometry state
		enum VOState
		{
			INITIALIZING = -1,
			OK = 0,
			LOST
		};

		VOState state_;

		// map
		Map* map_;

		// reference frame and current frame
		Frame* ref_;
		Frame* curr_;

		// all frames so far
		std::vector<Frame*> mvAllFrame;

		Viewer* mpViewer;
		Camera* mp_camera;

		cv::Ptr<cv::ORB> orb_;

		// 3D points of the reference frame / accumulated points of all frames
		std::vector<cv::Point3f> pts_3d_ref;
		std::vector<cv::Point3f> pts_3d_all;

		// keypoints of the current frame
		std::vector<cv::KeyPoint> keypoints_curr_;

		// descriptors of the current and the reference frame
		cv::Mat descriptors_curr_, descriptors_ref_;

		// feature matches
		std::vector<cv::DMatch> feature_matches_;

		// estimated transform from the reference frame to the current frame
		cv::Mat T_c_r_estimate;

		// frame id counter
		int frameid;

		// number of PnP inliers / number of consecutive lost frames
		int num_inliers_;
		int num_lost_;

		// ORB extraction and tracking parameters
		int num_of_features_;
		double scale_factor_;
		int level_pyramid_;
		float match_ratio_;
		int max_num_lost_;
		int min_inliers_;

		// minimum rotation and translation for a new keyframe
		double key_frame_min_rot;
		double key_frame_min_trans;

	public:
		VisualOdometry(std::string strSettingPath, Viewer* pViewer);
		~VisualOdometry();

		cv::Mat Tracking(cv::Mat im, cv::Mat imD, double tframe);
	private:
		cv::Mat addFrame(Frame* frame);

		void ExtractORB();
		void featureMatching();
		void poseEstimationPnP();
		void setRef3DPoints();

		void addKeyFrame();
		bool checkEstimatedPose();
		bool checkKeyFrame();
	};
}
#endif
  • visual_odometry.cpp:
#include "visual_odometry.h"

namespace myslam
{
	VisualOdometry::VisualOdometry(std::string strSettingPath, Viewer* pViewer) :
		state_(INITIALIZING), ref_(nullptr), curr_(nullptr), map_(new Map), num_lost_(0), num_inliers_(0), mpViewer(pViewer)
	{
		// read the configuration file
		cv::FileStorage fSettings(strSettingPath, cv::FileStorage::READ);

		num_of_features_ = fSettings["number_of_features"];
		scale_factor_ = fSettings["scale_factor"];
		level_pyramid_ = fSettings["level_pyramid"];
		match_ratio_ = fSettings["match_ratio"];
		max_num_lost_ = fSettings["max_num_lost"];
		min_inliers_ = fSettings["min_inliers"];
		key_frame_min_rot = fSettings["keyframe_rotation"];
		key_frame_min_trans = fSettings["keyframe_translation"];
		fSettings.release();

		// build the camera model
		mp_camera = new myslam::Camera(strSettingPath);

		// create the ORB extractor
		orb_ = cv::ORB::create(num_of_features_, scale_factor_, level_pyramid_);

		frameid = 0;
	}

	VisualOdometry::~VisualOdometry()
	{

	}

	// tracking entry point
	cv::Mat VisualOdometry::Tracking(cv::Mat im, cv::Mat imD, double tframe)
	{
		// build the frame
		Frame* pFrame = new Frame(frameid);
		pFrame->color_ = im;
		pFrame->depth_ = imD;
		pFrame->camera_ = mp_camera;
		pFrame->time_stamp_ = tframe;

		// track it and return the estimated pose
		cv::Mat T_c_w = addFrame(pFrame);

		frameid++;

		return T_c_w;
	}

	cv::Mat VisualOdometry::addFrame(Frame* frame)
	{
		// not initialized yet: use this frame to initialize the system
		if (state_ == INITIALIZING)
		{
			std::unique_lock<std::mutex> lock(mpViewer->mMutexViewer);

			// the first frame is both current and reference frame,
			// and its camera frame defines the world frame
			curr_ = ref_ = frame;
			ref_->T_c_w_ = cv::Mat::eye(4, 4, CV_32FC1);
			curr_->T_c_w_ = cv::Mat::eye(4, 4, CV_32FC1);

			// insert the first frame into the map
			map_->insertKeyFrame(frame);

			// extract ORB features
			ExtractORB();

			// compute the 3D point of every feature
			setRef3DPoints();

			state_ = OK;
			return curr_->T_c_w_;
		}
		else if (state_ == LOST)
		{
			// tracking has been lost for good
			std::cout << "vo has lost." << std::endl;
			return ref_->T_c_w_;
		}
		else
		{
			// normal pose tracking
			curr_ = frame;
			ExtractORB();

			// match features against the reference frame
			featureMatching();

			// estimate the motion between the reference frame and the current frame
			poseEstimationPnP();

			// check whether the estimated pose is plausible
			if (checkEstimatedPose() == true)
			{
				// pose of the current frame w.r.t. the world frame
				curr_->T_c_w_ = T_c_r_estimate * ref_->T_c_w_;
				ref_ = curr_;

				// recompute the reference 3D points from the current frame
				setRef3DPoints();
				num_lost_ = 0;

				// keyframe?
				if (checkKeyFrame() == true)
				{
					addKeyFrame();
				}
			}
			else
			{
				// bad estimate: keep the previous pose, count the failure,
				// and give up after too many failures in a row
				curr_->T_c_w_ = ref_->T_c_w_;
				num_lost_++;
				if (num_lost_ > max_num_lost_)
					state_ = LOST;
			}

			// remember every frame for display
			mvAllFrame.push_back(curr_);

			cv::Mat T_c_w = curr_->T_c_w_;

			// hand the latest data to the viewer
			mpViewer->SetCurrentCameraPose(T_c_w);
			mpViewer->GetAllFrame(mvAllFrame);
			mpViewer->GetAll3dPoints(pts_3d_all);
			mpViewer->SetVisualOdometry(this);

			cv::waitKey(10);

			return T_c_w;
		}
	}

	// extract keypoints and compute descriptors
	void VisualOdometry::ExtractORB()
	{
		orb_->detectAndCompute(curr_->color_, cv::Mat(), keypoints_curr_, descriptors_curr_);
	}

	// feature matching: brute force + distance-threshold filtering
	void VisualOdometry::featureMatching()
	{
		std::vector<cv::DMatch> matches;
		cv::BFMatcher matcher(cv::NORM_HAMMING);

		matcher.match(descriptors_ref_, descriptors_curr_, matches);

		// distance of the best match
		float min_dis = 10000;
		for (int i = 0; i < matches.size(); i++)
		{
			if (matches[i].distance < min_dis)
				min_dis = matches[i].distance;
		}

		// keep a match only if its distance is small relative to the best one
		// (as in slambook ch.9; this also puts the match_ratio_ config value to use)
		feature_matches_.clear();
		for (cv::DMatch& m : matches)
		{
			if (m.distance < std::max(min_dis*match_ratio_, 30.0f))
				feature_matches_.push_back(m);
		}
		std::cout << "good matches: " << feature_matches_.size() << std::endl;
	}

	// compute the 3D point of every feature in the (new) reference frame
	void VisualOdometry::setRef3DPoints()
	{
		pts_3d_ref.clear();
		descriptors_ref_ = cv::Mat();
		cv::Mat T_w_c = ref_->T_c_w_.inv();
		for (int i = 0; i < keypoints_curr_.size(); i++)
		{
			double d = ref_->findDepth(keypoints_curr_[i]);

			// camera-frame point; d = -1 marks missing depth and is filtered out in poseEstimationPnP
			cv::Point3f p_cam = ref_->camera_->pixel2camera(cv::Point2f(keypoints_curr_[i].pt.x, keypoints_curr_[i].pt.y), d);
			pts_3d_ref.push_back(p_cam);
			descriptors_ref_.push_back(descriptors_curr_.row(i));

			// the displayed point cloud lives in the world frame
			if (d > 0)
				pts_3d_all.push_back(ref_->camera_->camera2world(p_cam, T_w_c));
		}
	}

	void VisualOdometry::poseEstimationPnP()
	{
		std::vector<cv::Point3f> pts3d;
		std::vector<cv::Point2f> pts2d;

		// collect the 3D-2D correspondences between the reference and the current frame
		for (int i = 0; i < feature_matches_.size(); i++)
		{
			if (pts_3d_ref[feature_matches_[i].queryIdx].z < 0)
				continue;

			// 3D point from the reference frame
			pts3d.push_back(pts_3d_ref[feature_matches_[i].queryIdx]);

			// its matched 2D point in the current frame
			pts2d.push_back(keypoints_curr_[feature_matches_[i].trainIdx].pt);
		}

		// intrinsics
		cv::Mat K = (cv::Mat_<float>(3, 3) << ref_->camera_->fx_, 0.0, ref_->camera_->cx_,
			0.0, ref_->camera_->fy_, ref_->camera_->cy_,
			0.0, 0.0, 1.0);

		cv::Mat rvec, tvec;
		std::vector<int> inliers;

		// PnP with RANSAC
		cv::solvePnPRansac(pts3d, pts2d, K, cv::Mat(), rvec, tvec, false, 100, 4.0, 0.99, inliers);
		num_inliers_ = inliers.size();
		std::cout << "pnp inliers: " << inliers.size() << std::endl;

		// rotation vector -> rotation matrix
		cv::Rodrigues(rvec, rvec);

		// solvePnPRansac outputs CV_64F; convert so that copyTo into the CV_32F
		// pose matrix below actually writes into it instead of silently detaching
		rvec.convertTo(rvec, CV_32FC1);
		tvec.convertTo(tvec, CV_32FC1);

		// assemble T_c_r, the transform from the reference frame to the current frame
		T_c_r_estimate = cv::Mat::eye(4, 4, CV_32FC1);
		rvec.copyTo(T_c_r_estimate(cv::Rect(0, 0, 3, 3)));
		tvec.copyTo(T_c_r_estimate(cv::Rect(3, 0, 1, 3)));
	}

	void VisualOdometry::addKeyFrame()
	{
		std::cout << "adding a key-frame" << std::endl;
		map_->insertKeyFrame(curr_);
	}

	bool VisualOdometry::checkEstimatedPose()
	{
		// too few PnP inliers
		if (num_inliers_ < min_inliers_)
		{
			std::cout << "reject because inlier is too small: " << num_inliers_ << std::endl;
			return false;
		}

		cv::Mat R = T_c_r_estimate(cv::Rect(0, 0, 3, 3));
		cv::Mat t = T_c_r_estimate(cv::Rect(3, 0, 1, 3));

		// motion too large, or the rotation part is not a proper rotation matrix
		if (cv::norm(t) > 20.0 || std::fabs(1.0 - cv::determinant(R)) > 0.01)
		{
			std::cout << "reject because motion is too large" << std::endl;
			return false;
		}
		return true;
	}

	bool VisualOdometry::checkKeyFrame()
	{
		cv::Mat R = T_c_r_estimate(cv::Rect(0, 0, 3, 3));
		cv::Mat t = T_c_r_estimate(cv::Rect(3, 0, 1, 3));

		// rotation angle from the trace: theta = acos((tr(R) - 1) / 2)
		cv::Scalar tr = cv::trace(R);
		double trR = tr.val[0];
		double theta = acos((trR - 1.0) / 2.0);

		// a new keyframe whenever the rotation or the translation exceeds its threshold
		if (std::fabs(theta) > key_frame_min_rot || cv::norm(t) > key_frame_min_trans)
			return true;
		return false;
	}
}
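  • The keyframe test above relies on the fact that the rotation angle of a rotation matrix can be recovered from its trace, theta = acos((tr(R) - 1) / 2). A tiny self-contained check of that identity:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <cmath>

int main()
{
	// build a rotation of 0.3 rad about the z axis via Rodrigues
	cv::Mat rvec = (cv::Mat_<double>(3, 1) << 0.0, 0.0, 0.3);
	cv::Mat R;
	cv::Rodrigues(rvec, R);

	// recover the angle from the trace
	double trR = cv::trace(R).val[0];
	double theta = std::acos((trR - 1.0) / 2.0);
	std::cout << "recovered angle: " << theta << " (expected 0.3)" << std::endl;
	return 0;
}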

6. Miscellaneous

  • During tracking we want to display the pose information, so that we can judge whether tracking is working correctly.
  • This project uses the pangolin library to display the 3D points and the current camera pose; build and configuration instructions can be found here.
  • A pangolin build for Windows 10 can be downloaded here.

6.1 Viewer Class

  • viewer.h:
#ifndef VIEWER_H
#define VIEWER_H

#include <thread>
#include <mutex>
#include <iomanip>
#include <opencv2/core/core.hpp>

#include "visual_odometry.h"

#include <opencv2/opencv.hpp>
#include <pangolin/pangolin.h>

namespace myslam
{
	class VisualOdometry;
	class Viewer
	{
	public:
		Viewer(std::string strSettingPath);

		void Run();

		// receive the current camera pose from the tracking thread
		void SetCurrentCameraPose(const cv::Mat &Tcw);

		// receive the visual odometry object (tracking information)
		void SetVisualOdometry(myslam::VisualOdometry* pVisualOdometry);

		// receive all tracked frames
		void GetAllFrame(std::vector<Frame*> vAllFrame);

		// receive all 3D points
		void GetAll3dPoints(std::vector<cv::Point3f> All3dpts);

		std::mutex mMutexViewer;

	private:
		void DrawCurrentCamera(pangolin::OpenGlMatrix &Twc);
		void GetCurrentOpenGLCameraMatrix(pangolin::OpenGlMatrix &M);

		void DrawTrackingFrame();
		void DrawMapPoints();

		cv::Mat DrawFrame();

		pangolin::View d_cam;
		pangolin::OpenGlMatrix Tcw; // note: after GetCurrentOpenGLCameraMatrix this actually holds Twc (camera-to-world)
		pangolin::OpenGlRenderState s_cam;

		std::mutex mMutexCamera;
		std::mutex mMutexTrackingFrame;
		std::mutex mMutex3dPoints;
		std::mutex mMutex;

		std::vector<cv::Point3f> mAll3dpts;

		std::vector<cv::KeyPoint> mkps;
		std::vector<cv::DMatch> mfeature_matches;

		std::vector<Frame*> mvAllFrame;

		cv::Mat mIm;
		int n;

		int state;

		// 1/fps in ms
		double mT;
		float mImageWidth, mImageHeight;

		float mViewpointX, mViewpointY, mViewpointZ, mViewpointF;

		float mKeyFrameSize;
		float mKeyFrameLineWidth;
		float mGraphLineWidth;
		float mPointSize;
		float mCameraSize;
		float mCameraLineWidth;

		cv::Mat mCameraPose;

		myslam::VisualOdometry* VisualOdometry_;
	};
}
#endif
  • viewer.cpp:
#include "viewer.h"

namespace myslam
{
	Viewer::Viewer(std::string strSettingPath)
	{
		cv::FileStorage fSettings(strSettingPath, cv::FileStorage::READ);

		float fps = fSettings["Camera.fps"];
		if (fps < 1)
			fps = 30;
		mT = 1e3 / fps;

		mImageWidth = fSettings["Camera.width"];
		mImageHeight = fSettings["Camera.height"];
		if (mImageWidth < 1 || mImageHeight < 1)
		{
			mImageWidth = 640;
			mImageHeight = 480;
		}

		mViewpointX = fSettings["Viewer.ViewpointX"];
		mViewpointY = fSettings["Viewer.ViewpointY"];
		mViewpointZ = fSettings["Viewer.ViewpointZ"];
		mViewpointF = fSettings["Viewer.ViewpointF"];

		mKeyFrameSize = fSettings["Viewer.KeyFrameSize"];
		mKeyFrameLineWidth = fSettings["Viewer.KeyFrameLineWidth"];
		mGraphLineWidth = fSettings["Viewer.GraphLineWidth"];
		mPointSize = fSettings["Viewer.PointSize"];
		mCameraSize = fSettings["Viewer.CameraSize"];
		mCameraLineWidth = fSettings["Viewer.CameraLineWidth"];

		VisualOdometry_ = nullptr;
	}

	// pangolin documentation: http://docs.ros.org/fuerte/api/pangolin_wrapper/html/namespacepangolin.html
	void Viewer::Run()
	{
		{
			// mutex: while the display window is being initialized, no pose
			// estimation runs (this pairs with the lock in addFrame of visual_odometry)
			std::unique_lock<std::mutex> lock(mMutexViewer);
			pangolin::CreateWindowAndBind("my-slam: Map Viewer", 1024, 768);

			// enable depth testing: OpenGL only draws the front-most surface; a fragment
			// is discarded whenever another fragment in front of it occludes it
			glEnable(GL_DEPTH_TEST);

			// enable color blending in OpenGL
			glEnable(GL_BLEND);

			// choose the blending function
			glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

			// create buttons and checkboxes: the first argument is the name,
			// the second the default state, the third whether it is a checkbox
			pangolin::CreatePanel("menu").SetBounds(0.0, 1.0, 0.0, pangolin::Attach::Pix(175));
			pangolin::Var<bool> menuFollowCamera("menu.Follow Camera", true, true);
			pangolin::Var<bool> menuShowPoints("menu.Show Points", true, true);
			pangolin::Var<bool> menuShowKeyFrames("menu.Show KeyFrames", true, true);
			pangolin::Var<bool> menuShowGraph("menu.Show Graph", true, true);
			pangolin::Var<bool> menuLocalizationMode("menu.Localization Mode", false, true);
			pangolin::Var<bool> menuReset("menu.Reset", false, false);

			// camera projection model: ProjectionMatrix(w, h, fu, fv, u0, v0, zNear, zFar)
			// view direction: eye position (mViewpointX, mViewpointY, mViewpointZ),
			//                 look-at target (0, 0, 0),
			//                 up vector (0.0, -1.0, 0.0)
			s_cam = pangolin::OpenGlRenderState(
				pangolin::ProjectionMatrix(1024, 768, mViewpointF, mViewpointF, 512, 389, 0.1, 1000),
				pangolin::ModelViewLookAt(mViewpointX, mViewpointY, mViewpointZ, 0, 0, 0, 0.0, -1.0, 0.0)
			);

			Tcw.SetIdentity();
			cv::namedWindow("my-slam: Current Frame");

			// layout of the display: as in ORB-SLAM there are two panels, buttons on the
			// left and the 3D view on the right
			// the first two parameters (0.0, 1.0) make the view span the full window height
			// the next two (pangolin::Attach::Pix(175), 1.0) give everything right of the
			// 175-pixel menu to the 3D view
			// the last parameter (-1024.0f/768.0f) is the aspect ratio
			d_cam = pangolin::CreateDisplay()
				.SetBounds(0.0, 1.0, pangolin::Attach::Pix(175), 1.0, -1024.0f / 768.0f)
				.SetHandler(new pangolin::Handler3D(s_cam));
		}

		while (1)
		{
			// clear the writable color buffer and the depth buffer
			glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

			// fetch the latest camera pose
			GetCurrentOpenGLCameraMatrix(Tcw);

			s_cam.Follow(Tcw);
			d_cam.Activate(s_cam);

			// white background: glClearColor(red, green, blue, alpha), each in [0, 1]
			glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
			DrawCurrentCamera(Tcw);
			DrawTrackingFrame();
			DrawMapPoints();

			pangolin::FinishFrame();

			cv::Mat im = DrawFrame();
			if (im.empty())
				continue;

			cv::imshow("my-slam: Current Frame", im);
			cv::waitKey(10);
		}
	}

	void Viewer::DrawTrackingFrame()
	{
		const float &w = mKeyFrameSize*0.3;
		const float h = w*0.75;
		const float z = w*0.6;

		{
			// lock: the frame list must not be modified while it is being drawn
			std::unique_lock<std::mutex> lock(mMutexTrackingFrame);

			// draw the pose of every stored frame
			for (int i = 0; i < mvAllFrame.size(); i++)
			{
				Frame* cur = mvAllFrame[i];
				cv::Mat Twc = cur->T_c_w_.inv();

				pangolin::OpenGlMatrix M;
				if (!Twc.empty())
				{
					cv::Mat Rwc(3, 3, CV_32F);
					cv::Mat twc(3, 1, CV_32F);
					{
						Rwc = Twc.rowRange(0, 3).colRange(0, 3);
						twc = Twc.rowRange(0, 3).col(3);
					}

					M.m[0] = Rwc.at<float>(0, 0);
					M.m[1] = Rwc.at<float>(1, 0);
					M.m[2] = Rwc.at<float>(2, 0);
					M.m[3] = 0.0;

					M.m[4] = Rwc.at<float>(0, 1);
					M.m[5] = Rwc.at<float>(1, 1);
					M.m[6] = Rwc.at<float>(2, 1);
					M.m[7] = 0.0;

					M.m[8] = Rwc.at<float>(0, 2);
					M.m[9] = Rwc.at<float>(1, 2);
					M.m[10] = Rwc.at<float>(2, 2);
					M.m[11] = 0.0;

					M.m[12] = twc.at<float>(0);
					M.m[13] = twc.at<float>(1);
					M.m[14] = twc.at<float>(2);
					M.m[15] = 1.0;
				}
				else
					M.SetIdentity();
				
				glPushMatrix();
				glMultMatrixd(M.m);

				glLineWidth(mKeyFrameLineWidth);
				glColor3f(0.0f, 0.0f, 1.0f);
				glBegin(GL_LINES);
				glVertex3f(0, 0, 0);
				glVertex3f(w, h, z);
				glVertex3f(0, 0, 0);
				glVertex3f(w, -h, z);
				glVertex3f(0, 0, 0);
				glVertex3f(-w, -h, z);
				glVertex3f(0, 0, 0);
				glVertex3f(-w, h, z);

				glVertex3f(w, h, z);
				glVertex3f(w, -h, z);

				glVertex3f(-w, h, z);
				glVertex3f(-w, -h, z);

				glVertex3f(-w, h, z);
				glVertex3f(w, h, z);

				glVertex3f(-w, -h, z);
				glVertex3f(w, -h, z);
				glEnd();

				glPopMatrix();
			}
		}
	}

	void Viewer::DrawMapPoints()
	{
		{
			// lock: the point list must not be modified while it is being drawn
			std::unique_lock<std::mutex> lock(mMutex3dPoints);
			glPointSize(mPointSize);
			glBegin(GL_POINTS);
			glColor3f(0.0, 0.0, 0.0);

			// draw the 3D points (every 10th, to keep rendering light)
			for (int i = 0; i < mAll3dpts.size(); i+=10)
			{
				cv::Mat pos = cv::Mat(mAll3dpts[i]);
				glVertex3f(pos.at<float>(0), pos.at<float>(1), pos.at<float>(2));
			}
			glEnd();
		}
	}

	cv::Mat Viewer::DrawFrame()
	{
		const float r = 5;

		if (VisualOdometry_ == nullptr)
			return cv::Mat();

		if (VisualOdometry_->keypoints_curr_.size() == 0 || VisualOdometry_->feature_matches_.size() == 0)
			return cv::Mat();

		{
			// lock: the odometry data must not be modified while we copy it
			std::unique_lock<std::mutex> lock(mMutex);

			mkps = VisualOdometry_->keypoints_curr_;
			mfeature_matches = VisualOdometry_->feature_matches_;
			mIm = VisualOdometry_->curr_->color_;

			n = mfeature_matches.size();
		}

		// draw the matched feature points onto the image
		for (int i = 0; i < n; i++)
		{
			int index = mfeature_matches[i].trainIdx;
			cv::Point2f pt1, pt2;
			pt1.x = mkps[index].pt.x - r;
			pt1.y = mkps[index].pt.y - r;
			pt2.x = mkps[index].pt.x + r;
			pt2.y = mkps[index].pt.y + r;

			cv::rectangle(mIm, pt1, pt2, cv::Scalar(0, 255, 0));
			cv::circle(mIm, mkps[index].pt, 2, cv::Scalar(0, 255, 0), -1);
		}

		return mIm;
	}

	void Viewer::SetCurrentCameraPose(const cv::Mat &Tcw)
	{
		// lock: protect the camera pose while it is updated
		std::unique_lock<std::mutex> lock(mMutexCamera);
		mCameraPose = Tcw.clone();
		mCameraPose = mCameraPose.inv();
	}

	void Viewer::SetVisualOdometry(myslam::VisualOdometry* pVisualOdometry)
	{
		// lock: protect the odometry pointer while it is updated
		std::unique_lock<std::mutex> lock(mMutex);
		VisualOdometry_ = pVisualOdometry;
	}

	void Viewer::GetAllFrame(std::vector<Frame*> vAllFrame)
	{
		// lock: protect the frame list while it is updated
		std::unique_lock<std::mutex> lock(mMutexTrackingFrame);
		mvAllFrame = vAllFrame;
	}

	void Viewer::GetAll3dPoints(std::vector<cv::Point3f> All3dpts)
	{
		// lock: protect the 3D point list while it is updated
		std::unique_lock<std::mutex> lock(mMutex3dPoints);
		mAll3dpts = All3dpts;
	}

	// for the gl* functions, a web search (e.g. together with the keyword "msdn") is a good reference
	void Viewer::DrawCurrentCamera(pangolin::OpenGlMatrix &Twc)
	{
		// size of the camera glyph
		const float &w = mCameraSize*0.5;
		const float h = w * 0.75;
		const float z = w * 0.6;

		glPushMatrix();

		// right-multiply the current matrix by the 4x4 matrix Twc.m
		// (thanks to glPushMatrix, the current matrix is the identity of the world frame)
		// OpenGL matrices are column-major, and OpenGlMatrix stores Twc in exactly
		// that layout, so this places the camera glyph at its pose in the world
		glMultMatrixd(Twc.m);

		// line width used for drawing
		glLineWidth(mCameraLineWidth);

		// draw the camera glyph in green
		glColor3f(0.0f, 1.0f, 0.0f);

		// connect the following vertex pairs with line segments
		glBegin(GL_LINES);
		glVertex3f(0, 0, 0);
		glVertex3f(w, h, z);
		glVertex3f(0, 0, 0);
		glVertex3f(w, -h, z);
		glVertex3f(0, 0, 0);
		glVertex3f(-w, -h, z);
		glVertex3f(0, 0, 0);
		glVertex3f(-w, h, z);

		glVertex3f(w, h, z);
		glVertex3f(w, -h, z);

		glVertex3f(-w, h, z);
		glVertex3f(-w, -h, z);

		glVertex3f(-w, h, z);
		glVertex3f(w, h, z);

		glVertex3f(-w, -h, z);
		glVertex3f(w, -h, z);
		glEnd();

		glPopMatrix();
	}

	// convert the camera pose mCameraPose from cv::Mat to pangolin::OpenGlMatrix
	void Viewer::GetCurrentOpenGLCameraMatrix(pangolin::OpenGlMatrix &M)
	{
		if (!mCameraPose.empty())
		{
			cv::Mat Rcw(3, 3, CV_32F);
			cv::Mat tcw(3, 1, CV_32F);
			{
				// lock: the pose must not be modified while we read it
				std::unique_lock<std::mutex> lock(mMutexCamera);
				Rcw = mCameraPose.rowRange(0, 3).colRange(0, 3);
				tcw = mCameraPose.rowRange(0, 3).col(3);
			}

			M.m[0] = Rcw.at<float>(0, 0);
			M.m[1] = Rcw.at<float>(1, 0);
			M.m[2] = Rcw.at<float>(2, 0);
			M.m[3] = 0.0;

			M.m[4] = Rcw.at<float>(0, 1);
			M.m[5] = Rcw.at<float>(1, 1);
			M.m[6] = Rcw.at<float>(2, 1);
			M.m[7] = 0.0;

			M.m[8] = Rcw.at<float>(0, 2);
			M.m[9] = Rcw.at<float>(1, 2);
			M.m[10] = Rcw.at<float>(2, 2);
			M.m[11] = 0.0;

			M.m[12] = tcw.at<float>(0);
			M.m[13] = tcw.at<float>(1);
			M.m[14] = tcw.at<float>(2);
			M.m[15] = 1.0;
		}
		else
			M.SetIdentity();
	}
}

6.2 Map Point Class

  • There are also a map point class and a map class, storing the attributes of individual map points and the collection of all of them (not used yet in this part; they will come into play later for optimization).
  • mappoints.h:
#ifndef MAPPOINTS_H
#define MAPPOINTS_H

#include "Frame.h"

namespace myslam
{
	class Frame;
	class MapPoints
	{
	public:
		MapPoints();
		MapPoints(long id, cv::Point3d position, cv::Point3d norm);

	public:
		unsigned long id_;

		cv::Point3d pos_;    // position
		cv::Point3d norm_;   // viewing direction (normal)
		
		cv::Mat descriptor;  // descriptor
		int observed_times_; // number of times observed
		int correct_times_;  // number of times corrected

	public:
		static MapPoints* creatMappoints();
	};
}
#endif
  • mappoints.cpp:
#include "mappoints.h"

namespace myslam
{
	MapPoints::MapPoints() :id_(-1), pos_(cv::Point3d(0.0, 0.0, 0.0)), observed_times_(0), correct_times_(0) {}
	MapPoints::MapPoints(long id, cv::Point3d position, cv::Point3d norm) : id_(id), pos_(position), norm_(norm), observed_times_(0), correct_times_(0) {}

	MapPoints* MapPoints::creatMappoints()
	{
		// factory: hand out a fresh, auto-incremented id (note the ++)
		static long factory_id = 0;
		return (new MapPoints(factory_id++, cv::Point3d(0.0, 0.0, 0.0), cv::Point3d(0.0, 0.0, 0.0)));
	}
} 

6.3 Map Class

  • map.h:
#ifndef MAP_H
#define MAP_H

#include "common_include.h"
#include "Frame.h"
#include "mappoints.h"

namespace myslam
{
	class Map
	{
	public:

		std::unordered_map<unsigned long, MapPoints*> map_points_; // all map points
		std::unordered_map<unsigned long, Frame*> keyframe_;       // all keyframes

		Map() {}

		// insert a keyframe
		void insertKeyFrame(Frame* frame);

		// insert a map point
		void insertMapPoint(MapPoints* map_points);
	};
}


#endif
  • map.cpp:
#include "map.h"

namespace myslam
{
	void Map::insertKeyFrame(Frame* frame)
	{
		std::cout << "Key frame size = " << keyframe_.size() << std::endl;
		if (keyframe_.find(frame->id_) == keyframe_.end())
		{
			keyframe_.insert(std::make_pair(frame->id_, frame));
		}
		else
		{
			keyframe_[frame->id_] = frame;
		}
	}

	void Map::insertMapPoint(MapPoints* map_points)
	{
		if (map_points_.find(map_points->id_) == map_points_.end())
		{
			map_points_.insert(std::make_pair(map_points->id_, map_points));
		}
		else
		{
			map_points_[map_points->id_] = map_points;
		}
	}
}
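  • Although the map is not exercised during tracking yet, here is a quick sketch of how the two classes are meant to be used together (the point positions are made up for illustration):
#include "map.h"
#include <iostream>

int main()
{
	myslam::Map map;

	// create a few map points through the factory; each receives a fresh id
	for (int i = 0; i < 3; i++)
	{
		myslam::MapPoints* mp = myslam::MapPoints::creatMappoints();
		mp->pos_ = cv::Point3d(0.1 * i, 0.0, 1.0); // made-up positions
		map.insertMapPoint(mp);
	}
	std::cout << "map points: " << map.map_points_.size() << std::endl;
	return 0;
}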

6.4 Main Function

  • run_vo.cpp:
#include <iostream>
#include <algorithm>
#include <fstream>
#include <sstream>
#include <iomanip>
#include <windows.h>

#include <opencv2/core/core.hpp>

#include "system.h"

void LoadImages(const std::string &strFile, std::vector<std::string> &vstrImageFilenames, std::vector<double> &vTimestamps)
{
	std::ifstream f;
	f.open(strFile.c_str());

	// skip first three lines
	std::string s0;
	getline(f, s0);
	getline(f, s0);
	getline(f, s0);

	std::string s;
	while (getline(f, s))
	{
		if (!s.empty())
		{
			std::stringstream ss;
			ss << s;
			double t;
			std::string sRGB;
			ss >> t;
			vTimestamps.push_back(t);
			ss >> sRGB;
			vstrImageFilenames.push_back(sRGB);
		}
	}
}

int main()
{
	std::string Imagepath = "./data/rgbd_dataset_freiburg2_desk";
	std::string strImFile = Imagepath + "/rgb.txt";
	std::string strDepthFile = Imagepath + "/depth.txt";
	std::string strSettingPath = "./cfg/TUM2.yaml";

	std::vector<std::string> vstrImageFilenames, vstrDImageFilenames;
	std::vector<double> vTimestamps, vDTimestamps;

	LoadImages(strImFile, vstrImageFilenames, vTimestamps);
	LoadImages(strDepthFile, vstrDImageFilenames, vDTimestamps);

	int nImages = vstrImageFilenames.size();
	myslam::System system(strSettingPath);

	cv::Mat im, imD;
	// note: this pairs rgb and depth entries by line index, which is only an
	// approximation on TUM data; see the association note after this listing
	for (int ni = 0; ni < nImages; ni++)
	{
		im = cv::imread(std::string(Imagepath) + "/" + vstrImageFilenames[ni], cv::IMREAD_UNCHANGED);
		imD = cv::imread(std::string(Imagepath) + "/" + vstrDImageFilenames[ni], cv::IMREAD_UNCHANGED);

		double tframe = vTimestamps[ni];
		if (im.data == nullptr || imD.data == nullptr)
			break;

		cv::Mat T_c_w = system.TrackingRGBD(im, imD, tframe);
		cv::waitKey(10);
	}
}
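  • One caveat about the main loop above: on the TUM sequences, rgb.txt and depth.txt are separate, unsynchronized lists, so pairing the two by line index is only an approximation. TUM ships an associate.py script for proper pairing; a nearest-timestamp association sketch in C++ (the function name and the 0.02 s tolerance, associate.py's common default, are assumptions) could look like this:
#include <vector>
#include <cmath>
#include <cstddef>

// For each rgb timestamp, find the index of the closest depth timestamp
// (both lists assumed sorted ascending); reject pairs further apart than max_diff.
std::vector<int> associate(const std::vector<double>& rgb_t,
	const std::vector<double>& depth_t,
	double max_diff = 0.02)
{
	std::vector<int> pairs(rgb_t.size(), -1);
	std::size_t j = 0;
	for (std::size_t i = 0; i < rgb_t.size(); i++)
	{
		// advance while the next depth stamp is at least as close to rgb_t[i]
		while (j + 1 < depth_t.size() &&
			std::fabs(depth_t[j + 1] - rgb_t[i]) <= std::fabs(depth_t[j] - rgb_t[i]))
			j++;
		if (!depth_t.empty() && std::fabs(depth_t[j] - rgb_t[i]) <= max_diff)
			pairs[i] = static_cast<int>(j);
	}
	return pairs;
}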

7. Experimental Results

  • The dataset can be downloaded from https://vision.in.tum.de/data/datasets/rgbd-dataset/download (the code above uses the rgbd_dataset_freiburg2_desk sequence).
  • Remember to adapt the configuration file; refer to the uploaded TUM2.yaml. It must provide the keys read above: number_of_features, scale_factor, level_pyramid, match_ratio, max_num_lost, min_inliers, keyframe_rotation and keyframe_translation, plus the Camera.*, DepthMapFactor and Viewer.* entries used by the camera and viewer classes.
  • The pangolin library can be downloaded from the link above, or simply built with Windows 10 + VS2017; the build is straightforward and needs no manually installed dependencies, as the necessary ones are fetched automatically.
  • A screenshot from my own test run:
    [image: pangolin map viewer showing the estimated trajectory and point cloud]
  • The code is fairly simple overall; the details are covered in Chapter 9 of 14 Lectures on Visual SLAM. If anything is unclear, please leave a comment.
  • If you made it to the end and found it useful, please leave a like before you go!

Copyright notice: this is an original article by qq_38589460, licensed under CC 4.0 BY-SA. Please include a link to the original source and this notice when reposting.