OpenCV 3 Notes (11.3): Keypoints and Descriptors, the KeyPoint Object and the DMatch Matching Class
corners: small pixel patches that carry a lot of local information and can be quickly recognized in another image
keypoints: an extension of corners; the patch's information is encoded so that it becomes easier to identify and, at least in principle, unique
descriptors: the result of further processing the keypoints; they usually have lower dimensionality, so an image patch can be located even more quickly in another, different image
The cv::KeyPoint Object
To describe keypoints, OpenCV defines the keypoint class as follows:
class cv::KeyPoint {
public:
    cv::Point2f pt;       // coordinates of the keypoint
    float       size;     // diameter of the meaningful keypoint neighborhood
    float       angle;    // computed orientation of the keypoint (-1 if none)
    float       response; // response for which the keypoint was selected
    int         octave;   // octave (pyramid layer) keypoint was extracted from
    int         class_id; // object id, can be used to cluster keypoints by object

    cv::KeyPoint(
        cv::Point2f _pt,
        float       _size,
        float       _angle    = -1,
        float       _response = 0,
        int         _octave   = 0,
        int         _class_id = -1
    );
    cv::KeyPoint(
        float x,
        float y,
        float _size,
        float _angle    = -1,
        float _response = 0,
        int   _octave   = 0,
        int   _class_id = -1
    );
    ...
};
Member descriptions (a short construction sketch follows this list):
- pt: the location of the keypoint
- size: the extent (diameter) of the keypoint's meaningful neighborhood
- angle: the orientation of the keypoint (-1 if not computed)
- response: the strength of the detector response that selected this keypoint; it can sometimes be interpreted as a measure of how likely the feature is to actually exist
- octave: the pyramid level (octave) at which the keypoint was found; corresponding keypoints should ideally be found at the same level
- class_id: identifies which object the keypoint came from
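As a quick illustration, here is a minimal sketch (the coordinate, size, angle, and response values are arbitrary) that constructs a cv::KeyPoint by hand and prints its main fields:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // a keypoint at (120, 80) with a 31-pixel neighborhood, 45-degree orientation,
    // detector response 0.8, found on pyramid octave 0, no object id
    cv::KeyPoint kp(cv::Point2f(120.f, 80.f), 31.f, 45.f, 0.8f, 0, -1);
    std::cout << "pt = " << kp.pt << ", size = " << kp.size
              << ", angle = " << kp.angle << ", response = " << kp.response << std::endl;
    return 0;
}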
To find keypoints and compute their descriptors, OpenCV defines the following abstract class.
class cv::Feature2D : public cv::Algorithm {
public:
    virtual void detect(
        cv::InputArray image,                        // Image on which to detect
        vector< cv::KeyPoint >& keypoints,           // Array of found keypoints
        cv::InputArray mask = cv::noArray()
    ) const;
    virtual void detect(
        cv::InputArrayOfArrays images,               // Images on which to detect
        vector<vector< cv::KeyPoint > >& keypoints,  // keypoints for each image
        cv::InputArrayOfArrays masks = cv::noArray()
    ) const;
    virtual void compute(
        cv::InputArray image,                        // Image where keypoints are located
        std::vector<cv::KeyPoint>& keypoints,        // input/output vector of keypoints
        cv::OutputArray descriptors);                // computed descriptors, M x N matrix,
                                                     // where M is the number of keypoints
                                                     // and N is the descriptor size
    virtual void compute(
        cv::InputArrayOfArrays image,                // Images where keypoints are located
        std::vector<std::vector<cv::KeyPoint> >& keypoints, // I/O vec of keypnts
        cv::OutputArrayOfArrays descriptors);        // computed descriptors,
                                                     // vector of (Mi x N) matrices, where
                                                     // Mi is the number of keypoints in
                                                     // the i-th image and N is the
                                                     // descriptor size
    virtual void detectAndCompute(
        cv::InputArray image,                        // Image on which to detect
        cv::InputArray mask,                         // Optional region of interest mask
        std::vector<cv::KeyPoint>& keypoints,        // found or provided keypoints
        cv::OutputArray descriptors,                 // computed descriptors
        bool useProvidedKeypoints = false);          // if true,
                                                     // the provided keypoints are used,
                                                     // otherwise they are detected
    virtual int descriptorSize() const;              // size of each descriptor in elements
    virtual int descriptorType() const;              // type of descriptor elements
    virtual int defaultNorm() const;                 // the recommended norm to be used
                                                     // for comparing descriptors.
                                                     // Usually, it's NORM_HAMMING for
                                                     // binary descriptors and NORM_L2
                                                     // for all others.
    virtual void read(const cv::FileNode&);
    virtual void write(cv::FileStorage&) const;
    ...
};
Method descriptions:
- detect: computes the keypoints
- compute: computes descriptors for given keypoints
- detectAndCompute: different keypoint detection algorithms often produce different results on the same image, and many of them build a special internal image representation whose computation is expensive. Running detection and description separately would repeat that work twice, so whenever descriptors are needed it is usually best to call detectAndCompute directly
- descriptorSize: returns the length of the descriptor vector
- descriptorType: the type of the descriptor elements
- defaultNorm: the norm recommended for comparing descriptors; for binary (0/1) descriptors this is NORM_HAMMING, while for SIFT and SURF NORM_L2 or NORM_L1 can be used
A concrete implementation may provide only some of these methods:
cv::Feature2D::detect(): FAST (it only finds keypoints, because FAST has no descriptor of its own)
cv::Feature2D::compute(): FREAK (computes FREAK descriptors for already known keypoints; FREAK is descriptor-only)
cv::Feature2D::detectAndCompute(): SIFT, SURF, ORB, BRISK (these four can both find keypoints and generate the corresponding descriptors); internally the algorithm implicitly runs both the detection and the computation step. A minimal usage sketch follows.
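Here is a minimal sketch of the detectAndCompute workflow, assuming an ORB detector (which ships with the core OpenCV 3 features2d module) and a placeholder input file name:

#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main() {
    cv::Mat img = cv::imread("scene.jpg", cv::IMREAD_GRAYSCALE);  // placeholder file name
    cv::Ptr<cv::Feature2D> detector = cv::ORB::create(1000);      // keep at most 1000 keypoints

    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    // one call both detects the keypoints and computes their descriptors
    detector->detectAndCompute(img, cv::noArray(), keypoints, descriptors);

    std::cout << "keypoints found:  " << keypoints.size()           << std::endl;
    std::cout << "descriptor size:  " << detector->descriptorSize() << std::endl; // 32 bytes for ORB
    std::cout << "descriptor type:  " << detector->descriptorType() << std::endl; // CV_8U for ORB
    std::cout << "recommended norm: " << detector->defaultNorm()    << std::endl; // NORM_HAMMING
    return 0;
}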
Class Definition of the cv::DMatch Object
A matcher typically tries to match the keypoints of one image against another image, or against a set of images. When matches are found, it returns them as a list of cv::DMatch objects.
class cv::DMatch {
public:
    DMatch();   // sets this->distance
                // to std::numeric_limits<float>::max()
    DMatch(int _queryIdx, int _trainIdx, float _distance);
    DMatch(int _queryIdx, int _trainIdx, int _imgIdx, float _distance);

    int   queryIdx; // query descriptor index
    int   trainIdx; // train descriptor index
    int   imgIdx;   // train image index
    float distance;

    bool operator<(const DMatch &m) const; // Comparison operator
                                           // based on 'distance'
};
Member descriptions (a short sorting sketch follows this list):
- queryIdx, trainIdx: identify the matched descriptors by their indices in each image's keypoint list; by convention the query image is the new image and the training image is the old one
- imgIdx: identifies which training image the match belongs to
- distance: how far apart the two descriptors are (lower means a better match)
- operator<(): provides a comparison based on distance
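Because operator<() compares on distance, a match list can be sorted so that the best matches come first. A minimal sketch with hand-made matches (the index and distance values here are arbitrary):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>
#include <iostream>

int main() {
    // three hand-made matches: (queryIdx, trainIdx, distance)
    std::vector<cv::DMatch> matches = {
        cv::DMatch(0, 5, 43.f),
        cv::DMatch(1, 2, 12.f),
        cv::DMatch(2, 9, 27.f)
    };
    // operator< compares 'distance', so sorting puts the best (smallest) match first
    std::sort(matches.begin(), matches.end());
    for (const cv::DMatch& m : matches)
        std::cout << m.queryIdx << " -> " << m.trainIdx
                  << " (distance " << m.distance << ")" << std::endl;
    return 0;
}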
The cv::DescriptorMatcher Abstract Class
Matchers are typically used in two scenarios: object recognition and tracking. For recognition we first train the matcher with the descriptors that best distinguish the known objects, and then ask which descriptors in that dictionary match a given query. For tracking we are given two sets of descriptors and must report the matches between them. DescriptorMatcher provides three functions, match(), knnMatch(), and radiusMatch(), and each exists in two versions, one for recognition and one for tracking: the recognition version takes a single feature list plus the trained dictionary, while the tracking version takes two feature lists.
class cv::DescriptorMatcher {
public:
    virtual void add(InputArrayOfArrays descriptors);   // Add train descriptors
    virtual void clear();                                // Clear train descriptors
    virtual bool empty() const;                          // true if no descriptors
    void train();                                        // Train matcher
    virtual bool isMaskSupported() const = 0;            // true if supports masks
    const vector<cv::Mat>& getTrainDescriptors() const;  // Get train descriptors

    // methods to match descriptors from one list vs. "trained" set (recognition)
    void match(
        InputArray queryDescriptors,
        vector<cv::DMatch>& matches,
        InputArrayOfArrays masks = noArray()
    );
    void knnMatch(
        InputArray queryDescriptors,
        vector< vector<cv::DMatch> >& matches,
        int k,
        InputArrayOfArrays masks = noArray(),
        bool compactResult = false
    );
    void radiusMatch(
        InputArray queryDescriptors,
        vector< vector<cv::DMatch> >& matches,
        float maxDistance,
        InputArrayOfArrays masks = noArray(),
        bool compactResult = false
    );

    // methods to match descriptors from two lists (tracking)
    //
    // Find one best match for each query descriptor
    void match(
        InputArray queryDescriptors,
        InputArray trainDescriptors,
        vector<cv::DMatch>& matches,
        InputArray mask = noArray()
    ) const;
    // Find k best matches for each query descriptor (in increasing order of distances)
    void knnMatch(
        InputArray queryDescriptors,
        InputArray trainDescriptors,
        vector< vector<cv::DMatch> >& matches,
        int k,
        InputArray mask = noArray(),
        bool compactResult = false
    ) const;
    // Find best matches for each query descriptor with distance less than maxDistance
    void radiusMatch(
        InputArray queryDescriptors,
        InputArray trainDescriptors,
        vector< vector<cv::DMatch> >& matches,
        float maxDistance,
        InputArray mask = noArray(),
        bool compactResult = false
    ) const;

    virtual void read(const FileNode&);      // Reads matcher from a file node
    virtual void write(FileStorage&) const;  // Writes matcher to a file storage

    virtual cv::Ptr<cv::DescriptorMatcher> clone(
        bool emptyTrainData = false
    ) const = 0;
    static cv::Ptr<cv::DescriptorMatcher> create(
        const string& descriptorMatcherType
    );
    ...
};
Method descriptions:
- add: adds training descriptors; each element is a Mat whose rows are individual descriptors and whose number of columns is the descriptor dimensionality
- getTrainDescriptors: returns the descriptors that have been added
- clear, empty: clear the training descriptors / check whether any descriptors have been added
- train: once all descriptors have been loaded, train() should normally be called. Based on the matching method in use it builds the data structures needed to make later matching more efficient. Whenever a matcher provides a train method, it must be called before calling any of the matching methods.
- match(), knnMatch(), and radiusMatch(): the recognition functions. match() returns only the single best match, i.e. every keypoint in the query list is paired with its "best match" from the dictionary. knnMatch() returns the k best matches; its result is a vector< vector<cv::DMatch> >& matches, where matches[i][j] is the j-th best match found in the dictionary for the i-th query descriptor (so matches[i] is a vector of k DMatch objects, all of which refer to the same descriptor of queryDescriptors). radiusMatch() returns every match whose distance is below the specified threshold.
- read, write: save and load a matcher; for large databases in particular, this avoids having to keep all the images
- clone, create: for clone, emptyTrainData indicates whether to copy the original training data; create accepts a string naming the desired matcher type (such as the brute-force and FLANN-based matchers described below)
The methods fall into three groups. The first group (add(), getTrainDescriptors(), clear(), empty(), train()) manages the pre-stored descriptor collection, in effect building a keypoint dictionary.
The second group is the set of matching methods used for object recognition: each takes one descriptor list, called the query list, and compares it against the descriptors in the trained dictionary; there are three of them, match(), knnMatch(), and radiusMatch().
The third group takes two descriptor lists and is used for tracking: these methods ignore any descriptors in the internal dictionary and instead compare the descriptors in queryDescriptors directly against trainDescriptors. A recognition-mode sketch follows.
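The following sketch illustrates the recognition mode under some assumptions: ORB descriptors, a brute-force Hamming matcher, and placeholder file names for two "known object" images and one query image. The query is matched against the trained dictionary, and imgIdx tells which training image each match came from:

#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main() {
    // two known-object images forming the dictionary, plus one query image
    cv::Mat obj1  = cv::imread("object1.jpg", cv::IMREAD_GRAYSCALE);  // placeholder names
    cv::Mat obj2  = cv::imread("object2.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat query = cv::imread("query.jpg",   cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> k1, k2, kq;
    cv::Mat d1, d2, dq;
    orb->detectAndCompute(obj1,  cv::noArray(), k1, d1);
    orb->detectAndCompute(obj2,  cv::noArray(), k2, d2);
    orb->detectAndCompute(query, cv::noArray(), kq, dq);

    // recognition mode: add the dictionary descriptors, train, then match the query
    cv::Ptr<cv::DescriptorMatcher> matcher =
        cv::DescriptorMatcher::create("BruteForce-Hamming");
    matcher->add(std::vector<cv::Mat>{d1, d2});
    matcher->train();

    std::vector<cv::DMatch> matches;
    matcher->match(dq, matches);          // single-list (recognition) overload
    int votes[2] = {0, 0};
    for (const cv::DMatch& m : matches)
        ++votes[m.imgIdx];                // imgIdx identifies the training image
    std::cout << "object1 votes: " << votes[0]
              << ", object2 votes: " << votes[1] << std::endl;
    return 0;
}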
Keypoint Filters
Keypoint filters are used to select better keypoints from an existing set, or to remove duplicate keypoints.
class cv::KeyPointsFilter {
public:
    static void runByImageBorder(
        vector< cv::KeyPoint >& keypoints, // in/out list of keypoints
        cv::Size imageSize,                // Size of original image
        int borderSize                     // Size of border in pixels
    );
    static void runByKeypointSize(
        vector< cv::KeyPoint >& keypoints, // in/out list of keypoints
        float minSize,                     // Smallest keypoint to keep
        float maxSize = FLT_MAX            // Largest one to keep
    );
    static void runByPixelsMask(
        vector< cv::KeyPoint >& keypoints, // in/out list of keypoints
        const cv::Mat& mask                // Keep where mask is nonzero
    );
    static void removeDuplicated(
        vector< cv::KeyPoint >& keypoints  // in/out list of keypoints
    );
    static void retainBest(
        vector< cv::KeyPoint >& keypoints, // in/out list of keypoints
        int npoints                        // Keep this many
    );
};
Method descriptions (a short filtering sketch follows this list):
- runByImageBorder(): removes all keypoints that lie within borderSize pixels of the image border; the imageSize of the original image must be supplied
- runByKeypointSize(): removes all keypoints smaller than minSize or larger than maxSize
- runByPixelsMask(): removes all keypoints at locations where the mask is zero
- removeDuplicated(): removes duplicate keypoints
- retainBest(): removes keypoints until only the npoints strongest remain
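A minimal filtering sketch, assuming an ORB detector and a placeholder file name; the border width and keypoint budget here are arbitrary:

#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main() {
    cv::Mat img = cv::imread("scene.jpg", cv::IMREAD_GRAYSCALE);  // placeholder file name
    std::vector<cv::KeyPoint> keypoints;
    cv::ORB::create(2000)->detect(img, keypoints);

    // drop keypoints within 20 px of the border, then keep only the 500
    // with the strongest response
    cv::KeyPointsFilter::runByImageBorder(keypoints, img.size(), 20);
    cv::KeyPointsFilter::retainBest(keypoints, 500);
    std::cout << "keypoints kept: " << keypoints.size() << std::endl;
    return 0;
}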
Matching Methods
Brute force matching with cv::BFMatcher
Brute-force matching simply searches the whole training set for each query descriptor. The only thing that must be specified is the distance metric (normType), e.g. cv::NORM_L1 or cv::NORM_L2 for float descriptors such as SIFT and SURF, and cv::NORM_HAMMING (or cv::NORM_HAMMING2) for binary descriptors such as ORB and BRISK:
class cv::BFMatcher : public cv::DescriptorMatcher {
public:
    BFMatcher(int normType, bool crossCheck = false);
    virtual ~BFMatcher() {}

    virtual bool isMaskSupported() const { return true; }
    virtual Ptr<DescriptorMatcher> clone(
        bool emptyTrainData = false
    ) const;
    ...
};
If crossCheck is set to true, a match is kept only when the two descriptors are each other's nearest neighbor. This effectively reduces false matches, at the cost of extra computation time. A minimal sketch follows.
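A minimal cross-check sketch, assuming ORB descriptors and placeholder file names for the two input images:

#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main() {
    cv::Mat img1 = cv::imread("left.jpg",  cv::IMREAD_GRAYSCALE);  // placeholder names
    cv::Mat img2 = cv::imread("right.jpg", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat d1, d2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, d1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, d2);

    // Hamming norm for ORB's binary descriptors; crossCheck = true keeps a match
    // only when the two descriptors are each other's nearest neighbor
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(d1, d2, matches);
    std::cout << "cross-checked matches: " << matches.size() << std::endl;
    return 0;
}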
Fast approximate nearest neighbors and cv::FlannBasedMatcher
Fast approximate nearest-neighbor matching. By default the indexParams argument selects a kd-tree index, with the number of trees defaulting to 4:
class cv::FlannBasedMatcher : public cv::DescriptorMatcher {
public:
    FlannBasedMatcher(
        const cv::Ptr< cv::flann::IndexParams >& indexParams
            = new cv::flann::KDTreeIndexParams(),
        const cv::Ptr< cv::flann::SearchParams >& searchParams
            = new cv::flann::SearchParams()
    );
    virtual void add(const vector<Mat>& descriptors);
    virtual void clear();
    virtual void train();
    virtual bool isMaskSupported() const;

    virtual void read(const FileNode&);      // Read from file node
    virtual void write(FileStorage&) const;  // Write to file storage

    virtual cv::Ptr<DescriptorMatcher> clone(
        bool emptyTrainData = false
    ) const;
    ...
};
Parameter descriptions:
SearchParams:
struct cv::flann::SearchParams : public cv::flann::IndexParams {
    SearchParams(
        int   checks = 32,   // Limit on NN candidates to check
        float eps    = 0,    // (Not used right now)
        bool  sorted = true  // Sort multiple returns if 'true'
    );
};
IndexParams:
1. Linear indexing with cv::flann::LinearIndexParams: equivalent to cv::BFMatcher
// Equivalent to cv::BFMatcher
cv::FlannBasedMatcher matcher(
    new cv::flann::LinearIndexParams(), // Default index parameters
    new cv::flann::SearchParams()       // Default search parameters
);
2. KD-tree indexing with cv::flann::KDTreeIndexParams: matches using randomized kd-trees; the number of trees defaults to 4, and 16 is a commonly used setting
cv::FlannBasedMatcher matcher(
    new cv::flann::KDTreeIndexParams(16), // Index using 16 kd-trees
    new cv::flann::SearchParams()         // Default search parameters
);
3. Hierarchical k-means tree indexing with cv::flann::KMeansIndexParams: the index is built using hierarchical k-means clustering
struct cv::flann::KMeansIndexParams : public cv::flann::IndexParams {
    KMeansIndexParams(
        int   branching  = 32,  // Branching factor for tree
        int   iterations = 11,  // Max for k-means stage
        float cb_index   = 0.2, // Probably don't mess with
        cv::flann::flann_centers_init_t centers_init
            = cv::flann::CENTERS_RANDOM
    );
};
4. Combining KD-trees and k-means with cv::flann::CompositeIndexParams: uses the kd-tree and k-means methods together
struct cv::flann::CompositeIndexParams : public cv::flann::IndexParams {
    CompositeIndexParams(
        int   trees      = 4,   // Number of trees
        int   branching  = 32,  // Branching factor for tree
        int   iterations = 11,  // Max for k-means stage
        float cb_index   = 0.2, // Usually leave as-is
        cv::flann::flann_centers_init_t centers_init
            = cv::flann::CENTERS_RANDOM
    );
};
5. Locality-sensitive hashing (LSH) indexing with cv::flann::LshIndexParams: uses hash functions that place similar items into the same buckets; it can only be used with binary features, which are compared with the Hamming distance (a usage sketch follows the struct definition)
struct cv::flann::LshIndexParams : public cv::flann::IndexParams {
    LshIndexParams(
        unsigned int table_number,     // Number of hash tables to use, usually '10' to '30'
        unsigned int key_size,         // key bits, usually '10' to '20'
        unsigned int multi_probe_level // Best to just set this to '2'
    );
};
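A minimal LSH sketch, assuming ORB binary descriptors and placeholder file names; the table, key-size, and probe-level values follow the ranges suggested above:

#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main() {
    cv::Mat img1 = cv::imread("left.jpg",  cv::IMREAD_GRAYSCALE);  // placeholder names
    cv::Mat img2 = cv::imread("right.jpg", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat d1, d2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, d1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, d2);

    // LSH index for binary descriptors: 12 hash tables, 20-bit keys, multi-probe level 2
    cv::FlannBasedMatcher matcher(
        cv::makePtr<cv::flann::LshIndexParams>(12, 20, 2),
        cv::makePtr<cv::flann::SearchParams>(50));
    std::vector<cv::DMatch> matches;
    matcher.match(d1, d2, matches);
    std::cout << "FLANN/LSH matches: " << matches.size() << std::endl;
    return 0;
}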
6. Automatic index selection with cv::flann::AutotunedIndexParams: lets the algorithm choose a suitable index type on its own
struct cv::flann::AutotunedIndexParams : public cv::flann::IndexParams {
    AutotunedIndexParams(
        float target_precision = 0.9,  // Percentage of searches required
                                       // to return an exact result
        float build_weight     = 0.01, // Priority for building fast
        float memory_weight    = 0.0,  // Priority for saving memory
        float sample_fraction  = 0.1   // Fraction of training data to use
    );
};
Displaying Results
Displaying keypoints with cv::drawKeypoints
void cv::drawKeypoints(
    const cv::Mat& image,                       // Image to draw keypoints on
    const vector< cv::KeyPoint >& keypoints,    // List of keypoints to draw
    cv::Mat& outImg,                            // image and keypoints drawn
    const Scalar& color = cv::Scalar::all(-1),  // Use different colors
    int flags = cv::DrawMatchesFlags::DEFAULT
);
Parameter descriptions (a short drawing sketch follows this list):
- color: cv::Scalar::all(-1) automatically uses a different color for each keypoint
- flags: cv::DrawMatchesFlags::DEFAULT draws small circles; cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS draws a circle of the keypoint's size and marks its angle
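A minimal drawing sketch, assuming an ORB detector and a placeholder file name:

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("scene.jpg");   // placeholder file name
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);

    std::vector<cv::KeyPoint> keypoints;
    cv::ORB::create()->detect(gray, keypoints);

    cv::Mat out;
    // DRAW_RICH_KEYPOINTS draws each keypoint as a circle of its 'size'
    // with a line indicating its 'angle'
    cv::drawKeypoints(img, keypoints, out, cv::Scalar::all(-1),
                      cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    cv::imshow("rich keypoints", out);
    cv::waitKey(0);
    return 0;
}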
Displaying keypoint matches with cv::drawMatches
void cv::drawMatches(
    const cv::Mat& img1,                        // "Left" image
    const vector< cv::KeyPoint >& keypoints1,   // Keypoints (lt. img)
    const cv::Mat& img2,                        // "Right" image
    const vector< cv::KeyPoint >& keypoints2,   // Keypoints (rt. img)
    const vector< cv::DMatch >& matches1to2,    // List of matches
    cv::Mat& outImg,                            // Result image
    const cv::Scalar& matchColor = cv::Scalar::all(-1),
    const cv::Scalar& singlePointColor = cv::Scalar::all(-1),
    const vector<char>& matchesMask = vector<char>(),
    int flags = cv::DrawMatchesFlags::DEFAULT
);
void cv::drawMatches(
    const cv::Mat& img1,                               // "Left" image
    const vector< cv::KeyPoint >& keypoints1,          // Keypoints (lt. img)
    const cv::Mat& img2,                               // "Right" image
    const vector< cv::KeyPoint >& keypoints2,          // Keypoints (rt. img)
    const vector< vector<cv::DMatch> >& matches1to2,   // List of lists
                                                       // of matches
    cv::Mat& outImg,                                   // Result image
    const cv::Scalar& matchColor                       // color of matches
        = cv::Scalar::all(-1),                         // and connecting lines
    const cv::Scalar& singlePointColor                 // color of
        = cv::Scalar::all(-1),                         // unmatched ones
    const vector< vector<char> >& matchesMask          // only draw for nonzero
        = vector< vector<char> >(),
    int flags = cv::DrawMatchesFlags::DEFAULT
);
Parameter descriptions:
- img1, img2, keypoints1, keypoints2: the two images and their respective keypoints
- matches1to2: the correspondences between keypoints, where keypoints1[i] matches keypoints2[matches[i]]
- outImg: the image with the match results drawn on it
- matchColor: matched keypoints are connected with lines and drawn in this color
- singlePointColor: keypoints that have no match are drawn in this color
- matchesMask: only the matches whose mask entry is nonzero are drawn
- flags: cv::DrawMatchesFlags::DEFAULT (the result is written to outImg, with keypoints marked as small circles); cv::DrawMatchesFlags::DRAW_OVER_OUTIMG (outImg is not reallocated, so cv::drawMatches() can be called several times to draw onto the same image); cv::DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS (unmatched keypoints are not drawn); cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS (keypoints are drawn as circles showing their size and orientation)
Source: "OpenCV Keypoints and Descriptors (2): Generic Keypoints and Descriptors"
Example:
// Note: this sample uses the OpenCV 2.4-style API (SiftFeatureDetector, the
// nonfree module, and the legacy BruteForceMatcher). In OpenCV 3, SIFT lives
// in the xfeatures2d contrib module and BFMatcher replaces BruteForceMatcher.
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/nonfree.hpp>   // SIFT
#include <opencv2/legacy/legacy.hpp>     // BruteForceMatcher (brute-force matching)
#include <vector>
#include <iostream>
using namespace std;
using namespace cv;

int main()
{
    Mat srcImg1 = imread("111.jpg");
    Mat srcImg2 = imread("111temp.jpg");

    // SIFT feature detector object
    SiftFeatureDetector siftDetector;

    // keypoint containers
    vector<KeyPoint> keyPoints1;
    vector<KeyPoint> keyPoints2;

    // detect the keypoints
    siftDetector.detect(srcImg1, keyPoints1);
    siftDetector.detect(srcImg2, keyPoints2);

    // draw the keypoints
    Mat feature_pic1, feature_pic2;
    drawKeypoints(srcImg1, keyPoints1, feature_pic1, Scalar::all(-1));
    drawKeypoints(srcImg2, keyPoints2, feature_pic2, Scalar::all(-1));

    // show the source images
    //imshow("src1", srcImg1);
    //imshow("src2", srcImg2);

    // show the keypoint images
    imshow("feature1", feature_pic1);
    imshow("feature2", feature_pic2);

    // compute the keypoint descriptors (feature vector extraction)
    SiftDescriptorExtractor descriptor;
    Mat description1;
    descriptor.compute(srcImg1, keyPoints1, description1);
    Mat description2;
    descriptor.compute(srcImg2, keyPoints2, description2);
    cout << description1.cols << endl;
    cout << description1.rows << endl;

    // brute-force matching
    BruteForceMatcher< L2<float> > matcher;               // instantiate the brute-force matcher
    vector<DMatch> matches;                               // match results
    matcher.match(description1, description2, matches);   // match the descriptors

    // filter the matches (DMatch::operator< orders them by distance)
    nth_element(matches.begin(), matches.begin() + 29, matches.end()); // keep the 30 best matches
    matches.erase(matches.begin() + 30, matches.end());               // drop the rest

    Mat result;
    drawMatches(srcImg1, keyPoints1, srcImg2, keyPoints2, matches, result,
                Scalar(0, 255, 0), Scalar::all(-1)); // matched keypoints in green,
                                                     // unmatched ones in random colors
    imshow("Match_Result", result);
    waitKey(0);
    return 0;
}
Result: