I'd like to convert a Mat to a Vec3f elegantly. Currently I do it like this:
Mat line;
Vec3f ln;
ln[0] = line.at<float>(0, 0);
ln[1] = line.at<float>(0, 1);
ln[2] = line.at<float>(0, 2);
Is there a better way to do it?
In your comment, you specify that this is a single-channel floating-point matrix of size 3x1. I'd be explicit about the data type in the code, so I'd represent it with cv::Mat1f.
Now, since it's a single-channel matrix, we can't directly access the elements as a Vec3f: if the input Mat were a submatrix, we'd get incorrect results. We can instead use cv::Mat::reshape to efficiently turn the input into a 3-channel Mat, and then use cv::Mat::at to access the first (and only) element.
Sample code:
#include <opencv2/opencv.hpp>
#include <iostream>
int main()
{
cv::Mat1f m(3, 1);
m << 1.0f, 2.0f, 3.0f;
cv::Vec3f v(m.reshape(3).at<cv::Vec3f>());
std::cout << "m=" << m << "\n";
std::cout << "v=" << v << "\n";
return 0;
}
Output:
m=[1;
2;
3]
v=[1, 2, 3]
To be honest, it might be more efficient to just write a short utility function to do this. Something like
cv::Vec3f to_vec3f(cv::Mat1f const& m)
{
CV_Assert((m.rows == 3) && (m.cols == 1));
return cv::Vec3f(m.at<float>(0), m.at<float>(1), m.at<float>(2));
}
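Usage would then look like this (a small sketch; the matrix contents are just example values):
cv::Mat1f line(3, 1);
line << 1.0f, 2.0f, 3.0f;
cv::Vec3f ln = to_vec3f(line); // ln = [1, 2, 3]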
One easy way
Mat m = Mat::zeros(3,1,CV_32F);
Vec3f xyz((float*)m.data);
cout << xyz << endl;
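Note that this relies on m.data pointing at three contiguous floats, which holds here but not for submatrices or other element types. A slightly safer sketch (the assertions are my addition, not part of the original answer):
CV_Assert(m.type() == CV_32F && m.isContinuous() && m.total() == 3);
Vec3f xyz((float*)m.data);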
This is a simple program to change the contrast and brightness of an image. I have noticed that there is another, almost identical program with one simple difference: saturate_cast is added to the code.
I don't understand the reason for doing this; there seems to be no need to convert to unsigned char (uchar), since both versions (with and without saturate_cast<uchar>) output the same result. I'd appreciate any help.
Here is the code:
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include "Source.h"
using namespace cv;
double alpha;
int beta;
int main(int, char** argv)
{
/// Read image given by user
Mat image = imread(argv[1]);
Mat image2 = Mat::zeros(image.size(), image.type());
/// Initialize values
std::cout << " Basic Linear Transforms " << std::endl;
std::cout << "-------------------------" << std::endl;
std::cout << "* Enter the alpha value [1.0-3.0]: ";std::cin >> alpha;
std::cout << "* Enter the beta value [0-100]: "; std::cin >> beta;
for (int x = 0; x < image.rows; x++)
{
for (int y = 0; y < image.cols; y++)
{
for (int c = 0; c < 3; c++)
{
image2.at<Vec3b>(x, y)[c] =
saturate_cast<uchar>(alpha*(image.at<Vec3b>(x, y)[c]) + beta);
}
}
}
/// Create Windows
namedWindow("Original Image", 1);
namedWindow("New Image", 1);
/// Show stuff
imshow("Original Image", image);
imshow("New Image", image2);
/// Wait until user press some key
waitKey();
return 0;
}
Since the result of your expression may go outside the valid range for uchar, i.e. [0,255], you'd better always use saturate_cast.
In your case, the result of the expression: alpha*(image.at<Vec3b>(x, y)[c]) + beta is a double, so it's safer to use saturate_cast<uchar> to clamp values correctly.
Also, this improves readability, since it's easy to see that you want a uchar out of an expression.
Without using saturate_cast you may have unexpected values:
uchar u1 = 257; // u1 = 1, why a very bright value is set to almost black?
uchar u2 = saturate_cast<uchar>(257); // u2 = 255, a very bright value is set to white
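To see the overflow with the expression from the question itself, here's a small sketch (the alpha, beta, and pixel values are made up for illustration):
double alpha = 3.0; int beta = 50;
uchar pixel = 200;
int raw = int(alpha * pixel + beta);                     // 650, outside [0, 255]
uchar bad = (uchar)raw;                                  // wraps around to 138
uchar good = saturate_cast<uchar>(alpha * pixel + beta); // clamped to 255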
inline unsigned char saturate_cast_uchar(double val) {
val += 0.5; // to round the value
return static_cast<unsigned char>(val < 0 ? 0 : (val > 0xff ? 0xff : val));
}
If val lies between 0 and 255, this function returns the rounded value; if val lies outside the range [0, 255], it returns the lower or upper boundary value.
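For example (values chosen to show both the rounding and the clamping):
saturate_cast_uchar(100.4); // returns 100
saturate_cast_uchar(100.6); // returns 101
saturate_cast_uchar(-3.0);  // returns 0
saturate_cast_uchar(300.0); // returns 255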
This is my first post on Stack Overflow, so I hope I'm doing everything right; sorry if not.
I'm writing code for a function to convert a single RGB value into CIE L*a*b* color space. The function is supposed to take an array of 3 floats (RGB channels with values in [0-255]) and to output an array of 3 floats with the L*a*b* values. To do so, I'm using the cvtColor function available with OpenCV.
As suggested on the OpenCV website, I'm creating the Mat structures (needed by cvtColor) by constructor.
My problem is that, although I think the code runs properly and performs the conversion, I'm unable to get the values contained in the Mat structure back.
Here's my code:
float * rgb2lab(float rgb[3]) {
// bring input in range [0,1]
rgb[0] = rgb[0] / 255;
rgb[1] = rgb[1] / 255;
rgb[2] = rgb[2] / 255;
// copy rgb in Mat data structure and check values
cv::Mat rgb_m(1, 1, CV_32FC3, cv::Scalar(rgb[0], rgb[1], rgb[2]));
std::cout << "rgb_m = " << std::endl << " " << rgb_m << std::endl;
cv::Vec3f elem = rgb_m.at<cv::Vec3f>(1, 1);
float R = elem[0];
float G = elem[1];
float B = elem[2];
printf("RGB =\n [%f, %f, %f]\n", R, G, B);
// create lab data structure and check values
cv::Mat lab_m(1, 1, CV_32FC3, cv::Scalar(0, 0, 0));
std::cout << "lab_m = " << std::endl << " " << lab_m << std::endl;
// convert
cv::cvtColor(rgb_m, lab_m, CV_RGB2Lab);
// check lab value after conversion
std::cout << "lab_m2 = " << std::endl << " " << lab_m << std::endl;
cv::Vec3f elem2 = lab_m.at<cv::Vec3f>(1, 1);
float l = elem2[0];
float a = elem2[1];
float b = elem2[2];
printf("lab =\n [%f, %f, %f]\n", l, a, b);
// generate the output and return
static float lab[] = { l, a, b };
return lab;
}
As you can see, I'm extracting all channels from the Mat structure with the at function and then accessing them individually from the vector. This is proposed as the solution in many places (one of them).
But if I run this code (the input vector was {123, 10, 200}), on cout I correctly get the outputs of the Mat structures (from which I gather the algorithm is converting correctly), but as you can see the extracted values are wrong:
rgb_m =
[0.48235294, 0.039215688, 0.78431374]
RGB =
[0.000000, 0.000000, -5758185472.000000]
lab_m =
[0, 0, 0]
lab_m2 =
[35.198029, 70.120964, -71.303688]
lab =
[0.000000, 0.000000, 4822177514157213323960797626368.000000]
Anyone have an idea of what I'm doing wrong?
Thank you so much for all your help!
The first element of a cv::Mat is always at (0, 0), so just correct cv::Vec3f elem = rgb_m.at<cv::Vec3f>(1, 1); to cv::Vec3f elem = rgb_m.at<cv::Vec3f>(0, 0); and cv::Vec3f elem2 = lab_m.at<cv::Vec3f>(1, 1); to cv::Vec3f elem2 = lab_m.at<cv::Vec3f>(0, 0);.
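Note that at<> performs no bounds checking in release builds, so reading element (1, 1) of a 1x1 matrix silently returns garbage, which is exactly the symptom above. The corrected accesses, using the names from the question:
cv::Vec3f elem = rgb_m.at<cv::Vec3f>(0, 0); // the only pixel of the 1x1 Mat
cv::Vec3f elem2 = lab_m.at<cv::Vec3f>(0, 0);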
I have a cv::Mat that I want to convert into a cv::Matx33f. I try to do it like this:
cv::Mat m;
cv::Matx33f m33;
.........
m33 = m;
but all the data gets lost! Any idea how to do this?
UPDATE
Here is the part of the code which causes my problem:
cv::Point2f Order::warpPoint(cv::Point2f pTmp){
cv::Matx33f warp = this->getTransMatrix() ; // the getter gives a cv::Mat back
transformMatrix.copyTo(warp); // because the first method didn't work, I tried the copyTo function
// and the last try was
warp = cv::Matx33f(transformationMatrix); // and warp is still 0
cv::Point3f warpPoint = cv::Matx33f(transformMatrix)*pTmp;
cv::Point2f result(warpPoint.x, warpPoint.y);
return result;
}
To convert from Mat to Matx, one can use the data pointer. For example,
cv::Mat m; // assume we know it is CV_32F type, and its size is 3x3
cv::Matx33f m33((float*)m.ptr());
This should do the job, assuming continuous memory in m. You can check it by:
std::cout << "m " << m << std::endl;
std::cout << "m33 " << m33 << std::endl;
I realise that the question is old, but this should also work:
auto m33 = Matx33f(m.at<float>(0, 0), m.at<float>(0, 1), m.at<float>(0, 2),
m.at<float>(1, 0), m.at<float>(1, 1), m.at<float>(1, 2),
m.at<float>(2, 0), m.at<float>(2, 1), m.at<float>(2, 2));
http://opencv.willowgarage.com/documentation/cpp/core_basic_structures.html says:
"If you need to do some operation on Matx that is not implemented, it is easy to convert the matrix to Mat and backwards."
Matx33f m(1, 2, 3,
4, 5, 6,
7, 8, 9);
cout << sum(Mat(m*m.t())) << endl;
There are now special conversion operators available in cv::Mat class for both ways:
class cv::Mat {
    template<typename _Tp, int m, int n> operator Matx<_Tp, m, n>() const;
};
cv::Mat tM = cv::getPerspectiveTransform(uvp, svp);
auto ttM = cv::Matx33f(tM);
...
tM = cv::Mat(ttM);
I am using openCV to implementing camera motion compensation for an application. I know I need to calculate the optical flow and then find the fundamental matrix between two frames to transform the image.
Here is what I have done so far:
void VideoStabilization::stabilize(Image *image) {
if (image->getWidth() != width || image->getHeight() != height) reset(image->getWidth(), image->getHeight());
IplImage *currImage = toCVImage(image);
IplImage *currImageGray = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);
cvCvtColor(currImage, currImageGray, CV_BGRA2GRAY);
if (baseImage) {
CvPoint2D32f currFeatures[MAX_CORNERS];
char featuresFound[MAX_CORNERS];
opticalFlow(currImageGray, currFeatures, featuresFound);
IplImage *result = transformImage(currImage, currFeatures, featuresFound);
if (result) {
updateImage(image, result);
cvReleaseImage(&result);
}
}
cvReleaseImage(&currImage);
if (baseImage) cvReleaseImage(&baseImage);
baseImage = currImageGray;
updateGoodFeatures();
}
void VideoStabilization::updateGoodFeatures() {
const double QUALITY_LEVEL = 0.05;
const double MIN_DISTANCE = 5.0;
baseFeaturesCount = MAX_CORNERS;
cvGoodFeaturesToTrack(baseImage, eigImage,
tempImage, baseFeatures, &baseFeaturesCount, QUALITY_LEVEL, MIN_DISTANCE);
cvFindCornerSubPix(baseImage, baseFeatures, baseFeaturesCount,
cvSize(10, 10), cvSize(-1,-1), TERM_CRITERIA);
}
void VideoStabilization::opticalFlow(IplImage *currImage, CvPoint2D32f *currFeatures, char *featuresFound) {
const unsigned int WIN_SIZE = 15;
const unsigned int PYR_LEVEL = 5;
cvCalcOpticalFlowPyrLK(baseImage, currImage,
NULL, NULL,
baseFeatures,
currFeatures,
baseFeaturesCount,
cvSize(WIN_SIZE, WIN_SIZE),
PYR_LEVEL,
featuresFound,
NULL,
TERM_CRITERIA,
0);
}
IplImage *VideoStabilization::transformImage(IplImage *image, CvPoint2D32f *features, char *featuresFound) const {
unsigned int featuresFoundCount = 0;
for (unsigned int i = 0; i < MAX_CORNERS; ++i) {
if (featuresFound[i]) ++featuresFoundCount;
}
if (featuresFoundCount < 8) {
std::cout << "Not enough features found." << std::endl;
return NULL;
}
CvMat *points1 = cvCreateMat(2, featuresFoundCount, CV_32F);
CvMat *points2 = cvCreateMat(2, featuresFoundCount, CV_32F);
CvMat *fundamentalMatrix = cvCreateMat(3, 3, CV_32F);
unsigned int pos = 0;
for (unsigned int i = 0; i < featuresFoundCount; ++i) {
while (!featuresFound[pos]) ++pos;
cvSetReal2D(points1, 0, i, baseFeatures[pos].x);
cvSetReal2D(points1, 1, i, baseFeatures[pos].y);
cvSetReal2D(points2, 0, i, features[pos].x);
cvSetReal2D(points2, 1, i, features[pos].y);
++pos;
}
int fmCount = cvFindFundamentalMat(points1, points2, fundamentalMatrix, CV_FM_RANSAC, 1.0, 0.99);
if (fmCount < 1) {
std::cout << "Fundamental matrix not found." << std::endl;
return NULL;
}
std::cout << fundamentalMatrix->data.fl[0] << " " << fundamentalMatrix->data.fl[1] << " " << fundamentalMatrix->data.fl[2] << "\n";
std::cout << fundamentalMatrix->data.fl[3] << " " << fundamentalMatrix->data.fl[4] << " " << fundamentalMatrix->data.fl[5] << "\n";
std::cout << fundamentalMatrix->data.fl[6] << " " << fundamentalMatrix->data.fl[7] << " " << fundamentalMatrix->data.fl[8] << "\n";
cvReleaseMat(&points1);
cvReleaseMat(&points2);
IplImage *result = transformImage(image, *fundamentalMatrix);
cvReleaseMat(&fundamentalMatrix);
return result;
}
MAX_CORNERS is 100 and it usually finds around 70-90 features.
With this code, I get a weird fundamental matrix, like:
-0.000190809 -0.00114947 1.2487
0.00127824 6.57727e-05 0.326055
-1.22443 -0.338243 1
Since I just hold the camera with my hand and try not to shake it (and there weren't any objects moving), I expected the matrix to be close to the identity. What am I doing wrong?
Also, I'm not sure what to use to transform the image. cvWarpAffine needs a 2x3 matrix; should I discard the last row or use another function?
What you're looking for is not the fundamental matrix but rather an affine or perspective transform.
The fundamental matrix describes the relation between two cameras with significantly different viewpoints. It is calculated such that if you have two points x (on one image) and x' (on the other) that are projections of the same point in space, then x'ᵀ F x (the product) is zero. If x and x' are nearly identical... then the only solution is to make F nearly zero (and practically useless). That's why you've got what you have.
The matrix that should indeed be near identity is a transformation A that transforms the points x to x'= A x (the old image into the new one). Depending on what types of transformations you want to include (affine or perspective), you could (theoretically) use the functions cvGetAffineTransform or cvGetPerspectiveTransform to calculate the transform. For that, you would need 3 or 4 point pairs, respectively.
However, the best choice (I think) is cvFindHomography. It estimates a perspective transform based on all of the available point pairs, using outlier filtering algorithms (RANSAC, for example), and gives you a 3x3 matrix.
Then you can use cvWarpPerspective to transform the images themselves.
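A minimal sketch of that pipeline with the C++ API (the function and variable names here are my assumptions for illustration; prevPts/currPts would come from your optical-flow step):
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat stabilizeFrame(const cv::Mat& frame,
                       const std::vector<cv::Point2f>& prevPts,
                       const std::vector<cv::Point2f>& currPts)
{
    // RANSAC filters outliers and returns a 3x3 perspective transform
    cv::Mat H = cv::findHomography(currPts, prevPts, cv::RANSAC, 3.0);
    cv::Mat result;
    if (!H.empty())
        cv::warpPerspective(frame, result, H, frame.size());
    return result;
}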
How to access elements by row, col in OpenCV 2.0's new "Mat" class? The documentation is linked below, but I have not been able to make any sense of it.
http://opencv.willowgarage.com/documentation/cpp/basic_structures.html#mat
On the documentation:
http://docs.opencv.org/2.4/modules/core/doc/basic_structures.html#mat
It says:
(...) if you know the matrix element type, e.g. it is float, then you can use the at<>() method
That is, you can use:
Mat M(100, 100, CV_64F);
cout << M.at<double>(0,0);
Maybe it is easier to use the Mat_ class. It is a template wrapper for Mat.
Mat_ overloads operator() to access the elements.
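A minimal sketch:
cv::Mat_<double> M = cv::Mat_<double>::zeros(3, 3);
M(0, 0) = 1.0;                     // operator() instead of at<double>(0, 0)
std::cout << M(0, 0) << std::endl;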
The ideas provided above are good. For fast access (in case you'd like to make a real-time application), you could try the following:
//suppose you read an image from a file that is gray scale
Mat image = imread("Your path", IMREAD_GRAYSCALE);
//...do some processing
uint8_t *myData = image.data;
int width = image.cols;
int height = image.rows;
int _stride = image.step; // in case cols != stride (rows may be padded)
for(int i = 0; i < height; i++)
{
for(int j = 0; j < width; j++)
{
uint8_t val = myData[ i * _stride + j];
//do whatever you want with your value
}
}
Pointer access is much faster than Mat::at<> access. Hope it helps!
Based on what #J. Calleja said, you have two choices
Method 1 - Random access
If you want to random access the element of Mat, just simply use
Mat.at<data_Type>(row_num, col_num) = value;
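For example:
Mat m = Mat::zeros(3, 3, CV_64F);
m.at<double>(1, 2) = 42.0;     // write
double v = m.at<double>(1, 2); // read, v == 42.0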
Method 2 - Continuous access
If you want continuous access, OpenCV provides a Mat iterator compatible with STL iterators, which is more C++-style:
MatIterator_<double> it, end;
for( it = I.begin<double>(), end = I.end<double>(); it != end; ++it)
{
//do something here
}
or
for (int row = 0; row < mat.rows; ++row) {
    float* p = mat.ptr<float>(row); // p points to the first element of the row
    for (int col = 0; col < mat.cols; ++col) {
        *p++; // operation on the current element here
    }
}
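When the matrix is continuous in memory, the two loops can also be collapsed into a single pass over the whole buffer; a sketch, assuming a single-channel CV_32F matrix:
if (mat.isContinuous()) {
    float* p = mat.ptr<float>(0);
    for (size_t i = 0; i < mat.total(); ++i) {
        // operate on p[i] here
    }
}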
If you have any difficulty understanding how Method 2 works, see the article Dynamic Two-dimensioned Arrays in C, which explains the row-pointer layout in a much more intuitive and comprehensible way.
OCV goes out of its way to make sure you can't do this without knowing the element type, but if you want an easily codable but not-very-efficient way to read it type-agnostically, you can use something like
double val=mean(someMat(Rect(x,y,1,1)))[channel];
To do it well, you do have to know the type though. The at<> method is the safe way, but direct access to the data pointer is generally faster if you do it correctly.
For cv::Mat_<T> mat just use mat(row, col)
Accessing the elements of a matrix with a specified type, cv::Mat_<_Tp>, is more comfortable, as you can skip the template specification. This is pointed out in the documentation as well.
code:
cv::Mat1d mat0 = cv::Mat1d::zeros(3, 4);
std::cout << "mat0:\n" << mat0 << std::endl;
std::cout << "element: " << mat0(2, 0) << std::endl;
std::cout << std::endl;
cv::Mat1d mat1 = (cv::Mat1d(3, 4) <<
1, NAN, 10.5, NAN,
NAN, -99, .5, NAN,
-70, NAN, -2, NAN);
std::cout << "mat1:\n" << mat1 << std::endl;
std::cout << "element: " << mat1(0, 2) << std::endl;
std::cout << std::endl;
cv::Mat mat2 = cv::Mat(3, 4, CV_32F, 0.0);
std::cout << "mat2:\n" << mat2 << std::endl;
std::cout << "element: " << mat2.at<float>(2, 0) << std::endl;
std::cout << std::endl;
output:
mat0:
[0, 0, 0, 0;
0, 0, 0, 0;
0, 0, 0, 0]
element: 0
mat1:
[1, nan, 10.5, nan;
nan, -99, 0.5, nan;
-70, nan, -2, nan]
element: 10.5
mat2:
[0, 0, 0, 0;
0, 0, 0, 0;
0, 0, 0, 0]
element: 0