In MATLAB, when I use the imread function, the pixel values of an image are stored in a 3D matrix of uint8, with values between 0 and 255. But in OpenCV the imread function stores the values in a cv::Mat. When I try to inspect the pixel values I see floats, and when I try to convert them to integers I get very large values.
How can I see the cv::Mat components (RGB) with values between 0 and 255, like in MATLAB?
Thanks in advance!
cv::Mat can be used with any pixel type; if you use imread it will create a cv::Mat of the correct type.
Floating point images are unusual - are you sure the source data is floating point, or are you just printing the values incorrectly?
You can convert a floating point image to 8-bit (CV_8UC3) with cv::Mat::convertTo().
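For example, a minimal sketch (assuming the loaded Mat, here called img, really is floating point, e.g. CV_32FC3 with values in [0, 1]):

cv::Mat img8u;
// scale from [0, 1] floats to [0, 255] while converting the depth to 8 bit
img.convertTo(img8u, CV_8UC3, 255.0);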
int red = (int)((uchar*)pImg->imageData)[y*((int)pImg->widthStep)+x*3+C];
Those are for IplImage; I don't know if it works for Mat. C = 0, 1, or 2 selects the color channel.
An example of the results. I have this:
Canny(test, edges, 0.9*av[0], 2*av[0], 3, true);
for (int i = 0; i < edges.rows; i++) {
    for (int j = 0; j < edges.cols; j++) {
        cv::Vec2w& elem = edges.at<cv::Vec2w>(i, j);
        std::cout << elem[0] << " , " << elem[1] << std::endl;
    }
}
The edges variable (a cv::Mat) stores the result of the cv::Canny function (a binary image). When I try to see the pixel values using cv::Vec2w I get this result:
6408 , 2058
1817 , 7433
1540 , 282
5386 , 1024
15 , 4867
768 , 275
1285 , 512
2 , 0
0 , 0
1 , 256
with cv::Vec2
0 , 0
0 , 0
0 , 0
0 , 0
0 , -256
255 , -256
0 , 0
0 , 0
with cv::Vec2i
0 , 0
0 , 0
-16777216 , -16776961
0 , 0
0 , -256
-1 , 65535
0 , -16777216
65535 , 0
0 , 0
and so on...
But, for example, if I write this image out (imwrite("image.pgm", edges)) and then read it with Armadillo (the image is a single-channel binary image), I get as a result an (n x 1) matrix with values between 0 and 255. I know the formats of the two libraries are different, but I assume a binary image always has values 0 and 255 in one channel.
I normally store images as IplImage* rather than cv::Mat. If I say IplImage* frame = (get image somehow...), then frame->imageData gives the values you are interested in. For a color image the values are interleaved in a single array, [b1, g1, r1, b2, g2, r2, ..., b(height*width), g(height*width), r(height*width)] (OpenCV stores channels in BGR order), and I believe they are arranged row-wise.
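The cv::Mat equivalent is a sketch like the following (assuming an 8-bit BGR image and a hypothetical file name):

cv::Mat frame = cv::imread("image.png");                // 8-bit, 3-channel BGR by default
cv::Vec3b pixel = frame.at<cv::Vec3b>(y, x);            // row y, column x
int blue = pixel[0], green = pixel[1], red = pixel[2];  // each value is in 0..255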
What you need is edges.at<cv::Vec2b>(i,j), not cv::Vec2w or cv::Vec2i. And since the output of cv::Canny is actually a single-channel 8-bit image (CV_8UC1), the simplest is edges.at<uchar>(i,j).
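A sketch of the corrected loop (assuming edges comes straight from cv::Canny, so its type is CV_8UC1):

for (int i = 0; i < edges.rows; i++) {
    for (int j = 0; j < edges.cols; j++) {
        // cast to int so the value prints as a number (0 or 255), not a character
        std::cout << (int)edges.at<uchar>(i, j) << std::endl;
    }
}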
Using a Kalman filter to predict motion in 2D space, we usually create the transition matrix with the following equations:
x(k+1) = x(k) + vt + (1/2)at^2
or simply
x(k+1) = x(k) + vt
with x: position, v: velocity, a: (constant) acceleration.
This results in a transition matrix that looks like this (for 2D space):
1 0 t 0
0 1 0 t
0 0 1 0
0 0 0 1
But the OpenCV examples suggest we use the following matrix when setting up the Kalman filter in C++:
1 0 1 0
0 1 0 1
0 0 1 0
0 0 0 1
How can OpenCV interpret this correctly, knowing that a Kalman filter can be used for any dimension and unit?
I don't think it can. Instead of t, I used dt, where dt is the period between measurements (the example matrix above simply assumes dt = 1).
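For example, a sketch (not from the OpenCV samples) of setting up the transition matrix with an explicit dt, here assumed to be 1/30 s:

#include <opencv2/video/tracking.hpp>

float dt = 1.0f / 30.0f;          // period between measurements (assumed value)
cv::KalmanFilter kf(4, 2, 0);     // state: [x, y, vx, vy], measurement: [x, y]
kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
    1, 0, dt, 0,
    0, 1, 0, dt,
    0, 0, 1,  0,
    0, 0, 0,  1);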
I am trying to do translation, rotation and scaling with subpixel precision in OpenCV.
This is the link for doing translation with subpixel precision:
Translate image by very small step by OpenCV in C++
So I need to do the same for scaling and rotation. As an example, when I rotate the image by 2 degrees it works, but when I rotate it by 1 degree or less it doesn't work, and the result of rotation by 2 degrees is the same as rotation by 2.1 degrees!
In the provided link it is mentioned that I could increase the precision by changing the following code inside imgproc.hpp
enum InterpolationMasks {
    INTER_BITS = 5,
    INTER_BITS2 = INTER_BITS * 2,
    INTER_TAB_SIZE = 1 << INTER_BITS,
    INTER_TAB_SIZE2 = INTER_TAB_SIZE * INTER_TAB_SIZE
};
Here is an example for rotation:
Source image is the following:
0 0 0 0
0 255 255 0
0 255 255 0
0 0 0 0
I am doing rotation with respect to image center.
For rotation by 1 degree the result is the same as the source image!
And for rotation by 2 degrees the result is the same as for rotation by 2.1 degrees (which it should not be)!
Here is the code:
Mat rotateImage(Mat sourceImage, double rotationDegree){
    Mat rotationMatrix, rotatedImage;
    double rowOfImgCenter, colOfImgCenter;
    // rotate about the geometric center of the image
    rowOfImgCenter = sourceImage.rows / 2.0 - 0.5;
    colOfImgCenter = sourceImage.cols / 2.0 - 0.5;
    Point2d imageCenter(colOfImgCenter, rowOfImgCenter);
    rotationMatrix = getRotationMatrix2D(imageCenter, rotationDegree, 1.0);
    warpAffine(sourceImage, rotatedImage, rotationMatrix, sourceImage.size(), INTER_AREA);
    return rotatedImage;
}
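A usage sketch with the 4x4 test image from above (the variable names are just for illustration):

Mat src = (Mat_<double>(4, 4) <<
    0,   0,   0, 0,
    0, 255, 255, 0,
    0, 255, 255, 0,
    0,   0,   0, 0);
Mat rotated = rotateImage(src, 1.0);   // currently returns the source unchanged
std::cout << rotated << std::endl;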
When I check the provided translation code, it works with 1e-7 precision, but when I do the translation in the column direction by 1e-8 it gives me the source image instead of a translated one (which is not correct). When I do the same in MATLAB I get the following result, which is correct (I want the same result, with good accuracy, in C++):
0 0 0 0
0 2.5499999745e+02 255 2.549999976508843e-06
0 2.5499999745e+02 255 2.549999976508843e-06
0 0 0 0
Also, for rotation, when I rotate the image by 1 degree the result should look like this (this is what I got in MATLAB, and I want the same result in C++).
(I had to remove some digits (reduce the accuracy) so that the matrix would fit here.)
0 0 1.1557313 0
1.1557313 2.5497614e+02 2.549761e+02 0
0 2.5497614e+02 2.549761e+02 1.1557313
0 1.1557313 0 0
Now my question is: how can I increase the subpixel precision in OpenCV?
Or, are there any ideas or code to do image scaling and rotation with subpixel precision (like the code provided for image translation in the linked question)?
I would appreciate any ideas.
I need to extend a kernel from one channel to multiple channels. For example, from
0 1 0
1 -4 1
0 1 0
to
0 0 0 1 1 1 0 0 0
1 1 1 -4 -4 -4 1 1 1
0 0 0 1 1 1 0 0 0
following the standard three-channel cv::Mat layout.
I have the following code:
void createKernel(InputArray _A, InputArray _B, OutputArray _kernel, const int chn)
{
    Mat A = _A.getMat();
    Mat B = _B.getMat();
    Mat kernel;
    Mat kernelOneChannel = A * B;
    std::vector<Mat> channels;
    for (int i = 0; i < chn; i++)
    {
        channels.push_back(kernelOneChannel);
    }
    merge(channels, kernel);
    kernel.copyTo(_kernel);
}
The one-channel kernel is pushed into a std::vector as many times as chn specifies; after that, a single multi-channel cv::Mat is created with merge.
My question is about the last line, kernel.copyTo(_kernel). In many examples I have seen, this is how OutputArray is handled. Is this copyTo really needed? It seems like a waste of memory and time to copy the already computed kernel into _kernel. Is there any solution that avoids this copy from one structure to another?
My question is strictly related to OpenCV and mentioned structures.
Thanks in advance.
In your specific case you can pass the _kernel variable directly to the merge call to avoid the unnecessary copy:
merge(channels, _kernel)
In the general case, the OutputArray object is supposed to be used in the following way:
_outArr.create(size, type);
Mat outMat = _outArr.getMat();
Now the outMat variable can be filled without extra copies.
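Putting that together, the createKernel function from the question could look like this (a sketch; merge allocates and fills the OutputArray itself):

void createKernel(InputArray _A, InputArray _B, OutputArray _kernel, const int chn)
{
    Mat kernelOneChannel = _A.getMat() * _B.getMat();
    // replicate the single-channel kernel chn times and merge straight into _kernel
    std::vector<Mat> channels(chn, kernelOneChannel);
    merge(channels, _kernel);
}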
I am trying to encode a CV_32FC1 Mat image to send it over the internet with base64. The process works, but OpenCV encodes it in the wrong format. Example:
vector<unsigned char> buffer;
vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_PXM_BINARY);
compression_params.push_back(0);
cv::imencode(".pgm", desc, buffer, compression_params);
printf("%s", &buffer[0]);
This generates the following output:
P2
64 15
255
0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
etc..
According to the compression parameter and the magic number (P2), it shouldn't be encoded in binary format; it should be ASCII. (Source)
In itself this isn't a problem, but when I change compression_params.push_back(0) to compression_params.push_back(1) I get this output (without image data):
P5
64 15
255
I am using OpenCV 2.4.4 on iOS. How can I fix this, or alternatively, how can I send a Mat properly without losing data?
I don't know if you have resolved your problem, but I figured out the cause, and it might apply to your case too, even though you are using the iOS version, which I am not familiar with:
How do I capture images in OpenCV and saving in pgm format?
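Not from the linked answer, but one possible sketch for the "without losing data" part: since PGM only stores integer samples, you could skip the image codec entirely and base64-encode the raw bytes of the float Mat (plus a small size/type header) instead:

// desc is the CV_32FC1 Mat from the question; buffer is what gets base64-encoded
std::vector<unsigned char> buffer;
int header[3] = { desc.rows, desc.cols, desc.type() };
const unsigned char* h = reinterpret_cast<const unsigned char*>(header);
buffer.insert(buffer.end(), h, h + sizeof(header));
cv::Mat cont = desc.isContinuous() ? desc : desc.clone();   // make sure the data is contiguous
buffer.insert(buffer.end(), cont.data, cont.data + cont.total() * cont.elemSize());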
Hello everybody, right now I'm trying to get the grey value of every pixel in an image.
What I mean by grey value is the white or black level of the image, say 0 for white and 1 for black. For example, for this image
the values I want would be like
0 0 0 0 0 0
0 1 1 1 0 0
0 0 1 1 0 0
0 0 1 1 0 0
0 0 1 1 0 0
0 0 1 1 0 0
0 0 1 1 0 0
0 0 0 0 0 0
Is this possible? If yes, how can I do it with OpenCV in C? Or, if it's impossible with OpenCV, is there any other library that can do this?
What you ask is certainly possible, but how it can be done depends on a lot of things. If you use C++, on SO we generally expect you to use the C++ interface, which means you have a cv::Mat object and loaded the image with something like this (using namespace cv):
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp> // for imread
Mat mat_gray = imread(path, CV_LOAD_IMAGE_GRAYSCALE);
or by
Mat mat = imread(path); // and assuming it was originally a color image...
Mat mat_gray;
cvtColor(mat, mat_gray, CV_BGR2GRAY); //...convert it to grayscale (cvtColor needs opencv2/imgproc/imgproc.hpp).
Now, if you just want to access pixel values one by one, you use _Tp& Mat::at<_Tp>(int row, int col). That is:
for(int x=0; x<mat_gray.rows; ++x)
    for(int y=0; y<mat_gray.cols; ++y)
        mat_gray.at<uchar>(x,y); // if mat.type() == CV_8U
You can look up your type here, which you should use in place of uchar if the mat.type is other than CV_8U.
As for the pure C interface, you can check this answer. But if you use C++, you should definitely use the C++ interface.
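If you then want the 0/1 map from the question, here is a small sketch (assuming "black" means a grey value below 128, and a hypothetical input file name):

#include <cstdio>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat mat_gray = imread("input.png", CV_LOAD_IMAGE_GRAYSCALE);
    for(int x = 0; x < mat_gray.rows; ++x)
    {
        for(int y = 0; y < mat_gray.cols; ++y)
            printf("%d ", mat_gray.at<uchar>(x, y) < 128 ? 1 : 0);  // 1 = dark, 0 = light
        printf("\n");
    }
    return 0;
}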