I have a 2D vector called Mat with values from 0 to 255 that I'm assigning to an IplImage as follows:
IplImage *A = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 1);
for (int i = 0; i < 480; i++)            // 480 rows
{
    for (int j = 0; j < 640; j++)        // 640 columns
    {
        A->imageData[i * A->widthStep + j] = Mat[i][j];  // widthStep is the (possibly padded) row size in bytes
    }
}
But what if I have three 2D vectors, Mat1, Mat2, and Mat3, and an IplImage whose number of channels is equal to 3:
IplImage *A = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);
I thought I could do it channel by channel and merge them all at the end, but I suspect that's not the optimal solution.
Any idea how to access the imageData of the three channels in that case?
First, note that you can avoid the copy loop entirely if Mat is stored contiguously, by directly assigning the imageData struct member of IplImage. You'll have to use cvCreateImageHeader instead of cvCreateImage so that no pixel buffer is allocated for the image.
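A minimal sketch of that technique, where buffer is a hypothetical pointer to your contiguous 480×640 bytes of data:
IplImage *A = cvCreateImageHeader(cvSize(640, 480), IPL_DEPTH_8U, 1);  // header only, no pixel buffer allocated
cvSetData(A, buffer, 640);   // attach your data; the third argument is the row stride in bytes
// ... use A ...
cvReleaseImageHeader(&A);    // frees the header only, not your buffer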
Second, regarding your question: it is possible by creating three single-channel images with the technique mentioned above and then using cvMerge to produce the final image.
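A sketch of that approach, assuming B, G, and R are three single-channel IplImage* of the same size (e.g. wrapped over Mat1, Mat2, Mat3 as above):
IplImage *merged = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);
cvMerge(B, G, R, NULL, merged);  // fourth source is NULL since there is no 4th channel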
In general, I recommend that you migrate to the C++ interface of OpenCV, which uses cv::Mat instead of the old IplImage interface.
If you look at the OpenCV tutorial for the C++ API, there is an example of working with Mat:
http://docs.opencv.org/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-iterator-safe-method
It presents three ways to access a 3-channel image; one of them is sketched below.
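For instance, the at-based access pattern from that tutorial, applied to the original question (filling a CV_8UC3 cv::Mat from the three 2D vectors Mat1, Mat2, Mat3):
cv::Mat img(480, 640, CV_8UC3);
for (int r = 0; r < img.rows; ++r)
    for (int c = 0; c < img.cols; ++c)
        // cv::Vec3b holds the three interleaved bytes of one pixel (BGR order)
        img.at<cv::Vec3b>(r, c) = cv::Vec3b(Mat1[r][c], Mat2[r][c], Mat3[r][c]);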
I'm new to OpenCV and image processing. I am trying to find moving objects using optical flow (the Lucas-Kanade method) by comparing two saved images on disk, which are frames from a camera.
The part of code I am asking about is:
Mat img = imread("t_mono1.JPG", CV_LOAD_IMAGE_UNCHANGED);//read the image
Mat img2 = imread("t_mono2.JPG", CV_LOAD_IMAGE_UNCHANGED);//read the 2nd image
Mat pts;
Mat pts2;
Mat stat;
Mat err;
goodFeaturesToTrack(img,pts,100,0.01,0.01);//extract features to track
calcOpticalFlowPyrLK(img,img2,pts,pts2,stat,err);//optical flow
cout << " " << pts.cols << "\n" << pts.rows << "\n";
I am getting that the size of pts is 100 by 1. I would have expected 100 by 2, since the pixels have x and y coordinates. Further, when I used for loops to display the contents of the arrays, the pts array was all zeros and all the arrays were one-dimensional.
I have seen this question:
OpenCV's calcOpticalFlowPyrLK throws exception
I tried using vectors as he did, but I get an error when building that says it cannot convert between different data types. I am using VS2008 with OpenCV 2.4.11.
I would like to get the (x, y) coordinates of the features in the first and second images, plus the error values, but all the arrays passed to calcOpticalFlowPyrLK were one-dimensional and I don't understand how this can be.
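For reference, the vector-based calling convention from that linked question looks roughly like this; a sketch only, assuming img and img2 are single-channel 8-bit images:
std::vector<cv::Point2f> pts, pts2;  // 2D points, so x and y are explicit
std::vector<uchar> stat;             // per-point tracking status
std::vector<float> err;              // per-point tracking error
cv::goodFeaturesToTrack(img, pts, 100, 0.01, 0.01);
cv::calcOpticalFlowPyrLK(img, img2, pts, pts2, stat, err);
// pts[k].x, pts[k].y are the coordinates in the first image; pts2[k] in the second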
I know this question may seem trivial to experts, but I want to access the pixels of a grayscale image in OpenCV. I am using the following code:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Now if I print any pixel (say at position 255,255) using img1.at<float>(255,255), I get 0, even though the pixel is not actually black. I thought the image had been read as a 2D matrix.
Actually I want to do some calculation on each pixel, and for that I need to access each pixel explicitly.
Any help or suggestion will be appreciated.
You will have to do this:
int x = (int)(img.at<uchar>(j, i));  // at() takes (row, column)
This is because of the way the image is stored and the way img.at works. A grayscale image is stored in CV_8UC1 format, so it is basically an array of uchar. The return value is therefore a uchar, which you typecast into an int to operate on it as an int.
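A minimal sketch that visits every pixel this way, assuming img is a CV_8UC1 image:
for (int r = 0; r < img.rows; ++r)
{
    for (int c = 0; c < img.cols; ++c)
    {
        int v = (int)img.at<uchar>(r, c);  // read the byte, widen to int
        // ... per-pixel calculation on v ...
    }
}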
Also, this is similar to your question:
accessing pixel value of gray scale image in OpenCV
float is the wrong type here.
if you read an image like this:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
then its type will be CV_8U (uchar, a single grayscale byte per pixel). So, to access it:
uchar &pixel = img1.at<uchar>(row,col);
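If you actually need float pixels for your calculation, convert explicitly first; a sketch:
cv::Mat img1f;
img1.convertTo(img1f, CV_32F, 1.0 / 255.0);  // optional scaling to [0, 1]
float pixel = img1f.at<float>(row, col);     // now <float> is the matching accessor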
I have an array of pixel data in RGBA format. I have already converted this data to grayscale using the GPU (thus all 4 channels are identical).
I now want to use this grayscale data in OpenCV, and I don't want to store 4 copies of the same data. Is it possible to create a cv::Mat structure from this pixel array by specifying a stride (i.e. only read out every 4th byte)?
I am currently using
GLubyte* Img = /* stuff from GPU */;
cv::Mat tmp(height, width, CV_8UC4, Img);
But does this copy all the data, or does it wrap the existing pointer in a cv::Mat without copying it? If it wraps without copying, then I will be happy to use standard C++ routines to copy only the data I want from Img into a new section of memory and then wrap that as a cv::Mat.
Otherwise, how would you suggest reducing the amount of data being copied?
Thanks
The code that you are using
cv::Mat tmp(rows, cols, CV_8UC4, dataPointer);
does not perform any copy; it only assigns the data field of the Mat instance.
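You can convince yourself of the no-copy behavior with a quick check (a sketch; assert is from <cassert>):
Img[0] = 42;                       // write through the original pointer
assert(tmp.data == (uchar*)Img);   // same memory: the Mat is only a header over Img
assert(tmp.at<cv::Vec4b>(0, 0)[0] == 42);  // the change is visible through the Mat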
If it's ok for you to work with a matrix of 4 channels, then just go on.
Otherwise, if you prefer working with a 1-channel matrix, just use the function cv::cvtColor() to create a new image with a single channel (but then you get one additional image in memory and pay the CPU cycles for the conversion):
cv::Mat grey;
cv::cvtColor(tmp, grey, CV_BGRA2GRAY);  // note: BGRA, not BGR, since tmp has 4 channels
Finally, one last thing: if you can deinterleave the color planes beforehand (for example on the GPU) and get an image laid out as [blue plane, green plane, red plane], then you can pass CV_8UC1 as the image type in the construction of tmp and get a single-channel grey image without any data copy, as sketched below.
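A sketch of that last idea, with planarData as a hypothetical pointer to the deinterleaved planes laid out back to back ([plane 0][plane 1][plane 2], each width*height bytes):
// Since all your channels are identical, plane 0 already holds the grey image.
cv::Mat grey(height, width, CV_8UC1, planarData);  // header over plane 0, no copy
// Note: the Mat does not own planarData, so keep that buffer alive while grey is in use.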
I have an image which has 4 channels and is in 4 * UINT8 format.
I am trying to convert it to a 3-channel float image, and I am using this code:
images.convertTo(images,CV_32FC3,1/255.0);
After the conversion, the image is in a float format but still has 4 channels. How can I get rid of the 4th (alpha) channel in OpenCV?
As @AldurDisciple said, Mat::convertTo() is intended for changing the data type of a Mat, not the number of channels.
To make it work, you should split it into two steps:
cvtColor(image, image, CV_BGRA2BGR); // 1. change the number of channels
image.convertTo(image, CV_32FC3, 1/255.0); // 2. change type to float and scale
The function convertTo is intended to be used to change the data type of a Mat, exclusively. As mentioned in the documentation, the number of channels of the output image is always the same as that of the input image.
If you want to change the datatype and reduce the number of channels, you should use a combination of split, merge, and convertTo:
cv::Mat img_8UC4;
cv::Mat chans[4];
cv::split(img_8UC4, chans);             // separate the four channels
cv::Mat img_8UC3;
cv::merge(chans, 3, img_8UC3);          // re-merge only the first three (drops alpha)
cv::Mat img_32FC3;
img_8UC3.convertTo(img_32FC3, CV_32FC3, 1/255.0);  // convertTo needs the target type
Another approach may be to recode the algorithm yourself, which is quite easy and probably more efficient.
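A sketch of that manual approach, assuming src is the CV_8UC4 input image:
// Convert CV_8UC4 to CV_32FC3 in one pass: drop alpha and scale to [0, 1].
cv::Mat dst(src.rows, src.cols, CV_32FC3);
for (int r = 0; r < src.rows; ++r)
{
    const cv::Vec4b *in  = src.ptr<cv::Vec4b>(r);  // B, G, R, A bytes per pixel
    cv::Vec3f       *out = dst.ptr<cv::Vec3f>(r);
    for (int c = 0; c < src.cols; ++c)
    {
        out[c][0] = in[c][0] / 255.0f;
        out[c][1] = in[c][1] / 255.0f;
        out[c][2] = in[c][2] / 255.0f;   // channel 3 (alpha) is simply skipped
    }
}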
OpenCV's cvtColor function allows you to convert the type and number of channels of a Mat.
void cvtColor(InputArray src, OutputArray dst, int code, int dstCn=0 )
So something like this would convert a colored 4-channel image to a colored 3-channel one:
cvtColor(image, image, CV_BGRA2BGR, 3);
Or it is probably more efficient to use the mixChannels function; if you check the documentation, its example shows how to split a channel out.
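A sketch of the mixChannels route, assuming bgra is the CV_8UC4 input:
cv::Mat bgr(bgra.rows, bgra.cols, CV_8UC3);
int fromTo[] = { 0,0, 1,1, 2,2 };               // copy B->B, G->G, R->R; alpha is dropped
cv::mixChannels(&bgra, 1, &bgr, 1, fromTo, 3);  // 3 source/destination channel pairs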
Then if you really want to change it to a specific type:
image.convertTo(image,CV_32F);
Mat mask = Mat::zeros(img1.rows, img1.cols, CV_8UC1);
This piece of code is supposed to create a mask, I think, using C++. What is the equivalent way to create a mask like this in C? Also, can someone explain what this piece of code is actually doing, please?
With the C API, we would call
IplImage *mask = cvCreateImage(cvGetSize(img1), IPL_DEPTH_8U, 1);
cvSetZero(mask);
The C API is easier to read IMO, and what it does is create an image with 8 bits per pixel and 1 channel (grayscale), of the same size as img1, and then set all its pixel values to zero.
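A typical use of such a mask, as a hedged sketch (dst is a hypothetical preallocated image of the same size and type as img1):
cvCircle(mask, cvPoint(320, 240), 100, cvScalarAll(255), CV_FILLED, 8, 0);  // mark a circular region
cvCopy(img1, dst, mask);  // copy only the pixels where mask is non-zero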