Change column order in an OpenCV Mat without copying data - C++

I have an OpenCV Mat with M rows and 3 columns. Is there a way to reorder the Mat so that the first and last (i.e., third) columns are swapped while the middle column stays in place, without copying the data?

OpenCV stores a Mat's data as a flat array of pixels. You can sometimes get a column or rectangular view of an image (for example with col()), in which case the data is not continuous and element addresses are computed, as far as I know, using a step between rows. The data is still shared, however, and remains a single array.
The question then becomes: can I swap two portions of an array without copying the data? Not as far as I know.
You can use optimized OpenCV functions to swap the columns, but the data will be copied.
Also note that OpenCV functions are considerably slower on non-continuous data than on continuous data. More can be read here.
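For completeness, here is a minimal sketch of swapping the two outer columns with copies (not zero-copy; a temporary clone of one column is needed because col() only returns a view into the same buffer):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat m = (cv::Mat_<int>(2, 3) << 1, 2, 3,
                                        4, 5, 6);
    cv::Mat tmp = m.col(0).clone();  // clone, since col() only makes a view
    m.col(2).copyTo(m.col(0));       // overwrite the first column
    tmp.copyTo(m.col(2));            // write the saved column into the last one
    std::cout << m << std::endl;     // [3, 2, 1; 6, 5, 4]
    return 0;
}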

You can use the OpenCV function flip for that. As an example, the following code flips an image about its middle column:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img, flipped;       // your Mat
    img = imread("lena.jpg");
    flip(img, flipped, 1);  // flipped is the output; flipCode 1 flips around the y-axis
    imshow("img", flipped);
    waitKey(0);
    return 0;
}

Related

Warp perspective and stitch/overlap images (C++)

I am detecting and matching features of a pair of images, using a typical detector-descriptor-matcher combination, and then findHomography to produce a transformation matrix.
After this, I want the two images to be overlapped (the second one, imgTrain, over the first one, imgQuery), so I warp the second image using the transformation matrix:
cv::Mat imgQuery, imgTrain;
...
TRANSFORMATION_MATRIX = cv::findHomography(...)
...
cv::Mat imgTrainWarped;
cv::warpPerspective(imgTrain, imgTrainWarped, TRANSFORMATION_MATRIX, imgTrain.size());
From here on, I don't know how to produce an image that contains the original imgQuery with the warped imgTrainWarped on it.
I consider two scenarios:
1) One where the size of the final image is the size of imgQuery
2) One where the size of the final image is big enough to accommodate both imgQuery and imgTrainWarped, since they overlap only partially, not completely. I understand this second case might have black/blank space somewhere around the images.
You should warp to a destination matrix that has the same dimensions as imgQuery. After that, loop over the whole warped image and copy its pixels to the first image, but only where the warped image actually holds a warped pixel. That is most easily done by warping an additional mask. Please try this:
cv::Mat imgMask = cv::Mat(imgTrain.size(), CV_8UC1, cv::Scalar(255));
cv::Mat imgMaskWarped;
cv::warpPerspective(imgMask, imgMaskWarped, TRANSFORMATION_MATRIX, imgQuery.size());
cv::Mat imgTrainWarped;
cv::warpPerspective(imgTrain, imgTrainWarped, TRANSFORMATION_MATRIX, imgQuery.size());
// now copy only the masked pixels:
imgTrainWarped.copyTo(imgQuery, imgMaskWarped);
Please try this and tell me whether it is OK and solves scenario 1. For scenario 2, you would first determine how big the destination image must be (by applying the transformation to the image corners before warping) and then copy both images into a destination image that is big enough; see the sketch below.
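A minimal sketch of that idea for scenario 2 (not from the original answer; the file names and the identity placeholder for the homography are assumptions to keep it self-contained):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat imgQuery = cv::imread("query.jpg");  // placeholder file names
    cv::Mat imgTrain = cv::imread("train.jpg");
    cv::Mat H = cv::Mat::eye(3, 3, CV_64F);      // replace with cv::findHomography(...)

    // Map the corners of imgTrain to find where it lands in imgQuery's frame.
    std::vector<cv::Point2f> corners = {
        {0.f, 0.f},
        {(float)imgTrain.cols, 0.f},
        {(float)imgTrain.cols, (float)imgTrain.rows},
        {0.f, (float)imgTrain.rows}};
    std::vector<cv::Point2f> warped;
    cv::perspectiveTransform(corners, warped, H);

    // The canvas must also contain imgQuery, which sits at the origin.
    warped.emplace_back(0.f, 0.f);
    warped.emplace_back((float)imgQuery.cols, (float)imgQuery.rows);
    cv::Rect bounds = cv::boundingRect(warped);

    // Shift by (-bounds.x, -bounds.y) so no destination coordinate is negative.
    cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, -bounds.x,
                                           0, 1, -bounds.y,
                                           0, 0, 1);

    cv::Mat canvas;
    cv::warpPerspective(imgTrain, canvas, T * H, bounds.size());
    // Paste imgQuery on top at its shifted position.
    imgQuery.copyTo(canvas(cv::Rect(-bounds.x, -bounds.y,
                                    imgQuery.cols, imgQuery.rows)));

    cv::imshow("stitched", canvas);
    cv::waitKey(0);
    return 0;
}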
Are you trying to create a panoramic image out of two overlapping pictures taken from the same viewpoint in different directions? If so, I am concerned about the "the second one over the first one" part. The correct way to stitch the panorama together is to cut both images off down the central line (symmetry axis) of the overlapping part, and not to add a part of one image to the (whole) other one.
The accepted answer works, but this can be done more easily using BORDER_TRANSPARENT:
cv::warpPerspective(imgTrain, imgQuery, TRANSFORMATION_MATRIX, imgQuery.size(), INTER_LINEAR, BORDER_TRANSPARENT);
With BORDER_TRANSPARENT, the existing pixels of imgQuery (the destination) that are not covered by the warped image remain untouched.
For OpenCV 4, INTER_LINEAR and BORDER_TRANSPARENT can be resolved using cv::InterpolationFlags::INTER_LINEAR and cv::BorderTypes::BORDER_TRANSPARENT, e.g.:
cv::warpPerspective(imgTrain, imgQuery, TRANSFORMATION_MATRIX, imgQuery.size(), cv::InterpolationFlags::INTER_LINEAR, cv::BorderTypes::BORDER_TRANSPARENT);

Output from the calcOpticalFlowPyrLK function in OpenCV

I'm new to OpenCV and image processing. I am trying to find moving objects using optical flow (the Lucas-Kanade method) by comparing two frames from a camera that were saved to disk.
The part of the code I am asking about is:
Mat img = imread("t_mono1.JPG", CV_LOAD_IMAGE_UNCHANGED);   // read the image
Mat img2 = imread("t_mono2.JPG", CV_LOAD_IMAGE_UNCHANGED);  // read the 2nd image
Mat pts;
Mat pts2;
Mat stat;
Mat err;
goodFeaturesToTrack(img, pts, 100, 0.01, 0.01);         // extract features to track
calcOpticalFlowPyrLK(img, img2, pts, pts2, stat, err);  // optical flow
cout << " " << pts.cols << "\n" << pts.rows << "\n";
I am getting that the size of pts is 100 by 1. I suppose it should have been 100 by 2, since the pixels have x and y coordinates. Furthermore, when I used for loops to display the contents of the arrays, the pts array was all zeros and all arrays were one-dimensional.
I have seen this question:
OpenCV's calcOpticalFlowPyrLK throws exception
I tried using vectors as he did, but I get an error when building: it says it cannot convert between different data types. I am using VS2008 with OpenCV 2.4.11.
I would like to get the (x, y) coordinates of the features in the first and second images, and the error, but all the arrays passed to calcOpticalFlowPyrLK were one-dimensional and I don't understand how this can be.
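Note that when the output is a Mat, goodFeaturesToTrack returns an N x 1 matrix of type CV_32FC2, so each element already holds both coordinates in its two channels; that is why pts appears to be 100 by 1. A minimal sketch of the vector-based interface, which sidesteps the type confusion (my own sketch, not from the original thread):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Load both frames as grayscale, which is what the trackers expect.
    cv::Mat img  = cv::imread("t_mono1.JPG", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat img2 = cv::imread("t_mono2.JPG", CV_LOAD_IMAGE_GRAYSCALE);

    std::vector<cv::Point2f> pts, pts2;
    std::vector<uchar> stat;  // 1 where the flow was found, 0 otherwise
    std::vector<float> err;   // per-point tracking error

    cv::goodFeaturesToTrack(img, pts, 100, 0.01, 0.01);
    cv::calcOpticalFlowPyrLK(img, img2, pts, pts2, stat, err);

    for (size_t i = 0; i < pts.size(); ++i)
        if (stat[i])
            std::cout << pts[i] << " -> " << pts2[i]
                      << " (err " << err[i] << ")\n";
    return 0;
}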

OpenCV generate cv::Mat from array using stride

I have an array of pixel data in RGBA format, although I have already converted this data to grayscale using the GPU (so all 4 channels are identical).
I now want to use this grayscale data in OpenCV, and I don't want to store 4 copies of the same data. Is it possible to create a cv::Mat from this pixel array by specifying a stride (i.e., only read out every 4th byte)?
I am currently using
GLubyte* Img = stuff from GPU;
cv::Mat tmp(height, width, CV_8UC4, Img);
But does this copy all the data, or does it wrap the existing pointer in a cv::Mat without copying it? If it wraps without copying, then I will be happy to use standard C++ routines to copy only the data I want from Img into a new section of memory and then wrap that as a cv::Mat.
Otherwise, how would you suggest reducing the amount of data being copied?
Thanks
The code that you are using,
cv::Mat tmp(rows, cols, CV_8UC4, dataPointer);
does not perform any copy; it only assigns the data field of the Mat instance.
If it's OK for you to work with a 4-channel matrix, then just go on.
Otherwise, if you prefer working with a 1-channel matrix, use the function cv::cvtColor() to create a new image with a single channel (but then you will get one additional image in memory and pay the CPU cycles for the conversion):
cv::Mat grey;
cv::cvtColor(tmp, grey, CV_BGRA2GRAY); // tmp has 4 channels
Finally, one last thing: if you can deinterleave the color planes beforehand (for example on the GPU) and produce an image laid out as [blue plane, green plane, red plane], then you can pass CV_8UC1 as the image type when constructing tmp and get a single-channel grey image without any data copy.
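A minimal sketch of the zero-copy wrap, plus cv::extractChannel as an alternative to a full color conversion (the buffer here is a stand-in for the GPU data, and extractChannel is my addition, not part of the answer above):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    const int width = 640, height = 480;
    std::vector<unsigned char> gpuData(width * height * 4, 128);  // stand-in for the GLubyte* buffer

    // Zero-copy wrap: tmp shares gpuData's memory, nothing is duplicated.
    cv::Mat tmp(height, width, CV_8UC4, gpuData.data());

    // Option 1: color conversion (allocates one extra single-channel image).
    cv::Mat grey;
    cv::cvtColor(tmp, grey, CV_BGRA2GRAY);

    // Option 2: since all four channels are identical here, copying out
    // a single channel avoids the conversion arithmetic.
    cv::Mat grey2;
    cv::extractChannel(tmp, grey2, 0);
    return 0;
}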

cv::Mat, every second pixel is set

I'm new to CV and I have a question.
I want to create a fading grey bar (from black to white).
So I initialized a Mat:
Mat fadedgrey = Mat(20, 256, CV_8UC1);
When I write the pixel values:
for (int x = 0; x < 20; x++) {
    for (int y = 0; y < 256; y++) {
        fadedgrey.at<int>(x, y) = y;
    }
}
the result is the following:
Only every second column is written, but I thought CV_8UC1 is a one-channel Mat, not a two-channel one.
For example, the value set at position (1,129) shows up as a pixel at the beginning of the second row.
Help me!
Greetings!
If your matrix is of type CV_8UC1, then each element is one byte in size and you should be using .at<uchar> or similar, rather than .at<int>.
Although this isn't your problem, you might also end up confused about rows and columns, as your Mat constructor takes nRows, nCols, which is the opposite order to x, y.
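A corrected version of the loop from the question, using .at<uchar> as described:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat fadedgrey(20, 256, CV_8UC1);
    for (int x = 0; x < fadedgrey.rows; x++) {
        for (int y = 0; y < fadedgrey.cols; y++) {
            // One byte per element: write through uchar, not int.
            fadedgrey.at<uchar>(x, y) = (uchar)y;
        }
    }
    cv::imshow("fadedgrey", fadedgrey);
    cv::waitKey(0);
    return 0;
}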

Splitting an image results in an unhandled exception error

I am currently planning on splitting my image into 3 channels so I can get the RGB values of the image to plot a scatter graph, so I can model it using a normal distribution, calculating the covariance matrix, mean, etc.,
and then calculate the distance between the background points and the actual image to segment the image.
For the first task, I have written the following code:
VideoCapture cam(0);
//int id=0;
Mat image, Rch, Gch, Bch;
vector<Mat> rgb(3);  // rgb is a vector of 3 matrices
namedWindow("window");
while (1)
{
    cam >> image;
    split(image, rgb);
    Bch = rgb[0];
    Gch = rgb[1];
    Rch = rgb[2];
But as soon as it reaches the split function (I stepped through it), it causes an unhandled exception: access violation writing location 0xfeeefeee.
I am still new to OpenCV, so I am not used to dealing with unhandled exception errors.
thanks
It sounds as if split expects there to be three usable instances of Mat in the rgb vector.
But you have only prepared the vector to hold three items; the Mats themselves are empty.
Try putting three properly allocated Mats into the vector and run again.
Although this is an old issue, I would like to share the solution that worked for me. Instead of vector<Mat> rgb(3); I used Mat channels[3];. I realized there was something wrong with using a vector when I was not able to use split even on an image loaded with imread. Unfortunately, I cannot explain why this change works, but if someone can, that would be great.
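A minimal, self-contained sketch of that array-based workaround (the image file name is a placeholder):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image = cv::imread("frame.jpg");  // placeholder file name
    cv::Mat channels[3];                      // plain array instead of vector<Mat>
    cv::split(image, channels);
    cv::Mat Bch = channels[0];
    cv::Mat Gch = channels[1];
    cv::Mat Rch = channels[2];
    cv::imshow("blue channel", Bch);
    cv::waitKey(0);
    return 0;
}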