I'm new to OpenCV and I've run into a question.
I want to create a fading grey bar (from black to white).
So I initialized a Mat:
Mat fadedgrey=Mat(20,256,CV_8UC1);
When I write the pixel values:
for (int x = 0; x < 20; x++) {
    for (int y = 0; y < 256; y++) {
        fadedgrey.at<int>(x, y) = y;
    }
}
the result is the following:
only every second column is written, but I thought CV_8UC1 is a one-channel Mat, not a two-channel one.
For example, the value set at position (1,129) shows up as a pixel at the beginning of the second row.
Help me!
Greetings!
If your matrix is of type CV_8UC1, then each element is one byte in size and you should be using .at<uchar> or similar, rather than .at<int>.
Although this isn't your problem, you might also end up confused about rows and columns, as your Mat constructor takes nRows, nCols, which is the opposite way around to x, y.
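For reference, here is a minimal corrected sketch (same 20x256 size, variable name as in the question):

#include <opencv2/opencv.hpp>

int main()
{
    // CV_8UC1: one byte per element, so access with .at<uchar>, not .at<int>
    cv::Mat fadedgrey(20, 256, CV_8UC1); // rows, cols
    for (int row = 0; row < fadedgrey.rows; row++)
        for (int col = 0; col < fadedgrey.cols; col++)
            fadedgrey.at<uchar>(row, col) = (uchar)col; // 0..255, black to white
    cv::imshow("fadedgrey", fadedgrey);
    cv::waitKey(0);
    return 0;
}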
I want to calculate the color histogram of an image but only taking into account specific pixels (whose 2D coordinates I know).
Is it possible to use calcHist specifying that only these specific pixels should be taken into consideration (instead of the whole cv::Mat and all the pixels in it)? If not, is it possible to create a new Mat containing only those specific pixels at known positions, and how? (Since pixel coordinates do not matter for a histogram, could they be put into a (1 x number_of_specific_pixels) Mat that keeps the original type?)
Thanks a lot in advance!
calcHist takes a mask parameter.
So, create a new single-channel 8-bit cv::Mat with the same size as your input image. Set it to 255 where you want the histogram to be calculated and to 0 where you do not, then pass it as the mask.
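A minimal sketch of that approach, assuming a single-channel 8-bit input image img and a std::vector<cv::Point> named points holding the known coordinates (both names are placeholders):

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat maskedHistogram(const cv::Mat& img, const std::vector<cv::Point>& points)
{
    // 255 at the pixels to count, 0 everywhere else
    cv::Mat mask = cv::Mat::zeros(img.size(), CV_8UC1);
    for (const cv::Point& p : points)
        mask.at<uchar>(p) = 255;

    int histSize = 256;               // one bin per intensity value
    float range[] = {0, 256};
    const float* histRange = {range};
    cv::Mat hist;
    cv::calcHist(&img, 1, 0, mask, hist, 1, &histSize, &histRange);
    return hist;
}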
I have an OpenCV Mat which has M rows and 3 columns. Is there a way to reorder the Mat such that the first and last (i.e., third) columns are switched while the middle column is kept in place, without copying the data?
OpenCV data is an array of pixels. Sometimes you can get a column or a rectangle view of an image (like with col()). In that case the data is not continuous; as far as I know it is addressed with a step between rows. However, the data is shared and is still an array.
Then the question becomes: can I swap two portions of an array without copying the data? Not as far as I know.
You can use optimized functions of OpenCV to swap them, but the data will be copied.
Also, non-continuous data is much slower than continuous data in OpenCV functions. More can be read here.
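If copying is acceptable, here is a small sketch of swapping the two outer columns of an M x 3 Mat in place (mat is a placeholder name):

// mat: M rows x 3 columns; a temporary copy of one column is unavoidable
cv::Mat tmp = mat.col(0).clone(); // deep copy of the first column
mat.col(2).copyTo(mat.col(0));    // column headers are views, the data is copied
tmp.copyTo(mat.col(2));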
You can use the OpenCV function flip for that. As an example the following code flips an image about the mid column.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img, flipped; // your Mat
    img = imread("lena.jpg");
    flip(img, flipped, 1); // flipped is the output
    imshow("img", flipped);
    waitKey(0);
    return 0;
}
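Applied to the M x 3 case from the question, flipping about the vertical axis reverses the column order 0,1,2 to 2,1,0, which swaps the outer columns while leaving the middle one in place (the data is still copied into the output):

cv::Mat src(5, 3, CV_32F), dst; // hypothetical M x 3 matrix with M = 5
cv::randu(src, 0, 1);           // fill with random values for the demo
cv::flip(src, dst, 1);          // 1 = flip around the y-axis (reverse columns)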
I am totally new to OpenCV and I have started to dive into it. But I'd need a little bit of help.
So I want to combine these 2 images:
I would like the 2 images to match along their edges (ignoring the very right part of the image for now).
Can anyone please point me into the right direction? I have tried using the findTransformECC function. Here's my implementation:
cv::Mat im1 = [imageArray[1] CVMat3];
cv::Mat im2 = [imageArray[0] CVMat3];
// Convert images to gray scale
cv::Mat im1_gray, im2_gray;
cvtColor(im1, im1_gray, CV_BGR2GRAY);
cvtColor(im2, im2_gray, CV_BGR2GRAY);
// Define the motion model
const int warp_mode = cv::MOTION_AFFINE;
// Set a 2x3 or 3x3 warp matrix depending on the motion model.
cv::Mat warp_matrix;
// Initialize the matrix to identity
if (warp_mode == cv::MOTION_HOMOGRAPHY)
    warp_matrix = cv::Mat::eye(3, 3, CV_32F);
else
    warp_matrix = cv::Mat::eye(2, 3, CV_32F);
// Specify the number of iterations.
int number_of_iterations = 50;
// Specify the threshold of the increment
// in the correlation coefficient between two iterations
double termination_eps = 1e-10;
// Define termination criteria
cv::TermCriteria criteria (cv::TermCriteria::COUNT+cv::TermCriteria::EPS, number_of_iterations, termination_eps);
// Run the ECC algorithm. The results are stored in warp_matrix.
findTransformECC(
    im1_gray,
    im2_gray,
    warp_matrix,
    warp_mode,
    criteria
);
// Storage for warped image.
cv::Mat im2_aligned;
if (warp_mode != cv::MOTION_HOMOGRAPHY)
    // Use warpAffine for Translation, Euclidean and Affine
    warpAffine(im2, im2_aligned, warp_matrix, im1.size(), cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
else
    // Use warpPerspective for Homography
    warpPerspective(im2, im2_aligned, warp_matrix, im1.size(), cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
UIImage* result = [UIImage imageWithCVMat:im2_aligned];
return result;
I have tried playing around with the termination_eps and number_of_iterations and increased/decreased those values, but they didn't really make a big difference.
So here's the result:
What can I do to improve my result?
EDIT: I have marked the problematic edges with red circles. The goal is to warp the bottom image and make it match with the lines from the image above:
I did a little bit of research and I'm afraid the findTransformECC function won't give me the result I'd like to have :-(
Something important to add:
I actually have an array of those image "stripes", 8 in this case. They all look similar to the images shown here, and they all need to be processed so that the lines match. I have tried experimenting with the stitch function of OpenCV, but the results were horrible.
EDIT:
Here are the 3 source images:
The result should be something like this:
I transformed every image along the lines that should match. Lines that are too far away from each other can be ignored (the shadow and the piece of road on the right portion of the image)
From your images, it seems that they overlap. Since you said the stitch function didn't give you the desired results, implement your own stitching. I'm trying to do something close to that too. Here is a tutorial on how to implement it in C++: https://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/
You can use the Hough transform with a high threshold on the two images and then compare the vertical lines in both of them: most of them should be shifted a bit but keep the same angle.
This is what I've got from running this algorithm on one of the pictures:
Filtering out horizontal lines should be easy (as they are represented as Vec4i), and then you can align the remaining lines together.
Here is the example of using it in OpenCV's documentation.
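A minimal sketch of the detect-and-filter step, assuming a grayscale input (the Canny thresholds and the 30-degree cutoff are arbitrary placeholders):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Keep only lines that are more than ~30 degrees away from horizontal
std::vector<cv::Vec4i> nonHorizontalLines(const cv::Mat& gray)
{
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);
    std::vector<cv::Vec4i> lines, kept;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 150 /* high threshold */, 50, 10);
    for (const cv::Vec4i& l : lines)
    {
        double angle = std::atan2((double)(l[3] - l[1]), (double)(l[2] - l[0])) * 180.0 / CV_PI;
        if (std::abs(angle) > 30.0 && std::abs(angle) < 150.0)
            kept.push_back(l); // clearly not horizontal
    }
    return kept;
}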
UPDATE: another thought. Aligning the lines together can be done with a concept similar to how a cross-correlation function works. It doesn't matter if picture 1 has 10 lines and picture 2 has 100 lines: the shift at which the most lines align (which is, roughly, the maximum of the CCF) should be pretty close to the answer, though this might require some tweaking, for example weighting every line by its length, angle, etc. Computer vision never has a direct way, huh :)
UPDATE 2: I actually wonder if taking the bottom pixel row of the top image as array 1 and the top pixel row of the bottom image as array 2, running a general CCF over them, and then using its maximum as the shift could work too... But I think it would be a known method if it worked well.
I am detecting and matching features of a pair of images, using a typical detector-descriptor-matcher combination and then findHomography to produce a transformation matrix.
After this, I want the two images to be overlapped (the second one, imgTrain, over the first one, imgQuery), so I warp the second image using the transformation matrix:
cv::Mat imgQuery, imgTrain;
...
TRANSFORMATION_MATRIX = cv::findHomography(...)
...
cv::Mat imgTrainWarped;
cv::warpPerspective(imgTrain, imgTrainWarped, TRANSFORMATION_MATRIX, imgTrain.size());
From here on, I don't know how to produce an image that contains the original imgQuery with the warped imgTrainWarped on it.
I consider two scenarios:
1) One where the size of the final image is the size of imgQuery
2) One where the size of the final image is big enough to accommodate both imgQuery and imgTrainWarped, since they overlap only partially, not completely. I understand this second case might have black/blank space somewhere around the images.
You should warp to a destination matrix that has the same dimensions as imgQuery. After that, loop over the whole warped image and copy pixels to the first image, but only where the warped image actually holds a warped pixel. That is most easily done by warping an additional mask. Please try this:
cv::Mat imgMask = cv::Mat(imgTrain.size(), CV_8UC1, cv::Scalar(255));
cv::Mat imgMaskWarped;
cv::warpPerspective(imgMask , imgMaskWarped, TRANSFORMATION_MATRIX, imgQuery.size());
cv::Mat imgTrainWarped;
cv::warpPerspective(imgTrain, imgTrainWarped, TRANSFORMATION_MATRIX, imgQuery.size());
// now copy only masked pixel:
imgTrainWarped.copyTo(imgQuery, imgMaskWarped);
Please try it and tell me whether this is OK and solves scenario 1. For scenario 2, you would test how big the image must be before warping (by using the transformation) and copy both images into a destination image big enough to hold them. See the sketch below.
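A hedged sketch of that sizing step: transform the corners of imgTrain with the homography, take the bounding box of those corners together with imgQuery, shift the transformation so everything lands inside the canvas, then warp and copy (variable names follow the question; the rest is an assumption, not tested code):

// Where do the corners of imgTrain land under the homography?
std::vector<cv::Point2f> corners = {
    {0.f, 0.f}, {(float)imgTrain.cols, 0.f},
    {(float)imgTrain.cols, (float)imgTrain.rows}, {0.f, (float)imgTrain.rows}};
std::vector<cv::Point2f> warpedCorners;
cv::perspectiveTransform(corners, warpedCorners, TRANSFORMATION_MATRIX);

// Bounding rect of the warped train image together with the query image
cv::Rect bbox = cv::boundingRect(warpedCorners) | cv::Rect(0, 0, imgQuery.cols, imgQuery.rows);

// Shift the transformation so the bounding box starts at (0,0)
cv::Mat shift = (cv::Mat_<double>(3, 3) << 1, 0, -bbox.x, 0, 1, -bbox.y, 0, 0, 1);

cv::Mat canvas(bbox.size(), imgQuery.type(), cv::Scalar::all(0));
imgQuery.copyTo(canvas(cv::Rect(-bbox.x, -bbox.y, imgQuery.cols, imgQuery.rows)));
cv::warpPerspective(imgTrain, canvas, shift * TRANSFORMATION_MATRIX, bbox.size(),
                    cv::INTER_LINEAR, cv::BORDER_TRANSPARENT); // keeps uncovered pixels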
Are you trying to create a panoramic image out of two overlapping pictures taken from the same viewpoint in different directions? If so, I am concerned about the "the second one over the first one" part. The correct way to stitch the panorama together is to cut both images off down the central line (symmetry axis) of the overlapping part, and not to add a part of one image to the (whole) other one.
The accepted answer works, but it can be done more easily by using BORDER_TRANSPARENT:
cv::warpPerspective(imgTrain, imgQuery, TRANSFORMATION_MATRIX, imgQuery.size(), INTER_LINEAR, BORDER_TRANSPARENT);
When using BORDER_TRANSPARENT, the pixels of imgQuery that the warped image does not cover remain untouched.
For OpenCV 4, INTER_LINEAR and BORDER_TRANSPARENT can be resolved by using cv::InterpolationFlags::INTER_LINEAR and cv::BorderTypes::BORDER_TRANSPARENT, e.g.
cv::warpPerspective(imgTrain, imgQuery, TRANSFORMATION_MATRIX, imgQuery.size(), cv::InterpolationFlags::INTER_LINEAR, cv::BorderTypes::BORDER_TRANSPARENT);
I have a Qt app where I have to find the HSV range of a couple of pixels around click coordinates, to track later on. This is how I do it:
cv::Mat temp;
cv::cvtColor(frame, temp, CV_BGR2HSV); //frame is pulled from a video or jpeg
cv::Vec3b hsv=temp.at<cv::Vec3b>(frameX,frameY); //sometimes SIGSEGV?
qDebug() << hsv.val[0]; //look up H
qDebug() << hsv.val[1]; //look up S
qDebug() << hsv.val[2]; //look up V
//just base values so far, will work on range later
emit hsvDownloaded(hsv.val[0], hsv.val[0]+5, hsv.val[1], 255, hsv.val[2], 255); //send to GUI which automatically updates worker thread
Now, things are odd. Those are the results (red circle indicates the click location):
With red it's weird: the upper half of the shape is detected correctly, but the lower half is not, despite it being a solid mass of the same colour.
And for an actual test
It detects HSV {95,196,248}, which is frankly absurd (the base values are way too high). The detected pixel isn't even the one that was clicked. The best values to detect that ball 100% of the time are H:35-141 S:0-238 V:65-255. I wanted to derive an HSV range from a normalized histogram, but I can't even get the base values right. What's up? When OpenCV pulls a frame using kalibrowanyPlik.read(frame);, the default colour order is BGR, right?
Why would the colour detection work so randomly?
As berak has mentioned, it looks like your code uses the indices to access a pixel in the wrong order.
That means your pixel locations are wrong, except for pixels that lie on the diagonal, so clicked objects around the diagonal will be detected correctly while all the others won't.
To not get confused again and again, I want you to understand why OpenCV uses (row,col) ordering for indices:
OpenCV uses matrices to represent images. In mathematics, 2D matrices use (row,col) indexing; have a look at http://en.wikipedia.org/wiki/Index_notation#Two-dimensional_arrays and look at the indices. So for matrices, it is typical to use the row index first, followed by the column index.
Unfortunately, images and pixels typically use (x,y) indexing, which corresponds to the x/y axes in mathematical graphs and coordinate systems. So here the x position comes first, followed by the y position.
Luckily, OpenCV provides two different versions of the .at method: one to access pixel positions and one to access matrix elements (which are exactly the same elements in the end).
matrix.at<type>(row,column) // matrix indexing to access elements
// which equals
matrix.at<type>(y,x)
and
matrix.at<type>(cv::Point(x,y)) // pixel/position indexing to access elements
Since the first version should be slightly more efficient, it should be preferred if the positions aren't already given as cv::Point objects. So the best way is often to remember that OpenCV uses matrices to represent images and uses matrix index notation to access elements.
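A tiny sketch demonstrating that both access forms address the same element (the sizes and values here are arbitrary):

#include <opencv2/opencv.hpp>
#include <cassert>

int main()
{
    cv::Mat m(4, 6, CV_8UC1, cv::Scalar(0));    // 4 rows, 6 columns
    int x = 5, y = 3;                           // column 5, row 3
    m.at<uchar>(y, x) = 42;                     // (row, col) ordering
    assert(m.at<uchar>(cv::Point(x, y)) == 42); // (x, y) ordering, same element
    return 0;
}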
btw, I've seen people wondering why matrix.at<type>(cv::Point(y,x)) doesn't work as intended after they've learned that OpenCV images use the "wrong ordering". I hope this question doesn't come up after my explanation.
one more btw: in school I already wondered why matrices index rows first while graphs of functions index the x axis first. I found it stupid not to use the "same" ordering for both, but I still had to live with it :D (and in the end, the two don't have much to do with each other)