Apply a formula and transfer an image in OpenCV - C++

I'm working with OpenCV. I have an image, and have now created a blank image to which I should transfer the pixels of the original image while applying the formula y' = y, x' = -x + 2*x0, which should produce a reflection of the original image about the vertical axis.
I know I'll have to use a for loop to go through each pixel of the original image, but I don't know how to apply the formula and place those pixels on the blank image.
Thank you,

For performance reasons, it is usually best not to loop over all the pixels of an image, especially when a built-in function already does the job. If your aim is to flip an image vertically, then I would have a look at flip:
void flip(InputArray src, OutputArray dst, int flipCode)
If flipCode == 1, the image will be flipped around the vertical (y) axis, i.e. mirrored left-right, which is exactly what your formula describes.
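As a sketch (the file name is hypothetical and the input is assumed to be an 8-bit 3-channel image), here is cv::flip next to the equivalent manual loop that implements your formula with x0 at the image center:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png"); // hypothetical file, assumed CV_8UC3
    cv::Mat dst;
    cv::flip(src, dst, 1); // flip around the vertical (y) axis

    // Equivalent manual loop: dst(y, 2*x0 - x) = src(y, x) with x0 = (cols - 1) / 2,
    // so the destination column is cols - 1 - x
    cv::Mat manual(src.size(), src.type());
    for (int y = 0; y < src.rows; ++y)
        for (int x = 0; x < src.cols; ++x)
            manual.at<cv::Vec3b>(y, src.cols - 1 - x) = src.at<cv::Vec3b>(y, x);

    cv::imwrite("flipped.png", dst);
    return 0;
}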
Good luck
Andreas

Related

How to undistort an image without pixel loss using OpenCV

As we all know, we can use cv::getOptimalNewCameraMatrix() with alpha = 1 to get a new camera matrix, and then call cv::undistort() with that new camera matrix to get the undistorted image. However, I find that the undistorted image is only as large as the original image, and part of it is covered by black.
So my question is: does this mean that original image pixels are lost? Is there any way to avoid pixel loss, or to get an output image larger than the original, with OpenCV?
cv::Mat NewKMatrixLeft = cv::getOptimalNewCameraMatrix(KMatrixLeft, DistMatrixLeft, cv::Size(image.cols, image.rows), 1);
cv::undistort(image, show_image, KMatrixLeft, DistMatrixLeft,NewKMatrixLeft);
The sizes of image and show_image are both 640x480. However, in my view the undistorted image should be larger than 640x480, because part of a same-sized output is meaningless (black).
Thanks!
In order to correct distortion, you basically have to reverse the process that caused it. This implies that pixels are stretched and squashed along various directions. In some cases this moves pixels away from the image edge, and OpenCV fills the vacated area with black pixels. There is nothing wrong with this approach; you can then choose how to crop the result to remove the black borders.
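If you do want an output larger than the input, one option (a sketch only, reusing the question's variable names; the doubled output size is an assumption to tune) is to pass a larger newImgSize to cv::getOptimalNewCameraMatrix and build the remap yourself:
cv::Size srcSize(image.cols, image.rows);
cv::Size bigSize(image.cols * 2, image.rows * 2); // generous guess

cv::Rect validRoi;
cv::Mat newK = cv::getOptimalNewCameraMatrix(KMatrixLeft, DistMatrixLeft, srcSize, 1.0, bigSize, &validRoi);

cv::Mat map1, map2;
cv::initUndistortRectifyMap(KMatrixLeft, DistMatrixLeft, cv::Mat(), newK, bigSize, CV_16SC2, map1, map2);

cv::Mat show_image;
cv::remap(image, show_image, map1, map2, cv::INTER_LINEAR);
// validRoi marks the sub-rectangle guaranteed to contain only valid pixels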

Multiply Images In OpenCV & Apply Laplacian Filter On It

Here is the link to my previous question. Following the answer there, I obtained the desired image, which is white flood-filled.
After applying morphological erosion to the white flood-filled image, I get the new mask image.
Your answer helped a lot. Now I am multiplying the new mask image with the original grayscale image in order to get the vein pattern. But the result is the same image I get after performing erosion on the white flood-filled image. After this step I have to apply the Laplacian function to get the vein pattern. I am attaching the original image and the result image that I want. I hope you will look into the matter.
Original Image.
Result Image.
If I understand you correctly, you only want to extract the veins from the grayscale hand image, right? To do that, you would multiply the two images element-wise:
finalimg = grayimg.mul(veinmask); // element-wise product (Mat::mul), not operator*
If you have done the above, it would help to post a portion of your code so people here can point out what's wrong; the output image you're getting, and the one you want, would also help.
I hope I understand you correctly: you have a grayscale image showing a hand (the first image in your question).
You create a mask image that looks like the second image you posted.
Multiplication of both gives back the mask image?
If that is the case, check your values. In a byte image, your mask must contain the values 0 and 1, not 0 and 255; otherwise the products for non-zero mask pixels exceed 255 and saturate, so every masked pixel comes out as 255.
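A minimal sketch of both fixes, assuming 8-bit single-channel images named grayimg and veinmask (values 0/255) as in the other answer:
// Option 1: scale the mask down to 0/1 before multiplying
cv::Mat mask01 = veinmask / 255;        // per-pixel 0 or 1
cv::Mat finalimg = grayimg.mul(mask01); // keeps gray values where mask == 1

// Option 2: skip the multiplication and copy through the mask instead
cv::Mat finalimg2 = cv::Mat::zeros(grayimg.size(), grayimg.type());
grayimg.copyTo(finalimg2, veinmask);    // the mask may stay 0/255 here

// The Laplacian can then be applied to the masked result
cv::Mat veins;
cv::Laplacian(finalimg, veins, CV_16S);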

Create mask to select the black area

I have a black area around my image, and I want to create a mask using OpenCV C++ that selects just this black area so that I can paint it later. How can I do that without affecting the image itself?
I tried converting the image to grayscale and then thresholding it to binary, but that affects my image, since the result also contains black pixels from inside the image.
Another question: if I want to crop the image instead of painting it, how can I do that?
Thanks in advance,
I would solve the problem like this:
Inverse-binarize the image with a threshold of 1 (i.e. all pixels with the value 0 are set to 1, all others to 0)
use cv::findContours to find white segments
remove segments that don't touch image borders
use cv::drawContours to draw the remaining segments to a mask.
There is probably a solution with better runtime, but you should be able to prototype this one quite quickly.
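A rough sketch of those steps, assuming an 8-bit grayscale input named img (using 0/255 instead of 0/1 so the intermediate images are directly viewable):
cv::Mat bin;
cv::threshold(img, bin, 0, 255, cv::THRESH_BINARY_INV); // black (0) pixels become white

std::vector<std::vector<cv::Point>> contours;
cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

cv::Mat mask = cv::Mat::zeros(img.size(), CV_8UC1);
for (size_t i = 0; i < contours.size(); ++i)
{
    // keep only segments whose bounding box touches the image border
    cv::Rect box = cv::boundingRect(contours[i]);
    if (box.x == 0 || box.y == 0 ||
        box.x + box.width == img.cols || box.y + box.height == img.rows)
        cv::drawContours(mask, contours, (int)i, cv::Scalar(255), cv::FILLED);
}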

Warp perspective and stitch/overlap images (C++)

I am detecting and matching features of a pair of images, using a typical detector-descriptor-matcher combination and then findHomography to produce a transformation matrix.
After this, I want the two images to be overlapped (the second one, imgTrain, over the first one, imgQuery), so I warp the second image with the transformation matrix:
cv::Mat imgQuery, imgTrain;
...
TRANSFORMATION_MATRIX = cv::findHomography(...)
...
cv::Mat imgTrainWarped;
cv::warpPerspective(imgTrain, imgTrainWarped, TRANSFORMATION_MATRIX, imgTrain.size());
From here on, I don't know how to produce an image that contains the original imgQuery with the warped imgTrainWarped on it.
I consider two scenarios:
1) One where the size of the final image is the size of imgQuery
2) One where the size of the final image is big enough to accommodate both imgQuery and imgTrainWarped, since they overlap only partially, not completely. I understand this second case might have black/blank space somewhere around the images.
You should warp to a destination matrix that has the same dimensions as imgQuery. After that, copy each pixel of the warped image to the first image, but only where the warped image actually holds a warped pixel. That is most easily done by warping an additional mask. Please try this:
cv::Mat imgMask = cv::Mat(imgTrain.size(), CV_8UC1, cv::Scalar(255));
cv::Mat imgMaskWarped;
cv::warpPerspective(imgMask, imgMaskWarped, TRANSFORMATION_MATRIX, imgQuery.size());
cv::Mat imgTrainWarped;
cv::warpPerspective(imgTrain, imgTrainWarped, TRANSFORMATION_MATRIX, imgQuery.size());
// now copy only the masked pixels:
imgTrainWarped.copyTo(imgQuery, imgMaskWarped);
Please try this and tell me whether it solves scenario 1. For scenario 2, you would compute how big the destination image must be before warping (by applying the transformation to the corners of imgTrain) and then copy both images into a destination image of that size.
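A sketch of that size computation, reusing the question's variable names (shifting the homography when the warp reaches negative coordinates is left out):
std::vector<cv::Point2f> corners = {
    {0.f, 0.f},
    {(float)imgTrain.cols, 0.f},
    {(float)imgTrain.cols, (float)imgTrain.rows},
    {0.f, (float)imgTrain.rows} };
std::vector<cv::Point2f> warpedCorners;
cv::perspectiveTransform(corners, warpedCorners, TRANSFORMATION_MATRIX);

// the union of the warped extent and the query image gives the needed canvas
cv::Rect bounds = cv::boundingRect(warpedCorners) | cv::Rect(0, 0, imgQuery.cols, imgQuery.rows);
// bounds.size() is the destination size; if bounds.x or bounds.y is negative,
// the homography must first be multiplied by the corresponding translation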
Are you trying to create a panoramic image out of two overlapping pictures taken from the same viewpoint in different directions? If so, I am concerned about the "the second one over the first one" part. The correct way to stitch the panorama together is to cut both images off down the central line (symmetry axis) of the overlapping part, and not to add a part of one image to the (whole) other one.
The accepted answer works, but this can be done more easily using BORDER_TRANSPARENT:
cv::warpPerspective(imgTrain, imgQuery, TRANSFORMATION_MATRIX, imgQuery.size(), INTER_LINEAR, BORDER_TRANSPARENT);
With BORDER_TRANSPARENT, the pixels of the destination image (imgQuery) that correspond to outliers of the source image are left untouched.
For OpenCV 4, INTER_LINEAR and BORDER_TRANSPARENT can be resolved using cv::InterpolationFlags::INTER_LINEAR and cv::BorderTypes::BORDER_TRANSPARENT, e.g.
cv::warpPerspective(imgTrain, imgQuery, TRANSFORMATION_MATRIX, imgQuery.size(), cv::InterpolationFlags::INTER_LINEAR, cv::BorderTypes::BORDER_TRANSPARENT);

How to thin an image's borders by a specific pixel size? OpenCV

I'm trying to thin an image by setting its border pixels, in blocks of 16x24, to 0. I'm not trying to get the skeletal image; I'm just trying to reduce the size of the white area. Are there any methods I could use? Please enlighten me.
This is the sample image that I'm trying to thin. It is made of 16x24 white blocks.
EDIT
I tried to use this:
cv::Mat img = cv::imread("image.bmp", CV_LOAD_IMAGE_GRAYSCALE); // image is binary
cv::Mat mask = img > 0;
Mat kernel = Mat::ones(16, 24, CV_8U);
erode(mask, mask, kernel);
But the result i got was this
which is not exactly what I wanted. I want to maintain exactly the same shape, with just 16x24 pixels of white shaved off the border. Any idea what went wrong?
You want to erode your image.
Another Description
Late answer, but you should erode your image using a kernel that is twice the size you want to remove, plus one, like:
Mat kernel = Mat::ones( 24*2+1, 16*2+1, CV_8U );
Notice I swapped the height and width of the block: Mat::ones takes rows (height) first, then columns (width). I only know OpenCV from Python, but I am pretty sure the argument order is the same as in Python.
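Putting it together, a minimal sketch assuming the blocks are 16 px wide and 24 px tall, so a (2*16+1) x (2*24+1) kernel shaves one block's width and height off the white border:
cv::Mat img = cv::imread("image.bmp", cv::IMREAD_GRAYSCALE);
cv::Mat mask = img > 0;

// note cv::Size takes (width, height), unlike Mat::ones(rows, cols)
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(16 * 2 + 1, 24 * 2 + 1));
cv::erode(mask, mask, kernel);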