I am trying to do a transform of an RGB image using cv::remap, so that the empty space is filled with the average intensity:
cv::Mat mapX(sz, CV_32FC1), mapY(sz, CV_32FC1);
// filling maps
cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::mean(src));
The output is a transformed image, but the empty space is filled with the intensity value of the first row/column of the corresponding column/row, because the default value for the last parameter of cv::remap is 0.
Example:
If I do this (rotate):
cv::warpAffine(src, dst, rotationMatrix, bbox.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::mean(src));
I get the correct output:
How can I specify a constant color everywhere?
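One possible workaround (a sketch only, not necessarily the accepted fix, and BORDER_TRANSPARENT behavior with INTER_LINEAR can differ between OpenCV versions): pre-fill the destination with the mean color and let cv::remap leave unmapped pixels untouched. This reuses src, sz, mapX and mapY from the code above.
// Sketch: dst is pre-filled with the mean color; with BORDER_TRANSPARENT,
// destination pixels whose source location falls outside the image keep
// that pre-filled value.
cv::Mat dst(sz, src.type(), cv::mean(src));
cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);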
Related
I have a matrix cv::Mat CorrespondenceMap where each element contains its own index. I then use these indexes to reconstruct the transformed image.
When I perform the operation directly on the image data, I get the correct result (the first image). If I do it through the correspondence map, it only works when I set the interpolation to INTER_NEAREST; otherwise I get the image shown below.
std::vector<int> cmap(width * height);
std::iota(std::begin(cmap), std::end(cmap), 0); // Fill with incrementing ints
cv::Mat cmapMat(height, width, CV_32SC1, cmap.data());
cmapMat.convertTo(cmapMat, CV_64F);
cv::warpAffine(source, transformedimage, lkParams.rowRange(0, 2), source.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT, -1);
cmapMat.convertTo(cmapMat, CV_32S, 1, 0.5);
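For illustration, a minimal sketch (not the original code) of how the index map could be warped and then used to rebuild the image, assuming source is CV_8UC1 and lkParams is the transform from the snippet above. The key point is that raw indices should be warped with INTER_NEAREST: linearly interpolating them blends indices of unrelated pixels, which produces meaningless lookups.
// Warp the index map without blending indices, marking outside pixels with -1.
cmapMat.convertTo(cmapMat, CV_64F);                 // convert as in the question's code
cv::Mat warpedCmap;
cv::warpAffine(cmapMat, warpedCmap, lkParams.rowRange(0, 2), cmapMat.size(),
               cv::INTER_NEAREST, cv::BORDER_CONSTANT, cv::Scalar(-1));
warpedCmap.convertTo(warpedCmap, CV_32S);

// Rebuild the transformed image by looking each index up in the source.
cv::Mat reconstructed = cv::Mat::zeros(source.size(), source.type());
for (int y = 0; y < warpedCmap.rows; ++y) {
    for (int x = 0; x < warpedCmap.cols; ++x) {
        int idx = warpedCmap.at<int>(y, x);
        if (idx < 0) continue;                      // mapped from outside the source
        reconstructed.at<uchar>(y, x) = source.at<uchar>(idx / width, idx % width);
    }
}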
I am attempting to use my own kernel to blur an image (for educational purposes). But my kernel just makes my whole image white. Is my blur kernel correct? I believe the proper name of the blur filter I am trying to apply is a normalised blur.
void blur_img(const Mat& src, Mat& output) {
    // src is a 1-channel CV_8UC1 image
    float kdata[] = { 0.0625f, 0.125f, 0.0625f,
                      0.125f,  0.25f,  0.125f,
                      0.0625f, 0.125f, 0.0625f };
    //float kdata[] = { -1,-1,-1, -1,8,-1, -1,-1,-1 }; // outline filter works fine
    Mat kernel(3, 3, CV_32F, kdata);

    // results in output being a completely white image
    filter2D(src, output, CV_32F, kernel);
}
Your image is not white; it is float. I am sure that you are displaying the image with imshow later on, and it looks all white. This is explained in the imshow documentation. Specifically:
If the image is 32-bit floating-point, the pixel values are multiplied
by 255. That is, the value range [0,1] is mapped to [0,255].
This means that if the image is float, its values have to be in [0,1] to be displayed correctly.
Now that we know what causes it, let's see how to solve it. I can think of 3 possible ways:
1) normalize the image to [0,1]
cv::Mat dst;
cv::normalize(outputFromBlur, dst, 0, 1, cv::NORM_MINMAX);
This function normalizes the values, so it may shift the colors; it is not the best option for known images, but rather for depth maps or other matrices with values in an unknown range.
2) convertTo uchar:
cv::Mat dst;
outputFromBlur.convertTo(dst, CV_8U);
This function performs a saturate_cast, so it handles possible overflow/underflow.
3) use filter2D with another output depth:
cv::filter2D(src, output, -1, kernel);
With -1, the output will have the same depth as the source (I assume your source is CV_8U).
I hope this helps you; if not, leave a comment.
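A minimal sketch putting option 3 back into the original function, assuming (as stated in the question) that src is a CV_8UC1 image that will later be shown with imshow:
#include <opencv2/opencv.hpp>
using namespace cv;

void blur_img(const Mat& src, Mat& output) {
    // Normalised 3x3 blur kernel (sums to 1)
    float kdata[] = { 0.0625f, 0.125f, 0.0625f,
                      0.125f,  0.25f,  0.125f,
                      0.0625f, 0.125f, 0.0625f };
    Mat kernel(3, 3, CV_32F, kdata);

    // ddepth = -1: output has the same depth as src (CV_8U), so imshow
    // displays it directly without any extra scaling or conversion.
    filter2D(src, output, -1, kernel);
}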
You can see the result in the image below. The original image is just a grey pixel; the result should be that, but blurred.
OpenCV is not using the immediate neighboring pixels for the Gaussian blur; I'm guessing it's doing some sort of internal padding. Why it does so I have no idea; my initial guess was that it assumes the vector has more than one channel, which is not the case. Here is how I create the cv::Mats for the calculation and how I call cv::GaussianBlur:
std::vector<float> sobelCopy(sobel);
cv::Mat sobel_mat_copy(height, width, CV_32F, sobelCopy.data());
cv::Mat sobel_mat(height, width, CV_32F, sobel.data());

cv::GaussianBlur(sobel_mat_copy, sobel_mat, cv::Size(3, 3), 0.0, 0.0, cv::BORDER_CONSTANT);
Fixed: it all has to do with how I ordered my vector. I had column-major ordering, while cv::Mat assumes row-major ordering.
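For reference, a minimal sketch of one way to handle such a buffer, assuming sobel holds width * height floats stored column by column: wrap it with swapped dimensions and transpose, which yields a row-major copy that cv::GaussianBlur can work on.
// Sketch: colMajor(i, j) addresses element (j, i) of the real image, so the
// wrapped Mat is the transpose of the image; .t() produces a row-major copy.
cv::Mat colMajor(width, height, CV_32F, sobel.data());
cv::Mat rowMajor = colMajor.t();   // height x width, contiguous row-major data
cv::GaussianBlur(rowMajor, rowMajor, cv::Size(3, 3), 0.0, 0.0, cv::BORDER_CONSTANT);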
I have an image (shown below). I want to extract only the pink colored portion and remove the rest of the image. I have the RGB value of pink stored in an array. Is there any way that I can use bitwise_and on the image and the color so that I can single out the required portion in OpenCV?
Bitwise_and is not the right method for this, since it is bitwise. But there are of course methods in OpenCV for this basic task:
If you know the exact value, you can just use the CmpS method. If you want to find all pink colors within a certain range, use the InRangeS method. Optionally, change the color space of the image first, e.g. if you want to specify your range in HSV space.
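A minimal sketch of the range approach with the C++ API, where img is assumed to be the loaded BGR image, the pink value is a hypothetical placeholder for the array mentioned in the question, and the tolerance is chosen arbitrarily:
// Sketch: keep only pixels within +/- tol of a known pink value (BGR order assumed).
int pink[3] = { 203, 192, 255 };   // hypothetical B, G, R value of the pink region
int tol = 20;                      // arbitrary per-channel tolerance

cv::Mat mask;
cv::inRange(img,
            cv::Scalar(pink[0] - tol, pink[1] - tol, pink[2] - tol),
            cv::Scalar(pink[0] + tol, pink[1] + tol, pink[2] + tol),
            mask);

cv::Mat result;
img.copyTo(result, mask);          // copy only the masked (pink) pixels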
Since your green color is not uniform, but ranges between the following values:
// in BGR color space
Scalar low(182, 204, 168);
Scalar high(187, 207, 172);
// in HSV color space
Scalar low(72, 43, 204);
Scalar high(72, 45, 207);
you can use the inRange function. You can adjust the ranges according to the color you need to segment.
Usually the HSV color space is better for color-based segmentation tasks, but in this case the BGR color space is also good enough.
This code shows how to get the binary mask of the desired color, and how to copy only the masked portion of the original image in both BGR and HSV color space.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Load image
    Mat3b img = imread("path_to_image");

    {
        // BGR color space

        // Setup ranges
        Scalar low(182, 204, 168);
        Scalar high(187, 207, 172);

        // Get binary mask
        Mat1b mask;
        inRange(img, low, high, mask);

        // Initialize result image (all black)
        Mat3b res(img.rows, img.cols, Vec3b(0, 0, 0));

        // Copy masked part to result image
        img.copyTo(res, mask);

        imshow("Mask from BGR", mask);
        imshow("Result from BGR", res);
        waitKey();
    }

    {
        // HSV color space

        // Convert to HSV
        Mat3b hsv;
        cvtColor(img, hsv, COLOR_BGR2HSV);

        // Setup ranges
        Scalar low(72, 43, 204);
        Scalar high(72, 45, 207);

        // Get binary mask
        Mat1b mask;
        inRange(hsv, low, high, mask);

        // Initialize result image (all black)
        Mat3b res(img.rows, img.cols, Vec3b(0, 0, 0));

        // Copy masked part to result image
        img.copyTo(res, mask);

        imshow("Mask from HSV", mask);
        imshow("Result from HSV", res);
        waitKey();
    }

    return 0;
}
Example of the mask:
Example of segmented image:
For a Mat that stores an image, it is easy to display it with imshow. But if the data type of the Mat is CV_32FC1, how can I display it?
I tried imshow, but the displayed figure is totally white, and when I zoom in it is still totally blank; I cannot see the float numbers in the Mat.
Does anyone know how to show the entire Mat matrix?
P.S.: thanks for replying. I will post some code and figures to show more details:
Code:
Mat mat1;
mat1 = Mat::ones(3,4,CV_32FC1);
mat1 = mat1 * 200;
imshow("test", mat1);
waitKey(0);
Mat dst;
normalize(mat1, dst, 0, 1, NORM_MINMAX);
imshow("test1", dst);
waitKey(0);
mat1.convertTo(dst, CV_8UC1);
imshow("test2", dst);
waitKey(0);
return 0;
output:
after I zoom in by 150%:
Then after zooming in by 150%, we can see that 'test' is totally white and we cannot see its element values. 'test1' is totally black and we still cannot see its element values. But 'test2' is gray and we can see its element value, which is 200.
Does this experiment mean that imshow() can only show CV_8UC1 and cannot show any other data types?
If the image is a Mat of type CV_32F (32-bit floating-point), imshow() multiplies the pixel values by 255; that is, the value range [0,1] is mapped to [0,255].
So your floating-point image has to have values in the range 0..1.
This will display a CV_32F image, no matter what the range is:
cv::Mat dst;
cv::normalize(image, dst, 0, 1, cv::NORM_MINMAX);
cv::imshow("test", dst);
cv::waitKey(0);
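Another sketch, if the float values are already in 8-bit range (as with mat1 above, which holds 200.0f): scale them into [0,1] so imshow shows the true brightness instead of normalize() collapsing a constant image to black.
// Sketch: 200.0f / 255 lands in [0,1]; imshow then multiplies by 255 again,
// so the pixel is displayed as gray 200.
cv::Mat scaled;
mat1.convertTo(scaled, CV_32F, 1.0 / 255.0);
cv::imshow("float scaled to [0,1]", scaled);
cv::waitKey(0);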