You can see the result in the image below. The original image is just a grey pixel; the result should be the same pixel, but blurred.
OpenCV is not using the immediate neighbouring pixels for the Gaussian blur; I'm guessing it's doing some sort of internal padding. Why it does so I have no idea; my initial guess was that it assumes the vector has more than one channel, which is not the case. Here is how I create the cv::Mats for the calculation and how I call cv::GaussianBlur:
std::vector<float> sobelCopy(sobel);
cv::Mat sobel_mat_copy(height, width, CV_32F, sobelCopy.data());
cv::Mat sobel_mat(height, width, CV_32F, sobel.data());
cv::GaussianBlur(sobel_mat_copy, sobel_mat, cv::Size(3, 3), 0.0, 0.0, cv::BORDER_CONSTANT);
Image
Fixed: it has everything to do with how I ordered my vector. I had column-major ordering, while cv::Mat assumes row-major ordering.
I am trying to do a transform of an RGB image using cv::remap so that the empty space is filled with the average intensity:
cv::Mat mapX(sz, CV_32FC1), mapY(sz, CV_32FC1);
// filling maps
cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::mean(src));
The output is a transformed image, but the empty space is filled with the intensity value of the first row/column of the corresponding column/row, as if the last parameter of cv::remap were ignored and left at its default value of 0.
Example:
If I do this (rotate):
cv::warpAffine(src, dst, rotationMatrix, bbox.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::mean(src));
I get correct output:
How can I specify a constant color everywhere?
I have a matrix cv::Mat CorrespondenceMap in which each element contains its own index. I then use these indices to reconstruct the transformed image.
When I perform the operation directly on the image data I get the correct first visual. If I do it through the correspondence map it only works when I set the interpolation to INTER_NEAREST; otherwise I get the image shown.
std::vector<int> cmap(width * height);
std::iota(std::begin(cmap), std::end(cmap), 0); // fill with incrementing indices
cv::Mat cmapMat(height, width, CV_32SC1, cmap.data());
cmapMat.convertTo(cmapMat, CV_64F); // convert indices to floating point before warping
cv::warpAffine(source, transformedimage, lkParams.rowRange(0, 2), source.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT, -1);
cmapMat.convertTo(cmapMat, CV_32S, 1, 0.5); // round back to integer indices
I am attempting to use my own kernel to blur an image (for educational purposes). But my kernel just makes my whole image white. Is my blur kernel correct? I believe the proper name of the blur filter I am trying to apply is a normalised blur.
void blur_img(const Mat& src, Mat& output) {
// src is a 1 channel CV_8UC1
float kdata[] = { 0.0625f, 0.125f, 0.0625f, 0.125f, 0.25f, 0.125f, 0.0625f, 0.125f, 0.0625f };
//float kdata[] = { -1,-1,-1, -1,8,-1, -1,-1,-1}; // outline filter works fine
Mat kernel(3, 3, CV_32F, kdata);
// results in output being a completely white image
filter2D(src, output, CV_32F, kernel);
}
Your image is not white, it is float. I am sure that you are displaying the image with imshow later on, and it looks all white. This is explained in the imshow documentation. Specifically:
If the image is 32-bit floating-point, the pixel values are multiplied
by 255. That is, the value range [0,1] is mapped to [0,255].
This means that if it is float, it has to have values in [0,1] to be displayed correctly.
Now that we know what causes it, let's see how to solve it. I can think of 3 possible ways:
1) normalize the image to [0,1]
cv::Mat dst;
cv::normalize(outputFromBlur, dst, 0, 1, cv::NORM_MINMAX);
This function normalizes the values, so it may shift the colors... it is not the best option for known images, but rather for depth maps or other matrices with values in an unknown range.
2) convertTo uchar:
cv::Mat dst;
outputFromBlur.convertTo(dst, CV_8U);
This function does saturate_cast, so it may handle possible overflow/underflow.
3) use filter2D with another output depth:
cv::filter2D(src, output, -1, kernel);
With -1 the output will have the same depth as the source (I assume your source is CV_8U).
I hope this helps you, if not leave a comment.
I am currently working on a problem where I have created a uint16 image of type CV_16UC1 based on Velodyne data where, let's say, 98% of the pixels are black (value 0) and the remaining pixels carry metric depth information (the distance to that point). These pixels correspond to the Velodyne points from the cloud.
cv::Mat depthMat = cv::Mat::zeros(frame.size(), CV_16UC1);
depthMat = ... // here the matrix is filled
If I try to display this image I get this:
On the image you can see that the brightest (white) pixels correspond to the pixels with the greatest depth. From this I need to get a denser depth image, or something that resembles a proper depth image like in the example shown in this video:
https://www.youtube.com/watch?v=4yZ4JGgLE0I
This would require proper interpolation and extrapolation of those points (the pixels of the 2D image), and this is where I am stuck. I am a beginner when it comes to interpolation techniques. Does anyone know how this can be done, or can at least point me to a working solution or example algorithm for creating a depth map from sparse data?
I tried the following from the Kinect examples but it did not change the output:
depthMat.convertTo(depthf, CV_8UC1, 255.0 / 65535); // scale 16-bit depth down to 8 bits
const unsigned char noDepth = 255; // value treated as "missing depth"
cv::Mat small_depthf, temp, temp2;
cv::resize(depthf, small_depthf, cv::Size(), 0.01, 0.01); // inpaint a heavily downscaled copy for speed
cv::inpaint(small_depthf, (small_depthf == noDepth), temp, 5.0, cv::INPAINT_TELEA);
cv::resize(temp, temp2, depthf.size()); // upscale the inpainted result back
temp2.copyTo(depthf, (depthf == noDepth)); // copy inpainted values into the holes only
cv::imshow("window", depthf);
cv::waitKey(3);
I managed to get the desired output (something that resembles a depth image) by simply applying dilation to the sparse depth image:
cv::Mat result;
dilate(depthMat, result, cv::Mat(), cv::Point(-1, -1), 10, 1, 1);
I'm working in OpenCV C++ to filter image colours. I want to filter the image using my own matrix. See this code:
img= "c:/Test/tes.jpg";
Mat im = imread(img);
And then I want to filter/multiply it with my matrix (this matrix can be replaced with another 3x3 matrix):
Mat filter = (Mat_<double>(3, 3) <<17.8824, 43.5161, 4.11935,
3.45565, 27.1554, 3.86714,
0.0299566, 0.184309, 1.46709);
How do I multiply the image matrix with my own matrix? I still don't understand how to multiply a 3-channel (RGB) matrix with another (single-channel) matrix to produce an image with new colours.
You should take a look at the OpenCV documentation. You could use this function:
filter2D(InputArray src, OutputArray dst, int ddepth, InputArray kernel, Point anchor=Point(-1,-1), double delta=0, int borderType=BORDER_DEFAULT )
which would give you something like this in your code:
Mat output;
filter2D(im, output, -1, filter);
Regarding your question about the 3-channel matrix, the documentation specifies:
kernel – convolution kernel (or rather a correlation kernel), a single-channel floating point matrix; if you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually.
So by default your "filter" matrix will be applied equally to each color plane.
EDIT You find a fully functional example on the opencv site: http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/filter_2d/filter_2d.html