sepFilter2D OpenCV unexpected results - C++

I'm trying to apply an 8x8 separable mean filter to an image; the filter is 2D separable.
I'm converting the following code from Matlab:
Kernel = ones(n);
% for conv2 without zero-padding
LimgLarge = padarray(Limg,[n n],'circular');
LimgKer = conv2(LimgLarge,Kernel,'same')/(n^2);
LKerCorr = LimgKer(n+1:end-n,n+1:end-n);
First I pad the image by the filter size, then correlate in 2D, and finally crop back to the original image region.
Now I'm trying to implement the same thing in C++ using OpenCV.
I loaded the image, then called the following commands:
m_kernelSize = 8;
m_kernelX = Mat::ones(m_kernelSize,1,CV_32FC1);
m_kernelX = m_kernelX / m_kernelSize;
m_kernelY = Mat::ones(1,m_kernelSize,CV_32FC1);
m_kernelY = m_kernelY / m_kernelSize;
sepFilter2D(m_logImage,m_filteredImage,m_logImage.depth(),m_kernelX,m_kernelY,Point(-1,-1),0,BORDER_REPLICATE);
I expected to get the same results, but I'm getting totally different results from the Matlab version.
I'd rather not pad the image, do the correlation, and crop the image again; I expected the BORDER_REPLICATE argument to give the same results.
Incidentally, I'm aware of the copyMakeBorder function, but would rather not use it, because sepFilter2D handles the border regions by itself.

Since you said you are only loading the image before the code snippet you showed, I can see two potential flaws.
First, if you do nothing between loading the source image and your code snippet, then your source image is an 8-bit image, and since you set the function argument ddepth to m_logImage.depth(), you are also requesting an 8-bit destination image.
However, after reading the documentation of sepFilter2D, I am not sure that this is a valid combination of src.depth() and ddepth.
Can you try using the following line:
sepFilter2D(m_logImage,m_filteredImage,CV_32F,m_kernelX,m_kernelY,Point(-1,-1),0,BORDER_REPLICATE);
Second, check that you loaded your source image using the flag CV_LOAD_IMAGE_GRAYSCALE, so that it only has one channel and not three.
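For example, a minimal sketch combining both suggestions (the file name is a placeholder, not something from your code):
#include <opencv2/opencv.hpp>

int main()
{
    // Load as single-channel 8-bit, then request a CV_32F destination so
    // the 8x8 box filter is computed in floating point.
    cv::Mat src = cv::imread("input.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat kernelX = cv::Mat::ones(8, 1, CV_32FC1) / 8;
    cv::Mat kernelY = cv::Mat::ones(1, 8, CV_32FC1) / 8;
    cv::Mat dst;
    cv::sepFilter2D(src, dst, CV_32F, kernelX, kernelY,
                    cv::Point(-1, -1), 0, cv::BORDER_REPLICATE);
    return 0;
}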

I followed the Matlab code line by line; the mistake was somewhere else.
Anyway, the following two methods return the same results.
Using an 8x8 filter:
// Big filter mode - now used only for debug mode
m_kernel = Mat::ones(m_kernelSize,m_kernelSize,type);
// copyMakeBorder allocates the destination itself, so no explicit
// preallocation of LimgLarge is needed
cv::Mat LimgLarge;
cv::copyMakeBorder(m_logImage, LimgLarge, m_kernelSize, m_kernelSize,
                   m_kernelSize, m_kernelSize, BORDER_REPLICATE);
// Big filter
filter2D(LimgLarge,m_filteredImage,LimgLarge.depth(),m_kernel,Point(-1,-1),0,BORDER_CONSTANT );
m_filteredImage = m_filteredImage / (m_kernelSize*m_kernelSize);
cv::Rect roi(cv::Point(0+m_kernelSize,0+m_kernelSize),cv::Point(m_filteredImage.cols-m_kernelSize, m_filteredImage.rows-m_kernelSize));
cv::Mat croppedImage = m_filteredImage(roi);
m_diffImage = m_logImage - croppedImage;
Second method, using a separable 8x8 filter:
sepFilter2D(m_logImage,m_filteredImage,m_logImage.depth(),m_kernelX,m_kernelY,Point(-1,-1),0,BORDER_REPLICATE);
m_filteredImage = m_filteredImage / (m_kernelSize*m_kernelSize);
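As a quick sanity check, one could compare the two results directly (a sketch; it assumes croppedImage from the big-filter path and m_filteredImage from the separable path are both floating point and still in scope):
// The two results should agree up to floating-point rounding; a large
// difference would point to a border-handling or depth mismatch.
double maxDiff = cv::norm(croppedImage, m_filteredImage, cv::NORM_INF);
CV_Assert(maxDiff < 1e-4);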

Related

How to align 2 images based on their content with OpenCV

I am totally new to OpenCV and I have started to dive into it. But I'd need a little bit of help.
So I want to combine these 2 images:
I would like the 2 images to match along their edges (ignoring the very right part of the image for now)
Can anyone please point me into the right direction? I have tried using the findTransformECC function. Here's my implementation:
cv::Mat im1 = [imageArray[1] CVMat3];
cv::Mat im2 = [imageArray[0] CVMat3];
// Convert images to grayscale
cv::Mat im1_gray, im2_gray;
cvtColor(im1, im1_gray, CV_BGR2GRAY);
cvtColor(im2, im2_gray, CV_BGR2GRAY);
// Define the motion model
const int warp_mode = cv::MOTION_AFFINE;
// Set a 2x3 or 3x3 warp matrix depending on the motion model.
cv::Mat warp_matrix;
// Initialize the matrix to identity
if (warp_mode == cv::MOTION_HOMOGRAPHY)
    warp_matrix = cv::Mat::eye(3, 3, CV_32F);
else
    warp_matrix = cv::Mat::eye(2, 3, CV_32F);
// Specify the number of iterations.
int number_of_iterations = 50;
// Specify the threshold of the increment
// in the correlation coefficient between two iterations
double termination_eps = 1e-10;
// Define termination criteria
cv::TermCriteria criteria (cv::TermCriteria::COUNT+cv::TermCriteria::EPS, number_of_iterations, termination_eps);
// Run the ECC algorithm. The results are stored in warp_matrix.
findTransformECC(
    im1_gray,
    im2_gray,
    warp_matrix,
    warp_mode,
    criteria
);
// Storage for warped image.
cv::Mat im2_aligned;
if (warp_mode != cv::MOTION_HOMOGRAPHY)
    // Use warpAffine for Translation, Euclidean and Affine
    warpAffine(im2, im2_aligned, warp_matrix, im1.size(), cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
else
    // Use warpPerspective for Homography
    warpPerspective(im2, im2_aligned, warp_matrix, im1.size(), cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
UIImage* result = [UIImage imageWithCVMat:im2_aligned];
return result;
I have tried playing around with the termination_eps and number_of_iterations and increased/decreased those values, but they didn't really make a big difference.
So here's the result:
What can I do to improve my result?
EDIT: I have marked the problematic edges with red circles. The goal is to warp the bottom image and make it match with the lines from the image above:
I did a little bit of research and I'm afraid the findTransformECC function won't give me the result I'd like to have :-(
Something important to add:
I actually have an array of those image "stripes" (8 in this case); they all look similar to the images shown here, and they all need to be processed to match the lines. I have tried experimenting with the stitch function of OpenCV, but the results were horrible.
EDIT:
Here are the 3 source images:
The result should be something like this:
I transformed every image along the lines that should match. Lines that are too far away from each other can be ignored (the shadow and the piece of road on the right portion of the image)
From your images, it seems that they overlap. Since you said the stitch function didn't give you the desired results, implement your own stitching. I'm trying to do something close to that too. Here is a tutorial on how to implement it in C++: https://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/
You can use the Hough algorithm with a high threshold on the two images and then compare the vertical lines in both of them - most should be shifted a bit, but keep their angle.
This is what I've got from running this algorithm on one of the pictures:
Filtering out the horizontal lines should be easy (they are represented as Vec4i), and then you can align the remaining lines together.
Here is the example of using it in OpenCV's documentation.
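A minimal sketch of that idea in C++ (the file name and the Canny/Hough thresholds here are assumptions you would tune, not values from the question):
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("stripe.png", cv::IMREAD_GRAYSCALE);
    cv::Mat edges;
    cv::Canny(img, edges, 50, 150);
    // Probabilistic Hough with a fairly high threshold, as suggested above;
    // each detected segment is a Vec4i (x1, y1, x2, y2).
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 100, 50, 10);
    // Keep only the roughly vertical segments: discard anything more than
    // ~20 degrees away from vertical.
    std::vector<cv::Vec4i> vertical;
    for (const cv::Vec4i& l : lines) {
        double dx = l[2] - l[0], dy = l[3] - l[1];
        double angle = std::atan2(std::abs(dx), std::abs(dy)); // 0 = vertical
        if (angle < 20.0 * CV_PI / 180.0)
            vertical.push_back(l);
    }
    // 'vertical' can now be matched across two stripes by x-position.
    return 0;
}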
UPDATE: another thought. Aligning the lines together can be done with a concept similar to how a cross-correlation function works. It doesn't matter if picture 1 has 10 lines and picture 2 has 100 lines; the shift with the most lines aligned (which is, roughly, the maximum of the CCF) should be pretty close to the answer, though this might require some tweaking - for example, giving weight to every line based on its length, angle, etc. Computer vision never has a direct way, huh :)
UPDATE 2: I actually wonder if taking the bottom pixel row of the top image as array 1 and the top pixel row of the bottom image as array 2, running a general CCF over them, and using its maximum as the shift could work too... But I think it would be a known method if it worked well.
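For what it's worth, a rough sketch of that row-correlation idea (purely illustrative; it assumes single-channel stripes and brute-forces the shift that maximizes the mean cross-product):
#include <opencv2/opencv.hpp>

// Hypothetical helper: correlate the bottom row of 'top' against the top
// row of 'bottom' over shifts in [-maxShift, maxShift] and return the best.
int bestShift(const cv::Mat& top, const cv::Mat& bottom, int maxShift)
{
    cv::Mat a, b;
    top.row(top.rows - 1).convertTo(a, CV_32F);
    bottom.row(0).convertTo(b, CV_32F);
    int best = 0;
    double bestScore = -1e30;
    for (int s = -maxShift; s <= maxShift; ++s) {
        double score = 0;
        int n = 0;
        for (int x = 0; x < a.cols; ++x) {
            int xb = x + s;
            if (xb < 0 || xb >= b.cols) continue;
            score += a.at<float>(0, x) * b.at<float>(0, xb);
            ++n;
        }
        if (n > 0 && score / n > bestScore) {
            bestScore = score / n;
            best = s;
        }
    }
    return best; // shift (in pixels) with the highest mean correlation
}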

Edge Detection, Matlab Vision System Toolbox

I have several images where I need to find an edge. I have tried following the vision.EdgeDetector System object in Matlab and the example they give here: http://www.mathworks.com/help/vision/ref/vision.edgedetectorclass.html
They give the example
hedge = vision.EdgeDetector;
hcsc = vision.ColorSpaceConverter('Conversion','RGB to intensity');
hidtypeconv = vision.ImageDataTypeConverter('OutputDataType','single');
img = step(hcsc, imread('picture.png'));
img1 = step(hidtypeconv, img);
edges = step(hedge, img1);
imshow(edges);
Which I have followed exactly in my code.
However this code doesn't produce all the edges I would like, it seems as though Matlab can only pick up on about half of the edges in the entire image. Is there a different approach I can take to finding all the edges, or a way to improve upon the vision.EdgeDetector object in Matlab?
By default, hedge = vision.EdgeDetector has a Threshold value of 20. Try changing it to hedge = vision.EdgeDetector('Threshold',Value) and play with Value to see what works best for you.
Try:
imgGray = rgb2gray(imgRGB);
imgEdge = edge(imgGray,'canny');
This should give you most of the edge points; if not, change the parameters THRESH and SIGMA accordingly. Also check the following for other methods:
help edge
You do not have to use the vision.EdgeDetector System object; some things are easier without it! ;)
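And if you are open to doing this outside Matlab, the OpenCV C++ equivalent is short. A sketch (the file name and the hysteresis thresholds are assumptions; the thresholds play the same tuning role as THRESH above):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("picture.png", cv::IMREAD_GRAYSCALE);
    cv::Mat edges;
    // Lower/upper hysteresis thresholds; raise or lower them to trade
    // fewer spurious edges against more missed ones.
    cv::Canny(img, edges, 50, 150);
    cv::imshow("edges", edges);
    cv::waitKey(0);
    return 0;
}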

Using the Matrix operations of OpenCv (Addition and Subtraction) OpenCV C++

I added two images together using the addWeighted function of OpenCV:
addWeighted(ROI,1,watermark,0.5,0.0,ROI);
However, when I try to do the reverse, I get patches of black instead of removing the second image from the resultant image.
addWeighted(ROI,1,watermark,-0.5,0.0,ROI);
I have tried using subtract as well, but I am getting the same result.
The image below describes what I'm talking about.
Do note that my algorithm did not correctly detect all the watermarked areas, but for those which were detected correctly, I am unable to subtract the watermark from it.
It would be greatly appreciated if you guys could advise me on what to do for the subtraction.
Thank you.
According to the docs of addWeighted, you are giving half weight to the watermark (can you explain why?), and the optional last argument is a depth type, not an array; it should be -1 if watermark and ROI have the same depth, or whatever depth value you want. Also note in the docs that the final value is saturated: if it exceeds 255 it is pulled down to 255. So it is no wonder that, after subtracting, you don't get the exact original values back.
EDIT:
For you, I + 0.5W = R, where I is the Lena image, W is the watermark and R is the resultant image. Since R is getting truncated above 255, store R in a signed integer matrix such as CV_32SC3 (OpenCV has no unsigned 32-bit type). Since you are using OpenCV 2.1, it is better to perform the weighted addition by scanning the image rather than using the OpenCV API. That way you can save R in an integer matrix where the maximum value you can get is 255 + 255, which is easily stored. Use the uchar (truncated) matrix for display, and the integer matrix for reversing the process.
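A sketch of that idea using the newer C++ API (the file names are placeholders; the point is keeping an unclamped signed-integer copy of R so the reverse step is exact):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat roi = cv::imread("lena_roi.png");        // I, 8-bit BGR
    cv::Mat watermark = cv::imread("watermark.png"); // W, same size
    // Compute R = I + 0.5*W in a wide signed type so nothing saturates.
    cv::Mat roi32, wm32;
    roi.convertTo(roi32, CV_32SC3);
    watermark.convertTo(wm32, CV_32SC3);
    cv::Mat r32 = roi32 + wm32 / 2;
    // For display, clamp down to 8 bits (this is the lossy step).
    cv::Mat rDisplay;
    r32.convertTo(rDisplay, CV_8UC3); // values above 255 saturate here
    // Reversing from the integer matrix recovers I exactly, whereas
    // reversing from rDisplay would leave the black/clipped patches.
    cv::Mat recovered32 = r32 - wm32 / 2;
    cv::Mat recovered;
    recovered32.convertTo(recovered, CV_8UC3);
    return 0;
}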

Subtract displaced mask using OpenCV

I want to do:
masked = image - mask
But I want to "displace" the mask, i.e. move it vertically and horizontally (as long as the intersection between it and the image is not empty, this is valid).
I have some hand-coded assembly (using MMX instructions) which does this, embedded in a C++ program, but it's unstable when doing vertical displacement, so I thought of using OpenCV instead. Would it be possible to do this by calling only one OpenCV function?
Performance is critical; using OpenCV, time should be at least in the same order of magnitude as the assembly code.
EDIT: Here's an example
image (medium frame, see the contrast in the guy's skull):
mask (first frame, no contrast):
image - mask, without displacement. Notice how the contrast path is enhanced, but since the patient moved a little, we can see some skull contours which are visual noise for diagnostic purposes.
image - mask, mask displaced about 5 pixels down. To try and compensate for the noise introduced by the patient's movement, we "displace" the mask slightly so as to remove the contours and see the contrast path better (brightness and contrast were adjusted, that's why it looks a bit darker).
EDIT 2: About the algorithm, I managed to fix its issues. It doesn't crash anymore, but the downside is that it now processes all image pixels (it should only process those which need to be subtracted). Anyway, how to fix the old code is not my question; my question is, how do I do this processing using OpenCV? I'll post some profiling results later.
I know this is in Python, so not what you are after, but translating it to C++ should be very straightforward. It crops both images to matching sizes (required for nearly all operations), determined by the displacement between the images and their relative sizes. This method should be quick, as cv.GetSubRect doesn't copy anything, so it comes down to the cv.AbsDiff call (if you have an actual difference mask, you could use cv.Sub, which should make it even quicker). The code handles displacement in any direction, and mask and image can be any size (the mask can be larger than the image); there must be an overlap for the specified displacement. The difference between the images can be viewed alone, or 'in place'.
A diagram to illustrate what's going on: the first two squares are the example image and mask. The next three squares show a horizontal displacement of the 'mask' of -30, 0, and 30 pixels, and the last one has a displacement of 20, 20.
import cv
# First test pair (overridden by the second pair below):
#image = cv.LoadImageM("image.png")
#mask = cv.LoadImageM("mask.png")
image = cv.LoadImageM("image2.png")
mask = cv.LoadImageM("small_mask.png")
image_width, image_height = cv.GetSize(image)
mask_width, mask_height = cv.GetSize(mask)
#displacements here:
horiz_disp = 20
vert_disp = 20
image_horiz = mask_horiz = image_vert = mask_vert = 0
if vert_disp < 0:
    mask_vert = abs(vert_disp)
    sub_height = min(mask_height + vert_disp, image_height)
else:
    sub_height = min(mask_height, image_height - vert_disp)
    image_vert = vert_disp
if horiz_disp < 0:
    mask_horiz = abs(horiz_disp)
    sub_width = min(mask_width + horiz_disp, image_width)
else:
    sub_width = min(mask_width, image_width - horiz_disp)
    image_horiz = horiz_disp
#cv.GetSubRect returns a rectangular part of an image, without copying any data. - fast.
mask_sub = cv.GetSubRect(mask, (mask_horiz, mask_vert, sub_width, sub_height))
image_sub = cv.GetSubRect(image, (image_horiz, image_vert, sub_width, sub_height))
#Subtracts the mask overlap region from the image overlap region, puts it in image_sub
cv.AbsDiff(image_sub, mask_sub, image_sub)
# Shows diff only:
cv.ShowImage('image_sub', image_sub)
# Shows image with diff section
cv.ShowImage('image', image)
cv.WaitKey(0)
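Since translating to C++ is claimed to be straightforward, here is one possible sketch with the modern C++ API (file names are placeholders; a cv::Rect view, like cv.GetSubRect, copies no pixel data):
#include <opencv2/opencv.hpp>
#include <algorithm>

int main()
{
    cv::Mat image = cv::imread("image.png");
    cv::Mat mask = cv::imread("mask.png");
    int horiz_disp = 20, vert_disp = 20; // displacements here
    int image_horiz = 0, mask_horiz = 0, image_vert = 0, mask_vert = 0;
    int sub_width, sub_height;
    if (vert_disp < 0) {
        mask_vert = -vert_disp;
        sub_height = std::min(mask.rows + vert_disp, image.rows);
    } else {
        sub_height = std::min(mask.rows, image.rows - vert_disp);
        image_vert = vert_disp;
    }
    if (horiz_disp < 0) {
        mask_horiz = -horiz_disp;
        sub_width = std::min(mask.cols + horiz_disp, image.cols);
    } else {
        sub_width = std::min(mask.cols, image.cols - horiz_disp);
        image_horiz = horiz_disp;
    }
    // Views into the overlap region; no pixel data is copied.
    cv::Mat mask_sub = mask(cv::Rect(mask_horiz, mask_vert, sub_width, sub_height));
    cv::Mat image_sub = image(cv::Rect(image_horiz, image_vert, sub_width, sub_height));
    // Absolute difference, written back into the image's overlap region.
    cv::absdiff(image_sub, mask_sub, image_sub);
    cv::imshow("image_sub", image_sub); // diff only
    cv::imshow("image", image);         // image with diff section
    cv::waitKey(0);
    return 0;
}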

Is there any function opposite to bwmorph(image,'skel') in MATLAB or C,C++ code?

I want to create an image of an object from its morphological skeleton. Is there any function in MATLAB, or C/C++ code, to do this? Thanks in advance.
Original image, and its skeleton (obtained using bwmorph(image,'skel',Inf)):
As stated in the comments above, bwmorph(..,'skel',Inf) gives you a binary image of the skeleton, which is not enough on its own to recover the original image.
On the other hand, if you had, for each skeleton pixel, the value returned by the distance transform, then you could successfully apply the inverse distance transform (as suggested by @belisarius):
Note that this implementation of InverseDistanceTransform is rather slow (I based it on a previous answer). It repeatedly uses POLY2MASK to get the pixels inside the specified circles, so there is room for improvement.
%# get binary image
BW = ~imread('http://img546.imageshack.us/img546/3154/hand2.png');
%# SkeletonTransform[]
skel = bwmorph(BW,'skel',Inf);
DD = double(bwdist(~BW));
D = zeros(size(DD));
D(skel) = DD(skel);
%# zero-centered unit circle
t = linspace(0,2*pi,50);
ct = cos(t);
st = sin(t);
%# InverseDistanceTransform[] : union of all disks centered around each
%# pixel of the distance transform, taking pixel values as radius
[r c] = size(D);
BW2 = false(r,c);
for j=1:c
    for i=1:r
        if D(i,j)==0, continue; end
        mask = poly2mask(D(i,j).*st + j, D(i,j).*ct + i, r, c);
        BW2(mask) = true;
    end
end
%# plot
figure
subplot(131), imshow(BW), title('original')
subplot(132), imshow(D,[]), title('Skeleton+DistanceTransform')
subplot(133), imshow(BW2), title('InverseDistanceTransform')
The result:
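For an OpenCV C++ route, the same reconstruction can be sketched with distanceTransform plus filled circles, which avoids the slow POLY2MASK loop (the skeleton here comes from the opencv_contrib ximgproc module, which may differ slightly from bwmorph's skeleton):
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp> // thinning() lives in opencv_contrib

int main()
{
    // Binary object mask (non-zero = object); the file name is a placeholder.
    cv::Mat bw = cv::imread("hand2.png", cv::IMREAD_GRAYSCALE) > 0;
    // Skeleton, and distance to the background.
    cv::Mat skel, dist;
    cv::ximgproc::thinning(bw, skel);
    cv::distanceTransform(bw, dist, cv::DIST_L2, 3);
    // Inverse distance transform: union of disks centred on each skeleton
    // pixel, with the pixel's distance value as radius.
    cv::Mat rec = cv::Mat::zeros(bw.size(), CV_8U);
    for (int i = 0; i < skel.rows; ++i)
        for (int j = 0; j < skel.cols; ++j)
            if (skel.at<uchar>(i, j))
                cv::circle(rec, cv::Point(j, i),
                           cvRound(dist.at<float>(i, j)), 255, cv::FILLED);
    cv::imshow("reconstructed", rec);
    cv::waitKey(0);
    return 0;
}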
Depending on your object, you may be able to get a meaningful result using dilation (IMDILATE in Matlab).
The function bwmorph supports code generation (see 'Image Processing functions for code generation' in the documentation). Write the code in a MATLAB function and use the codegen command to generate code. Code generation for bwmorph has been available since R2012b (see the MATLAB Release Notes).