Shadow removal by inverse Laplacian in C++

I have a question about implementing the algorithm in
"Removing Shadows from Images" by Finlayson et al.
I have obtained the MATLAB code by Jose Alvarez for computing the illumination-invariant image and converted it to C++.
I then followed the image reconstruction step outlined in Finlayson's algorithm, but got stuck at the Poisson equation part, which comes after removing the shadow edge from the log-channel image.
How should I proceed after that? The discussion of this part is vague to me. I have read the following presentation slides; they say I must perform an inverse Laplacian operation on the image.
What should I do? The inverse Laplacian is not a common thing to code. I would appreciate any advice I can get.
Thanks
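For later readers: the "inverse Laplacian" step amounts to solving the Poisson equation ∇²u = f, where f is the divergence of the shadow-free gradient field of the log channel. One simple (if slow) way to approximate the solution is an iterative Gauss-Seidel sweep. The function below is only a rough sketch of that idea with zero boundary values and a hypothetical name, not Finlayson's actual implementation; a DCT/FFT-based solver would be far faster in practice.

#include <opencv2/opencv.hpp>

// Sketch: iteratively solve laplacian(u) = f on a CV_32F image.
// 'f' is assumed to be the divergence of the modified (shadow-free)
// gradient field of the log image; 'iters' controls convergence.
cv::Mat solvePoissonGaussSeidel(const cv::Mat& f, int iters = 2000)
{
    CV_Assert(f.type() == CV_32F);
    cv::Mat u = cv::Mat::zeros(f.size(), CV_32F);
    for (int it = 0; it < iters; ++it) {
        for (int y = 1; y < u.rows - 1; ++y) {
            for (int x = 1; x < u.cols - 1; ++x) {
                // Discrete Poisson update: average of the four neighbours minus f/4.
                u.at<float>(y, x) = 0.25f * (u.at<float>(y - 1, x) +
                                             u.at<float>(y + 1, x) +
                                             u.at<float>(y, x - 1) +
                                             u.at<float>(y, x + 1) -
                                             f.at<float>(y, x));
            }
        }
    }
    return u;  // reconstructed log channel, up to an additive constant
}

The result is only defined up to an additive constant, so the reconstructed log channel normally needs to be re-scaled to a sensible intensity range afterwards.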

Related

MATLAB's interp1 with either 'pchip' or 'cubic' in OpenCV for a 1D vector

I need to reproduce the same logic and values we get in MATLAB using interp1 with the 'pchip' or 'cubic' flags, and I'm unable to find a fitting replacement in OpenCV other than implementing the cubic interpolation from Numerical Recipes myself (as noted in another question, this is based on De Boor's algorithm, which is what MATLAB uses).
We need to interpolate the values of a 1D vector of doubles based on our sample points. Linear interpolation did not yield sufficient results, as it is not smooth enough and produces gradient discontinuities at the joint points.
I came across this on OpenCV's website, but it seems to be bicubic only and to work on an image, whereas I need a 1D cubic interpolation.
Does anyone know any other function or simple solution on OpenCV's side for this issue? Any suggestion would help, thanks.
Side note: we are using OpenCV 4.3.0.
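Not an OpenCV answer, but as a point of reference: a plain Catmull-Rom cubic interpolation over a 1D vector is only a few lines of C++ and gives a C1-continuous curve. Note that this is not identical to MATLAB's 'pchip' (which is additionally monotonicity-preserving); it is a rough sketch under the assumption of uniformly spaced sample points, and the function name is just a placeholder.

#include <vector>
#include <cmath>
#include <algorithm>

// Sketch: Catmull-Rom cubic interpolation of a uniformly spaced 1D signal.
// 'y' holds sample values at integer positions 0..y.size()-1, 't' is the
// (fractional) query position. Smooth, unlike linear interpolation.
double interpCubic(const std::vector<double>& y, double t)
{
    const int n = static_cast<int>(y.size());
    int i = static_cast<int>(std::floor(t));
    i = std::max(0, std::min(i, n - 2));
    double u = t - i;

    // Clamp neighbours at the borders.
    double y0 = y[std::max(i - 1, 0)];
    double y1 = y[i];
    double y2 = y[i + 1];
    double y3 = y[std::min(i + 2, n - 1)];

    // Catmull-Rom basis coefficients.
    double a = -0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3;
    double b =        y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3;
    double c = -0.5 * y0            + 0.5 * y2;
    double d =              y1;
    return ((a * u + b) * u + c) * u + d;
}

If monotonicity at the joints matters (the main selling point of 'pchip'), the tangents would have to be limited as in the Fritsch-Carlson scheme rather than taken from the Catmull-Rom formula.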

How can I segment an object using Kinect V1?

I'm working with SDK 1.8 and I'm getting the depth stream from the Kinect. Now, I want to hold a paper of A4 size in front of the camera and want to get the co-ordinates of the corners of that paper so I can project an image onto it.
How can I detect the corners of the paper and get the co-ordinates? Does Kinect SDK 1.8 provide that option?
Thanks
Kinect SDK 1.8 does not provide this feature itself (to my knowledge). Depending on the language you are coding in, there are almost certainly libraries that allow such an operation if you break it into steps.
OpenCV, for example, is quite useful for image processing. When I worked with the Kinect for object recognition, I used AForge with C#.
I recommend tackling the challenge as follows:
Edge Detection:
Apply an edge detection algorithm such as the Canny filter to the image. First you will probably, depending on the library, convert your depth picture into a greyscale picture. The result is again a greyscale image in which the intensity of a pixel correlates with the probability of it belonging to an edge. Using a threshold, you binarize this picture to black/white.
Hough Transformation: used to get the positions and parameters of lines within an image, which allows further calculation. The Hough transformation is VERY sensitive to its parameters, and you will spend a lot of time tuning them to get good results.
Calculation of corner points: assuming the Hough transformation was successful, you can now calculate all intersections of the detected lines, which yields the corner points you are looking for.
All of these steps (especially Edge Detection and Hough Transformation) have been asked/answered/discussed in this forum.
If you provide code, intermediate results, or further questions, you can get a more detailed answer.
p.s.
I remember that the Kinect was not that accurate and that noise was an issue, so you might consider applying a filter (e.g. a median filter) before doing these operations.
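A rough OpenCV sketch of the pipeline described above, assuming the depth frame has already been converted to an 8-bit grayscale cv::Mat; the function name is hypothetical and every threshold is a placeholder that will need tuning on real data:

#include <opencv2/opencv.hpp>
#include <vector>

// depth8u: 8-bit grayscale image derived from the Kinect depth stream.
std::vector<cv::Vec4i> findPaperLines(const cv::Mat& depth8u)
{
    cv::Mat denoised, edges;
    cv::medianBlur(depth8u, denoised, 5);       // Kinect depth data is noisy
    cv::Canny(denoised, edges, 50, 150);        // thresholds need tuning

    std::vector<cv::Vec4i> lines;
    // Probabilistic Hough transform; the parameters are very data dependent.
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 100, 10);
    return lines;   // intersect these line segments to get the paper corners
}

The corner coordinates then follow from intersecting the detected line segments pairwise and keeping the four intersections that form a plausible rectangle.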

3D image gradient in OpenCV

I have 3D image data obtained from a 3D OCT scan. The data can be represented as I(x,y,z), which means there is an intensity value at each voxel.
I am writing an algorithm which involves finding the image's gradient in the x, y, and z directions in C++. I've already written code in C++ using OpenCV for 2D and want to extend it to 3D with minimal changes to my existing 2D code.
I am familiar with 2D gradients using the Sobel or Scharr operators. My search brought me to this post, the answers to which recommend ITK and the Point Cloud Library. However, these libraries have many more functionalities than I need. Since I am not very experienced with C++, they would require a fair amount of reading, for which I don't have time. Moreover, these libraries don't use the cv::Mat object, and if I use anything other than cv::Mat, my whole code might have to be changed.
Can anyone help me with this please?
Update 1: Possible solution using kernel separability
Based on #Photon's answer, I'm updating the question.
From what #Photon says, I get an idea of how to construct a Sobel kernel in 3D. However, even if I construct a 3x3x3 cube, how do I implement it in OpenCV? The convolution operations in OpenCV, such as filter2D, only work in 2D.
There may be one way. Since the Sobel kernel is separable, we can break the 3D convolution into convolutions in lower dimensions. Comments 20 and 21 of this link say the same thing. However, even after separating the 3D kernel, filter2D still cannot be used directly since the image itself is 3D. Is there a way to break down the image as well? There is an interesting post which hints at something like this. Any further ideas on this?
Since the Sobel operator is separable, it's easy to envision how to add a 3rd dimension.
For example, when you look at the filter definition for Gx in the link you posted, you see that it multiplies the surrounding pixels by coefficients whose sign depends on the relative X position and whose magnitude depends on the offset in Y.
When you extend to 3D, the Gx gradient is calculated the same way, but over a 3x3x3 cube: the sign of a coefficient keeps the same definition, and its magnitude now depends on the offset in Y, Z, or both.
The other gradients (Gy, Gz) are defined the same way, but around their own axes.
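To make the separable idea concrete with cv::Mat only, one option is to keep the volume as a std::vector<cv::Mat> of z-slices. For Gx you can first smooth along z with the 1-2-1 kernel (a weighted sum of neighbouring slices) and then run the ordinary 2D Sobel in x, which already includes the smoothing in y; Gz is a central difference across slices applied to images smoothed in x and y. This is only a sketch, assuming CV_32F slices, and the function names are placeholders:

#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>

// Gx of a volume stored as z-slices: smooth along z with [1 2 1],
// then apply the 2D Sobel in x (which already smooths in y).
cv::Mat sobelXAtSlice(const std::vector<cv::Mat>& vol, int z)
{
    int zm = std::max(z - 1, 0), zp = std::min(z + 1, (int)vol.size() - 1);
    cv::Mat smoothedZ = vol[zm] + 2.0 * vol[z] + vol[zp];
    cv::Mat gx;
    cv::Sobel(smoothedZ, gx, CV_32F, 1, 0, 3);
    return gx;
}

// Gz: smooth each neighbouring slice in x and y with [1 2 1],
// then take the [-1 0 1] central difference across slices.
cv::Mat sobelZAtSlice(const std::vector<cv::Mat>& vol, int z)
{
    int zm = std::max(z - 1, 0), zp = std::min(z + 1, (int)vol.size() - 1);
    cv::Mat k = (cv::Mat_<float>(3, 1) << 1, 2, 1);
    cv::Mat sm, sp;
    cv::sepFilter2D(vol[zm], sm, CV_32F, k, k);
    cv::sepFilter2D(vol[zp], sp, CV_32F, k, k);
    return sp - sm;
}

Gy follows the same pattern as Gx with cv::Sobel(smoothedZ, gy, CV_32F, 0, 1, 3). This keeps all the data in cv::Mat objects, so the existing 2D code barely changes.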

Pixel level image registration / alignment?

I'm trying to remove foreground from two images, here's a sample pair of images:
As you can see, the Budweiser bottle is removed from the scene before the second shot is taken.
These photos were captured with a pinhole camera (an iPhone), and the tricky part is that I'm hand-holding the camera, so it cannot be guaranteed that the images are aligned perfectly pixel by pixel, and a simple subtract-and-threshold method will not work.
So I decided to perform image registration using findHomography and warpPerspective from OpenCV; here's the resulting image:
This image is warped with the matrix I got from findHomography. It somewhat improved the alignment, but the images are still not aligned well enough to remove the foreground with a simple subtraction.
So, finally, I decided to implement a "fuzzy-minus" algorithm: for every pixel in image1, I look through a 7x7 neighbourhood in image2 (a 7-by-7 kernel?), use the minimal grayscale difference as the subtraction result, and threshold the result into a binary image. Here's what I got:
And the result is still not good. Notice the white holes in the bottle; they are caused by the foreground and background having similar grayscale values. So I'm not sure what to do now.
I can think of two ways to solve the problem: the first is to get a better-aligned pair of images and simply subtract them; the second is to use a more robust way to extract the foreground.
Can anyone give me some advice on how to deal with this kind of problem? I believe there should be some state-of-the-art algorithms or processing pipelines, but after googling around I found nothing.
I'm using OpenCV with C++; it would be fantastic if you could tell me how to do it with these tools in hand.
Big big thanks in advance!
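For readers who want to reproduce the "fuzzy-minus" step described in the question, a rough sketch could look like the following. It assumes two already-registered CV_8U grayscale images and the 7x7 window from the question; the function name, window radius, and threshold are placeholders, not part of any library.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cstdlib>

// "Fuzzy minus": for each pixel in a, take the smallest absolute grayscale
// difference found in a (2*radius+1)^2 window of b, then threshold to a mask.
cv::Mat fuzzyMinus(const cv::Mat& a, const cv::Mat& b, int radius = 3, int thresh = 30)
{
    cv::Mat diff(a.size(), CV_8U, cv::Scalar(0));
    for (int y = 0; y < a.rows; ++y) {
        for (int x = 0; x < a.cols; ++x) {
            int best = 255;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int yy = std::min(std::max(y + dy, 0), b.rows - 1);
                    int xx = std::min(std::max(x + dx, 0), b.cols - 1);
                    best = std::min(best, std::abs(a.at<uchar>(y, x) - b.at<uchar>(yy, xx)));
                }
            }
            diff.at<uchar>(y, x) = static_cast<uchar>(best);
        }
    }
    cv::Mat mask;
    cv::threshold(diff, mask, thresh, 255, cv::THRESH_BINARY);
    return mask;
}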
The problem is not in your algorithm. You are having problems because the two scenes were not taken from exactly the same angle, as shown in the animation below. This slight difference highlights the edges in the subtraction.
You need a static camera in order to apply this approach.
I suggest using mathematical morphology on the mask that you got to get rid of the artifacts.
Try applying both opening and closing to get rid of the small black and white regions.
Mathematical Morphology
Mathematical Morphology in opencv
The difference between the two pictures is pretty large, so you will need a large structuring element, but I don't think you will be able to get rid of the shadow.
For the two large strips in the background, you may also try a horizontally shaped structuring element.
Edit
Is it possible to produce a grayscale image instead of a binary image? If yes, you may try experimenting with the top-hat method for the shadow, but I am not sure about this point.
This is what I got using two different structuring elements, closing THEN opening:
#include <opencv2/opencv.hpp>
using namespace cv;

Mat mask = imread("mask.jpg", IMREAD_GRAYSCALE);
// Closing first fills the small black holes, opening then removes the small white specks.
morphologyEx(mask, mask, MORPH_CLOSE, getStructuringElement(MORPH_ELLIPSE, Size(50, 10)));
morphologyEx(mask, mask, MORPH_OPEN,  getStructuringElement(MORPH_ELLIPSE, Size(10, 50)));
imshow("open", mask);
imwrite("maskopenclose.jpg", mask);
waitKey(0);
I would suggest optical flow for alignment and OpenCV's background subtraction algorithm:
http://docs.opencv.org/trunk/doc/tutorials/video/background_subtraction/background_subtraction.html
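As a rough sketch of what that could look like, treating the empty scene as the "background" frames and the bottle scene as the frame to segment (the function name, history length, and threshold are placeholder assumptions, not values from the tutorial):

#include <opencv2/opencv.hpp>
#include <opencv2/video/background_segm.hpp>

// Sketch: learn the background model from the "empty" shot, then segment
// the shot containing the bottle. learningRate 0 freezes the model.
cv::Mat subtractForeground(const cv::Mat& emptyScene, const cv::Mat& withObject)
{
    cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 =
        cv::createBackgroundSubtractorMOG2(500, 16.0, false);  // no shadow detection
    cv::Mat mask;
    for (int i = 0; i < 30; ++i)          // let the model settle on the background
        mog2->apply(emptyScene, mask);
    mog2->apply(withObject, mask, 0.0);   // do not learn from the test frame
    return mask;
}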
I suggest that instead of using findHomography you try some of OpenCV's stereo correspondence functions: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
There is sample code here: https://github.com/Itseez/opencv/blob/master/samples/cpp/stereo_calib.cpp
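For illustration only, a minimal disparity computation with StereoSGBM might look like the sketch below. It assumes the two grayscale shots have already been rectified, and the disparity range, block size, and function name are placeholder guesses:

#include <opencv2/opencv.hpp>

// Sketch: semi-global block matching between two rectified grayscale images.
cv::Mat computeDisparity(const cv::Mat& left, const cv::Mat& right)
{
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 64, 9);
    cv::Mat disp16, disp8;
    sgbm->compute(left, right, disp16);                 // fixed-point disparities (x16)
    disp16.convertTo(disp8, CV_8U, 255.0 / (64 * 16.0));
    return disp8;
}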

Warping images and a problem with the homography matrix (OpenCV)

I have a problem with the results I get from the function cvFindHomography().
The result contains negative values; I will explain what I'm doing.
I'm working on video stabilization using an optical flow method. I have estimated the locations of the features in the first and second frames. Now my aim is to warp the images in order to stabilize them. Before this step I need to calculate the homography matrix between the frames, for which I used the function mentioned above, but I'm getting results that don't seem realistic: the matrix contains negative values, and the values can change to even stranger results.
0.482982 53.5034 -0.100254
-0.000865877 63.6554 -0.000213824
-0.0901095 0.301558 1
After obtaining these results I run into a problem when applying image warping with cvWarpPerspective(). The error indicates a problem with the matrices: an incorrect transformation in "cvarrToMat"?
So where is the problem? Can you suggest another approach, if one is available?
Note: if you can help me with implementing the warping in C++, that would be great.
Thank you
A poor homography estimate can generate warping errors inside cvWarpPerspective().
The homography values you posted show a full projective transformation that maps points at infinity to finite points in the 2D Euclidean plane, which suggests it may be wrong.
In video stabilization, a good homography model is usually computed from other features, such as Harris corners, Hessian-Affine regions, or SIFT/SURF, combined with a robust model estimator such as RANSAC or LMedS.
Check out this link for a MATLAB example of video stabilization...
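A rough C++ sketch of that usual pipeline: track corners with pyramidal Lucas-Kanade, estimate the homography robustly with RANSAC, then warp the current frame onto the previous one. The function name and all parameter values are placeholders for illustration:

#include <opencv2/opencv.hpp>
#include <vector>

// prev, curr: consecutive grayscale frames (CV_8U).
cv::Mat stabilizeFrame(const cv::Mat& prev, const cv::Mat& curr)
{
    std::vector<cv::Point2f> p0, p1;
    cv::goodFeaturesToTrack(prev, p0, 500, 0.01, 10);   // Shi-Tomasi/Harris corners

    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prev, curr, p0, p1, status, err);

    // Keep only the successfully tracked points.
    std::vector<cv::Point2f> src, dst;
    for (size_t i = 0; i < status.size(); ++i)
        if (status[i]) { src.push_back(p0[i]); dst.push_back(p1[i]); }

    // RANSAC discards bad matches that would otherwise produce a
    // degenerate, strongly projective homography with odd values.
    cv::Mat H = cv::findHomography(dst, src, cv::RANSAC, 3.0);

    cv::Mat stabilized;
    cv::warpPerspective(curr, stabilized, H, curr.size());
    return stabilized;
}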