Is there a way to solve quadratic programming problems natively in OpenCV? - c++

So, I'm trying to write an algorithm for adjusting the lighting around a camera to get the right grey levels in an image. I've measured grey levels at different lighting levels, and now I'm trying to derive a relationship between grey levels across an area and the light sources illuminating that same area.
So far I've been doing this in OpenCV and C++, based on a prior prototype in MatLab. Part of the MatLab solution involves a quadratic solver (the function quadprog), and I must admit to being a little out of my depth when it comes to quadratic programming. If possible I would really like to implement this natively in OpenCV, so I don't have to pull in a different library, convert my Mats to its matrix format, and then back to cv::Mats once more. Is there a way to do this natively? I have looked at the documentation for OpenCV's solvers, but I'm not sure they're applicable in this case...
For reference, the MatLab code I'm trying to implement looks like this, where H, F, A, B, LB and UB are matrices:
opt=optimset('Algorithm','active-set','Display','off');
[setpwm,fval,flag]=quadprog(H,F,A,B,[],[],LB,UB,[],opt);
"H represents the quadratic in the expression 1/2*x'Hx + f'*x. If H is not symmetric, quadprog issues a warning and uses the symmetrized version (H + H')/2 instead." - quadprog MatLab page
"F represents the linear term in the expression 1/2*x'Hx + F'*x." - quadprog MatLab page
"A is an M-by-N matrix, where M is the number of inequalities, and N is the number of variables." - quadprog MatLab page
"B contains the Linear inequality constraints, specified as a real vector." - quadprog MatLab page
LB and UB are vectors specifying the lower and upper bounds on the vector x.
Thank you! :)
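As far as I know, OpenCV does not ship a constrained QP solver equivalent to quadprog, so if staying native is the priority, one option is to approximate the call with a projected gradient loop working directly on cv::Mat. The sketch below is only a rough illustration: it handles the LB/UB box bounds but ignores the general A*x <= B inequalities, and the fixed step size and iteration count are placeholders that would need tuning.

#include <opencv2/core.hpp>

// Minimize 0.5*x'*H*x + F'*x  subject to  LB <= x <= UB, via projected gradient
// descent. H is a CV_64F square matrix; F, LB, UB are CV_64F column vectors.
// Illustrative only, not a drop-in replacement for quadprog.
cv::Mat solveBoxQP(const cv::Mat& H, const cv::Mat& F,
                   const cv::Mat& LB, const cv::Mat& UB,
                   int iters = 2000, double step = 1e-3)
{
    cv::Mat x = 0.5 * (LB + UB);               // start in the middle of the box
    for (int i = 0; i < iters; ++i)
    {
        cv::Mat grad = H * x + F;              // gradient of 0.5*x'Hx + F'x
        x = x - step * grad;                   // plain gradient step
        cv::Mat lower = cv::max(x, LB);        // clamp to the lower bounds
        x = cv::min(lower, UB);                // clamp to the upper bounds
    }
    return x;
}

With H positive definite and a small enough step this converges to the box-constrained minimiser; if the general A*x <= B constraints matter, a dedicated QP library (and converting cv::Mat to its format) is probably unavoidable.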

Related

Bundle adjustment with focal length correction not converging

I am trying to add a new feature to our existing bundle adjustment implementation.
The algorithm uses the Gauss-Newton method and has been working for well over a decade. The least-squares "A" matrix is populated using initial approximations of the image exterior orientations, as well as the object points. The book by Kraus - "Photogrammetry: Fundamental and Standard Processes" - was used for this.
A while ago, self-calibration was added to this algorithm; however, only the formulae by Ebner and Gruen were added (formula for Ebner here). I am now trying to add the "Brown-Conrady" formula, which is well documented in this paper (final algorithm under "concluding remarks"). It uses 10 parameters to determine deltaX and deltaY.
When I include all the parameters except for deltaC (the correction to the focal length/camera constant), our algorithm works and the adjustment converges and produces the desired residuals. However, as soon as I introduce deltaC (which mathematically I see as "allowing" the image points to scale by some amount in X and Y) the adjustment diverges.
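(For what it's worth, here is one common way to write that scale interpretation, shown purely as an illustration and not necessarily the exact formulation in the paper: a camera-constant correction dc moves an image point radially in proportion to its distance from the principal point, i.e. it acts as a near-uniform scale.)

// Illustration only: the deltaC term written as a scale on the
// principal-point-reduced image coordinates. The sign convention depends
// on the chosen parameterisation.
struct ImagePoint { double x, y; };

ImagePoint applyDeltaC(ImagePoint p, double c, double dc)
{
    double s = dc / c;            // relative change of the camera constant
    return { p.x * (1.0 + s),     // dx = (x / c) * dc
             p.y * (1.0 + s) };   // dy = (y / c) * dc
}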
The input to the algorithm is a large set of already undistorted aerial images, along with their control points and a large number of image points. We are therefore expecting the distortion/correction parameters to be close to zero, since the images are already undistorted. This is indeed the case for Ebner and Gruen.
For Brown, however, some of the parameters (and therefore the delta corrections) grow uncontrollably. I have tried scaling these parameters (the principal point corrections and the focal length correction deltaC) so that they are closer in magnitude to the other parameters (K1, K2, K3, P1, P2), however this did not help - the adjustment diverges all the same.
Is there any reason for this? Could it perhaps be because the images are already undistorted? Or something to do with this aerial job in particular?
I have not provided code as it is simply too complex, but I feel the problem lies in my understanding of the method rather than in any specific piece of code.
Thanks!

Perspective projection based on 4 points in 2D

I'm writing to ask about homography and perspective projection.
I'm trying to write a piece of code that will "warp" my image so that its corners align with 4 reference points in 3D space. However, the game engine I'm running it in already lets me get their screen positions, so I have the screen-space coordinates of both (xi, yi) and (ui, vi), normalized to values between 0 and 1.
I have to mention that I don't have a degree in mathematics, which seems to be a requirement in the posts I've seen on this topic so far, but I'm hoping there is actually a solution to this problem that one can comprehend. I never had a chance to take classes in Computer Vision.
The reason I came here is that in all the posts I've seen online, the simplest explanation I came across is that each point must be written as a homogeneous 3-vector and multiplied by a 3x3 homography matrix, which consists of 9 components h1, h2, h3 ... h9, and this transformation matrix will map each point to the correct perspective. And that's where I'm hitting a brick wall - how do I calculate the transformation matrix? It feels like it should be a relatively simple algebraic task, but apparently it's not.
At this point I've spent days reading on the topic, and the solutions I've come across are either based on MatLab (which has a ton of mathematical functions built in), or are elaborate discussions that don't really explain much; sometimes they introduce lots of different parameters and simplifications but rarely explain why or what their purpose is, or they reference books and studies that have since been removed from the web, and I found myself more confused than I was at the beginning. Most of the resources I managed to find online are also written in a different context - image stitching and 3D engine development.
I also want to mention that I need to run this code every frame on the CPU, and I'm fairly concerned about the cost of running too many matrix transformations and solving a ton of linear algebra equations.
I apologize for not asking about any specific code, but my general question is - can anyone point me in the right direction with this issue?
Limit the problem you deal with.
For example, if you always warp the entire rectangular image, you can take the coordinates of the image corners to be {(0,0), (1,0), (0,1), (1,1)}.
This simplifies the equations enough that you can solve them by hand and implement the answer yourself.
Note: a homography is only defined up to scale, so you can reduce the unknowns to 8 (e.g. you can solve the equations under the constraint h9 = 1).
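To make that concrete, here is a rough sketch of the resulting 8x8 linear system for 4 correspondences (xi, yi) -> (ui, vi) with h9 fixed to 1. It uses cv::Mat and cv::solve purely for brevity; any small dense linear solver will do, and the function name is just a placeholder.

#include <opencv2/core.hpp>

// src[i] = (xi, yi), dst[i] = (ui, vi); returns the 3x3 homography H with h9 = 1.
// Each correspondence gives two equations:
//   h1*x + h2*y + h3 - h7*x*u - h8*y*u = u
//   h4*x + h5*y + h6 - h7*x*v - h8*y*v = v
cv::Mat homographyFrom4Points(const cv::Point2d src[4], const cv::Point2d dst[4])
{
    cv::Mat A = cv::Mat::zeros(8, 8, CV_64F), b(8, 1, CV_64F);
    for (int i = 0; i < 4; ++i)
    {
        double x = src[i].x, y = src[i].y, u = dst[i].x, v = dst[i].y;
        double* r0 = A.ptr<double>(2 * i);
        double* r1 = A.ptr<double>(2 * i + 1);
        r0[0] = x; r0[1] = y; r0[2] = 1; r0[6] = -x * u; r0[7] = -y * u;
        r1[3] = x; r1[4] = y; r1[5] = 1; r1[6] = -x * v; r1[7] = -y * v;
        b.at<double>(2 * i)     = u;
        b.at<double>(2 * i + 1) = v;
    }
    cv::Mat h;                              // h = (h1 ... h8)
    cv::solve(A, b, h, cv::DECOMP_LU);
    cv::Mat H = (cv::Mat_<double>(3, 3) <<
        h.at<double>(0), h.at<double>(1), h.at<double>(2),
        h.at<double>(3), h.at<double>(4), h.at<double>(5),
        h.at<double>(6), h.at<double>(7), 1.0);
    return H;
}

One 8x8 solve per frame is negligible on the CPU, so the performance worry should not be an issue. (If OpenCV were available, cv::getPerspectiveTransform performs exactly this computation.)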
Best advice I can give: read a good book on the subject. For example, "Multiple View Geometry" by Hartley and Zisserman.

How to calculate efficiently and accurately the Fourier transform of a radial function in Fortran

As my question states, I want to calculate the Fourier transform F(q) of a radial function f(r) (defined on [0, infinity) and decaying like an exponential exp(-A*r + b) at large r) as accurately as possible in Fortran. The function values come from a data file (which I can easily interpolate, through cubic interpolation for example, and extrapolate, since the behaviour at large r is known).
I'm using the "physics" definition of the Fourier transform in 3D, which gives (because f is radial) :
I first tried to calculate this integral for some chosen values of q by using Gauss-Legendre quadrature, generating some 60 or 100 abscissas and weights via the NAG routine D01BCF (D01BCF link). In the case of Gauss-Legendre quadrature, the problem is choosing the interval [0, B] on which to integrate. While the function f loses 4 to 5 orders of magnitude from r=10 to r=20 (for example), the choice of B has a strong influence on the result of the calculation... When I compared my result to a "nearly exact" calculation (made with MatLab, but with a very long computation time), I saw that it was in fact only valid for small values of q (of the order of 5, when I have to deal with values as large as 150). A Gauss-Laguerre quadrature does not give any better result, probably because of the oscillatory part of the integrand.
I then tried to compute this Fourier transform for some given values of q with the routine D01ASF (D01ASF link). It is a "one-dimensional quadrature, adaptive, semi-infinite interval, weight function cos(ωx) or sin(ωx)", which is exactly what I need. The results are quite convincing for q up to 80 or 100 if I input absolute error tolerances of 10E-5. The problems are: I would need to go to larger q, and the Fourier transform F(q) oscillates with a magnitude of ~10E-6 at such q's. Getting the tolerance down to 10E-5 already takes some time and even makes the subroutine output an error message, so I don't know whether 10E-6 would be feasible.
I'm thus currently wondering whether calculating this Fourier transform with an FFT would be a better idea. The problems I face are that I don't know how to calculate radial wave functions with an FFT, and that I have never used an FFT before and don't know how to use it properly, since the definition of the transform is not even the same (exponent sign and argument).
Would you have any ideas? :)
EDIT 2: I tried an FFT (using the routine C06FAF from the NAG library). It works quite well up to some large values of q. The problem I face is that there is always some constant normalising factor to account for, and I don't get why. This normalising factor changes with the number N of points used in the mesh. It has the form of a power law: normalising factor F = N^(-0.5) x exp(9.9), approximately (see the figure, where the black line is the "exact" Fourier transform and the green, magenta, blue, red and yellow lines are the FFT calculated for different values of N).
EDIT 3: I found the factor to be A*N^(-0.5), where A is the length of the integration mesh.
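(A likely explanation for that factor, assuming C06FAF follows the usual NAG convention of placing a 1/sqrt(N) in front of the discrete sum: the integral over [0, A] is approximated by a sum with spacing Delta_r = A/N,

integral from 0 to A of r*f(r)*sin(q*r) dr  ≈  Delta_r * sum_j r_j*f(r_j)*sin(q*r_j),

while the FFT routine returns (1/sqrt(N)) * sum_j (...). Recovering the integral from the routine's output therefore requires multiplying by Delta_r*sqrt(N) = (A/N)*sqrt(N) = A*N^(-0.5), which is exactly the empirical factor above.)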

3D image gradient in OpenCV

I have 3D image data obtained from a 3D OCT scan. The data can be represented as I(x,y,z), which means there is an intensity value at each voxel.
I am writing an algorithm which involves finding the image's gradient in x,y and z directions in C++. I've already written a code in C++ using OpenCV for 2D and want to extend it to 3D with minimal changes in my existing code for 2D.
I am familiar with 2D gradients using the Sobel or Scharr operators. My search brought me to this post, answers to which recommend ITK and the Point Cloud Library. However, these libraries have a lot more functionality than I need. Since I am not very experienced with C++, these libraries would require a fair bit of reading, which time doesn't permit. Moreover, these libraries don't use the cv::Mat object; if I use anything other than cv::Mat, my whole code might have to be changed.
Can anyone help me with this please?
Update 1: Possible solution using kernel separability
Based on @Photon's answer, I'm updating the question.
From what @Photon says, I get an idea of how to construct a Sobel kernel in 3D. However, even if I construct a 3x3x3 cube, how do I implement it in OpenCV? The convolution operations in OpenCV using filter2D are only for 2D.
There can be one way. Since the Sobel kernel is separable, we can break the 3D convolution into convolutions in lower dimensions. Comments 20 and 21 of this link say the same thing. Now, we can separate the 3D kernel, but even then filter2D cannot be used, since the image is still 3D. Is there a way to break down the image as well? There is an interesting post which hints at something like this. Any further ideas on this?
Since the Sobel operator is separable, it's easy to envision how to add a 3rd dimension.
For example, when you look at the filter definition for Gx in the link you posted, you see that it multiplies the surrounding pixels by coefficients whose sign depends on the relative X position and whose magnitude depends on the offset in Y.
When you extend to 3D, the Gx gradient should be calculated the same way, but you need to work on a 3x3x3 cube; the coefficient sign keeps the same definition, and the magnitude now depends on the offset in Y, Z, or both.
The other gradients (Gy, Gz) are computed the same way, but around their own axes.
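To connect this to the OpenCV question above: since the 3D Sobel Gx kernel separates into a derivative [-1 0 1] along x and smoothings [1 2 1] along y and z, one way to stay with cv::Mat is to store the volume as a vector of 2D slices, run the in-plane part with sepFilter2D, and then apply the z-smoothing by blending neighbouring slices. A rough sketch follows; the function name and the slice-of-Mats layout are just one possible choice.

#include <opencv2/imgproc.hpp>
#include <vector>

// x-gradient of a volume stored as a vector of CV_32F slices, using the
// separable 3D Sobel idea: d/dx = [-1 0 1], smoothing in y and z = [1 2 1].
std::vector<cv::Mat> sobel3dX(const std::vector<cv::Mat>& volume)
{
    cv::Mat dx = (cv::Mat_<float>(1, 3) << -1, 0, 1);   // derivative along x
    cv::Mat sy = (cv::Mat_<float>(3, 1) <<  1, 2, 1);   // smoothing along y

    // In-plane part: separable x-derivative / y-smoothing on every slice.
    std::vector<cv::Mat> tmp(volume.size());
    for (size_t z = 0; z < volume.size(); ++z)
        cv::sepFilter2D(volume[z], tmp[z], CV_32F, dx, sy);

    // Out-of-plane part: [1 2 1] smoothing along z (borders replicated).
    std::vector<cv::Mat> grad(volume.size());
    for (size_t z = 0; z < volume.size(); ++z)
    {
        const cv::Mat& prev = tmp[z > 0 ? z - 1 : z];
        const cv::Mat& next = tmp[z + 1 < volume.size() ? z + 1 : z];
        grad[z] = prev + 2 * tmp[z] + next;
    }
    return grad;
}

Gy is the same with the derivative along y and smoothing along x and z; Gz puts the derivative along z (a difference of neighbouring slices) and the [1 2 1] smoothings in-plane.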

What does an eigenvalue of the structure tensor matrix denote?

It is known that a good feature point across two images can be determined reliably if
the two eigenvalues of the above matrix (the 2x2 structure tensor built from the summed products of the image derivatives Ix and Iy) are greater than 0. Can someone explain what it means to have both eigenvalues greater than 0, and why the feature point is not good if either of them is approximately equal to 0?
Note that this matrix always has nonnegative eigenvalues. Basically this rule says that one should favor rapid change in all directions; that is, corners are better features than edges or flat surfaces.
The biggest eigenvalue corresponds to the eigenvector pointing towards the direction of the most significant change in the image at the point u.
If both eigenvalues are small, the image around point u does not change much.
If one of the eigenvalues is large and the other is small, the point might lie on an edge in the image, but it will be difficult to figure out where exactly on that edge.
If both are large, the point is like a corner.
There is a nice presentation with examples in the panoramic stitching slide deck from a course taught by Rajesh Rao at the University of Washington.
Here E(u,v) denotes the Euclidean distance between the two areas in the vicinity of pixels shifted from each other by the vector (u,v). This distance tells how easy it is to distinguish the two pixels from one another.
Edit: The matrix of image derivatives is denoted H in this illustration, probably because of its relation to the Harris corner detection algorithm.
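If you want to experiment with this, OpenCV exposes the two eigenvalues directly. A small sketch (the threshold is arbitrary and depends on how the image is scaled):

#include <algorithm>
#include <opencv2/imgproc.hpp>

// Label each pixel as flat (0), edge (1) or corner (2) from the eigenvalues of
// the structure tensor computed over a 3x3 window with a 3x3 Sobel aperture.
cv::Mat classifyByEigenvalues(const cv::Mat& gray, float thresh)
{
    cv::Mat eig;                                    // 6 channels per pixel:
    cv::cornerEigenValsAndVecs(gray, eig, 3, 3);    // l1, l2, x1, y1, x2, y2

    cv::Mat label(gray.size(), CV_8U);
    for (int r = 0; r < eig.rows; ++r)
        for (int c = 0; c < eig.cols; ++c)
        {
            cv::Vec6f v = eig.at<cv::Vec6f>(r, c);
            float lmax = std::max(v[0], v[1]);
            float lmin = std::min(v[0], v[1]);
            if (lmin > thresh)      label.at<uchar>(r, c) = 2;  // both large: corner
            else if (lmax > thresh) label.at<uchar>(r, c) = 1;  // one large: edge
            else                    label.at<uchar>(r, c) = 0;  // both small: flat
        }
    return label;
}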
That is related to the concept of texturedness in the Shi-Tomasi paper "Good features to track".
The idea of texturedness is to provide a rating of texture that makes features (within a window) identifiable and unique. For instance, lines are not good features since they are not unique (see Figure 3.9a).
To solve the optical flow equation, it must be possible to invert J (the Hessian matrix). In practice, the following conditions must be satisfied:
Eigenvalues of J cannot differ by several orders of magnitude.
The eigenvalues of the Hessian must overcome the image noise level λnoise: this implies that both eigenvalues of J must be large.
For the first condition we know that the greatest eigenvalue cannot be arbitrarily large because intensity variations in a window are bounded by the maximum allowable pixel value.
Regarding the second condition, with λ1 and λ2 being the two eigenvalues of J, the following situations may arise (see Figure 3.10):
• Two small eigenvalues λ1 and λ2: a roughly constant intensity profile within the window (pink region). The problem of Figure 3.9-b.
• A large and a small eigenvalue: a unidirectional texture pattern (violet or gray region). The problem of Figure 3.9-a.
• λ1 and λ2 both large: can represent a corner, salt-and-pepper texture, or any other pattern that can be tracked reliably (green region).
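In OpenCV, this "both eigenvalues of J must be large" test is what cv::goodFeaturesToTrack applies when useHarrisDetector is false: it keeps points whose smaller structure-tensor eigenvalue exceeds qualityLevel times the best response in the image. A minimal usage sketch (parameter values are just examples):

#include <vector>
#include <opencv2/imgproc.hpp>

std::vector<cv::Point2f> shiTomasiFeatures(const cv::Mat& gray)
{
    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(gray, corners,
                            500,            // maxCorners
                            0.01,           // qualityLevel: fraction of the best min-eigenvalue
                            10.0,           // minDistance between accepted features (pixels)
                            cv::noArray(),  // no mask
                            3,              // blockSize of the structure tensor window
                            false);         // false = Shi-Tomasi (min eigenvalue), not Harris
    return corners;
}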
Some references:
1 - ORTIZ CAYON, R. J. (2013). Online video stabilization for UAV. Motion estimation and compensation for unmanned aerial vehicles.
2 - Shi, J., & Tomasi, C. (1994, June). Good features to track. In Computer Vision and Pattern Recognition, 1994. Proceedings CVPR'94., 1994 IEEE Computer Society Conference on (pp. 593-600). IEEE.
3 - Richard Szeliski. Image alignment and stitching: a tutorial. Foundations and Trends in Computer Graphics and Vision, 2(1):1–104, January 2006.