Objective Lower Bound command? Using PULP with COIN_CMD/CBC/CLP - linear-programming

I'm using PuLP to solve an LP with CBC with a fracGap (epgap) of 0.01 (i.e. stop within 1% of the best bound). Is there a command to return the best lower bound found after the solver finishes, i.e. whatever the solver is comparing the objective value to in order to know it's within 0.01 and can stop?

Unfortunately, PuLP does not return the best bound of the solution when using COIN_CMD().
I think it is printed to stdout when msg=1 is used, if you just want to see what it is.
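For example, a minimal sketch (the toy model below is made up; note that older PuLP versions call the COIN_CMD gap option fracGap, while newer versions use gapRel):

import pulp

# Toy minimisation model purely for illustration
prob = pulp.LpProblem("example", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0, upBound=10, cat="Integer")
y = pulp.LpVariable("y", lowBound=0, upBound=10, cat="Integer")
prob += 3 * x + 2 * y
prob += 2 * x + y >= 15

# msg=1 lets CBC print its log to the console; when the solver stops on the
# relative gap, the end of that log reports the objective value, the best
# possible bound and the gap, which is the only place COIN_CMD exposes it.
solver = pulp.COIN_CMD(msg=1, fracGap=0.01)
prob.solve(solver)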

Related

Calculating EOF with CDO gives Eigenvalue computation warning jacobi scheme did not converge

I need to get the principal coefficients (daily) of the first EOF of the daily geopotential height field (data = geopot.nc). I use only the cold season (N,D,J,F,M,A). My code is as follows:
cdo selmon,11,12,1,2,3,4 geopot.nc geopotw.nc
cdo --reduce_dim copy geopotw.nc geopotw3.nc # drop the degenerate dimension (level=1)
cdo sub geopotw3.nc -ydaymean geopotw3.nc deseason.nc
cdo mulcoslat deseason.nc deseasoncos.nc
cdo eof,3 deseasoncos.nc eval.nc eof.nc
I got a warning message: statistics-module (Warning): Eigenvalue computation with one-sided jacobi scheme did not converge properly. 7017371 of 7244721 pairs of columns did not achieve requested orthogonality of 1e-12.
What am I doing wrong? Am I missing something? Or is there a better way to achieve the result I am looking for, for example in R? Thank you for the help!
I'm not sure if this will help, but you can change the maximum number of Jacobi iterations using an environment variable:
export MAX_JACOBI_ITER=100
See if that helps with your convergence issue.
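If you are driving cdo from Python anyway, the same variable can be set for just that call; a rough sketch (the limit of 500 is arbitrary, and the operator and file names are taken from the question):

import os
import subprocess

# Copy the current environment and raise the Jacobi iteration limit
env = os.environ.copy()
env["MAX_JACOBI_ITER"] = "500"

# Re-run only the EOF step from the question with the higher limit
subprocess.run(["cdo", "eof,3", "deseasoncos.nc", "eval.nc", "eof.nc"],
               env=env, check=True)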

Find one solution of equation via sympy solve()

Is there a way to find just one solution of a complicated equation via sympy? It seems that finding all the solutions is too hard.
I have tried setting quick=True but it does not help.
If the equation is a polynomial you can request a particular RootOf solution; if it is a univariate expression you can get a numerical solution near some initial guess with nsolve.
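Something along these lines, with made-up example equations:

import sympy as sp

x = sp.symbols('x')

# Polynomial case: ask for a single indexed root instead of all of them
r = sp.CRootOf(x**5 - x - 1, 0)        # first root of the polynomial
print(r.evalf())                        # numeric value, about 1.1673

# General univariate case: one numerical solution near an initial guess
print(sp.nsolve(sp.cos(x) - x, x, 1))   # about 0.7391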

How to know if the optimization problem is infeasible or not? Pyomo Warning: Problem may be infeasible

Pyomo can find a solution, but it gives this warning:
WARNING: Loading a SolverResults object with a warning status into
model=(SecondCD);
message from solver=Ipopt 3.11.1: Converged to a locally infeasible point. Problem may be infeasible.
How do I know if the problem is infeasible or not?
This Pyomo model optimizes a farm's allocation of inputs across crops.
model.Crops = Set() # set Crops := cereal rapes maize ;
model.Inputs = Set() # set Inputs := land labor capital fertilizer;
model.b = Param(model.Inputs) # Parameters in CD production function
model.x = Var(model.Crops, model.Inputs, initialize = 100, within=NonNegativeReals)
def production_function(model, i):
    return prod(model.x[i,j]**model.b[j] for j in model.Inputs)
model.Q = Expression(model.Crops, rule=production_function)
...
instance = model.create_instance(data="SecondCD.dat")
opt = SolverFactory("ipopt")
opt.options["tol"] = 1E-64
results = opt.solve(instance, tee=True) # solves and updates instance
instance.display()
If I set b >= 1 (e.g. param b := land 1 labor 1 capital 1 fertilizer 1),
Pyomo can find an optimal solution;
but if I set b < 1 (e.g. param b := land 0.1 labor 0.1 capital 0.1 fertilizer 0.1) and set opt.options["tol"] = 1E-64, Pyomo can find a solution, but gives that warning.
I expect an optimal solution, but the actual result gives the warning mentioned above.
The message you get (message from solver=Ipopt 3.11.1: Converged to a locally infeasible point. Problem may be infeasible.) doesn't mean that the problem is necessarily infeasible. A non-linear solver will typically give you a local optimum, and the path taken to reach the solution is a very important part of finding a "better" local optimum. When you tried another starting point, you found a feasible solution, and that is the proof that your problem is feasible.
Finding the global optimum instead of a local optimum is a little bit harder. One way is to check whether your problem is convex: if it is, there is only one local optimum, and that local optimum is the global optimum. This can be shown mathematically; see https://math.stackexchange.com/a/1707213/470821 and http://www.princeton.edu/~amirali/Public/Teaching/ORF523/S16/ORF523_S16_Lec7_gh.pdf (from a quick Google search). If your problem is not convex, you can try to show that it has few local optima and that they can be found easily from good starting points. Failing that, you should consider more advanced techniques, each with its pros and cons. For example, you can generate a set of starting solutions that covers the whole feasible domain of your problem, or use meta-heuristics to find a better starting solution (a small multi-start sketch follows below).
I am also sure that Ipopt has some tools to help with finding a good starting solution that improves the resulting local optimum.
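As a rough illustration of the multi-start idea (a sketch only: it reuses the instance from the question, assumes an objective named instance.obj that is maximized, and draws starting values between 1 and 1000 arbitrarily):

import random
import pyomo.environ as pyo
from pyomo.opt import TerminationCondition

# Re-solve the same instance from several random starting points and keep
# the best local optimum that Ipopt reports as optimal.
solver = pyo.SolverFactory("ipopt")
best_obj = None
best_point = None

for trial in range(10):
    for key in instance.x:
        instance.x[key].value = random.uniform(1.0, 1000.0)   # random start
    results = solver.solve(instance)
    if results.solver.termination_condition == TerminationCondition.optimal:
        obj = pyo.value(instance.obj)          # 'obj' is an assumed name
        if best_obj is None or obj > best_obj:
            best_obj = obj
            best_point = {k: pyo.value(v) for k, v in instance.x.items()}

print(best_obj)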

Auto-rounding up problems python

I am working on a problem that essentially comes down to solving the equation (b/n)*((b-1)/(n-1)) = 0.5, where n must be at least 10**12. I was able to solve the problem using the method described at https://www.alpertron.com.ar/QUAD.HTM
However, I also tried solving the problem as a quadratic equation and checking that the answers are integers and that the required ratio is reached. The program works for lower values of n, but as soon as n approaches the required limit (10**12) it starts giving false solutions. For example, the program yields
b = 707106783028 and
n = 1000000002604
as a set of solutions, and yet it is not one: (b/n)*((b-1)/(n-1)) gives 0.499999999999..., but Python just treats it as 0.5. I tried using x.hex() to try to account for that, but it did not help. Is there any way to make Python store/display the true (or most accurate) value of a float?
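One way to see what is really going on is to avoid binary floating point for the check itself: fractions.Fraction keeps the ratio exact, and decimal.Decimal(f) displays the exact value a Python float actually stores. A small sketch using the pair quoted above:

from fractions import Fraction
from decimal import Decimal

b, n = 707106783028, 1000000002604

# Exact rational check: True only if the pair really satisfies the equation
print(Fraction(b, n) * Fraction(b - 1, n - 1) == Fraction(1, 2))

# Equivalent all-integer test, with no division at all
print(2 * b * (b - 1) == n * (n - 1))

# The exact binary value stored in the rounded float result
print(Decimal((b / n) * ((b - 1) / (n - 1))))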

Autocorrelation returns random results with mic input (using a high pass filter)

Sorry to ask a similar question to the one I asked before (FFT Problem (Returns random results)), but I've looked up pitch detection and autocorrelation and have found some code for pitch detection using autocorrelation.
I'm trying to do pitch detection of a user's singing. The problem is that it keeps returning random results. I've got some code from http://code.google.com/p/yaalp/ which I've converted to C++ and modified (below). My sample rate is 2048 and my data size is 1024. I'm detecting the pitch of both a sine wave and mic input. The frequency of the sine wave is 726.0, and it is detected as 722.950820 (which I'm OK with), but the pitch of the mic is detected as a random number from around 100 to around 1050.
I'm now using a high-pass filter to remove the DC offset, but it's not working. Am I doing it right, and if so, what else can I do to fix it? Any help would be greatly appreciated!
(Fixed)
Thanks,
Niall.
Edit: Changed the code to implement a high-pass filter with a cutoff of 30 Hz (from What Are High-Pass and Low-Pass Filters?; can anyone tell me how to convert the low-pass filter using convolution to a high-pass one?), but it's still returning random results. Plugging it into a VST host and using VST plugins to compare spectrums isn't an option for me, unfortunately.
Edit: Fixed. Thanks for everyone's help, but I never got it to work; I'm now using new code.
I am no sound expert, but if you are sampling at 44100 (I guess samples per second) and use 1024 data points, you are working with about 1/40th of a second's worth of data. It doesn't surprise me that the detected pitch varies a lot depending on which piece you pick. If you want to find the average or main pitch of a voice, I'd expect to need about one second's worth of data.
At a 44.1 kHz sampling frequency, 1024 samples is only a little over 23 ms worth of data. Isn't it possible that this is simply insufficient data to compute the pitch of a human singer?
I mean, a sound I can make that lasts for 23 ms is probably not something I have much pitch control over; I would expect this kind of measurement to be done over slightly longer periods of time.
The problem is in your findBestCandidates() function:
Inside this function you access the 'inputs' array from 0 up to 'length - 1'.
When you call this function inside detectPitchCalculation() function 'inputs' is 'results' and 'length' is 'nHiPeriodInSamples'.
But 'results' is only allocated and filled up to 'nHiPeriodInSamples - nLowPeriodInSamples - 1'.
So if 'nLowPeriodInSamples' is greater than 0, you access unallocated, random memory inside the findBestCandidates() function!
EDIT:
Another bug is that you fill only every 'nResolution'-th entry of the 'results' array in the detectPitchCalculation() function but access every entry in the findBestCandidates() function (via the 'inputs' argument). Since you call detectPitchCalculation() with nResolution=1, this does not explain your specific problem, so I will look a little bit more. But it would definitely be a problem if you called it with higher resolutions.
I don't see the problem in your code, but I'm not good at C. Still, I'd try the following to find the problem:
run it with data where the result is known, e.g. with sin(x) as input (see the sketch below)
run it with a small data size (e.g. 2)
Compare the results with known correct ones. You should be able to find those on the internet, or work them out by hand.
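A minimal sketch of that sine test in Python/NumPy (a sketch only; it does not reproduce the C++ code from the question, and the lag search is deliberately naive):

import numpy as np

def autocorr_pitch(signal, sample_rate):
    # Estimate pitch as the lag of the strongest autocorrelation peak
    signal = signal - signal.mean()                     # crude DC removal
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag = np.argmax(ac[1:]) + 1                         # skip lag 0
    return sample_rate / lag

# Known input: a 726 Hz sine, like the test tone in the question
sample_rate = 44100
t = np.arange(0, 1.0, 1.0 / sample_rate)
print(autocorr_pitch(np.sin(2 * np.pi * 726.0 * t), sample_rate))  # ~723 Hz

For real voices you would restrict the lag search to a plausible pitch range and use a longer window, as the other answers suggest.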
If "random" means same input, different output, you most probably have a bug in the initialisation of variables. Use a debugger and known input to check that all variables, especially all elements of arrays, are properly initialized.