Drawbacks of avoiding crossover after barrier solve in linear program

I am running a large LP (approximately 5M non-zeros) and I want to speed up the solving process. I tried a concurrent solve to test which algorithm solves my problem fastest, and I found that the barrier method is the clear winner (solver = Xpress MP, but I guess it would be similar for other solvers).
However, I want to speed it up further. I noticed that the barrier solve itself takes less than 1% of the total solution time. The remainder of the time is spent in the crossover (~40%) and in the primal solve that finds a corner solution from the new basis (~60%). Unfortunately, I couldn't find a setting to tell the solver to use a dual crossover instead (Cplex has one, but I don't have a Cplex license), so I couldn't compare whether that would be quicker.
I therefore tried turning off the crossover, which yields a huge speed increase, but the documentation mentions some disadvantages. So far, the drawbacks that I know of are:
Barrier solutions tend to be midface solutions.
Barrier without crossover does not produce a basic solution (although the solver settings documentation mentions that "The full primal and dual solution is available whether or not crossover is used.").
Without a basis, you will not be able to optimize the same or similar problems repeatedly using advanced start information.
Without a basis, you will not be able to obtain range information for performing sensitivity analysis.
My questions are simple. What other drawbacks are important enough to justify the very expensive crossover step (both Cplex and Xpress MP enable crossover by default)? Alternatively, is my problem exceptional, and is the crossover step quick for most other problems? Finally, what is wrong with having midface solutions (their existence means that the corner optimum is not unique anyway)?
Sources:
http://www.cs.cornell.edu/w8/iisi/ilog/cplex101/usrcplex/solveBarrier2.html (Barrier algorithm high-level theory)
http://tomopt.com/docs/xpress/tomlab_xpress008.php (Xpress MP solver settings)
https://d-nb.info/1053702175/34 (p87)

Main disadvantage: the solution will be "ugly", that is, many values like 0.000001 and 0.9999999 instead of clean zeros and ones. Secondly, you may get somewhat different duals. Finally, a basis is required for fast "hot starts". One possible way to speed up large models is to use a simplex method starting from an advanced basis taken from a representative base run.
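For completeness, turning the crossover off is a single control. A hedged sketch using the Xpress C API (control names as I understand them; verify against your release's reference manual, and the model filename is a placeholder):

    #include <xprs.h>

    int main(void) {
        XPRSprob prob;
        XPRSinit(NULL);                              /* initialize library/license */
        XPRScreateprob(&prob);
        XPRSreadprob(prob, "mymodel", "");           /* hypothetical model file */
        XPRSsetintcontrol(prob, XPRS_CROSSOVER, 0);  /* 0 = skip the crossover */
        XPRSlpoptimize(prob, "b");                   /* "b" = Newton barrier */
        XPRSdestroyprob(prob);
        XPRSfree();
        return 0;
    }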

Related

Efficient Kullback–Leibler calculation

I am looking to implement KL Divergence in C++ efficiently. (CPU Only for now).
Much like AES or FFT (fast Fourier transform), where heavy use of a common function has led to hardware-level optimizations (e.g. Intel's AES-NI instructions, and comparable support for FFT).
Is there anything similar for the natural log, or slightly higher-level efficiencies (ASM/C) that prevent bottlenecks when executing many natural log calls in succession (if such things exist)?
Some use-case examples:
. Many parallel and independent node executions, each one performing ~20 KL calculations from localized (not shared or pointer-referenced) memory.
. Scheduled KL: executing in stepped parallel where the hardware is set up expecting, for example, table lookups (assuming tables are used at this lower level; AES implementations indicate this is highly probable).
I have found a possible candidate, but I'm unsure: ALTFP_LOG
You can use SSE instructions to calculate the logarithm of many values in parallel. But whether you can actually make use of those instructions depends heavily on how the rest of your calculation consumes the logarithms, so it is not possible to give a more specific answer.
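As a scalar baseline for the KL sum itself, a minimal sketch; with -O3 and -ffast-math, GCC and Clang can vectorize the std::log calls through a vector math library such as glibc's libmvec, which gives you the SIMD parallelism mentioned above without hand-written intrinsics:

    #include <cmath>
    #include <cstddef>

    // D_KL(P || Q) = sum_i p[i] * log(p[i] / q[i]); terms with p[i] == 0
    // contribute nothing (the 0 * log(0) limit).
    double kl_divergence(const double* p, const double* q, std::size_t n) {
        double acc = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            if (p[i] > 0.0)
                acc += p[i] * std::log(p[i] / q[i]);
        return acc;
    }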

Is it possible to engage multiple cores (like gcc -j8) when solving with Pyomo?

The power flow library PyPSA uses Pyomo. I am trying to reduce the cost of each linear optimal power flow simulation.
I read through the Pyomo docs. Nothing sticks out at me yet. Perhaps it is not possible to split up the processing when solving linear optimisation problems.
Ubuntu 19.04, i5-4210U 1.70 GHz, 8 GB RAM
When you talk about processing, there are two things to consider: the processing to write the .lp file and the processing to solve the problem with an optimization solver.
First, writing the .lp file is, to my knowledge, not yet parallelized in Pyomo. The PyPSA developers created Linopy to parallelize that step, reducing RAM requirements and increasing speed.
Second, parallelizing the solver's work depends on the solver. PyPSA-Eur has an example of such an integration for Gurobi and CPLEX. The performant open-source solver HiGHS can also do something like that (see the HiGHS documentation).
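Pyomo mostly just forwards options to the chosen solver, so the switch itself lives at the solver level. A hedged sketch using HiGHS's native C++ API (in Pyomo you would pass the equivalent "parallel"/"threads" options through the solver interface; the model filename is a placeholder):

    #include "Highs.h"

    int main() {
        Highs highs;
        highs.setOptionValue("parallel", "on");  // allow the parallel solver
        highs.setOptionValue("threads", 8);      // cap the number of threads
        highs.readModel("model.lp");             // hypothetical model file
        highs.run();
        return 0;
    }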

Adam optimizer vs. momentum optimizer

I am trying to run an image segmentation code based on the U-Net architecture. During experimentation, I found that the Adam optimizer runs much more slowly than the momentum optimizer. Is this a common observation for these two optimizers, or is it data-dependent?
Optimization using Adam runs more slowly than optimization using momentum because Adam needs to accumulate, for every parameter, exponential moving averages of the first and second moments of the gradient, since it's an adaptive learning rate algorithm.
Momentum, instead, keeps only a single velocity buffer (one running average of past gradients) and applies the same learning rate to every parameter, so there is no second-moment estimate to maintain.
Your observation is therefore correct, but it's not data-dependent: the optimization algorithm itself needs to do additional computation, and therefore the execution time for every training step is longer.
The advantage is that with an adaptive learning rate algorithm you typically reach a minimum in fewer steps, even if each individual step is slower.
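For reference, the standard update rules make the extra bookkeeping explicit (usual notation: gradient $g_t$, learning rate $\eta$, momentum coefficient $\mu$, and Adam's hyperparameters $\beta_1, \beta_2, \epsilon$):

$$\text{Momentum:}\quad v_t = \mu\, v_{t-1} + g_t, \qquad \theta_{t+1} = \theta_t - \eta\, v_t$$

$$\text{Adam:}\quad m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \quad v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2, \quad \theta_{t+1} = \theta_t - \eta\, \frac{m_t/(1-\beta_1^t)}{\sqrt{v_t/(1-\beta_2^t)} + \epsilon}$$

Per step, Adam maintains two extra buffers ($m_t$ and $v_t$) and performs a square root and a division for every parameter, versus one buffer and a multiply-add for momentum.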
It might also depend on your framework; see for instance this issue for MXNet: https://github.com/dmlc/mxnet/issues/1516. In my experience Adam tends to converge in fewer epochs, though I realize that's not the same as the optimizer running quickly.

Speed of C++ operators/ simple math

I'm working on a physics engine and feel it would help having a better understanding of the speed and performance effects of performing many simple or complex math operations.
A large part of a physics engine is weeding out unnecessary computations, but at what point are the computations small enough that comparative checks aren't worthwhile?
eg: Testing if two line segments intersect. Should there be a check on whether they're near each other before going straight into the simple math, or would the extra operation slow down the process in the long run?
How much time do different mathematical calculations take?
eg: (3+8) vs (5x4) vs (log(8)) etc.
How much time do inequality checks take?
eg: >, <, =
You'll have to do profiling.
Basic operations, like additions or multiplications, should only take one asm instruction.
EDIT: As per the comments, although taking one asm instruction, multiplications can expand to microinstructions.
Logarithms take longer.
Inequality checks are also one asm instruction.
Unless you profile your code, there's no way to tell where your bottlenecks are.
Unless you call math operations millions of times (and probably even if you do), a good choice of algorithms or some other high-level optimization will result in a bigger speed gain than optimizing the small stuff.
You should write code that is easy to read and easy to modify, and start optimizing only if you're not satisfied with the performance: first high-level, and only afterwards low-level.
You might also want to try dynamic programming or caching.
As regards 2 and 3, I could refer you to the Intel® 64 and IA-32 Architectures Optimization Reference Manual. Appendix C presents the latencies and the throughput of various instructions.
However, unless you hand-code assembly code, your compiler will apply its own optimizations, so using this information directly would be rather difficult.
More importantly, you could use SIMD to vectorize your code and run computations in parallel. Also, memory performance can be a bottleneck if your memory layout is not ideal. The document I linked to has chapters on both issues.
However, as @Ph0en1x said, the first step would be choosing (or writing) an efficient algorithm and making it work for your problem. Only then should you start wondering about low-level optimizations.
As for 1, in a general case I'd say that if your algorithm works in such a way that it has some adjustable thresholds for when to execute certain tests, you could do some profiling and print out a performance graph of some kind, and determine the optimal values for those thresholds.
Well, this depends on your hardware. Very nice tables with instruction latencies are at http://www.agner.org/optimize/instruction_tables.pdf
1. It depends on the code a lot. Also, don't forget it depends not only on the computations themselves, but on how well the comparison results can be predicted.
2. Generally addition/subtraction is very fast, and multiplication of floats is a bit slower. Float division is rather slow (if you need to divide by a constant c, it's often better to precompute 1/c and multiply by it). Library functions are usually (I'd dare say always) slower than simple operators, unless the compiler decides to use SSE. For example, sqrt() and 1/sqrt() can each be computed using one SSE instruction (see the intrinsics sketch after this list).
3. From about one cycle to several dozen cycles. Current processors do branch prediction on conditions. If the prediction is right, it will be fast. However, if the prediction is wrong, the processor has to throw away all the preloaded instructions (IIRC Sandy Bridge preloads up to 30 instructions) and start processing new ones.
That means if you have code where a condition is met most of the time, it will be fast. Similarly, if you have code where the condition is not met most of the time, it will be fast. Simple alternating conditions (TFTFTF…) are usually fast too.
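To make point 2 concrete, a minimal sketch using the SSE intrinsics (SQRTPS is exact; RSQRTPS is an approximation good to roughly 12 bits):

    #include <xmmintrin.h>  // SSE intrinsics

    // Computes exact sqrt and approximate reciprocal sqrt of four floats at once.
    void sqrt4(const float in[4], float out_sqrt[4], float out_rsqrt[4]) {
        __m128 v = _mm_loadu_ps(in);
        _mm_storeu_ps(out_sqrt,  _mm_sqrt_ps(v));   // SQRTPS: exact (IEEE) sqrt
        _mm_storeu_ps(out_rsqrt, _mm_rsqrt_ps(v));  // RSQRTPS: fast ~12-bit 1/sqrt
    }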
This depends on the scenario you are trying to simulate. How many objects do you have, and how close are they? Are they clustered or distributed evenly? Do your objects move around a lot, or are they static? You will have to run tests. Possible data structures for fast proximity checking are kd-trees or locality-sensitive hashes (there may be others). I am not sure if these are appropriate for your application; you'd have to check whether the maintenance of the data structure and the lookup cost are acceptable for you.
You will have to run tests. Consider checking if you can use vectorization, or if you can even run some of the computations in a GPU using CUDA or something like that.
Same as above - you have to test.
You can generally consider inequality checks, increment, decrement, bit shifts, addition and subtraction to be really cheap. Multiplication and division are generally a little more expensive. Complex math operations like logarithms are much more expensive.
Benchmark on your platform to be sure. Be careful about benchmarking using artificial tests with tight loops -- that tends to give you misleading results. Try to benchmark in code that's as realistic as possible. Ideally, profile the actual code under realistic conditions.
As for the optimizations for things like line intersection, it depends on the data set. If you do a lot of checks and most of your lines are short, it may be worth a quick check to rule out cases where the X or Y ranges don't overlap.
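A concrete version of that quick rejection test, as a minimal sketch (the type and function names are illustrative):

    #include <algorithm>

    struct Seg { float x1, y1, x2, y2; };  // illustrative segment type

    // Cheap bounding-box rejection to run before the exact
    // segment-intersection (orientation) test.
    bool boxes_may_overlap(const Seg& a, const Seg& b) {
        if (std::max(a.x1, a.x2) < std::min(b.x1, b.x2)) return false;
        if (std::max(b.x1, b.x2) < std::min(a.x1, a.x2)) return false;
        if (std::max(a.y1, a.y2) < std::min(b.y1, b.y2)) return false;
        if (std::max(b.y1, b.y2) < std::min(a.y1, a.y2)) return false;
        return true;  // boxes overlap, so run the exact test
    }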
As far as I know, all inequality checks take the same time.
Regarding the rest of the calculations, I would advise you to run some tests like the following (a runnable version appears at the end of this answer):
take time stamp A
make 1,000,000 "+" calculations (or any other operation).
take time stamp B
calculate the diff between A and B.
then you can compare the calculations.
Keep in mind:
using a different math library may change it (some math libraries are more performance-oriented and some more precision-oriented)
the compiler optimizations may change it.
each processor does it differently.
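A runnable version of the recipe above, as a minimal sketch (note the volatile sink, without which the compiler may delete the whole loop):

    #include <chrono>
    #include <cstdio>

    int main() {
        volatile double sink = 0.0;  // volatile keeps the loop from being optimized away
        auto a = std::chrono::steady_clock::now();   // time stamp A
        for (int i = 0; i < 1000000; ++i)
            sink = sink + 1.5;                       // the operation under test
        auto b = std::chrono::steady_clock::now();   // time stamp B
        auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(b - a).count();
        std::printf("%.3f ns per operation\n", ns / 1e6);  // diff between A and B
        return 0;
    }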

MapReduce vs. other parallel processing solutions

So, the questions are:
1. Is MapReduce overhead too high for the following problem? Does anyone have an idea of how long each map/reduce cycle (in Disco, for example) takes for a very light job?
2. Is there a better alternative to MapReduce for this problem?
In MapReduce terms, my program consists of 60 map phases and 60 reduce phases, all of which together need to be completed in 1 second. One of the problems I need to solve this way is a minimum search with about 64,000 variables. The Hessian matrix for the search is a block matrix: 1000 blocks of size 64x64 along the diagonal, plus one row of blocks on the extreme right and bottom. The last section of the block matrix inversion algorithm article shows how this is done. Each of the Schur complements S_A and S_D can be computed in one MapReduce step, and the computation of the inverse takes one more step.
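For reference, the standard block inversion identities, with S_A and S_D denoting the Schur complements of A and D in the 2x2 block partition:

$$M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, \qquad S_A = D - C A^{-1} B, \qquad S_D = A - B D^{-1} C$$

$$M^{-1} = \begin{pmatrix} S_D^{-1} & -S_D^{-1} B D^{-1} \\ -D^{-1} C\, S_D^{-1} & \; D^{-1} + D^{-1} C\, S_D^{-1} B D^{-1} \end{pmatrix}$$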
From my research so far, mpi4py seems like a good bet. Each process can do a compute step and report back to the client after each step, and the client can report back with new state variables for the cycle to continue. This way the process state is not lost, and the computation can be continued with any updates.
http://mpi4py.scipy.org/docs/usrman/index.html
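mpi4py mirrors the underlying MPI calls closely, so the step-and-report loop described above looks roughly like this (a sketch in MPI's C API; the sizes and the per-step kernel are placeholders):

    #include <mpi.h>

    /* Placeholder for one local "map" step (hypothetical kernel). */
    static double local_compute(const double* state, int n, int rank) {
        double s = 0.0;
        for (int i = 0; i < n; ++i) s += state[i] * (rank + 1);
        return s;
    }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        enum { N = 64 };        /* illustrative block size */
        double state[N] = {0};
        for (int step = 0; step < 60; ++step) {
            double local = local_compute(state, N, rank);  /* one "map"    */
            double global = 0.0;
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE,
                          MPI_SUM, MPI_COMM_WORLD);        /* one "reduce" */
            state[0] = global;  /* processes keep their state between steps */
        }
        MPI_Finalize();
        return 0;
    }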
This wiki holds some suggestions, but does anyone have a pointer to the most mature solution:
http://wiki.python.org/moin/ParallelProcessing
Thanks !
MPI is a communication protocol that allows for the implementation of parallel processing by passing messages between cluster nodes. The parallel processing model that is implemented with MPI depends upon the programmer.
I haven't had any experience with MapReduce, but it seems to me that it is a specific parallel processing model designed to be simple to implement. This kind of abstraction should save you programming time, and may or may not provide a suitable solution to your problem. It all depends on the nature of what you are trying to do.
The trick with parallel processing is that the most suitable solution is often problem specific and without knowing more specifics about your problem it is hard to make recommendations.
If you can tell us more about the environment that you are running your job on and where your program fits into Flynn's taxonomy, I might be able to provide some more helpful suggestions.