I'm using std::max_element with a lambda, and the range contains a special value that is always considered lower than everything else.
When the two parameters are passed to the comparison lambda, which of them is the current maximum, i.e. the maximum of the values evaluated so far?
Knowing this would help me, because then I wouldn't have to check both parameters for my magic value.
The Compare functor must implement a strict weak ordering, acting like < (less than). It is not specified which parameter is the current maximum. It must be one of them, because exactly N-1 comparisons are made, but it can be either one, depending on the if statement used in the loop.
The logic then dictates that the larger of those parameters will be the new maximum.
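For example, a minimal sketch of a comparator that handles such a sentinel without caring about argument order (MAGIC and the sample values are placeholders, not anything from the original question):

#include <algorithm>
#include <vector>

const int MAGIC = -1;  // hypothetical sentinel that is "lower than anything"

// Treats MAGIC as less than every other value, regardless of which
// argument position it appears in.
bool lessWithMagic(int a, int b) {
    if (a == MAGIC) return b != MAGIC;  // MAGIC < x for every non-MAGIC x
    if (b == MAGIC) return false;       // a non-MAGIC value is never less than MAGIC
    return a < b;
}

// Usage:
// std::vector<int> row{3, MAGIC, 7, 5};
// auto it = std::max_element(row.begin(), row.end(), lessWithMagic);  // points at 7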
I don't quite understand how to call CAtlArray::SetCount.
This is the signature:
bool SetCount(size_t nNewSize, int nGrowBy = -1);
And here's an explanation of the parameters, from the docs:
nNewSize
The required size of the array.
nGrowBy
A value used to determine how large to make the buffer. A value of -1 causes an internally calculated value to be used.
So, let's say you have an array of some current length, and you'd like to add another 7 units to it. How would you call SetCount? Do you get the current count, add 7, and then pass that number as the first argument? Or do you just pass 7 as the second argument -- and if so, what's the first argument?
The lack of detail on the nGrowBy parameter is a hint that Microsoft does not expect ordinary programmers to use it. For normal operations you can ignore it and trust the default value of -1 to do the right thing.
So the correct way to add a number of items is what you guessed:
find the current size if you do not have it at hand
add the number of units you want to add
and pass that sum as the first argument of SetCount, leaving the second argument at its default of -1 (see the sketch below)
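A minimal sketch of those steps (the element type and the 7 extra slots are just illustrative):

#include <atlcoll.h>  // CAtlArray

CAtlArray<int> arr;
// ... arr already holds some elements ...

// Grow the array by 7: current count plus 7 becomes the new size.
arr.SetCount(arr.GetCount() + 7);  // nGrowBy left at its default of -1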
nGrowBy sets the storage growth strategy.
nGrowBy | Strategy
--------|---------
-1      | Use the last strategy set. If no strategy was ever set, the default strategy (0) is used.
0       | Use the default strategy, which grows storage by at least 50%.
>0      | Set a new strategy that grows storage by at least nGrowBy items.
Whenever nNewSize is larger than the size calculated by the growth strategy, nNewSize is used instead of the calculated value (this explains the "at least" in the table above).
In most cases you should just use -1 (i.e. not specify it at all). Providing an explicit value for nGrowBy makes sense only when you know something special about future resizing of the array, and even then it is usually better to just call SetCount( final_size ).
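For illustration, a sketch of both styles (the sizes and the 4096 chunk are arbitrary choices, not recommendations):

#include <atlcoll.h>

// Preferred: size the array once if the final size is known.
CAtlArray<double> samples;
samples.SetCount(100000);      // single allocation; nGrowBy stays at -1

// Only if you know resizing will happen in bursts: set an explicit chunk.
CAtlArray<double> stream;
stream.SetCount(0, 4096);      // future growth reallocates in steps of >= 4096 items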
Assume you have two arrays of equal dimension and want to construct a larger one which holds the maximum. Of course you use the built-in max, and you don't need an explicit index. But now assume the new number is not computable by a vectorizable construct, and you have to evaluate an if statement inside a do loop for each index. The compiler can parallelize that anyway, I bet (the result only depends on the current loop index). But is a construct like IF (a(:).EQ.b(:)) THEN c(:)=... possible without an explicit index?
I'm using C++ to model a (maximization) MIP with CPLEX, and I specify a relative gap using
cplex.setParam(IloCplex::EpGap, gap);
I'm puzzled by the difference between
cplex.getBestObjValue();
and
cplex.getObjValue();
in case of early termination because of the gap.
If I understand correctly, the value from getBestObjValue() will always correspond to an integer feasible solution, and a lower bound to the optimal value. On the other hand, the value from getObjValue() (may? will always?) correspond to a non-feasible solution and is an upper bound to the optimal value. Am I understanding this correctly?
I also have another question: the value returned by getBestObjValue() is, in the case of maximization problems, 'the maximum objective function value of all remaining unexplored nodes' (from the CPLEX docs). Is there a way to query the objective values of these unexplored nodes? I'm asking because I would like to get the minimal value that satisfies my relative gap, not the maximum.
According to the manual:
Cplex.GetBestObjValue Method:
It is computed for a minimization problem as the minimum objective function value of all remaining unexplored nodes. Similarly, it is computed for a maximization problem as the maximum objective function value of all remaining unexplored nodes.
For a regular MIP optimization, this value is also the best known bound on the optimal solution value of the MIP problem. In fact, when a problem has been solved to optimality, this value matches the optimal solution value.
It corresponds to an upper bound (when maximizing) on the objective value; there is a gap when you stop the solver before reaching optimality. In MIP there is a branch-and-bound tree behind the scenes, and as more nodes are explored, the upper bound decreases. There may or may not be a feasible solution matching the upper bound when you stop via EpGap.
Therefore your assumption below is wrong:
If I understand correctly, the value from getBestObjValue() will always correspond to an integer feasible solution.
getObjValue(), on the other hand, is the objective value of the current best solution (a found feasible solution). It is a lower bound; this is the value you want to use in your second question.
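A sketch of querying both values after solving, assuming a maximization model has already been built and extracted into cplex (getMIPRelativeGap() should report the achieved relative gap, if I recall the Concert API correctly):

#include <ilcplex/ilocplex.h>
#include <iostream>

// Report both bounds after solving a maximization MIP.
void reportBounds(IloCplex& cplex) {
    if (cplex.solve()) {
        IloNum incumbent = cplex.getObjValue();      // best feasible solution: lower bound
        IloNum bestBound = cplex.getBestObjValue();  // best bound from the tree: upper bound
        std::cout << "incumbent = " << incumbent
                  << ", best bound = " << bestBound
                  << ", relative gap = " << cplex.getMIPRelativeGap() << std::endl;
    }
}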
I have the following scenario:
variable in {12, 4, 999, ... }:
Where there are about 100 discrete values in the list. I am writing a parser to convert this to C++, and the only ways I can think of to do it are 100 case statements, or 100 if == comparisons.
Is one preferred to the other, or is there an all round better way to do this?
I should clarify, the values are constant integers. Thanks
If the maximum value of any one of your discrete values is small enough, a std::vector<bool> of flags, set true or false depending on whether that entry is in the list, should be pretty optimal - assuming the values occur with approximately equal probability.
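A minimal sketch of that idea, assuming the values are non-negative and the placeholder 1000 exceeds your largest value:

#include <vector>

// The three values are placeholders for the real list of ~100 constants.
const std::vector<bool> inList = [] {
    std::vector<bool> flags(1000, false);        // size = largest value + 1
    for (int v : {12, 4, 999}) flags[v] = true;  // ...plus the other values
    return flags;
}();

bool contains(int value) {
    return value >= 0 && value < static_cast<int>(inList.size()) && inList[value];
}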
One way is to arrange the values in order and use binary search to check whether a value is contained in your collection.
You can either put your values in a vector in sorted order using std::lower_bound for the insertion point and then use std::binary_search to test for membership, or you can put your values in an std::set and get that feature for free (using std::set::find() for membership testing).
There are minor performance considerations that may make either option preferable; profile and decide for yourself.
A second approach is to put your values in a hash table such as std::unordered_set (or some kind of static equivalent if your values are known statically).
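A sketch of both variants, with placeholder values standing in for the real list of ~100 constants:

#include <algorithm>
#include <unordered_set>
#include <vector>

// Sorted-vector approach: sort once, then binary-search for membership.
const std::vector<int> sortedValues = [] {
    std::vector<int> v{12, 4, 999 /* ... */};
    std::sort(v.begin(), v.end());
    return v;
}();

bool containsSorted(int x) {
    return std::binary_search(sortedValues.begin(), sortedValues.end(), x);
}

// Hash-set approach: average O(1) membership test.
const std::unordered_set<int> valueSet{12, 4, 999 /* ... */};

bool containsHashed(int x) {
    return valueSet.count(x) != 0;
}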
Assuming the values are constants, you can certainly use a switch statement. The compiler will handle this pretty efficiently, using either a binary-search-type approach or a table [or a combination of the two]. A long list of if statements will not be as efficient unless you sort the numbers and build a binary-search structure yourself - a switch statement is much easier to generate, as the compiler will work out the best approach to decide which numbers are in the list and which are not.
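For instance, a sketch of the switch a parser could emit (the case values are placeholders):

// The values are constants, so the compiler is free to pick a jump table,
// a binary search, or a mix of both for this switch.
bool contains(int value) {
    switch (value) {
        case 4:
        case 12:
        case 999:
        // ... the remaining ~97 constant cases ...
            return true;
        default:
            return false;
    }
}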
If the values are not constants, then a switch statement is obviously not a solution. A bitmap may work - again, depending on the actual range: if the values span a large range, it's not a good solution, since it will use a lot of memory [but it is probably one of the fastest methods, since it's just a matter of dividing/modulo by a power of two, which can be done with simple >> and & operators, followed by one memory read].
I ran across an issue whenever I was trying to sort a vector of objects that was resulting in an infinite loop. I am using a custom compare function that I passed in to the sort function.
I was able to fix the issue by returning false when two objects were equal instead of true but I don't fully understand the solution. I think it's because my compare function was violating this rule as outlined on cplusplus.com:
Comparison function object that, taking two values of the same type than those contained in the range, returns true if the first argument goes before the second argument in the specific strict weak ordering it defines, and false otherwise.
Can anyone provide a more detailed explanation?
The correct answer, as others have pointed out, is to learn what a "strict weak ordering" is. In particular, if comp(x,y) is true, then comp(y,x) has to be false. (Note that this implies that comp(x,x) is false.)
That is all you need to know to correct your problem. The sort algorithm makes no promises at all if your comparison function breaks the rules.
If you are curious what actually went wrong, your library's sort routine probably uses quicksort internally. Quicksort works by repeatedly finding a pair of "out of order" elements in the sequence and swapping them. If your comparison tells the algorithm that a,b is "out of order", and it also tells the algorithm that b,a is "out of order", then the algorithm can wind up swapping them back and forth over and over forever.
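A minimal sketch of the bug and its fix (Item and key are placeholders for the real type):

#include <algorithm>
#include <vector>

struct Item { int key; };

// Broken: returns true for equal keys, so comp(a, b) and comp(b, a) can both
// be true - the sort may then swap a and b back and forth forever.
bool badComp(const Item& a, const Item& b)  { return a.key <= b.key; }

// Correct: a strict "less than"; returns false when the keys are equal.
bool goodComp(const Item& a, const Item& b) { return a.key < b.key; }

// std::sort(items.begin(), items.end(), goodComp);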
If you're looking for a detailed explanation of what 'strict weak ordering' is, here's some good reading material: Order I Say!
If you're looking for help fixing your comparison functor, you'll need to actually post it.
If the items are the same, one does not go before the other. The documentation was quite clear in stating that you should return false in that case.
The actual rule is specified in the C++ standard, in 25.3[lib.alg.sorting]/2
Compare is used as a function object which returns true if the first argument is less than the second, and false otherwise.
The case when the arguments are equal falls under "otherwise".
A sorting algorithm could easily loop because you're saying that A < B AND B < A when they're equal. Thus the algorithm might infinitely try to swap elements A and B, trying to get them in the correct order.
A strict weak ordering means the comparison behaves like a < b. If you return true for equal elements, it behaves like a <= b instead, which violates the requirement. Different sort algorithms rely on this requirement to work correctly and efficiently.