What is the best implementation of min? [closed] - c++

Is there a difference between
int min(int a, int b) {
return (a < b) ? a : b;
}
and
int min(int a, int b) {
return (b < a) ? b : a;
}
Is there any specific reason to prefer one over the other?
This question is specifically intended for both the C and the C++ languages. I understand these are different languages; a similar question was asked for C++ here: Correct implementation of min.
I am interested in reasons that may pertain to one language and not the other.

No, there isn't. The two implementations are equivalent under all defined circumstances. (An individual compiler might exhibit performance differences, but not functional differences.)
It would be different if the function weren't concerned solely with ints. For floating-point numbers, as chux mentioned in a comment, a < b and b < a can both be false even when the bit patterns differ, as with negative and positive zero, or when at least one operand is a NaN. Technically this could also occur with an exotic (but standards-compliant) integer representation (through negative zero or padding bits), but AFAIK no otherwise standards-compliant compiler does that.
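A minimal sketch of that divergence (the names min1 and min2 are just for this illustration): with a NaN argument both comparisons are false, so each variant falls through to a different operand.

#include <cmath>
#include <cstdio>

float min1(float a, float b) { return (a < b) ? a : b; }  // NaN < 1 is false -> returns b
float min2(float a, float b) { return (b < a) ? b : a; }  // 1 < NaN is false -> returns a

int main() {
    float nan = std::nanf("");
    std::printf("min1(NaN, 1) = %f\n", min1(nan, 1.0f));  // prints 1.000000
    std::printf("min2(NaN, 1) = %f\n", min2(nan, 1.0f));  // prints nan
}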

Related

What's the expressive way of or'ing a range of bools? [closed]

I recently came across a situation where I needed to 'or' a range of bools together.
An example could be std::vector<bool> bools;
I have come up with some different ways of or'ing all of them together:
std::any_of(bools.begin(), bools.end(), [](bool x) { return x; } )
or
std::find(bools.begin(), bools.end(), true) != bools.end()
or
std::accumulate(bools.begin(), bools.end(), false, std::logical_or<bool>());
or
std::accumulate(bools.begin(), bools.end(), 0) != 0;
If the length were fixed, I could use a std::bitset, which has an any member function - but that is only an option in that case.
Of all these, none is very intuitive or really expresses the intent clearly (in a small way they all exhibit some kind of "WTF? code").
The irony of this situation is that for variadic templates we do have fold expressions which, in comparison, would be simple and expressive:
(bools || ...)
Note: I do have Boost, which has the Range library, but AFAIK it doesn't really alleviate this issue.
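For what it's worth, a self-contained sketch contrasting the std::any_of spelling (arguably the closest to "is any of these true?") with the C++17 fold expression; the helper name any_set is just for this example.

#include <algorithm>
#include <cstdio>
#include <vector>

// Variadic version: the fold expression from the question (C++17).
template <typename... Bools>
bool any_set(Bools... bs) {
    return (bs || ...);
}

int main() {
    std::vector<bool> bools{false, false, true};

    // std::any_of with an identity lambda.
    bool any = std::any_of(bools.begin(), bools.end(),
                           [](bool x) { return x; });
    std::printf("any_of: %d\n", any);                          // prints 1
    std::printf("fold:   %d\n", any_set(false, true, false));  // prints 1
}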

Is there any particular reason to declare float constants like this? (both C and C++) [closed]

Is there any objective reason, besides readability or tradition, to declare float constants with a zero fractional part as 1.0f as opposed to simply 1.f?
This may simply be opinion-based, but pretty much all the code I've seen online writes it that way. Is the aim to make adding a fractional part later easier (doubtful, since it involves one more erase), or is it simply for readability's sake?
If there is a difference between C and C++ in this regard, please cover both flavors.
As you point out, it is not necessary in C or C++ to have digits after the decimal point, nor indeed before it if some are present after. It makes no difference and is solely a matter of style whether you write 1.0f or 1.f.
Other languages differ on this syntactic choice so that the decimal point can be parsed as the member dereference operator.
In Ruby, for example, 1.f would be parsed as 1 . f, an attempt to call the method f on the number 1.
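As a quick sanity check of that equivalence (a minimal sketch; it compiles as C++ and, minus the header spelling, as C), a digit is required on at least one side of the decimal point, but not on both:

#include <cassert>

int main() {
    float a = 1.0f;  // digits on both sides of the point
    float b = 1.f;   // fractional digits omitted
    float c = .5f;   // integer part omitted
    assert(a == b);  // same token class, same value
    assert(c == 0.5f);
}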
No. It's purely readability and tradition. It does not differ between C and C++.

What is the advantage of C++ supporting native unsigned integers, while java does not? [closed]

According to https://en.wikipedia.org/wiki/Comparison_of_Java_and_C%2B%2B, C++ supports unsigned integers while Java does not. What are the advantages of that?
A major difference is that C and C++ are used for low-level programming where bits are shifted and masked; an unsigned integer behaves naturally there.
And then for C++ there is always C compatibility. When C was conceived, the larger value range may have been a reason too, back when ints were 16 bits.
A minor point (for C) may be that, for efficiency, one wanted to support unsigned chars on architectures where char is unsigned by default, and from there extended the concept of unsignedness to all integer types for orthogonality; although I find this argument weak.
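A small sketch of why unsigned types are natural for that kind of low-level work: shifts into the top-bit position and wraparound are well defined for unsigned operands, whereas signed overflow is undefined behavior.

#include <cstdint>
#include <cstdio>

int main() {
    std::uint32_t flags = 0;
    flags |= 1u << 3;                   // set bit 3
    unsigned bit3 = (flags >> 3) & 1u;  // test bit 3
    flags &= ~(1u << 3);                // clear bit 3

    // Unsigned arithmetic wraps modulo 2^32 here, by definition.
    std::uint32_t max = 0xFFFFFFFFu;
    std::printf("%u %u\n", bit3, max + 1u);  // prints "1 0"
}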

Bitwise operations vs. logical operations in C++ [closed]

In C++, are the following statements the same when comparing two integer values for equality? If not, why?
if(a == b)
...do
if(!(a ^ b))
...do
For integer values, yes. The xor operator returns non-zero if any bits differ between a and b, and ! inverts that, so for integer types the two conditions are equivalent.
For floating-point values, you should not use either of these to compare for equality: two computations that "should" give the same result may be represented differently as floats, so you should instead check whether they agree to within a small margin of error (an "epsilon").
For pointers, the question does not really arise: the built-in ^ operator does not accept pointer operands, so !(a ^ b) will not compile without first casting to an integer type such as uintptr_t.
However, there is no reason to write it this way. With optimizations enabled they compile to the same code, and without them the first is, if anything, likely to be faster. Why would you use the less clear !(a ^ b)?
The two comparisons are equivalent for integers: a ^ b is 0 if and only if a == b, so !(a ^ b) is true if and only if a and b have the same value.
Whether you can call them "the same" depends on what you mean by two different operations being "the same". They will probably not be compiled into identical code, and a == b is definitely easier to read.
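A small sanity check of that equivalence over a range of values (just a spot check, not a proof):

#include <cassert>
#include <cstdint>

int main() {
    for (std::int32_t a = -3; a <= 3; ++a)
        for (std::int32_t b = -3; b <= 3; ++b)
            assert((a == b) == !(a ^ b));  // both sides are bool
}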

What's faster? (a+a vs 2*a and more) [closed]

In C/C++, I was wondering: which of these is faster?
int a;
int b = a + a; // this
int b = 2 * a; // or this?
Also, does the data type matter? What about long? And what about the number of additions?
long a;
long b = a + a + a + a;
long b = 4 * a;
Trust your optimizing compiler. It knows how to optimize for a specific CPU/architecture in ways that you will only be able to guess. Without reference to a specific architecture, there is no meaning to statements like "is x faster than y?", because it all depends on a huge number of factors.
And as always with performance questions, measuring is going to answer the question more completely than us offering semi-informed opinions and guesses.
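In that spirit, a minimal measurement sketch (the noinline attribute is GCC/Clang-specific, and the function names are just for this example); on mainstream compilers at -O2 both bodies typically compile to the same instruction, so any measured gap is usually noise:

#include <chrono>
#include <cstdio>

__attribute__((noinline)) long twice_add(long a) { return a + a; }
__attribute__((noinline)) long twice_mul(long a) { return 2 * a; }

int main() {
    using clock = std::chrono::steady_clock;
    volatile long sink = 0;  // keeps the loops from being optimized away

    auto t0 = clock::now();
    for (long i = 0; i < 100000000; ++i) sink = twice_add(i);
    auto t1 = clock::now();
    for (long i = 0; i < 100000000; ++i) sink = twice_mul(i);
    auto t2 = clock::now();

    auto ms = [](auto d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };
    std::printf("a+a: %lld ms, 2*a: %lld ms\n",
                (long long)ms(t1 - t0), (long long)ms(t2 - t1));
    (void)sink;
}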