postincrement operation in same statement [closed] - c++

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
The question is to understand how the standard defines or allows these situations to be handled, and what the behaviour is when a variable undergoing post/pre-increment is used elsewhere in the same statement, specifically as an argument to a function call.
Take, for example, the following sample code:
char a[SZ];
Which of the following would be correct?
strlcpy(&a[i++],"Text",SZ-i-1);
strlcpy(&a[i++],"Text",SZ-i);
Does the "," comma separating the arguments sequence the computation of i++, or only the ";" semicolon at the end of the statement?

In this case, since the "comma separated expressions" are parameters of a function (strlcpy), the order of evaluation of the expressions is unspecified, even in C++17.
However, C++17 guarantees that expression evaluation won't be interleaved between arguments, so that each expression is fully formed before forming another one.
So, in your strlcpy(&a[i++],"Text",SZ-i), you cannot rely on the value of i in the third argument: the result may differ between implementations. Since it is unspecified rather than undefined behavior, though, you know SZ-i uses either the old value of i or the old value plus one.
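A portable way to sidestep the unspecified argument order is to sequence the increment in its own statement before the call. The sketch below assumes that intent; append_text is a made-up helper, and std::strncpy stands in for the BSD-specific strlcpy:

```cpp
#include <cassert>
#include <cstring>

// Hypothetical helper: copy "Text" starting at index i, then advance i.
// The increment happens in its own statement, so every argument of the
// copy call sees a well-defined value of i.
void append_text(char* a, std::size_t sz, std::size_t& i) {
    std::size_t dst = i;   // capture the index first
    ++i;                   // fully sequenced before the call below
    std::strncpy(&a[dst], "Text", sz - i);  // strncpy stands in for strlcpy
}
```

With this shape it no longer matters in which order a compiler would have evaluated the original arguments.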

Related

Why not always use assert? [closed]

Closed 4 years ago.
If static_assert is restricted to compile-time evaluations, why not always stick with assert if it can handle both compile-time and run-time evaluated expressions?
assert(...) is ALWAYS evaluated at runtime. Of course you can call it with a compile-time-evaluable expression, but you will only see the assertion fire at runtime.
Sometimes you want to make sure something only compiles when a certain expression is true; that is when you use
static_assert(expression), which produces a compiler error if the expression is not fulfilled.
This is in the direct spirit of "fail as early as possible" (and probably "fail hard" too ;-)
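A minimal sketch of the difference; the half function and its message are made up for illustration:

```cpp
#include <cassert>
#include <type_traits>

// static_assert fires at compile time: instantiating half with a
// non-arithmetic T is rejected before the program can even run.
template <typename T>
T half(T x) {
    static_assert(std::is_arithmetic<T>::value,
                  "half requires an arithmetic type");
    return x / 2;
}
```

Here assert(half(8) == 4) would only fail while the program is running, whereas something like half("oops") would never compile at all.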

is there any plausible scenario in which a programmer might wish to avoid shortcircuit evaluation of a Boolean expression? [closed]

Closed 6 years ago.
Short-circuit evaluation can shorten the compile time, so I learned that C and C++ use it. But are there any situations where short-circuit evaluation ruins the code?
Short circuiting does not shorten the compile time of the code. (by any meaningful amounts, at least)
It might shorten the runtime, but that is not its intended purpose.
The purpose of short circuiting is to do the minimal amount of work in order to check a certain condition.
For example:
When using && (as opposed to a single &), the right operand won't be evaluated if the left one is false. This is due to the nature of a logical and operation: if at least one of the operands is false, the whole expression is false.
Technically, it will shorten the runtime if the condition fails early, but the amount of saved runtime is dependent on the expressions inside each operand.
Anyway, it's incorrect to use && just because it is "faster" than &. Use whichever is appropriate: && for logical conditions, & for bitwise operations.
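A common case where the short circuit matters for correctness rather than speed; non_empty is a made-up helper:

```cpp
#include <cassert>

// With &&, the right operand runs only when the left is true, so the
// null check guards the dereference. A bitwise '&' would evaluate both
// operands and dereference a null pointer.
bool non_empty(const char* s) {
    return s != nullptr && s[0] != '\0';
}
```

Removing the short circuit here does not merely slow the code down, it crashes it.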

The meaning of side-effect in Clojure [closed]

Closed 7 years ago.
I was thinking about the meaning of side effect in Clojure. What exactly is a side effect in Clojure? Could anyone explain this with an example?
A side effect, in any programming language, is anything a function does that is not a direct mapping from the supplied arguments to the returned result.
(+ 3 4) ; ==> 7 (the result is purely a mapping from arguments to result; it will always be 7 no matter how many times you evaluate it)
(rand-int 4) ; ==> 0, 1, 2, or 3. You have no idea what it will produce next.
The first expression is purely functional: you could replace it with a lookup table from argument pairs to results and you wouldn't know the difference.
The second might give a different result for the same argument. The computation must be based on something else, such as internal state, and not the arguments alone. It has side effects.
Typical side effects used in programs are I/O and object mutation.

Is there any reason for std::multiplies and std::divides to be in third person? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Today we discovered that the functors for multiplying and dividing, are called std::multiplies and std::divides, as opposed to, for example, std::multiply and std::divide respectively.
This is surprising to say the least, considering that std::plus and std::minus are not formulated the same way.
Is there a particular reason for the difference?
It looks like this is nothing more than a blooper: plus and minus are not even verbs...
The names themselves are not C++14 originals: C++14 just adds the <void> specialization, but the typed versions and all the other <functional> header machinery have existed since C++98 (and even pre-ISO), when certain coding conventions (functions as verbs, objects as substantives, interfaces as adjectives...) were not yet well established.
What C++14 does is just add one more feature to the existing definitions, letting existing code continue to work as is. It simply cannot rename them.
That said, consider also that the + sign is not always used for addition across the entire standard library: for std::string it is concatenation, and std::plus, if applied to strings, concatenates them. Similarly, * is often used as a "closure" operation (think of boost::spirit).
A more proper "from scratch" library would most likely name them neutrally, such as cross, dash, star, and slash, letting the classes that provide the corresponding operations give them consistent names in their own context.
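A small sketch of the point about std::plus and strings, using the C++14 transparent <void> specialization the answer mentions; the wrapper functions are made up for illustration:

```cpp
#include <cassert>
#include <functional>
#include <string>

// std::plus<> (the transparent C++14 specialization) deduces its
// operand types, so the same functor does arithmetic addition on ints
// and concatenation on std::string.
inline int add_ints(int a, int b) {
    return std::plus<>{}(a, b);
}

inline std::string add_strings(const std::string& a, const std::string& b) {
    return std::plus<>{}(a, b);
}
```

So the functor is named after the + sign, not after any single operation the sign denotes.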

how compilers evaluate mathematical expressions? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I know that it has something to do with compilers converting infix expressions to postfix or prefix (I don't know which one exactly), and I think compilers do that because postfix and prefix expressions do not need parentheses to express operator precedence.
So can anyone tell me why and how exactly computers evaluate mathematical expressions?
Is the process the same for all programming language compilers?
Usually it's to postfix notation, using operand and operator stacks. Any first year computer science book (compiler design) will discuss the details. It has to do with parentheses encountered, and relative precedence (and associativity) of operators encountered in the input. Most computer languages have similar evaluation, precedence, and associativity rules, and will use a similar process. But not all!
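The infix-to-postfix conversion the answer alludes to is commonly done with Dijkstra's shunting-yard algorithm. A minimal sketch, assuming single-digit operands, left-associative + - * /, balanced parentheses, and no error handling:

```cpp
#include <cassert>
#include <cctype>
#include <stack>
#include <string>

// Higher number binds tighter: * and / before + and -.
int prec(char op) { return (op == '*' || op == '/') ? 2 : 1; }

// Convert an infix expression to postfix (reverse Polish notation)
// using an operator stack.
std::string to_postfix(const std::string& infix) {
    std::string out;
    std::stack<char> ops;
    for (char c : infix) {
        if (std::isdigit(static_cast<unsigned char>(c))) {
            out += c;                        // operands go straight to output
        } else if (c == '(') {
            ops.push(c);
        } else if (c == ')') {
            while (ops.top() != '(') { out += ops.top(); ops.pop(); }
            ops.pop();                       // discard the '('
        } else if (c == '+' || c == '-' || c == '*' || c == '/') {
            // Pop operators of higher or equal precedence (left-associative).
            while (!ops.empty() && ops.top() != '(' &&
                   prec(ops.top()) >= prec(c)) {
                out += ops.top();
                ops.pop();
            }
            ops.push(c);
        }
    }
    while (!ops.empty()) { out += ops.top(); ops.pop(); }
    return out;
}
```

Once in postfix form, the expression can be evaluated with a single operand stack, and the parentheses of the original input are no longer needed.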