When to use bitwise operations instead of arithmetic alternatives? [duplicate] - c++

This question already has answers here:
What is the fastest way to find if a number is even or odd?
Let's say I am checking for odd numbers:
(i % 2 == 1)
Will the compiler convert that operation to:
if (i & 1)
?
I am aware that bitwise operations are faster and that sometimes I will work with bits.
However, my question is: if the normal arithmetic is more readable (in most instances), when should I use bitwise operations if the compiler might convert them anyway?
Or should I always use bitwise operations whenever possible (even if less readable)?

You should always use the form that is more readable to human beings. If execution speed matters, you have to profile your program and look at the assembly your compiler generates.

The only way to tell is to look at the assembly language code generated by the compiler. There are all kinds of optimizations that compilers do. The compiler could easily change your modulus operator into a bit-test instruction.
Performance is a product of system design, not coding.
The speed difference between your two examples is so small that it would hardly be noticed in nearly any application.
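As a concrete illustration (a minimal sketch; the exact code generated depends on your compiler and flags), note one subtlety: in C++, i % 2 == 1 is false for negative odd numbers, because % can yield -1. Comparing against zero is the portable odd test, and an optimizing compiler typically reduces either form to a single bit test:

#include <iostream>

// Portable odd test: i % 2 is -1, 0, or 1 in C++, so compare with 0.
bool is_odd_mod(int i) { return i % 2 != 0; }

// Bitwise form; on two's-complement hardware it tests the same low bit.
bool is_odd_bit(unsigned i) { return (i & 1u) != 0; }

int main() {
    std::cout << is_odd_mod(-3) << ' '    // 1: -3 is odd
              << is_odd_mod(4) << ' '     // 0
              << is_odd_bit(7u) << '\n';  // 1
}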

Related

Why doesn't C++ have an exponent operator? [duplicate]

This question already has answers here:
Exponential Operator in C++
Is there a reason? I know there's pow(), but that's a function. Why doesn't C++ have ^ for exponents, when it seems like a very simple thing to add that would be very convenient?
C++'s operators are modeled after C's operators, which in turn are modeled after general machine-code instructions. The latter have addition, subtraction, shift, and, or, xor, etc. They can have multiplication and perhaps even division. All handle integers, and sometimes floating-point numbers too. But it would be extremely rare for exponentiation to have direct processor support, so it has never been thought of as (and was therefore never made into) a built-in operator. Having said all that, there's left-shift <<, which produces powers of 2.
In some languages ^ is the sign of a logical operation; in C++ it is the bitwise XOR operation, so it cannot double as an exponent operator.
That is why you should use pow() in C++.
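To make that concrete, here is a minimal sketch of what ^ actually does in C++ and of the left-shift trick for powers of two:

#include <cmath>
#include <iostream>

int main() {
    // ^ is bitwise XOR, not exponentiation:
    // 6 is 110, 3 is 011, so 6 ^ 3 is 101, i.e. 5 - not 216.
    int x = 6 ^ 3;

    // Left-shift by n multiplies by 2^n: 1 << 4 is 16.
    int p = 1 << 4;

    // General exponentiation goes through the library function.
    double y = std::pow(6.0, 3.0);  // 216.0

    std::cout << x << ' ' << p << ' ' << y << '\n';  // 5 16 216
}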

Why would you use Bitwise operators? [duplicate]

This question already has answers here:
Real world use cases of bitwise operators [closed]
So I'm currently in the process of learning C++ via the book 'SAMS teach yourself C++ in 1 hour a day'. So far it's been great - I've understood everything that's said and I have managed to use all of them in simple programs to practice them.
However, I just got to the section on the bitwise operators and I am completely stumped. I understand that you have &, ~, |, <<, >> etc., and I understand that each one performs a different action upon a number in its binary form; for example, ~ flips the bits over.
The problem I have is that I just can't get my head around how and why you'd want to use them. It's all very well being able to take an int, flip the binary digits over, and have another number, but how exactly does this help me in any way, shape, or form? I'd appreciate an explanation as to why you'd use each one, and if possible maybe an example?
Thanks everyone!
There are a lot of applications, but here are two examples. Suppose you have eight one-bit values stored in a one-byte container. Bitwise AND with a power of two will access an individual bit easily.
If you're scanning for high-intensity pixels in an RGB image, you can use bitwise AND with 128 against the three color values; testing the high bit that way is equivalent to R >= 128 and can be cheaper than a comparison.
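For example (a minimal sketch with made-up flag names), eight boolean options fit into one byte, and masks built from powers of two set, clear, and test individual bits:

#include <cstdint>
#include <iostream>

// Hypothetical flags, each occupying one bit of a byte.
const std::uint8_t FLAG_VISIBLE = 1 << 0;
const std::uint8_t FLAG_LOCKED  = 1 << 1;
const std::uint8_t FLAG_DIRTY   = 1 << 2;

int main() {
    std::uint8_t flags = 0;
    flags |= FLAG_VISIBLE | FLAG_DIRTY;        // set two bits
    flags &= ~FLAG_DIRTY;                      // clear one bit
    bool locked = (flags & FLAG_LOCKED) != 0;  // test one bit
    std::cout << int(flags) << ' ' << locked << '\n';  // 1 0
}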

How to make a calculation-heavy program faster

I'm implementing a compression algorithm. Thing is, it is taking a second for a 20 KiB file, and that's not acceptable. I think it's slow because of the calculations.
I need suggestions on how to make it faster. I have some tips already, like shifting bits instead of multiplying, but I really want to be sure of which changes actually help, because of the complexity of the program. I also accept suggestions concerning compiler options; I've heard there is a way to make the program do faster mathematical calculations.
Common operations are:
the pow(...) function from the math library
large number % 2
multiplication of large numbers
Edit: the program has no floating-point numbers
The question of how to make things faster should not be asked here to other people, but rather in your environment to a profiler. Use the profiler to determine where most of the time is spent; that will hint at which operations need to be improved, and if you don't know how to improve them, ask about those specific operations. It is almost impossible to say what you need to change without knowing your original code, and the question does not provide enough information. For the pow(...) function: what are the arguments to the function, and is the exponent fixed? How much precision do you need? Can you replace the function with something that will yield a similar result? For large number: how large is large, and what is a number in this context - integers, floating point?
Your question is very broad; without enough information to give you concrete advice, we have to make do with a general roadmap.
What platform, what compiler? What is "large number"? What have you done already, what do you know about optimization?
Test a release build with optimization (/Ox /LTCG in Visual C++, -O3 IIRC for gcc)
Measure where time is spent - disk access, or your actual compression routine?
Is there a better algorithm, and code flow? The fastest operation is the one not executed.
for 20K files, the memory working set should not be an issue (unless your compression requires large data structures), so code optimization is indeed the next step
a modern compiler implements a lot of optimizations already, e.g. replacing a division by a power-of-two constant with a bit shift (see the sketch after this list)
pow is very slow for native integers
if your code is well written, you may try to post it, maybe someone's up to the challenge.
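A small illustration of that compiler point (a sketch; the equivalence is exact for unsigned operands, which is why unsigned types are used here):

// For unsigned x, the compiler emits the same shift for both forms.
unsigned div8(unsigned x)  { return x / 8; }   // becomes x >> 3
unsigned div8s(unsigned x) { return x >> 3; }

// Likewise x % 2 becomes x & 1 for unsigned x.
unsigned mod2(unsigned x)  { return x % 2; }   // becomes x & 1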
Hints:
1) modulo 2 works only on the last bit.
2) power functions can be implemented in O(log n) time, where n is the power. (The math library should be fast enough, though.) Also, for fast power you may check this out
If nothing works, just check if there exists some fast algorithm.
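For reference, a minimal sketch of the O(log n) power mentioned in hint 2 (exponentiation by squaring; assumes a non-negative integer exponent and ignores overflow):

#include <cstdint>

// Computes base^exp in O(log exp) multiplications.
// Overflow of the 64-bit result is not handled.
std::uint64_t ipow(std::uint64_t base, unsigned exp) {
    std::uint64_t result = 1;
    while (exp > 0) {
        if (exp & 1)       // low bit of the exponent set?
            result *= base;
        base *= base;      // square for the next bit
        exp >>= 1;
    }
    return result;
}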

Bit-wise operation tricks [duplicate]

This question already has answers here:
What USEFUL bitwise operator code tricks should a developer know about?
Hi,
What are some neat tricks with using bit-wise operations? I know that unless you're programming in C you won't have many encounters with operating at the bit level. Nonetheless, there are some neat tricks that you can apply even in higher-level languages. Here are a few that I already know.
bit mask: Can hold a collection of boolean values
XOR Swap: Swap 2 values in place without a third variable
XOR Linked List: Create a doubly linked list in which each node holds only one address value
What are some others?
find whether a number is odd or not
(number & 1)
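And the XOR swap mentioned in the question looks like this (a sketch; it breaks if both references alias the same object, and on modern compilers std::swap is at least as fast):

void xor_swap(unsigned& a, unsigned& b) {
    if (&a == &b) return;  // aliasing guard: a ^= a would zero both
    a ^= b;
    b ^= a;  // b now holds the original a
    a ^= b;  // a now holds the original b
}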

C++ Optimization on negative integers

Let's say we have a negative integer, say
int a;
Is there a faster implementation of
-a?
Do I have to do some bitwise operation on this?
There's almost certainly nothing faster than the machine code NEG instruction that your compiler will most likely turn this into.
If there was, I'm sure the compiler would use it.
For a two's-complement number, you could NOT it and add 1, but that's almost certainly going to be slower. But I'm not entirely certain that the C/C++ standards mandate the use of two's complement (they may, I haven't checked).
I think this question belongs with those that attempt to rewrite strcpy() et al to get more speed. Those people naively assume that the C library strcpy() isn't already heavily optimized by using special machine code instructions (rather than a simplistic loop that would be most people's first attempt).
Have you run performance tests which seem to indicate that your negations are taking an overly long time?
<subtle-humor-or-what-my-wife-calls-unfunny>
A NEG on a 486 (state of the art the last time I had to worry about clock cycles) takes 3 clock cycles (the memory version; register-only takes 1) - I'm assuming the later chips will be similar. On a 3 GHz CPU, that means you can do 1 billion of these every second. Is that not fast enough?
</subtle-humor-or-what-my-wife-calls-unfunny>
Have you ever heard the phrase "premature optimization"? If you've optimized all of your code, and this is the only thing left, fine. If not, you're wasting your time.
To clarify Pax's statement: C++ compilers are not mandated to use two's complement, except in one case. When you convert a signed type to an unsigned type, if the number is negative, the result of the conversion must be the two's-complement representation of the integer.
In short, there is no faster way than -a; even if there were, it would not be portable.
Keep in mind as well that premature optimization is evil. Profile your code first and then work on the bottlenecks.
See The C++ Programming Language, 3rd Ed., section C.6.2.1.
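A quick demonstration of the NOT-and-add-one identity mentioned above (a sketch in unsigned arithmetic, where wraparound is well defined and the conversion rule just cited applies):

#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t a = 5;
    std::uint32_t neg_bitwise = ~a + 1u;  // two's-complement negation
    std::uint32_t neg_direct  = 0u - a;   // modular negation
    std::cout << (neg_bitwise == neg_direct) << '\n';  // prints 1
}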
Negating a number is a very simple operation in terms of CPU hardware. I'm not aware of a processor that takes any longer to do negation than to do any bitwise operation - and that includes some 30-year-old processors.
Just curious, what led you to ask this question? It certainly wasn't because you detected a bottleneck.
Perhaps you should think about optimizing your algorithms more-so than little things like this. If this is the last thing to optimize, your code is as fast as it's going to get.
All good answers.
If (-a) makes a difference, you've already done some really aggressive performance tuning.
Performance tuning a program is like getting water out of a wet sponge. As a program is first written, it is pretty wet. With a little effort, you can wring some time out of it. With more effort you can dry it out some more.
If you're really persistent you can get it down to where you have to put it in the hot sun to get the last few molecules of time out of it.
That's the level at which (-a) might make a difference.
Are you seeing a performance issue with negating numbers? I have a hard time thinking that most compilers would do a bitwise op against integers to negate them.