This question already has answers here:
What is the fastest way to find if a number is even or odd?
(12 answers)
Closed 4 years ago.
Let's say I am checking for odd numbers:
(i % 2 == 1)
Will the compiler convert that operation to:
if (i & 1)
?
I am aware that bitwise operations are faster and that sometimes I will be working directly with bits.
However, my question is: if the normal arithmetic is more readable (in most instances), when should I use bitwise operations if the compiler might convert them anyway?
Or should I always use bitwise operations whenever possible (even if they are less readable)?
You should always use the form that is more readable to human beings. If execution speed matters, profile your program and look at the assembly your compiler generates.
The only way to tell is to look at the assembly language code generated by the compiler. Compilers perform all kinds of optimizations, and the compiler could easily change your modulus operation into a bit test instruction.
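For example, one quick way to check is to compile a small file like the sketch below with optimizations (e.g. g++ -O2 -S) and compare the emitted assembly for the two forms. Also worth knowing: the two checks are not strictly interchangeable for signed types, because in C++ -3 % 2 is -1, so (i % 2 == 1) is false for negative odd values, while (i & 1) is still 1 on two's-complement machines.

#include <iostream>

// Two ways to test for oddness. With optimizations enabled, compilers
// typically lower both to cheap bit operations rather than a division.
// Semantic caveat for signed types: -3 % 2 == -1 in C++, so odd_mod(-3)
// is false while odd_and(-3) is true on two's-complement machines.
bool odd_mod(int i) { return i % 2 == 1; }
bool odd_and(int i) { return (i & 1) != 0; }

int main() {
    int tests[] = {-3, -2, 2, 3};
    for (int i : tests)
        std::cout << i << ": mod=" << odd_mod(i)
                  << " and=" << odd_and(i) << '\n';
}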
Performance is a product of system design, not coding.
The speed difference between your two examples is so small that it would be hardly noticed in nearly any application.
This question already has an answer here:
What variable type for extremely big integer numbers?
(1 answer)
Closed 6 years ago.
"unsigned long int" keeps becoming 0. It has to be a really big integer that I can operate multiplication and modulo on.
Example: 1234^123 % 1234
If you're using GCC you can try __uint128_t, or maybe use double instead, unless you need to do bitwise operations.
You have two options:
1. Implement what you need yourself, learning how to do large calculations with integer arrays and the math tricks used to speed them up, OR
2. Use a library.
I would suggest going with the second option if you are not prohibited from using an external library. One such bignum library is ttmath.
OR
If you have the option to switch to Java, it has built-in BigInteger, BigDecimal, and more. Look at the BigInteger javadoc.
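Worth noting for the 1234^123 % 1234 example specifically: if all you ultimately need is the result modulo a small number, you can avoid big integers entirely by reducing after every multiplication. A minimal sketch (the function name pow_mod is just for illustration):

#include <cstdint>
#include <iostream>

// Modular exponentiation by repeated squaring. The value is reduced
// modulo `mod` after every multiplication, so the intermediates never
// exceed mod * mod; a 64-bit type is enough as long as mod fits in 32 bits.
std::uint64_t pow_mod(std::uint64_t base, std::uint64_t exp, std::uint64_t mod) {
    std::uint64_t result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

int main() {
    std::cout << pow_mod(1234, 123, 1234) << '\n';  // prints 0 for this example
}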
This question already has answers here:
Real world use cases of bitwise operators [closed]
(41 answers)
Closed 9 years ago.
So I'm currently in the process of learning C++ via the book 'SAMS Teach Yourself C++ in 1 Hour a Day'. So far it's been great: I've understood everything so far and I have managed to use it all in simple practice programs.
However, I just got to the section on the bitwise operators and I am completely stumped. I understand that you have &, ~, |, <<, >>, etc., and I understand that each one performs a different action on a number in its binary form; for example, ~ flips all the bits.
The problem I have is that I just can't get my head around how and why you'd want to use them. It's all very well being able to take an int, flip its binary digits, and get another number, but how exactly does this help me in any way, shape, or form? I'd appreciate an explanation as to why you'd use each one, and if possible maybe an example?
Thanks everyone!
There are a lot of applications, but here are two examples. Suppose you have eight one-bit values stored in a one-byte container. Bitwise-and with a power of two will access individual bits easily.
If you're scanning for high-intensity pixels in an RGB image, you can bitwise-and each of the three 8-bit colour values with 128; that tests the top bit (i.e. value >= 128) and can be cheaper than a separate comparison such as R > 128.
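To make both examples concrete, here is a small sketch (the flag names and the pixel value are made up purely for illustration):

#include <cstdint>
#include <iostream>

// Eight one-bit values packed into a single byte. The flag names are
// hypothetical, chosen only for the example.
enum Flags : std::uint8_t {
    FLAG_VISIBLE  = 1 << 0,
    FLAG_SELECTED = 1 << 1,
    FLAG_DIRTY    = 1 << 2,
};

int main() {
    std::uint8_t flags = 0;
    flags |= FLAG_VISIBLE | FLAG_DIRTY;   // set two flags
    flags &= ~FLAG_DIRTY;                 // clear one flag

    if (flags & FLAG_VISIBLE)             // test a single flag
        std::cout << "visible\n";

    // High-intensity test on an 8-bit colour channel:
    // bit 7 set means the value is >= 128.
    std::uint8_t red = 200;
    if (red & 0x80)
        std::cout << "bright red channel\n";
}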
I was wondering what kind of method was used to multiply numbers in C++. Is it the traditional schoolbook long multiplication? Fürer's algorithm? Toom-Cook?
I was wondering because I am going to need to multiply extremely large numbers and need a high degree of efficiency. The traditional O(n^2) schoolbook long multiplication might be too inefficient, and I would need to resort to another method of multiplication.
So what kind of multiplication does C++ use?
You seem to be missing several crucial things here:
There's a difference between native arithmetic and bignum arithmetic.
You seem to be interested in bignum arithmetic.
C++ doesn't support bignum arithmetic. The primitive data types map to arithmetic that is native to the processor.
To get bignum (arbitrary-precision) arithmetic, you need to implement it yourself or use a library such as GMP. Unlike Java and C# (among others), C++ does not come with a library for arbitrary-precision arithmetic.
All of those fancy algorithms:
Karatsuba: O(n^1.585)
Toom-Cook: < O(n^1.465)
FFT-based: ~ O(n log(n))
are applicable only to bignum arithmetic and are implemented in bignum libraries. What the processor uses for its native arithmetic operations is somewhat irrelevant, as it's usually constant time.
In any case, I don't recommend that you try to implement a bignum library. I've done it before and it's quite demanding (especially the math). So you're better off using a library.
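To make the native/bignum distinction concrete, here is a minimal sketch of schoolbook O(n^2) multiplication on base-10 digit arrays, the kind of routine a bignum library replaces with Karatsuba, Toom-Cook, or FFT-based methods once the operands get large (purely illustrative; real libraries use much larger digit bases and heavily optimized code):

#include <cstddef>
#include <iostream>
#include <vector>

// Schoolbook O(n^2) multiplication on base-10 digit vectors,
// least-significant digit first.
std::vector<int> multiply(const std::vector<int>& a, const std::vector<int>& b) {
    std::vector<int> result(a.size() + b.size(), 0);
    for (std::size_t i = 0; i < a.size(); ++i) {
        int carry = 0;
        for (std::size_t j = 0; j < b.size(); ++j) {
            int cur = result[i + j] + a[i] * b[j] + carry;
            result[i + j] = cur % 10;
            carry = cur / 10;
        }
        result[i + b.size()] += carry;
    }
    while (result.size() > 1 && result.back() == 0)
        result.pop_back();
    return result;
}

int main() {
    std::vector<int> a{9, 9, 9};          // 999, digits reversed
    std::vector<int> b{9, 9};             // 99
    std::vector<int> r = multiply(a, b);  // 999 * 99 = 98901
    for (auto it = r.rbegin(); it != r.rend(); ++it)
        std::cout << *it;
    std::cout << '\n';
}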
What do you mean by "extremely large numbers"?
C++, like most other programming languages, uses the multiplication hardware that is built into the processor. Exactly how that works is not specified by the C++ language, but for normal integers and floating-point numbers you will not be able to write anything faster in software.
The largest numbers that can be represented by the various data types can vary between different implementations, but some typical values are 2147483647 for int, 9223372036854775807 for long, and 1.79769e+308 for double.
In C++ integer multiplication is handled by the chip. There is no equivalent of Perl's BigNum in the standard language, although I'm certain such libraries do exist.
That all depends on the library and compiler used.
It is performed in hardware, which is also the reason huge numbers won't work. The largest unsigned integer C++ can represent on 64-bit hardware is 18446744073709551615 (2^64 - 1). If you need larger numbers, you need an arbitrary-precision library.
If you work with large numbers, the standard integer multiplication in C++ will no longer work, and you should use a library providing arbitrary-precision multiplication, like GMP (http://gmplib.org/).
Also, you should not worry about performance before writing your application (that would be premature optimization). These multiplications will be fast, and most likely other components of your software will cause much more slowdown.
Plain C++ uses the CPU's multiply instructions (or schoolbook multiplication using bit shifts and additions if your CPU does not have such an instruction).
If you need fast multiplication for large numbers, I would suggest looking at GMP (http://gmplib.org) and using the C++ interface from gmpxx.h.
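A minimal sketch of what that looks like, assuming GMP and its C++ interface are installed (link with -lgmpxx -lgmp):

#include <gmpxx.h>   // GMP's C++ interface
#include <iostream>

int main() {
    // Arbitrary-precision integers. GMP internally chooses among schoolbook,
    // Karatsuba, Toom-Cook, and FFT-based multiplication depending on the
    // operand size, so you get the better asymptotics without implementing them.
    mpz_class a = 1;
    for (int i = 0; i < 1000; ++i)
        a *= 123456789;                  // a grows far beyond 64 bits

    mpz_class b = a * a;                 // multiplication of huge integers
    std::cout << b.get_str().size() << " decimal digits\n";
}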
Just how big are these numbers going to be? Even languages like Python can do 1e100 * 1e100 with arbitrary-precision integers over 3 million times a second on a standard processor. That's a multiplication to 100 significant figures taking less than a millionth of a second. To put that into context, there are only about 10^80 atoms in the observable universe.
Write what you want to achieve first, and optimise later if necessary.
This question already has answers here:
What USEFUL bitwise operator code tricks should a developer know about?
Closed 11 years ago.
Hi,
What are some neat tricks using bitwise operations? I know that unless you're programming in C you won't encounter bit-level operations very often. Nonetheless, there are some neat tricks you can apply even in higher-level languages. Here are a few that I already know:
Bit mask: Hold a collection of boolean values in a single integer
XOR swap: Swap two values in place without a third variable
XOR linked list: Create a doubly linked list with each node holding only one address value
What are some others?
Find whether a number is odd or not:
(number & 1)
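A few more classic idioms, sketched here in C++ (these are well-known tricks, not tied to any particular library):

#include <cstdint>
#include <iostream>

int main() {
    // XOR swap: exchange two values without a temporary
    // (only safe when the two variables are distinct objects).
    int a = 3, b = 7;
    a ^= b; b ^= a; a ^= b;
    std::cout << a << ' ' << b << '\n';          // 7 3

    // Test whether a number is a power of two.
    std::uint32_t n = 64;
    bool pow2 = n != 0 && (n & (n - 1)) == 0;
    std::cout << std::boolalpha << pow2 << '\n'; // true

    // Clear the lowest set bit (useful when iterating over set bits).
    std::uint32_t m = 0b10110;
    m &= m - 1;                                  // 0b10100
    std::cout << m << '\n';                      // 20

    // Round up to the next multiple of 8 (works for any power-of-two multiple).
    std::uint32_t x = 37;
    std::uint32_t rounded = (x + 7) & ~7u;
    std::cout << rounded << '\n';                // 40
}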