I am getting into bit manipulation in C/C++.
Is there a good calculator or program (executable or online), that makes it relatively convenient to study & test bit procedures?
I can do the same work in Visual Studio or Eclipse, but a relatively small program is easier and more convenient.
I use bincalc by Ding Zhaojie. It conveniently lets you see what's going on in hex and binary (and decimal if you care), and has a nice gadget for inputting binary or flipping bits.
http://sites.google.com/site/bincalc/
Free, but no source.
The calculator, calc, in Windows 7 has a 'Programmer' view (Alt + 3). It has most of the operations for bit manipulations.
bitwisecmd.com supports conversion between decimal, binary, and hexadecimal numbers, as well as basic bitwise operations such as AND, OR, XOR, NOT, left shift, and right shift. It neatly arranges the bits of binary numbers so you can see how this stuff works.
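If you'd rather stay inside a tiny C++ program than reach for a web tool, a few lines with std::bitset give a similar side-by-side view of operands and results. A minimal sketch (the operand values are arbitrary):

    #include <bitset>
    #include <cstdint>
    #include <iostream>

    int main() {
        std::uint8_t a = 0b1100'1010;   // example operands
        std::uint8_t b = 0b0101'0110;

        std::cout << "a      = " << std::bitset<8>(a) << '\n';
        std::cout << "b      = " << std::bitset<8>(b) << '\n';
        std::cout << "a & b  = " << std::bitset<8>(a & b) << '\n';
        std::cout << "a | b  = " << std::bitset<8>(a | b) << '\n';
        std::cout << "a ^ b  = " << std::bitset<8>(a ^ b) << '\n';
        std::cout << "a << 2 = " << std::bitset<8>(a << 2) << '\n';  // bits shifted past the 8-bit view fall off
    }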
I use a good online tool that supports all the bitwise operations, so you don't need to install anything: http://www.convertforfree.com/bitwise-calculator/
This tool saved me a lot of time and effort.
Use this tool. It has a free online truth table generator as well as a calculator for bitwise operations.
Related
I have an STM32F1 processor that is very slow with float operations, and I have some libraries from an F7 processor that use a lot of floats. I would like to use these libraries on my poor F1, so I was thinking of a way to make as few tweaks as possible to the code and emulate floats with the same interface but an underlying integer type. It's important to note that I only need 7 digits of accuracy (numbers between 0.001 and 4094.999), which is why I guess something like typedef number<cpp_dec_float<7> > fixed7; would work in my case faster than floats.
Is Boost's multiprecision good enough for that? Do you have any other suggestions? Should I make my own arithmetic type?
I found a solution after all. This numeric system is excellent for my purpose and a little simpler to use, since multiprecision is aimed at higher accuracy than normal C++ while I just want less precision. Here are John McFarlane's numeric types.
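For reference, the underlying idea of such a type is plain Q-format fixed point: with magnitudes up to about 4095 and 0.001 resolution, a signed 32-bit Q12.19 layout has plenty of headroom. A minimal hand-rolled sketch (the struct and its names are purely illustrative, not taken from any library):

    #include <cstdint>
    #include <iostream>

    // Q12.19 fixed point: sign bit + 12 integer bits (0..4095) + 19 fractional bits (~2e-6 resolution).
    struct Fixed {
        static constexpr int FRAC_BITS = 19;
        std::int32_t raw;                      // stored as value * 2^19

        static Fixed fromDouble(double d) { return Fixed{static_cast<std::int32_t>(d * (1 << FRAC_BITS))}; }
        double toDouble() const { return static_cast<double>(raw) / (1 << FRAC_BITS); }

        Fixed operator+(Fixed o) const { return Fixed{raw + o.raw}; }
        Fixed operator-(Fixed o) const { return Fixed{raw - o.raw}; }
        // widen to 64 bits for the intermediate product/quotient, then shift back down
        Fixed operator*(Fixed o) const {
            return Fixed{static_cast<std::int32_t>((static_cast<std::int64_t>(raw) * o.raw) >> FRAC_BITS)};
        }
        Fixed operator/(Fixed o) const {
            return Fixed{static_cast<std::int32_t>((static_cast<std::int64_t>(raw) << FRAC_BITS) / o.raw)};
        }
    };

    int main() {
        Fixed a = Fixed::fromDouble(1234.567);
        Fixed b = Fixed::fromDouble(0.001);
        std::cout << (a * b).toDouble() << '\n';   // ~1.234
        std::cout << (a + b).toDouble() << '\n';   // ~1234.568
    }

Libraries like the one linked above wrap this same idea in a safer, more complete interface.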
I am new to using bitwise operators and manipulating bits, and I was wondering if someone knows some techniques or resources that will help me learn to do this properly.
Maybe a book or something that explains this well and how to apply it.
I highly recommend these resources:
Representation of Numbers
Binary Arithmetic
Low Level Bit Hacks You Absolutely Must Know
and absolute classic
Bit Twiddling Hacks
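As a small taste of what those resources cover, here are a few of the classic tricks in C++ (a self-contained sketch you can step through in a debugger):

    #include <cstdint>
    #include <iostream>

    int main() {
        std::uint32_t x = 0b1011'0100;

        // Test whether a single bit is set: shift a mask into place and AND.
        bool bit2set = (x & (1u << 2)) != 0;

        // Clear the lowest set bit: x & (x - 1). Also the basis of the
        // "is power of two" test, since a power of two has exactly one bit set.
        std::uint32_t withoutLowest = x & (x - 1);
        bool isPowerOfTwo = x != 0 && (x & (x - 1)) == 0;

        // Isolate the lowest set bit: x & -x (written with ~x + 1 here).
        std::uint32_t lowest = x & (~x + 1);

        std::cout << bit2set << ' ' << withoutLowest << ' '
                  << isPowerOfTwo << ' ' << lowest << '\n';   // prints: 1 176 0 4
    }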
For example, how can I use the result of 1000^1000 for arithmetic? I don't think there's a library that can accommodate that; the ones I see support at most around 100 digits.
Use an arbitrary-precision arithmetic library like GMP or Boost.Multiprecision.
What you are looking for is a library like GMP or Boost-Multiprecision or TTmath.
Or, you might challenge yourself to write a low level representation that handles longer than standard bit representations, and do arithmetic with it.
Stick with the first option though, if it does the job you have in mind.
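For instance, with Boost.Multiprecision the computation from the question takes a couple of lines; cpp_int grows as needed, so the 3001-digit result of 1000^1000 is no problem. A minimal sketch:

    #include <boost/multiprecision/cpp_int.hpp>
    #include <iostream>

    int main() {
        using boost::multiprecision::cpp_int;

        cpp_int result = pow(cpp_int(1000), 1000);        // exact, arbitrary-precision
        std::cout << result.str().size() << " digits\n";  // prints "3001 digits"
        std::cout << result << '\n';
    }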
I was wondering what kind of method was used to multiply numbers in C++. Is it the traditional schoolbook long multiplication? Fürer's algorithm? Toom-Cook?
I was wondering because I am going to need to be multiplying extremely large numbers and need a high degree of efficiency. Therefore the traditional schoolbook long multiplication O(n^2) might be too inefficient, and I would need to resort to another method of multiplication.
So what kind of multiplication does C++ use?
You seem to be missing several crucial things here:
There's a difference between native arithmetic and bignum arithmetic.
You seem to be interested in bignum arithmetic.
C++ doesn't support bignum arithmetic. The primitive datatypes use arithmetic that is native to the processor.
To get bignum (arbitrary-precision) arithmetic, you need to implement it yourself or use a library (such as GMP). Unlike Java and C# (among others), C++ does not come with a standard library for arbitrary-precision arithmetic.
All of those fancy algorithms:
Karatsuba: O(n^1.585)
Toom-Cook: < O(n^1.465)
FFT-based: ~ O(n log(n))
are applicable only to bignum arithmetic and are implemented in bignum libraries. What the processor uses for its native arithmetic operations is somewhat irrelevant, as it's usually constant time.
In any case, I don't recommend that you try to implement a bignum library. I've done it before and it's quite demanding (especially the math). So you're better off using a library.
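As an illustration of how painless that is, GMP's C++ interface (gmpxx) makes a bignum multiply look like ordinary arithmetic and picks the algorithm internally depending on operand size. A minimal sketch (link with -lgmpxx -lgmp):

    #include <gmpxx.h>
    #include <iostream>

    int main() {
        // mpz_class is GMP's arbitrary-precision integer; internally GMP switches
        // between schoolbook, Karatsuba, Toom-Cook and FFT as the operands grow.
        mpz_class a, b;
        mpz_ui_pow_ui(a.get_mpz_t(), 2, 100000);   // a = 2^100000
        mpz_ui_pow_ui(b.get_mpz_t(), 3, 100000);   // b = 3^100000
        mpz_class c = a * b;                       // bignum multiply
        std::cout << c.get_str().size() << " decimal digits\n";
    }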
What do you mean by "extremely large numbers"?
C++, like most other programming languages, uses the multiplication hardware that is built into the processor. Exactly how that works is not specified by the C++ language. But for normal integers and floating-point numbers, you will not be able to write something faster in software.
The largest numbers that can be represented by the various data types can vary between different implementations, but some typical values are 2147483647 for int, 9223372036854775807 for long, and 1.79769e+308 for double.
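The exact limits for a given implementation are easy to query with std::numeric_limits:

    #include <iostream>
    #include <limits>

    int main() {
        std::cout << std::numeric_limits<int>::max() << '\n';      // e.g. 2147483647
        std::cout << std::numeric_limits<long>::max() << '\n';     // e.g. 9223372036854775807
        std::cout << std::numeric_limits<double>::max() << '\n';   // e.g. 1.79769e+308
    }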
In C++ integer multiplication is handled by the chip. There is no equivalent of Perl's BigNum in the standard language, although I'm certain such libraries do exist.
That all depends on the library and compiler used.
It is performed in hardware. For the same reason, huge numbers won't work. The largest number C++ can represent with a 64-bit unsigned integer is 18446744073709551615. If you need larger numbers, you need an arbitrary-precision library.
If you work with large numbers, the standard integer multiplication in C++ will no longer work, and you should use a library providing arbitrary-precision multiplication, like GMP: http://gmplib.org/
Also, you should not worry about performance prior to writing your application (=premature optimization). These multiplications will be fast, and most likely many other components in your software will cause much more slowdown.
Plain C++ uses the CPU's multiply instructions (or schoolbook multiplication using bit shifts and additions if your CPU does not have such an instruction).
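For illustration, the shift-and-add fallback mentioned above boils down to something like this sketch (for unsigned operands; not what any particular compiler actually emits):

    #include <cstdint>
    #include <iostream>

    // Schoolbook binary multiplication: for each set bit of b, add a shifted copy of a.
    std::uint64_t shift_add_mul(std::uint32_t a, std::uint32_t b) {
        std::uint64_t result = 0;
        std::uint64_t shifted = a;
        while (b != 0) {
            if (b & 1)          // lowest bit of b set -> add the current shifted copy of a
                result += shifted;
            shifted <<= 1;      // the next binary digit of b is worth twice as much
            b >>= 1;
        }
        return result;
    }

    int main() {
        std::cout << shift_add_mul(123456, 654321) << '\n';   // 80779853376
        std::cout << 123456ull * 654321ull << '\n';           // same value
    }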
If you need fast multiplication for large numbers, I would suggest looking at GMP ( http://gmplib.org ) and using the C++ interface from gmpxx.h.
Just how big are these numbers going to be? Even languages like python can do 1e100*1e100 with arbitrary precision integers over 3 million times a second on a standard processor. That's multiplication to 100 significant places taking less than one millionth of second. To put that into context there are only about 10^80 atoms in the observable universe.
Write what you want to achieve first, and optimise later if necessary.
Is there a publicly available library that will produce the exact same results for sin, cos, floor, ceil, exp, and log on 32-bit and 64-bit Linux, Solaris, and possibly other platforms?
I am considering the following alternatives:
a) Cephes compiled with gcc -mfpmath=sse and the same optimization levels on each platform ... but it's not clear that this would work.
b) MPFR, but I am worried that this would be too slow.
Regarding precision (edited): For this particular application I don't really need something that produces the value that is numerically closest to the exact value. I just need the answers to be the exact same on all platforms, OS, and "bitness". That being said, the values need to be reasonable (5 digits would probably be enough). I apologize for not having made this clear in my initial question.
I guess MAPM or MPFR with a low enough precision setting might do the trick, but I was hoping to find something that did not have the "multiple precision" machinery/flavor to it. In any case, I will try this out.
Would something like http://lipforge.ens-lyon.fr/www/crlibm/index.html be what you are searching for? It is a library whose aim is to be able to replace the standard math library of C99 -- so it keeps good enough performance in the normal cases -- while ensuring correctly rounded results according to IEEE 754 rounding modes.
crlibm is the correct tool for this. An earlier poster linked to it. Because it is correctly rounded, it will deliver bit-identical results on all platforms with IEEE-754 compliant hardware if compiled properly. It is much, much faster than MPFR.
You shouldn't need one. floor and ceil will be exact since their computation is straightforward.
What you are concerned with is rounding on the last bit for the transcendentals like sin, cos, and exp. But these are native to the CPU microcode and can be done in high quality consistently regardless of library. However, the rounding does vary from chip architecture to architecture.
So, if exact answers for the transcendentals are indeed your goal, you do need a portable library, and you will also be giving up huge efficiencies by doing so.
You could use a portable library like MAPM, which gives you not only consistent ULP results but, as a side benefit, lets you define arbitrary precision.
You can check your math precision with tools like this one and this one.
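Besides external tools, a quick way to check whether two builds really agree to the last bit is to dump the raw bit patterns of the results and diff them across machines. A small sketch (memcpy is the portable way to reinterpret a double's bits):

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Print the exact 64-bit pattern of a double so outputs from different
    // platforms/builds can be compared bit for bit.
    void dump(const char* name, double v) {
        std::uint64_t bits;
        std::memcpy(&bits, &v, sizeof bits);
        std::printf("%-4s %.17g  0x%016llx\n", name, v, static_cast<unsigned long long>(bits));
    }

    int main() {
        double x = 0.12345;
        dump("sin", std::sin(x));
        dump("cos", std::cos(x));
        dump("exp", std::exp(x));
        dump("log", std::log(x));
    }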
You mention using SSE. If you're planning on only running on x86 chips, then what exactly are the inconsistencies you're expecting?
As for MPFR, don't worry - test it! By the way, if it's good enough to be included in GCC, it's probably good enough for you.
You want to use MPFR. That library has been around for years and has been ported to every platform under the sun and optimized by tons of people.
If MPFR isn't sufficient for your needs we're talking about full custom ASM implementations in which case it might be more efficient to consider implementing it in dedicated hardware.
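If you do end up on MPFR, fixing the working precision and the rounding mode is what makes the results bit-identical across platforms. A minimal sketch (link with -lmpfr -lgmp):

    #include <mpfr.h>
    #include <cstdio>

    int main() {
        mpfr_t x, r;
        mpfr_init2(x, 53);                  // 53-bit precision, same as an IEEE double
        mpfr_init2(r, 53);

        mpfr_set_d(x, 0.12345, MPFR_RNDN);  // round-to-nearest everywhere
        mpfr_sin(r, x, MPFR_RNDN);          // correctly rounded sine at this precision

        mpfr_printf("sin(0.12345) = %.17Rg\n", r);

        mpfr_clear(x);
        mpfr_clear(r);
    }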