I'm trying to work with big numbers in C++. One thing I tried was installing the GMP library, but it doesn't work properly on my computer (see this post). So I want to try another method, namely integer-to-string conversion.
But I don't quite get the idea of that. Let me make myself clear. Suppose we are dealing with a big integer, say 2^1000. When I want to calculate, for example, 2^1000 mod 10, this is not possible (as far as I know) with the standard C++ libraries. So my question is: is it possible by converting my integer to a string, and if the answer is yes:
How can I do arithmetic operations once I have converted my integer to a string?
If you are using a predefined C++ integer type, then 2^1000 is simply impossible. On your system the maximum will be around 2^16 or 2^32, at most 2^64 (for unsigned long long). If you want to go beyond that, you need to use (or implement yourself, which I don't recommend) arbitrary-precision integers.
You can convert a normal int to a string very easily with
std::string s = std::to_string(your_int);
If you meant you want to do something like this:
amazing_to_string_conversion(1000000000000000000000000000000000000000000000)
That's not possible in any C++ implementation: such a numeric literal can't even exist in the source code, because it would overflow the largest integer type many times over.
And if you consider implementing it yourself, it will probably K.O. you: division and non-trivial operations like sqrt() require very complicated algorithms.
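To give a feel for what implementing it yourself involves in even the easiest case, here is a minimal sketch (the function name is my own) of schoolbook addition on non-negative decimal strings; subtraction and multiplication are more work, and division is considerably harder:

    #include <algorithm>
    #include <string>

    // Adds two non-negative decimal strings digit by digit, right to left,
    // propagating the carry, exactly like pencil-and-paper addition.
    std::string add_decimal_strings(const std::string& a, const std::string& b) {
        std::string result;
        int carry = 0;
        int i = static_cast<int>(a.size()) - 1;
        int j = static_cast<int>(b.size()) - 1;
        while (i >= 0 || j >= 0 || carry != 0) {
            int sum = carry;
            if (i >= 0) sum += a[i--] - '0';
            if (j >= 0) sum += b[j--] - '0';
            result.push_back(static_cast<char>('0' + sum % 10));
            carry = sum / 10;
        }
        std::reverse(result.begin(), result.end());
        return result;
    }

With this, add_decimal_strings("999", "1") yields "1000", and no fixed-width integer type is involved.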
I am currently working on a project in OCaml where I have to manipulate unsigned 8-bit and 16-bit integers. In my context, things can get a little messy: I sometimes want to convert an 8-bit integer into a 16-bit one, or split a 16-bit integer into two 8-bit ones. I also want to use operations like addition and the bitwise operations on them. Since there is all this interaction between 8 and 16 bits, I really like the comfort of having separate types for them. However, I still want my program to compute reasonably efficiently, and I don't want to lose too much time casting an integer of one size into another.
So my question is essentially how I should go about this. I have two main options, but I don't know enough about the low-level representation used by OCaml to choose comfortably:
Option 1: Use dedicated types
I figured that I can use the Stdint library, which is available through opam and has implementations of the types uint8 and uint16, which are exactly what I am looking for.
Pros
I get very good mileage from the typing and will definitely avoid silly bugs this way.
Cons
I have to constantly use the functions Uint8.to_uint16 and Uint16.to_uint8, which might eventually add up to heavy memory usage and poor efficiency in the compiled program, depending on how the representation is stored in memory.
Option 2: Encode everything within the type int
This means that all my integers will simply be of type int, and I will have to program, for instance, the addition of two 8-bit integers or of two 16-bit integers within this type.
Pros:
I think these operations can be programmed very efficiently using the usual arithmetic and bitwise operations on the type int.
Cons:
I get essentially nothing from the typing and I have to trust myself to choose the right function at the right time.
Possible workaround
I could use two modules defining 8-bit and 16-bit integers encoded in an int declared as private. I think that would work essentially as presented in Option 2. Making the type private would, however, mean that I cannot switch from one to the other without running into a type error, thus forcing explicit casts and getting leverage from the type system. Still, I expect the casts to be very efficient, since the memory representation of the value won't change.
So I would like to know how you would go about this. Is it worth going through all the trouble? Do you think one solution is clearly better, or are they reasonably equivalent?
Bonus
Every time I want to print (in hexadecimal) the value of a variable a of type uint8, I write
Printf.printf "a = %02x" (Uint8.to_int a)
There is again a cast that seems a bit silly to me. I could also use the Uint8.to_string_hex function directly, but it explicitly writes the 0x in front of the number, which I don't want. Ideally I would like to just write
Printf.printf "a = %02x" a
Is there a way to change the scopes and do some magic with Printf to make this happen?
In the stdint library, both int8 and int16 are represented as int, so there is no real trade-off between Option 1 and Option 2:
type int8 = private int
(** Signed 8-bit integer *)
type int16 = private int
(** Signed 16-bit integer *)
The stdint library already gives you the best of both worlds: an efficient implementation and type safety. Yes, you need to perform these conversions, but they are no-ops and exist only for the typechecker.
Also, if you're looking for modular arithmetic (and, in general, for modeling machine words and bitvectors), you can look at our Bitvec library, which we developed as part of the Binary Analysis Platform. It focuses on performance while still providing type safety and a large set of operations. We modeled it on the latest SMT-LIB specification to give clear semantics to all operations. Under the hood it uses the excellent Zarith library, which enables an efficient representation for both small and arbitrary-length integers.
Since modularity is not a property of a bitvector itself, but a property of an operation, we do not encode the number of bits in the type, and we use the same type (and representation) for all bitvectors, from 1 bit to thousands of bits. However, it is impossible to mix and match the types incorrectly. E.g., you can use generic functions,
(x + y) mod m8
or predefined modules for a specific modulus, e.g.,
M8.(x + y)
The library has a minimal number of dependencies, so you can try it out by installing
opam install bitvec
There are also additional libraries, such as bitvec-order and bitvec-sexp, that enable further integration with the Core suite of libraries, if you need them.
I am trying to write a program that finds Mersenne prime numbers. Using the unsigned long long type I was able to determine the value of the 9th Mersenne prime, which is (2^61)-1. For larger values I would need a data type that can store integer values greater than 2^64.
I should be able to use operators like *, *=, >, < and % with this data type.
You cannot do what you want with C's native types; however, there are libraries that let you handle arbitrarily large numbers, like the GNU Multiple Precision Arithmetic Library (GMP).
To store large numbers there are many choices, given below in decreasing order of preference:
1) Use third-party libraries developed by others (on GitHub, CodePlex, etc.) for your mentioned language, that is, C.
2) Switch to another language with built-in large-number support, such as Python, Java (which has BigInteger), or C++.
3) Develop your own data structure, perhaps in terms of strings (where a string of 100 characters could represent a 100-digit decimal number), with custom operations for addition, subtraction, multiplication, etc., much like the complex-number facilities in C++ were developed. This choice is mainly suited for research and educational purposes.
What all these answers are basically saying is that a 64-bit CPU cannot add such huge numbers with a single instruction; instead you need an algorithm that treats the two numbers in pieces.
The libraries listed here let you do exactly that. A good exercise is to develop one yourself (just the algorithm/function, to learn how it's done); see the sketch below.
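As a concrete illustration of "treating the numbers in pieces" (the function name and the 32-bit limb size are my own choices for this sketch), big-number libraries typically store a value as an array of machine words and chain the carries, just as the CPU's add-with-carry instruction would:

    #include <cstdint>
    #include <vector>

    // Adds two little-endian sequences of 32-bit "limbs", propagating the
    // carry from one limb to the next, like pencil-and-paper addition in
    // base 2^32.
    std::vector<uint32_t> add_limbs(const std::vector<uint32_t>& a,
                                    const std::vector<uint32_t>& b) {
        std::vector<uint32_t> result;
        uint64_t carry = 0;
        for (std::size_t i = 0; i < a.size() || i < b.size() || carry != 0; ++i) {
            uint64_t sum = carry;
            if (i < a.size()) sum += a[i];
            if (i < b.size()) sum += b[i];
            result.push_back(static_cast<uint32_t>(sum));  // keep the low 32 bits
            carry = sum >> 32;                             // high bits become the carry
        }
        return result;
    }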
There is no standard way to get a data type larger than 64 bits. You should check the documentation of your system; some compilers define 128-bit integers (e.g., __int128 on GCC and Clang). However, to have truly flexible-size integers, you should use another representation, for instance an array. Then it's up to you to define the operators =, <, >, etc.
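For example, a less-than comparison on such an array representation might look like this (a sketch only; the function name is mine, and it assumes little-endian limbs with no leading zero limbs):

    #include <cstdint>
    #include <vector>

    // Compares two little-endian limb arrays, assuming both are normalized
    // (no leading zero limbs), so a longer array always holds a bigger value.
    bool less_than(const std::vector<uint32_t>& a, const std::vector<uint32_t>& b) {
        if (a.size() != b.size()) return a.size() < b.size();
        for (std::size_t i = a.size(); i-- > 0; )  // most significant limb first
            if (a[i] != b[i]) return a[i] < b[i];
        return false;  // all limbs equal
    }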
Fortunately, libraries such as GMP permit you to use arbitrary-length integers.
Take a look at the GNU MP Bignum Library.
Use double :) It will solve your problem!
I want to print "845100400152152934331135470251" or "1071292029505993517027974728227441735014801995855195223534251",
but in C++ the maximum value of unsigned long long is 18446744073709551615,
which is much less than what I want to print.
Please help me...
First of all, your problem is not about printing big numbers but about storing them in variables (and maybe calculating with them).
On some compilers (GCC, for example) you have types like __int128 that can handle numbers up to roughly 10^38.
If that doesn't solve the problem, you'll have to write your own arithmetic. For example, store the numbers in strings and write functions that calculate with them: addition and subtraction are rather easy, multiplication is of medium difficulty (as long as the numbers aren't really huge; see the sketch below), and division by big integers is hard. Alternatively, you can look for ready-made big-integer libraries on the Internet; C++ doesn't have a built-in one.
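For instance, here is a minimal sketch (the function name is mine) of schoolbook multiplication on non-negative decimal strings, the "medium difficulty" case mentioned above:

    #include <algorithm>
    #include <string>
    #include <vector>

    // Multiplies two non-negative decimal strings the way it is done on
    // paper: multiply every pair of digits, accumulate each product at the
    // right position, then resolve all carries in one pass.
    std::string multiply_decimal_strings(const std::string& a, const std::string& b) {
        std::vector<int> acc(a.size() + b.size(), 0);
        for (int i = static_cast<int>(a.size()) - 1; i >= 0; --i)
            for (int j = static_cast<int>(b.size()) - 1; j >= 0; --j)
                acc[(a.size() - 1 - i) + (b.size() - 1 - j)] += (a[i] - '0') * (b[j] - '0');
        std::string result;
        int carry = 0;
        for (int digit : acc) {
            digit += carry;
            result.push_back(static_cast<char>('0' + digit % 10));
            carry = digit / 10;
        }
        while (result.size() > 1 && result.back() == '0')
            result.pop_back();                 // drop leading zeros (stored reversed)
        std::reverse(result.begin(), result.end());
        return result;
    }

For example, multiply_decimal_strings("99", "99") returns "9801".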
I am trying to convert a boost::multiprecision::cpp_dec_float_x to a boost::multiprecision::uintx_t, so basically a Boost bigreal to a Boost bigint, with the memory sized so that this conversion is not lossy.
Consider the following:
boost::multiprecision::cpp_dec_float_100 myreal(100); /* bigreal */
boost::multiprecision::uint256_t myint; /* bigint */
Designing memory allocation
I want to make a conversion from the first to the second. Consider that I have taken into account the number of bits needed for this. Starting from a 256-bit integer, I need a float able to store values from 0 to 2^256 - 1. How many decimal digits do I need for this? Exactly 256*log_10(2) ~= 77. So a 100-digit float is more than enough, and if I keep my real number below 2^256, I can convert it to a 256-bit integer.
How can I make the conversion, considering that convert_to<> can only be used with built-in types and static_cast<> raises errors (which is expected, considering that the Boost documentation does not mention such a use)? Thank you.
Do not care about data loss
I do not care about data loss. For my purposes I will store (in the bigreal variable) an integer number (no decimal part), so I am fine!
I don't know if this is what you are looking for; try:
cpp_dec_float_100 myreal(100);
cpp_dec_float_100 int_part = myreal.backend().extract_integer_part();
The type is still cpp_dec_float_100, but it only contains the integer part.
I hope this helps.
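If extract_integer_part() doesn't do what you need, another workaround I can think of (a sketch only, assuming the value is a non-negative integer that fits in 256 bits, as in the question) is to go through the decimal string representation, since the cpp_int types can be constructed from a decimal string:

    #include <boost/multiprecision/cpp_dec_float.hpp>
    #include <boost/multiprecision/cpp_int.hpp>
    #include <iostream>
    #include <string>

    int main() {
        namespace mp = boost::multiprecision;
        mp::cpp_dec_float_100 myreal(100);   /* bigreal, as in the question */

        // Render the value in fixed notation at full precision, then keep
        // only what precedes the decimal point (we assume no fractional part).
        std::string s = myreal.str(0, std::ios_base::fixed);
        std::string::size_type dot = s.find('.');
        if (dot != std::string::npos)
            s.erase(dot);

        mp::uint256_t myint(s);   // cpp_int constructs from a decimal string
        std::cout << myint << std::endl;   // prints 100
    }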
Is it possible to change the
float *pointer
type that is used in the VS C++ project to some other type, so that it will still behave as a floating-point type but with a smaller range?
I know that the floating-point values never exceed some fixed bound in that project, so I want to optimize the program's memory usage. It doesn't need 4 bytes for each element of the 'float *pointer' array; 2 bytes will be enough, I think. If I change float to short and imitate the floating-point behaviour, it will use half the memory. How do I do that?
EDIT:
It calculates probabilities, so there are divisions like
A / B
where A < B, and both A and B can range from 1 to 10 000.
There is a standard 16-bit floating-point format described in IEEE 754-2008 called "binary16". It is specified as a storage format for floating-point values with reduced precision. There is almost no compiler support for it yet (I think GCC supports it for certain ARM platforms), but it is quite easy to roll your own routines. This fellow:
http://blog.fpmurphy.com/2008/12/half-precision-floating-point-format_14.html
wrote a bit about it and also presents a routine to convert half-float <-> float.
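For a feel of what such routines look like, here is a rough sketch of my own (truncating rather than rounding, and flushing subnormals to zero, so not a complete implementation). binary16 packs 1 sign bit, 5 exponent bits (bias 15), and 10 mantissa bits into a uint16_t:

    #include <cstdint>
    #include <cstring>

    // float -> binary16, truncating the mantissa; values that are too small
    // flush to zero and values that are too large become infinity.
    uint16_t float_to_half(float f) {
        uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);                   // raw IEEE 754 bits
        uint16_t sign     = (bits >> 16) & 0x8000;
        int32_t  exponent = ((bits >> 23) & 0xFF) - 127 + 15;  // re-bias 8-bit -> 5-bit
        uint16_t mantissa = (bits >> 13) & 0x03FF;             // keep top 10 mantissa bits
        if (exponent <= 0)  return sign;                       // underflow: +/-0
        if (exponent >= 31) return sign | 0x7C00;              // overflow: +/-infinity
        return static_cast<uint16_t>(sign | (exponent << 10) | mantissa);
    }

    // binary16 -> float; exact for every normal half value.
    float half_to_float(uint16_t h) {
        uint32_t sign     = static_cast<uint32_t>(h & 0x8000) << 16;
        uint32_t exponent = (h >> 10) & 0x1F;
        uint32_t mantissa = h & 0x03FF;
        uint32_t bits;
        if (exponent == 0)       bits = sign;                  // zero (subnormals flushed)
        else if (exponent == 31) bits = sign | 0x7F800000 | (mantissa << 13); // inf/NaN
        else bits = sign | ((exponent - 15 + 127) << 23) | (mantissa << 13);
        float f;
        std::memcpy(&f, &bits, sizeof f);
        return f;
    }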
Also, here seems to be a half-float C++ wrapper class:
half.h:
http://www.koders.com/cpp/fidABD00D95DE84C73BF0218AC621E400E07AA77B53.aspx
half.cpp:
http://www.koders.com/cpp/fidF0DD0510FAAED03817A956D251787609BEB5989E.aspx
which supplies "HalfFloat" as a possible drop-in replacement type.
Maybe use fixed-point math? It all depends on the values and the precision you want to achieve.
http://www.eetimes.com/discussion/other/4024639/Fixed-point-math-in-C
For C there is a lot of code that makes fixed-point easy, and I'm pretty sure there are also many C++ classes that make it even easier, but I don't know any of them; I'm more into C. A sketch of the idea for your probability use case follows.
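Since your edit says the values are probabilities A / B with A < B, they all lie in [0, 1], which maps nicely onto 16-bit fixed point: store round(p * 65535) in a uint16_t. A minimal sketch (the names are mine, and it assumes 1 <= A <= B <= 10 000 as in your edit):

    #include <cstdint>

    // Computes a/b as 16-bit fixed point in [0, 1]: 0 maps to 0.0, 65535 to 1.0.
    // The "+ b / 2" rounds to nearest instead of truncating.
    uint16_t probability_to_fixed(uint32_t a, uint32_t b) {
        return static_cast<uint16_t>((static_cast<uint64_t>(a) * 65535 + b / 2) / b);
    }

    // Converts back to float only at the point of use.
    float fixed_to_float(uint16_t p) {
        return static_cast<float>(p) / 65535.0f;
    }

This halves the storage compared to float, at the cost of a fixed absolute resolution of about 1.5e-5.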
The first, obvious memory optimization would be to try to get rid of the pointer. If you can store just the float, that may, depending on the larger context, already reduce your memory consumption from eight bytes to four (on a 64-bit system, from twelve to four).
Whether you can get by with a short depends on what your program does with the values. You may be able to use fixed-point arithmetic with an integral type such as short, yes, but your question gives far too little context to judge that.
The code you posted and the text of the question do not deal with actual floats, but with pointers to float. In all architectures I know of, the size of a pointer is the same regardless of the pointed-to type, so there would be no improvement in changing it to a short or char pointer.
Now, about the actual pointed-to elements: what range do you expect in your application? What precision do you need? How many of those elements do you have? What are the memory constraints of your target platform? Unless the range and precision are small and the number of elements is huge, just use floats. Also note that if you need floating-point operations, storing any other type will require conversions before and after each operation, which might hurt performance.
Without greater knowledge of what you are doing: the range of short on many architectures is [-32K, 32K), where K stands for 1024. If your data range is [-32, 32) and you can make do with roughly three decimal digits, you could use fixed-point arithmetic with shorts, but there are few such situations.