I want to store and operate on very large integers. What is the best way to do this without using pre-built libraries?
Based on what another StackOverflow user stated:
The std::string object will be copied onto the stack, but the string body will not - it will be allocated on the heap. The actual limitation will depend on the system and program memory usage, and can be something like ten million to one billion characters on a 32-bit system.
I just thought of two simple ways, both of which require me to write my own class. The first is to use vectors and strings, and the second is to break a large integer into separate blocks stored in integer arrays and combine the blocks arithmetically.
The max_size() of a string on my computer is 4294967291.
I have decided to write my own class.
Thanks for the help: C++ char vector addition
EDIT:
Working on it: https://github.com/Jyang772/Large_Number_Collider
It depends on the usage of this integer, but to keep the semantics of numbers and make your class easier to code, I'd suggest using a vector of long integers.
Using std::string will be far more complicated for code design and maintenance.
You will have to redefine every operator and take into account the propagation of carries from one chunk of your number to another.
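A minimal sketch of what that carry propagation might look like, assuming base-10^9 chunks stored least significant first (the chunk size and names here are illustrative, not a prescribed design):

#include <cstdint>
#include <vector>

// Illustrative: each element holds 9 decimal digits (base 1,000,000,000),
// least significant chunk first, so carries propagate left to right.
std::vector<int64_t> add(const std::vector<int64_t>& a,
                         const std::vector<int64_t>& b)
{
    const int64_t BASE = 1000000000;
    std::vector<int64_t> result;
    int64_t carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
        int64_t sum = carry;
        if (i < a.size()) sum += a[i];
        if (i < b.size()) sum += b[i];
        result.push_back(sum % BASE);  // keep the low 9 digits in this chunk
        carry = sum / BASE;            // propagate the rest to the next chunk
    }
    return result;
}

Base 10^9 is convenient here because each chunk prints directly as decimal digits; a power-of-two base would be faster for arithmetic but harder to print.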
The usual way is to use an array (vector, etc.) of ints (longs, etc.), rather than a string.
Start by looking at existing large integer classes, even if you can't use one verbatim for your homework.
When we faced similar problems in contests, we used vectors, with each cell containing one digit of the number. This way you can store an immensely large number.
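For example, a hypothetical way to load an arbitrarily long decimal string into such a vector (stored least significant digit first, which simplifies arithmetic later):

#include <string>
#include <vector>

// One decimal digit per cell, least significant digit first.
std::vector<int> to_digits(const std::string& s)
{
    std::vector<int> digits;
    for (auto it = s.rbegin(); it != s.rend(); ++it)
        digits.push_back(*it - '0');
    return digits;
}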
I'd like to write an unsigned Big Int library in C++ as an exercise; however, I would like to stay away from the traditional vector of chars storing individual digits, due to the amount of memory wasted by that approach.
Would using a vector of unsigned short ints (i.e. 16-bit positive integers) work, or is there a better way of doing this?
To clarify, I would store "12345678" as {1234, 5678}.
Storing decimal digits in individual chars is certainly not traditional, for exactly the reason you stated - it wastes memory. Using N-bit integers to store N bits of the number is the usual approach: it wastes no memory, and it is actually easier to implement (though harder to debug, because the integers involved are typically large).
Yes, using unsigned short int for the individual units of information (generalized "digits") is a workable idea. However, consider using 32-bit or 64-bit integers instead - they match the size of the CPU registers more closely, so your implementation will be more efficient.
General idea of the syntax:

#include <cstdint>
#include <vector>

class BigInt
{
private:
    std::vector<uint64_t> digits;  // each element is one base-2^64 "digit"
};
You should decide whether to store the least significant digits at smaller or larger indices. I think smaller is better (because the addition, subtraction and multiplication algorithms start from the least significant digit), but it's your choice.
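A hedged sketch of addition under that layout (least significant limb at index 0); detecting carry by checking for unsigned wraparound is one common approach, though not the only one:

#include <cstdint>
#include <vector>

// Adds two magnitudes stored as 64-bit limbs, least significant first.
// Wraparound (sum < operand) signals a carry into the next limb.
std::vector<uint64_t> add_limbs(const std::vector<uint64_t>& a,
                                const std::vector<uint64_t>& b)
{
    std::vector<uint64_t> out;
    uint64_t carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
        uint64_t x = i < a.size() ? a[i] : 0;
        uint64_t y = i < b.size() ? b[i] : 0;
        uint64_t sum = x + y;
        uint64_t c = sum < x;   // unsigned wraparound means a carry out
        sum += carry;
        c += sum < carry;       // adding the carry-in may also wrap
        out.push_back(sum);
        carry = c;
    }
    return out;
}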
I have always asked myself this question. I tried to find the answer on the internet, but I just couldn't find what I was really looking for. If the developers made vectors, which are easier to use (according to some people), then what is the use of arrays (which are generally avoided, according to some people as well)?
The elements stored in an std::array can be allocated on the stack, since the size is known at compile time, whereas the elements of a std::vector will be allocated on the heap. This can make a huge performance difference. More generally, an std::array does not need its own memory allocation, but an std::vector always does.
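A small illustration of the difference (the names are arbitrary):

#include <array>
#include <vector>

void example()
{
    std::array<int, 10> a{};   // elements live inside the object itself;
                               // here, on the stack - no allocation
    std::vector<int> v(10);    // elements always live in a separate
                               // heap allocation owned by the vector
}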
In C++, array is used to refer to two distinct kinds of things. One is std::array. The other is the built-in array type you get from a declaration like this: int foo[10];. This defines an array of 10 integers, named foo.
The advice against using an array will (at least usually) refer to the built-in array types. I don't know of anybody who advises against using std::array (except for cases where somebody needs a different container such as std::vector instead).
It's pretty easy to advise using std::array over a built-in array type simply because std::array is designed to impose no overhead compared to a built-in array type. In addition, however, std::array provides the normal container interface for getting things like the first element of the array, the size of the array, or iterators to the beginning and end so it's easy to apply a standard algorithm to an std::array.
Of course, all of these can be done with built-in array types as well. The implementation of std::array doesn't contain any "magic"--it just provides a standard interface to things you could do on your own. At the same time, it does provide a standard interface, and normally imposes no overhead, so there's rarely a reason to do the job on your own.
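For instance, the container interface lets a standard algorithm work on an std::array exactly as it would on a std::vector (a minimal sketch):

#include <algorithm>
#include <array>
#include <iostream>

int main()
{
    std::array<int, 5> a{3, 1, 4, 1, 5};
    std::sort(a.begin(), a.end());       // standard iterators, no overhead
    std::cout << "first: " << a.front()  // container interface: front(),
              << " size: " << a.size()   // size(), back(), at(), ...
              << '\n';
}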
I mean, what is the use of studying different sorting algorithms when sorting can be done with a single line in C++ using the STL?
Is it just for the sake of knowing (or any other reason)?
It's comparable (I think) to knowing all of the different STL containers. Think about all the different options you have just to store objects: priority queues, vectors, arrays, deques, stacks, maps, sets, etc. The list goes on. A naive programmer may simply use a std::vector for everything. Everyone is always saying such good things about std::vector - it manages its own size, it's extremely fast at adding new elements, and so on. But do you use std::vector for all your containers? I certainly hope not! The same logic applies to knowing the various sorting algorithms: there are cases where the built-in sorting mechanisms are simply inadequate, and you must not only know how to recognize when this situation occurs but also be able to come up with a clean solution.
Just because the STL handles many operations (such as sorting) effectively does not mean it will handle ALL situations effectively.
Learning different ways to do things and the benefits/tradeoffs they provide is often helpful.
For (an extreme) example: if you are sorting a container of at most 5 elements, the lowly bubble sort may outperform std::sort (which is typically an introsort, a quicksort variant). So if this is something you do millions of times each second, you'd lose out with std::sort.
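As an illustration only (whether it actually beats std::sort on your platform must be measured), such a bubble sort might look like this:

#include <cstddef>

// Simple bubble sort; for very small n the lack of dispatch and recursion
// overhead can sometimes win over a general-purpose sort. Measure first.
void bubble_sort(int* data, std::size_t n)
{
    for (std::size_t i = 0; i + 1 < n; ++i)
        for (std::size_t j = 0; j + 1 < n - i; ++j)
            if (data[j] > data[j + 1]) {
                int tmp = data[j];
                data[j] = data[j + 1];
                data[j + 1] = tmp;
            }
}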
There is never (or at least "very rarely") a single " best" solution to a problem. So learning about the alternatives and the tradeoffs is valuable.
After learning about C++ through a couple of different sources, I found conflicting advice concerning the use of cout/printf(). One source said the following about printf(), and I quote:
... does not provide type safety, so it is easy to inadvertently tell it to display an integer as if it were a character and vice versa. printf() also does not support classes, and so it is not possible to teach it how to print your class data; you must feed each class member to printf() one by one.
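For instance, a small illustration of the mismatch the quote describes (the values and struct are arbitrary):

#include <cstdio>
#include <iostream>

struct Point { int x, y; };

int main()
{
    int n = 65;
    std::printf("%c\n", n);  // compiles, but prints 'A' - the int is
                             // silently interpreted as a character
    std::cout << n << '\n';  // type-safe: always prints 65

    Point p{1, 2};
    // printf cannot be taught to print Point; with cout you could
    // overload operator<< for it instead.
    std::printf("%d %d\n", p.x, p.y);  // members fed one by one
}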
So, the more important thing to me would be the readability factor of using printf(). On the other hand, another source mentioned that cout, with its use of the overloaded operator <<, uses more instructions to execute and can therefore be more expensive in terms of memory over a large program. Granted, the person who said this was a systems programmer, a field in which every bit of performance is vital. But say I wanted to go into game or application development.
Would the performance differences between printf() and cout matter all that much?
In general, does it really matter what I choose to use in an application program?
Thank you for any input.
You would measure the differences on your particular implementation for your specific use case and determine that for yourself.
I would say both lines of reasoning in the question have merit, but you can't generalise about performance.
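A rough sketch of such a measurement with <chrono> (the iteration count and output choices are assumptions you'd tune for your own use case, and stream synchronization settings will change the numbers):

#include <chrono>
#include <cstdio>
#include <iostream>

int main()
{
    using clock = std::chrono::steady_clock;
    const int N = 100000;
    // Try with and without std::ios::sync_with_stdio(false);
    // it changes cout's cost considerably on many implementations.

    auto t0 = clock::now();
    for (int i = 0; i < N; ++i)
        std::printf("%d\n", i);
    auto t1 = clock::now();
    for (int i = 0; i < N; ++i)
        std::cout << i << '\n';
    auto t2 = clock::now();

    std::chrono::duration<double> printf_s = t1 - t0;
    std::chrono::duration<double> cout_s   = t2 - t1;
    std::fprintf(stderr, "printf: %f s, cout: %f s\n",
                 printf_s.count(), cout_s.count());
}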
If you want to go into game or application programming, you won't need printf/cout much; the main use in those cases is debugging.
If you really do need to use printf/cout a lot, the difference will only show when writing a huge amount of data; otherwise you don't need to bother.
I am supposed to build a program for storing and handling huge integers. I know that there are many answers out there, but I need ideas that I can implement easily, bearing in mind that I can use any of the basic concepts of C/C++.
How should I go about it?
This is the first time I am asking a question here so please correct me if I am wrong about anything.
Edit: Actually, what I wanted to know was how I should go about storing a huge integer. Obviously an array is what comes to mind at first glance, but are there any other methods out there at the basic level?
EDIT2: I came across a very nice solution to this problem a while ago, but was a bit too lazy to put it on here. We can use the concept of number systems to deal with huge numbers: declare an array that holds the coefficients of powers of 256, thus obtaining a base-256 system. We can then apply the fundamental rules of positional number systems to obtain the required results.
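For instance, a machine word decomposes into its base-256 coefficients like this (a toy illustration; a real implementation would build the array while parsing arbitrarily long input instead):

#include <cstdint>
#include <vector>

// Coefficients of powers of 256, least significant first:
// n = d[0]*256^0 + d[1]*256^1 + d[2]*256^2 + ...
std::vector<uint8_t> to_base256(uint64_t n)
{
    std::vector<uint8_t> d;
    do {
        d.push_back(static_cast<uint8_t>(n & 0xFF));  // low byte
        n >>= 8;                                      // next power of 256
    } while (n != 0);
    return d;
}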
Matt McCutchen has a Big Integer Library
If you want to do this yourself, his code would be a great starting point. Since you can overload arithmetic operators in C++, it is not too difficult to write a BigInteger class of your own and have it handle any number of bits per integer.
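A hedged skeleton of that idea (the class name and limb layout are illustrative, not the library's actual design):

#include <cstdint>
#include <vector>

class BigInteger {
    std::vector<uint32_t> limbs;  // least significant 32-bit limb first
public:
    BigInteger& operator+=(const BigInteger& rhs)
    {
        uint64_t carry = 0;
        for (std::size_t i = 0; i < rhs.limbs.size() || carry; ++i) {
            if (i == limbs.size()) limbs.push_back(0);
            uint64_t sum = carry + limbs[i]
                         + (i < rhs.limbs.size() ? rhs.limbs[i] : 0);
            limbs[i] = static_cast<uint32_t>(sum);  // low 32 bits stay here
            carry = sum >> 32;                      // high bits carry over
        }
        return *this;
    }
    // Other operators (-, *, comparisons, stream output) follow the same
    // pattern; binary + is usually written in terms of +=:
    friend BigInteger operator+(BigInteger lhs, const BigInteger& rhs)
    {
        lhs += rhs;
        return lhs;
    }
};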
There is also a Stack Overflow answer to this question: here
I consider this a question about theory. As such, I suggest browsing the internet using the right keywords to find documents and articles, or taking a peek at libraries that implement this feature and are well tested. Such projects also tend to offer a mailing list or forum where developers communicate, which can be a good place to start asking about this stuff.