I'd like to write an unsigned big-integer library in C++ as an exercise; however, I would like to stay away from the traditional vector of chars storing individual digits, because of the amount of memory that approach wastes.
Would using a vector of unsigned short ints (i.e. 16-bit positive integers) work, or is there a better way of doing this?
To clarify, I would store "12345678" as {1234, 5678}.
Storing digits in corresponding chars is certainly not traditional, for the reason you stated: it wastes memory. Using N-bit integers to store N corresponding bits is the usual approach. It wastes no memory, and is actually easier to implement (though harder to debug, because the integers are typically large).
Yes, using unsigned short int for individual units of information (generalized "digits") is a good idea. However, consider using 32-bit or 64-bit integers instead - they are closer to the size of the CPU registers, so your implementation will be more efficient.
General idea of syntax:
#include <cstdint>
#include <vector>

class BigInt
{
private:
    std::vector<uint64_t> digits;
};
You should decide whether to store the least significant digits at smaller or larger indices. I think smaller is better (because the addition, subtraction and multiplication algorithms start from the LSB), but it's your choice.
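To make that concrete, here is a minimal sketch of addition over least-significant-limb-first storage (the free function, its name, and the choice of 32-bit limbs with 64-bit intermediates are illustrative assumptions, not from the answer above):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: limbs stored least significant first. Using 32-bit limbs with a
// 64-bit intermediate makes the carry trivial to extract.
std::vector<uint32_t> add(const std::vector<uint32_t>& a,
                          const std::vector<uint32_t>& b)
{
    std::vector<uint32_t> sum;
    uint64_t carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size(); ++i) {
        uint64_t s = carry;
        if (i < a.size()) s += a[i];
        if (i < b.size()) s += b[i];
        sum.push_back(static_cast<uint32_t>(s)); // low 32 bits of the partial sum
        carry = s >> 32;                         // high bits carry into the next limb
    }
    if (carry)
        sum.push_back(static_cast<uint32_t>(carry));
    return sum;
}
```

Starting from the LSB means the carry always propagates toward indices not yet visited, which is why the smaller-index-first layout keeps the loop simple.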
So, before I get into my question: I tried searching for this, but I am probably not wording it correctly to get any valid results. The purpose is to use this in an AES 128-bit encryption program.
I need to multiply an unsigned char (which would be the hexadecimal value) by 2 or 3, and this would be done with XOR operations. So basically, is there a way to do it without typing it out like this:
(SBOX[0] ^ SBOX[0]) ^ SBOX[0]
If I have to do it this way, each line is going to be fairly long, but it can be done, I believe. It would be nice if there were an operator to just say 3 ^ SBOX[0].
If you're doing AES, then you're doing your arithmetic in a Galois field (specifically GF(2^8)). Thus the rules you're used to for standard integers no longer hold.
In particular, whilst addition is XOR (in GF(2^n)), multiplication isn't repeated addition. Your example shows why - multiplication by two would be x ^ x == 0, always.
The actual steps (in code) depend on the reducing polynomial of your Galois field (and in any case, deriving them is way beyond my ability nowadays). However, they're summarised in multiple places on the web, and in many cases these explanations specifically target the AES MixColumns operation, e.g. on Wikipedia.
This is a simple program that raises two to a given power. Starting from an exponent somewhere around 60, the answer is incorrect.
I need to compute 2^200. The answer shouldn't be in a form like "1.606938e+60", but written out in full digits. How do I do this in C++?
#include <iostream>
#include <cmath> // pow lives here; the original was missing this include
using namespace std;
int main()
{
    unsigned long long int n, z;
    cin >> n;
    z = pow(2, n); // pow works in double, so large results lose precision
    cout << z << endl;
    return 0;
}
You need to use std::setprecision (from <iomanip>, together with std::fixed) to get it to print in the format you're expecting, but if your numbers get high enough, you'll run into a second issue: pow returns a double, which loses precision in a big way with huge numbers. For more information on how to solve that, refer to this Stack Overflow answer.
In any case, you can't print such a big number as a single integer or double value in C++ (not yet). There may exist 256- or 512-bit machine architectures and implementations whose built-in types are large enough, but it's not possible in general. You probably need to use some data structure to store your number and operate on that.
Existing examples of arbitrary-precision ("bignum") arithmetic may be helpful.
I heard that fewer dimensions are better, but is it worth changing from method B (which I already use) to A?
What is the optimal way of using array variables?
Example code in C
Method A:
int var1[5][10][5][4];
int var2[5][10][5][4];
int var3[5][10][5][4];
Method B:
int var4[3][5][10][5][4];
I think that whether something is "better" depends a lot on the perspective, e.g. performance, readability, maintainability, consistency, and probably many more. There is probably no clear answer to your question, and every answer might be opinion-based.
Anyway, you could "mix" both approaches, having separate variables that "view" slices of the "many-dimension"-array. Then you could pick the "best" approach for the respective context:
int main() {
int var4[3][5][10][5][4] = { {{{1}}}, {{{2}}}, {{{3}}} };
int (*var1)[5][10][5][4] = &var4[0];
int (*var2)[5][10][5][4] = &var4[1];
int (*var3)[5][10][5][4] = &var4[2];
return 0;
}
"Fewer dimensions is better" -- this is a very lame suggestion. The question is -- better for what?! As was suggested in the comments, it all depends on the context. If you need it, you need it. There might not be a better way to express your algorithm.
There are questions of readability and performance. All arrays, no matter how many dimensions, are allocated linearly in memory, so the compiler has to calculate the index. The performance cost comes from the fact that the compiler needs to do a more complicated calculation than with single-dimensional arrays. So, if you try to reduce the number of dimensions, you would probably just move these calculations from the compiler into your own code, most likely losing both performance and readability.
Now, if you can split it into independent variables that you never need conditional logic to choose between, do it; it will improve both. Do not aggregate unrelated variables into an array; it will reduce both.
I guess in your case, if var1, var2, and var3 are independent, use them that way. If you need a loop to browse them, keep them in an array.
Let's say I have three pieces of information, and they can each be stored in a portion of a byte.
info_a is 3 bits long
info_b is 2 bits long
info_c is 3 bits long
I don't have any memory constraints.
The first implementation is
struct info {
    unsigned char info_a;
    unsigned char info_b;
    unsigned char info_c;
};
the second implementation is
unsigned char info; // bits 0..2 info_a, bits 3..4 info_b, bits 5..7 info_c
So which one is the faster version for
storing data
retrieving data
?
The struct way will be faster on most typical modern platforms. This is because a single instruction can be used to load or store a single byte, whereas sub-byte loads and stores require extra operations.
Of course, if you have many times when you need to set all three values at once, or clear them all at once, or you are memory-constrained (e.g. you store billions of these values), the sub-byte solution will have an advantage.
If you have one instance of these, then I don't think there'll be any meaningful difference, as they are both smaller than a word in length. The single-char version will possibly be slower, as you'll have to do some bit manipulation that you don't need with the struct.
If you have a million of these in an array and loop then I'd expect the more compact data of the single char will lessen the number of cache misses, which will make up for any tiny loss in the bit manipulation to get at the data.
But as ever, you'll have to test it in your particular environment to find out for sure.
I want to store and operate on very large integers, what is the best way to do this without using pre-built libraries?
Based on what another StackOverflow user stated:
The std::string object will be copied onto the stack, but the string body will not - it will be allocated on the heap. The actual limitation will depend on the system and program memory usage, and can be something like from ten million to one billion characters on a 32-bit system.
I just thought of two simple ways, both of which require me to write my own class. The first is to use vectors and strings; the second is to break a large integer down into separate blocks in integer arrays and add up the total.
The max.size() of a string on my computer is 4294967291.
I have decided to write my own class.
Thanks for the help: C++ char vector addition
EDIT:
Working on it: https://github.com/Jyang772/Large_Number_Collider
It depends on the usage of this integer, but to keep the semantics of numbers and make your class easier to code, I'd suggest using a vector of long integers.
Using std::string will be far more complicated for code design and maintenance.
You will have to redefine every operator and take into account the propagation of computations (carries) from one chunk of your number to another.
The usual way is to use an array (vector, etc) of ints (longs, etc), rather than a string.
Start by looking at existing large integer classes, even if you can't use one verbatim for your homework.
When we faced similar problems on contests, we used vectors, each cell containing one digit of the number. This way you can store an immensely large number.