In the NTL library, I know that we can define a big integer value as:
ZZ p;
p=to_ZZ("1111111111111111111111111111111333333333333333");
ZZ_p::init(p);
ZZ_p b(12);
My question is: what if I want to assign a big integer to b rather than 12?
e.g.
ZZ_p b("1111111111111111111111111111111333333333333334");
It should then reduce modulo p and assign 1 to b.
I need this for FindRoots(vec_ZZ_p& x, const ZZ_pX& ff), so that I would be able to insert big integers into a vector as coefficients (of a polynomial).
First: I tried the code you posted and the line ZZ_p b(12); did not work for me.
I had to use
ZZ_p b;
b = 12;
If you want to assign a big integer, you can do it like this:
ZZ_p b;
b = to_ZZ_p(conv<ZZ>("1111111111111111111111111111111333333333333334"));
or
char bigInteger[47] = "1111111111111111111111111111111333333333333334";
ZZ_p b;
b = to_ZZ_p(conv<ZZ>(bigInteger));
cout << b << endl; will now print 1.
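For reference, a minimal self-contained sketch of the whole flow, including the FindRoots use case from the question, might look like the following. It assumes the modulus p is prime (which the ZZ_pXFactoring routines such as FindRoots expect) and builds the degree-1 polynomial X - b, whose single root is b:

#include <NTL/ZZ_pXFactoring.h>
#include <iostream>
using namespace std;
using namespace NTL;

int main() {
    // Set up the modulus (assumed prime here, as FindRoots expects).
    ZZ p = conv<ZZ>("1111111111111111111111111111111333333333333333");
    ZZ_p::init(p);

    // Assign a big integer to a ZZ_p; it is reduced mod p automatically.
    ZZ_p b = conv<ZZ_p>(conv<ZZ>("1111111111111111111111111111111333333333333334"));
    cout << b << endl;   // prints 1

    // Use it as a polynomial coefficient: f(X) = X - b.
    ZZ_pX f;
    SetCoeff(f, 1);      // coefficient of X is 1, so f is monic
    SetCoeff(f, 0, -b);  // constant term is -b

    vec_ZZ_p roots;
    FindRoots(roots, f); // should recover b as the single root
    cout << roots << endl;
}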
Consider the following code, which prints three integers in ascending order.
#include <iostream>
using namespace std;

int main() {
    int a, b, c;
    cin >> a >> b >> c;
    int mn = a, mx = a;
    if (b > mx)
        mx = b;
    if (c > mx)
        mx = c;
    if (b < mn)
        mn = b;
    if (c < mn)
        mn = c;
    int mid = (a + b + c) - (mn + mx);
    cout << mn << " " << mid << " " << mx << "\n";
}
Let's assume -10^9 <= a, b, c <= 10^9, so there's no overflow when reading them.
The expression (a + b + c) should cause overflow when (a + b + c) > INT_MAX, yet the mid variable prints correct results. I tried printing a + b + c on a separate line, and it printed some negative value (clearly an overflow). My question is: does the compiler make an optimization when the result of the expression fits in the integer data type?
It is true that a signed integer overflow can occur here, which is undefined behavior. But undefined behavior means that "anything can happen".
And here "anything can happen" means "2's complement arithmetic", which happens to work out to the correct answer.
Does the compiler make an optimization when the result of the expression fits in the integer data type?
No special optimizations are needed. The compiled code simply uses integer addition and subtraction, carried out according to the rules of 2's complement arithmetic. The underlying hardware does not generate an exception for signed integer overflow; the addition simply wraps around.
The addition wraps around, and the subtraction wraps back to where it came from. Everyone lives happily ever after.
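To make this concrete, here is a small sketch of what typically happens. It assumes a 32-bit int and the usual two's-complement wraparound; the standard still calls the overflow undefined behavior, so this describes typical hardware, not a guarantee:

#include <iostream>

int main() {
    int a = 1500000000, b = 1500000000, c = 100;
    int mn = c, mx = a;         // minimum and maximum of the three
    int sum = a + b + c;        // overflows: wraps to a negative value
    int mid = sum - (mn + mx);  // wraps back: typically prints 1500000000
    std::cout << sum << " " << mid << "\n";
}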
I'm trying to find a solution to a task. My code passed only 3 autotests. I checked that the solution handles the max/min cases, so there are probably situations that my code does not handle correctly.
Description of the task: Find the remainder after dividing the sum of two integers by the third.
Input: The first line of input contains two integers A and B (-10^18 ≤ A, B ≤ 10^18). The second line contains an integer C (2 ≤ C ≤ 10^9).
Output: Print the remainder of dividing A + B by C.
My code:
#include <iostream>
// int64_t
#include <cstdint>
// #include <math.h>

// 3 tests passed
int main() {
    int64_t a, b, c;
    std::cin >> a >> b >> c;
    // a = pow(-10, 18);
    // b = pow(-10, 18);
    // // c = pow(10, 9);
    // c = 3;
    // c = pow(10, 18) - 20;
    // c = 1000000000000000000 + 1;
    // c = 1000000000000000000 + 2;
    // std::cout << a << std::endl;
    // std::cout << b << std::endl;
    // std::cout << c << std::endl;
    std::cout << (a + b) % c << std::endl;
    return 0;
}
Please note how C++ handles the remainder for negative values:
it was implementation-defined until C++11;
since C++11, integer division is rounded towards zero, which makes the remainder negative sometimes.
Now, most probably in your task the modulo result should always be in the range [0, C) (written differently, 0 to C - 1). So to handle cases where A + B is negative, you have to take into account that the remainder may be negative.
So your code can look like this:
int main() {
int64_t a, b, c;
std::cin >> a >> b >> c;
std::cout << (a + b) % c + ((a + b) % c < 0) * c << '\n';
return 0;
}
This basically adds c to the result if it is negative, which puts the result into the required range (assuming c is positive).
The modulo operation in C++ uses truncated division, i.e., the result of x % y is negative (or zero) when x is negative.
To obtain a non-negative result congruent to (a + b) % c, you can use ((a + b) % c + c) % c.
I believe the point of the exercise is to handle overflow and underflow by realizing that the remainder of the sum is the sum of the remainders, taken modulo c.
That's what my magic 8-ball says, anyway. If that's not the solution, then provide the failing input together with the expected and actual output.
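Combining the two points above (reduce before adding, and fold a negative remainder back into range), a sketch of a fixed version might look like this:

#include <cstdint>
#include <iostream>

int main() {
    std::int64_t a, b, c;
    std::cin >> a >> b >> c;

    // Reduce each operand first, then fold the (possibly negative)
    // result back into the range [0, c).
    std::int64_t r = (a % c + b % c) % c;
    if (r < 0)
        r += c;
    std::cout << r << "\n";
    return 0;
}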
From what I've gathered, assigning a fractional result to a double won't work properly unless either the numerator or the denominator is a floating-point number (and by "not working properly", I mean that the decimals get cut off; I know that numbers can't be stored as fractions, of course). However, I've tried type-casting ints to doubles before assigning them to another double variable, and it still doesn't work. It's not a big deal since I just had to do a minor workaround, but why is this the case?
I've added some code I wrote while testing.
#include <iostream>
using namespace std;

double convert(int v) {
    return v;
}

int main() {
    int a = 5;
    int b = 2;
    double n;

    n = convert(a) / convert(b);
    cout << n << endl; // Decimals are stored

    a = static_cast<double> (a);
    b = static_cast<double> (b);
    n = a / b;
    cout << n << endl; // Decimals are cut off

    a = (double) a;
    b = (double) b;
    n = a / b;
    cout << n << endl; // Decimals are cut off

    double c = a;
    double d = b;
    n = c / d;
    cout << n << endl; // Decimals are stored

    return 0;
}
Output:
2.5
2
2
2.5
Because
a / b;
is integer division (because both operands are int), i.e. the result is an integer. Whether that result is then assigned to a double or anything else is irrelevant to how it is calculated.
Because of integer division.
n = a / b;
Here a and b are integers, so the result is also an integer; this is a rule of C++, so 5 / 2 == 2. The integer 2 then gets converted to a double, which then prints as 2.
int a = 5;
a = static_cast<double> (a);
The first line creates an int variable named a and puts the value 5 in it. The second line explicitly converts the value of a to a double, then stores that converted value in a. However, a has type int, so there is an implicit conversion to int. That is, the second line is functionally equivalent to:
a = static_cast<int> ( static_cast<double> (a) );
So by the time you get to the division, you are back to integer arithmetic. To get the conversion to floating point to "stick" through your division, you need to avoid throwing it away. You could either assign the converted value to a new variable, as in
double aa = static_cast<double> (a);
or do the conversion in the same expression as the division
n = static_cast<double>(a) / b;
n = a / static_cast<double>(b);
n = static_cast<double>(a) / static_cast<double>(b);
Any of these three alternatives will trigger floating-point division.
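For instance, a minimal example of the last alternative (any of the three behaves the same way) might be:

#include <iostream>

int main() {
    int a = 5;
    int b = 2;
    double n = static_cast<double>(a) / static_cast<double>(b);  // floating-point division
    std::cout << n << "\n";                                      // prints 2.5
}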
Here is what I'm trying to do:
I have two integers
int a = 0; // can be 0 or 1
int b = 3; // can be 0, 1, 2 or 3
I also want to have
unsigned short c
to store those variables in.
For example, if I store a inside c, it would look like this:
00000000
^ here is a
Then I need to store b inside c, and it should look like the following:
011000000
^^ here is b.
I would also like to read those numbers back after writing them.
How can I do this?
Thanks for your suggestions.
Assuming those are binary representations of the numbers and assuming that you really meant to have five zeros to the right of b
01100000
^^ here is b
(the way you have it a and b overlap)
Then this is how to do it:
// write a to c
c &= ~(1 << 7);
c |= a << 7;
// write b to c
c &= ~(3 << 5);
c |= b << 5;
// read a from c
a = (c >> 7)&1;
// read b from c
b = (c >> 5)&3;
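Put together as a complete sketch (starting c from zero so the masking operates on a defined value):

#include <iostream>

int main() {
    int a = 1;             // 0 or 1
    int b = 3;             // 0, 1, 2 or 3
    unsigned short c = 0;  // start from a known value

    // write a into bit 7 and b into bits 6..5
    c &= ~(1 << 7);
    c |= a << 7;
    c &= ~(3 << 5);
    c |= b << 5;

    // read them back
    int readA = (c >> 7) & 1;
    int readB = (c >> 5) & 3;
    std::cout << readA << " " << readB << "\n";  // prints 1 3
}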
You can accomplish this with C++ Bit Fields:
struct MyBitfield
{
    unsigned short a : 1;
    unsigned short b : 2;
};
MyBitfield c;
c.a = 1; // 0 or 1
c.b = 3; // 0, 1, 2 or 3
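For completeness, a short sketch showing that the fields can be read back like ordinary integers afterwards:

#include <iostream>

struct MyBitfield
{
    unsigned short a : 1;
    unsigned short b : 2;
};

int main() {
    MyBitfield c{};                          // zero-initialize the fields
    c.a = 1;
    c.b = 3;
    std::cout << c.a << " " << c.b << "\n";  // prints 1 3
}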
How can I assign a two-dimensional array to a pointer?
The following code won't work.
float a1[2][2] = { {0,1},{2,3}};
float a2[3][2] = { {0,1},{2,3},{4,5}};
float a3[4][2] = { {0,1},{2,3},{4,5},{6,7}};
float** b = (float**)a1;
//float** b = (float**)a2;
//float** b = (float**)a3;
cout << b[0][0] << b[0][1] << b[1][0] << b[1][1] << endl;
a1 is not convertible to float**. So what you're doing is illegal, and wouldn't produce the desired result.
Try this:
float (*b)[2] = a1;
cout << b[0][0] << b[0][1] << b[1][0] << b[1][1] << endl;
This will work because a two-dimensional array of type float[M][2] can convert to float (*)[2]; they're compatible for any value of M.
As a general rule, Type[M][N] can convert to Type (*)[N] for any non-negative integral value of M and N.
If all your arrays will have final dimension 2 (as in your examples), then you can do
float (*b)[2] = a1; // Or a2 or a3
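A minimal sketch of that approach, using the a2 array from the question:

#include <iostream>
using namespace std;

int main() {
    float a2[3][2] = { {0,1},{2,3},{4,5} };

    float (*b)[2] = a2;  // pointer to a row of 2 floats
    for (int i = 0; i < 3; ++i)
        cout << b[i][0] << " " << b[i][1] << endl;
}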
The way you are doing this is not legal in C++. You need an array of pointers instead.
The problem here is that the dimensions of b are not known to the compiler; that information gets lost when you cast a1 to a float**. The cast itself compiles, but you cannot access the array through b[i][j], because b[i] would be read as a pointer stored in the array, and no such pointers exist there.
You can do it explicitly:
float a1[2][2] = { {0,1},{2,3}};
float* fp[2] = { a1[0], a1[1] };
// Or
float (*fp)[2] = a1;
Alternatively, if you only need to walk through the values, you can point a plain float* at the first element (float* p = &a1[0][0];). That pointer refers to the same memory as a1, and because the array's storage is contiguous you can step through all the elements with it, though you lose the two-dimensional indexing.