I want to add two numbers, each of which is the largest value that a long long integer can hold, and print the result. If I don't store the sum in a variable and just print it using "cout", will my computer be able to print it? The code will be something like this:
cout<<theLastValueOfLongLong + theLastValueOfLongLong;
I am assuming that a long long int is the largest primary variable type.
If you don't want to overflow, then you need to use a "long integer" library, such as Boost.Multiprecision. You can then perform arbitrary-precision integer/floating-point operations, such as
#include <iostream>
#include <limits>
#include <boost/multiprecision/cpp_int.hpp>
int main()
{
    using namespace boost::multiprecision;

    cpp_int i; // multi-precision integer
    i = std::numeric_limits<long long>::max();
    std::cout << "Max long long: " << i << std::endl;
    std::cout << "Sum: " << i + i << std::endl;
}
In particular, Boost.Multiprecision is extremely easy to use and integrates "naturally" with C++ streams, allowing you to treat the type almost like a built-in one.
No, it first evaluates (theLastValueOfLongLong + theLastValueOfLongLong) in the long long type (which overflows), and only then sends the result to cout's operator<<(long long).
It's the same as:
long long temp = theLastValueOfLongLong + theLastValueOfLongLong;
cout << temp;
temp will contain the result of the addition, which is undefined because the addition overflows, and then it will cout that result, whatever its value is.
Since long long is signed, the addition overflows. This is Undefined Behavior and anything may happen. It's unlikely to format your hard disk, especially in this simple case.
Once Undefined Behavior happens, you can't even count on std::cout working after that.
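If you just want to detect the overflow before it happens, the usual idiom is to test against std::numeric_limits first. A minimal sketch, assuming both operands are non-negative:

#include <iostream>
#include <limits>

int main()
{
    const long long a = std::numeric_limits<long long>::max();
    const long long b = a;

    // a + b would overflow exactly when b > max - a,
    // and max - a itself cannot overflow for non-negative a.
    if (b > std::numeric_limits<long long>::max() - a)
        std::cout << "a + b would overflow long long\n";
    else
        std::cout << a + b << '\n';
}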
#include <iostream>
using namespace std;
int main()
{
    unsigned long maximum = 0;
    unsigned long values[] = {60000, 50, 20, 40, 0};

    for (short value : values) {
        cout << "Current value:" << value << "\n";
        if (value > maximum)
            maximum = value;
    }
    cout << "Maximum value is: " << maximum;
    cout << '\n';
    return 0;
}
Outputs are:
Current value:-5536
Current value:50
Current value:20
Current value:40
Current value:0
Maximum value is: 18446744073709546080
I know I should not use short inside the for loop (better to use auto), but I was just wondering: what is going on here?
I'm using Ubuntu with g++ 9.3.0, I believe.
The issue is with short value when the element 60000 is reached.
That's too big to fit into a short on your platform, so your short overflows, with implementation-defined results.
What seems to be happening in your case is that 60000 wraps around to the negative value -5536, which is then converted (in a well-defined way) to an unsigned long. On your platform that gives 2^64 - 5536, which is exactly the maximum displayed by your program.
One fix is to use the idiomatic
for(auto&& value: values){
The problem is pretty simple: a 2-byte short int can hold only values between -32,768 and 32,767, and anything beyond that overflows it. You've given it 60000, which is obviously an overflow for a short int.
When you use auto here, value is deduced as the element type of the array, which can hold such a large number (note that the exact type depends on the platform on which you're running the program).
In my case, the value is deduced as unsigned long, which ranges from 0 to 4,294,967,295 on my platform.
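To make the two conversions visible on their own, here is a minimal sketch; the exact values assume a 16-bit short and a 64-bit unsigned long, as in the output above:

#include <iostream>

int main()
{
    // Out-of-range conversion to a signed type is implementation-defined
    // (it wraps on typical two's-complement platforms): 60000 -> -5536.
    short s = static_cast<short>(60000UL);

    // Converting a negative value to an unsigned type IS well defined:
    // the value is reduced modulo 2^64, giving 2^64 - 5536 here.
    unsigned long u = s;

    std::cout << s << '\n'   // -5536 on such a platform
              << u << '\n';  // 18446744073709546080
}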
I've noticed some weird behaviour in C++ which I don't understand.
I'm trying to print a truncated double in hexadecimal representation.
This code outputs 17, which is a decimal representation:
double a = 17.123;
cout << hex << floor(a) << '\n';
while this code outputs 11, which is also my desired output:
double a = 17.123;
long long aASll = floor(a);
cout << hex << aASll << '\n';
Since a double can hold really big numbers, I'm afraid of getting wrong output when storing the truncated number in a long long variable. Any suggestions or improvements?
Quoting cppreference's documentation page for std::hex (and friends):
Modifies the default numeric base for integer I/O.
This suggests that std::hex does not have any effect on floating point inputs. The best you are going to get is
cout << hex << static_cast<long long>(floor(a)) << '\n';
or a function that does the same.
uintmax_t from <cstdint> may be useful to get the largest available integer if the values are always positive. After all, what is a negative hex number?
Since a double value can easily exceed the maximum resolution of available integers, this won't cover the whole range. If the floored values exceed what can fit in an integer type, you are going to have to do the conversion by hand or use a big integer library.
Side note: std::hexfloat does something very different and does not work correctly in all compilers, due to some poor wording in the current Standard that has since been hammered out and should be corrected in the next revision.
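For illustration, a small sketch contrasting the two manipulators (the exact hexfloat digits depend on the binary representation of 17.123):

#include <cmath>
#include <iostream>

int main()
{
    double a = 17.123;

    // std::hexfloat prints the floating-point representation itself,
    // something like 0x1.11f...p+4 -- not the integer value in base 16.
    std::cout << std::hexfloat << a << '\n';

    // To print the truncated value as a base-16 integer, convert first.
    std::cout << std::hex << static_cast<long long>(std::floor(a)) << '\n'; // 11
}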
Just write your own version of floor and have it return an integral value. For example:
#include <cmath>
#include <iostream>
using namespace std;

long long floorAsLongLong(double d)
{
    return (long long)floor(d);
}

int main() {
    double a = 17.123;
    cout << hex << floorAsLongLong(a) << endl;
}
I have a bitset which is very large, say, 10 billion bits.
What I'd like to do is write this to a file. However, using .to_string() actually freezes my computer.
What I'd like to do instead is iterate over the bits, take 64 bits at a time, turn them into a uint64, and then write that to a file.
However, I'm not aware of how to access different ranges of the bitset. How would I do that? I am new to C++ and wasn't sure how to access the underlying bitset::reference, so please provide an example in your answer.
I tried using a pointer but did not get what I expected. Here's an example of what I'm trying so far.
#include <iostream>
#include <bitset>
#include <cstring>
using namespace std;
int main()
{
    bitset<50> bit_array(302332342342342323);
    cout << bit_array << "\n";

    bitset<50>* p;
    p = &bit_array;
    p++;

    int some_int;
    memcpy(&some_int, p, 2);

    cout << &bit_array << "\n";
    cout << &p << "\n";
    cout << some_int << "\n";
    return 0;
}
The output:
10000110011010100111011101011011010101011010110011
0x7ffe8aa2b090
0x7ffe8aa2b098
17736
The last number seems to change on each run which is not what I expect.
There are a couple of errors in the program. The maximum value bitset<50> can hold is 1125899906842623, which is much less than the value bit_array is initialized with in the program.
some_int has to be defined as an unsigned long, and you should verify that unsigned long is 64 bits wide on your platform.
After that, test each bit of bit_array in a loop and do the appropriate bitwise operations (OR and shift) to accumulate the result into some_int:
unsigned long some_int = 0;
unsigned long mask = 1;
std::size_t start_bit = 0;
std::size_t end_bit = 64;

for (std::size_t i = start_bit; i < end_bit; i++) {
    if (bit_array[i])
        some_int |= mask;
    mask <<= 1;
}
You can change the values of start_bit and end_bit appropriately as you navigate through the large bitset.
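Putting that together, a sketch of the whole write-to-file loop might look like the following. write_bitset is a hypothetical helper name, and the file format (little-endian 64-bit chunks) is an assumption:

#include <bitset>
#include <cstdint>
#include <fstream>

// Hypothetical helper: stream a bitset to a binary file, 64 bits at a time,
// building each chunk bit-by-bit as in the loop above.
template <std::size_t N>
void write_bitset(const std::bitset<N>& bits, const char* path)
{
    std::ofstream out(path, std::ios::binary);
    for (std::size_t start = 0; start < N; start += 64) {
        std::uint64_t chunk = 0;
        std::uint64_t mask = 1;
        const std::size_t end = (start + 64 < N) ? (start + 64) : N;
        for (std::size_t i = start; i < end; ++i) {
            if (bits[i])
                chunk |= mask;
            mask <<= 1;
        }
        out.write(reinterpret_cast<const char*>(&chunk), sizeof chunk);
    }
}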
For accessing ranges of a bitset, you should look at the provided interface. The lack of something like bitset::data() indicates that you should not try to access the underlying data directly. Doing so, even if it had seemed to work, is fragile, hacky, and probably undefined behavior of some sort.
I see two possibilities for converting a massive bitset into more manageable pieces. A fairly straight-forward approach is to just go through bit-by-bit and collect these into an integer of some sort (or write them directly to a file as '0' or '1' if you're not that concerned about file size). Looks like P.W already provided code for this, so I'll skip an example for now.
The second possibility is to use bitwise operators and to_ullong(). The downside of this approach is that it nominally uses auxiliary storage space, specifically two additional bitsets the same size as your original. I say "nominally", though, because a compiler might be clever enough to optimize them away. Might. Maybe not. And you are dealing with sizes over a gigabyte each. Realistically, the bit-by-bit approach is probably the way to go, but I think this example is interesting at a theoretical level.
#include <iostream>
#include <iomanip>
#include <bitset>
#include <cstdint>
using namespace std;
constexpr size_t FULL_SIZE = 120; // Some large number
constexpr size_t CHUNK_SIZE = 64; // Currently the mask assumes 64. Otherwise, this code just
// assumes CHUNK_SIZE is nonzero and at most the number of
// bits in long long (which is at least 64).
int main()
{
    // Generate some large bitset. This is just test data, so don't read too much into this.
    bitset<FULL_SIZE> bit_array(302332342342342323);
    bit_array |= bit_array << (FULL_SIZE/2);
    cout << "Source: " << bit_array << "\n";

    // The mask avoids overflow in to_ullong().
    // The mask should have exactly its CHUNK_SIZE low-order bits set.
    // As long as we're dealing with 64-bit chunks, there's a handy constant to handle this.
    constexpr bitset<FULL_SIZE> mask64(UINT64_MAX);
    cout << "Mask: " << mask64 << "\n";

    // Extract chunks.
    const size_t num_chunks = (FULL_SIZE + CHUNK_SIZE - 1) / CHUNK_SIZE; // Round up.
    for (size_t i = 0; i < num_chunks; ++i) {
        // Extract the next CHUNK_SIZE bits, then convert to an integer.
        const bitset<FULL_SIZE> chunk_set{(bit_array >> (CHUNK_SIZE * i)) & mask64};
        unsigned long long chunk_val = chunk_set.to_ullong();

        // NOTE: as long as CHUNK_SIZE <= 64, chunk_val can be converted safely to the desired uint64_t.
        cout << "Chunk " << dec << i << ": 0x" << hex << setfill('0') << setw(16) << chunk_val << "\n";
    }
    return 0;
}
The output:
Source: 010000110010000110011010100111011101011011010101011010110011010000110010000110011010100111011101011011010101011010110011
Mask: 000000000000000000000000000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111111111
Chunk 0: 0x343219a9dd6d56b3
Chunk 1: 0x0043219a9dd6d56b
I am trying to learn how to program in C++, so I created something that lets you enter a minimum and a maximum parameter, computes k+(k+1)+(k+2)+...+(max), and compares it to the analytical value using the standard formula (n(n+1)/2). It seems to work fine when I try small numbers, but when, for example, trying min=4, max=4*10^5 (400,000), I get a negative result for the sum but a positive result from the analytical method, even after changing the type from int to long. Trying other combinations, I have achieved the opposite, with the analytical method resulting in a negative sum. I suspect this is related to the fact that the type int can only go up to a certain number of digits, but I wanted some confirmation of that, and if that isn't the problem, what the actual problem is. The code is provided below:
#include <iostream>
// Values are inconsistent when paramin,parammax become large.
// For example, try (parammin,parammax)=(4,400,000)
int main() {
    int parammax, parammin;
    std::cout << "Input a minimum, then maximum parameter to sum up to" << std::endl;
    std::cin >> parammin >> parammax;

    int sum = 0;
    for (int iter = parammin; iter <= parammax; iter++) {
        sum += iter;
    }
    std::cout << "The sum is: " << sum << std::endl;

    const int analyticalmethod = (parammax*(parammax+1) - parammin*(parammin-1))/2;
    std::cout << "The analytical result for the sum is,"
                 " via (max*(max+1)-min*(min-1))/2: "
              << analyticalmethod << std::endl;
    return 0;
}
Using very large numbers without care is dangerous in C++. The basic types int, long and long long are implementation-dependent, with only the following minimum requirements (a sketch that prints your platform's actual sizes follows the list):
int is at least 16 bits large
long is at least as large as int and at least 32 bits large
long long is at least as large as long and at least 64 bits large
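A quick way to see what your own platform actually provides; a minimal sketch:

#include <climits>
#include <iostream>
#include <limits>

int main()
{
    // sizeof(T) * CHAR_BIT gives the total bits each type occupies here.
    std::cout << "int:       " << sizeof(int) * CHAR_BIT << " bits, max "
              << std::numeric_limits<int>::max() << '\n';
    std::cout << "long:      " << sizeof(long) * CHAR_BIT << " bits, max "
              << std::numeric_limits<long>::max() << '\n';
    std::cout << "long long: " << sizeof(long long) * CHAR_BIT << " bits, max "
              << std::numeric_limits<long long>::max() << '\n';
}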
If you think you may need larger values, you should consider a multiple-precision library like the excellent GMP; a small sketch follows.
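For instance, with GMP's C++ interface (gmpxx.h, linked with -lgmpxx -lgmp), the analytical formula from the question can be evaluated without any overflow:

#include <gmpxx.h>
#include <iostream>

int main()
{
    mpz_class min = 4, max = 400000;

    // Same formula as in the question, but in arbitrary precision.
    mpz_class sum = (max * (max + 1) - min * (min - 1)) / 2;

    std::cout << sum << '\n'; // 80000199994
}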
I want to know if there is a way to add two big integers like
562159862489621563489 + 51456235896321475268
without putting them in a string in C++.
You can use types like long long or unsigned long long, but be aware of integer overflows, and note that the actual biggest number you can get is platform-dependent.
Have a look at
std::cout << std::numeric_limits<long long>::max() << std::endl;
std::cout << std::numeric_limits<unsigned long long>::max() << std::endl;
If this is not enough, it may be worth looking at an arbitrary-precision library such as Boost.Multiprecision (shown earlier on this page); a sketch follows.
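A sketch with Boost.Multiprecision's cpp_int. Note that these literals exceed the range of every built-in integer type, so they have to be spelled as strings at initialization; after that, the arithmetic itself never touches strings:

#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main()
{
    using boost::multiprecision::cpp_int;

    // Too big for unsigned long long, so the values are given as strings.
    cpp_int a("562159862489621563489");
    cpp_int b("51456235896321475268");

    std::cout << a + b << '\n'; // 613616098385943038757
}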