Cannot convert a very large number to hex [closed] - c++

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I have the following string:
std::string data{"4a00f000f00e1887a9900fff0000004ec00ff004a00f000f00e1887a9900fff0000004ec00ff004a00f000f00e1887a9900fff0000004ec00ff004a00f000f00e1887a9900fff0000004ec00ff000ff004a00f000f00e1887a9900fff000"};
I need to extract it as its equivalent hex value:
4a00f000f00e1887a9900fff0000004ec00ff004a00f000f00e1887a9900fff0000004ec00ff004a00f000f00e1887a9900fff0000004ec00ff004a00f000f00e1887a9900fff0000004ec00ff000ff004a00f000f00e1887a9900fff000
When I run the following code it prints ffffffffffffffff.
I see that the issue is the value being too large to fit in value, but how do I overcome this?
Is there a way to perhaps put it in a vector bit by bit using a for loop?
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::string data{"4a00f000f00e1887a9900fff0000004ec00ff004a00f000f00e1887a9900fff0000004ec00ff004a00f000f00e1887a9900fff0000004ec00ff004a00f000f00e1887a9900fff0000004ec00ff000ff004a00f000f00e1887a9900fff000"};
    std::istringstream hex_buffer(data);
    unsigned long long value;          // only 64 bits wide
    hex_buffer >> std::hex >> value;   // overflows: failbit is set and value becomes ULLONG_MAX (prints ffffffffffffffff)
    std::cout << value;
    return 0;
}

The C++ language uses fixed-size integral types. The basic set contains (in increasing size): char, short, int, long. char has at least 8 bits, short and int at least 16, and long at least 32.
long long (standard since C++11) has at least 64 bits. But I know of no architecture with an integral type of more than 64 bits, meaning that the largest value that can be represented in an unsigned long long is 0xFFFFFFFFFFFFFFFF.
You can of course define a class able to handle integer values of arbitrary size, or use a library that can process arbitrary-size values, like gmp, but you cannot expect to store a number of more than 64 bits in a 64-bit integer.
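If the goal is just to hold the raw value, one option (a sketch only, not part of the original answer) is to store the number as a sequence of bytes rather than in a single integer, much like the vector the question mentions. The snippet below parses the string two hex digits at a time into a std::vector<unsigned char>; the shortened string is a stand-in for the long one in the question:

#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

int main()
{
    // Shortened stand-in for the long hex string in the question.
    std::string data{"4a00f000f00e1887a9900fff"};

    // Parse two hex digits at a time into one byte each.
    std::vector<unsigned char> bytes;
    for (std::size_t i = 0; i + 1 < data.size(); i += 2)
        bytes.push_back(static_cast<unsigned char>(
            std::stoul(data.substr(i, 2), nullptr, 16)));

    // Print the bytes back as hex to check the round trip.
    for (unsigned char b : bytes)
        std::printf("%02x", b);
    std::printf("\n");
}

Each byte holds two hex digits, so the whole number fits no matter how long the string is; arithmetic on it, however, still requires a big-integer class or a library such as gmp.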

Related

logic behind assign binary literals to an int [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
I found this logic in some code and don't understand the reason behind it; why use this instead of assigning a normal int?
(It's a character controller in a 3D environment.)
// assumes we're not blocked
int blocked = 0x00;
if ( its_a_floor )
    blocked |= 0x01;
if ( its_a_wall )
    blocked |= 0x02;
0x00 is a "normal int". We are used to base-10 representations, but other than our having 10 fingers in total, base 10 is not special. When you use an integer literal in code you can choose between decimal, octal, hexadecimal and binary representation (see here). Don't confuse the value with its representation: 0b01 is the same integer as 1. There is literally no difference in the value.
As a fun fact and to illustrate the above, consider that 0 is actually not a decimal literal. It is an octal literal. It doesn't really matter, because 0 has the same representation in any base.
As the code is using bit-wise operators, it would be most convenient to use binary literals. For example, you can easily see that 0b0101 | 0b1010 equals 0b1111 and that 0b0101 & 0b1010 equals 0b0000. This isn't nearly as obvious with base-10 representations.
However, the code you posted does not use binary literals, but rather hexadecimal literals. This might be because binary literals have only been standard C++ since C++14, or because programmers used to bit-wise operators are so accustomed to hexadecimals that they still prefer them over binary.
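For illustration only, here is a minimal sketch (not from the original post) of how such flags are usually combined and then tested later; the names BLOCKED_FLOOR and BLOCKED_WALL are made up for this example:

#include <cstdio>

// Hypothetical flag constants; since C++14 they could equally be written
// as binary literals: 0b0001, 0b0010, ...
constexpr int BLOCKED_FLOOR = 0x01;
constexpr int BLOCKED_WALL  = 0x02;

int main()
{
    int blocked = 0x00;          // assume we're not blocked
    blocked |= BLOCKED_FLOOR;    // set the "floor" bit
    blocked |= BLOCKED_WALL;     // set the "wall" bit

    if (blocked & BLOCKED_FLOOR)
        std::puts("blocked by a floor");
    if (blocked & BLOCKED_WALL)
        std::puts("blocked by a wall");
}

The point of the pattern is that a single int carries several independent yes/no answers, which is why the literals are chosen so that each one sets a different bit.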

What exactly is a bit vector in C++? [duplicate]

This question already has answers here:
C/C++ Bit Array or Bit Vector
(5 answers)
Closed 7 years ago.
So, I was reading a question in Cracking the Coding Interview: 5th Edition which asks to implement a bit vector with 4 billion bits. It defines a bit vector as an array that compactly stores boolean values by using an array of ints, where each int stores a sequence of 32 bits, or boolean values. I am somewhat confused by this definition. Can someone explain what exactly the above statement means?
I couldn't really understand the question that has been marked as a duplicate, since there is no associated example. The second answer there does have an example, but it's not really understandable. It would be great if any of you could add an example, albeit for a small value only. Thanks!
The bool type is at least 1 byte, which means at least 8 bits.
An int on a 32-bit system is 32 bits.
You can therefore fit 32 booleans into the 4 bytes of one int, instead of the 32 bytes (minimum) you would need with an array of bool.
Within an int you can store and retrieve those 32 booleans with basic bit operations: &, | and ~.
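To make the packing concrete, here is a small, illustrative bit-vector sketch (mine, not taken from the linked duplicate): bit i lives in word i / 32, at position i % 32 inside that word.

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Minimal bit vector backed by 32-bit words.
struct BitVector {
    std::vector<std::uint32_t> words;

    explicit BitVector(std::size_t nbits)
        : words((nbits + 31) / 32, 0) {}

    void set(std::size_t i)   { words[i / 32] |=  (std::uint32_t{1} << (i % 32)); }
    void clear(std::size_t i) { words[i / 32] &= ~(std::uint32_t{1} << (i % 32)); }
    bool get(std::size_t i) const { return (words[i / 32] >> (i % 32)) & 1u; }
};

int main()
{
    BitVector bits(100);       // 100 booleans stored in only 4 words (16 bytes)
    bits.set(3);
    bits.set(64);
    std::printf("%d %d %d\n", bits.get(3), bits.get(4), bits.get(64)); // 1 0 1
}

The interview question scales the same idea up: 4 billion bits fit in roughly 500 MB of ints, whereas 4 billion bool objects would take at least 4 GB.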

Assign octal/hex declared INT/UINT to another variable [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
My WIN32 (C++) code has a UINT, let's call it number.
The value of this UINT (or INT, it doesn't matter) starts with a 0 and is therefore recognized as an octal value. It's possible to use the standard operators and the value keeps its octal base. The same is possible with hex (with a leading 0x).
The problem is that I have to use the value of number in a buffer to calculate with it, without changing the value of number. I can assign a value like 07777 to buffer on its declaration line, but if I use an operation like buffer = number, the value in buffer is recognized as being in decimal.
Does anybody have a solution for me?
There's no such thing in C as an "octal value". Integers are stored in binary.
For example, these three constants:
10
012
0xA
all have exactly the same type and value. They're just different notations -- and the difference exists only in your source code, not at run time. Assigning an octal constant to a variable doesn't make the variable octal.
For example, this:
int n = 012;
stores the value ten in n. You can print that value in any of several formats:
printf("%d\n", n);
printf("0%o\n", n);
printf("0x%x\n", n);
In all three cases, the stored value is converted to a human-readable sequence of characters, in decimal, octal, or hexadecimal.
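For completeness, here is a minimal runnable version of the snippet above (assembled for this example, not part of the original answer):

#include <cstdio>

int main()
{
    int n = 012;                 // octal literal: the value ten

    std::printf("%d\n", n);      // prints 10
    std::printf("0%o\n", n);     // prints 012
    std::printf("0x%x\n", n);    // prints 0xa
}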
Does anybody have a solution for me?
No, because there is no actual problem.
(Credit goes to juanchopanza for mentioning this in a comment.)

Increment pointer bit by bit [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I know that pointers increment by the number of bytes that sizeof(type_i_am_using) returns. However, is there any way to make a pointer increment bit by bit?
However, is there any way to make a pointer increment bit by bit?
If you meant "byte by byte": No, because there is something called alignment. Addresses that are not aligned cannot be addresses of valid objects, hence a pointer containing an unaligned address is invalid. Most operations with invalid pointers invoke undefined behavior. If you want to e.g. access array subobjects of a standard-layout class where that array is the first member, cast the pointer to the element type of the array and work from there. There is no direct point in what you describe.
If you meant literally "bit by bit": There are well-known methods of iterating through all bits in an object representation using a simple for loop.
Most computer architectures don't let you address individual bits. If you need to, say, iterate through a sequence of bits, you need to iterate over the bytes instead (using a char *, or a pointer to a larger, unsigned integral type) and extract bits through bit shifting and bit mask operations. (value >> x) & 1 will extract the bit at index x from the right; value |= 1 << x will set it to 1, and value &= ~(1 << x) will set it to 0.
Note that vector<bool> is specialized to pack its values into individual bits.
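A small sketch (not from the original answer) exercising the three operations just described; the starting value and the bit indices are arbitrary illustrative choices:

#include <cstdio>

int main()
{
    unsigned int value = 0b0100;   // start with only bit 2 set
    unsigned int x = 0;

    value |= 1u << x;              // set bit 0   -> 0b0101
    value &= ~(1u << 2);           // clear bit 2 -> 0b0001

    std::printf("bit 0: %u\n", (value >> 0) & 1u);   // 1
    std::printf("bit 2: %u\n", (value >> 2) & 1u);   // 0
}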
A pointer can't be incremented bit by bit because a char is the smallest amount of addressable memory. This is mandated in the language specification. If you need to start looking inside individual bits then you most likely will want to use the bit shifting/masking operations.
For example to look inside a character you might want to do something like this:
bool get_bit_n(unsigned int n, char x)
{
    return ((1 << n) & x) != 0;   // true if bit n of x is set
}
Also you might want to look into std::bitset.
No. A char is the smallest thing the machine can address. You can, however, traverse all the bits using bit operations. In C++ you can use std::bitset or the std::vector<bool> specialization, or bit fields.
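As an illustration (mine, not code from the original answers), here is one way to walk a small buffer "bit by bit": an outer loop over bytes via a char-sized pointer type, and an inner loop over the 8 bits of each byte using shifting and masking. The buffer contents are arbitrary example values:

#include <cstdint>
#include <cstdio>

int main()
{
    std::uint8_t buffer[] = {0x4a, 0x00, 0xf0};   // arbitrary example bytes

    // Outer loop over bytes, inner loop over the bits of each byte
    // (most significant bit first).
    for (std::uint8_t byte : buffer) {
        for (int bit = 7; bit >= 0; --bit)
            std::printf("%d", (byte >> bit) & 1);
        std::printf(" ");
    }
    std::printf("\n");
}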

How to determine the number of bits in int [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
This is what I tried:
int i = -1, size = 1;
while (i >>= 1)
    size++;
printf("%d", size);
The goal is to determine the size of int without using the sizeof operator.
The above loop turns out to be infinite. Is there a way to fix it so it does what it is intended to do?
Just use unsigned for i, rather than int. The two are guaranteed to have the same size, and right shift of a signed integer is implementation-defined (but will usually shift in the sign bit). And don't forget to divide the result by CHAR_BIT (which is not guaranteed to be 8) if you want the size in bytes rather than bits.
You have chosen a negative number for right-shifting.
Right shifting a negative number, it gets filled with the sign bit 1 (or not, depending on implementation), so your value can never be 0 (=false), which means you get precisely the infinite loop you are complaining about.
Your loop is indeed infinite.
Start from i = 1 and shift it left until you reach i = 0, then stop; the number of shifts is the number of bits.
Edit:
This will work for signed and unsigned integers alike.
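A minimal sketch combining the suggestions above: count with an unsigned value so that zeros are shifted in, then divide by CHAR_BIT if the size in bytes is wanted. This is an illustrative reconstruction, not code from the original answers:

#include <climits>
#include <cstdio>

int main()
{
    unsigned int i = ~0u;   // all bits set; unsigned, so >> shifts in zeros
    int bits = 1;
    while (i >>= 1)         // terminates once every bit has been shifted out
        ++bits;

    std::printf("bits in int:  %d\n", bits);
    std::printf("bytes in int: %d\n", bits / CHAR_BIT);
}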