While I was writing code on a 64-bit machine for a C++ program, I noticed that printing the address of a variable (for example) produces just 12 hexadecimal characters instead of 16. Here's an example code:
int a = 3 ;
cout << sizeof(&a) << " bytes" << endl;
cout << &a << endl ;
The output is:
8 bytes
0x7fff007bcce0
Obviously, the address of a variable is 8 bytes (64-bit system). But when I print it, I get only 12 hexadecimal digits instead of 16.
Why is this? I think it is because the 4 "lost" digits were leading zeros that were not printed. But this is only my guess, and I would like a definitive and technically correct answer.
How can I print the entire address? Is there a built-in solution, or should I manually use sizeof to get the real length and then pad the address with the right number of zeros?
Forgive me, I googled for a day for an answer to my stupid question, but I wasn't able to find one. I'm just a newbie.
(On Stack Overflow I did not find any question/answer about what I needed to know, but maybe I'm wrong.)
Someone asks a pretty similar question here: c++ pointer on 64 bit machine
Hope this helps :)
To print the full 64-bit address with leading zeros you can use:
std::cout
    << "0x"
    << std::hex
    << std::noshowbase
    << std::setw(16)
    << std::setfill('0')
    << n
    << std::endl;
Got it from: How can I pad an int with leading zeros when using cout << operator?
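Note that n in that snippet must already be an integer, not a pointer: streaming a raw pointer goes through the void* overload, which formats the address its own way and ignores std::hex. Here is a self-contained sketch of my own (assuming std::uintptr_t from <cstdint> is available):

#include <cstdint>   // std::uintptr_t
#include <iomanip>   // std::setw, std::setfill
#include <iostream>

int main() {
    int a = 3;
    // Cast the pointer to an integer type wide enough to hold it,
    // then apply the padding manipulators shown above.
    std::uintptr_t n = reinterpret_cast<std::uintptr_t>(&a);
    std::cout << "0x"
              << std::hex << std::noshowbase
              << std::setw(16) << std::setfill('0')
              << n << std::endl;
}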
I am currently writing a book on C++ and Windows 32-bit programming for people such as you, but unfortunately I am not yet done with it :(
The following code demonstrates how you would display a 64-bit unsigned number using cout:
// Define a 64-bit number. You may need to include the <cstdint> header, depending on your C++ compiler.
uint64_t UI64 = 281474976709632ULL; // Must include the ULL suffix; this is C99/C++11 specific.
// unsigned __int64 UI64 = 281474976709632ULL; // Must include the ULL suffix; this is Microsoft C++ specific.
// Set decimal output.
cout << dec;
// Display message to user.
cout << "64-bit unsigned integer value in decimal is: " << UI64 << endl;
cout << "\n64-bit unsigned integer value in hexadecimal is: ";
// Set the uppercase flag to display hex value in capital letters.
cout << uppercase;
// Set hexadecimal output.
cout << hex;
// Set the width output to be 16 digits.
cout.width(16);
// Set the fill output to be zeros.
cout.fill('0');
// Set right justification for output.
cout << right;
// Display the 64-bit number.
cout << UI64 << endl;
You may need to (type) cast the address into a 64-bit unsigned value.
In this case, you can do the following:
// (type) cast the pointer address into an unsigned 64-bit integer.
uint64_t UADD64 = (uint64_t)&UI64; // C99 C++ compiler specific.
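Putting the two pieces together, a minimal runnable sketch based on the snippets above (my own arrangement, assuming a compiler that provides <cstdint>) might look like this:

#include <cstdint>
#include <iomanip>
#include <iostream>
using namespace std;

int main() {
    uint64_t UI64 = 281474976709632ULL;

    // Cast the variable's own address into a 64-bit unsigned integer.
    uint64_t UADD64 = (uint64_t)&UI64;

    // Print it as 16 uppercase hex digits, zero-filled and right-justified.
    cout << uppercase << hex << right << setfill('0');
    cout << "Address: 0x" << setw(16) << UADD64 << endl;
    return 0;
}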
I am new to C++, but I am curious enough to dig into these strange things.
I was wondering what happens when I convert a pointer to an int, and whether the resulting values indicate something. So I wrote this program to test my ideas, since pointers into the same array are close enough in memory for their locations to be compared.
This is the code that will explain my question clearly:
#include <iostream>
#include <string>
using namespace std;
int main() {
cout << "--------------------[ Pointers ]--------------------" << endl;
const unsigned int NSTRINGS = 9;
string strArray[NSTRINGS] = { "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine" };
string *pStartArray = &strArray[0]; // Setting pStartArray pointer location to the first block of the array.
string *pEndArray = &strArray[NSTRINGS - 1]; // Setting pEndArray pointer location to the last block of the array.
cout << "---[ pStartArray value : " << *pStartArray << endl; // Showing the value of the pStartArray pointer (Just for safety check).
cout << "---[ pEndArray value : " << *pEndArray << endl; // Showing the value of the pEndArray pointer (Just for safety check).
short int blockDifferential = pEndArray - pStartArray; // Calculating the block differential of those two pointers.
cout << "---[ Differential of the block locations that pointers are pointing to in array (pEndArray - pStartArray) : " << blockDifferential << endl;
long long pStartIntLocation = (long long)pStartArray; // Converting the memory location (hexadecimal) of the pStartArray pointer to an integer (maybe it's in bytes, regardless of being positive or negative). What's your opinion on this?
cout << "---[ (long long) pStartArray current memory location to int : \"" << pStartIntLocation << "\"" << endl;
long long pEndIntLocation = (long long)pEndArray;
cout << "---[ (long long) pEndArray current memory location converted to int : \"" << pEndIntLocation << "\"" << endl; // Converting the memory location (hexadecimal) of the pEndArray pointer to an integer (maybe it's in bytes, regardless of being positive or negative). What's your opinion on this?
short int locationDifferential = pEndIntLocation - pStartIntLocation; // Subtracting the integer-converted location of pStartArray from that of pEndArray.
cout << "---[ Differential of the memory locations converted to int ((long long)pEndArray - (long long)pStartArray) : " << locationDifferential << " (Bytes?)" << endl; // Even after running the program multiple times, this number does not change. Something's fishy. Doesn't it seem like a random thing? It must be investigated.
cout << "---[ Size of variable <string> (According to the computer it's running on) : " << sizeof(string) << " (Bytes)" << endl; // To know how much memory a string consumes. For example, mine was 40.
// Here it gets interesting. I can get the block differential of the pointers using <locationDifferential>.
cout << "---[ Differential of the cell location (AGAIN) using the <locationDifferential> that I have calculated : " << locationDifferential/sizeof(string) << endl; // So definitely <locationDifferential> was in bytes, because I got 8 again. I just wonder if it is a new discovery. LOL.
/*
I might look really crazy, but I can't explain it another way.
This is my last try to make it as clear as I can.
pStartArray ]--\ pEndArray ]--\
\ - 8 cell difference - \
Array = | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|-------- Differential ---------|
Cell difference : 8 string cells
String size that I considered : 32 Bytes
data (location difference) : 8 * 32 = 256
So if you see this, all this might make sense.
I am excited to see what opinions you professional programmers have come up with.
- D3F4U1T
*/
cout << "----------------------------------------------------" << endl;
return 0;
}
How exactly does this work?
Are pStartIntLocation and pEndIntLocation both in bytes?
If so, why do they sometimes hold a negative value?
This is strange.
Also correct me if I am wrong about any information I provided.
- Best regards.
- D3F4U1T.
Edit 2:
Does the value that results from converting a pointer to a long long mean anything? Is it the memory address, with the difference that this one is in bytes?
Edit 3: Seems like this is related to the virtual address space. Correct me if I am wrong. Does the OS have any mechanism to number memory as bytes? For example: Byte 1, Byte 2, ....
A "pointer" is an integer quantity of some length whose contents are understood to represent a memory address. (By convention, zero means NULL ... no address.)
If you typecast it into an integer, you are simply declaring to the compiler: "no, these however-many bits should not be treated as an address ... treat them as an integer." The content of the location does not change, only the compiler's interpretation of it.
Typecasting does not change the bits – only the momentary interpretation of what they are and what they mean.
FYI: unions are another way to do a similar thing: every element of a union overlaps the others and describes various interpretations of the same area of storage. (In the Fortran language, this was called EQUIVALENCE.)
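As an illustration of that last point, here is a small sketch of my own (the union name and members are made up; it assumes pointers fit in 64 bits):

#include <iostream>

// Two views of the same storage: a pointer and an integer wide enough
// to hold it. Every member of a union overlaps the others.
union PointerBits {
    void*              as_pointer;
    unsigned long long as_integer;
};

int main() {
    int x = 42;
    PointerBits pb;
    pb.as_pointer = &x;
    // Same bits, two interpretations. (Strictly, reading the member you
    // did not write is undefined behaviour in C++, though most compilers
    // allow it; std::memcpy is the fully portable alternative.)
    std::cout << "as a pointer : " << pb.as_pointer << "\n";
    std::cout << "as an integer: " << std::hex << pb.as_integer << "\n";
}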
I've noticed some weird behaviour in C++ which I don't understand.
I'm trying to print a truncated double in a hexadecimal representation.
This code's output is 17, which is a decimal representation:
double a = 17.123;
cout << hex << floor(a) << '\n';
while this code's output is 11, which is also my desired output:
double a = 17.123;
long long aASll = floor(a);
cout << hex << aASll << '\n';
As a double can hold really big numbers, I'm afraid of getting wrong output when storing the truncated number in a long long variable. Any suggestions or improvements?
Quoting cppreference's documentation page for std::hex (and friends):
Modifies the default numeric base for integer I/O.
This suggests that std::hex does not have any effect on floating point inputs. The best you are going to get is
cout << hex << static_cast<long long>(floor(a)) << '\n';
or a function that does the same.
uintmax_t from <cstdint> may be useful to get the largest available integer if the values are always positive. After all, what is a negative hex number?
Since a double value can easily exceed the maximum resolution of available integers, this won't cover the whole range. If the floored values exceed what can fit in an integer type, you are going to have to do the conversion by hand or use a big integer library.
Side note: std::hexfloat does something very different and does not work correctly in all compilers, due to some poor wording in the Standard that has since been hammered out and should be corrected in the next revision.
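To see the difference in action, here is a small sketch of my own (not from the answer above):

#include <cmath>
#include <iostream>

int main() {
    double a = 17.123;

    // std::hex only affects integer output, so the cast is what makes
    // the truncated value print as hexadecimal ("11").
    std::cout << std::hex << static_cast<long long>(std::floor(a)) << '\n';

    // std::hexfloat prints the double itself in C99-style hexadecimal
    // floating-point notation, which is a very different beast.
    std::cout << std::hexfloat << a << '\n';  // e.g. 0x1.11f7ced916873p+4
}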
Just write your own version of floor and have it return an integral value. For example:
#include <cmath>
#include <iostream>
using namespace std;

// Floors the double and returns it as an integer type, so std::hex applies.
long long floorAsLongLong(double d)
{
    return (long long)floor(d);
}

int main() {
    double a = 17.123;
    cout << hex << floorAsLongLong(a) << endl;
}
In a C++ program, I want to display a column of floating point values so that the sign, digits, and decimal point all line up. Multiple leading zeros should pad the whole number part of each value, when necessary. For example:
A column of floating point values:
+000.0012
-000.0123
+000.1235
-001.2346
+012.3457
-123.4568
I had an elaborately commented test program that demonstrated the problem. But, as I was editing this post, I found the answer I need here:
- Extra leading zeros when printing float using printf?
The essential problem was that I was using a format code of "%+04.4f" when I should have used "%+09.4f", because the total field width I want is 9 (see the sketch after this list):
1 for the sign
3 for the whole digits
1 for the decimal point
4 for the fractional digits
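A quick sketch of that format in action (values taken from the example column above):

#include <cstdio>

int main() {
    // %+09.4f: '+' forces a sign, '0' pads with zeros, 9 is the total
    // field width, and .4 asks for four fractional digits.
    std::printf("%+09.4f\n", 0.0012);   // +000.0012
    std::printf("%+09.4f\n", -1.2346);  // -001.2346
}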
I do not have enough reputation points to comment on that post, so thank you from here, @AndiDog.
I still do not know how to get multiple leading zeros using just stream formatting flags. But that is a battle for another day. I will stick with a mixture of printf and stream for now.
A couple of comments have mentioned std::setfill('0') and std::setw. While these are necessary, they're not sufficient to the task. For example, this code:
std::cout << std::setfill('0') << std::setw(7) << std::showpos << 0.012;
will produce: 0+0.012 as its output. This is obviously not quite what we wanted.
We need to add the std::internal flag to tell the stream to insert "internal padding" -- i.e., the padding should be inserted between the sign and the rest of the number, so code like this:
std::cout << std::setfill('0') << std::setw(7) << std::internal << std::showpos << 0.012;
...produces the output we want: +00.012.
Also note that the padding character is "sticky", so if you alternate between using std::setw with numeric and non-numeric types, you'll probably need/want to change it each time. Otherwise, something like std::cout << setw(12) << name; will produce results like: 0000000Jerry, which is rarely desired either.
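For instance (a small sketch of my own; name is just a hypothetical string variable):

#include <iomanip>
#include <iostream>
#include <string>

int main() {
    std::string name = "Jerry";
    std::cout << std::setfill('0') << std::setw(12) << 3.14 << "\n";  // 000000003.14
    // The fill character is still '0' here, so reset it before
    // padding non-numeric output.
    std::cout << std::setfill(' ') << std::setw(12) << name << "\n";  //        Jerry
}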
To assure that we always get the same number of places after the decimal point, we also need to set the std::fixed flag, and specify the number of places with std::setprecision, such as:
#include <iostream>
#include <iomanip>
#include <vector>

int main() {
    std::vector<double> values { 0.1234, 1.234, 1.5555 };
    for (auto d : values)
        std::cout << std::internal << std::showpos << std::fixed << std::setw(9)
                  << std::setprecision(3) << std::setfill('0') << d << "\n";
}
Which produces the output I believe is desired:
+0000.123
+0001.234
+0001.556
There is one circumstance under which you won't get aligned results this way though: if you have a number too large to fit into the field provided, all the places before the decimal point will still be printed. For example, if we added 1e10 to the list of numbers to be printed by the preceding code, it would be printed out as: +10000000000.000, which obviously won't align with the rest.
The obvious way to deal with that would be to just put up with it, and if it arises often enough to care about, increase the field size to accommodate the larger numbers.
Another possibility would be to use fixed notation only when the number is below a certain threshold, and switch to (for example) scientific notation for larger numbers.
At least in my experience, code like this tends to be used primarily for financial data, in which case the latter option usually isn't acceptable though.
To show the positive sign, you use std::showpos.
To show the leading zeros, you use std::setw(n) and std::setfill('0').
To show the digits after the decimal point, you use std::setprecision(m).
To show the zeros between the + sign and the first digit, you use std::internal.
To keep the decimal point at a fixed position, you use std::fixed.
#include <iostream> // std::cout, std::fixed
#include <iomanip>  // std::setprecision, std::setw, std::setfill

int main () {
    double f = 1.234;
    double g = -12.234;
    std::cout << std::showpos << std::internal << std::fixed << std::setprecision(4)
              << std::setw(9) << std::setfill('0') << f << '\n';
    // showpos, internal, fixed and setprecision are sticky; repeat only
    // setw and setfill for each new number.
    std::cout << std::setw(9) << std::setfill('0') << g << "\n";
    return 0;
}
// output:
// +001.2340
// -012.2340
The only way I know how to do this is to display the sign first, then set the fill, width, and precision and display the absolute value, since the sign has already been displayed. You also need to set the format flag to ios::fixed.
#include <iostream>
#include <iomanip>
#include <cmath> // std::abs overloads for floating-point types
using namespace std;

int main()
{
    float x[] = { 000.0012, -.0123, .1235, -1.2346, 12.3457, -123.4568 };
    cout.setf(ios::fixed);
    for (int i = 0; i < 6; i++)
        cout << (x[i] > 0 ? '+' : '-') << setfill('0') << setw(8) << setprecision(4) << abs(x[i]) << endl;
    return 0;
}
Displays
+000.0012
-000.0123
+000.1235
-001.2346
+012.3457
-123.4568
In a simple console application I am trying to read a file containing a hex value on each line.
It works for the first few, but after 4 or 5 it starts outputting cdcdcdcd.
Any idea why this is? Is there a limit on using read in this basic manner?
The first value in the file is its size.
std::ifstream read("file.bin");
int* data;
try
{
data = new int [11398];
}
catch (int e)
{
std::cout << "Error - dynamic array not created. Code: [" << e << "]\n";
}
int size = 0;
read>>std::hex>>size;
std::cout<<std::hex<<size<<std::endl;
for( int i = 0; i < size; i++)
{
read>>std::hex>>data[i];
std::cout<<std::hex<<data[i]<<std::endl;
}
The values I get returned are:
576 (size)
1000323
2000000
1000005
cdcdcdcd
cdcdcdcd
cdcdcdcd
...
The first value that is meant to be output in cdcdcdcd's place is 80000000.
You are overflowing an int.
If you change to unsigned int, you will be able to fill up to 0xFFFFFFFF.
You can check the ranges with (requires #include <limits>):
std::cout << "Range of int:          "
          << std::numeric_limits<int>::min()
          << " to "
          << std::numeric_limits<int>::max()
          << "\n";
std::cout << "Range of unsigned int: "
          << std::numeric_limits<unsigned int>::min()
          << " to "
          << std::numeric_limits<unsigned int>::max()
          << "\n";
Note: there are no negative hex values (hex is designed as a compact representation of a bit pattern).
You should really check that the read worked:
if (read>>std::hex>>data[i])
{
// read worked
}
else
{
// read failed.
}
It sounds very much like your read fails.
Note that on a 32-bit int system, 0x80000000 is out of range for int. The range of valid values is probably -0x80000000 through to 0x7FFFFFFF.
It's important not to mix up values with representations. "0x80000000" , when read via std::hex, means the positive integer which is written as 80000000 in base 16. It's neither here nor there that a particular negative integer may be stored internally in a signed int in 2's complement with the same binary representation as a positive value of type unsigned int has when the positive integer 80000000 is stored in it.
Consider reading into unsigned int if you intend to use this technique. Also, it is essential that you check the read operation for success or failure. If a stream extraction fails then the stream is put into an error state, where all subsequent reads fail until you call .clear() on the stream.
NB. std::hex (like most other stream manipulators; std::setw is the notable exception) is "sticky": once you set it, it stays set until you specify std::dec to restore the default.
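Putting those points together, a sketch of the corrected loop might look like this (my own arrangement: a vector instead of new[], and unsigned int so that 80000000 fits):

#include <fstream>
#include <iostream>
#include <vector>

int main() {
    std::ifstream read("file.bin");

    unsigned int size = 0;
    if (!(read >> std::hex >> size)) {
        std::cout << "Failed to read the size entry\n";
        return 1;
    }

    std::vector<unsigned int> data(size);
    for (unsigned int i = 0; i < size; i++) {
        // Check every extraction; once a read fails, the stream stays
        // in an error state and all later reads fail too.
        if (!(read >> std::hex >> data[i])) {
            std::cout << "Read failed at entry " << i << "\n";
            break;
        }
        std::cout << std::hex << data[i] << "\n";
    }
    return 0;
}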
What I would like to be able to do is convert a char array (which may be binary data) to a list of hex values of the form: ab 0d 12 f4, etc.
I tried doing this with
lHexStream << "<" << std::hex << std::setw (2) << character << ">";
but this did not work, since the data printed out as:
<ffe1><2f><ffb5><54>< 6><1b><27><46><ffd9><75><34><1b><ffaa><ffa2><2f><ff90><23><72><61><ff93><ffd9><60><2d><22><57>
Note here that some of the values have four hex digits in them instead of two.
What I'm looking for is what they have in Wireshark, where a char array (or binary data) is represented in a hex format like:
08 0a 12 0f
where each character value is represented by just 2 HEX characters of the form shown above.
It looks like byte values greater than 0x80 are being sign-extended to short (I don't know why it's stopping at short, but that's not important right now). Try this:
lHexStream << '<' << std::hex << std::setw(2) << std::setfill('0')
           << static_cast<unsigned int>(static_cast<unsigned char>(character))
           << '>';
You may be able to remove the outer cast but I wouldn't rely on it.
EDIT: added std::setfill call, which you need to get <06> instead of < 6>. Hat tip to jkerian; I hardly ever use iostreams myself. This would be so much shorter with fprintf:
fprintf(ihexfp, "<%02x>", (unsigned char)character);
As Zack mentions, the four-digit values appear because all byte values of 128 and over are interpreted as negative (the base type is a signed char), and that 'negative value' is then sign-extended as it is widened to a signed short.
Personally, I found the following to work fairly well (casting through unsigned char first avoids the sign extension):
char *myString = inputString;
for (int i = 0; i < length; i++)
    std::cout << std::hex << std::setw(2) << std::setfill('0')
              << static_cast<unsigned int>(static_cast<unsigned char>(myString[i])) << " ";
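Wrapped up as a function, a self-contained sketch of the Wireshark-style dump from the question (the name hexDump and the test data are my own inventions):

#include <cstddef>
#include <iomanip>
#include <iostream>

// Print each byte as exactly two hex digits, space-separated.
void hexDump(const char* data, std::size_t length) {
    for (std::size_t i = 0; i < length; ++i)
        std::cout << std::hex << std::setw(2) << std::setfill('0')
                  << static_cast<unsigned int>(static_cast<unsigned char>(data[i]))
                  << ' ';
    std::cout << '\n';
}

int main() {
    const char buffer[] = "\x08\x0a\x12\xf4";
    hexDump(buffer, 4);  // prints: 08 0a 12 f4
}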
I think the problem is that the binary data is being interpreted as a multi-byte encoding when you're reading the characters. This is evidenced by the fact that each of the 4-character hex codes in your example has the high bit set in the lower byte.
You probably want to read the binary stream in ascii mode.