std::dec still outputting memory address to hex? - c++

I have:
std::cout << "Start = " << std::dec << (&myObject) << std::endl;
to output an address in decimal. However, the address is still coming out in hex??
(I am outputting one of these for each of ten members, so I don't want to assign each one to a variable and then std::dec the variable separately)

The hex and dec manipulators are for integers, not pointers. Pointers are always rendered in the form that printf's %p formatter would have used on your system (which is, usually, hexadecimal notation).
This helps to emphasise the fact that pointers and numbers are distinct. You may consider it to be one of the rare cases in which number semantics and number representation are, to some degree, coupled.
The best you can do is to cast the pointer to uintptr_t before streaming it:
std::cout << "Start = " << std::dec << uintptr_t(&myObject) << std::endl;
…but please consider whether you really need to do so.

Related

How to format padded signed numbers in a stringstream

I'm trying to format numbers using C++ streams and am having trouble with the sign character being placed after the fill instead of before. When I do this:
std::cout << std::setfill('0') << std::setw(6) << -1;
The output is
0000-1
But what I want is
-00001
With printf I can use a format like "%06d", and it works the way I expect, with the sign placed before the zeroes. I want to get the same effect in a stream without using the old formatting functions. I can't use C++20 yet either, so std::format isn't available to me.
I should have looked a little longer before posting. The answer is the std::internal manipulator, which causes the padding to be done internally within the field. This works correctly:
std::cout << std::internal << std::setfill('0') << std::setw(6) << -1;
-00001

Can strings and integers/floats be mixed when formatting via ostringstream

I'm trying to fully understand how to use ostringstream to modernize some code that uses sprintf. The problem is in replacing test code that generates random or sequential data. Here's a simplified example:
char num[6], name[26];
sprintf(num, "%05d", i);
sprintf(name, "Customer # %d", i);
Leaving aside the minutiae of length calculation, copying the result to the array, and null-termination, the num conversion is straightforward:
ostringstream ostr;
ostr << setw(len) << setfill('0') << i;
For example, given i as 123, the result is "00123".
However, for the name, which is actually a mixture of a char string and an integer, I can't figure out how to replicate what sprintf appears to do so easily.
I tried variations of this:
ostr << setw(len) << left << "Customer # " << i << setfill(' ');
Given the same value for i, the result was always "Customer # 123", i.e., the integer was always right-justified, no matter what combination of left, right, or internal I used, or where I placed the various parts. It seems the only solution (I haven't tried it yet) is to first concatenate "Customer # " and i into a separate ostringstream and then insert that, left-justified, into the ostr variable. Am I missing something? Is there some other way?
Correction: as discussed in the comments, in simplifying the example I overlooked a second sprintf that was actually responsible for right-padding the name. So a second insertion into the ostringstream is also needed to accomplish that.

How to understand this unfamiliar casting? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Want to improve this question? Add details and clarify the problem by editing this post.
Closed 3 years ago.
Improve this question
#include <iostream>
int main() {
__int64 a = (__int64)"J\x10";
return 0;
}
When I run it, the result is a = 12950320.
How to understand (__int64)"J\x10"?
"J\x10" is a string literal. String literals are arrays of characters with static storage.
__int64 is presumably some type. Based on the name, we can presume that it is some implementation defined (non-standard) 1 64 bits wide signed integer type.
Expression (T)expression is an explicit type conversion colloquially called C-style cast. It performs one or combination of 2 static cast, reinterpret cast or const cast on the operand expression. In this case, the expression converts the value of the string literal expression into the type __int64.
When the value of an array (such as string literal) is used, it is implicitly converted to a pointer to the first element. This is called decaying. The value of a pointer is the memory address where the object is stored.
So, this pointer to the first character of the string literal is converted to the implementation defined integer type. There is no static cast from pointer to integer, so this is a reinterpret cast. Assuming the integer type is large enough to represent the value stored in the pointer (that'll be the case for most systems today, but is not something guaranteed by C++), this conversion maps the address value to some integer value in an implementation defined manner.
If you're still confused: That's fine; the program doesn't make much sense even after understanding what it does.
1 This means that using such a type makes the program usable only on the limited set of systems that support that special type.
2 It is generally recommended to avoid C-style casts and instead use the specific cast that you intend. C-style casts often prevent the compiler from catching obvious bugs. Also, reinterpret_cast and const_cast should not be used unless you know exactly what they do in the context where you use them and what the ramifications are.
"J\x10" is a string (two chars here, "J" and hexa 10), which by default in C++ is considered as a const char*.
You are trying to cast that const char * to a __int64 value and store it at "a". I think this is a nasty cast.
Running this code several times will shows you that the pointer may vary from to execution to execution (it may show as the same, just due to OS cache).
Another thing to take into account is that __int64 is not a standard type, but a MS one.
Well, that is an explicit type conversion, as mentioned in the comments. In this case it relies heavily on the programmer knowing what he or she is doing and how much memory the conversion needs.
Let's say we have 2 bytes of memory, and the value of those 2 bytes is 0:
int16_t memory = 0; // 16 bits is 2 bytes,
+-----------+-----------+
| 0000-0000 | 0000-0000 |
+-----------+-----------+
Now we can read those 2 bytes individually (byte by byte) or collectively (as a whole 2-byte value). Let's think of it as a union, and imagine that 2-byte memory space as overlapping views:
union test{
int16_t bytes2;
int8_t bytes[2];
char chars[2];
};
So when we store the value 20299 in that union we can do the following:
union test sub;
sub.bytes2 = 20299;
std::cout << sub.bytes2 << std::endl; // 20299
std::cout << (int)sub.bytes[1] << " " << (int)sub.bytes[0] << std::endl; // 79 75
std::cout << sub.chars[1] << " " << sub.chars[0] << std::endl; // O K
Which works like this:
+-----------+-----------+
| 20 299 |
+-----------+-----------+
| 79 | 75 |
+-----------+-----------+
| 'O' | 'K' |
+-----------+-----------+
But we can do something similar if we ignore a couple of compiler warnings:
sub.bytes2 = (int16_t)'OK'; //CygWin will issue warning about this
std::cout << sub.bytes2 << std::endl; // 20299
std::cout << (int)sub.bytes[1] << " " << (int)sub.bytes[0] << std::endl; // 79 75
std::cout << sub.chars[1] << " " << sub.chars[0] << std::endl; // O K
But if we pass the string "OK" instead of the multi-character constant 'OK' we get the error cast from 'const char*' to 'int16_t {aka short int}' loses precision [-fpermissive]. This is because a string literal decays to a const char*, a full-width pointer that does not fit in 16 bits; note also that the literal carries an extra terminating character \0 marking the end of the string, so "OK" occupies 3 bytes, not 2.
Beyond the error I described earlier, there is a lot more going on. A string literal used as a value is a pointer to a memory address, similar in functionality to our union: it refers to a specific memory address with no information about how many bytes that memory occupies (i.e. the length of the string). The headers ( <string> and <cstring> ) help with that issue in many ways, but that is off-topic for this question.
That is the bare basis of how this cast works: it reads one region of memory as a type different from the original.
In the code you provided we have __int64 a = (__int64)"J\x10", where __int64 is a 64-bit integer on Windows, and "J\x10" is a string literal of a specific size that holds the values of those characters; \x10 is an escape for the byte with value 16.
union test{
uint64_t val;
char chars[8];
};
int64_t a = (int64_t)"J\x10";
union test sub;
sub.val = a;
// 4299178052
std::cout << sub.val << std::endl;
// ◦ ◦ ◦ ◦ # # D
for (int i = 7; i >= 0; --i)
    std::cout << sub.chars[i] << " ";
std::cout << std::endl;
// 0 0 0 1 0 40 40 44
for (int i = 7; i >= 0; --i)
    std::cout << std::hex << (int)sub.chars[i] << " ";
std::cout << std::endl;
But as you can see I didn't get the same result as you, and the issues go further than this: __int64 is a Visual Studio-only type, corresponding to long long; reading memory through a constant's address like this is prone to referencing errors; and in general it is bad code that should be discouraged.
This kind of behaviour, especially with a compiler-specific type, is not easy to understand; it does not have to produce the same result elsewhere, and if copied and pasted it can raise numerous errors that are hard to decipher when the OS and compiler are unknown. You should always use well-known, portable types and aliases.

Output of the cout changes according to typecasting way

I'm working on a program and have a strange, cout related problem. Since the program is a bit big and the code talks best, I'll paste the relevant snippets.
First, I have an iterator, *it defined in a for as
for(vector<facet*>::iterator it=facets_to_dump->begin(); it<facets_to_dump->end(); it++)
In this for, if I use the expression
facet* facet_to_work_on = *it;
cout << facet_to_work_on->facet_id << "\t";
Nicely prints out integers.
But, if I use the notation
cout << (facet*)(*it)->facet_id << "\t";
This code prints out hex values. Hex values are equal to the integer values. Any idea why this is happening?
Thanks in advance.
The reason
cout << (facet*)(*it)->facet_id << "\t";
prints out a hex value is that -> binds harder than the (facet*) cast, that is it evaluates
(*it)->facet_id
and casts the result to a facet*. Pointers are output in hex.
The fix is to parenthesise the cast so that the member access applies to the cast pointer:
cout << ((facet*)(*it))->facet_id << "\t";
Note that std::dec would not help here: the mis-parenthesised expression yields a facet* pointer, and pointers are always streamed in %p form (usually hex) regardless of the dec manipulator. Also, since *it is already a facet*, the cast is redundant and you can simply write (*it)->facet_id.

Why is std::cout not printing the correct value for my int8_t number?

I have something like:
int8_t value;
value = -27;
std::cout << value << std::endl;
When I run my program I get a wrong random value of <E5> outputted to the screen, but when I run the program in gdb and use p value it prints out -27, which is the correct value. Does anyone have any ideas?
Because int8_t is the same as signed char, and char is not treated as a number by the stream. Cast it to e.g. int16_t
std::cout << static_cast<int16_t>(value) << std::endl;
and you'll get the correct result.
This is because int8_t is synonymous to signed char.
So the value will be shown as a char value.
To force int display you could use
std::cout << (int) 'a' << std::endl;
This will work, as long as you don't require special formatting, e.g.
std::cout << std::hex << (int) 'a' << std::endl;
In that case you'll get artifacts from the widened size, especially when the char value is negative: for (int)(int8_t)-1 you'd get FFFFFFFF or FFFF 1 instead of FF.
Edit: see also this very readable write-up that goes into more detail and offers more strategies to deal with this: http://blog.mezeske.com/?p=170
1 depending on architecture and compiler
Most probably int8_t is
typedef signed char int8_t;
Therefore when you stream out value, the underlying type (a char) is printed.
One solution to get an integer printed is to cast value before streaming the int8_t:
std::cout << static_cast<int>(value) << std::endl;
It looks like it is printing the value as a character: if you use char value; instead, it prints the same thing. int8_t comes from the C standard header <stdint.h> (<cstdint> in C++), so it may be that cout is not prepared for it (or it is just typedef'd to a char type).