Issue with vector<bool> and printf - c++

#include <vector>
#include <iostream>
#include <stdio.h>
using namespace std;
int main(int argc, const char *argv[])
{
    vector<bool> a;
    a.push_back(false);
    int t = a[0];
    printf("%d %d\n", a[0], t);
    return 0;
}
This code gives the output "5511088 1". I thought it would be "0 0".
Does anyone know why?

The %d format specifier is for arguments the size of an int, so the printf function is expecting two arguments, both the size of an int. However, you're providing one argument that isn't an int, but rather a special proxy object returned by vector<bool> that is convertible to bool.
This causes printf to treat random bytes from the stack as part of the value, while in fact they aren't.
The solution is to cast the first argument to an int:
printf("%d %d\n", static_cast<int>(a[0]), t);
An even better solution would be to prefer streams over printf if at all possible, because unlike printf they are type-safe, which makes this kind of situation impossible:
cout << a[0] << " " << t << endl;
And if you're looking for a type-safe alternative for printf-like formatting, consider using the Boost Format library.

The %d format specifier is for the int type. So try:
cout << a[0] << "\t" << t << endl;

The key to the answer is that vector<bool> isn't really a vector of bools. It's really a vector of proxy objects, which are convertible to int and bool. This allows each bool to be stored as a single bit, for greater space efficiency (at the cost of speed), but causes a number of problems like the one seen here. This requirement was voted into the C++ Standard in a rash moment, and I believe most committee members now believe it was a mistake, but it's in the Standard and we're kind-of stuck with it.
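A small sketch of that proxy behaviour (the type check is only illustrative; C++11 or later assumed):

#include <cstdio>
#include <type_traits>
#include <vector>

int main()
{
    std::vector<bool> a;
    a.push_back(false);

    // operator[] returns std::vector<bool>::reference, a proxy object, not bool
    static_assert(!std::is_same<decltype(a[0]), bool>::value,
                  "a[0] is a proxy object, not a bool");

    // converting the proxy explicitly gives printf a real int to print
    std::printf("%d\n", static_cast<int>(a[0]));
    return 0;
}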

The problem is triggered by the specialization of vector for bool.
The Standard Library defines a specialization of the vector template for bool. The description of this specialization indicates that the implementation should pack the elements so that every bool only uses one bit of memory. This is widely considered a mistake.
Basically, each element of a std::vector<bool> uses 1 bit instead of 1 byte, so passing it straight to printf is undefined behavior.
If you are really willing to use printf, you can work around this by defining bool as char (so the packed specialization is never used) and printing it with %d (implicit conversion to int, 1 for true and 0 for false).
#include <vector>
#include <iostream>
#include <stdio.h>
#define bool char // workaround: vector<bool> below now instantiates as vector<char>
using namespace std;
int main(int argc, const char *argv[])
{
    vector<bool> a;
    a.push_back(false);
    int t = a[0];
    printf("%d %d\n", a[0], t);
    return 0;
}

Related

C++ initialization of vector of structs

I am trying to write a keyword-recognizing subroutine under OSX Yosemite; see the listing below. I have run into a couple of strange things.
I am using the "playground" for making an MWE, and the project builds seemingly OK, but does not want to run:
"My Mac runs OS X 10.10.5, which is lower than String sort's minimum deployment target."
I do not even understand the message, and especially not what my code has to do with sorting.
Then I pasted the relevant code into my app, where the project was generated using CMake, with the same compiler and the same IDE, and the same configuration presents the message
"Non-aggregate type 'vector<QKeys>' cannot be initialized with an initializer list"
in the "vector<QKeys> QInstructions={..}" construction.
When searching for similar error messages, I found several similar questions, and the suggested solutions use the default constructor, manual initialization, and the like. I wonder if a standard-conformant compact initialization is possible?
#include <iostream>
#include <string>
#include <cstring>   // for strncmp
#include <vector>
using namespace std;

enum KeyCode {QNONE=-1,
    QKey1=100, QKey2
};

struct QKeys
{ /** Describes one command keyword */
    std::string Instr;   ///< The command string
    unsigned int Length; ///< The significant length
    KeyCode Code;        ///< The command code
};

vector<QKeys> QInstructions={
    {"QKey1",6,QKey1},
    {"QKey2",5,QKey2}
};

KeyCode FindCode(string Key)
{
    unsigned index = (unsigned int)-1;
    for(unsigned int i=0; i<QInstructions.size(); i++)
        if(strncmp(Key.c_str(),QInstructions[i].Instr.c_str(),QInstructions[i].Length)==0)
        {
            index = i;
            cout << QInstructions[i].Instr << " " << QInstructions[i].Length << " " << QInstructions[i].Code << endl;
            return QInstructions[i].Code;
        }
    return QNONE;
}

int main(int argc, const char * argv[]) {
    string Key = "QKey2";
    cout << FindCode(Key);
}
In your code
vector<QKeys> QInstructions={
    ("QKey1",6,QKey1),
    {"QKey2",5,QKey2}
};
the first line of data is using parentheses "()". Replace them with braces "{}" and it will work.
Also, I see you have written unsigned index = (unsigned int)-1;. Converting -1 to an unsigned type is actually well-defined (it wraps around to the maximum value), but the C-style cast is bad style (see here). You should replace it with:
unsigned index = std::numeric_limits<unsigned int>::max();
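For reference, a minimal sketch of the fully braced initialization this answer describes (assuming C++11 or later, e.g. -std=c++11):

#include <string>
#include <vector>

enum KeyCode { QNONE = -1, QKey1 = 100, QKey2 };

struct QKeys
{
    std::string Instr;
    unsigned int Length;
    KeyCode Code;
};

// every element uses braces, so the whole initializer list is valid
std::vector<QKeys> QInstructions = {
    {"QKey1", 6, QKey1},
    {"QKey2", 5, QKey2}
};

int main()
{
    return QInstructions.size() == 2 ? 0 : 1;
}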
Finally, I found the right solution at
Initialize a vector of customizable structs within a header file. Unfortunately, replacing the parentheses did not help.
Concerning setting an unsigned int to its highest possible value using -1, I find it overkill to use std::numeric_limits<unsigned int>::max() for such a case, a kind of over-standardization. I personally think that as long as we are using two's complement representation, the assignment will be correct. For example, at
http://www.cplusplus.com/reference/string/string/npos/
you may read:
static const size_t npos = -1;
...
npos is a static member constant value with the greatest possible value for an element of type size_t.
...
This constant is defined with a value of -1, which because size_t is an unsigned integral type, it is the largest possible representable value for this type.
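As a quick check of that point, a small sketch (note that the conversion of -1 to an unsigned type is modular and well-defined by the standard, regardless of two's complement representation):

#include <iostream>
#include <limits>

int main()
{
    unsigned int index = (unsigned int)-1;   // wraps around to UINT_MAX
    std::cout << (index == std::numeric_limits<unsigned int>::max())
              << std::endl;                  // prints 1
    return 0;
}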

Declaring array of int in C++ under Xcode

What is the difference between these two declarations?
int myints[5];
array<int,5> myints;
If I use the first declaration and call the size() function, there is an error: "Member reference base type 'int [5]' is not a structure or union".
But if I use the second declaration and call size(), the program works.
Why does the first declaration not work?
#include <iostream>
#include <iomanip>
#include <array>
using namespace std;
int main()
{
    //int myints[5]; //illegal
    array<int,5> myints; //legal
    cout << "size of myints: " << myints.size() << endl; //Error if I use the first declaration
    cout << "sizeof(myints): " << sizeof(myints) << endl;
}
As others have pointed out, std::array is an extension added in C++11 (so you may not have it), which wraps a C style array in order to give it some (but not all) of an STL-like interface. The goal was that it could be used everywhere a C style array could; in particular, it accepts the same initialization syntax as C style arrays, and if the initialization type allows static initialization, its initialization can be static as well. (On the other hand, the compiler cannot deduce its size from the length of the initializer list, which it can for the older C style arrays.)
With regard to size, any experienced programmer will have a size function in their toolkit, along the same lines as std::begin and std::end (which are C++11 extensions, and which everyone had in their toolkit before C++11 standardized them). Something like:
template <typename T>
size_t
size( T const& c )
{
    return c.size();
}

template <typename T, size_t n>
size_t
size( T (&a)[n] )
{
    return n;
}
(In modern C++, the second could even be constexpr.)
Given this, you write size( myInts ), regardless of whether it
is an std::array or a C style array.
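For illustration, a short usage sketch of the two overloads above (compiled as C++11/C++14; under C++17 an unqualified call like this can collide with std::size found through argument-dependent lookup, and std::size is the standard replacement there anyway):

#include <array>
#include <cstddef>
#include <iostream>

template <typename T>
std::size_t size( T const& c ) { return c.size(); }

template <typename T, std::size_t n>
std::size_t size( T (&)[n] ) { return n; }

int main()
{
    int cArr[5] = {1, 2, 3, 4, 5};
    std::array<int, 5> stdArr = {1, 2, 3, 4, 5};
    // the array overload is chosen for cArr, the generic one for stdArr
    std::cout << size(cArr) << " " << size(stdArr) << std::endl; // prints "5 5"
    return 0;
}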
array<int,5> myints uses a std::array, a template that overlays enhanced functionality on top of a "basic" C/C++ array (which is what int myints[5] is). With a basic array, you are just reserving a chunk of storage space, and are responsible for keeping track of its size yourself (although you can use sizeof() to help with this).
With the std::array you get helper functions that can make the array safer and easier to use.
std::array is new in C++11. As you have found, it has a size function. This tells you how many items are in the array.
sizeof, on the other hand, tells you how much memory a variable takes up, i.e. its size in bytes.
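A minimal sketch of that distinction (the exact byte count is implementation-dependent, typically 5 * sizeof(int) here):

#include <array>
#include <iostream>

int main()
{
    std::array<int, 5> a{};
    std::cout << a.size() << std::endl;  // number of elements: 5
    std::cout << sizeof(a) << std::endl; // size in bytes, typically 20
    return 0;
}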
array is a template class that has size() as its member function, while int[] is a plain C array.
By using int myints[5]; , you are declaring an array of 5 ints on the stack, which is the basic C array.
Instead, by using array<int,5> myints; you are declaring an object of type array, a container class defined by the STL (http://en.cppreference.com/w/cpp/container/array), which in turn implements the size() function to retrieve the container's size.
The STL containers are built on top of the "basic" C types to provide extra functionality and make them easier to manage.
int myints[5]; has no size() function, but you can do
int size = sizeof(myints) / sizeof(int);
to get the number of elements in the array.
so basically you can do:
#include <iostream>
#include <iomanip>
#include <array>
using namespace std;
int main()
{
    int myintsArr[5];    //legal
    array<int,5> myints; //legal
    cout << "size of myints: " << myints.size() << endl;
    cout << "sizeof(myintsArr): " << sizeof(myintsArr)/sizeof(int) << endl;
}
and get the same result from both arrays.

Valid use of reinterpret_cast?

Empirically the following works (gcc and VC++), but is it valid and portable code?
#include <iostream>

typedef struct
{
    int w[2];
} A;

struct B
{
    int blah[2];
};

void my_func(B b)
{
    using namespace std;
    cout << b.blah[0] << b.blah[1] << endl;
}

int main(int argc, char* argv[])
{
    using namespace std;
    A a;
    a.w[0] = 1;
    a.w[1] = 2;
    cout << a.w[0] << a.w[1] << endl;
    // my_func(a);                    // compiler error, as expected
    my_func(reinterpret_cast<B&>(a)); // reinterpret, magic?
    my_func( *(B*)(&a) );             // is this equivalent?
    return 0;
}
// Output:
// 12
// 12
// 12
Is the reinterpret_cast valid?
Is the C-style cast equivalent?
Where the intention is to have the bits located at &a interpreted as a
type B, is this a valid / the best approach?
(Off topic: For those who want to know why I'm trying to do this, I'm dealing with two C libraries that want 128 bits of memory and use structs with different internal names, much like the structs in my example. I don't want to memcpy, and I don't want to hack around in the 3rd-party code.)
In C++11, this is fully allowed if the two types are layout-compatible, which is true for structs that are identical and have standard layout. See this answer for more details.
You could also stick the two structs in the same union in previous versions of C++, which had some guarantees about being able to access identical data members (a "common initial sequence" of data members) in the same order for different structure types.
In this case, yes, the C-style cast is equivalent, but reinterpret_cast is probably more idiomatic.
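A small sketch of the union technique mentioned above (A and B are standard-layout structs sharing a common initial sequence, which the standard allows to be inspected through either member):

#include <iostream>

struct A { int w[2]; };
struct B { int blah[2]; };

union AB
{
    A a;
    B b;
};

int main()
{
    AB u;
    u.a.w[0] = 1;
    u.a.w[1] = 2;
    // the common initial sequence (two ints) may be read through
    // the other union member
    std::cout << u.b.blah[0] << u.b.blah[1] << std::endl; // prints 12
    return 0;
}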

what is wrong with this program?

#include <iostream>
#include <string>
#include <fstream>
using namespace std;
int main() {
    string x;
    getline(cin,x);
    ofstream o("f:/demo.txt");
    o.write( (char*)&x , sizeof(x) );
}
I get unexpected output. I don't get back what I wrote into the string.
Why is this?
Please explain.
For example, when I write steve pro I get the output 8/ steve pro ÌÌÌÌÌÌ ÌÌÌÌ in the file.
I expect the output to be steve pro.
You are treating an std::string like something that it is not. It's a complex object that, somewhere in its internals, stores characters for you.
There is no reason to assume that a character array is at the start of the object (&x), and the sizeof the object has no relation to how many characters it may indirectly hold/represent.
You're probably looking for:
o.write(x.c_str(), x.length());
Or just use the built-in formatted I/O mechanism:
o << x;
You seem to have an incorrect model of sizeof, so let me try to get it right.
For any given object x of type T, the expression sizeof(x) is a compile-time constant. C++ will never actually inspect the object x at runtime. The compiler knows that x is of type T, so you can imagine it silently transforming sizeof(x) to sizeof(T), if you will.
#include <iostream>
#include <string>

int main()
{
    std::string a = "hello";
    std::string b = "Stack Overflow is for professional and enthusiast programmers, people who write code because they love it.";
    std::cout << sizeof(a) << std::endl; // this prints 4 on my system
    std::cout << sizeof(b) << std::endl; // this also prints 4 on my system
}
All C++ objects of the same type take up the exact same amount of memory. Of course, since strings have vastly different lengths, they will internally store a pointer to a heap-allocated block of memory. But this does not concern sizeof. It couldn't, because as I said, sizeof operates at compile time.
You get exactly what you write: the raw bytes of the string object itself, which include the binary value of a pointer to char rather than the characters it manages...
#include <iostream>
#include <string>
#include <fstream>
using namespace std;
int main()
{
    string x;
    getline(cin,x);
    ofstream o("tester.txt");
    o << x;
    o.close();
}
If you insist on writing a buffer directly, you can use
o.write(x.c_str(), x.size());
PS A little attention to code formatting unclouds the mind
You're passing the object's address to write into the file, whereas the original content lies somewhere else, pointed to by one of its internal pointers.
Try this:
string x;
getline(cin,x);
ofstream o("D:/tester.txt");
o << x;
// or
// o.write( x.c_str() , x.length());
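A small round-trip sketch to verify that the characters, and not the object's bytes, end up in the file (the file name is just an example):

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::string x = "steve pro";

    std::ofstream out("demo.txt");
    out << x;        // writes the characters, not the std::string object
    out.close();

    std::ifstream in("demo.txt");
    std::string y;
    std::getline(in, y);
    std::cout << (x == y) << std::endl; // prints 1
    return 0;
}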

Does std::less have to be consistent with the equality operator for pointer types?

I've bumped into a problem yesterday, which I eventually distilled into the following minimal example.
#include <iostream>
#include <functional>
int main()
{
    int i=0, j=0;
    std::cout
        << (&i == &j)
        << std::less<int *>()(&i, &j)
        << std::less<int *>()(&j, &i)
        << std::endl;
}
This particular program, when compiled using MSVC 9.0 with optimizations enabled, outputs 000. This implies that
the pointers are not equal, and
neither of the pointers is ordered before the other according to std::less, implying that the two pointers are equal according to the total order imposed by std::less.
Is this behavior correct? Is the total order of std::less not required to be consistent with the equality operator?
Is the following program allowed to output 1?
#include <iostream>
#include <set>
int main()
{
    int i=0, j=0;
    std::set<int *> s;
    s.insert(&i);
    s.insert(&j);
    std::cout << s.size() << std::endl;
}
Seems we have a standard breach! Panic!
Following 20.3.3/8 (C++03):
For templates greater, less, greater_equal, and less_equal, the specializations for any pointer type yield a total order, even if the built-in operators <, >, <=, >= do not.
It seems like a situation where eager optimizations lead to improper code...
Edit: C++0x also holds this one under 20.8.5/8
Edit 2: Curiously, as an answer to the second question:
Following 5.10/1 (C++03):
Two pointers of the same type compare equal if and only if they are both null, both point to the same function, or both represent the same address.
Something is wrong here... on many levels.
No, the result is obviously not correct.
However, MSVC is known not to follow the "unique address" rules to the letter. For example, it merges template functions that happen to generate identical code. Then those different functions will also have the same address.
I guess your example would work better if you actually did something to i and j other than taking their address.
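A hedged sketch of that suggestion, giving i and j observable uses beyond taking their addresses (whether this actually changes MSVC's output is not guaranteed):

#include <functional>
#include <iostream>

int main()
{
    int i = 0, j = 0;
    std::cout
        << (&i == &j)
        << std::less<int *>()(&i, &j)
        << std::less<int *>()(&j, &i)
        << std::endl;
    // observable writes and reads of i and j themselves, so the optimizer
    // has less room to treat the two objects as interchangeable
    i = 1;
    j = 2;
    std::cout << i << " " << j << std::endl;
    return 0;
}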