C++, Writing vector<char> to ofstream skips whitespace

Despite my sincerest efforts, I cannot seem to locate the bug here. I am writing a vector to an ofstream. The vector contains binary data. However, for some reason, when a whitespace character (0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x20) is supposed to be written, it is skipped.
I have tried using iterators, and a direct ofstream::write().
Here is the code I'm using. I've commented out some of the other methods I've tried.
void
write_file(const std::string& file,
           std::vector<uint8_t>& v)
{
    std::ofstream out(file, std::ios::binary | std::ios::ate);
    if (!out.is_open())
        throw file_error(file, "unable to open");
    out.unsetf(std::ios::skipws);
    /* ostreambuf_iterator ...
    std::ostreambuf_iterator<char> out_i(out);
    std::copy(v.begin(), v.end(), out_i);
    */
    /* ostream_iterator ...
    std::copy(v.begin(), v.end(), std::ostream_iterator<char>(out, ""));
    */
    out.write((const char*) &v[0], v.size());
}
EDIT: And the code to read it back.
void
read_file(const std::string& file,
          std::vector<uint8_t>& v)
{
    std::ifstream in(file);
    v.clear();
    if (!in.is_open())
        throw file_error(file, "unable to open");
    in.unsetf(std::ios::skipws);
    std::copy(std::istream_iterator<char>(in), std::istream_iterator<char>(),
              std::back_inserter(v));
}
Here is an example input:
30 0 0 0 a 30 0 0 0 7a 70 30 0 0 0 32 73 30 0 0 0 2 71 30 0 0 4 d2
And this is the output I am getting when I read it back:
30 0 0 0 30 0 0 0 7a 70 30 0 0 0 32 73 30 0 0 0 2 71 30 0 0 4 d2
As you can see, 0x0a is being omitted, ostensibly because it's whitespace.
Any suggestions would be greatly appreciated.

You forgot to open the file in binary mode in the read_file function.

Rather than mucking around with writing vector<>s directly, boost::serialization is a more effective approach, using boost::archive::binary_oarchive.

I think 0x0a is being treated as a newline. I still have to think about how to get around this.

The istream_iterator by design skips whitespace. Try replacing your std::copy with this:
std::copy(
    std::istreambuf_iterator<char>(in),
    std::istreambuf_iterator<char>(),
    std::back_inserter(v));
The istreambuf_iterator goes directly to the streambuf object, which will avoid the whitespace processing you're seeing.
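Putting the two fixes together, a minimal sketch of a corrected read_file (file_error is the asker's exception type, assumed to be declared elsewhere; needs <fstream>, <iterator> and <vector>):
void
read_file(const std::string& file,
          std::vector<uint8_t>& v)
{
    // Binary mode: no newline translation or other text-mode processing.
    std::ifstream in(file, std::ios::binary);
    v.clear();
    if (!in.is_open())
        throw file_error(file, "unable to open");
    // istreambuf_iterator reads raw chars straight from the stream buffer,
    // bypassing the formatted-input layer (and its whitespace handling).
    v.assign(std::istreambuf_iterator<char>(in),
             std::istreambuf_iterator<char>());
}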

Related

Why does Microsoft's implementation of std::string require 40 bytes on the stack?

Having recently watched this video about facebook's implementation of string, I was curious to see the internals of Microsoft's implementation. Unfortunately, the string file (in %VisualStudioDirectory%/VC/include) doesn't seem to contain the actual definition, but rather just conversion functions (e.g. atoi) and some operator overloads.
I decided to do some poking and prodding at it from user-level programs. The first thing I did, of course, was to test sizeof(std::string). To my surprise, std::string takes 40 bytes! (On 64-bit machines anyways.) The previously mentioned video goes into detail about how facebook's implementation only requires 24 bytes and gcc's takes 32 bytes, so this was shocking to say the least.
We can dig a little deeper by writing a simple program that prints off the contents of the data byte-by-byte (including the c_str address), as such:
#include <cstdint>
#include <iostream>
#include <string>

int main()
{
    std::string test = "this is a very, very, very long string";

    // Print contents of std::string test, word by word.
    char* data = reinterpret_cast<char*>(&test);
    for (size_t wordNum = 0; wordNum < sizeof(std::string); wordNum += sizeof(uint64_t))
    {
        for (size_t i = 0; i < sizeof(uint64_t); i++)
            std::cout << (int)(data[wordNum + i]) << " ";
        std::cout << std::endl;
    }

    // Print the value of the address returned by test.c_str().
    // (Doing this byte-by-byte to match the above values.)
    const char* testAddr = test.c_str();
    char* dataAddr = reinterpret_cast<char*>(&testAddr);
    std::cout << "c_str address: ";
    for (size_t i = 0; i < sizeof(const char*); i++)
        std::cout << (int)(dataAddr[i]) << " ";
    std::cout << std::endl;
}
This prints out:
48 33 -99 -47 -55 1 0 0
16 78 -100 -47 -55 1 0 0
-52 -52 -52 -52 -52 -52 -52 -52
38 0 0 0 0 0 0 0
47 0 0 0 0 0 0 0
c_str address: 16 78 -100 -47 -55 1 0 0
Examining this, we can see that the second word contains the address that points to the allocated data for the string, the third word is garbage (a buffer for Short String Optimization), the fourth word is the size, and the fifth word is the capacity. But what about the first word? It appears to be an address, but what for? Shouldn't everything already be accounted for?
For the sake of completeness, the following output shows SSO (the string is set to "Short String"). Note that the first word still seems to represent a pointer:
0 36 -28 19 123 1 0 0
83 104 111 114 116 32 83 116
114 105 110 103 0 -52 -52 -52
12 0 0 0 0 0 0 0
15 0 0 0 0 0 0 0
c_str address: 112 -9 79 -108 23 0 0 0
EDIT: OK, having done more testing, it appears that the size of std::string actually decreases to 32 bytes when compiled for release, and the first word is no longer there. But I'm still really interested in knowing why that is the case, and what that extra pointer is used for in debug mode.
Update: As per the tip from the user Yuushi, the extra word appears to be related to Debug Iterator Support. This was verified when I turned off Debug Iterator Support (an example of doing this is shown here) and the size of std::string was reduced to 32 bytes, with the first word now missing.
However, it would still be really interesting to see how Debug Iterator Support uses that additional pointer to check for incorrect iterator use.
Visual Studio 2015 uses xstring instead of string to define std::basic_string.
NOTE: This answer applies to VS2015 only; VS2013 uses a different implementation, but they are more or less the same.
It's implemented as:
template<class _Elem,
    class _Traits,
    class _Alloc>
class basic_string
    : public _String_alloc<_String_base_types<_Elem, _Alloc> >
{
    // This class has no member data.
};
_String_alloc uses a _Compressed_pair<_Alty, _String_val<_Val_types> > to store its data. In std::string, _Alty is std::allocator<char> and _Val_types is _Simple_types<char>; because std::is_empty<std::allocator<char>>::value is true, sizeof _Compressed_pair<_Alty, _String_val<_Val_types> > is the same as sizeof _String_val<_Val_types>.
_String_val inherits from _Container_base, which is a typedef of _Container_base0 when _ITERATOR_DEBUG_LEVEL == 0 and of _Container_base12 otherwise. The difference between them is that _Container_base12 contains a pointer to a _Container_proxy for debugging purposes. Besides that, _String_val also has these members:
union _Bxty
{   // storage for small buffer or pointer to larger one
    _Bxty()
    {   // user-provided, for fancy pointers
    }

    ~_Bxty() _NOEXCEPT
    {   // user-provided, for fancy pointers
    }

    value_type _Buf[_BUF_SIZE];
    pointer _Ptr;
    char _Alias[_BUF_SIZE];  // to permit aliasing
} _Bx;

size_type _Mysize;  // current length of string
size_type _Myres;   // current storage reserved for string
size_type _Mysize; // current length of string
size_type _Myres; // current storage reserved for string
with _BUF_SIZE being 16. The pointer and size_type members are naturally aligned together on this system, so no padding is necessary.
Hence, when _ITERATOR_DEBUG_LEVEL == 0, sizeof std::string is:
_BUF_SIZE + 2 * sizeof size_type
i.e. 16 + 2 * 8 = 32 bytes on a 64-bit build; otherwise it's
sizeof pointer_type + _BUF_SIZE + 2 * sizeof size_type
i.e. 8 + 16 + 2 * 8 = 40 bytes, matching the sizes observed in the question.
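For what it's worth, a quick sketch to check these numbers yourself (assuming a 64-bit MSVC build; _ITERATOR_DEBUG_LEVEL is the macro behind Debug Iterator Support, must be defined before any standard header, and has to be consistent across all translation units):
// 0 disables the debug iterator machinery, and with it the
// _Container_proxy pointer discussed above.
#define _ITERATOR_DEBUG_LEVEL 0

#include <iostream>
#include <string>

int main()
{
    // Expect 32 on a 64-bit MSVC build with this setting
    // (16-byte SSO buffer + 8-byte size + 8-byte capacity);
    // 40 with the default debug setting of 2.
    std::cout << sizeof(std::string) << '\n';
}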

boost serialization hexadecimal decimal encoding of data

I am new to boost serialization but this seems very strange to me.
I have a very simple class with two members
int number // always = 123
char buffer[?] // buffer with ? size
So sometimes I set the size to buffer[31] and then serialize the class:
22 serialization::archive 8 0 0 1 1 0 0 0 0 123 0 0 31 0 0 0 65 65
We can see the 123 and the 31, so no issue here; both are in decimal format.
Now I change buffer to buffer[1024], so I expected to see:
22 serialization::archive 8 0 0 1 1 0 0 0 0 123 0 0 1024 0 0 0 65 65 ---
but this is the actual outcome:
22 serialization::archive 8 0 0 1 1 0 0 0 0 123 0 0 0 4 0 0 65 65 65
Has boost switched to hex for the buffer size only?
Notice the other value is still decimal.
So what happens if I switch number from 123 to 1024?
I would imagine 040?
22 serialization::archive 8 0 0 1 1 0 0 0 0 1024 0 0 0 4 0 0 65 65
If this is by design, why does the 31 not get converted to 1F? It's not consistent.
This causes problems in our load function for the split_free; we were doing this:
unsigned int size;
ar >> size;
but as you might guess, when this is 040, it truncates to zero :(
What is the recommended solution to this?
I was using boost 1.45.0, but I tested this on boost 1.56.0 and it is the same.
EDIT: sample of the serialization function
template<class Archive>
void save(Archive& ar, const MYCLASS& buffer, unsigned int /*version*/) {
    ar << boost::serialization::make_array(
        reinterpret_cast<const unsigned char*>(buffer.begin()),
        buffer.length());
}
MYCLASS is just a wrapper around a char*, with the first element an unsigned int holding the length, approximating a UNICODE_STRING:
http://msdn.microsoft.com/en-gb/library/windows/desktop/aa380518(v=vs.85).aspx
The code is the same whether the length is 1024 or 31, so I would not have expected this to be a problem.
I don't think Boost "switched to hex". I honestly don't have any experience with this, but it looks like Boost is serializing as an array of bytes, which can only hold numbers from 0 through 255; 1024 would be a byte with the value 4 followed by a byte with the value 0.
"why does the 31 not get converted to 1F ? its not consistent" - your assumptions are creating false inconsistencies. Stop assuming you can read the serialization archive format when actually you're just guessing.
If you want to know, trace the code. If not, just use the archive format.
If you want "human accessible form", consider the xml_oarchive.

Tellg returning unexpected value

I have a function which reads lines from a file. But before each read it prints the position (from tellg) at which it's going to read the next line.
my function is:
void print()
{
    int j = 0;
    string a, b, c, d, e;
    ifstream i("data.txt");
    cout << setw(15) << left << "Hash Value"
         << setw(15) << left << "Employee Name"
         << setw(15) << left << "Employee ID"
         << setw(15) << left << "Salary"
         << setw(15) << left << "Link" << endl;
    while (j < 10)
    {
        j++;
        cout << i.tellg() << endl;
        i >> a >> b >> c >> d >> e;
        cout << setw(15) << left << a
             << setw(15) << left << b
             << setw(15) << left << c
             << setw(15) << left << d
             << setw(15) << left << e << endl;
    }
    i.close();
}
The file it is reading from is data.txt:
0 --- 0 0 -1
1 --- 0 0 -1
2 --- 0 0 -1
3 --- 0 0 -1
4 --- 0 0 -1
5 --- 0 0 -1
6 --- 0 0 -1
7 --- 0 0 -1
8 --- 0 0 -1
9 --- 0 0 -1
And the output I am getting is:
Hash Value Employee Name Employee ID Salary Link
0
0 --- 0 0 -1
81
1 --- 0 0 -1
157
2 --- 0 0 -1
233
3 --- 0 0 -1
309
4 --- 0 0 -1
385
5 --- 0 0 -1
461
6 --- 0 0 -1
541
7 --- 0 0 -1
617
8 --- 0 0 -1
693
9 --- 0 0 -1
Every line is of length 76 characters, so every time the position printed should increase by 76.
But I don't understand what's going on when the 2nd line is printed [hash value 1] and the 7th line is printed [hash value 6].
Can someone please help me with this?
A couple of things:
First and foremost, you're not reading line by line, so there is no reason to assume that you advance the number of characters in a line each time through the loop. If you want to read line by line, use std::getline, and then extract the fields from the line, either using std::istringstream or some other method.
The result of tellg is not an integer, and when converted to an integral type (not necessarily possible), there is no guaranteed relationship with the number of bytes you have extracted. On Unix machines, the results will correspond, and under Windows if (and only if) the file has been opened in binary mode. On other systems, there may be no visible relationship whatsoever. The only valid portable use of the result of tellg is to pass it to a seekg later; anything else depends on the implementation.
How do you know that each line contains exactly 76 characters? Depending on how the file was produced, there might be a BOM at the start (which would count as three characters if the file is encoded in UTF-8 and you are in the "C" locale). And what about trailing whitespace? Again, if your input is line oriented, you should be reading lines, and then parsing them.
Finally, and perhaps most important: you're using the results of >> without verifying that the operator worked. In your case, the output suggests that it did, but you can never be sure without verifying.
Globally, your loop should look like:
std::string line;
while ( std::getline( i, line ) ) {
    std::istringstream l( line );
    std::string a;
    std::string b;
    std::string c;
    std::string d;
    std::string e;
    l >> a >> b >> c >> d >> e >> std::ws;
    if ( !l || l.get() != EOF ) {
        // Format error in line...
    } else {
        // ...
    }
}
Outputting tellg still won't tell you anything, but at least
you'll read the input correctly. (Outputting the length of
line might be useful in some cases.)
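Applied to the original function, a minimal sketch might look like this (keeping the question's data.txt name and column layout; this is an illustration of the advice above, not the answerer's exact code):
#include <fstream>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

void print()
{
    std::ifstream in("data.txt");
    std::cout << std::setw(15) << std::left << "Hash Value"
              << std::setw(15) << std::left << "Employee Name"
              << std::setw(15) << std::left << "Employee ID"
              << std::setw(15) << std::left << "Salary"
              << std::setw(15) << std::left << "Link" << '\n';

    std::string line;
    while ( std::getline( in, line ) ) {
        // Position reported by the stream *after* reading this line;
        // only portably useful as an argument to seekg later.
        std::cout << in.tellg() << '\n';
        std::istringstream l( line );
        std::string a, b, c, d, e;
        if ( l >> a >> b >> c >> d >> e ) {
            std::cout << std::setw(15) << std::left << a
                      << std::setw(15) << std::left << b
                      << std::setw(15) << std::left << c
                      << std::setw(15) << std::left << d
                      << std::setw(15) << std::left << e << '\n';
        }
        // else: format error in line...
    }
}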

c++ reading (.cso) compiled shader object returning \0

I've tried two different methods to read from this .cso file, which is Microsoft's compiled shader object format.
HRESULT BasicReader::ReadData(_In_z_ wchar_t const* fileName,
                              _Inout_ std::unique_ptr<uint8_t[]>& data,
                              _Out_ size_t* dataSize) {
    ScopedHandle hFile(safe_handle(CreateFileW(fileName, GENERIC_READ, FILE_SHARE_READ,
                                               nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr)));
    LARGE_INTEGER fileSize = { 0 };
    FILE_STANDARD_INFO fileInfo;
    GetFileInformationByHandleEx(hFile.get(), FileStandardInfo, &fileInfo, sizeof(fileInfo));
    fileSize = fileInfo.EndOfFile;

    data.reset(new uint8_t[fileSize.LowPart]);
    DWORD bytesRead = 0;
    ReadFile(hFile.get(), data.get(), fileSize.LowPart, &bytesRead, nullptr);
    *dataSize = bytesRead;
}
GetFileInformationByHandleEx returned true
and ReadFile returned true
HRESULT BasicReader::ReadData(_In_z_ wchar_t const* fileName,
                              _Inout_ std::unique_ptr<uint8_t[]>& data,
                              _Out_ size_t* dataSize) {
    std::ifstream fstream;
    fstream.open(fileName, std::ifstream::in | std::ifstream::binary);
    if (fstream.fail())
        return false;

    char* val;
    fstream.seekg(0, std::ios::end);
    size_t size = size_t(fstream.tellg());
    val = new char[size];
    fstream.seekg(0, std::ios::beg);
    fstream.read(val, size);
    fstream.close();

    auto f = reinterpret_cast<unsigned char*>(val);
    data.reset(f);
    *dataSize = size;
}
Both of these methods make data = \0
However, when I point it at another file in the same directory, it gives me data. What is happening here?
Here's the file.
I read the first few bytes of the file and it's this:
0 2 254 255 254 255 124 1 68 66 85 71 40 0 0 0 184 5 0 0 0 0 0 0 1 0 0 0 144 0 0
0 72 0 0 0 148 0 0 0 4 0 0 0 104 5 0 0 212 2 0 0 67 58 92 85 115 101 114 115 92
106 97 99 111 98 95 48 48 48 92 68 111 99 117 109 101 110 116 115 92 86 105 115
117 97 108 32 8...
And the working file looks like this:
68 68 83 32 124 0 0 0 7 16 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
32 0 0 0 0 64 0 0 0 0 0 0 0 32 0 0 0 0 0 255 0 0 255 0 0 255 0 0 0 0 0 0 0 0 16
0 0 0 0 0 0 0 0 0 0...
Your code is working as expected: the char* data array contains the file's data. What's going wrong here is that the array is misinterpreted by your visualizers (whatever you use to visualize: debugger visualizer, std::cout, etc.). They all try to print a null-terminated (C-style) string, but it terminates instantly, as the first char is 0. Raw arrays can also be visualized in debuggers as pointers: an address and only the first element's value (because the debugger cannot know where the array ends). In C# the situation is different, as arrays are objects there, much like std::vectors, so their size is known.
Offtopic (sorry for that):
I would like to comment on your second, native C++ BasicReader::ReadData implementation, as it hurts my C++ feelings ;) You're trying to write C code in C++11 style. "There is more than one way to skin a cat", but here is some advice:
don't use raw pointers (char*); use STL containers instead (std::vector, std::string)
do you have a really good reason to use std::unique_ptr<uint8_t[]> data + size_t dataSize instead of std::vector<uint8_t>?
avoid using raw operator new(); use STL containers, std::make_shared, std::make_unique (if available)
seekg() + tellg() file size counting can report a wrong size in case of big files
Doesn't this code look a little cleaner and safer:
std::vector<uint8_t> ReadData(const std::string filename)
{
    std::vector<uint8_t> data;
    std::ifstream fs;
    fs.open(filename, std::ifstream::in | std::ifstream::binary);
    if (fs.good())
    {
        auto size = FileSize(filename);
        // TODO: check here if size is more than size_t
        data.resize(static_cast<size_t>(size));
        fs.seekg(0, std::ios::beg);
        fs.read(reinterpret_cast<char*>(&data[0]), size);
        fs.close();
    }
    return data;
}
And the usage is even cleaner:
std::vector<uint8_t> vertexShaderData = ReadData("VertexShader.cso");
if (vertexShaderData.empty()) { /* handle it */ }
auto wannaKnowSize = vertexShaderData.size();
As a bonus, you got a nice-looking debugger visualization.
And a safe FileSize() implementation: you can use either boost::filesystem or std::tr2, if your STL has implemented it.
#include <filesystem>

namespace filesystem = std::tr2::sys;
/* or: namespace filesystem = boost::filesystem */

uintmax_t FileSize(std::string filename)
{
    filesystem::path p(filename);
    if (filesystem::exists(p) && filesystem::is_regular_file(p))
        return filesystem::file_size(p);
    return 0;
}
Hope it helps somehow.

How do I read one number at a time and store it in an array, skipping duplicates?

I'm trying to read numbers from a file into an array, discarding duplicates. For instance, say the following numbers are in a file:
41 254 14 145 244 220 254 34 135 14 34 25
Though the number 34 occurs twice in the file, I would only like to store it once in the array. How would I do this?
(Fixed, but I guess a better term would be a 64-bit unsigned int; I was using numbers above 255.)
vector<int64_t> v;
copy(istream_iterator<int64_t>(cin), istream_iterator<int64_t>(), back_inserter(v));

set<int64_t> s;
vector<int64_t> ov;
ov.reserve(v.size());
for (auto i = v.begin(); i != v.end(); ++i) {
    // insert().second is true only for the first occurrence of a value
    if (s.insert(*i).second)
        ov.push_back(*i);
}
// ov contains only unique numbers in the same order as the original input file.
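As a self-contained sketch of the same idea (numbers.txt and reading from a file rather than cin are assumptions for the example; uint64_t matches the asker's note about a 64-bit unsigned int):
#include <cstdint>
#include <fstream>
#include <iostream>
#include <set>
#include <vector>

int main()
{
    std::ifstream in("numbers.txt");  // hypothetical input file
    std::set<uint64_t> seen;
    std::vector<uint64_t> unique;

    uint64_t n;
    while (in >> n) {
        // insert() reports via .second whether the value was newly added,
        // so duplicates are skipped while first occurrences keep their order.
        if (seen.insert(n).second)
            unique.push_back(n);
    }

    for (uint64_t x : unique)
        std::cout << x << ' ';
    std::cout << '\n';
}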