I'm trying to export various values, such as ints and simple structs, to a binary file. Here's some code:
#include <iostream>
#include <fstream>
#include <cstdint>
using namespace std;
template<class T> void writeToStream(ostream& o, T& val)
{
    o.write((char*)&val, sizeof(T));
    cout << o.tellp() << endl; // always outputs 4
}

struct foo {
    uint16_t a, b;
};

int main()
{
    foo myFoo = {42, 42};
    ofstream test("test.txt", ios::binary);
    writeToStream(test, myFoo);
    test.close();
}
The program should generate an output file 4 bytes long, but when I open it in Notepad++, it's only 2 bytes long. If I change myFoo.a and myFoo.b to values of 256 or more (which need more than one byte to store), the file becomes 4 bytes long. I'm using the Visual Studio 11 Developer Preview on Win7; I haven't checked whether the same happens on other systems or compilers. How can I make it output correctly for values of a or b under 256?
A file can only be read back by a program that understands the format in which it was stored. Notepad++ has no understanding of the format in which your file was stored, so it has no ability to read it back and render it sensibly. Either write the file in a format Notepad++ understands, such as ASCII text, or only read the file with a program that understands the format you wrote it in.
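For illustration, here is a minimal read-back sketch (my own addition, not part of the original program); it simply reverses the write, using the same type and the same byte layout:

#include <iostream>
#include <fstream>
#include <cstdint>

struct foo {
    std::uint16_t a, b;
};

// Mirror of writeToStream(): read sizeof(T) raw bytes back into val.
template <class T> void readFromStream(std::istream& i, T& val)
{
    i.read(reinterpret_cast<char*>(&val), sizeof(T));
}

int main()
{
    foo myFoo = {0, 0};
    std::ifstream test("test.txt", std::ios::binary);
    readFromStream(test, myFoo);
    std::cout << myFoo.a << " " << myFoo.b << std::endl; // expect: 42 42
}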
I have cleaned up your code as follows. Though I do not know why the old code output two bytes, the new code does output four.
#include <iostream>
#include <fstream>
#include <cstdint>
using std::cout;
using std::endl;
using std::uint16_t;
using std::ostream;
using std::ofstream;
using std::ios;
template <class T> void writeToStream(ostream& o, T& val)
{
    o.write(reinterpret_cast<char*>(&val), sizeof(T));
    cout << o.tellp() << endl; // always outputs 4
}

struct foo {
    uint16_t a, b;
};

int main()
{
    foo myFoo = {42, 42};
    ofstream test("test.txt", ios::binary);
    writeToStream(test, myFoo);
    // Just let the stream "test" pass out of scope.
    // It closes automatically.
    //test.close();
    return 0;
}
(My standard library lacks <cstdint>, so I actually tested with short rather than uint16_t, but I doubt that this matters.)
The std::ofstream type is derived from std::ostream. The writeToStream() function is happier, or at least more regular and more general, if passed a plain std::ostream. Also, for information: using namespace std; is almost never recommended in C++.
Good luck.
I am trying to read from a file in C++, with the catch that the code must work for any char type. This works fine for char; however, when I try to read UTF-8 (char8_t) data, I simply get zeros. The following, using char, works as expected.
#include <iostream>
#include <fstream>
int main()
{
    typedef char c;
    c* buff = new c[38];
    std::basic_fstream<c> s;
    s.open("test_file_1.txt", std::ios::in);
    s.read(buff, 38);
    std::cout << reinterpret_cast<char*>(buff) << std::endl;
    delete[] buff;
    return 0;
}
the file "test_file_1.txt" simple contains This file contains very important data written as plain text.
Now when I simply change typedef char c to typedef char8_t c, I get no output. When I read the data as integers, the entire buffer is full of zeros, which explains why there is no output from cout. I tried writing a non-zero value to the buffer before reading in the data, and fstream::read does not modify the data in any way.
Any explanation of what is going on and how to fix it would be greatly appreciated. All code was compiled with g++ -std=c++20 test.cpp using GCC 11.1.0 on Arch Linux.
Edit 1: fixed ambiguity in my wording.
Edit 2: here is the code that does not work:
#include <iostream>
#include <fstream>
int main()
{
    typedef char8_t c;
    c* buff = new c[38];
    std::basic_fstream<c> s;
    s.open("test_file_1.txt", std::ios::in);
    s.read(buff, 38);
    std::cout << reinterpret_cast<char*>(buff) << std::endl;
    delete[] buff;
    return 0;
}
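One plausible explanation, offered strictly as an assumption: the default locale provides no ctype/codecvt facets for char8_t, so the char8_t stream fails and leaves the buffer untouched. A workaround sketch along those lines, reading the raw bytes through a plain char pointer into a char8_t buffer:

#include <iostream>
#include <fstream>

int main()
{
    // One extra element keeps a terminating zero so the buffer prints safely.
    char8_t buff[39] = {};
    std::fstream s;
    s.open("test_file_1.txt", std::ios::in);
    // char8_t has the same size and object representation as unsigned char,
    // so reading the bytes through a char pointer is well defined.
    s.read(reinterpret_cast<char*>(buff), 38);
    std::cout << reinterpret_cast<char*>(buff) << std::endl;
    return 0;
}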
So I have the following code:
#include <iostream>
#include <string>
#include <array>
using namespace std;
int main()
{
    array<long, 3> test_vars = { 121, 319225, 15241383936 };
    for (long test_var : test_vars) {
        cout << test_var << endl;
    }
}
In Visual Studio I get this output:
121
319225
-1938485248
The same code executed on the website cpp.sh gave the following output:
121
319225
15241383936
I expect the output to be like the one from cpp.sh. I don't understand the output from Visual Studio. It's probably something simple, but I'd appreciate it nonetheless if someone could tell me what's wrong. It has become a real source of annoyance to me.
MSVC uses a 4-byte long. The C++ standard only guarantees that long is at least as large as int, so the maximum number representable by a signed 32-bit long is 2,147,483,647. The value you supplied is too large for that long, and you will have to use a larger data type of at least 64 bits.
The other compiler uses a 64-bit long, which is why it worked there.
You could use int64_t, defined in the <cstdint> header, which guarantees a 64-bit signed integer.
Your program would read:
#include <cstdint>
#include <iostream>
#include <array>
using namespace std;
int main()
{
    array<int64_t, 3> test_vars = { 121, 319225, 15241383936 };
    for (int64_t test_var : test_vars) {
        cout << test_var << endl;
    }
}
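If you want to confirm the size difference on your own toolchain, a quick check (my addition, not part of the original answer):

#include <cstdint>
#include <iostream>

int main()
{
    // MSVC typically prints "4 8"; most 64-bit Linux toolchains print "8 8".
    std::cout << sizeof(long) << " " << sizeof(std::int64_t) << endl;
}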
Question
I have a few structures I want to write to a binary file. They consist of integers from cstdint, for example uint64_t. Is there a way to write those to a binary file that does not involve me manually splitting them into arrays of char and using the fstream::write() functions?
What I've tried
My naive idea was that C++ would figure out that I have a file in binary mode and << would write the integers to that binary file. So I tried this:
#include <iostream>
#include <fstream>
#include <cstdint>
using namespace std;
int main() {
    fstream file;
    uint64_t myuint = 0xFFFF;
    file.open("test.bin", ios::app | ios::binary);
    file << myuint;
    file.close();
    return 0;
}
However, this wrote the string "65535" to the file.
Can I somehow tell the fstream to switch to binary mode, like how I can change the display format with << std::hex?
Failing all of the above, I'd need a function that turns arbitrary cstdint types into char arrays.
I'm not really concerned about endianness, as I'd use the same program to also read those (in a next step), so it would cancel out.
Yes you can; this is what std::fstream::write is for:
#include <iostream>
#include <fstream>
#include <cstdint>
int main() {
    std::fstream file;
    uint64_t myuint = 0xFFFF;
    file.open("test.bin", std::ios::app | std::ios::binary);
    // Reinterpret the integer's object representation as raw chars for write().
    file.write(reinterpret_cast<char*>(&myuint), sizeof(myuint));
}
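And reading the value back is the mirror image (a sketch along the same lines; the question mentions reading as the next step):

#include <iostream>
#include <fstream>
#include <cstdint>

int main() {
    std::fstream file;
    uint64_t myuint = 0;
    file.open("test.bin", std::ios::in | std::ios::binary);
    file.read(reinterpret_cast<char*>(&myuint), sizeof(myuint));
    std::cout << std::hex << myuint << std::endl; // expect: ffff
}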
I don't know how to use bitset as a member of a structure, as I am getting this error:
[ERROR]: ISO C++ forbids declaration of 'bitset' with no type
code:
typedef struct
{
    bitset<10> status; // bitwise status
} Status;
It's often considered courteous on Stack Overflow to give more examples of what you've tried, and where you've looked for help. For example, you might say that you've tried to understand the contents of http://en.cppreference.com/w/cpp/utility/bitset
But here goes:
#include <iostream>
#include <bitset> // you'll need to include this
struct status_t {
    std::bitset<11> status; // note the std - it's in that namespace
};

int main()
{
    status_t stat;
    for (auto i = 0; i < 11; i += 2)
        stat.status.set(i);
    std::cout << "values: " << stat.status << "!\n";
}
You can see it run at cpp.sh - Bitset example
This sort of error can be caused by either omitting the bitset include, or failing to specify the std namespace.
To rectify the problem:
1) Make sure you're including bitset:
#include <bitset>
2) Make sure the std namespace is specified:
This can be done either 'globally' within the file using the directive:
using namespace std;
or by prefixing the bitset declaration with std:::
std::bitset<10> status; //bitwise status
So, your final file fragment could look something like this:
#include <bitset>
// other code ...
typedef struct {
    std::bitset<10> status; // bitwise status
} Status;
// the rest of the file ...
I've written my own specialisation of each virtual member function of std::ctype<char16_t>, so that this now works:
#include <string>
#include <locale>
#include "char16_facets.h" // Header containing my ctype specialisation
#include <sstream>
#include <iostream>
// Implemented elsewhere using iconv
std::string Convert(std::basic_string<char16_t>);
int main() {
    std::basic_string<char16_t> s(u"Hello, world.");
    std::basic_stringstream<char16_t> ss(s);
    ss.imbue(std::locale(ss.getloc(), new std::ctype<char16_t>()));
    std::basic_string<char16_t> t;
    ss >> t;
    std::cout << Convert(t) << " ";
    ss >> t;
    std::cout << Convert(t) << std::endl;
}
Is there a way to make streams use the new ctype specialisation by default, so I don't have to imbue each stream with a new locale?
I haven't written a new class, just provided
template<>
inline bool std::ctype<char16_t>::do_is (std::ctype_base::mask m, char16_t c) const {
etc. I'd sort of hoped it would be picked up automatically, so long as it was declared before I #include <sstream>, but it isn't.
Most of the work for the above was done using G++ and libstdc++ 4.8, but I get the same result with them built from SVN trunk.
Edit/Update: This question originally asked how to get number extraction working. However, given a stream imbued with correct ctype and numpunct implementations, no specialisation of num_get is necessary; simply
ss.imbue(std::locale(ss.getloc(), new std::num_get<char16_t>()));
and it will work with either GCC version.
Again, is there some way to get the streams to pick this up automatically, rather than having to imbue every stream with it?
Use std::locale::global():
std::locale::global(std::locale(std::locale(), new std::ctype<char16_t>()));
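Note that std::locale::global() only affects streams constructed after the call (already-constructed streams, including the standard streams, keep their current locale), so install the facet early. A minimal sketch, assuming the char16_facets.h header from the question:

#include <sstream>
#include <locale>
#include "char16_facets.h" // the ctype<char16_t> specialisations from the question

int main() {
    // Install the facet globally before constructing any char16_t streams.
    std::locale::global(std::locale(std::locale(), new std::ctype<char16_t>()));
    std::basic_stringstream<char16_t> ss; // picks up the new global locale
    // ... use ss without a per-stream imbue() ...
}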