When I try to write the file using C's fwrite, which accepts a void pointer as the data, the result cannot be interpreted by a text editor.
#include <cstdio>

struct index
{
    index(int _x, int _y) : x(_x), y(_y) {}
    int x, y;
};

index i(4, 7);
FILE *stream;
fopen_s(&stream, "C:\\File.txt", "wb");
fwrite(&i, sizeof(index), 1, stream);
But when I try it with C++'s ofstream in binary mode, the file is readable. Why doesn't it come out the same as when written using fwrite?
This is the way to write binary data using a stream in C++:
#include <fstream>

struct C {
    int a, b;
} c;

int main() {
    std::ofstream f("foo.txt", std::ios::binary);
    f.write((const char*)&c, sizeof c);
}
This will save the object in the same way fwrite would. If it doesn't for you, please post your code that uses streams and we'll see what's wrong.
C++'s ofstream stream insertion (operator<<) only produces text. The difference between opening an iostream in binary versus text mode is whether or not end-of-line character conversion happens. If you want to write a binary format where a 32-bit int takes exactly 32 bits, use the C functions in C++.
Edit on why fwrite may be the better choice:
Ostream's write method is more or less a clone of fwrite (except it is a little less useful, since it only takes a byte array and a length instead of fwrite's four parameters), but by sticking to fwrite there is no way to accidentally use stream insertion in one place and write in another. More or less it is a safety mechanism. While you gain that margin of safety, you lose a little flexibility: you can no longer make an iostream derivative that compresses output without changing any of the file-writing code.
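As a minimal illustration of the difference (the file names here are made up): operator<< produces the decimal digits as text, while write() dumps the raw in-memory bytes, just as fwrite does.
#include <cstdint>
#include <fstream>

int main() {
    std::int32_t value = 1234567890;

    // Text: writes the characters "1234567890" (10 bytes for this value).
    std::ofstream text("as_text.txt");
    text << value;

    // Binary: writes the 4 bytes of the int's in-memory representation.
    std::ofstream bin("as_binary.dat", std::ios::binary);
    bin.write(reinterpret_cast<const char*>(&value), sizeof value);
}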
Related
The well known way of creating an fstream object is:
ifstream fobj("myfile.txt");
i.e. using a filename.
But I want to create an ifstream object using a file descriptor.
Reason: I want to execute a command using _popen(). _popen() returns the output as a FILE*, so there is a FILE* involved but no filename.
You cannot do that just in standard C++, since iostreams and C I/O are entirely separate and unrelated. You could however write your own iostream that's backed by a C FILE stream. I believe that GCC comes with one such stream class as a library extension.
Alternatively, if all you want is an object-y way of wrapping a C FILE stream, you could use a std::unique_ptr with a custom deleter for that purpose.
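For example, a minimal sketch of the unique_ptr approach, assuming Windows since the question uses _popen (the "dir" command is only a placeholder):
#include <cstdio>
#include <memory>
#include <string>

int main() {
    // _pclose is the custom deleter, so the pipe is closed automatically
    // when the unique_ptr goes out of scope.
    std::unique_ptr<std::FILE, decltype(&_pclose)> pipe(_popen("dir", "r"), _pclose);
    if (!pipe) return 1;

    std::string output;
    char buf[256];
    while (std::fgets(buf, sizeof buf, pipe.get()))
        output += buf;       // collect the command's output line by line
}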
You can try QTextStream.
See in https://doc.qt.io/qt-6/qtextstream.html#QTextStream-2
You can create a string, and use fread to read and append to it. It's not clean, but you're working with a C interface.
Something like this should work:
#include <cstdio>
#include <string>
#include <vector>

FILE *f = popen(...);                       // command elided, as in the original
const unsigned N = 1024;
std::string total;
while (true) {
    std::vector<char> buf(N);               // one buffer of N chars (not an array of vectors)
    size_t read = fread(buf.data(), 1, N, f);
    if (read) { total.append(buf.data(), read); }   // append only the bytes actually read
    if (read < N) { break; }
}
pclose(f);
I was just thinking, after reading about Java and C#, whether C++ can also read image and PDF files without the use of external libraries. C++ doesn't have a byte type like Java and C#, so how can we accomplish the task (again, without using an external library)?
Can anyone give a small demonstration (i.e. a program or code to read, copy, or write image or PDF files)?
You can use unsigned char, or char reinterpreted as some integer type, to parse binary file formats like PDF, JPEG, etc. You can create a buffer as std::vector<char> and read into it as follows:
std::vector<char> buffer(
    (std::istreambuf_iterator<char>(infile)),   // infile must be an ifstream opened in binary mode
    (std::istreambuf_iterator<char>()));
Related questions: Reading and writing binary file
It makes no difference what file you are reading once it is opened in binary mode; the only difference is in how you should interpret the data you get from the file.
It's significantly better to use a ready-to-use library such as libjpeg; there are plenty of them. But if you really want to do this yourself, you should first define suitable structures and constants (see the links below) to make the code convenient and usable. Then you just read the data and try to interpret it step by step. The code below is just pseudocode; I didn't compile it.
#include <fstream>
#include <stdexcept>
#include <string>

// define header structure
struct jpeg_header
{
    // 0xffd8 is the SOI (start of image) marker, 0xffc0 is SOF0
    enum class marker : unsigned short { soi = 0xffd8, sof0 = 0xffc0 /* ... */ };
    // ...
};

bool is_soi(unsigned short m)
{
    return static_cast<unsigned short>(jpeg_header::marker::soi) == m;
}

jpeg_header read_jpeg_header(const std::string& fn)
{
    std::ifstream inf(fn, std::ifstream::binary);
    if (!inf)
    {
        throw std::runtime_error("Can't open file: " + fn);
    }
    inf.exceptions(std::ifstream::failbit | std::ifstream::eofbit);

    unsigned short marker = inf.get() << 8;   // markers are stored big-endian
    marker |= inf.get();
    if (!is_soi(marker))
    {
        throw std::runtime_error("Invalid jpeg header");
    }
    // ...
    jpeg_header header;
    // read further and fill the header structure
    // ...
    return header;
}
To read a huge block of data, use the ifstream::read() and ifstream::readsome() methods. Here is a good example: http://en.cppreference.com/w/cpp/io/basic_istream/read
Those functions also work faster than stream iterators. It's also better to define your own exception classes derived from std::runtime_error.
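As a rough sketch of ifstream::read() in that style (the function name read_block is made up):
#include <cstddef>
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Read up to n bytes from a file into a vector using ifstream::read().
std::vector<char> read_block(const std::string& fn, std::size_t n)
{
    std::ifstream inf(fn, std::ifstream::binary);
    if (!inf)
        throw std::runtime_error("Can't open file: " + fn);

    std::vector<char> buf(n);
    inf.read(buf.data(), buf.size());                     // unformatted read of a whole block
    buf.resize(static_cast<std::size_t>(inf.gcount()));   // keep only the bytes actually read
    return buf;
}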
For details on the file formats you are interested in, look here:
Structure of a PDF file?
https://en.wikipedia.org/wiki/JPEG_File_Interchange_Format
https://en.wikipedia.org/wiki/JPEG
It would be a strange world to have a system language like C, and in this case C++, without a byte type :).
Yeah, I'll grant it has a strange name, unsigned char, but it is still there :).
Really, just think about the magnitude of redevelopment of everything that would be needed to avoid bytes: peripherals, many registers in CPUs and other chips, communication, data protocols. It would all have to be redone :).
Thank you in advance for your help!
I am in the process of learning C++. My first project is to write a parser for a binary-file format we use at my lab. I was able to get a parser working fairly easily in Matlab using "fread", and it looks like that may work for what I am trying to do in C++. But from what I've read, it seems that using an ifstream is the recommended way.
My question is two-fold. First, what, exactly, are the advantages of using ifstream over fread?
Second, how can I use ifstream to solve my problem? Here's what I'm trying to do. I have a binary file containing a structured set of ints, floats, and 64-bit ints. There are 8 data fields all told, and I'd like to read each into its own array.
The structure of the data is as follows, in repeated 288-byte blocks:
Bytes 0-3: int
Bytes 4-7: int
Bytes 8-11: float
Bytes 12-15: float
Bytes 16-19: float
Bytes 20-23: float
Bytes 24-31: int64
Bytes 32-287: 64x float
I am able to read the file into memory as a char * array, with the fstream read command:
char * buffer;
ifstream datafile (filename,ios::in|ios::binary|ios::ate);
datafile.read (buffer, filesize); // Filesize in bytes
So, from what I understand, I now have a pointer to an array called "buffer". If I were to access buffer[0], I should get the first byte, right? (Instead, I'm getting a seg fault.)
What I now need to do really ought to be very simple. After executing the above ifstream code, I should have a fairly long buffer populated with a number of 1's and 0's. I just want to be able to read this stuff from memory, 32-bits at a time, casting as integers or floats depending on which 4-byte block I'm currently working on.
For example, if the binary file contained N 288-byte blocks of data, each array I extract should have N members each. (With the exception of the last array, which will have 64N members.)
Since I have the binary data in memory, I basically just want to read from buffer, one 32-bit number at a time, and place the resulting value in the appropriate array.
Lastly - can I access multiple array positions at a time, a la Matlab? (e.g. array(3:5) -> [1,2,1] for array = [3,4,1,2,1])
Firstly, the advantage of using iostreams, and in particular file streams, relates to resource management. Automatic file stream variables will be closed and cleaned up when they go out of scope, rather than having to manually clean them up with fclose. This is important if other code in the same scope can throw exceptions.
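For illustration, a minimal sketch of that first point; the function name, path parameter, and value written are all made up:
#include <fstream>
#include <stdexcept>
#include <string>

void save_value(const std::string& path, int value)
{
    std::ofstream out(path, std::ios::binary);
    if (!out)
        throw std::runtime_error("cannot open " + path);
    out.write(reinterpret_cast<const char*>(&value), sizeof value);
    // No explicit close needed: the destructor closes the file when 'out'
    // goes out of scope, even if later code in this scope throws.
}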
Secondly, one possible way to address this type of problem is to simply define the stream insertion and extraction operators in an appropriate manner. In this case, because you have a composite type, you need to help the compiler by telling it not to add padding bytes inside the type. The following code should work on gcc and microsoft compilers.
#include <cstdint>
#include <istream>
#include <ostream>

#pragma pack(push, 1)   // was: #pragma pack(1) ... #pragma pop(1)
struct MyData
{
    int i0;
    int i1;
    float f0;
    float f1;
    float f2;
    float f3;
    uint64_t ui0;
    float f4[64];
};
#pragma pack(pop)

std::istream& operator>>( std::istream& is, MyData& data ) {
    is.read( reinterpret_cast<char*>(&data), sizeof(data) );
    return is;
}

std::ostream& operator<<( std::ostream& os, const MyData& data ) {
    os.write( reinterpret_cast<const char*>(&data), sizeof(data) );
    return os;
}
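A possible way to use the extraction operator above to read every 288-byte block, assuming MyData and the operators are visible (the file name is a placeholder):
#include <fstream>
#include <vector>

int main()
{
    std::ifstream datafile("data.bin", std::ios::binary);  // file name is made up
    std::vector<MyData> records;
    MyData d;
    while (datafile >> d)       // uses the operator>> defined above
        records.push_back(d);
    // records.size() is now the number of 288-byte blocks in the file
}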
char * buffer;
ifstream datafile (filename,ios::in|ios::binary|ios::ate);
datafile.read (buffer, filesize); // Filesize in bytes
You need to allocate the buffer before you read into it. Also, because the file was opened with ios::ate, the read position starts at the end, so you can take the size with tellg() and then seek back to the beginning:
filesize = datafile.tellg();    // ios::ate positioned us at the end, so this is the size in bytes
datafile.seekg(0);
buffer = new char[filesize];    // was: new filesize[filesize]
datafile.read (buffer, filesize);
As to the advantages of ifstream: it is a matter of abstraction. You can represent the contents of your file in a more convenient way. You then do not have to work with raw buffers; instead you can model the structure using classes and hide the details of how it is stored in the file, for instance by overloading the << operator.
You might perhaps look for serialization libraries for C++. Perhaps s11n might be useful.
This question shows how you can convert data from a buffer to a certain type. In general, you should prefer using a std::vector<char> as your buffer. This would then look like this:
#include <fstream>
#include <vector>
#include <algorithm>
#include <iterator>

int main() {
    std::ifstream input("your_file.dat", std::ios::binary);   // open in binary mode
    std::vector<char> buffer;
    std::copy(std::istreambuf_iterator<char>(input),
              std::istreambuf_iterator<char>(),
              std::back_inserter(buffer));
}
This code will read the entire file into your buffer. The next thing you'd want to do is to write your data into valarrays (for the selection you want). A valarray has a fixed size, so you have to be able to calculate the required size of your array up front. This should do it for your format:
std::valarray<int> array1(buffer.size() / 288);   // each entry takes up 288 bytes
Then you'd use a normal for-loop to insert the elements into your arrays:
for (std::size_t i = 0; i < buffer.size() / 288; i++) {
    array1[i] = *reinterpret_cast<int*>(&buffer[i * 288]);       // first field
    array2[i] = *reinterpret_cast<int*>(&buffer[i * 288 + 4]);   // second field
}
Note that this only works as expected if your compiler's int is exactly 4 bytes; that size is implementation-defined, although 4 bytes is the common case even on 64-bit systems. This question explains a bit about C++ and the sizes of types.
The selection you describe there can be achieved using valarray.
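For the Matlab-style selection asked about at the end of the question, a small sketch using std::slice, with the values taken from the question's example:
#include <iostream>
#include <valarray>

int main()
{
    std::valarray<int> array = {3, 4, 1, 2, 1};

    // Matlab's array(3:5) is 1-based and inclusive; the equivalent slice
    // starts at index 2, has length 3, and stride 1.
    std::valarray<int> sel = array[std::slice(2, 3, 1)];

    for (int v : sel)
        std::cout << v << ' ';   // prints: 1 2 1
    std::cout << '\n';
}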
I have the following code and I am trying to write some data to a binary file.
The problem is that I don't have any experience with binary files and I can't understand what I am doing.
#include <iostream>
#include <fstream>
#include <string>
#define RPF 5
using namespace std;
int write_header(int h_len, ofstream& f)
{
int h;
for(h=0;h<h_len;h++)
{
int num = 0;
f.write((char*)&num,sizeof(char));
}
return 0;
}
int new_file(const char* name)
{
ofstream n_file(name,ofstream::binary);
write_header(RPF,n_file);
n_file.close();
return 0;
}
int main(int argc, char **argv)
{
ofstream file("file.dat",ofstream::binary);
file.seekp(10);
file.write("this is a message",3);
new_file("file1.dat");
cin.get();
return 0;
}
1. As you can see, I am opening file.dat and writing the word "thi" into it. Then I open the file and I see it as ASCII text. Why does this happen?
Then I make a new file, file1.dat, and I try to write the number 0 into it five times.
What am I supposed to use?
this
f.write((char*)&num,sizeof(char));
or this
f.write((char*)&num,sizeof(int));
And why can't I write the value of the number as is? Why do I have to cast it to a char*?
Is this because that's how write() works, or am I only able to write chars to a binary file?
Can anyone help me understand what's happening?
The write() function takes a pointer to your data buffer and the length in bytes of the data to be streamed to the file. So when you say
file.write("this is a message",3);
you tell the write function to write 3 bytes to the file, and that is "thi".
This
f.write((char*)&num,sizeof(char));
tells the write function to put sizeof(char) bytes in the file, that is, 1 byte. You probably want
f.write((char*)&num,sizeof(int));
as num is an int variable.
You are writing the ASCII string "thi" to file.dat. If you opened the file in a hex editor, you would see "74 68 69", which are the numeric representations of those characters. But if you open file.dat in an editor that understands ASCII, it will most likely translate those values back to their ASCII representation to make them easier to view. Opening the ofstream in ios::binary mode means that data is output to the file as is, and no transformations are applied by the stream beforehand.
The function ofstream::write(const char *data, streamsize len) has two parameters. data is a char* so that write operates on individual bytes; that is why you have to cast num to a char* first. The second parameter, len, indicates how many bytes, starting from data, will be written to the file. My advice would be to use write(reinterpret_cast<const char*>(&num), sizeof(num)), and then give num a type big enough to store the data required. If you declare int num, then on a 32-bit platform 20 zero bytes (5 * 4) would be written to the file. If you only want 5 zero bytes, then declare num as char.
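A minimal sketch of the two options discussed, reusing the file name from the question:
#include <fstream>

int main()
{
    std::ofstream f("file1.dat", std::ofstream::binary);

    // Five single zero bytes: declare num as char and write sizeof(num) bytes each time.
    char num = 0;
    for (int i = 0; i < 5; ++i)
        f.write(reinterpret_cast<const char*>(&num), sizeof num);

    // Or one whole int (typically 4 bytes) in a single call.
    int value = 0;
    f.write(reinterpret_cast<const char*>(&value), sizeof value);
}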
struct Vector
{
float x, y, z;
};
void func(Vector *vectors) { /* ... */ }
usage:
float *coords = load(file);
func(coords);
I have a question about the alignment of structures in C++. I will pass a set of points to the function func(). Is it OK to do it in the way shown above, or is this relying on platform-dependent behavior? (It works, at least with my current compiler.) Can somebody recommend a good article on the topic?
Or, is it better to directly create a set of points while loading the data from the file?
Thanks
Structure alignment is implementation-dependent. However, most compilers give you a way of specifying that a structure should be "packed" (that is, arranged in memory with no padding bytes between fields). For example:
struct Vector {
float x;
float y;
float z;
} __attribute__((__packed__));
The above code will cause the gcc compiler to pack the structure in memory, making it easier to dump to a file and read back in later. The exact way to do this may be different for your compiler (details should be in your compiler's manual).
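For instance, MSVC (and recent gcc versions as well) supports the #pragma pack form; a minimal sketch, with the struct name chosen just for illustration:
#pragma pack(push, 1)   // no padding bytes between members
struct PackedVector {
    float x;
    float y;
    float z;
};
#pragma pack(pop)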
I always list members of packed structures on separate lines in order to be clear about the order in which they should appear. For most compilers this should be equivalent to float x, y, z; but I'm not certain if that is implementation-dependent behavior or not. To be safe, I would use one declaration per line.
If you are reading the data from a file, you need to validate the data before passing it to func. No amount of data alignment enforcement will make up for a lack of input validation.
Edit:
After further reading your code, I understand more what you are trying to do. You have a structure that contains three float values, and you are accessing it with a float* as if it were an array of floats. This is very bad practice. You don't know what kind of padding that your compiler might be using at the beginning or end of your structure. Even with a packed structure, it's not safe to treat the structure like an array. If an array is what you want, then use an array. The safest way is to read the data out of the file, store it into a new object of type struct Vector, and pass that to func. If func is defined to take a struct Vector* as an argument and your compiler is allowing you to pass a float* without griping, then this is indeed implementation-dependent behavior that you should not rely on.
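If you do go that route, here is a minimal sketch of what this answer suggests, assuming func has the signature from the question; the file name and the stub body of func are made up:
#include <fstream>

struct Vector { float x, y, z; };   // definition repeated from the question

void func(Vector *vectors) { /* stands in for the real func from the question */ }

// Read one point's worth of floats field by field, then pass the struct itself.
bool load_one(std::ifstream& file, Vector& v)
{
    file.read(reinterpret_cast<char*>(&v.x), sizeof v.x);
    file.read(reinterpret_cast<char*>(&v.y), sizeof v.y);
    file.read(reinterpret_cast<char*>(&v.z), sizeof v.z);
    return static_cast<bool>(file);   // true only if all three reads succeeded
}

int main()
{
    std::ifstream file("points.dat", std::ios::binary);   // file name is made up
    Vector v;
    if (load_one(file, v))
        func(&v);
}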
Use an operator>> extraction overload.
std::istream& operator>>(std::istream& stream, Vector& vec) {
stream >> vec.x;
stream >> vec.y;
stream >> vec.z;
return stream;
}
Now you can do:
std::ifstream MyFile("My Filepath", std::ios::in);   // choose the open mode you need
Vector vec;
MyFile >> vec;
func(&vec);
Prefer passing by reference than passing by pointer:
void func(Vector& vectors)
{ /*...*/ }
The difference here between a pointer and a reference is that a pointer can be NULL or point to some strange place in memory. A reference refers to an existing object.
As far as alignment goes, don't concern yourself. Compilers handle this automagically (at least alignment in memory).
If you are talking about alignment of binary data in a file, search for the term "serialization".
First of all, your example code is bad:
float *coords = load(file);
func(coords);
You're passing func() a pointer to a float var instead of a pointer to a Vector object.
Secondly, Vector's total size is equal to sizeof(float) * 3, or in other words 12 bytes.
I'd consult my compiler's manual to see how to control the struct's alignment and, just for peace of mind, I'd set it to, say, 16 bytes.
That way I'll know that the file, if it contains one vector, is always exactly 16 bytes in size and I need to read only 16 bytes.
Edit:
Check MSVC9's align capabilities.
Writing binary data is not portable between machines.
About the only portable thing is text, and even that cannot be fully relied on, since not all systems use the same text format (luckily most accept the ASCII characters, and hopefully we will soon standardize on something like Unicode, he says with a smile).
If you want to write data to a file, you must decide on the exact format of the file. Then write code that will read data in that format and convert it into your specific hardware's representation of that type. This format could be binary or a serialized text format; it does not matter much for performance (as the disk I/O speed will probably be your limiting factor). In terms of compactness, the binary format will probably be more efficient. In terms of ease of writing decoding functions on each platform, the text format is definitely easier, as a lot of it is already built into the streams.
So simple solution:
Read/Write to a serialized text format.
Also no alignment issues.
#include <algorithm>
#include <fstream>
#include <vector>
#include <iterator>
struct Vector
{
float x, y, z;
};
std::ostream& operator<<(std::ostream& stream, Vector const& data)
{
return stream << data.x << " " << data.y << " " << data.z << " ";
}
std::istream& operator>>(std::istream& stream, Vector& data)
{
return stream >> data.x >> data.y >> data.z;
}
int main()
{
// Copy an array to a file
Vector data[] = {{1.0,2.0,3.0}, {2.0,3.0,4.0}, { 3.0,4.0,5.0}};
std::ofstream file("plop");
std::copy(data, data+3, std::ostream_iterator<Vector>(file));
// Read data from a file.
std::vector<Vector> newData; // use a vector as we don't know how big the file is.
std::ifstream input("inputFile");
std::copy(std::istream_iterator<Vector>(input),
std::istream_iterator<Vector>(),
std::back_inserter(newData)
);
}