c++ cout instead of fstream

Normally I live in my guarded world of C#, but sometimes I have to break out and do something outside it.
At the moment I have to decode an audio stream and output it directly from my C++ console application.
If I write the content into a file, I can hear the correct result.
But if I use cout instead of an fstream, I get only a noisy sound.
How do I do this correctly?
Here is the working filestream code:
fstream wavefile;
wavefile.open(output, ios::out | ios::binary | ios::trunc);
//do something
wavefile.write((char*) &waveheader, waveheadersize);
//do something else
do {
//do something
//decodedBuffer is of type BYTE* , decodedLength is of type DWORD
wavefile.write((char*) decodedBuffer, decodedLength);
wavefile.flush();
} while (encodedLength > 0);
My non-working cout code:
std::cout.setf(ios::out | ios::binary | ios::trunc);
//do something
//this works, I got the same output
cout << structToString(&waveheader) << endl;
//do something else
do {
//do something
cout << (char *)decodedBuffer;
} while (encodedLength > 0);
Thanks in advance

First, there is absolutely no reason to use different code for a std::fstream and for std::cout (besides which, your cout code uses formatted output, and incorrectly at that):
You are using their ostream interface in both cases.
So, first things first, repair the second snippet (as far as possible) by replacing it with the first.
Now we come to the one real difference (which you tried to paper over with setf): std::cout is in text mode, not binary mode!
Unfortunately, none of ios::out, ios::binary, and ios::trunc are formatting flags, so you cannot set them with setf.
In fact, the mode cannot be changed at all after the fact (at least not portably).
Fortunately, on many systems you can simply ignore having the wrong mode, as Linux and others equate text mode and binary mode. On Windows, this hack should get you around it:
#include <cstdio>   // fflush
#include <fcntl.h>  // _O_BINARY
#include <io.h>     // _setmode, _fileno
cout.flush();                          // empty the C++ stream's buffer first
fflush(stdout);                        // then the underlying C stream's buffer
_setmode(_fileno(stdout), _O_BINARY);  // switch stdout to binary mode
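Once stdout is in binary mode, the unformatted writes from the file version carry over unchanged. A minimal sketch, reusing the question's variables (waveheader, waveheadersize, decodedBuffer, decodedLength, encodedLength):
// Same raw writes as the fstream version, just aimed at std::cout.
std::cout.write(reinterpret_cast<const char*>(&waveheader), waveheadersize);
do {
    // ...decode the next block into decodedBuffer...
    std::cout.write(reinterpret_cast<const char*>(decodedBuffer), decodedLength);
    std::cout.flush();
} while (encodedLength > 0);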

The simple answer is that you can't. You need to output in binary mode. The mode is selected, once and for all, when you open the file, and std::cout is always opened in text mode. Since you're just writing a block of bytes, the simplest solution is to use the system-level requests. Under Windows, for example, you can use GetStdHandle(STD_OUTPUT_HANDLE) to get the handle to standard out, and WriteFile to write a block of bytes to it. (Under Unix, the file descriptor for standard out is always 1, and the function is write. But since there is no difference between text mode and binary mode under Unix, I assume that this isn't your case.)
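A minimal sketch of that system-level approach, assuming the question's decodedBuffer and decodedLength and omitting error handling:
#include <windows.h>
// Write the decoded block straight to the stdout handle, bypassing the
// C++ stream layer and its text-mode translation entirely.
HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
DWORD written = 0;
WriteFile(out, decodedBuffer, decodedLength, &written, nullptr);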

Try this:
std::cout.write(reinterpret_cast<char*>(decodedBuffer), decodedLength);
I am not sure whether structToString(&waveheader) works correctly, but you seem okay with it.

Related

QFile: multiple handles to same physical file do not write all data

I was wondering how QFile behaves when multiple handles are opened to the same file (using C++ in Visual Studio 2013 on Windows 7), so I wrote the following little program:
QFile file("tmp.txt");
file.open(QIODevice::WriteOnly | QIODevice::Truncate);
QTextStream ts(&file);
ts << "Hallo\n";
QFile file2("tmp.txt");
file2.open(QIODevice::WriteOnly | QIODevice::Append);
QTextStream ts2(&file2);
ts2 << "Hallo 2\n";
file.close();
ts2 << "Hello again\n";
file2.close();
This produces the following output in file tmp.txt:
Hallo 2
Hello again
So the first Hallo statement got lost. If I do a ts.flush() right after ts << "Hallo\n", this does not happen, which makes me think that the statement got lost in the internal buffers of QTextStream, or that it was overwritten by the subsequent output statements. However, I want to use QFile in a logging framework, so I don't want to flush after every statement, as this would decrease performance.
I also tried the same thing with std::basic_ofstream<char> instead of QFile:
std::basic_ofstream<char> file;
file.open("tmp.txt", std::ios_base::out | std::ios_base::ate | std::ios_base::app);
file << "Hallo\n";
std::basic_ofstream<char> file2;
file2.open("tmp.txt", std::ios_base::out | std::ios_base::ate | std::ios_base::app);
file2 << "Hallo 2\n";
file.close();
file2 << "Hello again\n";
file2.close();
which outputs as I would expect:
Hallo
Hallo 2
Hello again
So what is the problem with the QFile example? Is QFile not intended to be used with multiple handles pointing to the same file, or what exactly is going on here? I thought my use case was quite a common one, so I'm a bit surprised to find this behaviour. I couldn't find more specifics in the Qt documentation. I've read here that Qt opens the file in shared mode, so this shouldn't be a problem.
I eventually want to use QFile for logging (where access to the function that does the actual writing is, of course, synchronized), but this little example makes me worry that some log statements might get lost along the way. Do you think it would be better to use STL streams instead of QFile?
Edit
As was pointed out, std::endl causes a flush, so I changed the STL example above to use only \n, which according to here does not cause a flush. The behavior described above is unchanged, though.
It seems you want it both ways.
If you want several write buffers and don't want to flush them, it's hard to be sure of having all the writes in the file, and in the right order.
Your small test with std::basic_ofstream is not proof: will it work with larger writes? Will it work on other OSes? Do you want to risk your process for a (yet unproven) speed gain?
There are several suspicious things going on. For starters, you are introducing two levels of buffering.
First and foremost, QTextStream has an internal buffer, which you can flush by calling flush on it.
Second, QFile is also buffering (or, better, it's using the buffered APIs from your library: fopen, fwrite, and so on). Pass QIODevice::Unbuffered to open to make it use the unbuffered APIs (open, write, ...).
Now, since this is terribly error prone, QTextStream::flush actually also flushes the underlying file device.
Also, you're passing WriteOnly | Append, which doesn't make sense; it should be only one of the two.
However, note that your writes may still interleave. POSIX.1-2013 says that
A write is atomic if the whole amount written in one operation is not interleaved with data from any other process. This is useful when there are multiple writers sending data to a single reader. Applications need to know how large a write request can be expected to be performed atomically. This maximum is called {PIPE_BUF}.
(On Windows, I have no idea.)
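A minimal sketch of the unbuffered variant from the first point, reusing the question's setup; note that QTextStream still buffers on its own, so it is flushed explicitly:
QFile file("tmp.txt");
// Unbuffered bypasses the buffered file APIs; the text stream's own buffer
// still needs an explicit flush, which also flushes the underlying device.
file.open(QIODevice::WriteOnly | QIODevice::Truncate | QIODevice::Unbuffered);
QTextStream ts(&file);
ts << "Hallo\n";
ts.flush();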
QFile file("tmp.txt");
file.open(QIODevice::WriteOnly | QIODevice::Truncate);
QTextStream ts(&file);
ts << "Hallo\n";
QFile file2("tmp.txt");
file2.open(QIODevice::Append);
QTextStream ts2(&file2);
ts2 << "Hallo 2\n";
file.close();
ts2 << "Hello again\n";
file2.close();
Try this; I changed it so that Truncate does not get invoked by WriteOnly. Silly me, I didn't read that one.
UPDATE
QIODevice::WriteOnly 0x0002 The device is open for writing. Note that this mode implies Truncate.
QIODevice::Truncate 0x0008 If possible, the device is truncated before it is opened. All earlier contents of the device are lost.
Source: http://doc.qt.io/qt-5/qiodevice.html#OpenModeFlag-enum

Read and write image data C++

I've just started learning C++, and I'm working on a program that is supposed to grab an image from the hard disk and then save it under another name. The original image should remain. I've got it to work with text files, because with those I can just do this:
ifstream fin("C:\\test.txt");
ofstream fout("C:\\new.txt");
char ch;
while(!fin.eof())
{
fin.get(ch);
fout.put(ch);
}
fin.close();
fout.close();
But I suppose that it's not like this with images. Do I have to install a lib or something like that to get it to work, or can I "just" use the standard libraries? I know I'm not really an expert in C++, so please tell me if I'm totally wrong.
I hope someone can and wants to help me! Thanks in advance!
Btw, the image is in .png format.
You can use the std streams, but pass the ios::binary argument when you open the stream. It's well documented, and there are several examples around the internet.
You are apparently using MS Windows: Windows distinguishes between "text" and "binary" files by handling line separators differently. For a binary file, you do not want it to translate \r\n to \n on reading. To prevent this, use the ios::binary mode when opening the file, as @Emil tells you.
BTW, you do not have to use \\ in paths under windows. Just use forward slashes:
ifstream fin("C:/test.txt");
This worked even back in WWII using MS-DOS.
If the goal is just to copy a file then CopyFile is probably better choice than doing it manually.
#include <Windows.h>
// ...
BOOL const failIfExists = TRUE;  // do not overwrite an existing destination
BOOL const copySuccess = CopyFile("source.png", "dest.png", failIfExists);
// TODO: handle errors.
If using the Windows API is not an option, then copying a file one char at a time like you have done is a very inefficient way of doing this. As others have noted, you need to open the files in binary mode to avoid I/O messing with line endings. A simpler and more efficient way than one char at a time is this:
#include <fstream>
// ...
std::ifstream fin("source.png", std::ios::binary);
std::ofstream fout("dest.png", std::ios::binary);
// TODO: handle errors.
fout << fin.rdbuf();

Ifstream fails for unknown reason

I have a problem with the function used to read the pgm file format into memory.
I used the sources at the following link: http://www.cse.unr.edu/~bebis/CS308/Code/ReadImage.cpp. You can find others in the same directory, and some instructions in CS308, if you're interested.
The problem is that ifstream ifp fails, and I think this piece of code may be the reason, but it looks fine to me.
Any ideas will be appreciated.
charImage = (unsigned char *) new unsigned char [M*N];
ifp.read( reinterpret_cast<char *>(charImage), (M*N)*sizeof(unsigned char));
if (ifp.fail()) {
cout << "Image " << fname << " has wrong size" << endl;
exit(1);
}
The problem is that your input file is not formatted properly. It should have enough data to fill charImage, but it doesn't, and this is why it's failing. Another possibility is that you are trying to run this code on Windows and need to open the file in binary mode.
Specifically (for the binary part) change:
ifp.open(fname, ios::in);
to:
ifp.open(fname, ios::in | ios::binary);
As an aside, it is generally inappropriate to cast the result of a new operator. Here, it's just redundant and doesn't make any sense.
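For illustration, the cast-free version of that line would simply be:
// new unsigned char[M*N] already yields unsigned char*, so no cast is needed
charImage = new unsigned char[M*N];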
Anything using reinterpret_cast<...>() looks suspicious to me, to say the least. It is probably not the root of the problem, though. My personal guess is that the root of the problem is running the code on a Windows machine and not opening the file in binary mode. Try using
std::ifstream in("filename", std::ios_base::binary);
Since the code opening the file isn't part of the question, this is just a wild guess, though.

C++ - ofstream doesn't output to file until I close the program

I have the following code:
ofstream mOutFile(logPath, ios_base::app);
string lBuilder;
lBuilder.append("========================================================\n");
lBuilder.append("Date: ");
lBuilder.append(asctime(timeinfo));
lBuilder.append("\n");
lBuilder.append("Log Message:\n");
lBuilder.append(toLog);
lBuilder.append("\n");
lBuilder.append("========================================================\n\n");
int lSize = lBuilder.size();
char* lBuffer = new char[lSize];
int index = 0;
for each (char c in lBuilder)
lBuffer[index++] = c;
mOutFile.write(lBuffer, lSize);
mOutFile.flush();
Unfortunately, until I close the app (I assume that closing the ofstream would work as well), the output does not get written to the text file. I know I could probably close and reopen the stream and everything would "just work", but that seems like a silly and incorrect solution. What am I doing wrong here?
I have also tried the following variations based on other questions I have found here, but these solutions did not work:
mOutputFile << flush;
mOutputFile << endl;
Thanks in advance for any assistance on this.
Edit: Everything in this code is working Visual C++; it builds and runs fine, except that the file is not written to until the stream is closed, even if I force a flush. Also, I switched from using the << operator to char* and .write() to see if anything behaved differently.
std::ofstream file(logPath, ios_base::app);
file << "========================================================\n"
<< "Date: " << asctime(timeinfo)
<< "\nLog Message:\n" << toLog
<< "\n========================================================\n\n"
<< std::flush;
// std::flush forces the write; the stream will also flush when the file object is destroyed
// the file will close itself
This is not only easier to read, but it will probably also be faster than your method, and it is a more standard approach.
I ended up just "making it work" by closing and reopening the stream after the write operation.
mOutputFile << "all of my text" << endl;
mOutputFile.close();
mOutputFile.open(mLogPath);
EDIT: After trying to force the flush on a few other systems, it looks like something just isn't behaving correctly on my development machine. Not good news, but at least the above solution seems to work when programmatically flushing the ofstream fails. I am not sure of the implications of the above code, though, so please chime in if there are drawbacks to closing and reopening the stream like this.
You can perform the following steps to validate some assumptions:
1.) After flush(), the changes to the file should be visible to your application. Open the file as a std::fstream instead of a std::ofstream. After flushing, reset the file pointer to the beginning and read the contents of the file. Your newly written record should be there. If not, you probably have a memory corruption somewhere in your code.
2.) Open the same file in a std::ifstream after your call to flush(). Then read the contents of the file. Your newly written record should be there. If not, then there's probably another process interfering with your file.
If both work, then you may want to read up on "file locking" and "inter-process synchronization". The OS can (theoretically) take as much time as it wants to make file changes visible to other processes.
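A minimal sketch of check 1.), with a hypothetical log.txt and message standing in for the real ones:
// Re-read through the same fstream after flushing; the new record should appear.
std::fstream f("log.txt", std::ios::in | std::ios::out | std::ios::app);
f << "new record\n" << std::flush;
f.seekg(0);                     // rewind the read pointer to the start
std::string line;
while (std::getline(f, line))
    std::cout << line << '\n';  // the newly written record should print here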

Copying contents of one file to another in C++

I am using the following program to try to copy the contents of one file, src, to another, dest, in C++. The simplified code is given below:
#include <fstream>
using namespace std;
int main()
{
fstream src("c:\\tplat\test\\secClassMf19.txt", fstream::binary);
ofstream dest("c:\\tplat\\test\\mf19b.txt", fstream::trunc|fstream::binary);
dest << src.rdbuf();
return 0;
}
When I built and executed the program using the Code::Blocks IDE with the GCC compiler on Windows, a new file named "....mf19.txt" was created, but no data was copied into it, and its file size was 0 KB. I am positive I have some data in "...secClassMf19.txt".
I experienced the same problem when I compiled the same program with Visual C++ 2008 on Windows.
Can anyone please help explain why I am getting this unexpected behaviour, and more importantly, how to solve the problem?
You need to check whether opening the files actually succeeds before using those streams. Also, it never hurts to check if everything went right afterwards. Change your code to this and report back:
#include <cstdlib>
#include <fstream>
#include <iostream>
int main()
{
    std::fstream src("c:\\tplat\test\\secClassMf19.txt", std::ios::binary);
    if(!src.good())
    {
        std::cerr << "error opening input file\n";
        std::exit(1);
    }
    std::ofstream dest("c:\\tplat\\test\\mf19b.txt", std::ios::trunc|std::ios::binary);
    if(!dest.good())
    {
        std::cerr << "error opening output file\n";
        std::exit(2);
    }
    dest << src.rdbuf();
    if(!src.eof())
        std::cerr << "reading from file failed\n";
    if(!dest.good())
        std::cerr << "writing to file failed\n";
    return 0;
}
I bet you will report that one of the first two checks hits.
If opening the input file fails, try opening it using std::ios::in|std::ios::binary instead of just std::ios::binary.
Do you have any reason not to use the CopyFile function?
As it is written, your src instance is a regular fstream, and you are not specifying an open mode for input. The simple solution is to make src an instance of ifstream, and your code works. (Just by adding one byte!)
If you had tested the input stream (as sbi suggests), you would have found that it was not opened correctly, which is why your destination file was of zero size. It was opened in write mode (since it was an ofstream) with the truncation option to make it zero, but writing the result of rdbuf() simply failed, with nothing written.
Another thing to note is that while this works fine for small files, it would be very inefficient for large files. As it is, you are reading the entire contents of the source file into memory, then writing it out again in one big block. This wastes a lot of memory. You are better off reading in chunks (say 1 MB, a reasonable size for a disk cache) and writing a chunk at a time, with the last chunk being the remainder. To determine the source's size, you can seek to the end and query the file offset; then you know how many bytes you are processing.
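A minimal sketch of that chunked approach; the buffer size and function name are illustrative, not from the answer:
#include <fstream>
#include <vector>
// Copy src to dest in 1 MB chunks instead of one giant rdbuf() transfer.
void copy_chunked(std::ifstream& src, std::ofstream& dest)
{
    std::vector<char> buf(1 << 20);            // 1 MB chunk buffer
    while (src) {
        src.read(buf.data(), buf.size());      // reads up to buf.size() bytes
        dest.write(buf.data(), src.gcount());  // write however many were read
    }
}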
And you will probably find your OS is even more efficient at copying files if you use the native APIs, but then it becomes less portable. You may want to look at the Boost filesystem module for a portable solution.