I have a C++ program that constantly prints readings from a gyro. I want to write these values to a file, but the program can be exited at any time (the system may lose power, the user may press Ctrl+C, etc.). What is a good way to safely write these values to a file as they are read, without having to cleanly close the file afterwards? I am thinking of somehow using the bash >> operator.
.
.
.
while(1)
{
    printf("acc: %+5.3f", ax);
    // write the printed line to the file...
}
.
.
.
To protect against the program being terminated with Ctrl+C, flush the buffer after each write:
fstream << "acc: " << ax << std::flush;
Note that if you end the output with std::endl, this writes a newline and also flushes the buffer.
Protecting against the system losing power is harder. There are OS-specific functions like fsync() on Unix, which force any kernel file buffers to be written to disk. But to use this you need the underlying Unix file descriptor, and there's no standard way to get that from a C++ fstream. See Retrieving file descriptor from a std::fstream
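A rough sketch of the flush-per-line idea, assuming the readings arrive in a loop (the file name and the readGyro() call are placeholders, not part of the original code):

#include <fstream>

int main()
{
    std::ofstream out("gyro.log", std::ios::app);   // append, like bash >>
    double ax = 0.0;

    while (true)
    {
        // ax = readGyro();                 // hypothetical sensor read
        out << "acc: " << ax << std::endl;  // endl writes '\n' and flushes
    }
}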
In this case you have two places where output can sit in a buffer and be lost:
printf's stdio buffers
bash's buffers and its file open/close operations.
You can improve the printf part with an fflush(stdout) call, but you have no control over the bash process.
The best solution is to write the file yourself with fprintf, followed by fflush and sync (or similar). The latter guarantees that the system's buffers are flushed as well.
In the C++ world you can use output streams to a file and end each write with endl or flush. They flush the output buffer, though you may still need a sync.
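A small sketch of the fprintf/fflush/fsync approach on a Unix system (the file name and the sensor read are placeholders):

#include <stdio.h>
#include <unistd.h>    // fileno(), fsync()

int main()
{
    FILE* fp = fopen("gyro.log", "a");
    if (!fp) return 1;

    double ax = 0.0;
    while (1)
    {
        // ax = readGyro();              // hypothetical sensor read
        fprintf(fp, "acc: %+5.3f\n", ax);
        fflush(fp);                      // flush the stdio buffer
        fsync(fileno(fp));               // ask the kernel to write it to disk
    }
}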
If you want to spawn a Windows console in an otherwise SUBSYSTEM:WINDOWS application you can use this code:
if (AllocConsole())
{
    FILE* file = nullptr;
    _wfreopen_s(&file, L"CONIN$", L"r", stdin);
    _wfreopen_s(&file, L"CONOUT$", L"w", stdout);
    _wfreopen_s(&file, L"CONOUT$", L"w", stderr);
}
The _wfreopen_s function maps stdin to CONIN$ and returns the resulting FILE* through the file variable (which we are effectively discarding).
What I'd like to do is instead map an input from something other than stdin, for example, another file stream and then write that stream to CONIN$.
For a larger picture of what I'm trying to do here, I've got a secondary thread running std::getline(std::cin... which blocks. I'd like the thread context object to just send a \n to the console to break the blocking call.
If there are other ideas, I'm open. The alternative currently is that I print a message to the console that says "Shutting down, press ENTER to quit..." Which, I guess, also works ;)
What I tried was creating FILE* conin = new FILE();, doing a memcpy to fill it with a \n, and then calling WriteFile on that pointer, thinking it might write the stream out to CONIN$. The code compiles, and the contents of the FILE* appear to be correct (0x0a), but it does not appear to send anything to the console.
I tested this by having std::cout above and below the code testing the stream write. If it works, I'd expect the two lines to be on separate lines, but they always show up on the same, suggesting that I'm not sending the file stream.
Thanks for reading!
You should not discard the FILE* handle, otherwise you won't be able to manipulate it, in particular you won't be able to properly flush/close it if required.
If you're working with threads, simply give the FILE* to the thread that requires it. Threads share the same memory space.
If you're working with processes, then you should create a pipe between the two processes involved (see Win32 API for CreatePipe for details), and connect one's stdout to the other's stdin.
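A sketch of the thread case: keep the FILE* handles instead of discarding them, and let the worker thread use them directly (entry point simplified to main for brevity; in a real SUBSYSTEM:WINDOWS build the same code would live in WinMain):

#include <windows.h>
#include <cstdio>
#include <thread>

int main()
{
    FILE* conIn  = nullptr;
    FILE* conOut = nullptr;

    if (AllocConsole())
    {
        // Keep the FILE* handles instead of discarding them.
        _wfreopen_s(&conIn,  L"CONIN$",  L"r", stdin);
        _wfreopen_s(&conOut, L"CONOUT$", L"w", stdout);

        // Threads share the same address space, so the worker can
        // use the same FILE* directly.
        std::thread worker([conOut]() {
            std::fprintf(conOut, "hello from the worker thread\n");
            std::fflush(conOut);
        });
        worker.join();

        std::fclose(conOut);
        std::fclose(conIn);
    }
    return 0;
}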
I had this snippet in a program (in Visual Studio 2005):
if(_eof(fp->_file))
{
    break;
}
It broke the enclosing loop when EOF was reached, but the program was not able to parse the last few thousand characters in the file. So, in order to find out what was happening, I did this:
if(_eof(fp->_file))
{
    cout << ftell(fp) << endl;
    break;
}
Now the answer that I got from ftell was different from (and smaller than) the actual file size, which was unexpected. I thought that Windows might have some problem with the file, so I did this:
if(_eof(fp->_file))
{
    cout << ftell(fp) << endl;
    fseek(fp, 0, SEEK_END);
    cout << ftell(fp) << endl;
    break;
}
Well, the fseek() gave the right answer (equal to the file size), while the initial ftell() still reported the smaller value, as described above.
Any idea about what could be wrong here?
EDIT: The file is open in "rb" mode.
You can't reliably use _eof() on a file descriptor obtained from a FILE*, because FILE* streams are buffered. That means fp has sucked fp->_file dry and holds the remaining bytes in its internal buffer, so fp->_file reaches the eof position while fp still has bytes for you to read. Use feof() after a read operation to determine whether you are at the end of the file, and be careful when mixing functions that operate on FILE* with functions that operate on integer file descriptors.
You should not be using _eof() directly on the descriptor if your file I/O operations are on the FILE stream that wraps it. There is buffering that takes place and the underlying descriptor will hit end-of-file before your application has read all the data from the FILE stream.
In this case, ftell(fp) is reporting the state of the stream and you should be using feof(fp) to keep them in the same I/O domain.
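One possible sketch of checking end-of-file on the stream itself after each read (the file name is a placeholder):

#include <stdio.h>

int main()
{
    FILE* fp = fopen("data.bin", "rb");
    if (!fp) return 1;

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
    {
        // process n bytes from buf ...
    }
    if (feof(fp))
    {
        printf("reached end of file at offset %ld\n", ftell(fp));
    }
    fclose(fp);
    return 0;
}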
When I construct an iostream, say by opening a file, will this always read the entire file from the hard disk into memory, or is it streamed in and buffered by the OS on demand?
I ask because one way to check whether a file exists is to see if opening it fails, but I fear that if the files I am opening are very large, this could take a long time if iostream must read the entire file on open.
Checking whether a file exists can be done like this if you want to use Boost:
#include <boost/filesystem.hpp>
bool fileExists = boost::filesystem::exists("foo.txt");
No, it will not read the entire file into memory when you open it. It will read your file in chunks though, but I believe this process will not start until you read the first byte. Also these chunks are relatively small (on the order of 4-128 kibibytes in size), and the fact it does this will speed things up greatly if you are reading the file sequentially.
In a test on my Linux box (well, Linux VM) simply opening the file only results in the OS open system call, but no read system call. It doesn't start reading anything from the file until the first attempt to read from the stream. And then it reads 8191 (why 8191? that seems a very strange number) byte chunks as I read the file in.
Opening a file is a bad way of testing if the file exists - all it does is tell you if you can open it. Opening might fail for a number of reasons, typically because you don't have read permission, but the file will still exist. It is usually better to use an operating system specific function to test for existence. And no, opening an fstream will not cause the contents to be read.
What I think is: when you open a file, the corresponding data structures for the process opening the file are populated, including the file pointer, file descriptor, v-node, etc.
Now one can read and write to a file using buffered streams (fwrite, fread) or using system calls (read and write).
When we use buffered streams, the data is buffered and then written or read (this is done for efficiency purposes). That in itself means the whole file is not read into memory: a certain number of bytes are read into a buffer and then made available.
In the case of system calls such as read and write, kernel-level buffering is done (with fsync one can flush the kernel buffer too), but the data is actually read from and written to the device/file.
Checking the existence of a file:
#include <sys/stat.h>
#include <iostream>
#include <string>

int main(){
    struct stat file_i;
    std::string f("myfile.txt");
    if (stat(f.c_str(), &file_i) != 0){
        std::cout << "File not found" << std::endl;
    }
    return 0;
}
Hope this clarifies a bit.
I'm writing an emulator for my Operating Systems course. The problem I have is that we need to get all our .job files (they are like application programs being fed to the emulator) from STDIN and read them in.
Call:
./RMMIX < aJob.job
I just slurp it with
while(getline(std::cin, line))
line by line. The problem is that if I do not put anything on STDIN, then cin will wait for user input, which is NOT what I want. I need the program to recognize a lack of text on STDIN and terminate instead of waiting for user input.
I have determined that I can query the length like so:
size_t beg = std::cin.tellg();
std::cin.seekg(0, std::ios_base::end);
size_t end = std::cin.tellg();
std::cin.seekg(0, std::ios_base::beg);
and terminate if std::cin has a length of 0.
Are there any other solutions to this? Is this a portable solution?
I don't think there's a platform independent way of doing this, but on Unix-based systems you should be able to do:
#include <unistd.h>
...
int main() {
    if (!isatty(0)) {
        // stdin is being streamed from a file or something else that's not a TTY.
    }
    ...
}
However, I think doing it via a command line option is the preferred approach.
You need to redesign your program. Instead of reading from standard input, read from a named file whose name you provide on the command line. Then instead of:
./RMMIX < aJob.job
you say:
./RMMIX aJob.job
This is much easier and more portable than trying to determine if there is anything in standard input.
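A small sketch of that approach (the usage message and error handling are only illustrative):

#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char* argv[])
{
    if (argc < 2) {
        std::cerr << "usage: RMMIX <jobfile>\n";
        return 1;
    }

    std::ifstream in(argv[1]);
    if (!in) {
        std::cerr << "cannot open " << argv[1] << "\n";
        return 1;
    }

    std::string line;
    while (std::getline(in, line)) {
        // process the job line ...
    }
    return 0;
}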
You might also look at this http://www.programmersheaven.com/mb/CandCPP/232821/232821/non-blocking-reads-on-stdin/ for an idea that comes at the problem from another direction -- don't check the number of bytes on the stream, but instead just make the read succeed immediately and then check to see if anything was read.
You can press Ctrl+D on the command line to signal end-of-file for standard input on the running program.
This is desired behavior. Otherwise, if programs exited immediately when no input remained, pipelines could randomly be broken by commands that were waiting on another command that had not been scheduled to run (and that had not produced any additional output), or that buffered output and emitted it all at once, like sort does.
When using io redirection to pull stdin from a file via something like ./RMMIX < file.txt, this end-of-file condition is signaled automatically when there is no more data left in the file. For input read from a terminal, waiting is probably the desired behavior.
I have two processes (the first's source is in Perl, the second's in C++) and both of them use the same file. One of them writes to the file line by line, and the other reads from the file whenever a new line is appended to it. So that the second process knows the file has been modified, the first process flushes after each append. But the second process only checks whether the modification time has increased before it starts reading, and it turns out that appending a new line and flushing does not actually change the file's "last modification time". So another approach is needed. The question is: how do I detect that a new line has been appended to the file, so I can start processing it?
Here are fragments from the sources of these processes:
1.
open FILE, ">>", $file or die $!;
for($i = 0; $i <= $#ticks; ++$i)
{
    print FILE $ticks[$i]."\n";
    FILE->flush();
    sleep(10);
}
close FILE;
2.
struct stat64 file_info;
if(fstat64(fileno(this->auto_file_ptr.get_file()), &file_info) != 0)
{
    // throw an error that the file has changed
}
data_file_new_modification_time = 1000LL * file_info.st_mtime;
if(this->data_file_last_modification_time != data_file_new_modification_time)
{
    // process the file
}
Use IPC mechanisms to synchronize your C++ and Perl applications. You can use semaphores or mutexes for this purpose.
Welcome to event-based programming.
Employ select: you do not want to know when the file has changed, but when new data is available, and that system call does exactly that.
See the implementation in File::Tail for a real-world example with all the edge cases nicely taken care of. Porting this down to C should be easy for you, or use libev with the select backend.
It seems that stat64 doesn't update its st_mtime member when the file is flushed; instead, st_size changes. So I can use st_size to detect whether the file has changed.
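A rough sketch of polling st_size instead of st_mtime (the path and the poll interval are placeholders):

#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>

int main()
{
    const char* path = "ticks.txt";   // placeholder file name
    off_t last_size = 0;

    for (;;)
    {
        struct stat file_info;
        if (stat(path, &file_info) != 0)
        {
            perror("stat");
            break;
        }
        if (file_info.st_size != last_size)
        {
            last_size = file_info.st_size;
            // new data was appended: read the new tail of the file here ...
        }
        sleep(1);  // poll interval
    }
    return 0;
}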