C++ file operations cause "crash" on embedded Linux

I'm working on an embedded LED measuring system. It runs Linux on ARM and has 64 MB of memory and 1 GB of storage. During measurement it is supposed to write data to a .csv file. I did it this way:
Create/open a file before measurement begins
In the measuring loop, when data is ready, write it to the file, then continue with the next measurement
When the user stops the measurement, the file is closed
But with this feature added, the program runs for several hours and then the machine stops responding to anything (measuring stops, the UI is still displayed but does not react to any action, etc.). At that point the .csv file is about 15 MB.
Without this feature, the machine works fine all day.
I've thought about this; maybe the memory is being used up. With such a small amount of memory, is it feasible to keep writing to one open file? Or should I close the file every time I finish writing data? (In that case I would have to open/close the file very frequently, which would slow our system down, and that is something we'd rather avoid.)
Apologies for my poor English; I hope someone can understand it and give me some help.
God is lighting your path, thank you all!
PS: I do believe the file operations themselves are correct.
The code looks like this:
std::ofstream out_put;
out_put.open(filePath, std::ofstream::out | std::ofstream::trunc);
while (!userStoped()) {
    doSomeMesuring();
    for (int itemIndex = 0; itemIndex < itemCount; ++itemIndex) {
        out_put << ',' << itemName.toStdString() << ','
                << data->mdata.item[itemIndex].mvalue << ','
                << data->mdata.item[itemIndex].judge << std::endl;
    }
}
out_put.close();

You write to 'out_put', the ofstream, but never check if the stream is still valid.
You could change it to
while (out_put.good() && !userStoped())
To prove to yourself that it is the writing to the stream that is causing the problem, comment out all of the measuring code and just write lots of 'x' characters (or your choice of character!) to the stream, and see if you get the same result.
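A minimal sketch of such a stress test, assuming a placeholder path and batch size; it appends dummy rows until the stream reports a failure, flushing in batches rather than on every line:

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // "/tmp/stress.csv" is a placeholder; point it at your real storage.
    std::ofstream out("/tmp/stress.csv", std::ofstream::out | std::ofstream::trunc);
    std::string row(64, 'x');           // dummy payload instead of real measurements
    for (long i = 0; out.good(); ++i) {
        out << i << ',' << row << '\n'; // '\n' instead of std::endl avoids a flush per line
        if (i % 10000 == 0) {
            out.flush();                // flush in batches so data reaches the OS regularly
            if (!out) {
                std::cerr << "write failed near row " << i << '\n';
                return 1;
            }
        }
    }
    return 0;
}

If the machine still locks up with only this running, the file writing alone triggers the problem; if not, the measuring code is the more likely culprit.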


C++ read text line-by-line, speed/efficiency savings needed

I have a series of large text files (tens to hundreds of thousands of lines) that I want to parse line by line. The idea is to check whether each line contains a specific word/character/phrase and, for now, to record the line to a secondary file if it does.
The code I've used so far is:
ifstream infile1("c:/test/test.txt");
ofstream outfile1("c:/test/matches.txt"); // hypothetical destination for matching lines
string line;
while (getline(infile1, line)) {
    if (line.empty()) continue;
    if (line.find("mystring") != std::string::npos) {
        outfile1 << line << '\n';
    }
}
The end goal is to be writing those lines to a database. My thinking was to write them to the file first and then to import the file.
The problem I'm facing is the time taken to complete the task. I'm looking to minimize that time as far as possible, so any suggestions for time savings on the read/write scenario above would be most welcome. Apologies if anything is obvious; I've only just started moving into C++.
Thanks
EDIT
I should say that I'm using VS2015
EDIT 2
So this was my own dumb fault: when I switched to a Release build and changed the target architecture, I saw noticeable speed increases. Thanks to everyone for pointing me in that direction. I'm also looking at the mmap stuff and that's proving useful too. Thanks guys!
When you use ifstream to read and process really big files, it can help to increase the stream's buffer size from the default (which is implementation-defined and often small).
The best buffer size depends on your needs, but as a hint you can use the block size of the partition holding the file(s) you're reading/writing. You can find that out with various tools or even with code.
Example in Windows:
fsutil fsinfo ntfsinfo c:
Now you have to give the ifstream a new buffer, like this:
size_t newBufferSize = 4 * 1024; // 4 KB
char *newBuffer = new char[newBufferSize];
ifstream infile1;
// The buffer must be installed before the file is opened.
infile1.rdbuf()->pubsetbuf(newBuffer, newBufferSize);
infile1.open("c:/test/test.txt");
while (getline(infile1, line)) {
    /* ... */
}
delete[] newBuffer; // delete[] to match new[]
Do the same with the output stream, and don't forget to set the new buffer before opening the file, or it may not take effect.
You can play with the values to find the best size for you.
You'll notice the difference.
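As one way to "play with values", here is a minimal timing sketch; the path and the candidate sizes are placeholders, and since repeated runs hit the OS file cache, the numbers are only comparable between runs made under similar conditions:

#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Time one full pass over the file using a given stream buffer size.
static long long timeRead(size_t bufSize) {
    std::vector<char> buf(bufSize);
    std::ifstream in;
    in.rdbuf()->pubsetbuf(buf.data(), buf.size()); // must be installed before open
    in.open("c:/test/test.txt");                   // placeholder path
    auto start = std::chrono::steady_clock::now();
    std::string line;
    while (std::getline(in, line)) { /* just read */ }
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
}

int main() {
    for (size_t kb : {4, 16, 64, 256})
        std::cout << kb << " KB buffer: " << timeRead(kb * 1024) << " ms\n";
}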
C-style I/O functions are often much faster than fstream, though how much depends on the implementation.
You may use fgets/fputs to read/write each text line.
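A minimal sketch of the same line filter written with C stdio; the file names and the fixed line-length limit are assumptions for illustration:

#include <cstdio>
#include <cstring>

int main() {
    std::FILE *in  = std::fopen("c:/test/test.txt", "r");    // placeholder paths
    std::FILE *out = std::fopen("c:/test/matches.txt", "w");
    if (!in || !out) return 1;

    char line[4096]; // assumes no line is longer than 4096 bytes
    while (std::fgets(line, sizeof line, in)) {
        if (std::strstr(line, "mystring"))  // same substring test as the ifstream version
            std::fputs(line, out);          // fgets keeps the '\n', so fputs is enough
    }
    std::fclose(in);
    std::fclose(out);
    return 0;
}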

Writing to a .csv file with C++?

TL;DR: I am trying to take a stream of data and make it write to a .csv file. Everything works except the writing part, which I think is simply due to me not referencing the .csv file correctly. But I'm a newbie to this stuff and can't figure out how to reference it correctly, so I need help.
Hello, and a big thank you in advance to anyone that can help me out with this! Some advance info, my IDE is Xcode, using C++, and I'm using the Myo armband from Thalmic Labs as a device to collect data. There is a program (link for those interested enough to look at it) that is supposed to stream the EMG, accelerometer, gyroscope, and orientation values into a .csv file. I am so close to getting the app to work, but my lack of programming experience has finally caught up to me, and I am stuck on something rather simple. I know that the app can stream the data, as I have been able to make it print the EMG values in the debugging area. I can also get the app to open a .csv file, using this code:
const char *path= "/Users/username/folder/filename";
std::ofstream file(path);
std::string data("data to write to file");
file << data;
But no data ends up being streamed/printed into that file after I end the program. The only thing I can think of that might be causing this is that the print function is not referencing the file path correctly. I would assume that to be a straightforward thing, but like I said, I am inexperienced and do not know exactly how to address it. I am not sure what other information is necessary, so I'll just provide everything that I imagine might be helpful.
This is the function structure that is supposed to open the files: (Note: The app is intended to open the file in the same directory as itself)
void openFiles() {
    time_t timestamp = std::time(0);
    // Open file for EMG log
    if (emgFile.is_open())
    {
        emgFile.close();
    }
    std::ostringstream emgFileString;
    emgFileString << "emg-" << timestamp << ".csv";
    emgFile.open(emgFileString.str(), std::ios::out);
    emgFile << "timestamp,emg1,emg2,emg3,emg4,emg5,emg6,emg7,emg8" << std::endl;
}
This is the helper to print accelerometer and gyroscope data (There doesn't appear to be anything like this to print EMG data, but I know it does, so... Watevs):
void printVector(std::ofstream &path, uint64_t timestamp, const myo::Vector3<float> &vector)
{
    path << timestamp
         << ',' << vector.x()
         << ',' << vector.y()
         << ',' << vector.z()
         << std::endl;
}
And this is the function structure that utilizes the helper:
void onAccelerometerData(myo::Myo *myo, uint64_t timestamp, const myo::Vector3<float> &accel)
{
    printVector(accelerometerFile, timestamp, accel);
}
I spoke with a staff member at Thalmic Labs (the guy who made the app actually) and he said it sounded like, unless the app was just totally broken, I was potentially just having problems with the permissions on my computer. There are multiple users on this computer, so that may very well be the case, though I certainly hope not, and I'd still like to try and figure it out one more time before throwing in the towel. Again, thanks to anyone who can be of assistance! :)
My imagination is failing me. Have you tried writing to or reading from ostringstream or istringstream objects? That might be informative. Here's a line that's correct:
std::ofstream outputFile( strOutputFilename.c_str(), std::ios::app );
Note that C++ doesn't have any native support for streaming .csv data, though; you may have to do those conversions yourself. :( Things may work better if you replace the "/"s in the path with (doubled) "\\"s...
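One way to narrow down a path or permissions problem is to check the stream right after opening it and report the OS error. A minimal sketch, with a placeholder path; note that relying on errno after a failed ofstream open is common practice but not guaranteed by the standard:

#include <cerrno>
#include <cstring>
#include <fstream>
#include <iostream>

int main() {
    const char *path = "/Users/username/folder/test.csv"; // placeholder path
    std::ofstream file(path);
    if (!file.is_open()) {
        // on most implementations the failed underlying open() leaves a reason in errno
        std::cerr << "could not open " << path << ": " << std::strerror(errno) << '\n';
        return 1;
    }
    file << "timestamp,value\n";
    file.flush();   // push the data out of the stream's buffer
    if (!file)
        std::cerr << "write to " << path << " failed\n";
    return 0;
}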

Open a txt file with a text editor while it's already opened by fopen() and in use?

This is for a logger in my program. I saw in another program that it's somehow possible to open and read a file with a text editor while the program is still using it; it seems the editor just opens a copy for me while logging continues in the background. That is the kind of log system I need too. But if I use fopen(), I can only open and read the file with my text editor once the program has closed it with fclose(). Opening and closing the file on every log entry would work, but I think that's a very bad and very slow solution.
Does anyone know how such a log system works?
P.S. I'm working in Visual Studio 2013 on Windows 8.1.
Sorry for my bad English :S
There are two different problems.
The first is writing the logs. On a Windows system, buffering means the data is actually written to disk:
when you close the file
when a fair quantity of new data has accumulated (somewhere between several KB and several MB)
when you explicitly flush
Unless you have a high throughput, I would advise flushing (if not closing) after each write, to avoid losing logs if the program crashes. It also allows you to read the log file in real time. A sketch of this pattern follows below.
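A minimal sketch of that pattern, with a placeholder log file name:

#include <cstdio>

// Append one line to the log and flush immediately, so the data survives a
// crash and other programs can read it right away. "app.log" is a placeholder.
void logLine(const char *msg) {
    static std::FILE *log = std::fopen("app.log", "a");
    if (!log) return;
    std::fprintf(log, "%s\n", msg);
    std::fflush(log); // push the line out of the stdio buffer to the OS
}

int main() {
    logLine("starting up");
    logLine("doing work");
    return 0;
}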
The second is reading. Vim, for example, is known to be able to monitor a file that is being modified by an external process: it will open a popup saying that the file has been modified and offer to reload it. I do not know what Notepad does under the same conditions. But:
this makes no sense until the first problem is solved
it is not very efficient, since you reload the whole file each time
IMHO, you'd better write a custom reader that mimics Linux's tail -f (see the sketch after this list):
read (and display) until end of file
repeatedly read (with a short sleep after an unsuccessful read) to process newly added data
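A minimal sketch of such a follower; it assumes the log file already exists, the name is a placeholder, and it relies on the common (though not standard-guaranteed) behavior that clearing eofbit lets the stream pick up newly appended data:

#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>

int main() {
    std::ifstream in("app.log"); // placeholder name; file must already exist
    std::string line;
    while (true) {
        while (std::getline(in, line))
            std::cout << line << '\n';  // display everything up to the current end
        in.clear();                     // clear eofbit so we can read again later
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
}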
It all depends on the text editor you are using. Some will notice edits to the file and ask you if you want to reload a fresh version.
If you work on Linux and you'd like to get an idea of what's happening in real time, you could do something like
tail -f <path-to-file>
or, if the file doesn't yet exist,
watch -n 0.2 "cat <path-to-file> | tail"
which will display the content of the file and refresh it every 0.2 sec.
Thanks for your fast answers :)
Crazy... I had been working with fopen() for so long and found no solution; even fflush(pFile) didn't help (I wasn't able to open the file; I always got an error that it's already in use by another program). I had never tried fstream. fstream seems to have solved my problem: I can open my file with msnotepad.exe while the program is still writing to it :) Here is a small test program:
#include <fstream>
#include <iostream>
using namespace std;

int main() {
    ofstream FILE;
    FILE.open("E:\\Log.txt");
    for (size_t i = 0; i < 50; i++)
    {
        FILE << "Hello " << i << endl; // endl flushes each line to the OS
        cout << "log" << endl;
        _sleep(500);                   // MSVC-specific; sleeps 500 ms
    }
    FILE.close();
    cout << "finish" << endl;
    return 0;
}

Delay in ofstream::open, possibly due to mixing with _iobuf?

I have a C++ program that creates an output file "A" with ofstream. This file is then read by some legacy C code that opens the file with _iobuf. The legacy code then creates its own output file "B" using _iobuf, and this file is then read by the C++ program using ifstream. This sequence is iterated many times, with the same file names for A and B in each iteration.
Occasionally, the C++ program cannot open the output file A for writing, and I must try several times before it succeeds. This occurs nondeterministically, and maybe once in a thousand iterations. Note that the C program never has to wait to open its input or output file, nor does the C++ program ever have to wait to open its input file. This informal observation is based on hundreds of thousands of iterations.
I'm wondering if this has something to do with mixing ofstream and _iobuf in the same program? Both the C++ code and the C code are linked into the same program. And the legacy C code is technically C++ code, but was written in a very C-like style. Is there anything I can do to eliminate this waiting to open the ofstream file? I do not want to change the legacy code if I can possibly avoid it.
Pseudo code (not compiled):
void someObject::someMethod()
{
    for (int count = 0; count < someLimit; ++count)
    {
        newerObject::firstMethod();
        olderObject::secondMethod();
        newerObject::thirdMethod();
    }
}

void newerObject::firstMethod()
{
    // do some processing first
    // then write the results of the processing to a file
    ofstream A;
    A.open("A", ofstream::out); // this sometimes must be tried multiple times
    // write data to file A
    A.close();
}

void olderObject::secondMethod()
{
    FILE* f;
    f = fopen("A", "rt"); // this always works the first time
    // read data from file A
    fclose(f);
    // do some processing
    f = fopen("B", "w");
    // write data to file B
    fclose(f);
}

void newerObject::thirdMethod()
{
    ifstream B;
    B.open("B"); // this always works the first time
    // read data from file B
    B.close();
    // do some processing
}
Currently, as a workaround, I put the ofstream::open in a do-while loop. I would love to get rid of this awkwardness. Thanks in advance for any advice you can give.
First off, the problem is almost certainly not the use of different methods to access the files: under the hood, the C and C++ I/O functions use the same system I/O facilities. You seem to be using Windows (on other systems, files can typically be open multiple times simultaneously), and I don't know much about that system, but I would suspect that the file system hasn't been updated to reflect that the file is closed by the time you try to open it again. This may have to do with the "t" open flag; I don't know what that is about.
On UNIXes you can force I/O operations to wait until the actual change has reached the disk. Something like that could help avoid the problem, but it has the significant cost that operations become hideously slow. Alternatively, on UNIXes one approach would be to blow away the file system entry the moment the file has been opened successfully (after all, at that point its name isn't used anymore):
if (FILE* fp = fopen("file", "r")) {
    remove("file"); // the name goes away, but the open descriptor keeps the data alive
    // do processing
    fclose(fp);
}
However, if I recall correctly on Windows you can neither remove the file nor rename it. Personally, in solving the problem I would proceed as follows:
Determine under which situations the file can't be opened, e.g. by keeping the file open and trying to open it. This is mainly intended to create a setup where the problem is reproducible so you can verify later that you indeed found a solution.
Once I had found a way to reproduce the problem, I would probably have a better idea of the actual root cause, and possibly googling would help. In any case, this is the point where researching the root cause comes in.
Once the cause is understood, it is hopefully easy to devise a solution. If not, retrying the open until it succeeds may very well be the right solution.
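If retrying ends up being the answer, a bounded retry loop reads a little better than an open-ended do-while. A minimal sketch, where the attempt count and delay are arbitrary placeholders:

#include <chrono>
#include <fstream>
#include <stdexcept>
#include <string>
#include <thread>

// Try to open a file for writing, retrying briefly before giving up.
std::ofstream openWithRetry(const char *name, int maxAttempts = 10) {
    for (int attempt = 0; attempt < maxAttempts; ++attempt) {
        std::ofstream out(name, std::ofstream::out);
        if (out.is_open())
            return out;                 // relies on C++11 movable streams
        // short pause to give the OS time to finish releasing the file
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    throw std::runtime_error(std::string("could not open ") + name);
}

This keeps the workaround in one place and turns a permanent failure into a visible error instead of an endless loop.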

Raise I/O error while writing to an unreliable disk in C++

Imagine you have the following in C++:
ofstream myfile;
myfile.open(argv[1]);
if (myfile.is_open()) {
    for (int n = 0; n <= 10; n++) {
        myfile << "index=" << n << endl;
        sleep(1);
    }
} else {
    cerr << "Unable to open file";
}
myfile.close();
Now imagine that while writing, the disk or medium you are writing to becomes unavailable but comes back before the close(), so that you have missing data in between. Or imagine you write to a USB flash drive and the device is removed and re-inserted during the writing process.
How can you detect that? I tried putting the write in a try/catch block, checking flags(), rdstate(), you name it, but none of these has worked so far.
I don't think that is something you can detect at the stdio level. Typically when a hard drive temporarily stops responding, the operating system will automatically retry the commands either until they succeed or a timeout is reached, at which point your system call may receive an error. (OTOH it may not, because your call may have returned already, after the data was written into the in-memory filesystem cache but before any commands were sent to the actual disk)
If you really want to detect a flaky hard drive, you'll probably need to work at a much lower level, e.g. write your own hardware driver.
IMHO you can try to:
Use ios::exceptions
Use low-level OS interactions
Verify that the I/O was successful (if 1 and 2 don't work)
I'm not sure if this will cover your scenario (removing a USB drive mid-write), but you can try enabling exceptions on the stream:
myfile.exceptions(ios::failbit | ios::badbit);
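A minimal sketch of that approach, with a placeholder file name; note that the exception tells you the stream failed, not why, and (as discussed below) a successful write may still only have reached a buffer:

#include <fstream>
#include <iostream>

int main() {
    std::ofstream myfile;
    // Make the stream throw std::ios_base::failure when a write (or open) fails.
    myfile.exceptions(std::ios::failbit | std::ios::badbit);
    try {
        myfile.open("out.txt"); // placeholder name
        for (int n = 0; n <= 10; n++)
            myfile << "index=" << n << std::endl; // endl flushes the stream buffer
        myfile.close();
    } catch (const std::ios_base::failure &e) {
        std::cerr << "I/O failure: " << e.what() << '\n';
        return 1;
    }
    return 0;
}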
In my experience, iostreams do a "great" job of making it hard to detect errors and the type of error.
for (int n = 0; n <= 10; n++) {
    if (!(myfile << "index=" << n << endl))
        throw std::runtime_error("WRITE FAILED");
    sleep(1);
}
If the std::ostream fails for any reason, it sets its state bits, which are checked when the stream is used in a boolean context. This is the same way you check whether a std::istream read data into a variable correctly.
However, this is the same as rdstate(), which you say you tried. If that's the case, the write has at least reached a buffer. endl, which flushes the program's buffer, means the data is in the operating system's buffer; from there, you'll have to use OS-specific calls to force a flush to the device.
[Edit] According to http://msdn.microsoft.com/en-us/library/17618685(v=VS.100).aspx, you can force a flush with _commit if you have a file descriptor. I can't find such a guarantee for std::ostreams.
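A minimal sketch of pushing a write all the way to the device, using C stdio so a file descriptor is available; the file name is a placeholder, and which sync call exists depends on the platform (_commit on Windows, fsync on POSIX):

#include <cstdio>

#ifdef _WIN32
#include <io.h>      // _commit
#else
#include <unistd.h>  // fsync
#endif

// Write a line and ask the OS to flush it to the device.
// Returns false if any step reports a failure.
bool writeAndSync(std::FILE *f, const char *line) {
    if (std::fputs(line, f) == EOF) return false;
    if (std::fflush(f) != 0) return false;   // stdio buffer -> OS buffer
#ifdef _WIN32
    return _commit(_fileno(f)) == 0;         // OS buffer -> disk (Windows)
#else
    return fsync(fileno(f)) == 0;            // OS buffer -> disk (POSIX)
#endif
}

int main() {
    std::FILE *f = std::fopen("out.txt", "w"); // placeholder name
    if (!f) return 1;
    bool ok = writeAndSync(f, "index=0\n");
    std::fclose(f);
    return ok ? 0 : 1;
}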