I want to create a series of files under the "log" directory, where every file is named based on its execution time. In each of these files I want to store some log info for my program, like the prototype of the function that ran, etc.
Usually I take the hard way of fopen("log/***","a"), which is not suited for this purpose. So far I have just written a timestamp function:
char* timeStamp(char* txt){
    char timestamp[16];
    time_t rawtime = time(0);
    if(rawtime == (time_t)-1)    /* time() failed: leave txt unchanged */
        return txt;
    struct tm *now = localtime(&rawtime);
    strftime(timestamp, sizeof timestamp, "%y%m%d_%H%M%S", now);
    return strcat(txt, timestamp);
}
But I don't know what to do next. Please help me with this!
Declare a char array big enough to hold the 16-character timestamp plus "log/" (so 20 characters total) and initialize it to "log/", then use strcat() to append the time string returned by your function to the end of your array. And there you go!
Note how the string addition works: Your char array is 16 characters, which means you can put in 15 characters plus a nul byte. It's important not to forget that. If you need a 16 character string, you need to declare it as char timestamp[17] instead. Note that "log/" is a 4 character string, so it takes up 5 characters (one for the nul byte at the end), but strcat() will overwrite starting at the nul byte at the end, so you'll end up with the right number. Don't count the nul terminator twice, but more importantly, don't forget about it. Debugging that is a much bigger problem.
EDIT: While we're at it, I misread your code. I thought it just returned a string with the time, but it appears that it adds the time to a string passed in. This is probably better than what I thought you were doing. However, if you wanted, you could just make the function do all the work - it puts "log/" in the string before it puts the timestamp. It's not that hard.
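For example, a minimal sketch of that all-in-one variant (the function name logName and the size parameter are my own choices, not from the question; buf must hold at least 18 characters):

/* Hypothetical all-in-one variant: writes "log/" plus the timestamp into buf. */
char* logName(char* buf, size_t size){
    time_t rawtime = time(0);
    strftime(buf, size, "log/%y%m%d_%H%M%S", localtime(&rawtime));
    return buf;
}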
What about this:
#include <stdio.h>
#include <time.h>
#define LOGNAME_FORMAT "log/%Y%m%d_%H%M%S"
#define LOGNAME_SIZE 20
FILE *logfile(void)
{
static char name[LOGNAME_SIZE];
time_t now = time(0);
strftime(name, sizeof(name), LOGNAME_FORMAT, localtime(&now));
return fopen(name, "ab");
}
You'd use it like this:
FILE *file = logfile();
// do logging
fclose(file);
Keep in mind that localtime() is not thread-safe!
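On POSIX systems you can use localtime_r(), the reentrant variant that writes into a caller-supplied struct tm instead of a shared static one. A sketch of the same function with that substitution (note that the static name buffer is still shared if logfile() itself is called from several threads):

FILE *logfile(void)
{
    static char name[LOGNAME_SIZE];
    struct tm tmbuf;               /* caller-owned, so no shared state */
    time_t now = time(0);
    localtime_r(&now, &tmbuf);
    strftime(name, sizeof(name), LOGNAME_FORMAT, &tmbuf);
    return fopen(name, "ab");
}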
Steps to create (or write to) a sequential access file in C++:
1. Declare a stream variable name:
ofstream fout; //each file has its own stream buffer
ofstream is short for output file stream
fout is the stream variable name
(and may be any legal C++ variable name.)
Naming the stream variable "fout" is helpful in remembering
that the information is going "out" to the file.
2. Open the file:
fout.open("scores.dat", ios::out);
fout is the stream variable name previously declared
"scores.dat" is the name of the file
ios::out is the stream operation mode
(your compiler may not require that you specify
the stream operation mode.)
3. Write data to the file:
fout<<grade<<endl;
fout<<"Mr";
The data must be separated with space characters or end-of-line characters (carriage return), or the data will run together in the file and be unreadable. Try to save the data to the file in the same manner that you would display it on the screen.
If the <iomanip> header file is included, you will be able to use familiar formatting manipulators with file output.
fout<<setprecision(2);
fout<<setw(10)<<3.14159;
4. Close the file:
fout.close( );
Closing the file writes any data remaining in the buffer to the file, releases the file from the program, and updates the file directory to reflect the file's new size. As soon as your program is finished accessing the file, the file should be closed. Most systems close any open data files when a program terminates, but if data remains in the buffer when the program terminates, you may lose that data. Don't take the chance: close the file!
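Putting the four steps together, a minimal compilable sketch (the file name "scores.dat" and the sample values are just the ones used above):

#include <fstream>
#include <iomanip>
using namespace std;

int main()
{
    ofstream fout;                        // 1. declare a stream variable
    fout.open("scores.dat", ios::out);    // 2. open the file
    fout << setprecision(2);              // 3. write formatted data
    fout << setw(10) << 3.14159 << endl;
    fout << "Mr" << endl;
    fout.close();                         // 4. close the file
    return 0;
}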
Sounds like you have mostly solved it already - to create a file like you describe:
char filename[256] = "log/";
timeStamp( filename );
FILE *f = fopen( filename, "a" );
Or do you wish to do something more?
I can read a line of the text file with fgets and print it out using SetWindowTextA, like this code:
FILE *p_file = fopen("Test.txt", "r");
if (p_file != NULL) {
text = fgets(temp, sizeof(temp), p_file);
m_Edit_load.SetWindowTextA(text);
fclose(p_file);
}
But I want to print out all the lines. I've used the code below, but only the last line is printed
FILE *p_file = fopen("Test.txt", "r");
if (p_file != NULL) {
while (NULL != fgets(temp, sizeof(temp), p_file)) {
m_Edit_load.SetWindowTextA(temp);
}
fclose(p_file);
}
How can I print out all the rows?
The problem here is that SetWindowTextA sets the text, it does not append, hence your window ends up showing only the last line. To fix this, first collect all the characters in a dynamic buffer, then call SetWindowTextA once at the end.
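A minimal sketch of that approach, assuming the m_Edit_load control from the question (the 1024-byte chunk size is my own choice; requires <string>):

std::string all;                             // dynamic buffer for the whole file
FILE *p_file = fopen("Test.txt", "rb");      // binary mode keeps \r\n intact
if (p_file != NULL) {
    char temp[1024];
    while (fgets(temp, sizeof(temp), p_file) != NULL)
        all += temp;                         // append each chunk
    fclose(p_file);
    m_Edit_load.SetWindowTextA(all.c_str()); // one call with the full text
}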
The most straightforward way is to open the file in binary mode, load it into a buffer and put it in the control through a single SetWindowText() call.
Depending on the format of the text file, it may require some additional steps:
If the file is ASCII, and of the same codepage as the system it runs on, a SetWindowTextA() call is OK.
If the file is Unicode it can be loaded onto the control by calling SetWindowTextW() - the control must be Unicode as well.
If the file is UTF-8 or ASCII, but in a codepage other than that of the system, the text must be converted to Unicode using the MultiByteToWideChar() function before being loaded onto the control.
Another conversion that may be needed is LF to CR-LF, if the lines are joined in the control. You need to write some code for this.
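A sketch of the UTF-8 case (here bytes and len stand for the raw file contents you loaded, and hwndEdit for the edit control's window handle; error handling omitted):

// Convert UTF-8 to UTF-16, then set the text of the (Unicode) control.
int wlen = MultiByteToWideChar(CP_UTF8, 0, bytes, len, NULL, 0);
std::wstring wide(wlen, L'\0');
MultiByteToWideChar(CP_UTF8, 0, bytes, len, &wide[0], wlen);
::SetWindowTextW(hwndEdit, wide.c_str());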
As already stated in one of the other answers, the problem is that SetWindowText will overwrite the text. Your code seems to incorrectly assume that this function will append the text instead.
If you want to set the edit control to the entire text of a file, then you will have to read the entire text of the file into a memory buffer, and pass a pointer to that memory buffer to SetWindowText.
The function fgets is used for reading a single line. Although you can solve your problem with fgets, there is no reason to limit yourself to only reading one line at a time. It would therefore be more efficient to read as much data as possible in one function call, for example by using the function fread instead of fgets.
Another issue is that you are opening your text file in text mode, which means that the \r\n line endings will get translated to \n. However, this is not what you want, because when using SetWindowText on a multiline edit control, the line endings must be \r\n, not \n. Therefore, you should change the line
FILE *p_file = fopen("Test.txt", "r");
to
FILE *p_file = fopen("Test.txt", "rb");
in order to open the file in binary mode.
The whole code should look like this:
FILE *fp = fopen( "Test.txt", "rb" );
if ( fp != NULL )
{
char buffer[4096];
size_t bytes_read;
bytes_read = fread( buffer, 1, (sizeof buffer) - 1, fp );
buffer[bytes_read] = '\0';
m_Edit_load.SetWindowTextA( buffer );
fclose( fp );
}
If it is possible that 4096 bytes is not sufficient to contain the entire file, then you could increase the size of the buffer. However, you should not increase it too much, because otherwise, there is a danger of a stack overflow. Instead of allocating the memory buffer on the stack, you could also allocate it on the heap, by using malloc instead. Another alternative would be to use a static buffer, which also does not get allocated on the stack.
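For example, a heap-allocated variant might look like this (the one-megabyte cap is an arbitrary assumption, not a value from the original code):

FILE *fp = fopen( "Test.txt", "rb" );
if ( fp != NULL )
{
    size_t cap = 1024 * 1024;     /* assumed upper bound on the file size */
    char *buffer = (char *)malloc( cap );
    if ( buffer != NULL )
    {
        size_t bytes_read = fread( buffer, 1, cap - 1, fp );
        buffer[bytes_read] = '\0';
        m_Edit_load.SetWindowTextA( buffer );
        free( buffer );
    }
    fclose( fp );
}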
I am writing a program to reformat a DNS log file for insertion to a database. There is a possibility that the line currently being written to in the log file is incomplete. If it is, I would like to discard it.
I started off believing that the eof function might be a good fit for my application, however I noticed a lot of programmers dissuading the use of the eof function. I have also noticed that the feof function seems to be quite similar.
Any suggestions/explanations that you guys could provide about the side effects of these functions would be most appreciated, as would any suggestions for more appropriate methods!
Edit: I currently am using the istream::peek function in order to skip over the last line, regardless of whether it is complete or not. While acceptable, a solution that determines whether the last line is complete would be preferred.
The specific comparison I'm using is: logFile.peek() != EOF
I would consider using
int fseek ( FILE * stream, long int offset, int origin );
with SEEK_END
and then
long int ftell ( FILE * stream );
to determine the number of bytes in the file, and therefore where it ends. I have found this to be more reliable for detecting the end of the file (in bytes).
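A minimal sketch of that size check (the file name "dns.log" is a placeholder):

FILE *stream = fopen("dns.log", "rb");
if (stream != NULL) {
    fseek(stream, 0L, SEEK_END);      /* jump to the end of the file */
    long int size = ftell(stream);    /* offset of the end == size in bytes */
    printf("file is %ld bytes\n", size);
    fclose(stream);
}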
Could you detect an EOR (End of Record/Line) marker, CRLF perhaps, in the last two or three bytes of the file? (Three bytes might be needed for CRLF^Z; it depends on the file type.) This would verify that you have a complete last row:
fseek (stream, -2, SEEK_END);
char tail[2];
fread (tail, 1, 2, stream);    /* then check tail for "\r\n" */
If you try to open the file with exclusive locks, you can detect (by the failure to open) that the file is in use, and try again in a second...(or whenever)
If you need to capture the file contents as the file is being written, it's much easier if you eliminate as many layers of indirection and buffering between your logic and the actual bytes of data in the file.
Do not use C++ IO streams of any type - you have no real control over them. Don't use FILE *-based functions such as fopen() and fread() - those are buffered, and even if you disable buffering there are layers of code between your code and the data that once again you can't control and don't know what's happening.
In a POSIX environment, you can use low-level C-style open() and read()/pread() calls. And use fstat() to know when the file contents have changed - you'll see the st_size member of the struct stat argument change after a call to fstat().
You'd open the file like this:
int logFileFD = open( "/some/file/name.log", O_RDONLY );
Inside a loop, you could do something like this (error checking and actual data processing omitted):
size_t lastSize = 0;
while ( !done )
{
    struct stat statBuf;
    fstat( logFileFD, &statBuf );
    if ( statBuf.st_size == lastSize )
    {
        sleep( 1 );    // or however long you want
        continue;      // go to next loop iteration
    }
    // process new data - might need to keep some of the old data
    // around to handle lines that cross boundaries
    processNewContents( logFileFD, lastSize, statBuf.st_size );
    lastSize = statBuf.st_size;    // remember how far we have processed
}
processNewContents() could look something like this:
void processNewContents( int fd, size_t start, size_t end )
{
    static char oldData[ BUFSIZE ];
    static char newData[ BUFSIZE ];
    // assumes amount of data will fit in newData...
    // (pread takes the byte count first, then the file offset)
    ssize_t bytesRead = pread( fd, newData, end - start, start );
    // process the data that was read here
    return;
}
You may also find that you need to add some code to close() then re-open() the file in case your application doesn't seem to be "seeing" data written to the file. I've seen that happen on some systems - the application somehow sees a cached copy of the file size somewhere while an ls run in another context gets the more accurate, updated size. If, for example, you know your log file is written to every 10-15 seconds, if you go 30 seconds without seeing any change to the file you know to try reopening the file.
You can also track the inode number in the struct stat results to catch log file rotation.
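A sketch of that rotation check, comparing the inode behind the open descriptor with the inode currently behind the path (using the same placeholder path as above):

struct stat fdStat, pathStat;
fstat( logFileFD, &fdStat );
if ( stat( "/some/file/name.log", &pathStat ) == 0 &&
     pathStat.st_ino != fdStat.st_ino )
{
    // the name now refers to a different file - the log was rotated
    close( logFileFD );
    logFileFD = open( "/some/file/name.log", O_RDONLY );
}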
In a non-POSIX environment, you can replace open(), fstat() and pread() calls with the low-level OS equivalent, although Windows provides most of what you'd need. On Windows, lseek() followed by read() would replace pread().
Consider I created a file using this way:
std::ofstream osf("MyTextFile.txt");
std::string buffer = "spam\neggs\n";
osf.write(buffer.c_str(), buffer.length());
osf.close();
When I tried to read that file back in the following way, I realized that more characters were read than it contained.
std::ifstream is("MyTextFile.txt");
is.seekg (0, is.end);
int length = is.tellg();
is.seekg (0, is.beg);
char * buffer = new char [length];
is.read (buffer,length);
//work with buffer
delete[] buffer;
For example, if the file contains spam\neggs\n, then this procedure reads 12 characters instead of 10. The first 10 chars are spam\neggs\n as expected but there are 2 more, which have the integer value 65533.
Moreover, this problem happens only when \n is present in the file. For example, there is no problem if file contains spam\teggs\t instead.
The question is;
Am I doing anything wrong? Or doesn't this procedure work as it should do?
Bonus Q: Can you suggest an alternative for reading the whole file at once?
Note: I found this way here.
The problem is that you wrote the string
"spam\neggs\n"
initially to an ofstream, without setting the std::ios::binary flag when opening it (or in the constructor). This causes the runtime to translate to the "native text format", i.e., to convert each \n to \r\n on output (as you are on Windows). So, after being written, the contents of your file were actually:
"spam\r\neggs\r\n"
(i. e., 12 chars). That was returned by
int length = is.tellg();
But, when you tried to read 12 chars you got
"spam\neggs\n"
back, because the runtime converted each \r\n back to \n.
As a final piece of advice, please don't use new char[length]... use std::string and reserve() so you won't leak memory. And if your file can be very big, it may not be a good idea to slurp the whole file into memory at once anyway.
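A quick sketch of the fix, opening both streams with std::ios::binary so that tellg() agrees with what read() will deliver (no \n/\r\n translation in either direction):

std::ofstream osf("MyTextFile.txt", std::ios::binary);
std::string buffer = "spam\neggs\n";
osf.write(buffer.c_str(), buffer.length());
osf.close();

std::ifstream is("MyTextFile.txt", std::ios::binary);
// the tellg()/read() procedure from the question now reports exactly 10 chars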
Just an idea, since the number 2 corresponds to the count of \ns: Are you doing this on Windows? It might have something to do with the file actually containing \r\n. What happens if you open the file in binary mode (std::ios::binary)?
Can you suggest an alternative for reading the whole file at once?
Yes:
std::ifstream is("MyTextFile.txt");
std::string str( std::istreambuf_iterator<char>{is}, {} ); // requires <iterator>
str now contains the file. Does this solve your problem?
I've a producer/consumer set-up: Our client is giving us data that our server processes, and our client is giving data to our server by constantly writing to a file. Our server uses inotify to look for any file modifications, and processes the new data.
Problem: The file reader in the server has a buffer of size 4096. I've a unit test that simulates the above situation. The test constantly writes to an open file, which the file reader constantly tries to read and process. But I noticed that after the first record is read, which is much smaller than 4096, an error flag is set in the ifstream object. This means that any new data arriving is not being processed. A simple workaround seems to be to call ifstream::clear after every read, and this does solve the issue. But what is going on? Is this the right solution?
First off, depending on your system it may or may not be possible to read a file another process writes to: on Windows the normal settings when opening a file make the access exclusive. I don't know enough about Windows to tell whether there are other settings. On POSIX systems a file with suitable permissions can be opened for reading and writing by different processes. From the sounds of it you are using Linux, i.e., something following the POSIX specification.
The approach of polling a file for changes isn't entirely ideal, though: as you noticed, you get an "error" every time you reach the end of the current file. Actually, reaching the end of a file isn't really an error, but trying to decode something beyond the end of the file is an error. Also, reading beyond the end of the file will still set std::ios_base::eofbit and, thus, the stream won't be good(). If you insist on using this approach there isn't much choice other than reading up to the end of the file and dealing with the incomplete read somehow.
If you have control over creating the file, however, you can do a simple trick: instead of having the file be a normal file, you can use mkfifo to create a named pipe with the file name the writing program will write to. When opening a file on a POSIX system, it doesn't create a new file if there is already one but uses the existing file, whether that is a regular file or whatever else is addressed by the file name (in addition to files and named pipes you may see directories, character or block special devices, and possibly others).
Named pipes are curious beasts intended to let two processes communicate with each other: what is written to one end by one process is readable at the other end by another process! The named pipe itself doesn't have any content, i.e., if you need both the content of the file and the communication with another process you might need to replicate the content somewhere. Opening a named pipe for reading will block whenever it has reached the current end of the data, i.e., initially the read would block until there is a writer. Similarly, writes to the named pipe will block until there is a reader. Once the two processes are communicating, the respective other end will receive an error when reading or writing the named pipe after the other process has exited.
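A sketch of that setup (the path "/tmp/server.log" is a placeholder for whatever name the writing program uses; error handling is minimal):

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(void)
{
    /* create the named pipe before the writer starts */
    if (mkfifo("/tmp/server.log", 0666) == -1) {
        perror("mkfifo");    /* it may already exist, which can be fine */
    }
    /* opening "/tmp/server.log" for reading will now block until the
       client opens the same name for writing */
    return 0;
}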
If you are OK with opening and closing the file again and again, the right solution to this problem is to store the last read position and start from there once the file is updated.
The exact algorithm is:
1. Set start_pos = 0, end_pos = 0.
2. Update end_pos = infile.tellg() after seeking to the end of the file.
3. Move the get pointer to start_pos (use seekg()) and read the block of (end_pos - start_pos) bytes.
4. Update start_pos = end_pos and then close the file.
5. Sleep for some time and open the file again.
6. If the file stream is still not good, close the file and jump to step 5.
7. If the file stream is good, jump to step 2.
The C++ reference for seekg() is at http://www.cplusplus.com/reference/istream/istream/seekg/ and you can adapt the sample code given there.
The exact code would be:
#include <iostream>
#include <fstream>
#include <unistd.h>    // for sleep()

int main(int argc, char *argv[]) {
    if (argc != 2)
    {
        std::cout << "Please pass filename with full path \n";
        return -1;
    }
    long end_pos = 0, start_pos = 0;
    long length;
    char* buffer;
    char* filePath = argv[1];
    std::ifstream is(filePath, std::ifstream::binary);
    while (1)
    {
        if (is) {
            is.seekg(0, is.end);
            end_pos = is.tellg();          // always update end pointer to end of the file
            is.seekg(start_pos, is.beg);   // move read pointer to the new start position
            // allocate memory:
            length = end_pos - start_pos;
            buffer = new char[length];
            // read data as a block: (end_pos - start_pos) bytes from the read pointer
            is.read(buffer, length);
            is.close();
            // print content:
            std::cout.write(buffer, length);
            delete[] buffer;
            start_pos = end_pos;           // update start pointer
        }
        // wait and restart with new data
        sleep(1);
        is.open(filePath, std::ifstream::binary);
    }
    return 0;
}
When accessing a text file, I want to read from a specific line. Let's suppose that my file has 1000 rows and I want to read row 330. Each row has a different number of characters and could possibly be quite long (let's say around 100,000,000 characters per row). I'm thinking fseek() can't be used effectively here.
I was thinking about a loop to track linebreaks, but I don't know exactly how to implement it, and I don't know if it would be the best solution.
Can you offer any help?
Unless you have some kind of index saying "line M begins at position N" in the file, you have to read characters from the file and count newlines until you find the desired line.
You can easily read lines using std::getline if you want to save the contents of each line, or std::istream::ignore if you want to discard the contents of the lines you read until you find the desired line.
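For instance, a minimal sketch that skips ahead with std::istream::ignore (the file name "big.txt" and the hard-coded line number are placeholders):

#include <fstream>
#include <iostream>
#include <limits>
#include <string>

int main()
{
    std::ifstream in("big.txt");
    // discard the 329 lines before the one we want
    for (int i = 0; i < 329; ++i)
        in.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
    std::string line;
    std::getline(in, line);    // line now holds row 330
    std::cout << line << '\n';
}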
There is no way to know where row 330 starts in an arbitrary text file without scanning the whole file, finding the line breaks, and then counting.
If you only need to do this once, then scan. If you need to do it many times, then you can scan once and build up a data structure listing where all of the lines start. Now you can figure out where to seek to read just that line. If you're still just thinking about how to organize the data, I would suggest using some other type of data structure for random access. I can't recommend which one without knowing the actual problem that you are trying to solve.
Create an index on the file. You can do this "lazily", but as you read each buffer-full you may as well scan it for newlines.
If it is a text file on Windows, the line ending is the 2-byte "\r\n", so the number of characters you read up to the point where a newline occurs will not be the file offset. So what you should do is query the position (tellg()) after each call to getline().
something like:
std::vector< off_t > lineNumbers;
std::string line;
lineNumbers.push_back(0); // first line begins at 0
while( std::getline( ifs, line ) )
{
lineNumbers.push_back(ifs.tellg());
}
The last value will tell you where EOF is.
I think you need to scan the file and count the \n occurrences until you find the desired line. If this is a frequent operation, and you are the only one who writes the file, you can maintain an index file containing that information side by side with the file containing the data, a sort of poor man's index, which can save a lot of time.
Try running fgets in a loop:
/* fgets example */
#include <stdio.h>
int main()
{
    FILE * pFile;
    char mystring [100];
    pFile = fopen ("myfile.txt" , "r");
    if (pFile == NULL) perror ("Error opening file");
    else {
        /* read and print the file one chunk at a time
           (fputs, unlike puts, does not add an extra newline) */
        while (fgets (mystring , 100 , pFile) != NULL)
            fputs (mystring, stdout);
        fclose (pFile);
    }
    return 0;
}