I've got a function which reads a filename from the command line and then attempts to open and parse the file. However, fopen always returns NULL, with error codes 2, 3, or 123 depending on the filename given.
The original, non-working code is:
void CProfiler::ExecuteIrFile( LPCTSTR pszFile)
{
FILE *fp = fopen( pszFile, "r");
if ( !fp) return;
}
Changing the call to fopen( "c:\\temp\\file.txt", "r") does work, however.
So I've been led to believe that it's a problem with escaping in the string I'm passing to fopen.
Replacing \ with \\ in the string doesn't work either, though. For good measure, the code I used to do that is:
CString tempStr(pszFile);
tempStr.Replace("\\", "\\\\");
FILE *fp = fopen( tempStr, "r");
Is there a method of escaping a string properly for fopen, or is there something else I'm missing?
Uncomplicated answers would be happily welcomed, as I haven't used C++ much in the past.
Solved
I had a leading space in the string being passed; the resolution on the screen with the debugger open was too low, so I didn't notice the space until I tried printing the string out to a file as binary.
Thanks all for your help
You can also use / in Windows filenames; it's easier, since you can replace single characters without having to change the string length.
e.g. "c:/temp/file.txt" instead of "c:\\temp\\file.txt"
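Since the accepted cause turned out to be a leading space, a defensive trim of the filename before calling fopen would have caught it. A minimal sketch (the helper name is my own, not from the question):

```cpp
#include <string>

// Hypothetical helper: strip leading/trailing whitespace from a filename
// before handing it to fopen. A stray leading space produces exactly the
// "file not found" (error 2) style failures described above.
std::string trimFilename(const std::string &raw)
{
    const char *ws = " \t\r\n";
    std::string::size_type first = raw.find_first_not_of(ws);
    if (first == std::string::npos)
        return "";                       // string was all whitespace
    std::string::size_type last = raw.find_last_not_of(ws);
    return raw.substr(first, last - first + 1);
}
```

Calling fopen( trimFilename(pszFile).c_str(), "r") would then be immune to this particular mistake.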
Related
I can take a line from the text file with "fgets" and print it out using "SetWindowTextA", like this code:
FILE *p_file = fopen("Test.txt", "r");
if (p_file != NULL) {
text = fgets(temp, sizeof(temp), p_file);
m_Edit_load.SetWindowTextA(text);
fclose(p_file);
}
But I want to print out all the lines. I've used the code below, but only the last line is printed:
FILE *p_file = fopen("Test.txt", "r");
if (p_file != NULL) {
while (NULL != fgets(temp, sizeof(temp), p_file)) {
m_Edit_load.SetWindowTextA(temp);
}
fclose(p_file);
}
How can I print out all the rows?
The problem here is that SetWindowTextA sets the text, it does not append. Hence, your window ends up with only the last line. To fix this, first append all the characters to a dynamic buffer, then call SetWindowTextA once at the end.
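The "append everything, then set once" idea can be sketched like this; the edit-control call itself is MFC (assumed from the question), so here it is shown only as a trailing comment:

```cpp
#include <cstdio>
#include <string>

// Read every line with fgets into one std::string, appending instead of
// overwriting. A single SetWindowTextA call would then receive the whole
// text at once.
std::string readAllLines(const char *path)
{
    std::string text;
    FILE *p_file = fopen(path, "r");
    if (p_file != NULL) {
        char temp[256];
        while (fgets(temp, sizeof(temp), p_file) != NULL)
            text += temp;               // append, do not replace
        fclose(p_file);
    }
    return text;
}
// Then: m_Edit_load.SetWindowTextA(readAllLines("Test.txt").c_str());
```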
The most straightforward way is to open the file in binary mode, load it into a buffer and put it in the control through a single SetWindowText() call.
Depending on the format of the text file, it may require some additional steps:
If the file is ASCII, and of the same codepage as the system it runs on, a SetWindowTextA() call is OK.
If the file is Unicode it can be loaded onto the control by calling SetWindowTextW() - the control must be Unicode as well.
If the file is UTF-8, or ASCII in a codepage other than that of the system, the text must be converted to Unicode using the MultiByteToWideChar() function before being loaded into the control.
Another conversion that may be needed is LF to CR-LF, if the lines appear joined in the control. You need to write some code for this.
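The LF to CR-LF fix-up mentioned above can be sketched as follows; it inserts a CR before every bare LF while leaving existing "\r\n" pairs untouched:

```cpp
#include <string>

// Convert bare "\n" line endings to the "\r\n" form that a multiline
// edit control expects. Already-correct "\r\n" pairs pass through as-is.
std::string lfToCrLf(const std::string &in)
{
    std::string out;
    out.reserve(in.size());
    for (std::string::size_type i = 0; i < in.size(); ++i) {
        if (in[i] == '\n' && (i == 0 || in[i - 1] != '\r'))
            out += '\r';                 // supply the missing CR
        out += in[i];
    }
    return out;
}
```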
As already stated in one of the other answers, the problem is that SetWindowText will overwrite the text. Your code seems to incorrectly assume that this function will append the text instead.
If you want to set the edit control to the entire text of a file, then you will have to read the entire text of the file into a memory buffer, and pass a pointer to that memory buffer to SetWindowText.
The function fgets is used for reading a single line. Although you can solve your problem with fgets, there is no reason to limit yourself to only reading one line at a time. It would therefore be more efficient to read as much data as possible in one function call, for example by using the function fread instead of fgets.
Another issue is that you are opening your text file in text mode, which means that the \r\n line endings will get translated to \n. However, this is not what you want, because when using SetWindowText on a multiline edit control, the line endings must be \r\n, not \n. Therefore, you should change the line
FILE *p_file = fopen("Test.txt", "r");
to
FILE *p_file = fopen("Test.txt", "rb");
in order to open the file in binary mode.
The whole code should look like this:
FILE *fp = fopen( "Test.txt", "rb" );
if ( fp != NULL )
{
char buffer[4096];
size_t bytes_read;
bytes_read = fread( buffer, 1, (sizeof buffer) - 1, fp );
buffer[bytes_read] = '\0';
m_Edit_load.SetWindowTextA( buffer );
fclose( fp );
}
If it is possible that 4096 bytes is not sufficient to contain the entire file, then you could increase the size of the buffer. However, you should not increase it too much, because otherwise, there is a danger of a stack overflow. Instead of allocating the memory buffer on the stack, you could also allocate it on the heap, by using malloc instead. Another alternative would be to use a static buffer, which also does not get allocated on the stack.
I'm working on a project where I'm required to take input from a file with the extension ".input". When run, the user gives the filename without the extension as a command line argument. I then take argv[1] and open the specified file, but I can't get it to work without the user typing the entire filename.
for example:
user enters> run file.input
//"run" is the executable, "file.input" is the filename
user is supposed to enter> run file
how do I get this file extension implied when using this code:
fopen(argv[1],"r");
I tried using a string, setting it to argv[1], and then appending ".input" to it, but fopen won't accept that string.
Without seeing your code, I can't say for certain what went wrong, but I suspect you did something like this:
string filename = argv[1];
filename += ".input";
FILE* f = fopen(filename, "r"); // <--- Error here
The issue here is that the C++ std::string type is not a char *, which is what's expected by fopen. To fix this, you can use the .c_str() member function of the std::string type, which gives back a null-terminated C-style string:
FILE* f = fopen(filename.c_str(), "r"); // No more errors!
As I mentioned in my comment, though, I think you'd be better off just using ifstream:
string filename = argv[1];
filename += ".input";
ifstream input(filename);
There's no longer a need for .c_str(), and you don't need to worry about leaking resources. Everything's managed for you. Plus, it's type-safe!
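Putting the pieces together, the extension-appending step can be isolated in a tiny helper (the helper name is my own, for illustration):

```cpp
#include <string>

// Append the required ".input" extension to whatever base name the user
// typed on the command line.
std::string withInputExtension(const std::string &base)
{
    return base + ".input";
}
// Usage: std::ifstream input(withInputExtension(argv[1]));
```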
I'm using this piece of code to read a file into a string, and it's working perfectly with files made manually in Notepad, Notepad++, or other text editors:
std::string utils::readFile(std::string file)
{
std::ifstream t(file);
std::string str((std::istreambuf_iterator<char>(t)),
std::istreambuf_iterator<char>());
return str;
}
When I create a file via notepad (or any other editor) and save it to something, I get this result in my program:
But when I create a file via CMD (example command below), and run my program, I receive an unexpected result:
cmd /C "hostname">"C:\Users\Admin\Desktop\lel.txt" & exit
Result:
When I open this file generated by CMD (lel.txt), this is the file contents:
If I edit the generated file (lel.txt) with Notepad (adding a space to the end of the file) and run my program again, I get the same weird 3-character result.
What might cause this? How can I read a file made via cmd, correctly?
EDIT
I changed my command (now using PowerShell) and added a function I found, named SkipBOM, and now it works:
powershell -command "hostname | Out-File "C:\Users\Admin\Desktop\lel.txt" -encoding "UTF8""
SkipBOM:
void SkipBOM(std::ifstream &in)
{
char test[3] = { 0 };
in.read(test, 3);
if ((unsigned char)test[0] == 0xEF &&
(unsigned char)test[1] == 0xBB &&
(unsigned char)test[2] == 0xBF)
{
return;
}
in.seekg(0);
}
This is almost certainly a BOM (Byte Order Mark): see here. It means that your file is saved as Unicode with a BOM.
There is a way to use C++ streams to read files with a BOM (you have to use converters); let me know if you need help with that.
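As a complement to the stream-level SkipBOM above, the same idea can be applied after reading, by stripping a UTF-8 BOM from the already-loaded string. A minimal sketch:

```cpp
#include <string>

// Drop a UTF-8 BOM (the bytes EF BB BF) from the front of a loaded
// string, so the rest of the program never sees it.
std::string stripUtf8Bom(const std::string &s)
{
    if (s.size() >= 3 &&
        (unsigned char)s[0] == 0xEF &&
        (unsigned char)s[1] == 0xBB &&
        (unsigned char)s[2] == 0xBF)
        return s.substr(3);
    return s;
}
```

Wrapping the readFile call above with stripUtf8Bom would make it safe for both BOM-prefixed and plain files.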
That is how Unicode looks when treated as an ANSI string. In Notepad, use File - Save As to see the current format of a file.
CMD uses an OEM codepage, which matches ANSI for English characters, so any Unicode output will be converted to OEM by CMD. Perhaps you are grabbing the data yourself.
In VB you would use StrConv to convert it.
I'm writing two programs that communicate by reading files which the other one writes.
My problem is that when the second program reads a file created by the first, it outputs a weird character after the last piece of data. This happens seemingly at random, as adding data to the text file can result in normal output.
I'm utilizing C++ and Qt4. This is the part of program 1:
std::ofstream idxfile_new;
QString idxtext;
std::string fname2="some_textfile.txt"; //Imported from a file browser in the real code.
idxfile_new.open (fname2.c_str(), std::ios::out);
idxtext = ui->indexBrowser->toPlainText(); //Grabs data from a dialog of the GUI.
//See 'some_textfile.txt' below
idxfile_new<<idxtext.toStdString();
idxfile_new.clear();
idxfile_new.close();
some_textfile.txt:
3714.1 3715.1 3716.1 3717.1 3719.1 3739.1 3734.1 3738.1 3562.1 3563.1 3623.1
part of program 2:
std::string indexfile = "some_textfile.txt"; //Imported from file browser in the real code
std::ifstream file;
std::string sub;
file.open(indexfile.c_str(), std::ios::in);
while(file>>sub)
{
cerr<<sub<<"\n"; //Stores values in an array in the real code
}
This outputs:
3714.1
3715.1
3716.1
3717.1
3719.1
3739.1
3734.1
3738.1
3562.1
3563.1
3623.1�
If I add more data it works at times. Sometimes it can output data such as
3592.�
or
359�
at the end, so it is not consistent in reading the whole data either. At first I figured it wasn't detecting EOF properly, and I have read and tried many solutions to similar problems, but I can't get it to work correctly.
Thank you guys for the help!
I managed to solve the problem by myself this morning.
For anyone with the same problem I will post my solution.
The problem was the UTF-8 encoding when creating the file. Here's my solution:
Part of program 1:
std::ofstream idxfile_new;
QString idxtext;
std::string fname2="some_textfile.txt";
idxfile_new.open (fname2.c_str(), std::ios::out);
idxtext = ui->indexBrowser->toPlainText();
QByteArray qstr = idxtext.toUtf8(); //Enables Utf8 encoding
idxfile_new<<qstr.data();
idxfile_new.clear();
idxfile_new.close();
The other program is left unchanged.
A hex viewer displayed the extra character as the bytes EF BF BD, which is the UTF-8 encoding of the replacement character U+FFFD, used to replace invalid bytes when decoding text as UTF-8.
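For anyone diagnosing the same symptom, the EF BF BD byte sequence is easy to detect programmatically; this hypothetical check flags it at the end of a string, which is exactly where it appeared above:

```cpp
#include <string>

// Return true if the string ends with the UTF-8 encoding of U+FFFD
// (EF BF BD), the replacement character that signals a decoding error.
bool endsWithReplacementChar(const std::string &s)
{
    return s.size() >= 3 &&
           (unsigned char)s[s.size() - 3] == 0xEF &&
           (unsigned char)s[s.size() - 2] == 0xBF &&
           (unsigned char)s[s.size() - 1] == 0xBD;
}
```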
What translation occurs when writing to a file that was opened in text mode that does not occur in binary mode? Specifically in MS Visual C.
unsigned char buffer[256];
for (int i = 0; i < 256; i++) buffer[i]=i;
int size = 1;
int count = 256;
Binary mode:
FILE *fp_binary = fopen(filename, "wb");
fwrite(buffer, size, count, fp_binary);
Versus text mode:
FILE *fp_text = fopen(filename, "wt");
fwrite(buffer, size, count, fp_text);
I believe that most platforms will ignore the "t" option or the "text-mode" option when dealing with streams. On windows, however, this is not the case. If you take a look at the description of the fopen() function at: MSDN, you will see that specifying the "t" option will have the following effect:
line feeds ("\n") will be translated to "\r\n" sequences on output
carriage return/line feed sequences will be translated to line feeds on input.
If the file is opened in append mode, the end of the file will be examined for a Ctrl-Z character (character 26) and that character removed, if possible. It will also interpret the presence of that character as being the end of file. This is an unfortunate holdover from the days of CP/M (something about the sins of the parents being visited upon their children up to the 3rd or 4th generation). Contrary to previously stated opinion, the Ctrl-Z character will not be appended.
In text mode, a newline "\n" may be converted to a carriage return + newline "\r\n"
Usually you'll want to open in binary mode. Trying to read any binary data in text mode won't work, it will be corrupted. You can read text ok in binary mode though - it just won't do automatic translations of "\n" to "\r\n".
See fopen
Additionally, when you fopen a file with "rt", the input is terminated on a Ctrl-Z character.
Another difference is when using fseek
If the stream is open in binary mode, the new position is exactly offset bytes measured from the beginning of the file if origin is SEEK_SET, from the current file position if origin is SEEK_CUR, and from the end of the file if origin is SEEK_END. Some binary streams may not support SEEK_END.
If the stream is open in text mode, the only supported values for offset are zero (which works with any origin) and a value returned by an earlier call to std::ftell on a stream associated with the same file (which only works with an origin of SEEK_SET).
Even though this question was already answered and clearly explained, I think it would be interesting to show the main issue (translation between \n and \r\n) with a simple code example. Note that I'm not addressing the issue of the Ctrl-Z character at the end of the file.
#include <stdio.h>
#include <string.h>
int main() {
FILE *f;
char string[] = "A\nB";
int len;
len = strlen(string);
printf("As you'd expect string has %d characters... ", len); /* prints 3*/
f = fopen("test.txt", "w"); /* Text mode */
fwrite(string, 1, len, f); /* On Windows "A\r\nB" is written */
printf ("but %ld bytes were written to file", ftell(f)); /* prints 4 on Windows, 3 on Linux */
fclose(f);
return 0;
}
If you execute the program on Windows, you will see the following message printed:
As you'd expect string has 3 characters... but 4 bytes were written to file
Of course, you can also open the file with a text editor like Notepad++ and see the characters for yourself:
The inverse conversion is performed on Windows when reading the file in text mode.
We had an interesting problem with opening files in text mode where the files had a mixture of line ending characters:
1\n\r
2\n\r
3\n
4\n\r
5\n\r
Our requirement is that we can store our current position in the file (we used fgetpos), close the file and then later to reopen the file and seek to that position (we used fsetpos).
However, where a file has mixtures of line endings then this process failed to seek to the actual same position. In our case (our tool parses C++), we were re-reading parts of the file we'd already seen.
Go with binary - then you can control exactly what is read and written from the file.
In "w" mode, the file is opened for writing in text mode; in some language runtimes (Python 3, for example) this also layers a character encoding such as UTF-8 on top of the byte stream.
In "wb" mode, the file is opened for writing in binary mode: bytes are written verbatim, and any encoding (UTF-16LE or otherwise) is the program's own responsibility.