There is a function to load files:
int loadfile(char *fn)
{
    printf( "\n my path: '%s' \n", fn );
    FILE *f = fopen( fn, "r");
    printf( "got\n" );
    // ...
    return 1;
}
The first file is opened in main() with newfile( argv[1] );, and that works. The second file name is obtained while flex is parsing/reading the first file, which I believe isn't related to the problem.
The console:
path: 'file1.hot'
got
path: 'file2.hot'
Segmentation fault: 11
The printf was able to print the char *fn, but fopen triggers a segmentation fault.
Next, I tried explicitly hard-coding the file name inside loadfile, with fopen( "file2.hot", "r" );, and that works.
I'm compiling with g++. Is there a different approach required when using C++ with char * or fopen?
EDIT
Sorry, there is no newfile( argv[1] );. The correct call is loadfile( argv[1] );.
General remark: When using C++, please prefer std::fstream to the C-style fopen/fread/etc.; also prefer std::string to char*. The latter alone will cure many memory headaches. If you, for some reason, have to stick to the C-style functions, be sure to check the return values (as mentioned in the other answers here already) - fopen for example returns a NULL pointer when it fails.
More specific to your question: Try to run your program under gdb (e.g. gdb <your-program>), to see where the Segmentation fault occurs exactly; that way you will also be able to see more details (e.g. variable contents etc.). Alternatively, if working under linux, use analysis tools such as valgrind to detect any kind of memory access problems.
You should always check return values from functions that return one. In this case, make sure that the handle returned by fopen is not NULL, and if you did open the file, be sure to fclose it.
Learn to use the GDB debugger (assuming you are on Linux), or the valgrind utility.
Perhaps putting your file path inside a std::string is worthwhile, and, as nyarlathotep mentioned, use a std::fstream.
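For illustration only, here is a minimal sketch of the loader written with std::string and std::ifstream, keeping the loadfile name from the question (error handling is deliberately simple):

#include <fstream>
#include <iostream>
#include <string>

// Sketch: take the path as std::string, open it with std::ifstream,
// and report a failed open instead of crashing later.
int loadfile(const std::string& fn)
{
    std::cout << "my path: '" << fn << "'\n";
    std::ifstream f(fn);          // requires C++11 for the std::string overload
    if (!f) {                     // the stream converts to false if the open failed
        std::cerr << "could not open '" << fn << "'\n";
        return 0;
    }
    std::cout << "got\n";
    // ... parse the stream here ...
    return 1;
}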
It could happen that your fn is not null-terminated...
Compile with g++ -Wall -g
As Anders K answered, always check the result of fopen against NULL. Pierre Vittet's Talpo extension to GCC (coded in MELT) is able to check automatically that you do check that.
Related
I am trying to write a lexer using flex, and want to open and read from a file ending with a particular extension, e.g. filename.k. I am only able to do it if I specify both the file name and the extension:
FILE *myfile = fopen("a.k", "r");
if (!myfile) {
    cout << "I can't open a.k!" << endl;
}
Can someone show me the way to open *.k files in C++.
I am running flex on Ubuntu and trying to run a flex program. The above code executes fine, but I want a way to open any file with the .k extension, irrespective of its name, for example ./myprogram a.k or ./myprogram b.k. At the moment I always have to specify the file name in the code itself.
Comment on Basile's answer:
[...] Such as ./myprogram a.k, I wanted a way where I can write any filename instead of a but ending with a .k extension.
While the cited answer is technically correct, I think your true problem is how to get some arbitrary, but specific, file path from the command line:
Example: ./myprogram a.k or ./myprogram b.k
The thing is quite easy: you get the command line parameters passed directly to your main function, provided you use the variant accepting them:
int main(int argc, char* argv[]);
First parameter (argv[0]) is always the name of your programme (or an empty string, if not available), so argc will always be at least one. Afterwards the parameters provided follow, so invoking "./myprogram b.k" will result in argc being two and argv pointing to a char* array equivalent to the following:
char* argv[] =
{
    "./myprogram",
    "b.k",
    nullptr // oh, yes, the array is always null terminated...
};
And then the matter gets easy. Check whether the parameter was given at all: if(argc == 2) or, if you are willing to accept but ignore any additional parameters, if(argc >= 2), or simply if(argv[1]) (it will be nullptr if no parameter was given, and the first parameter otherwise). Then use it for fopen or, if you prefer a more C++-like way, to open a std::ifstream. You might want to add further checks, e.g. that the file name really ends with ".k", but that's up to you now...
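A small sketch of how that could look for the ./myprogram b.k case (the .k suffix check and the yyin assignment are only illustrations of where the pieces would go):

#include <cstdio>
#include <cstring>

int main(int argc, char* argv[])
{
    if (argc < 2 || !argv[1]) {                    // no file name supplied
        std::fprintf(stderr, "usage: %s file.k\n", argv[0]);
        return 1;
    }
    const char* name = argv[1];
    std::size_t len = std::strlen(name);
    if (len < 2 || std::strcmp(name + len - 2, ".k") != 0) {
        std::fprintf(stderr, "expected a .k file, got '%s'\n", name);
        return 1;
    }
    FILE* myfile = std::fopen(name, "r");
    if (!myfile) {                                 // fopen returns NULL on failure
        std::perror(name);
        return 1;
    }
    // ... hand myfile to the scanner here, e.g. yyin = myfile; for flex ...
    std::fclose(myfile);
    return 0;
}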
Your fopen-ing code is fine, but it is running under conditions (e.g. in some unexpected working directory, or without sufficient permissions) that make the fopen fail.
I recommend using errno (perhaps implicitly through perror) in that failure case to get an idea of the reason for the failure:
FILE *myfile = fopen("a.k", "r");
if (!myfile) {
    perror("fopen of a.k");
    exit(EXIT_FAILURE);
}
See e.g. fopen(3), perror(3), errno(3) (or their documentation for your particular implementation and system).
Notice that file extensions don't really exist in standard C++11 (but C++17 has filesystem). On Linux and POSIX systems, file extensions are just a convention.
Can someone show me the way to open *.k files in C++.
If you need to open all files with a .k extension, you may rely on globbing: on POSIX, run something like yourprog *.k in your shell, which will expand *.k into the sequence of file names ending in .k before running your program, whose main then receives that array of arguments (see glob(7)). Otherwise you have to loop explicitly using operating-system primitives or functions (perhaps with glob(3), nftw(3), opendir(3), readdir(3), ... on Linux; for Windows, read about FindFirstFile etc.).
Standard C++11 doesn't provide a way to iterate over all files matching a given pattern. Some framework libraries (Boost, Poco, Qt) do provide such a way, or you need to use operating-system-specific functions (e.g. to read the current directory; directories are not known to C++11 and are an abstraction provided by your operating system). C++17 does have filesystem, but you need a very recent compiler and C++ standard library to get it.
BTW, on Unix or POSIX systems, you could have one single file named *.k. Of course that is very poor taste and should be avoided (but you might run touch '*.k' in your shell to make such a file).
Regarding your edit, for Linux, I recommend running
./myprogram *.k
(then your shell will expand *.k into one or several arguments to myprogram)
and code the main of your program myprogram appropriately to iterate over its arguments. See this.
If you want to run just myprogram without any additional arguments, you need to code the globbing or expansion inside it; see glob(3) and wordexp(3), or scan directories (with opendir(3), readdir(3), closedir(3), stat(2) or nftw(3)).
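If the expansion has to happen inside the program itself, a minimal sketch using POSIX glob(3) might look like this (error handling kept to a bare minimum):

#include <glob.h>
#include <cstdio>

int main()
{
    glob_t g;
    // Expand the pattern *.k relative to the current working directory.
    if (glob("*.k", 0, NULL, &g) == 0) {
        for (size_t i = 0; i < g.gl_pathc; ++i)
            std::printf("found: %s\n", g.gl_pathv[i]);   // each matching path
        globfree(&g);                                     // release the match list
    } else {
        std::fprintf(stderr, "no *.k files found\n");
    }
    return 0;
}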
I've read a few threads saying I should close a certain file when this error pops up in cppcheck. But my issue is this:
Two weeks back I ran a shell script which executed another file from inside it, and it worked fine.
But for the past two days I have been getting a segmentation fault when running the main code under Cygwin. I had posted about it earlier, and on analysing the file (in cpp) with cppcheck, I got a line 31 resource leak: fin.
The particular block of code is pasted below:
void load_fasta_list(char * file_name, vector<string> &file_list){
    FILE * fin;
    fin = fopen(file_name, "rt");
    char temp_file[512];
    char * temp_file2;
    while (!feof(fin)){
        fgets(temp_file, 512, fin);
        if (!feof(fin)){
            temp_file2 = strtok(temp_file, "\n");
            file_list.push_back(temp_file2);
        }
    }
    cout<<file_list.size()<<" FASTA files to be analyzed."<<endl;
}
Line 31 is the last bracket there.
This code comes from Washington U; I am using it as a beginner and am getting this error without having changed anything in it.
Any idea on how to progress?
P.S. When I added the fclose statement, cppcheck showed no error, but when I ran the shell script again, I still got the segmentation fault.
The problem is that your lecturer taught you C instead of C++. He used a primitive C construct to open a file which must be manually closed instead of the automatic cleanup offered by C++. He forgot to do so, aptly demonstrating why using such constructs is inherently unsafe. It is also exception-unsafe, and there are other unpleasant potential bugs lurking in here such as off-by-one errors, and the fun which is non-reentrant strtok, due to the use of C string handling.
You should rewrite it (or make your lecturer fix it) to use the equivalent C++ constructs, which automatically clean up all the memory and file handles needed.
The code contains other offences too, like the output parameter, using namespace std;, and such. Whoever wrote it is simply unfit to teach C++. You need to kick them into gear.
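For reference, a sketch of what the equivalent C++-stream version could look like; the names are kept from the original, but this is only an illustration, not the course's official fix. The file is closed automatically when fin goes out of scope:

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

void load_fasta_list(const std::string& file_name, std::vector<std::string>& file_list)
{
    std::ifstream fin(file_name);
    if (!fin) {                               // the open may legitimately fail
        std::cerr << "cannot open " << file_name << std::endl;
        return;
    }
    std::string line;
    while (std::getline(fin, line))           // stops cleanly at end of file
        if (!line.empty())
            file_list.push_back(line);
    std::cout << file_list.size() << " FASTA files to be analyzed." << std::endl;
}                                             // fin is closed here by its destructor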
To prevent a leak, call
fclose(fin);
This call will close the file.
I see two problems:
Check to see that fin is valid (fopen returns a null if the file doesn't exist!)
Use fclose(fin) to close your file
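Putting both points together, a minimal C-style fix might look roughly like this (assuming the includes and using namespace std from the original file; only the open check and the fclose differ in substance):

void load_fasta_list(char * file_name, vector<string> &file_list){
    FILE * fin = fopen(file_name, "rt");
    if (!fin){                              // fopen returns NULL on failure
        perror(file_name);
        return;
    }
    char temp_file[512];
    while (fgets(temp_file, 512, fin)){     // fgets returns NULL at end of file
        char * temp_file2 = strtok(temp_file, "\n");
        if (temp_file2)
            file_list.push_back(temp_file2);
    }
    fclose(fin);                            // closes the handle, fixing the reported leak
    cout<<file_list.size()<<" FASTA files to be analyzed."<<endl;
}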
I'm getting this in my C++ program:
Program received signal SIGSEGV, Segmentation Fault.
0xb7d62153 in __strtol_l_internal () from /lib/libc.so.6
I got that by using GDB. CC compiled it fine, as did G++.
sockf = openSocket(domainname, portc);
if(sockf > 0){
    log("ZONTRECK","COMPLETED SOCKET!");
    int newsockfd;
    newsockfd = openListen(sockf,portc);
    log("ZONTRECK","Starting console!");
It's an internal function within libc, related to strtol() -- if I had to hazard a guess, I'd say you're trying to read in a number, and something is blowing up.
Use the backtrace command in gdb to see how the program got to that point from your code - that will help find what parameter is being passed that's causing the problem (probably a NULL or otherwise invalid pointer).
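A typical session might look something like this (the source file name is just a placeholder; bt, frame and print are standard gdb commands):

g++ -g -O0 myprogram.cpp -o myprogram
gdb ./myprogram
(gdb) run
(gdb) bt            # print the call stack once the SIGSEGV is hit
(gdb) frame 2       # switch to one of your own frames
(gdb) print portc   # inspect the suspicious variables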
Maybe you are trying to read memory that was corrupted by some code that ran before this. If that is the case, then the best way is to debug it with Valgrind.
I had to edit a file I didn't post on this site; it's my file that contains openSocket and openListen.
The atoi function requires a const char *, not a char *.
I was passing a char * to it instead of a const char *.
I fixed the issue by changing the char * in int main() to const char *.
I'm working with a C++ library which makes extensive use of constructs like:
FILE *out_file1, *out_file2 ... *out_fileN;
//for some output files, but not all:
out_file1 = fopen( filename, "w" );
//later
if( out_file1 ) fprintf( ... )
if( out_file2 ) fprintf( ... )
This seems to work OK under g++ on OS X. When I run it on Linux, however, I get segfaults. Checking through the code, out_file is often initialised to non-zero values.
I've tried adding
out_file = NULL
but this doesn't seem to help - in fact, according to the debugger, it doesn't change the value of out_file.
Can anyone help as to:
Is this a recognised and sensible way to do file I/O (i.e. using the file pointers in conditionals)?
Why is the value of the pointer not being set to null?
How can I set it to null?
Just to be clear - I'm trying to change the code as little as possible, as I'm coding a wrapper to someone else's library. So, even if the general structure is a strange way to do things, I'd rather find a workaround which doesn't change it if possible.
EDIT:
Since this seems to be a reasonable, if outdated, way to do conditional file I/O, I can narrow the scope of my question to the last two of the three, i.e.
class IO
{
private:
    FILE* opFile;

    IO()
    {
        //At this point, opFile == 0x40
        opFile = NULL; //At this point opFile is still 0x40
    }
};
So obviously, if it comes out of the constructor with a non-null value, anything like:
if( opFile ) fprintf( ... )
will fail. But how is it managing to come out of the constructor with a non-null value?
And in case it helps, this works "as expected" under gcc on OSX, but not g++-4.3 or g++4.4 on Ubuntu.
Your problem is elsewhere in the code, most likely in the *printf calls you mention?
Show us more code, or use a debugger to find where it crashes.
g++ -O0 -Wall -g mysource.cpp -o test
gdb ./test
(gdb) run argument1 argument2
Also, look at valgrind for additional memory checking tools
valgrind ./test
$0.02
Update
Added -O0 to avoid confusing analysis with results of proper compiler optimization :)
In C++, you should use iostreams; that will help you avoid all these issues...
std::ifstream in ("some_file");
if (in)
{
    // do stuff with stream...
}
Is this the actual code from your program?
FILE* out_file1, out_file2 ... out_fileN
Then only out_file1 is a FILE* and all the rest are just FILE. That would explain their "funny values".
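A quick illustration of that rule (the variable names are just examples):

FILE *a, b;     // a is a FILE*, but b is a plain FILE object
FILE *c, *d;    // both c and d are FILE* -- the * has to be repeated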
In C++ you should use std::fstream (or std::istream/std::ostream) for file I/O, unless you have a very good reason not to. Then you would most likely not have this problem, as you could just write this:
std::ifstream file("myfile.txt");
while(file) { // this checks for any error
    // do stuff with file
}
Possibly the compiler is optimizing your code. Assigning NULL to out_file a moment before it gets assigned the return value of fopen is pointless. The compiler might know that and not bother with the NULL assignment.
You could define and initialize the value on one line:
FILE* out_file = fopen( filename, "w" );
But, this won't make your problem go away. As somebody commented, you might be looking at the wrong bit of code as there doesn't appear to be much wrong with this.
You could try creating a minimal app that does just the operation you want and see if that works ok before reintroducing the rest of the code.
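Such a minimal test might look like this; the file name is made up, and the point is only to watch the pointer's value when the fopen succeeds or fails:

#include <cstdio>

int main()
{
    FILE* out_file1 = NULL;                          // explicit initialisation
    out_file1 = std::fopen("test_output.txt", "w");  // may legitimately fail
    if (out_file1) std::fprintf(out_file1, "hello\n");
    std::printf("out_file1 = %p\n", (void*)out_file1);
    if (out_file1) std::fclose(out_file1);
    return 0;
}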
I see C's getcwd via:
man 3 cwd
I suspect C++ has a similar one that could return me a std::string.
If so, what is it called, and where can I find its documentation?
Thanks!
Ok, I'm answering even though you already have accepted an answer.
An even better way than to wrap the getcwd call would be to use boost::filesystem, where you get a path object from the current_path() function. The Boost filesystem library allows you to do lots of other useful stuff that you would otherwise need to do a lot of string parsing to do, like checking if files/directories exist, get parent path, make paths complete etcetera. Check it out, it is portable as well - which a lot of the string parsing code one would otherwise use likely won't be.
Update (2016): Filesystem has been published as a technical specification in 2015, based on Boost Filesystem v3. This means that it may be available with your compiler already (for instance Visual Studio 2015). To me it also seems likely that it will become part of a future C++ standard (I would assume C++17, but I am not aware of the current status).
Update (2017): The filesystem library has been merged into ISO C++ with C++17; the call is now:
std::filesystem::current_path();
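A complete C++17 sketch, assuming a conforming standard library, could be:

#include <filesystem>
#include <string>

std::string get_working_path()
{
    // throws std::filesystem::filesystem_error if the current path cannot be determined
    return std::filesystem::current_path().string();
}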
std::string's constructor can safely take a char* as a parameter. Surprisingly, there's a Windows version too.
Edit: actually it's a little more complicated:
#include <string>
#include <unistd.h>     // getcwd
#include <sys/param.h>  // MAXPATHLEN

std::string get_working_path()
{
    char temp[MAXPATHLEN];
    return ( getcwd(temp, sizeof(temp)) ? std::string( temp ) : std::string("") );
}
Memory is no problem -- temp is a stack-based buffer, and the std::string constructor does a copy. Probably you could do it in one go, but I don't think the standard would guarantee that.
About memory allocation, via POSIX:
The getcwd() function shall place an absolute pathname of the current working directory in the array pointed to by buf, and return buf. The pathname copied to the array shall contain no components that are symbolic links. The size argument is the size in bytes of the character array pointed to by the buf argument. If buf is a null pointer, the behavior of getcwd() is unspecified.
Let's try and rewrite this simple C call as C++:
#include <string>
#include <sstream>
#include <stdexcept>
#include <cerrno>
#include <limits.h>   // PATH_MAX
#include <unistd.h>   // getcwd

std::string get_working_path()
{
    char temp [ PATH_MAX ];

    if ( getcwd(temp, PATH_MAX) != 0)
        return std::string ( temp );

    int error = errno;
    switch ( error ) {
        // EINVAL can't happen - size argument > 0
        // PATH_MAX includes the terminating nul,
        // so ERANGE should not be returned
        case EACCES:
            throw std::runtime_error("Access denied");
        case ENOMEM:
            // I'm not sure whether this can happen or not
            throw std::runtime_error("Insufficient storage");
        default: {
            std::ostringstream str;
            str << "Unrecognised error " << error;
            throw std::runtime_error(str.str());
        }
    }
}
The thing is, when wrapping a library function in another function you have to assume that all the functionality should be exposed, because a library does not know what will be calling it. So you have to handle the error cases rather than just swallowing them or hoping they won't happen.
It's usually better to let the client code just call the library function, and deal with the error at that point - the client code probably doesn't care why the error occurred, and so only has to handle the pass/fail case, rather than all the error codes.
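For illustration, the client side of the wrapper above might then look like this (just a sketch):

#include <iostream>
#include <stdexcept>

int main()
{
    try {
        std::cout << "cwd: " << get_working_path() << '\n';
    } catch (const std::runtime_error& e) {
        std::cerr << "getcwd failed: " << e.what() << '\n';
        return 1;
    }
    return 0;
}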
You'll need to just write a little wrapper.
std::string getcwd_string( void ) {
    char buff[PATH_MAX];
    if ( !getcwd( buff, PATH_MAX ) )
        buff[0] = '\0';   // fall back to an empty string if getcwd fails
    std::string cwd( buff );
    return cwd;
}
I used getcwd() in C in the following way:
char * cwd;
cwd = (char*) malloc( FILENAME_MAX * sizeof(char) );
getcwd(cwd,FILENAME_MAX);
The header file needed is stdio.h.
When I use a C compiler, it works perfectly.
If I compile exactly the same code using a C++ compiler, it reports the following error message:
identifier "getcwd" is undefined
Then I included unistd.h and compiled with the C++ compiler.
This time, everything works.
When I switched back to the C compiler, it still works!
As long as you include both stdio.h and unistd.h, the above code works for C AND C++ compilers.
All C functions are also C++ functions. If you need a std::string, just create one from the char* that getcwd gets for you.
I also used boost::filesystem as stated in another answer above. I just wanted to add that since the current_path() function does not return a std::string, you need to convert it.
Here is what I did:
std::string cwd = boost::filesystem::current_path().generic_string();
You could create a new function, which I would prefer over linking in a library like Boost (unless you already are).
std::string getcwd()
{
    char buff[255];   // stack buffer, automatically cleaned when it exits scope
    return std::string( ::getcwd(buff, sizeof(buff)) ? buff : "" );
}