Segfault because of missing header files - C++

I had forgotten to put a 3rd-party "header-only" library's header (.h) files into the correct path when building a shared object. It built fine, which is surprising in retrospect.
When run, a segfault occurred at exactly the line where that 3rd-party lib was used in my shared object.
The part I do not understand: when I copied those header files to the path specified with #include, I could no longer cause a segfault. I did not even rebuild the object. Stranger still, when I mv the dir the header files are in, it still worked, with no segfault. However, when I completely rm the dir, it crashed. Does it look for header files in the current dir and subdirs? I've also got that header-only library in the standard(?) /usr/local/include.
I've not worked with shared objects before; I usually create static libraries and include them in the build. The flags I used to create the shared object in question are -shared -fPIC.
I'd like to understand this behavior, since it matters for deployment. Do I need to ship those header files when deploying on the production machine? Essentially I don't want to have them as a dependency, as it is a "header-only" lib.
Edit:
Code:
#include <rapidjson/document.h>
#include <rapidjson/writer.h>
#include <rapidjson/stringbuffer.h>

void MyClass::myFunction()
{
    rapidjson::StringBuffer string;
    rapidjson::Writer<rapidjson::StringBuffer> jsonWriter(string);
}
Here is a link to the debug session:
http://pastebin.com/a0FaQwf1

You never need to supply header files to the user in order to run the program; headers are consumed at compile time only.
Your compiler probably picked the headers up from a default include path (you mention /usr/local/include), which is why the build did not fail when they were missing from the path you expected.
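As a rough demonstration (paths and file names below are made up), a header-only library leaves nothing to resolve at run time once its code has been compiled into the .so:
g++ -shared -fPIC -I/tmp/rapidjson/include mylib.cpp -o libmylib.so
rm -rf /tmp/rapidjson                  # headers gone for good
g++ main.cpp -L. -lmylib -o app        # still links fine
LD_LIBRARY_PATH=. ./app                # still runs: the header code lives inside the .so now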

An explanation for the strange move/remove behavior could be that the shared object was still loaded into memory during the move and kept an open file handle to something in the include dir.
You know, under ext2/3/4 open files are tied to inodes, not to directory paths, so moving an open file does no harm. On the other hand, IIRC removing doesn't immediately harm either: freeing the inode is delayed until all processes have closed the file. Maybe that is exactly what happened between your mv and rm.
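A minimal sketch of that inode behavior on Linux (the file name is hypothetical):
// An open descriptor keeps the inode alive even after the path is unlinked.
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int fd = open("data.txt", O_RDONLY);    // assumes data.txt exists
    unlink("data.txt");                     // path is gone; inode still referenced by fd
    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf);  // read still succeeds
    std::printf("read %zd bytes after unlink\n", n);
    close(fd);                              // last reference dropped; inode freed here
}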

Related

Need help understanding C++ libraries, compiling, linking, header files for specific project

This is my first stab at C++. I know the question is broad, but I have a specific example I'm working with, so hopefully that narrows everything down a bit.
I'm basically attempting to compile a C++ game manually in Linux (Ubuntu 14.04). The source code I am attempting to compile is located in this directory: https://github.com/akadmc/SmashBattle/tree/master/battle.
I'm cd'ing into the battle directory and, perhaps naively, running
gcc *.cpp
I started seeing multiple issues such as:
HealthPowerUp.cpp:1:21: fatal error: SDL/SDL.h: No such file or directory
 #include "SDL/SDL.h"
compilation terminated.
and
LaserBeamPowerUp.cpp:1:21: fatal error: SDL/SDL.h: No such file or directory
 #include <SDL/SDL.h>
compilation terminated.
After researching header file includes, I concluded that includes in quotes are looked up relative to the including file (basically relative paths), and that when they are wrapped in <>'s the file is looked up through a list of directories specified by an environment variable or a command-line option.
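In other words, if I've understood the lookup rules correctly (the annotations are my own reading):
#include "SDL/SDL.h"   // searched relative to the including file first, then the -I/system paths
#include <SDL/SDL.h>   // searched only on the -I and system paths, e.g. /usr/include/SDL/SDL.h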
So my first question is, is there any reason the developer used
#include "SDL/SDL.h
AND
#include <SDL/SDL.h>
in different files? There was no SDL directory in the source code...
After realizing that SDL was missing from the source code / environment in one way or another, I did some tinkering. I was pretty confused (and still am) because I downloaded the SDL source files, didn't see any header files, and ended up building a version of SDL using cmake. I realized afterwards that I had just made a local executable that didn't yield any header files. Then I realized that I just needed the development library, downloaded it, put it higher in the directory tree, and included it at compile time with
c++ *.cpp -I $HOME/Desktop/smashProject/source/
Afterwards, the previous header file errors went away - but I started getting errors like the following:
Text.cpp:(.text+0x17): undefined reference to `SDL_RWFromFile'
Text.cpp:(.text+0x24): undefined reference to `SDL_LoadBMP_RW'
Text.cpp:(.text+0x34): undefined reference to `SDL_DisplayFormat'
And so on. Am I generally headed down the right path, or do I have some misunderstanding about compiling, including development libraries, etc.? Also, I've read that the order of compilation matters, and I'm not using any particular order; the developer didn't put a makefile in the source code or anything. I'm generally just confused as to how I should be doing this. Any help would be greatly appreciated.
Yes, you are on the right track. However, you now need to link against the SDL libraries: the -I flag only adds an extra header search path, but you still have to actually link your program to the SDL binaries.
See this stack overflow question for more information.
How to compile an example SDL program written in C?
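As a sketch, assuming the SDL 1.2 development package is installed system-wide (it ships the sdl-config helper, which prints the right compiler and linker flags):
g++ *.cpp -o smashbattle $(sdl-config --cflags --libs)
The game may additionally need helper libraries such as -lSDL_mixer or -lSDL_image if it uses them; the remaining undefined-reference errors will tell you which.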

Successfully Included File, now dealing with semantics issues

Earlier I asked for help in including an external library called Eigen in Xcode 4. I finally managed to get it to include the header file I wanted to use, Array, by going to build phases, link binary with libraries, and then adding the sub-folder within the Eigen archive where Array.h was located, Core. I also added the filepath to Core's parent directory, src, in header search paths.
When I finally managed to add the line of code #include <Core/Array.h> without it getting highlighted as an error, I ran the application (which worked previously) and Xcode said that the build failed, with the error messages citing semantic issues. I checked the error messages and they include "Unknown identifier 'Array'" in a file named Array.h.
All of the header files are in src and according to the Eigen website, they are all that's needed to use Eigen with c++. I've attempted to reformat the binary links so they go to src instead of Core, and adjusting the buildpath to lead to the parent directory of src, ensuring that all header files can now be accessed, but I'm still getting semantics issues. Does anyone have a solution to this?
You generally want to include the Core file, not the individual .h files, i.e.
#include <Eigen/Core>
There are exceptions, but again, you won't be including the .h files, those are used internally. Additionally, it appears that your include path points to the ./Eigen/src/ directory. You want to move it up two directories so that when you write #include <Eigen/Core> it finds the Core file correctly. The files that you'll most likely include are the extension-less ones in the Eigen directory.
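For instance, a minimal sketch (assuming the header search path points at the directory that contains the Eigen folder, i.e. the root of the unpacked archive):
#include <Eigen/Core>

int main()
{
    // Array comes in through the Core umbrella header; Array.h itself is internal
    Eigen::Array3f a(1.0f, 2.0f, 3.0f);
    Eigen::Array3f b = a * 2.0f;  // element-wise multiply
    return b.size() == 3 ? 0 : 1;
}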

C++ cross-folder include failing

I've got a project which has two source folders (main and lib). It produces a shared library and an executable. It is currently built like so:
copy all files from both folders into a new temp folder
run lib_makefile
run main_makefile
copy binaries out
delete temp folder
This struck me as being a weird way to do things, so I tried building each in-place by adding -I../main to lib_makefile (and vice-versa). Unfortunately, this doesn't seem to work.
Illustrative example:
foo.cpp (in lib) includes bar.h (in main), which includes baz.h (back in lib).
When I try to compile the shared lib, it correctly locates bar.h in main/, but then bails out with "no such file or directory" claiming it cannot find baz.h, even though baz.h is in the same directory as lib_makefile!
All includes are in the format #include "xxx.h" (i.e no relative paths in the include statements).
Is there a way to get this to work? I feel like I must be missing something obvious..
(nb: I can't modify the #includes because other people still build this the copy-everything-across way)
You should add something like -I../lib (or whatever your library path is) to the makefile for the library as well.
The reason is that the preprocessor looks for quoted include files relative to the directory of the file currently being processed, not the directory of the original source file.
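A sketch of what happens and the fix (directory names are from the question; the makefile variable is hypothetical):
# foo.cpp (in lib/)  does  #include "bar.h"  -> found via -I../main
# bar.h  (in main/)  does  #include "baz.h"  -> searched in main/ first (bar.h's dir),
#                                               then the -I list, which doesn't contain lib/
CXXFLAGS += -I../main -I../lib   # in lib_makefile: make both folders searchable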

C++ include libraries

OK, so it's been a while, and I'm having problems with #includes.
So I'm doing
#include "someheader.h"
but it's giving me
fatal error: someheader.h: No such file or directory
It's a system wide library I guess you could say.
I'm running arch linux and I installed the library from the repo, and I think the .h files are in /usr/include.
I could just copy all the header files into the folder my code is in but that would be a hack.
What is the "right" way to do this?
Edit: I wasn't correct in saying the .h files were in /usr/include; what I meant was that the library's folder was in there.
So, Emile Cormier's answer worked to a certain extent.
The problem now is that there are some includes in the header file, and judging from the methods I'm trying to access, those includes are not taking effect.
It's giving me the error
undefined reference to Namespace::Class::method()
Edit:
Ok so the final answer is:
#include <library_name/someheader.h>
And compile with
g++ code.cpp -llibrary_name
Sometimes, header files for a library are installed in /usr/include/library_name, so you have to include like this:
#include <library_name/someheader.h>
Use your file manager (or console commands) to locate the header file on your system and see if you should prefix the header's filename with a directory name.
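For example (the header name is taken from the question):
locate someheader.h                    # uses the updatedb index, fast
find /usr/include -name someheader.h   # searches the usual system include tree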
The undefined reference error you're getting is a linker error. You're getting this error because you're not linking in libsynaptics along with your program, thus the linker cannot find the "implementation" of the libsynaptics functions you're using.
If you're compiling from the command-line with GCC, you must add the -lsynaptics option to link in the libsynaptics library. If you're using an IDE, you must find the place where you can specify libraries to link to and add synaptics. If you're using a makefile, you have to modify your list of linker flags so that it adds -lsynaptics.
Also the -L <path_to_library> flag for the search path needs to be added, so the linker can find the library, unless it's installed in one of the standard linker search paths.
See this tutorial on linking to libraries with GCC.
You'd use #include <someheader.h> for header files in system locations.
#include "someheader.h" would try to include the file someheader.h in the directory of your .c file.
In addition to including the header file, you also need to link in the library, which is done with the -l argument:
g++ -Wall youprogram.cpp -lname_of_library
Not doing so is the reason for the "undefined reference .. " linker errors.
The quick fix is to use:
#include <someheader.h>
assuming that someheader.h is in a standard include location. (To find it, use the command locate someheader.h in a shell. If it is in /usr/include, it is in a standard location. If it is in a subdirectory of /usr/include, you only need to add the part of the path below /usr/include to the #include directive, e.g. #include <fancy_lib/someheader.h>.)
However, this is only half of the story. You will also need to set up your build system so that it locates the given library and adds its include path (the path under which its header files are stored) to the compiler command (for gcc that is -I/path/to/header). That way you can also build with different versions by configuring them in your build system. If the library is not header-only, you will also have to add it to the linker dependencies. How this is achieved in your build system is best found out by consulting its documentation.
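Putting the pieces together, a hedged sketch (the library name and install prefix are made up):
# headers live under /opt/mylib/include, the binary under /opt/mylib/lib
g++ -I/opt/mylib/include code.cpp -L/opt/mylib/lib -lmylib -o app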

Header files dependencies between C++ modules

Where I work we have a big C++ code base, and I think there's a problem with how header files are used.
There are many Visual Studio projects, but the problem is conceptual and not related to VS. Each project is a module performing particular functionality, and each project/module is compiled to a library or binary. Each project has a directory containing all its source files, *.cpp and *.h. Some header files are the API of the module (I mean the subset of header files declaring the API of the created library); some are internal to it.
Now to the problem: when module A needs to work with module B, A adds B's source directory to its include search path. Therefore all of B's internal headers are visible to A at compilation time.
As a side effect, developers are not forced to think about what the exact API of each module is, which I consider a bad habit anyway.
I've been considering how it should have been done in the first place. I thought about creating in each project a dedicated directory containing interface header files only; a client module wishing to use the module would be permitted to include the interface directory only.
Is this approach OK? How is the problem solved where you work?
UPD: At my previous place, development was done on Linux with g++/gmake, and we indeed used to install API header files to a common directory, as some of the answers propose. Now we have a Windows (Visual Studio)/Linux (g++) project using cmake to generate project files. How do I force the pre-build install of API header files in Visual Studio?
Thanks
Dmitry
It sounds like you're on the right track. Many third-party libraries do this same sort of thing. For example:
3rdParty/myLib/src/ - contains the headers and source files needed to compile the library
3rdParty/myLib/include/myLib/ - contains the headers that external applications need to include
Some people/projects just put the headers to be included by external apps in /3rdParty/myLib/include, but adding the additional myLib directory can help to avoid name collisions.
Assuming you're using the structure 3rdParty/myLib/include/myLib/,
in the Makefile of the external app:
---------------
INCLUDE =-I$(3RD_PARTY_PATH)/myLib/include
INCLUDE+=-I$(3RD_PARTY_PATH)/myLib2/include
...
...
In Source/Headers of the external app
#include "myLib/base.h"
#include "myLib/object.h"
#include "myLib2/base.h"
Wouldn't it be more intuitive to put the interface headers in the root of the project, and make a subfolder (call it 'internal' or 'helper' or something like that) for the non-API headers?
Where I work we have a delivery folder structure created at build time. Header files that define libraries are copied out to an include folder. We use custom build scripts that let the developer denote which header files should be exported.
Our build is then rooted at a subst'ed drive; this allows us to use absolute paths for include directories.
We also have a network based reference build that allows us to use a mapped drive for include and lib references.
UPDATE: Our reference build is a network share on our build server. We use a reference build script that sets up the build environment and maps (using net use) the named share on the build server (i.e. \\BLD_SRV\REFERENCE_BUILD_SHARE). Then during a weekly build (or manually) we point the share (using net share) at the new build.
Our projects then use a list of absolute paths for include and lib references.
For example:
subst'ed local build drive: j:\
mapped drive to reference build: p:\
path to headers: root:\build\headers
path to libs: root:\build\release\lib
include path in project settings: j:\build\headers;p:\build\headers
lib path in project settings: j:\build\release\lib;p:\build\release\lib
This will pick up your local changes first; if you have not made any local changes (or at least haven't built them), it will use the headers and libs from your last build on the build server.
I've seen problems like this addressed by having a set of headers in module B that get copied over to the release directory along with the lib as part of the build process. Module A then only sees those headers and never has access to the internals of B. Usually I've only seen this in a large project that was released publicly.
For internal projects it just doesn't happen. What usually happens is that when they are small it doesn't matter, and by the time they've grown it's so messy to separate out that no one wants to do it.
Typically I just see an include directory that all the interface headers get piled into. It certainly makes it easy to include headers. People still have to think about which modules they're taking dependencies on when they specify the modules for the linker.
That said, I kinda like your approach better. You could even avoid adding these directories to the include path, so that people can tell what modules a source file depends on just by the relative paths in the #includes at the top.
Depending on how your project is laid out, this can be problematic when including them from headers, though, since the relative path to a header is resolved from the .cpp file, not from the .h file, so the .h file doesn't necessarily know where its .cpp files are.
If your projects have a flat hierarchy, however, this will work. Say you have
base\foo\foo.cpp
base\bar\bar.cpp
base\baz\baz.cpp
base\baz\inc\baz.h
Now any header file can include
#include "..\baz\inc\baz.h
and it will work since all the cpp files are one level deeper than base.
In a group I had been working in, everything public was kept in a module-specific folder, while private stuff (private headers, cpp files etc.) was kept in an _imp folder within it:
base\foo\foo.h
base\foo\_imp\foo_private.h
base\foo\_imp\foo.cpp
This way you could just poke around within your project's folder structure and get the header you want. You could grep for #include directives containing _imp and take a good look at them. You could also grab the whole folder, copy it somewhere, delete all _imp subfolders, and know you'd have everything ready to release as an API.
Within projects, headers were usually included as
#include "foo/foo.h"
However, if the project had to use some API, the API headers would be copied/installed by the API's build wherever they were supposed to go on that platform, and then be included as system headers:
#include <foo/foo.h>