C++ import library by source

I'm new to C++ and wonder if it is good practice to include a library as source code. If it is, what would be the best way to achieve this? Just copying it into a subfolder and using #include?
In my particular case, I have written a small library and I'm going to use it on two different microprocessors. Compiling the library separately, copying all the headers, and using this "package" seems like overkill to me.

Compiling the library separately is what should be done.
It's not that much overkill either: you're just compiling the .o files for your library, wrapping them in an archive, and handing that archive around.
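A minimal sketch of that workflow with the GNU toolchain (library and file names are made up; on the microcontrollers you would use the matching cross-compiler instead of g++):

    # compile the library sources to object files
    g++ -O2 -c uart.cpp timer.cpp
    # bundle the object files into a static archive
    ar rcs libsmall.a uart.o timer.o
    # in each application project, link against the archive
    g++ main.cpp -L. -lsmall -o app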

Normally libraries are used as libraries because it is much easier and more convenient that way. If you are using dynamic libraries (.dll or .so), things get even better because you can replace libraries on the fly and things should continue to work smoothly.
You decided to use the source code directly instead of prebuilt libraries, which probably means more work for you. If you are happy that way, that's OK, but make sure you do not violate any licenses; some LGPL packages (like Qt) effectively require their libraries to be linked dynamically.
The best way to do this is hard to say, but in your place I would probably use git and include the libraries as submodules.
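A minimal sketch of the submodule approach (repository URL and paths are made up):

    # add the library repository as a submodule inside the project
    git submodule add https://example.com/you/smalllib.git extern/smalllib
    git commit -m "Add smalllib as a submodule"
    # later, when cloning the project, pull the submodule too
    git clone --recurse-submodules https://example.com/you/project.git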

Just #include-ing source code is a bad idea, since it effectively copies the library's code into your own translation units, and things can go wrong that way. For example, if there is a static variable somewhere in the library code and a variable with the same name in your code, you will have a conflict.
Instead you should probably compile the library separately and link it, possibly the same way as you would anyway (i.e. you build the library and then you link with that library). The lightweight alternative would be to just compile the additional C++ files and link the object files together into an executable. The details of how you do that are compiler-specific.
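With g++, for example (other compilers have equivalent options; file names are made up), the lightweight variant is just:

    # compile your code and the library's sources to object files
    g++ -c main.cpp mylib.cpp
    # link the object files together into an executable
    g++ main.o mylib.o -o app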
There are valid reasons for including the library source this way; for example, if your project needs to modify the library during development, it is easier to do so when rebuilding the library is part of the project's build process. With a well-designed build process, the library shouldn't have to be rebuilt unless there are actual changes to it.

The value of a library is in part that you link it more often than you compile it, leading to a net saving.
If you control all the source, then whatever build process works best for you is fine.

I agree with πάντα ῥεῖ, but I'll also add that the reason it is bad practice is that the compiled library can be stored on your computer in a common location and used by tons of different programs, thereby reducing the amount of data your computer has to store, on disk as well as in RAM (if more than one running program uses the same library). An example is OpenGL, which is a library that many games use and is probably already on your system somewhere. On Windows, software installers link these libraries to their programs and add them if you don't have them. On Linux, you will be notified if libraries are missing and prompted to install them. All of that aside, you can technically use uncompiled libraries, but that introduces a number of potential licensing problems as well as additional problems with THEIR dependencies.

Copying the source code into other projects and "mixing" it with other source code will stop this library from being a "library". Later on you will be tempted to make a small change in one copy (for one CPU) or fix a bug and forget to do the same in the other copy.
There might be additional considerations, but you should try to keep the code in one place. Don't Repeat Yourself (DRY) is a very strong and fundamental principle of software engineering with many benefits.

Related

wxWidgets: Which files to link?

I am learning C++ and, in order to do so, am writing a sample wxWidgets application.
However, none of the documentation I can find on the wxWidgets website tells me what library names to pass to the linker.
Now, apart from wxWidgets, is there a general rule of thumb or some other convention by which I would know, for any library, the names of the files I need to link against?
We have more of a "rule of ring finger" than a rule of thumb.
Generally, if you compile the library by hand, it will produce several library files (usually .a, .lib, or something similar, depending entirely on your compiler and your ./configure options); these are typically produced by a makefile's build script.
Now, a makefile can be edited in any way the developer pleases, but there are good conventions (there is, in fact, a standard) that many follow, and there are tools to auto-generate the makefiles for a library (see automake).
Makefiles are usually consistent
You can simply use the makefile to generate the files, and if it's compliant, the files will be placed in a particular folder (the lib folder I believe?) all queued up and ready to use!
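For a typical autotools-based library, that boils down to something like this (exact output locations vary per project):

    ./configure
    make
    # the built .a / .so files usually end up under a lib/ or .libs/ directory,
    # or get installed (often under /usr/local) with:
    make install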
Keep in mind that a library file is simply the implementation of your code in precompiled form; you could create a library yourself from your code quite easily using the ar tool. Because it is code, just like any other code, you don't necessarily want to include all of the library files for a given library. For instance, with wxWidgets, if you're not using rich text, you certainly don't want to waste the extra space in your final executable by including that library file. Later, if you want to use it, you can add it to your project (just like adding a .cpp file).
Oh and also with wxWidgets, in their (fantastic) documentation, each module will say what header you need to include, and what library it is a part of.
Happiness
Libraries are amazing, magical, unicorns of happiness. Just try not to get too frustrated with them and they'll prance in the field of your imagination for the rest of your programming career!
After a bit more Googling, I found a page on the wxWidgets wiki which relates to the Code::Blocks IDE, but which also works for me. By adding the following to the linker options, it picks up all the necessary files to link:
`wx-config --libs`
(So that does not solve my "general rule" problem; for any library I am working with, I still have to find out what files to link against, but at least this solves the problem for wxWidgets).
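For reference, the output of wx-config is usually expanded straight on the compile/link command line; a minimal sketch assuming g++ and a single source file:

    g++ main.cpp `wx-config --cxxflags --libs` -o myapp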
The build instructions are different for each platform and so you need to refer to the platform-specific files such as docs/gtk/install.txt (which mentions wx-config) or docs/msw/install.txt to find them.
FWIW, the wxWidgets project would also gratefully accept any patches to the main manual improving the organization of the docs.

Is it possible to read code of a C++ library and modify it?

A bit of a simple question, though the answer may not be. I am fairly new to C++ and was wondering if it is possible to open a C++ library and see its code. It sounds like a potentially risky move to accidentally change the core code of the library, but I would still like to know if it is possible. Thank you!
There are two kinds of libraries that C++ can use:
libraries compiled to binary, which are linked by the linker into your executable;
header-only libraries, which are just pulled into your source code with #include.
You can "open" the headers of header-only libraries and modify the code if you wish (though this is not recommended).
Also, many compiled libraries are open source, so you can read their source files there. If you want to modify such a library, you will need to compile it and link your executable against the modified version.
Yes, it is possible to open a C++ library and see its code.
If you want to change some functionality, simply create your own version of it with a different name; if you want to add functionality, just extend the class you are interested in (read up on inheritance for this).

Sharing C++ static libraries between multiple solutions causes unnecessary rebuild

I have two solutions that both include and reference the same static library. I'm including the library using the "Add reference..." feature, as opposed to specifying an additional linker input. It seems as though when I build one of the solutions, it causes the other solution to think it needs to rebuild the shared library, which then causes it to re-link the second solution. Thus, if I go back and forth between the two solutions building (without making any code changes), the solutions perform the link every time.
It doesn't appear that the shared static library is actually being re-compiled, but VS is performing the librarian step for it. I'm guessing this librarian step is happening because the .lastbuildstate file (which contains a path to the solution that last built the project) is determined to be outdated.
Anybody ever experienced this problem before? Is there a better way to go about this?
If the library is really shared and independent of both solutions, I'd suggest moving it to a separate solution... I understand that's not really what you intended, but it seems logical given the nature of the dependency itself.
Another consideration in favor of this is that the library may later be used by other projects. Hence, it would be better to move it to a separate location (separate solution, separate folder in VCS) and treat it like any third-party library (e.g. openssl, boost, etc.), specifying the dependency as a linker input.
All of the above is just my thinking on how I'd do it and is not a representation of any "best practices".

Lib Files and Defines

I'm using a couple of external libraries and I'd rather not have to include all their source and header files in my main source directory or in my project file. One option would be to compile the libraries as lib files and link them like that. However, I'm not sure whether the defines get evaluated before or after the lib file gets created (which is it?). If it's before, then obviously I can't just package them, because they might not work properly on different compilers or systems.
So if I can't package the libraries as lib files, is there any way for me to link in the .c or .cpp source files? Probably not, since they would have to be compiled first, but maybe I'm wrong.
Edit: Here's a follow-up question, based on answers. Do you think it'd be too much of a hassle to have a makefile that creates the lib files? I'd still rather not add the sources to my project or in my source directory.
A library is a binary file, so all the defines are obviously already baked in.
Just to clarify: defines are evaluated in the first stage of the compilation process, called preprocessing. At this stage, for each .cpp file, one translation unit is created containing all recursively #include'd files, and all macros are evaluated.
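You can see this stage in isolation: with gcc or clang, -E stops after preprocessing (the file and macro names below are made up):

    # stop after preprocessing: all #includes and macros are expanded, nothing is compiled
    g++ -E -DUSE_FEATURE_X=1 foo.cpp -o foo.ii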
Anyway, a third-party library should not depend on your compilation flags, with one exception: debug vs. release builds. Only in that case do you need two versions of the third-party lib.
As to the question of whether to compile third-party libs once or each time you compile your own code: it depends. If you are doing it only for yourself, do whatever looks easiest to you, but if we're talking about a development team and a project that will be maintained for a long time, there are more things to consider.
So, let's say we're talking about a solid solution for a team, where the project is built many times.
In this case I personally strive to compile a third-party library once and use it many times. This reduces compilation time for each build for each developer, which means faster development.
Nice, but where do you keep these libs? I like physical separation: third-party libraries and my code are not in the same tree. This avoids some unintentional errors. A good build system, and most of the time this is mandatory, should be reproducible. This means that if you check out your code after a year, you can compile it and get exactly the same binaries.
At one point I used an external read-only tree on my machine. This tree was managed only by me.
To keep my sources reproducible, each new version of a third-party library was put in a directory named after its version, and my source tree was updated to point to that directory. If you build on several machines, the read-only tree has to be visible on all of them.
An additional option is to check whether your SCM tool (I assume you use one) gives you the ability to combine several sub-trees from the repository in one checkout, with one sub-tree per third-party library. This way the third-party libraries are available on all the machines you build on. I currently use this method with Subversion, where it's called svn:externals. On CVS, AFAIK, it's called CVS modules. An additional advantage is that the libraries are managed by the source control system, so you can track all changes made to the third-party libs.
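A minimal sketch of the Subversion variant (URL, directory, and revision number are made up):

    # pin a specific revision of the third-party library under third_party/smalllib
    svn propset svn:externals "third_party/smalllib -r1234 https://svn.example.com/libs/smalllib/trunk" .
    svn commit -m "Add smalllib as an external"
    # later checkouts and updates of the project pull the external automatically
    svn update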
Defines get evaluated even before compiling. They are dealt with by the preprocessor, which prepares the code for the compiler to use. So yes, they are evaluated before the libraries are created.
You can't link against source code. You can only link with object files, static libraries, or dynamic libraries (shared object files/DLLs).
Using dynamic linking can be a good option, especially if the externals are large and/or you'll be using them in many executables.
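A minimal sketch of building and consuming a shared library with the GNU toolchain on Linux (names are made up; the Windows DLL workflow differs):

    # compile position-independent code and produce a shared object
    g++ -fPIC -c widgets.cpp -o widgets.o
    g++ -shared widgets.o -o libwidgets.so
    # link the executable against the shared library
    g++ main.cpp -L. -lwidgets -o app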

Where should I be using a static library in C++

What are the use cases for using static libraries in C++? I have seen that people create DLLs instead, or some that use static libraries only. What's your recommendation?
I'm a big fan of static libraries pretty much everywhere. The one big thing that DLLs give you that static libs cannot is the ability to dynamically load and unload library functionality. So if your application is going to support some sort of hot-swappable plugins, you need to use dynamic libs. Otherwise you can probably use static libs.
Static libs open the door to a lot of optimizations that you can't do with dynamic libs, because they are performed at link time. In the Microsoft world, Link Time Code Generation (LTCG) gives you the ability to do whole-program optimization and dead-code stripping through not only your application, but also your libraries (in gcc this is called Link Time Optimization [LTO]).
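A rough gcc sketch of that (the MSVC equivalents are the /GL compile flag and /LTCG link flag):

    # compile library and application code with LTO information embedded
    g++ -O2 -flto -c mylib.cpp -o mylib.o
    gcc-ar rcs libmylib.a mylib.o        # gcc-ar keeps the LTO plugin in the loop
    g++ -O2 -flto main.cpp -L. -lmylib -o app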
Additionally, static libs tend to make your program easier to distribute because you aren't forced to pass around a lot of library files, and you can completely avoid DLL hell if you ever version your library.
You should use shared libraries (DLLs) if you have significant functionality that needs to be shared between applications, AND this functionality can be improved independently of the applications, with updates shipped separately.
The 'AND' part is the hardest to fulfill: usually you ship your application with any new functionality added and never update the library without updating the application at the same time (I am not saying that never happens), so usually the two ship in lockstep.
Otherwise it is easier to just build normal libs and ship the application.
An example of a good one (I use the term loosely, for illustration) is DirectX. When a new version of DirectX ships (and the interface has not changed), you just need to update the DLL and all applications that use DirectX get the benefit of the new version of the library. In reality it is not quite that simple, but you get the idea.
In general, although there are always exceptions to the rule, I would say:
Advantages of DLLs
Less physical memory usage when running multiple instances of an application. (Copy on write optimisation of memory usage.)
Faster link times.
Smaller executables.
Better modularity.
Advantages of static libraries
Less virtual memory usage (and probably less physical memory usage) when running a single instance of an application.
Performance. Approximately 10% (more or less) improvement over DLLs, depending on your application.
Reliability. You tested your application against a specific version (or specific versions) of a library. An upgrade to a DLL could potentially break your application.
There is the advantage of not having to recompile your entire program if you make a change to a dynamically linked library. @Chris makes a good point about DLL hell, but if it's a minor bug fix that doesn't affect the API, this can save you the recompilation.
There is an SO post that talks about Windows not being able to apply updates to your program if you statically link its libraries (link to come), although I think you are talking more about statically linking your own modules.
Use static version of your libraries where you can. Use dynamic libraries where you need to (license, availability or plugin system).
I use static libraries to implement UML's "package" concept. All modules belonging to a package get put into their own subdirectory, and I create an IDE subproject or makefile for that directory which builds a static library (*.a file). Modern IDEs make it possible to work with your top-level package along with sub-packages within the same "workspace".
If a package (or a group of packages) can be deployed separately from the main executable, then I compile it into a shared library (*.so or *.dll) instead and consider it a "component" in UML jargon.
Well, a static library would be for holding huge libraries, and also for what I like to call multi-OS code, so that it can be built to run on Linux, Windows, ...