I am learning C++, so I'm a beginner, and I can't understand why programmers create custom libraries. I can easily find answers to the question "how", but not to the question "why". I've found two possible reasons:
Organizing subprograms (collecting finished pieces of code into one library) to use in different programs or tools, for example.
Binary output size.
For organizing functional code blocks, I'd prefer to use git clone instead of a library. For me, it is a more comfortable way to integrate, edit, and use my code in different projects.
About output size: it's a debatable point. If I use a static library, I don't get a smaller executable: the library's code gets copied into my app at link time, just as it would if I had compiled the same code directly into the executable. If we're talking about DLLs, then yes, you can reduce the executable's size, but you get another issue: loading at runtime. Either way, I don't see any benefit.
Can you explain to me why?
You’re already writing libraries if you have multiple repositories that can be variously combined to form multiple programs. (This is an excellent idea: you never know when you’ll want to build a bigger program on top of current capabilities, and libraries compose much better than programs. What mechanism to use to compose source libraries is a different question.) Compiling them into binary libraries is mostly a matter of convenience, either because linking is cheaper than compiling or because it fixes things like preprocessor macros. It also avoids circumstances where two dependencies might otherwise require multiple copies of a third they both use.
My current project has two libraries libAAA and libBBB.
The former, libAAA, contains the basic math and mechanics for my project (e.g. the rules of chess). The latter, libBBB, depends on libAAA but adds GUI elements (e.g. how the individual pieces are drawn).
The executable depends on both libAAA and libBBB: libAAA is used to set up the chess game, while libBBB defines how it looks (console, Qt, GTK, ...).
Is it OK to have such redundant dependencies or is it probably bad practice?
edit:
I ask because I like how easy it is to decouple the GUI representation from the mechanics. E.g. if I wanted to change the look or the toolkit, I could just add libHHH, build another executable, and that's it.
This is fine.
Let's look at some alternatives:
Pull in all the code from libAAA into libBBB: This is alright if libAAA isn't intended to be used in other contexts, and their respective sizes allow this to be done in a practical manner. In many contexts though, e.g. if libBBB is much larger and libAAA is meant as a small reusable library for many projects independently of the UI framework used in libBBB, this won't work.
Copy the libAAA code that libBBB uses into libBBB directly: Please don't. This would mean two separate codebases doing a lot of the same things, two places to update in the future, and more confusion for you or for anyone else trying to understand the code later.
Bring the libBBB code into the application itself: This is not always possible or practical, depending on your development stack and environment. Many applications may also need libraries that were programmed and compiled in a different language for performance or other reasons.
I'm probably missing a couple other alternative options, but keeping both dependencies despite the redundancy seems the best option to me.
As you stated in the edit, this also allows you to swap out libBBB for another UI library that also uses libAAA (or another lib if need be) without tearing down any of your core application code or invalidating any other code in your dependencies.
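To make the layering concrete, here is a minimal sketch of how the split might look; the names ChessBoard, Move, BoardRenderer, and ConsoleRenderer are made up for illustration and are not from the asker's project. The point is that the dependency runs one way only: libBBB (and any future libHHH) knows about libAAA, never the other way around.

// --- libAAA: pure game mechanics, no GUI dependencies ----------------------
#include <iostream>
#include <vector>

struct Move { int fromSquare; int toSquare; };

class ChessBoard {
public:
    std::vector<Move> legalMoves() const { return {{12, 28}}; }  // rules only (stubbed)
    void apply(const Move&) { /* update board state */ }
};

// --- libBBB: depends on libAAA plus a GUI toolkit; the dependency is one-way.
// A different GUI library would implement the same interface against the same
// ChessBoard without touching libAAA.
class BoardRenderer {
public:
    virtual ~BoardRenderer() = default;
    virtual void draw(const ChessBoard& board) = 0;  // Qt, GTK, console, ...
};

class ConsoleRenderer : public BoardRenderer {
public:
    void draw(const ChessBoard& board) override {
        std::cout << board.legalMoves().size() << " legal moves\n";
    }
};

// --- executable: wires the two layers together ------------------------------
int main() {
    ChessBoard board;
    ConsoleRenderer renderer;
    renderer.draw(board);
}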
I am working on a project with a friend where we have to update an existing version of a game that uses SDL 1.2 so that it uses the SDL 2.0 header files and functions.
I would like to know what standard procedure to follow when updating existing source code to newer libraries.
The code has 28 source files with 11 header files and makes extensive use of keyboard and mouse events and sounds as well.
All the source files use C++ and SDL. Most source files are around 200 lines of code.
I have about 3 months to make the changes. I would like to write a basic summary of my schedule for that period on a week-by-week or two-week basis.
Can anyone suggest the proper steps so I can put together such a schedule?
I don't know of standard procedures, but here are my thoughts:
In general, I can think of two approaches to porting existing software to another language, another library, or any similar change: with minimal structural change, or with major structural change.
Minimal structural change means that you leave the structure of your program intact and replace every single bit of it with its equivalent in the other library. For example, if porting from GLUT to SDL, you replace the GLUT window creation with SDL's, the keyboard handling with SDL's, etc., all independently.
This method is faster, in the sense that all the changes are small and local. You are also less likely to introduce bugs. The result may not be so efficient though, as the structure of the program is not originally designed for this new library.
Major structural change means that, based on the new library, you rewrite a large part of the program that depends on that library. For example, if switching from C++'s STL to C's standard library, mechanically rewriting every std::string operation as C code that keeps allocating, freeing, and copying doesn't make sense, because your approach should be fundamentally different.
This approach requires a lot more work, but in the end it will give you much better quality.
Back to your particular case, I'm not sure how different SDL 2 is from SDL 1.2, but my guess is it's not fundamentally different. Therefore, I believe the first method should work in your case. That is, you should work out the equivalents in SDL 2 of the stuff you did in SDL 1.2, and replace them in your code accordingly.
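To give a concrete flavour of that first method, here is roughly what the window-creation and event-loop replacement looks like; this is a minimal sketch, not code from the game in question, and error handling is kept to a bare minimum.

// SDL 1.2 style (before): a single call creates the window and drawing surface:
//     SDL_Surface* screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
// SDL 2.0 style (after): the window and the renderer are separate objects.
#include <SDL.h>   // SDL2 headers; adjust the include path to your setup

int main(int, char**) {
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    SDL_Window* window = SDL_CreateWindow("Game",
                                          SDL_WINDOWPOS_CENTERED,
                                          SDL_WINDOWPOS_CENTERED,
                                          640, 480, 0);
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, 0);

    // The event loop keeps the same overall shape in both versions.
    bool running = true;
    while (running) {
        SDL_Event event;
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT)
                running = false;
            else if (event.type == SDL_KEYDOWN &&
                     event.key.keysym.sym == SDLK_ESCAPE)
                running = false;
        }
        SDL_RenderClear(renderer);
        // ... drawing goes here ...
        SDL_RenderPresent(renderer);
    }

    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}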
We have a (very large) existing codebase for a custom ActiveX control, and I'd like to integrate libkml into it for the sake of interacting with KML mapping data, rather than reinventing the wheel. The problem is, I'm a relatively new Windows developer, and coming from the Linux world, I'm really not sure what the right way of integrating a third-party library is. Thankfully, libkml does provide MSVC projects for compiling it, so porting isn't a problem. I guess I have a couple of choices that I can think of:
Build and link the library directly. We already have a solution with project files in it for the "main" project; I could add the libkml projects to that solution, but I'd rather not. It's very unlikely that the libkml code will change in relation to our app's code.
Statically link to the .lib files produced by the libkml build. This is unattractive, since there are six .lib files that come out of the libkml solution and it seems inelegant to manually specify them in the linker options, etc.
Package the code as-is in a DLL. Maybe with COM? It seems like if I did this without any translation, I'd end up with a lot of overhead, and since I'm fairly unfamiliar with COM, I don't know how much work would be involved in exposing all the functionality I'd like to use via COM. The library is fairly big, has a lot of classes it uses, and if I had to manually write code to expose it all, I'd be hesitant to go this route.
Write wrapper code to abstract the functionality I need, package that in a COM DLL, and interact with that. This seems sensible, I suppose, but it's difficult to determine how much abstraction I need since I haven't written the code that would use libkml yet.
Let me reiterate: I haven't yet written the code that will interact with libkml, so this is mostly exploratory. Options 1 and 2 are also complicated by the fact that libkml additionally relies on three more external libraries that are also in .lib files (which I had to recompile anyway to get the code-generation flags to line up). The goal obviously is to get the code to work, but maintainability and source-tree organization are also goals, so I'm leaning towards options 3 and 4, but I don't know the best way to approach those on Windows.
Typing six file names, or using the declarative style with #pragma comment(lib, "foo.lib") is small potatoes compared to the work you'll have to do to turn this into a DLL or COM server.
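For reference, the declarative style mentioned above is just a handful of lines in one source file; the library names below are placeholders, not the actual outputs of the libkml build.

// MSVC-specific link directives; with these in place the .lib files don't
// need to be listed in the project's linker settings.
#pragma comment(lib, "libkmlbase.lib")      // placeholder names: substitute
#pragma comment(lib, "libkmldom.lib")       // the .lib files your libkml
#pragma comment(lib, "thirdparty_dep.lib")  // build actually produces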
The distribution is heavily biased towards using this as a static link library. There are only spotty declarations available to turn this into a DLL with __declspec(dllexport), and they exist only in the 3rd party dependencies, all using different #defines of course, so you'll be typing a bunch of names into the preprocessor definitions for the projects.
Furthermore, you'll have a hard time actually getting this DLL loaded at runtime since you are using it in a COM server. The search path for DLLs will be the client app's when COM creates your control instance, not likely to be anywhere near the place you deployed the DLL.
Making it a COM server is a lot of work, you'll have to write all the interface glue yourself. Again, nothing already in the source code that helps with this at all.
You can also wrap all the functionality you need in a non-COM DLL. Visual Studio supports creating a static wrapper library which, when linked, will make your program use the DLL. This way you only have one dependency to specify instead of six.
Other than that, what is wrong with specifying six dependencies? I would assume that there is a good reason these are six separate libraries instead of one, so it is prudent to specify exactly which parts you actually use.
Maybe I'm missing something here, but I really don't see what is wrong with (1). I think that even if you had multiple projects that were using libkml, just insert the project file for libkml into your solution file, specify the dependencies, and you should be done. It's dead simple. Even solution (2) is dead simple. If the libraries ever change, you rebuild - you're going to need to do that anyway.
I'm failing to see how (3) or (4) is necessary or even desirable. To me, it sounds like a lot of work for goals (source tree organization and maintainability) that I'm not even sure those options really meet. In fact, you said yourself that "It's very unlikely that the libkml code will change in relation to our app's code."
What I've found over the years is to just keep things simple. If rebuilding KML is potentially time consuming, grab the libs and just statically link to the libraries. Yes, there are other dependencies, but you'll set this up once and be done, hopefully never to worry about it again. Otherwise, stick it in the project and move on. I think that it's worthwhile to ask whether spending a lot of time on this issue is worth the trouble.
General question:
For unmanaged C++, what's better for internal code sharing?
Reuse code by sharing the actual source code? OR
Reuse code by sharing the library / dynamic library (+ all the header files)
Whichever it is: what's your strategy for reducing duplicate code (copy-paste syndrome) and code bloat?
Specific example:
Here's how we share the code in my organization:
We reuse code by sharing the actual source code.
We develop on Windows using VS2008, though our project actually needs to be cross-platform. We have many projects (.vcproj) committed to the repository; some have their own repository, some are part of a shared one. For each deliverable solution (.sln) (e.g. something that we deliver to the customer), we svn:externals all the necessary projects (.vcproj) from the repository to assemble the "final" product.
This works fine, but I'm quite worried that eventually the code size of each solution could get quite huge (right now our total code size is about 75K SLOC).
Also, one thing to note is that we disallow all transitive dependencies. That is, each project (.vcproj) that is not an actual solution (.sln) is not allowed to svn:externals any other project, even if it depends on it. This is because two projects (.vcproj) might depend on the same library (e.g. Boost) or project (.vcproj), and when you svn:externals both projects into a single solution, svn:externals would pull it in twice. So we carefully document all dependencies for each project, and it's up to whoever creates the solution (.sln) to ensure that all dependencies (including transitive ones) are added via svn:externals as part of the solution.
If we instead reused code via .lib or .dll files, this would obviously reduce the code size of each solution, as well as eliminate the transitive-dependency issue mentioned above where applicable (exceptions are, for example, third-party libraries/frameworks that use DLLs, like Intel TBB and the default Qt build).
Addendum: (read if you wish)
Another motivation to share source code might be summed up best by Dr. GUI:
On top of that, what C++ makes easy is not creation of reusable binary components; rather, C++ makes it relatively easy to reuse source code. Note that most major C++ libraries are shipped in source form, not compiled form. It's all too often necessary to look at that source in order to inherit correctly from an object—and it's all too easy (and often necessary) to rely on implementation details of the original library when you reuse it. As if that isn't bad enough, it's often tempting (or necessary) to modify the original source and do a private build of the library. (How many private builds of MFC are there? The world will never know . . .)
Maybe this is why, when you look at libraries like the Intel Math Kernel Library, their "lib" folder contains "vc7", "vc8", and "vc9" subfolders, one for each Visual Studio version. Scary stuff.
Or how about this assertion:
C++ is notoriously non-accommodating when it comes to plugins. C++ is extremely platform-specific and compiler-specific. The C++ standard doesn't specify an Application Binary Interface (ABI), which means that C++ libraries from different compilers or even different versions of the same compiler are incompatible. Add to that the fact that C++ has no concept of dynamic loading and each platform provides its own solution (incompatible with others) and you get the picture.
What are your thoughts on the above assertion? Does something like Java or .NET face these kinds of problems? E.g. if I produce a JAR file from NetBeans, will it work if I import it into IntelliJ, as long as I ensure that both have compatible JREs/JDKs?
People seem to think that C specifies an ABI. It doesn't, and I'm not aware of any standardised compiled language that does. To answer your main question, use of libraries is of course the way to go - I can't imagine doing anything else.
One good reason to share the source code: Templates are one of C++'s best features because they are an elegant way around the rigidity of static typing, but by their nature are a source-level construct. If you focus on binary-level interfaces instead of source-level interfaces, your use of templates will be limited.
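A small illustration of that point (clamp_value is a made-up name): a template like the one below has to be shipped as source, because the compiler generates a separate instantiation for every type it is used with.

#include <iostream>

template <typename T>
T clamp_value(T value, T low, T high) {
    if (value < low)  return low;
    if (value > high) return high;
    return value;
}

int main() {
    // Two different instantiations, both generated from the same source:
    std::cout << clamp_value(42, 0, 100) << "\n";     // int
    std::cout << clamp_value(3.14, 0.0, 1.0) << "\n"; // double
}
// There is no single compiled object you could put in a .lib that covers
// every possible T; consumers need the template's source to instantiate it.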
We do the same. Trying to use binaries can be a real problem if you need to use shared code on different platforms or build environments, or even if you need different build options such as static vs. dynamic linking to the C runtime, different structure-packing settings, etc.
I typically set projects up to build as much from source on-demand as possible, even with third-party code such as zlib and libpng. For those things that must be built separately, e.g. Boost, I typically have to build 4 or 8 different sets of binaries for the various combinations of settings needed (debug/release, VS7.1/VS9, static/dynamic), and manage the binaries along with the debugging information files in source control.
Of course, if everyone sharing your code is using the same tools on the same platform with the same options, then it's a different story.
I never saw shared libraries as a way to reuse code from an old project into a new one. I always thought it was more about sharing a library between different applications that you're developing at about the same time, to minimize bloat.
As far as copy-paste syndrome goes, if I copy and paste it in more than a couple places, it needs to be its own function. That's independent of whether the library is shared or not.
When we reuse code from an old project, we always bring it in as source. There's always something that needs tweaking, and it's usually safer to tweak a project-specific version than to tweak a shared version that can wind up breaking the previous project. Going back and fixing the previous project is out of the question because 1) it worked (and shipped) already, 2) it's no longer funded, and 3) the test hardware needed may no longer be available.
For example, we had a communication library that had an API for sending a "message", a block of data with a message ID, over a socket, pipe, whatever:
void Foo::Send(unsigned messageID, const void* buffer, size_t bufSize);
But in a later project, we needed an optimization: the message needed to consist of several blocks of data in different parts of memory concatenated together, and we couldn't (and didn't want to, anyway) do the pointer math to create the data in its "assembled" form in the first place, and the process of copying the parts together into a unified buffer was taking too long. So we added a new API:
void Foo::SendMultiple(unsigned messageID, const void** buffer, size_t* bufSize);
Which would assemble the buffers into a message and send it. (The base class's method allocated a temporary buffer, copied the parts together, and called Foo::Send(); subclasses could use this as a default or override it with their own, e.g. the class that sent the message on a socket would just call send() for each buffer, eliminating a lot of copies.)
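For illustration only, here is a minimal sketch of what that default might look like; this is not the original code, and it assumes (since the declaration above doesn't show how the buffer count is conveyed) that the buffer array is terminated by a null pointer.

#include <cstddef>
#include <cstring>
#include <vector>

// Minimal stand-in for the class discussed above, just enough to compile.
class Foo {
public:
    virtual ~Foo() = default;
    virtual void Send(unsigned messageID, const void* buffer, size_t bufSize) = 0;
    virtual void SendMultiple(unsigned messageID, const void** buffer, size_t* bufSize);
};

void Foo::SendMultiple(unsigned messageID, const void** buffer, size_t* bufSize) {
    // Assumption for this sketch: buffer[] ends with a null entry.
    size_t count = 0;
    while (buffer[count] != nullptr)
        ++count;

    // Work out the total size, then copy the pieces into one temporary buffer.
    size_t total = 0;
    for (size_t i = 0; i < count; ++i)
        total += bufSize[i];

    std::vector<unsigned char> assembled(total);
    size_t offset = 0;
    for (size_t i = 0; i < count; ++i) {
        std::memcpy(assembled.data() + offset, buffer[i], bufSize[i]);
        offset += bufSize[i];
    }

    // Fall back on the single-buffer path; a socket-based subclass could
    // override this and call send() once per buffer, eliminating the copies.
    Send(messageID, assembled.data(), assembled.size());
}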
Now, by doing this, we have the option of backporting (copying, really) the changes to the older version, but we're not required to backport. This gives the managers flexibility, based on the time and funding constraints they have.
EDIT: After reading Neil's comment, I thought of something that we do that I need to clarify.
In our code, we do lots of "libraries". LOTS of them. One big program I wrote had something like 50 of them. Because, for us and with our build setup, they're easy.
We use a tool that auto-generates makefiles on the fly, taking care of dependencies and almost everything. If there's anything strange that needs to be done, we write a file with the exceptions, usually just a few lines.
It works like this: the tool finds everything in the directory that looks like a source file, generates dependencies if the file changed, and spits out the needed rules. Then it makes a rule to take everything and ar/ranlib it into a libxxx.a file, named after the directory. All the objects and the library are put in a subdirectory named after the target platform (this makes cross-compilation easy to support). This process is then repeated for every subdirectory (except the object-file subdirs). Then the top-level directory gets linked with all the subdirectories' libraries into the executable, and a symlink is created, again named after the top-level directory.
So directories are libraries. To use a library in a program, make a symbolic link to it. Painless. Ergo, everything's partitioned into libraries from the outset. If you want a shared lib, you put a ".so" suffix on the directory name.
To pull in a library from another project, I just use a Subversion external to fetch the needed directories. The symlinks are relative, so as long as I don't leave something behind it still works. When we ship, we lock the external reference to a specific revision of the parent.
If we need to add functionality to a library, we can do one of several things. We can revise the parent (if it's still an active project and thus testable), tell Subversion to use the newer revision and fix any bugs that pop up. Or we can just clone the code, replacing the external link, if messing with the parent is too risky. Either way, it still looks like a "library" to us, but I'm not sure that it matches the spirit of a library.
We're in the process of moving to Mercurial, which has no "externals" mechanism so we have to either clone the libraries in the first place, use rsync to keep the code synced between the different repositories, or force a common directory structure so you can have hg pull from multiple parents. The last option seems to be working pretty well.
I'm just starting to explore C++, so forgive the newbiness of this question. I also beg your indulgence on how open ended this question is. I think it could be broken down, but I think that this information belongs in the same place.
(FYI -- I am working predominantly with the Qt SDK and mingw32-make right now, and I seem to have configured them correctly for my machine.)
I knew that there is a lot in the language which is compiler-driven -- I've heard about pre-compiler directives, but it seems like someone could write books about the different C++ compilers and their respective parameters. In addition, there are commands which apparently precede make (like qmake, for example; is that something only in Qt?).
I would like to know if there is any place which gives me an overview of what compilers are out there, and what their different options are. I'd also like to know how each of them views Makefiles (it seems that there is a difference in syntax between them?).
If there is no website regarding, "Everything you need to know about C++ compilers but were afraid to ask," what would be the best way to go about learning the answers to these questions?
Concerning the "numerous options of the various compilers"
A piece of good news: you needn't worry about the details of most of these options. You will, in due time, delve into them, but only for the compiler you actually use, and maybe only for the options that pertain to a particular set of features. As a novice, generally trust the default options or the ones supplied with the make files.
The broad categories of these features (and I may be missing a few) are:
pre-processor defines (now, you may need a few of these)
code generation (target CPU, FPU usage...)
optimization (hints for the compiler to favor speed over size and such)
inclusion of debug info (extra data left in the object/binary that enables the debugger to know where each line of code starts, what the variable names are, etc.)
directives for the linker
output type (exe, library, memory maps...)
C/C++ language compliance and warnings (compatibility with previous versions of the compiler, compliance with current and past C/C++ standards, warnings about common, possibly bug-indicative patterns...)
compile-time verbosity and help
Concerning an inventory of compilers with their options and features
I know of no such list, but I'm sure something like it exists on the web. However, I suggest that, as a novice, you worry little about these "details": use whatever free compiler you can find (gcc is certainly a great choice), and build experience with the language and the build process. C++ professionals may argue, with good reason and at length, about the merits of various compilers and their associated runtimes, but for generic purposes -- and then some -- the free stuff is all that is needed.
Concerning the build process
The most trivial applications, such as those made of a single compilation unit (read: a single C/C++ source file), can be built with a simple batch file where the various compiler and linker options are hardcoded and the name of the file is specified on the command line.
For all other cases, it is very important to codify the build process so that it can be done
a) automatically and
b) reliably, i.e. with repeatability.
The "recipe" associated with this build process is often encapsulated in a make file or as the complexity grows, possibly several make files, possibly "bundled together in a script/bat file.
This (make file syntax) you need to get familiar with, even if you use alternatives to make/nmake, such as Apache Ant; the reason is that many (most?) source code packages include a make file.
In a nutshell, make files are text files that let you define targets and the commands needed to build each target. Each target is associated with its dependencies, which allows the make logic to decide which targets are out of date and should be rebuilt and, before rebuilding them, which of their dependencies should be rebuilt as well. That way, when you modify, say, an include file (and if the make file is properly configured), any C file that uses this header will be recompiled, and any binary that links with the corresponding object file will be rebuilt as well. make also includes options to force all targets to be rebuilt, which is sometimes handy to be sure you truly have a current build (for example when some dependencies of a given object are not declared in the make file).
On the Pre-processor:
The pre-processor is the first step toward compiling, although it is technically not part of the compilation. The purposes of this step are:
to remove comments and extraneous whitespace
to substitute any macro reference with the relevant C/C++ syntax. Some macros, for example, are used to define constant values, say an email address used in the program; during pre-processing every reference to this constant (by convention such constants are named with ALL_CAPS_AND_UNDERSCORES) is replaced by the actual C string literal containing the email address.
to exclude all conditional-compilation branches that are not relevant (#ifdef and the like)
What's important to know about the pre-processor is that its directives are NOT part of the C/C++ language proper, and that they serve several important functions, such as the conditional compilation mentioned earlier (used, for example, to produce multiple versions of the program, say for different operating systems, or indeed for different compilers).
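As a small illustration of both points (macro substitution and conditional compilation), here is a sketch; SUPPORT_EMAIL and PLATFORM_WINDOWS are made-up names used only for this example.

#include <cstdio>

#define SUPPORT_EMAIL "support@example.com"   // plain textual substitution

int main() {
    // After pre-processing, the next line literally contains the string above.
    std::printf("Contact us at %s\n", SUPPORT_EMAIL);

#ifdef PLATFORM_WINDOWS
    // This branch survives only when PLATFORM_WINDOWS is defined, e.g. via a
    // compiler flag such as -DPLATFORM_WINDOWS or /DPLATFORM_WINDOWS.
    std::printf("Building the Windows variant\n");
#else
    std::printf("Building the generic variant\n");
#endif
    return 0;
}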
Taking it from there...
After this manifesto of mine... I encourage you to read just a little more and then dive into programming and building binaries. It is a very good idea to try to get a broad picture of the framework etc., but this can be overdone, a bit like the exchange student who stays in his or her room reading the Webster dictionary to be "prepared" for meeting native speakers, rather than just doing it!
Ideally you shouldn't need to care which C++ compiler you are using. Conformance to the standard has gotten much better in recent years (even from Microsoft).
Compiler flags obviously differ, but the same features are generally available; it's just a differently named option to, e.g., set the warning level on GCC versus MS cl (such as -Wall versus /W4).
The build system is independent of the compiler; you can use any make with any compiler.
That is a lot of questions in one.
C++ compilers are a lot like hammers: They come in all sizes and shapes, with different abilities and features, intended for different types of users, and at different price points; ultimately they all are for doing the same basic task as the others.
Some are intended for highly specialized applications, like high-performance graphics, and have numerous extensions and libraries to assist the engineer with those types of problems. Others are meant for general purpose use, and aren't necessarily always the greatest for extreme work.
The technique for using each type of hammer varies from model to model—and version to version—but they all have a lot in common. The macro preprocessor is a standard part of C and C++ compilers.
A brief comparison of many C++ compilers is here. Also check out the list of C compilers, since many programs don't use any C++-specific features and can be compiled as ordinary C.
C++ compilers don't "view" makefiles. The rules of a makefile may invoke a C++ compiler, but also may "compile" assembly language modules (assembling), process other languages, build libraries, link modules, and/or post-process object modules. Makefiles often contain rules for cleaning up intermediate files, establishing debug environments, obtaining source code, etc., etc. Compilation is one link in a long chain of steps to develop software.
Also, many development environments abstract the makefile into a "project file" which is used by an integrated development environment (IDE) in an attempt to simplify or automate many programming tasks. See a comparison here.
As for learning: choose a specific problem to solve and dive in. The target platform (Linux/Windows/etc.) and problem space will narrow the choices pretty well. Which you choose is often linked to other considerations, such as working for a particular company, or being part of a team. C++ has something like 95% commonality among all its flavors. Learn any one of them well, and learning the next is a piece of cake.