I'm building a very basic library, and it's the first project that I plan on releasing for others to use if they'd like. As such, I'd like to know what some "best practices" are as far as organization goes. The only thing that may be unique about my project is that in order for it to be used in a project, users would be required to extend certain abstract classes, which leads me to my first question:
A lot of libraries I've seen consist of a .a file and a single .h file. Is this best practice? Wouldn't it be better to expose all the public .h files so that users can choose which ones to include? If this is the preferred way of doing things, how exactly is it accomplished? What goes into that single .h file?
My second question involves dependencies. For example my current project relies on OpenGL, GLFW, and GLEW. Should I package those in some way with my project, or just make it the user's responsibility to ensure that they are installed?
Edit: Someone asked about my target OS. All of my dependencies are cross platform so I'm (perhaps naively) hoping to make my library cross platform as well.
Thanks for any and all help!
It really depends on the circumstances. If you have some fairly complex functionality that lives in a number of closely related functions, then one header is the right solution.
E.g. if you write a set of functions that draw something to the screen, and you need a few functions to configure/set up the environment, a few functions to define and place objects in the scene, a few functions to do the actual drawing/processing, and finally teardown, then using one header file is a good plan.
In the above case, it's also possible to have one "overall" header-file that includes several smaller ones. Particularly if you have fairly large classes, sticking them all in one file gets rather messy.
On the other hand, if you have one set of functions that deal with gasses dissolved in liquids, another set of functions to calculate the strength/load capacity of a steel beam, and another set of functions to calculate the friction of a rubber tyre against a road surface, then they probably should have different headers, even if it's all plausible functionality for a "physics/mechanics library".
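As a sketch of that kind of split (header names and signatures below are hypothetical, just to illustrate one header per topic), a user who only cares about beams then includes only that one header:

// physics/gas_solubility.h
#ifndef PHYSICS_GAS_SOLUBILITY_H
#define PHYSICS_GAS_SOLUBILITY_H
double dissolved_concentration(double partial_pressure, double henry_constant);
#endif

// physics/beam_strength.h
#ifndef PHYSICS_BEAM_STRENGTH_H
#define PHYSICS_BEAM_STRENGTH_H
double beam_load_capacity(double section_modulus, double yield_strength);
#endif

// physics/tyre_friction.h
#ifndef PHYSICS_TYRE_FRICTION_H
#define PHYSICS_TYRE_FRICTION_H
double friction_force(double normal_load, double friction_coefficient);
#endif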
It is rarely a good idea to bundle third-party libraries with your library. Yes, if you want to offer two downloads, one "all you need, just add water" bundle and one "bare library", that's fine. But I don't want to spend three times longer than necessary downloading your library simply because it also contains three other libraries that your code uses and that are already on my machine. However, do document which libraries are needed, what you need to do to install them on your supported platforms (and what the supported platforms are), and which versions of the libraries you have tested against; there's nothing worse than "getting the latest", only to find that the version something needs is two steps back...
(And as Jason C points out, licensing gets very messy once you have a few different packages that your code depends on, because your license then has to be compatible with ALL the other licenses - sometimes that's not even possible...)
You have options and it really depends on how convenient you choose to make it for developers using your libraries.
As for the headers, the general method for libraries of average complexity is to have a single header that a developer can include to get everything they need. A good method, if you have multiple headers, is to create a single header with the same name as your library (not required, just common) and have it #include all the individual headers. Then distribute the single header and the individual headers. That way your users have the option of #including just one to get everything, or #including individual ones if necessary.
E.g. in mylibrary.h:
#ifndef MYLIBRARY_H
#define MYLIBRARY_H
#include <mylibrary/something.h>
#include <mylibrary/another.h>
#include <mylibrary/lastone.h>
#endif
Ensure that your individual headers can be included standalone (i.e. they #include everything they need) if you want to provide that option to developers.
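As a sketch of what "standalone" means here (the file names and the Another type are hypothetical): each individual header pulls in whatever it needs itself, so a developer can include it on its own:

// mylibrary/something.h -- an individual header that can be included standalone
#ifndef MYLIBRARY_SOMETHING_H
#define MYLIBRARY_SOMETHING_H

#include <string>               // standard headers it needs
#include <mylibrary/another.h>  // other library headers it needs

namespace mylibrary {
// Uses std::string and Another, so this header includes both itself rather
// than relying on the user to have included them first.
std::string describe(const Another& a);
}

#endif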
As for dependencies, you will want to make it the user's responsibility to ensure they are installed. The user is compiling their code and linking to your library, and so it is also the user's responsibility to link to dependent libraries. If you package third-party dependencies with your library you run many risks:
Breaking the systems of users who already have the dependencies installed.
As mentioned in Mats Petersson's answer, forcing users to download dependencies they already have.
Violating licensing rights on third-party libraries.
The best thing for you to do is clearly document the required dependencies.
For this there are not really standard "best practices". Any sane practice would be a good practice.
I've noticed that the source of the popular C++ library Cinder has separate src and include folders, containing *.cpp and *.h files correspondingly. Is there an advantage to doing it this way rather than simply putting every .cpp in the same dir as its matching .h?
It's often easier to structure your code this way, especially if you are going to export it as an API with pre-compiled libraries. The (public) headers then become your API, and it makes sense to keep them in a separate place from the source, as this is the part of the code that will have to be distributed with the library.
Usable options are:
module/*.{cpp,h} - best for spatial locality of related files, worst when you need to apply a strict API focus (backward compatibility, release vs patch, etc.)
module/{include/*.h, src/*.{cpp,h}} - good for API focus, good for spatial locality, my preferred choice
include/module/*.h, src/module/*.{cpp,h} - best for API focus, not very good for spatial locality
There are no real pros and cons, well, not in general. When designing an API (as opposed to an application), you will have to provide a set of includes with your library, and this particular role of header files makes developers choose the solution of separating them from the sources in their filesystem.
I don't think one organisation is better than another, but I can give you two pieces of advice to help you decide what is best for your projects:
Try to simplify your file hierarchy as much as possible. When it comes to project configuration, install scripts and version control, fewer nested folders means fewer headaches.
The most important thing is not where the header files are located but how they are included:
<> or ""?
From internal source files or from external code which uses your headers?
With a path to them or directly by file name?
Seeing how you want your headers to be written when calling #include helps you decide where it's most comfortable for you to put them.
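One concrete way to frame that decision (the paths and names here are hypothetical): write down the #include lines you want people to type, then pick the layout that produces them. The three spellings below correspond to three different layouts, not one compilable file:

// Layout include/module/*.h, with "include" on the include path:
#include <module/widget.h>   // external code: path plus angle brackets

// Headers living next to the sources in the same folder:
#include "widget.h"          // internal code: quotes, bare file name

// One flat public include directory on the include path:
#include <widget.h>          // angle brackets, no path at all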
As far as I'm concerned, I don't really like the header/source separation. Some headers are not meant to be exposed by my APIs, so either I keep all my sources in one folder or I prefer a public/private separation.
I'm trying to find different ways to reuse my C++ functions in different applications. Say for example I have the following functions:
void A(); // this will do a complex math operation
void B(); // this will load a complex shape file
void C(); // this will print the results
I need to use the above 3 functions in 3 different C++ programs. They are completely independent, and I'm trying to see what the best way is to use them in all of my applications rather than writing the same code 3 times.
I am thinking about the following options:
Option A: Writing static library
Option B: Writing dynamic library
Option C: Windows Services
Option D: Same code and compile everywhere
Are there any other options? Or what would be the best option?
If the functions are only going to be called "in-house" by yourself and/or your co-workers (i.e. they aren't going to be exposed to people who don't have access to your source code repository) then option (D) is sufficient. Just keep the .cpp and .h files in a single well-known sub-directory of your source code repository and have each application's project file reference them as necessary. This is simple to implement and gives you maximum flexibility (since each project can compile the shared .cpp files with different compiler flags that best suit its own needs, if necessary -- with a library you'd have to figure out a single set of compiler flags that would work for all applications that want to link to the library, which isn't always convenient).
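A minimal sketch of that setup (file and directory names are hypothetical): the shared code is just ordinary .h/.cpp pairs in one directory, and each application adds the .cpp files to its own project and compiles them with its own flags:

// shared/math_utils.h -- lives in the well-known sub-directory of the repo
#ifndef SHARED_MATH_UTILS_H
#define SHARED_MATH_UTILS_H
double clamp(double value, double lo, double hi);
#endif

// shared/math_utils.cpp -- compiled separately by each application's project
#include "shared/math_utils.h"
double clamp(double value, double lo, double hi) {
    return value < lo ? lo : (value > hi ? hi : value);
}

// app1/main.cpp -- application 1 just references the shared files it needs
#include "shared/math_utils.h"
int main() {
    return clamp(42.0, 0.0, 10.0) == 10.0 ? 0 : 1;  // clamped to 10.0
}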
If you're writing an API for public consumption, OTOH, things get a little more complex, since after you release the code to the public you will no longer be in full control of which versions are getting used and where. In that case you will have to make a decision based on who your users are and what you think they would be most comfortable with.
Option C can probably be tossed out since it's overkill for this sort of thing, and carries the penalty of tying your code to a particular OS with no compensatory advantage.
It's option D (compile everywhere) all the way -- with the only exceptions being stand-alone libraries that are shared with many, many other people (or closed-source).
This makes it a lot easier to manage releases, because there really aren't any -- each copy of the library can be updated independently -- whenever is convenient.
This makes it easy for each project to debug into the library, with the particular version of the library that is in use.
This gives you the option of customizing the library for each project -- but use this capability judiciously to minimize merging complexity.
This choice is independent of whether or not you build the library into a separate binary package as part of your build process.
I would recommend using something like git-submodules to manage the code -- except that the git-submodules feature is kind of half-baked.
I would focus on libraries though it can be a general application installation as well.
When we install a library (say C++), a novice user like me probably expects that all the source code gets copied somewhere, with a few flags and path variables set, so that we can directly use #include-style statements in our own code and start using it.
But by inspection I can say that the exact source files are not actually copied; instead, pre-compiled object forms of the files are copied, except for the so-called *.h header files. (Simply because I cannot find the source files anywhere on the hard disk, only the header files.)
My Questions:
What is the behind-the-scenes method when we "install" something? What are the typical locations that get affected in a Linux environment, and what is the typical importance/use of each of those locations?
What is the difference between "installing" a library and installing a new application into the Linux system via "sudo apt-get" or the like?
Finally, if I have a custom set of source files that are useful as a library and want to send them to another system, how would I "install" my own library there in the same way as above?
Just to clarify: my primary interest is to learn, from your answers and pointers to literature, the bigger picture of a typical installation (of an application or a library), to a level where I can cross-check, learn and redo it if I want to.
(Question was removed, question addressed difference between header and object files) This is more a question of general programming. A header file is just the declaration of classes/functions/etc, it does nothing. All a header file does is say "hey, I exist, this is what I look like." That is to say it's just a declaration of signatures used later in the actual code. The object code is just the compiled and assembled, but not linked code. This diagram does a good job of explaining the steps of what we generally call the "compilation" process, but would better be called the "compilation, assembling, and linking process." Briefly, linking is pulling in all necessary object files, including those needed from the system, to create a running executable which you can use.
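A tiny illustration of that split (names are hypothetical): the header is only the declaration, the .cpp holds the code that becomes the object file, and linking stitches the object files into the executable:

// greet.h -- the declaration: "hey, I exist, this is what I look like"
#ifndef GREET_H
#define GREET_H
#include <string>
std::string greet(const std::string& name);
#endif

// greet.cpp -- the definition; compiling and assembling this yields greet.o
#include "greet.h"
std::string greet(const std::string& name) { return "Hello, " + name; }

// main.cpp -- needs only the header to compile; the linker pulls in greet.o
#include <iostream>
#include "greet.h"
int main() { std::cout << greet("world") << '\n'; }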
(Now question 1) When you think about it, what is installation except the creation and modification of necessary files with the appropriate content? That's what installing is: placing the new files in the appropriate place, and then modifying configuration files if necessary. As to what "locations" are typically affected, you usually see binaries placed in /bin, /usr/bin and /usr/local/bin; libraries are typically placed in /lib or /usr/lib. Of course this varies from package to package. I think you'd find this page on Linux system directories to be an educational read. Remember, though, that anything can be placed pretty much anywhere and still work appropriately as long as you tell other things where to find it; these directories are just used because they keep things organized and allow for assumptions about where items, such as binaries, will be located.
(Now question 2) The only difference is that apt-get generally makes it easier by installing the item you need and keeping track of installed items; it also allows for easy removal of installed items. In terms of the actual installation, if you do it correctly manually then it should be the same. A package manager such as apt-get just makes life easier.
(Now question 3) If you want to do that, you could create your own package, or, if it's less involved, just create a script that moves the files to the appropriate locations on the system. However you want to do it is fine, as long as you get the items where they need to be. If you want to create a package yourself, it'd be a great learning experience, and there are plenty of tutorials online. Just find out which package system your flavor of Linux uses, then look for a tutorial on how to create packages of that type.
So the really big picture, in my opinion, of the installation process is just compilation (if necessary), then the moving of necessary files to their appropriate places on the system, and the modification of existing files on the system if necessary: Put your crap there, let the system know it's there if you need to.
I have a question about how large C++ projects with many components are supposed to be managed (I guess that is the best term). For all intents and purposes I'm a beginning programmer. I understand the basics of compiling, header files, etc., but I've never really worked on anything bigger than homework assignments.
So, let's take something like a game engine that has various components like a memory manager, renderer, physics simulation, and so on. How would one work on these components separately, but in a way that makes it easy to integrate back into the whole? For example, would you make a separate Visual Studio project for each piece with its own main? If you have one big project for everything, how would you work on one component without another, unfinished component potentially making it fail every compile?
I feel like I'm missing some major concept. Like, for projects with multiple programmers that have to check out portions to work on... do they grab all the code so they can compile, or do they set up their own temporary project to work on their bit? Both options sound wrong. You have to have a main function to compile, right?
I would very much appreciate anyone educating me on this topic, as I feel this is something I should have learned and just somehow missed completely.
When you are working with larger programs it is customary to have one source file with a main program, and the rest (there can be many source files) are called from main. Then you need a build strategy. You can write a script that compiles each of your source files and then links them all together. Unfortunately this can lead to long build times, so professional programmers use makefiles, which rebuild only the files that have changed.
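A minimal sketch of that layout (file names are hypothetical): one source file owns main(), the other source files only provide functions that main() calls, and each .cpp can be recompiled on its own:

// report.h -- declarations shared between source files
#ifndef REPORT_H
#define REPORT_H
void print_report();
#endif

// report.cpp -- one of possibly many source files, compiled separately
#include <iostream>
#include "report.h"
void print_report() { std::cout << "report\n"; }

// main.cpp -- the single source file that contains main()
#include "report.h"
int main() {
    print_report();
}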
As a further refinement, you can organize groups of sources into libraries and build the libraries separately and then link them with your remaining compiled source files.
Try looking up gmake (for Linux) to see how to build larger projects. I guess you are using Microsoft VC++, in which case compiled files have .obj extensions and libraries .lib extensions. Microsoft has its own way of building libraries, which is slightly more complicated than using gmake.
When you look further you'll come across shared libraries (dynamic link libraries on Windows - DLLs).
This isn't really a great question for Stack Overflow's format. C++ does support language facilities for managing large code bases, like namespaces, classes, and header files. But your question seems to suggest a lack of perspective as to what they are for, or a limited understanding of the technical framework and process for contributing code to a software project, which isn't a C++-specific issue.
When working on a living project, a primary concern is dealing with complexity. Or, in other words, reducing the number of things you have to think about at any one point in time. What that means is if another programmer is working on the user interface, ideally your code in the physics engine shouldn't have to change to reflect those changes. So interfaces, for forming abstractions and hiding information, are essential.
Granted I'm pretty green as well, so I can't give any real solid advice. I only mention this point to give some perspective as to how vague your question is. If I understand your question correctly, you might enjoy a book like Code Complete 2 by McConnell.
Large projects are separated into pieces. Normally, you should have the ability to compile each piece separately. The best practice that I know of is to declare the interfaces among the various components, minimizing dependencies as close to zero as possible, and then to build 'test' programs, which are small and serve two purposes: they test a small piece of code, and they have a main().
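As a sketch (names hypothetical): the component declares a small interface in a header, and its test program is a tiny separate executable whose main() exercises just that interface:

// collision.h -- the component's declared interface, minimal dependencies
#ifndef COLLISION_H
#define COLLISION_H
bool spheres_collide(double x1, double y1, double r1,
                     double x2, double y2, double r2);
#endif

// test/main.cpp -- small test driver: built and linked against this
// component's sources only, nothing else from the project
#include <cassert>
#include "collision.h"
int main() {
    assert(spheres_collide(0, 0, 1.0, 1.5, 0, 1.0));   // overlapping spheres
    assert(!spheres_collide(0, 0, 1.0, 5.0, 0, 1.0));  // far apart
    return 0;
}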
The directory structure is usually:
yourlib/
    lib/
    ext-inc/
    test/
    other dirs/
    ...
The lib directory contains the output library objects (.a, .so).
The ext-inc directory contains the headers external code will use (sometimes called 'public' or just 'inc').
The test directory usually has a main.c (or .cpp) file and might have some more, as needed.
When you check out (svn) / clone (git) / sync (p4) / etc., you take everything, but work only on your area. Once done, you merge/submit your changes into the main branch.
We have a (very large) existing codebase for a custom ActiveX control, and I'd like to integrate libkml into it for the sake of interacting with KML mapping data, rather than reinventing the wheel. The problem is, I'm a relatively new Windows developer, and coming from the Linux world, I'm really not sure what the right way of integrating a third-party library is. Thankfully, libkml does provide MSVC projects for compiling it, so porting isn't a problem. I guess I have a couple of choices that I can think of:
Build and link the library directly. We already have a solution with project files in it for the "main" project; I could add the libkml projects to that solution, but I'd rather not. It's very unlikely that the libkml code will change in relation to our app's code.
Statically link to the .lib files produced by the libkml build. This is unattractive, since there are six .lib files that come out of the libkml solution and it seems inelegant to manually specify them in the linker options, etc.
Package the code as-is in a DLL. Maybe with COM? It seems like if I did this without any translation, I'd end up with a lot of overhead, and since I'm fairly unfamiliar with COM, I don't know how much work would be involved in exposing all the functionality I'd like to use via COM. The library is fairly big, has a lot of classes it uses, and if I had to manually write code to expose it all, I'd be hesitant to go this route.
Write wrapper code to abstract the functionality I need, package that in a COM DLL, and interact with that. This seems sensible, I suppose, but it's difficult to determine how much abstraction I need since I haven't written the code that would use libkml yet.
Let me reiterate: I haven't yet written the code that will interact with libkml yet, so this is mostly experimental. Options 1 and 2 are also complicated by the fact that libkml relies additionally on three more external libraries that are also in .lib files (that I had to recompile anyways to get the code generation flags to line up). The goal obviously is to get the code to work, but maintainability and source tree organization are also goals, so I'm leaning towards options 3 and 4, but I don't know the best way to approach those on Windows.
Typing six file names, or using the declarative style with #pragma comment(lib, "foo.lib"), is small potatoes compared to the work you'll have to do to turn this into a DLL or COM server.
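For reference, the declarative style mentioned above looks like this in MSVC; the .lib names below are illustrative stand-ins for the six libraries the libkml build produces:

// In one .cpp or a project-wide header (MSVC-specific): each pragma asks the
// linker to pull in that library, so the project's linker settings don't
// need to list them by hand.
#pragma comment(lib, "kmlbase.lib")
#pragma comment(lib, "kmldom.lib")
#pragma comment(lib, "kmlengine.lib")
// ...and so on for the remaining .lib files.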
The distribution is heavily biased towards using this as a static link library. There are only spotty declarations available to turn this into a DLL with __declspec(dllexport), and they exist only in the 3rd-party dependencies, all using different #defines of course, so you'll be typing a bunch of names into the preprocessor definitions for the projects.
Furthermore, you'll have a hard time actually getting this DLL loaded at runtime since you are using it in a COM server. The search path for DLLs will be the client app's when COM creates your control instance, not likely to be anywhere near the place you deployed the DLL.
Making it a COM server is a lot of work, you'll have to write all the interface glue yourself. Again, nothing already in the source code that helps with this at all.
You can also wrap all the functionality you need in a non-COM DLL. Visual Studio supports creating a static wrapper library which, when linked, will make your program use the DLL. This way you only have one dependency to specify instead of six.
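A rough sketch of what such a wrapper's public face might look like (all names and types here are hypothetical): the rest of the codebase includes only this header, and only the wrapper library itself links against the libkml .lib files:

// kml_wrapper.h -- the only header the rest of the codebase ever sees
#ifndef KML_WRAPPER_H
#define KML_WRAPPER_H
#include <string>
#include <vector>

struct Placemark {        // plain data; no libkml types leak through
    std::string name;
    double latitude;
    double longitude;
};

// Implemented inside the wrapper library, which is where libkml is linked.
std::vector<Placemark> load_placemarks(const std::string& kml_path);

#endif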
Other than that, what is wrong with specifying six dependencies? I would assume that there is a good reason these are six separate libraries instead of one, so it is prudent to specify exactly which parts you actually use.
Maybe I'm missing something here, but I really don't see what is wrong with (1). I think that even if you had multiple projects that were using libkml, just insert the project file for libkml into your solution file, specify the dependencies, and you should be done. It's dead simple. Even solution (2) is dead simple. If the libraries ever change, you rebuild - you're going to need to do that anyway.
I'm failing to see how (3) or (4) are necessary or even desired. To me, it sounds like a lot of work for goals (source tree organization and maintainability) that I'm not even sure that those options really meet. In fact, you said yourself that "It's very unlikely that the libkml code will change in relation to our app's code."
What I've found over the years is to just keep things simple. If rebuilding KML is potentially time consuming, grab the libs and just statically link to the libraries. Yes, there are other dependencies, but you'll set this up once and be done, hopefully never to worry about it again. Otherwise, stick it in the project and move on. I think that it's worthwhile to ask whether spending a lot of time on this issue is worth the trouble.