Let's say you have several bespoke C++ projects in separate repositories or top-level directories in the same repository. Maybe 10 are library projects for stuff like graphics, database, maths, etc., and 2 are actual applications using those libraries.
What's the best way to organise those 2 application projects to have the .libs they need?
Each lib project builds the .lib in its own directory; developers have to copy these across to the application area manually and make sure to get the right version
Application projects expect lib projects to be in particular paths and look for .libs inside those locations
A common /libs directory is used by all projects
Something else
This is focused on C++, but I think it's pretty similar with other languages, for instance organising JARs in a Java project.
I'd suggest this approach:
Organise your code in a root folder. Let's call it code.
Now put your projects and libraries as subfolders (e.g. Projects and Libraries).
Build your libraries as normal and add a post-build step that copies the resulting headers and .lib files into a set of shared folders. For example, Libraries\include and Libraries\lib. It's a good idea to use subfolders or a naming convention (myLib.lib, myLib_d.lib) to differentiate different builds (e.g. debug and release) so that any lib reference explicitly targets a single file that can never be mixed up. It sucks when you accidentally link against the wrong variant of a lib!
You can also copy third-party libraries that you use into these folders as well.
Note: To keep them organised, include your files with #include "Math\Utils.h" rather than just "Utils.h". And put the headers for the whole Math library into include\Math, rather than dropping them all in the root of the include folder. This way you can have many libraries without name clashes. It also lets you have different versions of libraries (e.g. Photoshop 7, Photoshop 8) which allows you to multi-target your code at different runtime environments.
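If the builds happen to be driven by CMake rather than hand-edited project files, the post-build copy and the debug/release naming convention could be sketched roughly as below. This is only a sketch: the Math library name, the Utils.h/Utils.cpp files, and the assumption that the library lives in Libraries\Math are illustrative, not part of the original advice.

# Minimal CMake sketch: publish a library and its public headers to the
# shared Libraries folders described above.
cmake_minimum_required(VERSION 3.10)
project(Math CXX)

add_library(Math STATIC Utils.cpp)

# Naming convention so debug and release can never be mixed up:
# Math.lib vs Math_d.lib.
set_target_properties(Math PROPERTIES DEBUG_POSTFIX "_d")

# Shared folders, assuming this library lives in Libraries\Math.
set(SHARED_DIR "${CMAKE_CURRENT_SOURCE_DIR}/..")

add_custom_command(TARGET Math POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E make_directory
            "${SHARED_DIR}/lib" "${SHARED_DIR}/include/Math"
    COMMAND ${CMAKE_COMMAND} -E copy_if_different
            $<TARGET_FILE:Math> "${SHARED_DIR}/lib"
    COMMAND ${CMAKE_COMMAND} -E copy_if_different
            "${CMAKE_CURRENT_SOURCE_DIR}/Utils.h" "${SHARED_DIR}/include/Math"
    COMMENT "Publishing Math headers and lib to the shared Libraries folders")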
Then set up your projects to reference the libraries in one of two ways:
1) Tell your IDE/compiler where the libs are using its global lib/include paths. This means you set up the IDE once on each PC and never have to specify where the libs are for any projects.
2) Or, set each project to reference the libs with its own lib/include paths. This gives you more flexibility and avoids the need to set up every PC, but means you have to set the same paths in every new project.
(Which is best depends on the number of projects versus the number of developer PCs)
And the most important part: when you reference the includes/libs, use relative paths. E.g. from Projects\WebApp\WebApp.proj, use "..\..\Libraries\include" rather than "C:\Code\Libraries\Include". This will allow other developers and your build server to keep the source code elsewhere (D:\MyWork instead of C:\Code) for convenience. If you don't do this, it'll bite you one day when you find a developer without enough disk space on C:\, or when you want to branch your source control.
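For completeness, the per-project variant (option 2 above) with relative paths might look like this if the application project were described in CMake. Again only a sketch: WebApp, Math and the Libraries layout are the illustrative names used above.

# Consumer-side sketch: all paths are relative to the project, never to C:\.
cmake_minimum_required(VERSION 3.13)
project(WebApp CXX)

add_executable(WebApp main.cpp)

# Per-project include/lib paths, expressed relative to this project so the
# source tree can live on any drive.
target_include_directories(WebApp PRIVATE
    "${CMAKE_CURRENT_SOURCE_DIR}/../../Libraries/include")
target_link_directories(WebApp PRIVATE
    "${CMAKE_CURRENT_SOURCE_DIR}/../../Libraries/lib")

# The _d naming convention means debug builds pick the debug variant.
target_link_libraries(WebApp PRIVATE $<IF:$<CONFIG:Debug>,Math_d,Math>)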
Related
I'm working at an organization with a product suite based on several hundred Visual Studio solutions (mostly C++). Some of these solutions generate libraries that are used by other solutions, and there's also a common "include" folder containing headers that are shared by multiple modules.
The issue is that the dependencies are not explicitly stated anywhere, and the build system resolves dependencies by specifying a linear build order that makes sure the dependent modules get built at the right time. This works well for the build system but leaves developers at a disadvantage when trying to work on components with many direct and indirect external dependencies. For example, I might want to edit one of the library projects or shared headers and then build all the affected modules without necessarily knowing ahead of time which ones are affected. Another use case involves building a module after doing a fresh pull from TFS and having the modules it depends on built first without having to build the entire system.
I am wondering if there are any tools available that can automate dependency generation for building large projects. I have considered creating a few really big solutions that encapsulate the other solutions, but that seems really awkward and clumsy. Also, I don't like the idea of having developers manually specify dependencies, as it can be error-prone, especially with such a large code base. I worked with scons a few years ago and really liked the way it could parse source files and automatically discover all the dependencies. Is there anything available today that can do the same thing with Visual Studio solutions?
This is not a duplicate of Visual Studio: how to handle project dependencies right?
I need to emphasize the magnitude of the problem I am trying to solve. This is a very large existing code base. In the main directory there are several hundred sub-folders, each one containing one or more VS solutions (not projects). Each solution, in turn, contains one or more projects. As I said before, I'm not trying to establish dependencies among a few projects in a solution. The problem is much bigger than that. I'm trying to find a way to establish dependencies among the solutions themselves (several hundred of them). For example, one solution may contain some projects that generate libraries for security, others for communications, etc. There may be, for example, dozens of solutions that use the communications libraries. So essentially I'm trying to create a directed acyclic graph with hundreds of nodes and potentially tens of thousands of edges.
You could use CMake (https://cmake.org/). With it, you can specify several libraries and apps to be built. Once configured, you can modify a project and the build will just update the dependent projects. CMake also provides a Visual Studio generator, so you can continue using that IDE.
A possible disadvantage for you is that, to configure the build, you must explicitly specify, for each project (library or executable), which projects it must be linked with and which folders it must include. There are ways to define some global includes and link libraries, but their use will depend on your problem.
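As a rough illustration of that setup, a top-level CMakeLists.txt could look like the sketch below; the library and application names follow the question's examples (security, communications) and are otherwise made up.

# Top-level CMakeLists.txt tying several libraries and an app together.
cmake_minimum_required(VERSION 3.10)
project(Suite CXX)

add_subdirectory(libs/security)        # contains add_library(security ...)
add_subdirectory(libs/communications)  # contains add_library(communications ...)
add_subdirectory(apps/server)          # contains add_executable(server ...)

# apps/server/CMakeLists.txt then only declares what it links against:
#   add_executable(server main.cpp)
#   target_link_libraries(server PRIVATE communications security)
#
# Editing a file in libs/communications and rebuilding server rebuilds the
# library first; the Visual Studio generator emits a .sln carrying the same
# dependency information.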
VS does track dependencies (by parsing source files). It doesn't make sense that something could automatically set the dependencies of your VS projects; in any other build tool you'd still have to specify in some way that, to link project A.exe, you need to use B.lib.
If you use newer VS versions, you should simply add library references to your exe/dll projects. If you manually added project dependencies, most likely you should remove them all; in particular, make sure you don't make static lib projects dependent on each other. VS allows you to do that (for example, if the build of one library generates some source files that another static lib uses), but in general these shouldn't have any dependencies, and this allows VS to optimize builds by building them in parallel.
For example, you could commonly have some kind of Base.lib, then System.lib and Graphics.lib. All of these are used by your App.exe. System.lib uses code from Base.lib, and Graphics.lib uses code from System.lib and Base.lib. So the dependency chain seems clear, and you go and set it up in VS, and that's a mistake! In cases like this you should make these libs independent in VS, and only App.exe should depend on all of them (i.e. it should have references to all of them). VS will figure out the correct build order for these projects.
Regarding the CMake case: it simply generates VS projects and solutions; if you use VS, then CMake cannot do more than VS itself can.
Visual Studio doesn't support native (C++) projects the way it supports .NET projects, in the sense that when you create, for example, a static (.lib) library, adding the library and the directory containing its headers to a consuming project has to be done manually.
For one project this isn't so much of a problem. But if you're like me, managing several projects, a lot of which are somewhat dependent on each other, it becomes a huge hassle to manage all of it.
I was wondering if there is any official 'Microsoft-approved' approach to this, and if not, what the best way is to deal with this situation, supposing the following conditions occur:
several static library (.lib) projects, which are included in several solutions
several dynamic library (.dll) projects, which are included in several solutions
multiple applications using the same libraries (both dynamic and static) in one solution
My personal solution to the problem is as follows.
Every project generating a binary builds to:
$(SolutionDir)build\$(Configuration)\
Every project generating a static library builds to:
$(SolutionDir)build\$(Configuration)\Libraries\
The intermediate directory for all projects is:
$(SolutionDir)build\$(ProjectName)\$(Configuration)\
Each project also runs the following pre-build command:
Copy /Y "$(ProjectDir)*.h" "$(SolutionDir)build\$(Configuration)\Libraries\"
Copy /Y "$(ProjectDir)*.hpp" "$(SolutionDir)build\$(Configuration)\Libraries\"
Advantages of this system include:
All the project directories stay free of build output (useful when using source control), and all the binaries are in one place.
Setting additional include directories is never required when using outputs from other projects in the same solution. A dynamic library doesn't have to be added at all. And all that is required to include a static library is adding it to the Additional Dependencies field under:
Configuration Properties->Linker->Input
Drawbacks of this system include:
Since all the header files are copied, there is a risk of accidentally editing the copies, which results in lost work when the copying occurs again.
Since the settings are per project, they have to be set for every project
The libraries are built separately for every solution
I would like some help in setting up a project in SVN with regard to directory structure. I have read several answers regarding this on SO, but as I am new to this, most of them are difficult to understand.
I am building a single library, on which several other distinct projects depend:
I need the ability to export MyLibrary (headers and .lib only) easily for use by third parties
MyLibrary1
Depends on external libraries, should be able to manage different versions of these libraries!
MyLibrary2
Depends on External Libraries fmod, glew, ...
Project 1, 2, 4, 5, 6 ...
Depends on MyLibrary1, 2, or both
Each project could need versions for multiple platforms (osx, windows ...)
I would like to know of a good way to organize this; do keep in mind that I am rather new to this - a more pedantic answer would be helpful. For example, if you write something like /src, do explain what is supposed to go into it! I would be able to guess, but I won't be sure =)
////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Edit
I can't put this into a comment, so here goes:
@J.N., thanks for the extensive reply. I would like to clarify some stuff; I hope I understood what you meant properly:
root
    library foo
        /branches                // old versions of foo
        /tags                    // releases of foo
        /trunk                   // current version
            /build               // stuff required by makefiles
            /tools               // scripts to launch tests etc.
            /data                // test data needed when running
            /output              // binaries, .exe files
            /dependencies        // libraries that foo needs
                /lib name
                    include
                    lib
            /docs                // documentation
            /releases            // generated archives
            /sample              // sample project that shows how to use foo
            /source              // *.h, *.cpp
    program bar
        /branches                // old versions of bar
        /tags                    // releases of bar
        /trunk                   // current version
            /build               // stuff required by makefiles
            /tools               // scripts to launch tests etc.
            /data                // test data needed when running
            /output              // binaries, .exe files
            /dependencies        // libraries that bar needs
                /lib name
                    include
                    lib
            /docs                // documentation
            /releases            // generated archives
            /sample              // sample project that shows how to use bar
            /source              // *.h, *.cpp
1) Where do the *.sln files go? In /build?
2) Do I need to copy foo/source into bar/dependencies/foo/include? After all, bar depends on foo.
3) Where do *.dll files go? If foo has dependencies on DLL files, then all programs using foo need access to the same DLLs. Should these go into root/dlls?
There are several levels to your questions: how to organize a single project source tree, how to maintain the different projects together, how to maintain the dependencies of those project, how to maintain different variants of each projects and how to package them.
Please keep in mind that whatever you do, your project will eventually grow large enough to make whatever structure you chose ill-suited. It's normal to change the structure several times in the lifetime of a project. You'll get the feeling that it isn't right anymore when that happens: it's usually when the setup is bothering you more than it helps.
1 - Maintaining the different variants of each project
Don't have variants for each project; you won't solve several variants by maintaining parallel versions or branches. Have a single source tree for every project/library that can be used for all variants. Don't manage different "OSes", manage different features. That is, have variants on things like "support POSIX sockets" or "support UI". That means that if a new OS comes along, you just need to choose the set of features it supports rather than starting a new version.
When variant-specific code is needed, create an interface (an abstract class in C++) and implement each behaviour against it. That will isolate the problematic code and will help with adding new variants in the future. Use a macro to choose the proper implementation at compile time.
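At the build-system level, "variants as features" can be expressed with simple feature switches; here is a minimal CMake sketch (the option, target and file names are invented for illustration), where the compile definition plays the role of the macro mentioned above.

# Feature toggle instead of per-OS source trees.
option(SUPPORT_POSIX_SOCKETS "Use the POSIX socket backend" ON)

add_library(net socket_api.cpp)  # code written against the abstract interface

if(SUPPORT_POSIX_SOCKETS)
    target_sources(net PRIVATE socket_posix.cpp)
    target_compile_definitions(net PRIVATE NET_USE_POSIX_SOCKETS)
else()
    target_sources(net PRIVATE socket_winsock.cpp)
    target_compile_definitions(net PRIVATE NET_USE_WINSOCK)
endif()

# A new OS only needs a new combination of feature options, not a new branch.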
2 - Maintaining the dependencies of each project
Have a specific "dependencies" folder in which each subfolder contains everything needed for one dependency (that is, includes and sub-dependencies). At the beginning, when the codebase is not too large, you don't need to care too much about automatically ensuring that all the dependencies are compatible with each other; save that for later.
Don't try to merge the dependencies from their root location higher in the SVN hierarchy. Formally deliver each new version to the teams needing it; it's up to them to update their own part of the SVN with it.
Don't try to use several versions of the same dependency at once. That will end badly. If you really need to (but try avoiding it as much as you can), branch your project for each version.
3 - Maintaining the different projects
I'd advise maintaining each project's repository independently (with SVN they could still be the same repo, but in separate folders). Branches and tags should be specific to one project, not all of them. Try to limit the number of branches as much as possible; they don't solve problems (even with git). Use branches when you have to maintain different chronological versions in parallel (not variants), and fight back as much as you can before you actually do it; everybody will benefit from the use of the newer code.
That will allow you to impose security restrictions (not sure if that's feasible with vanilla SVN, but there are some freely available servers that support it).
I'd recommend sending email notifications to everybody potentially interested whenever someone commits on a project.
4 - Project source tree organization
Each project should have the following SVN structures:
trunk (current version)
branches (older versions, still in use)
tags (releases, used to create branches without thinking too much when patches are required)
When the project gets bigger, organize branches and tags in sub folders (for instance branches/V1.0/V1.1 and branches/V2.0/V2.1).
Have a root folder with the following subfolders: (some of this may be created by VC itself)
Build system (stuff required by your makefiles or others)
Tools (if any, like an XSLT tool or SOAP compiler, scripts to launch the tests)
Data (test data you need while running)
Output (where the build system put the binaries)
Temp Output (temporary files created by the compilation, optional)
Dependencies
Docs (if any ;) or generated docs)
Releases (the generated archives see later)
Sample (a small project that demonstrate how to use the project / library)
Source (I don't like to split headers and .cpp, but that's my way)
Avoid too many levels of subfolders; trees are hard to search, lists are easier
Properly define the build order of each folder (less necessary for VC, but still)
I make my namespaces match my folder names (an old Java habit, but it works)
Clearly define the "public" part that you need to export
If the project is large enough to hold several binaries / dlls each should have its own folder
Don't commit any binaries you generate, only the releases. Binaries like to conflict with each other and cause pain to the other people in the team.
5 - Packaging the projects
First, make sure to include a text file with the SVN revision and the date; there's an automated way to do that with auto props.
You should have a script to generate releases (if time allows). It will check that everything is committed, generate a new version number .... Create a zip/tar.gz archive you must commit/archive, whose name contains the SVN revision, branch, and the current date (the format should be normalized across projects). The archive should have everything that is needed to run the app / use the library in a file structure. Create a tag so that you can start from it for emergency bug fixing.
Imagine an overall project with several components:
basic
io
web
app-a
app-b
app-c
Now, let's say web depends on io which depends on basic, and all those things are in one repo and have a CMakeLists.txt to build them as shared libraries.
How should I set things up so that I can build the three apps, if each of them is optional and may not be present at build time?
One idea is to have an empty "apps" directory in the main repo and we can clone whichever app repos we want into that. Our main CMakeLists.txt file can use GLOB to find all the app directories and build them (not knowing in advance how many there will be). Issues with this approach include:
Apparently CMake doesn't re-glob when you just say make, so if you add a new app you must run cmake again.
It imposes a specific structure on the person doing the build.
It's not obvious how one could make two clones of a single app and build them both separately against the same library build.
The general concept is like a traditional recursive CMake project, but where the lower-level modules don't necessarily know in advance which higher-level ones will be using them. Yet, I don't want to require the user to install the lower-level libraries in a fixed location (e.g. /usr/local/lib). I do however want a single invocation of make to notice changed dependencies across the entire project, so that if I'm building an app but have changed one of the low-level libraries, everything will recompile appropriately.
My first thought was to use the CMake import/export target feature.
Have a CMakeLists.txt for basic, io and web and one CMakeLists.txt that references those. You could then use the CMake export feature to export those targets and the application projects could then import the CMake targets.
When you build the library project first the application projects should be able to find the compiled libraries automatically (without the libraries having to be installed to /usr/local/lib) otherwise one can always set up the proper CMake variable to indicate the correct directory.
When doing it this way, a make in the application project won't do a make in the library project; you would have to take care of this yourself.
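A minimal sketch of the export/import idea, assuming illustrative target names (io, app-a) and a build-tree export rather than an install:

# Library side: build the shared library and export its target.
add_library(io SHARED io.cpp)
target_include_directories(io PUBLIC "${CMAKE_CURRENT_SOURCE_DIR}/include")

# Writes io-targets.cmake into the build tree, describing the built library,
# so nothing has to be installed to /usr/local/lib.
export(TARGETS io FILE "${CMAKE_BINARY_DIR}/io-targets.cmake")

# Application side:
#   include(/path/to/library-build/io-targets.cmake)
#   add_executable(app-a main.cpp)
#   target_link_libraries(app-a PRIVATE io)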
Have multiple CMakeLists.txt.
Many open-source projects take this approach (LibOpenJPEG, LibPNG, poppler, etc.). Take a look at their CMakeLists.txt to find out how they've done this.
This basically allows you to just toggle features as required.
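In practice that usually means a top-level CMakeLists.txt with an option guarding each optional component, roughly like the sketch below (the option names are invented; the components are the ones from the question):

# Feature toggles for the optional applications.
option(BUILD_APP_A "Build app-a" ON)
option(BUILD_APP_B "Build app-b" OFF)

add_subdirectory(basic)
add_subdirectory(io)
add_subdirectory(web)

if(BUILD_APP_A)
    add_subdirectory(app-a)
endif()
if(BUILD_APP_B)
    add_subdirectory(app-b)
endif()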
I see two additional approaches. One is to simply have basic, io, and web be submodules of each app. Yes, there is duplication of code and wasted disk space, but it is very simple to implement and guarantees that different compiler settings for each app will not interfere with each other across the shared libraries. I suppose this makes the libraries not be shared anymore, but maybe that doesn't need to be a big deal in 2011. RAM and disk have gotten cheaper, but engineering time has not, and sharing of source is arguably more portable than sharing of binaries.
Another approach is to have the layout specified in the question, and have CMakeLists.txt files in each subdirectory. The CMakeLists.txt files in basic, io, and web generate standalone shared libraries. The CMakeLists.txt files in each app directory pull in each shared library with the add_subdirectory() command. You could then pull down all the library directories and whichever app(s) you wanted and initiate the build from within each app directory.
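A sketch of that second approach, where the build is started from inside an app directory and the library directories sit next to it (the paths and names are illustrative):

# app-a/CMakeLists.txt
cmake_minimum_required(VERSION 3.10)
project(app-a CXX)

# Pull the sibling library directories into this build; the second argument
# is required because those sources live outside app-a's own tree.
add_subdirectory(../basic basic)
add_subdirectory(../io io)
add_subdirectory(../web web)

add_executable(app-a main.cpp)
target_link_libraries(app-a PRIVATE web io basic)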
You can use ADD_SUBDIRECTORY for this!
https://cmake.org/cmake/help/v3.11/command/add_subdirectory.html
I ended up doing what I outlined in my question, which is to check in an empty directory (containing a .gitignore file which ignores everything) and tell CMake to GLOB any directories (which are put in there by the user). Then I can just say cmake myrootdir and it finds all the various components. This works more or less OK. It does have some drawbacks though, such as that some third-party tools like BuildBot expect a more traditional project structure, which makes integrating other tools with this sort of arrangement a little more work.
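For reference, the globbing itself can be done with something like the following in the main CMakeLists.txt (directory names are illustrative); as noted, cmake has to be re-run whenever an app is added or removed.

# Build every app directory that the user has cloned into apps/.
file(GLOB app_dirs RELATIVE "${CMAKE_SOURCE_DIR}/apps" "${CMAKE_SOURCE_DIR}/apps/*")
foreach(app IN LISTS app_dirs)
    if(IS_DIRECTORY "${CMAKE_SOURCE_DIR}/apps/${app}")
        add_subdirectory("apps/${app}")
    endif()
endforeach()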
The CMake BASIS tool provides utilities where you can create independent modules of a project and selectively enable and disable them using the ccmake command.
Full disclosure: I'm a developer for the project.
I am having a bit of trouble organising my source files.
I have my own small, but growing collection of code that I would like to use in various projects. The file and folder layout is something like this:
library\sub1\source.h
library\sub1\source.cpp
library\sub2\source.h
library\sub2\source.cpp
One of my problems is that I want to include this code, as needed, in my other projects. To date I have used absolute paths to point to the library code, but there must be a better way.
Furthermore, I need to add every library file I use to a project's files in Visual Studio in order for it to compile correctly.
So my question in short is how do I fix this? What is the proper/best way to handle the above situation.
You shouldn't, in general, add source files from libraries directly to other projects. Compile them separately as a library and link against that.
For organising the library's directory structure itself, I've settled on something like the following structure:
library1/widget.h
library1/private/onlyinlib.h
library1/private/widget.cpp
(and if applicable)
library1/private/resources/widget.jpg
library1/private/project/widget.xcode
I put all headers directly in the library path, and have a subfolder private which contains everything that's only used by the library but should never be shared/exposed (a small CMake sketch of this layout follows the list of advantages below).
The greatest advantage is that every project I start only needs an include path pointing at the directory containing my libraries; then every (public) include is done like:
#include "library1/widget.h"
Private includes are simply:
#include "onlyinlib.h"
This has a number of advantages:
If new libraries are introduced, there's no messing with project/compiler settings to get the headers 'visible'.
Moving to other compilers/platforms is also very little hassle.
The headers are automatically 'namespaced': because part of the path is included too, it's next to impossible to get a name clash between includes.
It's immediately obvious where a header comes from, and whether or not it is part of the public interface.
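If the libraries are built with CMake (not a requirement of the layout above), the public/private split maps naturally onto include paths; a sketch, reusing the library1/widget/onlyinlib example names:

# library1/CMakeLists.txt
add_library(library1 STATIC private/widget.cpp)

# One public include path at the folder that contains library1, so users
# write #include "library1/widget.h"; the private folder stays internal.
target_include_directories(library1
    PUBLIC  "${CMAKE_CURRENT_SOURCE_DIR}/.."
    PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}/private")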
I don't think that there's a proper way to do this - it's going to depend on exactly what you are trying to achieve.
Here's some things you might not be aware of:
You can use relative paths in your projects.
You can use environment variables in paths.
You can add directories to Visual Studio's search rules.
This gives you a little more control over where you put the include files and if you add your folders to Visual Studio's search rules you don't have to include any paths at all.
If you must include third-party code instead of just linking with a pre-compiled version (e.g., perhaps you need to make modifications or tweaks to it), consider branching it in whatever you use for source-control:
/trunk/... --- your code goes here
/thirdparty --- pristine copies of third-party libraries go here
/thirdparty/lib1
/thirdparty/lib2
etc.
/trunk/lib1 --- branched from: /thirdparty/lib1, perhaps with local changes
this is the version that you build/link with.
Assuming you use a decent source-control system, this scheme will allow you to easily upgrade to newer versions of third-party libraries and then merge those changes with the changes you've made locally.
For example, suppose "lib1" releases a new version:
Commit the change to /thirdparty/lib1.
Merge from /thirdparty/lib1 to /trunk/lib1
Fix any merge conflicts.
This is, IMO, the only sane way to handle upgrading third-party libraries to which you've made local modifications.
First: Add all used directories to your project include paths. Add them as relative paths if possible.
Second: You must add all used libraries/source files to your project. This can be done either in the project explorer or in the Project->Linker tab. In the latter case, you'll have to add the used directories to the project's library paths as well.
Usually it's not a good idea to use paths in #include directives.