I'm using boot to build a Clojure/ClojureScript project. Naturally it depends on a lot of third-party libraries, each with its own license, and most of these licenses require mentioning the dependency's copyright in the resulting deliverable.
I could not find a boot plugin or any other hints on how to generate this information automatically at build time. Has anyone ever solved this? Maintaining this list manually is tedious and error-prone.
At present there are no built-in or community tasks which provide this functionality.
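That said, most artifacts embed their pom.xml inside the jar, so a small script can assemble the list for you. As a rough illustration (not a boot task; the scan of ~/.m2/repository and the assumption that every jar embeds a namespaced pom are mine), something like this Python sketch would print a license line per artifact:

    #!/usr/bin/env python3
    """Hypothetical sketch: collect license info from the pom.xml files
    embedded in the jars of a local Maven repository. Assumes each jar
    ships its pom under META-INF/maven/<group>/<artifact>/pom.xml and
    that the pom uses the standard POM XML namespace."""
    import pathlib
    import xml.etree.ElementTree as ET
    import zipfile

    M2 = pathlib.Path.home() / ".m2" / "repository"
    NS = "{http://maven.apache.org/POM/4.0.0}"

    def licenses_of(jar_path):
        """Yield (license-name, url) pairs from the jar's embedded pom."""
        with zipfile.ZipFile(jar_path) as jar:
            for entry in jar.namelist():
                if entry.startswith("META-INF/maven/") and entry.endswith("/pom.xml"):
                    root = ET.fromstring(jar.read(entry))
                    for lic in root.iter(f"{NS}license"):
                        yield (lic.findtext(f"{NS}name", "unknown"),
                               lic.findtext(f"{NS}url", ""))

    for jar in sorted(M2.rglob("*.jar")):
        for name, url in licenses_of(jar):
            print(f"{jar.stem}: {name} ({url})")

A build step could run something like this over the jars boot actually resolved and fail when an artifact turns up without license metadata, which is where manual research is unavoidable anyway.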
Related
Suppose you are working on a project that supports several configurations (Linux and Windows builds, shared/static linking, with some feature or without, etc.). To build all these configurations you need different versions of 3rd-party components (built with gcc or msvc, shared/static, with some specific preprocessor definitions, etc.). So eventually you end up with the problem of managing all these configurations not only for your project, but for all the libraries your project is using.
Is there a general solution/approach/software to facilitate managing several different configurations of a single project?
Criteria:
Ease of setup, i.e. how much time does one need to spend to build the project from scratch?
Ease of management, i.e. how hard is it to add a new dependency or remove an existing one?
Error-proofness, i.e. how often will developers break the build by changing dependencies?
So far I've tried several approaches.
Store prebuilt packages for every configuration under VCS.
Pros: Ease of setup while project is small (update working copy and you are good to go). Ease of management (build library once for every required configuration). Error proof (VCS client notifies you about changes in your working copy).
Cons: Doesn't work well with distributed VCSs (Git, Mercurial, etc.). The repository grows rapidly, and eventually a simple "clone" operation becomes intolerably slow. You also end up downloading a lot of stuff you don't really need (e.g. Windows libraries if you are working on Linux). And if you are implementing a library, then users of your library will inherit all these problems by integrating it into their project.
Store library sources instead of prebuilt packages.
Pros: Ease of setup.
Cons: It is extremely painful to add a new library. You need to provide build scripts and source patches for every configuration. And that is only the tip of the iceberg: your dependencies have their own dependencies, which have their own, and so on and so forth... You have a good chance of ending up with something like a Gentoo distribution :)
Store an archive or just a folder with prebuilt packages somewhere on the external server.
Pros: Solves the problem... kind of.
Cons: Not so easy to set up (you have to copy the archive manually). Not so easy to manage (you have to add each library to the server by hand). No history of changes. Not error-proof, because it is easy to forget to put something on the server, or to remove something useful.
A slightly improved approach: you can use a centralized VCS (for example, SVN) to store all 3rd-party libraries, which is easier to work with. But you still either have no centralized history of changes if you use it as simple file storage, or you get a huge repository with lots of unnecessary libraries if you use it as a sub-repository.
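For what it's worth, the third approach can be made considerably more error-proof with a small, version-controlled manifest that pins every package per configuration to a checksum: dependency changes then show up in history, and incomplete uploads are caught at fetch time. A rough sketch (the manifest format, URLs, and layout are all invented for illustration):

    #!/usr/bin/env python3
    """Illustrative sketch: fetch prebuilt packages listed in a manifest
    that lives in the VCS next to the sources. Hypothetical line format:
    <name> <configuration> <sha256> <url>"""
    import hashlib
    import pathlib
    import urllib.request

    DEST = pathlib.Path("third_party")

    def fetch(manifest_path, configuration):
        DEST.mkdir(exist_ok=True)
        for line in pathlib.Path(manifest_path).read_text().splitlines():
            if not line.strip() or line.startswith("#"):
                continue
            name, cfg, sha256, url = line.split()
            if cfg != configuration:
                continue  # skip packages built for other configurations
            target = DEST / f"{name}-{cfg}.zip"
            if target.exists() and hashlib.sha256(target.read_bytes()).hexdigest() == sha256:
                continue  # already downloaded and verified
            data = urllib.request.urlopen(url).read()
            if hashlib.sha256(data).hexdigest() != sha256:
                raise RuntimeError(f"checksum mismatch for {name} ({cfg})")
            target.write_bytes(data)

    fetch("deps.manifest", "linux-gcc-shared")

This still doesn't give you a history of the packages themselves, but at least the mapping from project revision to dependency versions is recorded and verified.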
When you are faced with this type of problem, you have to learn and start using configuration management tools (on top of the usual techniques of your SCM of choice). CM is a process, and using a configuration management tool is part of that process.
There is currently a great choice of different CM tools, from which you can pick the best fit or simply your favourite. From my point of view, Chef is the best choice for almost everybody, but your mileage may vary.
There are already some questions about dependency managers here, but it seems to me that they are mostly about build systems, while I am looking for something targeted purely at making dependency tracking and resolution simpler (and I'm not necessarily interested in learning a new build system).
So, typically we have a project that shares some common code with another project. This common code is organized as a library, so when I want to get the latest code for a project, I also need to fetch all the libraries from source control. To do this, I need a list of dependencies. Then, to build the project, I can reuse this list too.
I've looked at Maven and Ivy, but I'm not sure they would be appropriate for C++, as they look quite heavily Java-targeted (and even though there might be plugins for C++, I haven't found people recommending them).
I see it as a GUI tool producing some standardized dependency list which can then be parsed by different scripts etc. (see the sketch below). It would be nice if it could integrate with source control (tag, get a tagged version with dependencies, etc.), but that's optional.
Would you have any suggestions? Maybe I'm just missing something, and usually it's done some other way with no need for such a tool? Thanks.
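To make the idea concrete, here is roughly what I imagine such a standardized list enabling; the one-line-per-dependency format is invented just for illustration:

    #!/usr/bin/env python3
    """Sketch: check out pinned library revisions from a plain-text
    dependency list. Hypothetical line format:
    <name> <repository-url> <revision>"""
    import pathlib
    import subprocess

    def checkout(list_path, workdir="libs"):
        pathlib.Path(workdir).mkdir(exist_ok=True)
        for line in pathlib.Path(list_path).read_text().splitlines():
            if not line.strip() or line.startswith("#"):
                continue
            name, url, revision = line.split()
            dest = pathlib.Path(workdir) / name
            if not dest.exists():
                subprocess.run(["git", "clone", url, str(dest)], check=True)
            # pin the library to the revision recorded in the list
            subprocess.run(["git", "-C", str(dest), "checkout", revision], check=True)

    checkout("dependencies.txt")

The same file could then be parsed by the build scripts, and tagging a release would amount to committing an updated list.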
You can use Maven with C++ in two ways. First, you can use it for dependency management of components between each other. Second, you can use the maven-nar-plugin for creating shared libraries and unit tests in combination with the Boost library (based on my experience). In the end you can create RPMs out of it (rpm-maven-plugin) to have an adequate installation medium. Furthermore, I have created the installation for a CI environment via Maven (RPMs for Hudson, a Nexus installation as RPMs).
I'm not sure whether you would count a version control system (VCS) as a build tool, but Mercurial and Git support sub-repositories. In your case a sub-repository would hold your dependencies:
Join multiple subrepos into one and preserve history in Mercurial
Multiple git repo in one project
Use your VCS to archive the build results -- needed anyway for maintenance -- and refer to the libs and header files in your build environment.
If you are looking for a reference take a look at https://android.googlesource.com/platform/manifest.
I haven't done much "front-end" development in about 15 years since moving to database development. I'm planning to start work on a personal project using C++ and since I already have MSDN I'll probably end up doing it in Visual Studio 2010. I'm thinking about using Subversion as a version control system eventually. Of course, I'd like to get up and running as quickly as I can, but I'd also like to avoid any pitfalls from a poorly organized project environment.
So, my question is: are there any good resources on common best practices for setting up a development environment? I'm thinking along the lines of when to break a solution down into multiple projects if necessary, how to set up a unit testing process, organizing resources, directories, etc.
Are there any great add-ons that I should make sure I have set up from the start?
Most tutorials just have one simple project: type in your code and click Build to see that your new application says, "Hello World!".
This will be a Windows application with several DLLs as well (no web development), so there doesn't need to be a deploy to a web server kind of process.
Mostly I just want to make sure that I don't miss anything big and then have to extensively refactor because of it.
Thanks!
I would also like a good answer to this question. What I've done is set things up so that each solution references a $(SolutionDir)\build directory for includes and libraries. That way each project that depends on other projects can access them, and versions won't conflict. Then there are post-build commands to package up headers and .lib files into a "distribution" folder. I use CC.NET to build each package on check-in. When we decide to update a dependency project, we "release" it to ourselves, which requires manually tagging, manually copying current.zip into a releases area and giving it a version number, and copying that into the /build of the projects that depend on the upgrade.
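For reference, the packaging step is roughly the following; the directory names are just what works in my setup, and note that it flattens header subdirectories, which is fine for our simple layout:

    #!/usr/bin/env python3
    """Rough sketch of the post-build packaging step: collect public
    headers and .lib files into a versioned "distribution" folder and
    zip it so CC.NET can publish a single artifact per build."""
    import pathlib
    import shutil
    import zipfile

    def package(project_dir, version):
        project = pathlib.Path(project_dir)
        dist = project / "distribution" / version
        (dist / "include").mkdir(parents=True, exist_ok=True)
        (dist / "lib").mkdir(parents=True, exist_ok=True)
        for header in (project / "include").rglob("*.h"):
            shutil.copy2(header, dist / "include" / header.name)  # flattens subdirs
        for lib in (project / "Release").glob("*.lib"):
            shutil.copy2(lib, dist / "lib" / lib.name)
        with zipfile.ZipFile(project / "distribution" / "current.zip", "w") as zf:
            for path in dist.rglob("*"):
                zf.write(path, path.relative_to(dist))

    package(".", "1.2.0")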
Everything works pretty well except this manual process at the end. I'd really love to get rid of it but can't seem to. I read an ACM article about "Continuous Release" that would be really nice to have an implementation of, but there isn't one. I keep telling myself I'll write one.
If I use "junctions" in the Windows filesystem, I can link "distribution" to "build" and then build a secondary solution that includes all the interdependent projects to build a product. When I did that, though, it encouraged developers to use it for active development, which discouraged TDD and proper releasing.
We work under Linux/Eclipse/C++, using Eclipse's "native" C++ projects (.cproject). The system comprises several C++ projects, all kept under SVN version control using the integrated Subclipse plugin.
We want a script that would check out, compile, and package the system, without us needing to drive this process manually from Eclipse, as we do now.
I see that there are generated makefiles and support files (sources.mk, subdir.mk, etc.) scattered around, which are not under version control (probably the Subclipse plugin is "clever" enough to exclude them). I guess I could put them under SVN and use them in the script we need.
However, this feels shaky. Has anybody tried it? Are there any issues to expect? Are there recommended ways to achieve what we need?
N.B. I don't believe that the idea of adopting another build system will be received nicely, unless it's SUPER smooth. We are a small company of 4 developers running at full steam, and any additional overhead or learning curve will not be appreciated :)
Thanks a lot in advance!
I would not recommend putting things that are generated by an external tool into version control. My favorite phrase for this tactic is "version the recipe, not the cake". Instead, you should use an external tool, like the script you mention, to drive Eclipse to generate these files from your sources and then compile them. This avoids the risk of one of these automatically generated files being out of sync with your root sources.
I'm not sure what your threshold for "super-smooth" is, but you might want to take a look at Maven2, which has a plugin for Eclipse projects to do just this.
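If Maven feels too heavy, CDT can also be driven headlessly from a plain script, which fits the "version the recipe" advice: the generated makefiles never need to go into SVN because the headless build regenerates them. A rough sketch, assuming CDT's headless build application is available in your Eclipse install (the repository URL and paths are placeholders):

    #!/usr/bin/env python3
    """Sketch of a checkout-and-build driver around CDT's headless
    build application, which imports the projects and regenerates
    the makefiles itself."""
    import subprocess
    import tempfile

    REPO = "svn://example.com/trunk"  # placeholder URL
    ECLIPSE = "/opt/eclipse/eclipse"  # placeholder install path

    def build_all():
        src = tempfile.mkdtemp(prefix="checkout-")
        workspace = tempfile.mkdtemp(prefix="workspace-")
        subprocess.run(["svn", "checkout", REPO, src], check=True)
        subprocess.run([
            ECLIPSE, "-nosplash",
            "-application", "org.eclipse.cdt.managedbuilder.core.headlessbuild",
            "-data", workspace,
            "-importAll", src,  # import every project found under src
            "-build", "all",
        ], check=True)

    build_all()

Packaging the build results would then be an ordinary scripting problem on top of this.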
I know that this is a big problem (I had exactly the same one; moreover, maintaining a build workspace in SVN is a real pain!).
Problems I see:
You will get into trouble as soon as somebody adds or changes project settings files but doesn't trigger a new build for all possible platforms, because the makefiles aren't updated.
There is no overall makefile, so you cannot easily use the build order of your projects that Eclipse has calculated.
BTW: I wrote an Eclipse plugin that builds up a workspace from a given (textual) list of projects and then triggers the build. That's possible, but it's not an easy task either.
Unfortunately, I can't post the plugin anywhere because I wrote it for my former employer...
I'm finding that with dynamic linking, even with SxS, Windows Update will come along and stomp on a version of the VC8 CRT (for example, because it has a security flaw), and then my app will fail to run with older versions.
What are some of the important reasons to stay with the dynamic linking with VC CRT, other than increasing the size of your binaries?
Staying up to date on security fixes is a good reason. Otherwise, you're responsible for rebuilding your application with a fixed CRT and deploying it to your customers.
Using a shared CRT should result in lower memory footprint for the system, since most of the DLL's pages can be shared between processes.
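As an aside, if you want to check what a given binary actually does, you can dump its PE import table and look for CRT DLLs; a quick sketch using the third-party pefile module (pip install pefile; the exe path is a placeholder):

    #!/usr/bin/env python3
    """Quick sketch: list the CRT DLLs a Windows binary imports, to
    verify whether it links the CRT dynamically (no CRT imports listed
    suggests static linking)."""
    import pefile

    pe = pefile.PE(r"C:\path\to\your\app.exe")  # placeholder path
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dll = entry.dll.decode("ascii", "replace").lower()
        if dll.startswith(("msvcr", "msvcp", "vcruntime", "ucrtbase", "api-ms-win-crt")):
            print(dll)  # dynamically linked CRT component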
I prefer static linking. Security is not really a big issue, since hackers target applications that many users have installed on their systems. So unless your application has over a million users, I wouldn't worry about it being exploited by hackers.
I don't like dynamic linking. It just feels too fragile to me.
EDIT: And if you want to make sure that your users have an up-to-date version of your application then also write an updater application that is automatically installed along with your main app. On Windows this could be implemented as a Service.
See http://people.redhat.com/drepper/no_static_linking.html
It's about linux, but some of the ideas apply.
If done right, there should be absolutely no problem with dynamic linking, and the application should not fail to run. The only hard part is switching your installer from whatever method you use now to the way supported by Microsoft (redistributable merge modules -- MSM, MSI, dynamic linking). See this link for extremely valuable advice right from the source. Some interesting quotes from the blog:
In order to redistribute the Visual C++ libraries, all you need to do is include the appropriate .MSM file and its accompanying policy .MSM to distribute the library you need.
Again, just to emphasize – do not use VCRedist*.exe unless you are using Click Once to deploy your application.
However, I can think of no scenarios in which this (my note: static linking) is actually the right thing to do when shipping your product to customers.
I do agree that you might need to do non-trivial work to implement this (maybe you're not using MSI right now, etc.), but I think that if resources allow, you should try to switch to the recommended methods described above.
If you don't do it the way described above, your application will indeed stop working at some point. Developers then blame Microsoft, when really they were not following the supported way described above. Maybe Microsoft is to blame for not linking to the blog above more often on MSDN to spread the word, but that's about it.
You're lucky out there on Windows. Linux literally consists of libraries, and you have such issues with all of them. :-)
As far as I understand, library vendors always retain backward compatibility, especially Microsoft. So a possible solution is to build your application on an old machine, counting on the fact that Microsoft develops the CRT library in such a way that your app will run on all later versions.
When your program uses something from the CRT that is affected by one of the "security flaws" you mention, static linking bakes that flaw into your binary. Your users won't know that they are subject to the flaw and may be in danger from a virus. On the other hand, if your program doesn't work because it is dynamically linked against a CRT that has changed, they will be forced to update to the new, safe version.