I've inherited a Python program whose previous author pinned all the dependencies at ancient versions and left them that way. I want to update them. I tried testing against the current versions and (unsurprisingly) they break.
I want to find the most recent dependency versions that worked. tox.ini has ways to generate environments from a pattern, e.g. envlist = py27-dependency{1,2}, and then specify dependency versions based on the pattern. But all the examples require an explicit list, and an exhaustive list of versions would be... non-optimal.
I would like to generate an env for every available version of each dependency between the old pinned version and the current version, and have tox tell me which combinations pass the tests. Is this possible?
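For reference, the pattern-based examples I've seen look like this (the dependency names and version pins here are placeholders, not my real dependencies):

    # tox.ini -- tox expands the factors in envlist into a cross product of
    # environments and matches them against the conditional deps below
    [tox]
    envlist = py27-depa{15,16}-depb{10,11}

    [testenv]
    deps =
        pytest
        depa15: dependency-a>=1.5,<1.6
        depa16: dependency-a>=1.6,<1.7
        depb10: dependency-b>=1.0,<1.1
        depb11: dependency-b>=1.1,<1.2
    commands = pytest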
Let's simplify the question by considering the following scenario.
I'm working with three C++ projects stored in three different Git repositories. Each project needs to live in its own repo because they need to be independent during development.
Project 1
Project 2
Common Code Project
To compile Project 1 you need Common Code version 1 (e.g. an older version).
To compile Project 2 you need Common Code version 2 (e.g. a newer version).
Sometimes you want to work with the Common Code Project as a standalone project (a kind of standalone shared library): compile it, run tests, and possibly advance with new versions, branches, etc.
Projects 1 and 2 will keep their own histories, and they are likely to require different old versions of the Common Code to compile, even as the Common Code advances along its own history.
I need to work on all the projects on the same machine, switching between them multiple times in the same day, so the working model can't be too complicated, to avoid errors or problems.
Should I work with three parallel repositories, resolving the compilation dependencies at compile time (e.g. by checking out the correct version of the repo before compiling)? Is there any better solution (e.g. Git submodules)?
This sounds like a git submodules task. With submodules, you can reference another repository at a specific revision.
How to check out specific version of a submodule using git submodule?
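A minimal sketch of pinning a submodule to a specific revision (the URL, path, and commit are placeholders):

    # add the common code as a submodule of Project 1
    git submodule add https://example.com/common-code.git common
    # pin it to the revision Project 1 builds against
    cd common
    git checkout <commit-or-tag>
    cd ..
    git add common
    git commit -m "Pin common code to the version Project 1 needs"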
You could use:
git submodules (not really recommended: they are hard to maintain, and git will get into bizarre, broken states if you mess up, e.g. by deleting a submodule directory out from underneath git, so the setup is fragile)
git subtrees, probably in the more convenient form of git subrepo (https://github.com/ingydotnet/git-subrepo). This causes some code duplication, so I'd also be cautious about using it, especially if the projects are large.
the repo tool (https://code.google.com/p/git-repo/)
With repo you set up a manifest repository that describes which repos, at which revisions, populate which directories of the project. This lets you develop components completely independently and lock them together at specific versions while keeping a clear history; the manifest repository is versioned separately and can carry tags for specific releases of the software.
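For illustration, a minimal manifest might look roughly like this (the remote, project names, and revisions are made up):

    <?xml version="1.0" encoding="UTF-8"?>
    <manifest>
      <remote name="origin" fetch="https://example.com/git/" />
      <default remote="origin" revision="master" />
      <!-- each project can be locked to a branch, tag, or commit -->
      <project name="project1" path="project1" />
      <project name="project2" path="project2" />
      <project name="common-code" path="common" revision="refs/tags/v1.0" />
    </manifest>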
How would I write a Node.js addon that supports all versions of Node.js (at least all stable versions > 0.10.6)? For example, one version would have String::Utf8Value name(args[0]); whereas another would have node::Utf8Value name(args[0]);. This is just one example, but I have many scenarios where different versions of Node.js need different code.
As far as I know, this could be achieved in the following ways.
Defining preprocessor conditionals to check the version and compile the code accordingly:
    #if defined(...) // not sure what exactly has to be checked
    #include <nameser.h>
    #else
    #include <arpa/nameser.h>
    #endif
If this is the best option (which I don't think it is), given that the preprocessor checks have to be added in multiple places and the code looks ugly, how would I achieve it? Meaning, how would I check the Node.js version inside a C/C++ addon? (See the sketch after this list.)
Having a separate file for each version and defining conditions inside binding.gyp to compile a specific file based on the Node.js version. If this is the best option, which variable can I refer to to check the Node.js version?
Having tags while publishing the npm package so that the user can install the package for his specific Node.js version (tag a published version). Although the user has to check the Node.js version, a non-technical person won't be executing this, so that shouldn't be a problem. The only problem I see with this approach is versioning. For example, if a fix has to be applied across multiple tags, then for every tag I publish I have to specify a different version (not sure, though).
Is there any other way to achieve this that I am not aware of, in case none of the above options is a good one to go with?
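As a sketch of the preprocessor option above: node_version.h ships with Node and defines NODE_MAJOR_VERSION, NODE_MINOR_VERSION, and NODE_PATCH_VERSION, so the check could look something like this (the 0.10 cutoff and the calls in each branch just mirror the Utf8Value example and are assumptions, not tested boundaries):

    // fragment, not a complete addon: pick the API by Node version
    #include <node_version.h>  // ships with Node; defines the macros below

    #if NODE_MAJOR_VERSION == 0 && NODE_MINOR_VERSION <= 10
      v8::String::Utf8Value name(args[0]);  // older API
    #else
      node::Utf8Value name(args[0]);        // newer API, per the example above
    #endif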
When building multiple interdependent C++ CMake projects (on Linux) in topologically sorted order, we have two possibilities:
Go through every project, and ...
... "make install" it in some prefix. When building library in project, link to already installed libraries.
... build it via "make", do not install. When building library in project, link to already builded libraries inplace.
What are the pros/cons of those choices? The build is performed by a home-grown script, which resolves dependencies, builds in the right order, etc.
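For concreteness, option 1 usually looks something like this with CMake (the staging prefix and project names are made up):

    # option 1: install each project into a shared prefix, in dependency order
    cmake -DCMAKE_INSTALL_PREFIX=$HOME/stage ../libfoo
    make && make install

    # a dependent project then finds libfoo via the same prefix
    cmake -DCMAKE_PREFIX_PATH=$HOME/stage ../app
    make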
Of course you can do both. But the idea of 'installing' is that libraries, headers, documentation, etc. are placed in a well-defined directory that does not depend on the layout of the source code trees.
This separates the source, which is most often only interesting to the programmer of that package, from the compiled programs, libraries, etc., which are interesting to users and to programmers of other packages.
Imagine you have to change the directory structure of one subpackage. Without installing, you would have to adapt all the other make scripts.
So:
Pros of solution 1 (== Cons of solution 2)
Better maintainability of the whole package
The "expected" way
make and make install are expected to perform two conceptually different things, and neither is better or worse than the other. I will explain by describing the usual sequence of program installation using make (from "The Art of Unix Programming"):
make (all) - Your all production should make every executable of your project. Usually the all production doesn't have an explicit rule; instead it refers to all of your project's top-level targets (and, not accidentally, documents what those are). Conventionally, this should be the first production in your makefile, so it will be the one executed when the developer types make with no argument.
make test - Run the program's automated test suite, typically consisting of a set of unit tests to find regressions, bugs, or other deviations from expected behavior during the development process. The 'test' production can also be used by end users of the software to ensure that their installation is functioning correctly.
make install - Install the project's executables and documentation in system directories so they will be accessible to general users (this typically requires root privileges). Initialize or update any databases or libraries that the executables require in order to function.
Credits go to Eric Steven Raymond for this answer
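A skeleton makefile following these conventions might look like this (the program name, test script, and prefix are illustrative; recipe lines must be tab-indented):

    PREFIX ?= /usr/local

    all: myprog            # first production, so plain 'make' builds everything

    myprog: main.o
    	$(CC) -o $@ $^

    test: all              # run the automated test suite
    	./run-tests.sh

    install: all           # copy executables into system directories
    	install -m 755 myprog $(PREFIX)/bin/

    .PHONY: all test install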
A little background: we have a fairly large code base which builds into a set of libraries that are then distributed for internal use in various binaries. At the moment, the build process for this is haphazard and everything is built off the trunk.
We would like to explore whether there is a build system that will allow us to manage releases and automatically pull in dependencies. Such a tool exists for Java: Maven. I like its package, repository, and dependency mechanisms, and I know that with either the maven-native or maven-nar plugin we could get this. However, the problem is that we cannot fix the source trees to the "maven way" - and unfortunately the plugins (at least maven-nar) don't seem to like code that is not structured this way...
So my question is: is there a tool which satisfies the following for C++?
build
package (for example libraries with all headers, something like the .nar)
upload package to a "repository"
automatically pull in the required dependencies from said repository, extract the headers and include them in the build, extract the libraries and link against them. The dependencies would be described in the "release" for that binary - so if we were to use a CI server to build that "release", the build script would have the necessary dependencies listed (like the pom.xml files).
I could roll my own by modifying make+shell scripts or waf/scons with extra Python modules for the packaging and dependency management - however, I would have thought that this is a common problem and someone somewhere has a tool for it? Or does everyone roll their own? Or have I missed a significant feature of waf/scons or CMake?
EDIT: I should add that open source is preferred, and non-MS...
Most of the Linux distributions, for example, contain dependency tracking for their packages. Of all the things that I've tried to cobble together myself to take on your problem, in the end they all were "not quite perfect". The best thing to do, IMHO, is to create a local yum/deb repository or something (continuing my Linux example) and then pull things from there as needed.
Many of the source packages also quickly tell you the minimum components that must be installed to do a self-build (as opposed to installing a pre-compiled binary package).
Unfortunately, none of these methods is that much easier, though any of them is better than trying to do it all yourself. In the end, to support multiple platforms, you need one of these systems per OS as well. Fun!
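For example, a local yum repository is little more than a directory of RPMs plus an index (the paths are arbitrary, and this assumes the createrepo tool is installed):

    # index a directory of locally built RPMs as a yum repository
    createrepo /srv/internal-repo

    # /etc/yum.repos.d/internal.repo -- point yum at it
    [internal]
    name=Internal packages
    baseurl=file:///srv/internal-repo
    enabled=1
    gpgcheck=0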
I am not sure I correctly understand what you want to do, but I will tell you what we use and hope it helps.
We use CMake for our builds. It has to be noted that CMake is quite powerful. Among other things, you can "make install" into custom directories to collect headers and binaries there for building your release. We combine this with some Python scripting to build our releases. YMMV, but some things might just be too specific for a generic tool, and a custom script may be the simpler solution.
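For instance, collecting a release into a custom directory can be as simple as this (the paths are placeholders):

    # stage headers and binaries into a release directory instead of /usr/local
    cmake -DCMAKE_INSTALL_PREFIX=/tmp/release/mylib ..
    make install

    # or keep the configured prefix but redirect at install time
    make install DESTDIR=/tmp/release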
Our build tool builds releases directly from an svn repository (checkout, build, ...), which I can really recommend to avoid some local state polluting the release in an unforeseen way. It also enforces reproducibility.
It depends a lot on the platforms you're targeting. I can only really speak for Linux, but there it also depends on the distributions you're targeting, packages being a distribution-level concept. To make things a bit simpler, there are families of distributions using similar packaging mechanisms and package names, meaning that the same recipe for making a Debian package will probably make an Ubuntu package too.
I'd definitely say that if you're willing to target a subset of all known Linux distros using a manageable set of packaging mechanisms, you will benefit in the long run from not rolling your own and building packages the way the distribution creators intended. These systems allow you to specify run- and build-time dependencies, and automatic CI environments also exist (like OBS for rpm-based distros).
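As a tiny illustration, Debian-style packages declare both kinds of dependencies in debian/control (this fragment is abbreviated, and the package names are made up):

    Source: myproject
    Build-Depends: debhelper (>= 9), libfoo-dev (>= 1.2)

    Package: myproject
    Architecture: any
    Depends: ${shlibs:Depends}, libfoo1 (>= 1.2)
    Description: example of run- and build-time dependency declarations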
There are already some questions about dependency managers here, but it seems to me that they are mostly about build systems, while I am looking for something targeted purely at making dependency tracking and resolution simpler (and I'm not necessarily interested in learning a new build system).
So, typically we have a project plus some code it shares with another project. This common code is organized as a library, so when I want to get the latest version of the code for a project, I should also get all the libraries from source control. To do this, I need a list of dependencies. Then, to build the project, I can reuse this list too.
I've looked at Maven and Ivy, but I'm not sure they would be appropriate for C++, as they look quite heavily Java-targeted (even though there might be plugins for C++, I haven't found people recommending them).
I see it as a GUI tool producing some standardized dependency list which can then be parsed by different scripts, etc. It would be nice if it could integrate with source control (tag, get a tagged version with its dependencies, etc.), but that's optional.
Would you have any suggestions? Maybe I'm just missing something, and usually it's done some other way with no need for such a tool? Thanks.
You can use Maven with C++ in two ways. First, you can use it for dependency management between components. Second, you can use the maven-nar-plugin for creating shared libraries and unit tests in conjunction with the Boost library (my experience). In the end you can create RPMs out of it (maven-rpm-plugin) to have an adequate installation medium. Furthermore, I have set up the installation of a CI environment via Maven (RPMs for Hudson, a Nexus installation as RPMs).
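A fragment of what that looks like in a pom (the plugin coordinates vary between forks of the NAR plugin; these are from the com.github.maven-nar fork, so treat them as an example):

    <!-- abbreviated pom.xml fragment -->
    <packaging>nar</packaging>
    <build>
      <plugins>
        <plugin>
          <groupId>com.github.maven-nar</groupId>
          <artifactId>nar-maven-plugin</artifactId>
          <extensions>true</extensions>
        </plugin>
      </plugins>
    </build>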
I'm not sure if you would see a version control system (VCS) as a build tool, but Mercurial and Git support sub-repositories. In your case a sub-repository would hold your dependencies:
Join multiple subrepos into one and preserve history in Mercurial
Multiple git repo in one project
Use your VCS to archive the build results -- needed anyway for maintenance -- and refer to the libs and header files in your build environment.
If you are looking for a reference take a look at https://android.googlesource.com/platform/manifest.
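For the Mercurial side, declaring a subrepository is just an .hgsub entry plus a commit (the path and URL are placeholders):

    # .hgsub -- map a working-directory path to the subrepo's source
    libs/common = https://example.com/hg/common-code

    # clone the subrepo into place, add the file, and commit; Mercurial
    # records the exact subrepo revision in .hgsubstate on each commit
    hg clone https://example.com/hg/common-code libs/common
    hg add .hgsub
    hg commit -m "Track common code as a subrepository"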