Day 1 of using Hudson for our CI build. Slowly but surely getting up to speed.
My question is about run parameters. I've seen that I can use them to reference a particular run of a particular project - that's all fine.
What I don't understand (and can't find any documentation on - there's nothing at Parameterized Build) is how I refer to anything in the run defined by the run parameter.
Essentially I want to reference the %BUILD_NUMBER% and %SVN_REVISION% of the run that is selected in the run parameter.
How can I do that?
Do you really need to add extra property values or extra parameters to your job?
Since BUILD_NUMBER and SVN_REVISION are already defined as environment variables (see Building a software project), you can use those in your job.
"When a Hudson job executes, it sets some environment variables that you may use in your shell script, batch command, or Ant script."
That illustrates that you already have those values at your disposal. You can then use them to define other environment variables/properties within your shell or Ant script.
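For example, in an "Execute Windows batch command" build step (the properties file name below is just for illustration):

    rem Hudson sets these for every build; no extra job parameters needed
    echo Building SVN revision %SVN_REVISION% as build #%BUILD_NUMBER%

    rem Stamp them into a properties file for a later Ant step to read
    echo build.number=%BUILD_NUMBER%> version.properties
    echo svn.revision=%SVN_REVISION%>> version.properties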
When it comes to passing a variable value from one job to another, the Parameterized Trigger Plugin should do the trick:
The parameters section can contain a combination of one or more of the following:
a set of predefined properties
properties from a properties file read from the workspace of the triggering build
the parameters of the current build
"Subversion revision": makes sure the triggered projects are built with the same revision(s) of the triggering build.
You still have to make sure those projects are actually configured to checkout the right Subversion URLs.
Note: there might be an issue with the Join Plugin, which might not work when the Parameterized Trigger is in action.
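For example, under "Predefined parameters" you could hand the triggering build's values to the downstream job; the left-hand names are made up, and the right-hand side should be expanded by the plugin from the triggering build's environment:

    UPSTREAM_BUILD_NUMBER=$BUILD_NUMBER
    UPSTREAM_SVN_REVISION=$SVN_REVISION

The downstream job then receives UPSTREAM_BUILD_NUMBER and UPSTREAM_SVN_REVISION as ordinary build parameters.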
Not sure what the best way to explain what I want is, but here goes...
I have a header file ('generated.h') that is generated from another header file ('interface.h') by a Python script.
If I add a custom build step to generated.h, that's a circular dependency. Also, 'generated.h' doesn't even exist in a new workspace, so it gets a bit more confusing there.
Should I instead change interface.h to a custom build tool?
'generated.h' is only used for testing (the generated.h files are mock headers), and there may be several of them.
Therefore I don't really want to add a custom build step to interface.h, since that's used in "real" code. It's not really interface.h's responsibility to generate 'generated.h' (or is it?).
I could add the script as an item next to 'generated.h', but if there are many such generated.h-like-files, I would need to modify the script to accept multiple sets of args, or otherwise find a way to add the generation script several times.
What would you recommend?
I am working on a uni project with multiple tasks, and I am trying to debug a simple program; however, I am getting the error "LNK2005 main already defined in task 1".
I realise this is because I have an int main() in both tasks (I have code for task 1 and code for task 2). I do not want to have to create a new project folder for every task. Is there a way around this?
While it is generally advisable to have a project for each executable you build, you can get away with having a single project for multiple executables if you manage to somehow get rid of the undesired duplicate mains. You have quite a few options available to you:
Have only one main. Have it test its own executable name, and take specific action depending on the name it finds. In the post-build rules, set up rules for creating each (specifically named) executable from your base executable. This allows you to build all your executables at the same time in a fairly efficient manner.
Have multiple mains, but hide them using #ifdefs: add a #define to the project settings or just somewhere above main(), and compile as needed (see the sketch after this list). This is OK if you don't want to build all your executables all the time.
Just bite the bullet and set up multiple projects.
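A minimal sketch of option 2, assuming two source files in the same project (the macro names BUILD_TASK1/BUILD_TASK2 are illustrative; define exactly one of them in the project settings, e.g. /DBUILD_TASK1):

    // task1.cpp -- its main() exists only when BUILD_TASK1 is defined
    #ifdef BUILD_TASK1
    #include <iostream>
    int main() {
        std::cout << "Running task 1\n";
        return 0;
    }
    #endif

    // task2.cpp -- its main() exists only when BUILD_TASK2 is defined
    #ifdef BUILD_TASK2
    #include <iostream>
    int main() {
        std::cout << "Running task 2\n";
        return 0;
    }
    #endif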
Whatever you do, bear in mind that being able to build everything you have in a single step is considered a highly desirable trait of build systems, and is usually high on the list of features a properly engineered development process should have.
In SAS, the SASMSTORE option lets me specify where the SASMACR catalog will live, and some macros will reside in that catalog.
At some point I may need to change one of those macros, and that moment may occur while the macro, and therefore the catalog, is in use by another user. The catalog will then be locked and unavailable for modification.
How can I avoid such a situation?
If you're using a SAS Macro catalog as a public catalog that is shared among colleagues, a few options exist.
First, use SVN or a similar source control option so that you and your colleagues each have a local copy of the macro catalog. This is my preferred option. I'd do this, and also probably not use stored compiled macros (SCMs) - I'd just set it up as autocall macros, personally - because separate files for each macro make conflicts easy to resolve. With SCMs you won't be able to resolve conflicts, so you'll have to make sure everyone is very well behaved about always downloading the newest copy before making any changes, and discusses any change so you don't get two competing changes made at about the same time. If SCMs are important for your particular use case, you could version control the macro sources that build the SCM and rebuild the SCM yourself every time you refresh your local copy of the sources.
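For what it's worth, the autocall setup is only a couple of lines per session; the path below is illustrative and would point at each person's working copy, with one macro per .sas file:

    /* Point SASAUTOS at the local working copy of the macro sources */
    filename mymacs "c:\work\svn\macros";
    options mautosource sasautos=(mymacs, sasautos);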
Second, you could and should separate development from production here. Even if you have a shared library located on a shared network folder, you should have a development copy as well that is explicitly not locked by anyone except when developing a new macro for it (or updating a currently used macro). Then make your changes there, and on a consistent schedule push them out once they've been tested and verified (preferably in a test environment, so you have the classic three: dev, test, and prod environments). Something like this:
Changes in Dev are pushed to Test on Wednesdays. Anyone who's got something ready to go by Wednesday 3pm puts it in a folder (the macro source code, that is), and it's compiled into the test SCM automatically.
Test is then verified Thursday and Friday. Anything that is verified in Test by 3pm Friday is pushed to the Prod source code folder at that time, paying attention to any potential conflicts in other new code in test (nothing's pushed to prod if something currently in test but not verified could conflict with it).
The Production compile is then run at 3pm Friday. Everyone has to be out of the SCM by then.
I suggest not using Friday for prod if you have something that runs over the weekend, of course, as it risks you having to fix something over the weekend.
Create two folders, e.g. maclib1 and maclib2, and a dataset which stores the current library number.
When you want to rebuild your library, query the current number, increment (or reset to 1 if it's already 2), assign your macro library path to the corresponding folder, compile your macros, and then update the dataset with the new library number.
When it comes to assigning your library, query the current library number from the dataset, and assign the library path accordingly.
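A minimal sketch of that scheme, assuming a control dataset ctl.libnum with a numeric variable libnum and folders c:\shared\maclib1 and c:\shared\maclib2 (all names illustrative):

    /* --- Rebuild: compile into the folder NOT currently in use --- */
    proc sql noprint;
        select libnum into :cur trimmed from ctl.libnum;
    quit;
    %let new = %eval(3 - &cur);                /* 1 -> 2, 2 -> 1 */

    libname maclib "c:\shared\maclib&new";
    options mstored sasmstore=maclib;

    %macro mymacro / store;                    /* recompile each macro */
        /* ...macro body... */
    %mend mymacro;

    proc sql;                                  /* flip the pointer */
        update ctl.libnum set libnum = &new;
    quit;

    /* --- Assignment: users pick up whichever catalog is current --- */
    proc sql noprint;
        select libnum into :cur trimmed from ctl.libnum;
    quit;
    libname maclib "c:\shared\maclib&cur" access=readonly;
    options mstored sasmstore=maclib;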
What I'm trying to achieve is the following:
I have multiple dependent configurations for a single, logical build. The very first configuration runs a script that does a bit of work and returns a value. You can think of this configuration as the setup step. I need to be able to store this value and use it in subsequent steps. All dependent configurations for a single build should receive the same value.
Setup() computes a value x. I then have configurations B(x) and A(x) that run after Setup() and need to be fed the calculated value x.
Previously, I've managed to do something similar for things that are calculated as part of the TeamCity configuration. E.g. I generated a unique build id for the entire build chain and was able to access it via %dep.{team_city_configuration_id}.system.build.number%.
This time, the value I need to propagate is calculated in the guts of a build script and not as part of the TeamCity plumbing. I've managed to wrap the setup script in question and grep out the value I need, but I don't know how to propagate it between configurations.
Is this even possible, or am I barking up the wrong tree? If I cannot do this in a non-insane way, is there a better alternative I'm missing?
Thanks
Can a mod close this, please? It's a dupe. My colleague found this, which does exactly what we wanted.
I have several (15 or so) builds which all reference the same string of text in their respective build process templates. Every 90 days that text expires and needs to be updated in each of the templates. Is there a way to create a central variable or argument that all of the builds can reference?
One solution would be to create an environment variable on your build machine. Then reference the variable in all of your builds. When you need to update the value, you only have to set it in one place.
How to: Use Environment Variables in a Build
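MSBuild surfaces machine-level environment variables as ordinary properties, so if the build machine defined, say, EXPIRING_TEXT (the name is made up), the build could consume it like this:

    <!-- In a .proj/.targets file; EXPIRING_TEXT is a machine-level
         environment variable (the name is illustrative) -->
    <Target Name="ShowExpiringText">
      <Message Text="Current value: $(EXPIRING_TEXT)" Importance="high" />
    </Target>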
If you have more than one build machine then it could become too much of a maintenance issue.
Another solution would involve using MSBuild response files. You create an .rsp file that holds the property value, and MSBuild picks the value up and sets it via the command line.
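A response file is just MSBuild command-line arguments stored on disk, so something like this (the property name is illustrative):

    # common.rsp -- shared arguments; '#' starts a comment in .rsp files
    /p:ExpiringText="the current 90-day value"

Each build then invokes MSBuild as msbuild MyBuild.proj @\\server\share\common.rsp, so the value lives in exactly one place.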
You need to place it into somewhere where all your builds can access it, then customize your build process template to read from there (build definitions - as you know - do not have a mechanism to share data between defs).
Some examples would be a file checked into TFS, a file in a known location (file share), web page, web service, etc.
You could even make a custom activity that knows how to read it and outputs the result as an OutArgument (e.g. a custom activity that reads the string from a hardcoded URL).
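A sketch of such an activity, assuming the string is served from a URL of your choosing (the class name, argument name, and URL are all illustrative):

    using System.Activities;
    using System.Net;
    using Microsoft.TeamFoundation.Build.Client;

    [BuildActivity(HostEnvironmentOption.All)]
    public sealed class ReadSharedText : CodeActivity
    {
        // Bind this to a workflow variable in the build process template
        public OutArgument<string> Result { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            using (var client = new WebClient())
            {
                // Fetch the centrally maintained string
                var text = client.DownloadString("http://buildserver/shared/expiring.txt");
                Result.Set(context, text.Trim());
            }
        }
    }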