I have a DB integration test that I'm running using Boost Build. The test needs some commandline args (DB username, password). What's the best way to set that via Boost Build in a way that's configurable by the user (via environment variables, bjam commandline, user-config.jam)?
I know I can do this with variables:
import os ;
local DB_PASS = [ os.environ DB_PASS ] ;
run dbtest : test.cpp : --dbpass $(DB_PASS) ;
This can be set via the command line (bjam -s DB_PASS=pass) or via an environment variable.
On the other hand, Boost Build tends to do most of its configuration via the feature mechanism. I could probably define a new feature and get the configuration data to the right place that way.
What are the pros and cons of each approach? Which one should I take? And if features: how would I do that?
NB: The actual test is within a Jamfile that's used by the Jamroot, so not directly in the root file.
I would just use your suggestion of variables. They provide a great deal of flexibility. I don't see how a "feature" would help things in this case.
I'm trying to implement my own module to build C++ on Windows with clang-cl toolchain as there's no built-in support in QBS right now.
I chose to use lld-link instead of the Microsoft linker, so I have to supply it with all the MS library include paths manually. With these paths hardcoded, I manage to build my apps fine. But I'd like to make my module more flexible and use the %LIB% environment variable set by vcvars32.bat/vcvars64.bat.
As far as I understand, this could (should?) be done inside the module's setupBuildEnvironment script. Here's how I try to read %LIB%, without success:
import qbs.Environment
import qbs.Process
Module
{
    setupBuildEnvironment:
    {
        var p = new Process();
        p.exec("vcvars64.bat", [], true);
        // makes no difference:
        // p.exec("cmd", ["/c", "vcvars64.bat"], true);
        var lib = p.getEnv("LIB");
        // this fails too:
        // var lib = Environment.getEnv("LIB");
        console.info("LIB = " + lib);
        p.close();
    }
    ...
}
This gives me LIB = (an empty value), so I'm getting nowhere. My guess is that the process has already terminated by the time I query the variable (p.getEnv("LIB")), hence the empty result. The QBS docs for Process.getEnv() say nothing in this regard.
What is the correct QBS way to initialize the environment with vcvars64.bat, and more broadly, what is the correct way to get the environment of a process inside setupBuildEnvironment?
[update]
Well, embarrassingly, this was easy to work around by creating a simple batch file and getting rid of the setupBuildEnvironment script altogether:
@echo off
call vcvars64 && qbs
But I'd like to avoid batch scripting as much as possible, so the question still stands.
The vcvars batch files just dump some information onto the console; they do not set the environment of the calling process in any way. You would need to parse the process output yourself. I suggest you take a look at the MsvcProbe item in the qbs sources to see how that is implemented for MSVC. You might be able to adapt the code for clang-cl.
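For illustration, here is a minimal sketch of that parse-the-output technique, written in Python rather than qbs' JavaScript (MsvcProbe does the equivalent inside qbs itself; the vcvars path below is an assumption): run the batch file in a child cmd.exe, dump its environment with set, and parse the NAME=VALUE lines.

import subprocess

# Hypothetical vcvars location -- adjust to your Visual Studio installation.
VCVARS64 = r"C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvars64.bat"

def environment_after(batch_file):
    """Run a batch file in a child cmd.exe, then dump and parse its environment."""
    # 'call ... >nul' silences the banner; 'set' then prints a NAME=VALUE line
    # for every variable of the child process after the script has run.
    output = subprocess.check_output(
        'call "{}" >nul && set'.format(batch_file),
        shell=True, text=True)
    env = {}
    for line in output.splitlines():
        name, sep, value = line.partition('=')
        if sep:
            env[name] = value
    return env

print(environment_after(VCVARS64).get('LIB', '<not set>'))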
I am trying to set up an API key as a global variable that is accessible across all of my TFS 2015 builds. Since TFS 2015 seems to lack this feature, I am attempting to use a system environment variable on the build server that is then referenced in the build definitions.
According to Microsoft's documentation, this should be possible. So I have set up a system variable (call it APIKey) on the build server and referenced it within the arguments of a build step using the standard syntax (i.e. "ApiKey=$(APIKey)"). However, instead of replacing the reference with the API key held in the system variable, the build tries to use the literal string $(APIKey) as the value, causing it to fail.
It also occurred to me that this custom environment variable might instead need to be set somewhere in the build agent folder itself but, after some poking around, I'm not sure where or how I would do that.
Is either of these things actually doable?
Following are my steps to achieve this:
1. Create a system variable (e.g. testvar) on the build agent machine.
2. Restart the build agent machine.
3. Use the variable in the build definition. Here I use a cmd task as an example and pass $(testvar) as its argument.
The task will then read the value from the system variable at build time.
So I am running into an issue when I build my projects using the TFS build controller with the Output location set to "AsConfigured": it will not detect my unit tests. Let me give a little info on my setup.
TFS 2013 Update 2, Default Process Template
Here are a few screenshots that can hopefully fill in what I can't in typing. I am copying my build output to a file share on our network so that other utilities can use it. I don't want to use "PerProject" or "SingleFolder" because they mess up the file structure we have configured (both of these will run the tests). So I have the files copied to a folder named "SingleFolderOutput", which is a child of the DropLocation. I would like to be able to run the tests from the drop folder or from the bin folder for each of my tests (I don't care which). However it doesn't seem to detect/run ANY of the tests. Any help would be greatly appreciated. Please let me know if you need any additional information.
I have tried using **\*test*.dll, Install\SingleFolderOutput\**\*test*.dll, and $(TF_BUILD_DROPLOCATION)\Install\SingleFolderOutput\*test*.dll
But I am not sure what variables are available, nor do I understand what the working directory is when the spec is evaluated.
Given that you're using the Build Output location "AsConfigured", you have to change the default value of the Test sources spec setting to allow the build to find the test libraries in the bin folders. Here's an example.
If the full path to the unit test libraries is:
E:\Builds\7\<TFS Team Project>\<Build Definition>\src\<Unit Test Project>\bin\Release\*test*.dll
use
..\src\*UnitTest*\bin\*\*test*.dll;
This question was asked on MSDN forums here.
MSDN Forums Suggested Workaround
The suggested workaround in the accepted answer (as of 8 a.m. on June 20) is to specify the full path to the test projects' binary folders. For example:
C:\Builds\{agentId}\{teamProjectName}\{buildDefinitionName}\src\{solutionName}\{testProjectName}\bin\Debug\*test*.dll
which really should have been shown as
{agentWorkingFolder}\src\{relativePathToTestProjectBinariesFolder}\*test*.dll
However this approach is very brittle, for the following reasons:
Any new test projects you add to the solution will not be executed until you add them to the build definition's list of test sources.
It will break under any of the following circumstances:
the build definition is renamed
the working folder in build agent properties is modified
you have multiple build agents, and a different agent than the one you specified in {id} runs the build
Improved Workaround
My workaround mitigates the issues listed in #2 (can't do anything about #1).
In the path specified above, replace the initial part:
{agentWorkingFolder}
with
..
so you have
..\src\{relativePathToTestProjectBinariesFolder}\*test*.dll
This works because the internal working directory is apparently the \binaries\ folder, which is a sibling of the \src\ folder. Navigating up to the parent folder (whatever it is named; we don't care) and back into \src\ before specifying the path to the test project binaries does the trick.
Note: If you have multiple test projects, you add additional entries, separated with semicolons:
..\src\{relativePathToTestProjectONEBinariesFolder}\*test*.dll;..\src\{relativePathToTestProjectTWOBinariesFolder}\*test*.dll;..\src\{relativePathToTestProjectTHREEBinariesFolder}\*test*.dll;
What I ended up doing was adding a post-build event to each test project that copies its test DLLs into the staging location folder for the specific build, which is basically equivalent to where they would go in a SingleFolder build.
if "$(TeamBuildOutDir)" == "" (
echo "Building Interactively not in TFS"
) else (
echo "Building in TFS"
xcopy "$(TargetDir)*.*" "$(TeamBuildBinaries)\" /Y /E /S
)
Then I added an MSBuild parameter to the build definition that told it to drop the binaries into the folder where TFS looks for them:
/p:TeamBuildBinaries="$(TF_BUILD_BINARIESDIRECTORY)"
I kept the default Test assembly file specification:
**\*test*.dll
View this link for information on the variable that I used and the relative path at which it exists.
Another solution is to do the reverse.
Leave all of the files in the root so that all of the built-in functionality works. There is more than just test execution in there: what about static code analysis, impact analysis, among others? You would have to do something custom for them all.
Instead, use a pre-drop PowerShell script to create your Install arrangement from the root files.
If it is an application, you can use the _ApplicationFolder NuGet package to create a _PublishApplications folder, the same as you get for web applications.
I'm still struggling with waf to set up different sets of flags for subprojects.
I have a structure like this, in which the superproject recurses into the subprojects:
superproject/wscript
libproject/wscript
progproject/wscript
The problem is that both subprojects, progproject and libproject, use the boost tool.
I want both projects to check for boost at configuration time, since I want the projects to be self-contained when they are built independently.
I have to do something like this to avoid overwriting the boost flags between subprojects:
# In libproject
cfg.check_boost('regex', uselib_store='BOOST_LIBPROJECT')

# In progproject
cfg.check_boost('program_options', uselib_store='BOOST_PROGPROJECT')
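Each per-project store is then consumed in the corresponding subproject's build step through the use attribute. A minimal sketch (the source and target names here are invented for illustration):

# In libproject's wscript
def build(bld):
    bld.shlib(
        source='src/lib.cpp',
        target='libproject',
        use='BOOST_LIBPROJECT')  # pulls in the flags stored by check_boost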
Splitting the stores this way has the side-effect that the --boost-libs and --boost-includes options no longer work.
Actually, I would like to use the default BOOST store for both, but one seems to overwrite the other in the _cache.py file. If I build the projects alone and separately, this problem does not happen. I think setting up a separate environment per subproject is not a solution (right?), since that makes the command become ./waf build_libproject or similar, which is not what I want.
I want:
./waf configure build
to run correctly without workarounding with different stores or preventive flag name changing.
What is the correct way to do it?
I've recently picked up SCons to implement a multi-platform build framework for a medium-sized C++ project. The build generates a bunch of unit tests which should be invoked at the end of it all. How does one achieve that sort of thing?
For example, in my top-level SConstruct I have
subdirs = ['list', 'of', 'my', 'subprojects']
for subdir in subdirs:
    SConscript(dirs=subdir, exports='env', name='sconscript',
               variant_dir=subdir + os.sep + 'build' + os.sep + mode,
               duplicate=0)
Each of the subdirs has its unit tests; however, since there are dependencies between the DLLs and executables built inside them, I want to hold off running the tests until all the subdirs have been built and installed (I mean, using env.Install).
Where should I write the loop that iterates through the built tests and executes them? I tried putting it just after this loop, but since SCons doesn't let you control the order of execution, the tests get run well before I want them to.
Please help a scons newbie. :)
thanks,
SCons, like Make, takes a declarative approach to the build problem. You don't want to tell SCons how to do its job: you want to declare all the dependencies and then let SCons work out how to build everything.
If something is being executed before something else, you need to create and hook up the dependencies.
If you want to create dummy touch files, you can create a custom builder like:
import os
import time

def action(target, source, env):
    os.system('echo here I am running other build')
    dmy_fh = open('dmy_file', 'w')
    dmy_fh.write('Dummy dependency file created at %4d.%02d.%02d %02dh%02dm%02ds\n'
                 % time.localtime()[0:6])
    dmy_fh.close()

bldr = Builder(action=action)
env.Append(BUILDERS={'SubBuild': bldr})
env.SubBuild(tgts, srcs)  # target first, then source
It is very important to put the timestamp into the dummy file, because SCons uses MD5 hashes: if the file were always empty, its MD5 would never change and SCons might decide to skip subsequent build steps. If you need to generate different tweaks on a basic command, you can use function factories to modify a template, e.g.:
def gen_a_echo_cmd_func(echo_str):
    def cmd_func(target, source, env):
        cmd = 'echo %s' % echo_str
        print(cmd)
        os.system(cmd)
    return cmd_func

bldr = Builder(action=gen_a_echo_cmd_func('hi'))
env.Append(BUILDERS={'Hi': bldr})
env.Hi(tgts, srcs)

bldr = Builder(action=gen_a_echo_cmd_func('bye'))
env.Append(BUILDERS={'Bye': bldr})
env.Bye(tgts, srcs)
If you have something that you want to automatically inject into the SCons build flow (e.g. something that compresses all your build log files after everything else has run), see my question here.
The solution should be as simple as this: make the result of the Test builder depend on the result of the Install builder.
In pseudo-code:
test = Test(dlls)
result = Install(dlls)
Depends(test,result)
The best way would be if the Test builder actually worked out the dll dependencies for you, but there may be all kinds of reasons it doesn't do that.
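For concreteness, here is a minimal sketch of that wiring in a single SConscript (the library and test names are invented for illustration):

# Build a DLL, install it, build a test, and gate the test run on the install.
env = Environment()

dlls = env.SharedLibrary('mylib', ['mylib.cpp'])
installed = env.Install('#/stage/bin', dlls)

test_prog = env.Program('mytest', ['mytest.cpp'], LIBS=['mylib'], LIBPATH=['.'])
# Running the test writes a log file, so SCons can track the run as a node.
run_test = env.Command('mytest.passed', test_prog, '${SOURCE.abspath} > $TARGET')
env.Depends(run_test, installed)  # the test runs only after Install has finished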
In terms of dependencies, what you want is for all the test actions to depend on all the program-build actions. One way of doing this is to create and export a dummy target to all the subdirectories' SConscript files; in each SConscript, make the dummy target depend on the main targets, and make the test targets depend on the dummy target.
I'm having a bit of trouble figuring out how to set up the dummy target, but this basically works:
(in top-level SConstruct)
dummy = env.Command('.all_built', 'SConstruct', 'echo Targets built. > $TARGET')
Export('dummy')
(in each sub-directory's SConscript)
Import('dummy')
for target in target_list:
    Depends(dummy, target)
for test in test_list:
    Depends(test, dummy)
I'm sure further refinements are possible, but maybe this'll get you started.
EDIT: also worth pointing out this page on the subject.
Just have each SConscript return a value on which you will build dependencies.
SConscript file:
test = debug_environment.Program('myTest', src_files)
Return('test')
SConstruct file:
dep1 = SConscript([...])
dep2 = SConscript([...])
Depends(dep1, dep2)
Now the dep1 build will complete after the dep2 build has completed.