CruiseControl Next Build Time: Retrieve/Display

I am looking for a way to retrieve the value of the next build date/time in CruiseControl. This value is displayed in both the CCTray app as well as the CC Dashboard/Farm Report (../ccnet/ViewFarmReport.aspx), so it must be stored somewhere on the build machine or in the CruiseControl local memory (or even a listener file somewhere). Any ideas?

CCTray gets the information by querying the CruiseControl server. Information about the status of the projects can be read from the XML returned by XmlStatusReport.aspx.
For example, the response from http://buildserver/ccnet/X/Y/XmlStatusReport.aspx is something like:
<Projects CCType="CCNet">
  <Project name="MyProject"
           category="MyCategory"
           activity="Sleeping"
           lastBuildStatus="Success"
           lastBuildLabel="42"
           lastBuildTime="2017-02-21T15:51:12.0880951+01:00"
           nextBuildTime="2017-02-21T20:10:43.3853446+01:00"
           ...>
  </Project>
</Projects>
This gives us the nextBuildTime of the project MyProject. You can browse the source code of CruiseControl.Net for more info.
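If you want the value programmatically rather than through CCTray, you can query the same endpoint yourself. A minimal sketch in Python (the server URL is a placeholder; adjust it to your dashboard's actual path):

import urllib.request
import xml.etree.ElementTree as ET

# Placeholder URL -- substitute your own CruiseControl dashboard address.
URL = "http://buildserver/ccnet/XmlStatusReport.aspx"

with urllib.request.urlopen(URL) as response:
    root = ET.parse(response).getroot()

# Every <Project> element carries the attributes shown above, including nextBuildTime.
for project in root.iter("Project"):
    print(project.get("name"), project.get("nextBuildTime"))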

Show code coverage with source code in Jenkins with Cobertura (run result from another machine)

Background
I have a large C++ application with a complex directory structure. The structure is so deep that the code repository can't be stored in the Jenkins workspace and has to live in a root directory instead; otherwise the build fails because the path length limit is exceeded.
Since the application is tested in different environments, the test application is run on a different machine. The application and all its resources are compressed and copied to the test machine, where the tests are run using OpenCppCoverage; the result is a Cobertura XML report.
Since the source code is needed to display the coverage, the result XML is copied back to the build machine and then fed to the Jenkins Cobertura plugin.
Problem
The coverage report shows only percentage results per module or source file. The code content itself is not shown; instead this error message appears:
Source
Source code is unavailable. Some possible reasons are:
This is not the most recent build (to save on disk space, this plugin only keeps the most recent build’s source code).
Cobertura found the source code but did not provide enough information to locate the source code.
Cobertura could not find the source code, so this plugin has no hope of finding it.
You do not have sufficient permissions to view this file.
Now I've found this SO answer, which is promising:
The output xml file has to be in the same folder as where coverage is run, so:
coverage xml -o coverage.xml
The reference to the source folder is put into coverage.xml and if the output file is put into another folder, the reference to the source folder will be incorrect.
The problem is that:
I've run the tests on a different machine (this can be overcome by a script which modifies the paths in the XML).
My source code can't be inside the workspace at build time.
Placing the XML in the respective source code directory is not accepted by the Cobertura plugin; it ends with this error:
[Cobertura] Publishing Cobertura coverage report...
FATAL: Unable to find coverage results
java.io.IOException: Expecting Ant GLOB pattern, but saw 'C:/build_coverage/Products/MyMagicProduct/Src/test/*Coverage.xml'. See http://ant.apache.org/manual/Types/fileset.html for syntax
This is part of the XML result (before modifications):
<?xml version="1.0" encoding="utf-8"?>
<coverage line-rate="0.63669186741173223" branch-rate="0" complexity="0" branches-covered="0" branches-valid="0" timestamp="0" lines-covered="122029" lines-valid="191661" version="0">
  <sources>
    <source>c:</source>
    <source>C:</source>
  </sources>
  <packages>
    <package name="C:\jenkins\workspace\MMP_coverage\MyMagicProduct\src\x64\Debug\MMPServer.exe" line-rate="0.63040511358728513" branch-rate="0" complexity="0">
      <classes>
        <class name="AuditHandler.cpp" filename="build_coverage\Products\MyMagicProduct\Src\Common\AuditHandler.cpp" line-rate="0.92682926829268297" branch-rate="0" complexity="0">
          <methods/>
          <lines>
            <line number="18" hits="1"/>
            <line number="19" hits="1"/>
            <line number="23" hits="1"/>
            <line number="25" hits="1"/>
            <line number="27" hits="1"/>
            ....
          </lines>
        </class>
        ....
The biggest issue is that I'm not sure whether the location of the XML is actually the problem, since the plugin doesn't report any details about the issues it encounters while trying to fetch/find the respective source code. The second bullet from Cobertura, which may explain the problem, is totally confusing:
Cobertura found the source code but did not provide enough information to locate the source code.
What else I've tried
I've ensured that anyone can read the source code (to rule out access problems).
I've modified the XML so that filename contains a path relative to the Jenkins workspace, and also relative to the directory where the XML coverage report is located.
I've copied my source code to various locations, even ones containing a "cobertura" directory, since I found something like that in the plugin's source code.
I've tried to understand the issue by inspecting the plugin's source code.
I've found a (somewhat old) GitHub project which may hint at how to fix it; currently I'm trying to understand what exactly it does (I don't want to import this project into my build structure).
So far no luck.
Update:
Suddenly (I'm not sure what I did) it works for my account. The problem is that it works only for me; all other users still have the same issue. This clearly indicates that the issue must be security-related.
I encountered a very similar issue when I had to develop a CI pipeline for a huge C++ client. I had the best results when I avoided the Cobertura plugin and instead used the HTML Publisher Plugin. The main issue I had was also finding the source files.
Convert OpenCppCoverage result to HTML
This step is quite easy. You have to add the parameter --export_type=html:<outputPath> (see the command-line reference) to the OpenCppCoverage call.
mkdir CodeCoverage
OpenCppCoverage.exe --export_type=html:CodeCoverage <GoogleTest.exe>
The commands above should produce an HTML file at <jenkins_workspace>/CodeCoverage/index.html.
Publish the OpenCppCoverage result
To do this we use the HTML Publisher Plugin mentioned above. reportDir is the directory created in step one, which contains our HTML file; its path is relative to the Jenkins workspace.
publishHTML target: [
    allowMissing: false,
    alwaysLinkToLastBuild: true,
    keepAll: true,
    reportDir: 'CodeCoverage',
    reportFiles: 'index.html',
    reportName: 'Code Coverage'
]
and to make sure that everyone can download and check the result locally, we archive the OpenCppCoverage output:
archiveArtifacts artifacts: 'CodeCoverage/*.*'
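For completeness, here is a sketch of how these pieces could sit together in one Declarative Pipeline stage; the test executable name is a placeholder, and the mkdir is guarded so reruns don't fail:

stage('Code Coverage') {
    steps {
        // Create the output directory if it is not already there.
        bat 'if not exist CodeCoverage mkdir CodeCoverage'
        // Run the tests under OpenCppCoverage, exporting an HTML report.
        bat 'OpenCppCoverage.exe --export_type=html:CodeCoverage MyTests.exe'
        // Publish the report in the Jenkins sidebar.
        publishHTML target: [
            allowMissing: false,
            alwaysLinkToLastBuild: true,
            keepAll: true,
            reportDir: 'CodeCoverage',
            reportFiles: 'index.html',
            reportName: 'Code Coverage'
        ]
        // And keep the raw files as downloadable artifacts.
        archiveArtifacts artifacts: 'CodeCoverage/*.*'
    }
}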
You can now see the result in the sidebar of your pipeline under Code Coverage.
This is the solution that worked for me.
I hope this helps at least a bit. I can only advise avoiding the Cobertura plugin; I wasted so much time trying to get it to find and recognize my sources...
OK, I've found the reasons why I had problems with this plugin.
The XML from OpenCppCoverage is correct as-is. No changes are needed to make it work (as long as the sources are where the PDB file points). Sources outside the Jenkins workspace are not the problem here. When I copied the executable from the build machine to the test machine, ran the tests with OpenCppCoverage, and copied the result back to the build machine, everything was just fine.
In the job configuration, any user who is supposed to view code coverage has to be given access to Job/Workspace in the security section. In my case I enabled this for all logged-in users. This covers the last bullet point of the error message.
Most important thing: the build must be successful, and I mean from beginning to end. It doesn't matter whether the step containing the call to the Cobertura plugin succeeded; if any step (even a later one) fails, Cobertura will not show code for that coverage run. In my case the build job was failing because one of the tests was timing out. This was caused by the OpenCppCoverage overhead, which slows the tests down by a factor of 3, and my script was detecting the timeout and killing one of the tests.
I discovered by accident that the unsuccessful build was the problem. During my experiments I noticed two cases in which Cobertura did show source code:
I reran the job with all steps removed except the one responsible for publishing the Cobertura results
I ran the whole job in such a way that it executed a single test case, which passed
Not showing coverage if the build is not successful is reasonable (if a test failed, then most probably the wrong branch of code was taken), but the UI should indicate that in a different way.
Conclusion
This is a great example of how important it is to report errors to the user with precise details about what went wrong and why. I wasted at least a whole week figuring out what was actually wrong and which bullet point of the error message applied to my case. In fact, the plugin's error message doesn't cover all the reasons for not showing the code.
I will file a report that the plugin should give a better explanation of what went wrong.

VS2017 Property sheet ordering

So, I have a project in VS2017, and VS2017 has recently received an update. I then added all the wxWidgets modules as projects to my initial solution and dealt with build order so they're built in the proper order.
However, I always get this error:
C:\Programs\Visual Studio 2017\Common7\IDE\VC\VCTargets\Microsoft.Cpp.Common.props(144,5): warning
MSB4211: The property "WindowsTargetPlatformVersion" is being set to a value for the first time, but it was already consumed at
"C:\Programs\Visual Studio 2017\Common7\IDE\VC\VCTargets\Microsoft.Cpp.WindowsSDK.props (29,5)".
I've found this thread and the article linked in it: link
but it doesn't tell me how to fix it. From what I can tell, the properties for the individual projects are not evaluated in the order they should be.
How do I define the property sheet ordering? What exactly do I need to change?
Also note that I cannot change the project files or anything connected to wxWidgets, since it's a submodule in my repository and any changes made to it cannot be saved back to the repo.
Disclaimer: I haven't got a clue about your issue, just trying to help you (the OP)!
In the IDE, under the View menu, select Other Windows.
There, select Property Manager, which lets you manipulate the property sheets in your projects.
Right-click on a property sheet; some sheets have a menu which lets you move the sheet up or down.
I suggest playing around with that. It might just solve your issue.
I could be completely wrong, of course.
I had the same problem, with a different library though.
The cause for me was that in the project that I converted, the configuration that I was compiling with was not present in the props file
C:\Users\\AppData\Local\Microsoft\MSBuild\v4.0\Microsoft.Cpp.x64.user.props
For example, in the vcxproj file I had
<ProjectConfiguration Include="DLL Release|x64">
  <Configuration>DLL Release</Configuration>
  <Platform>x64</Platform>
</ProjectConfiguration>
but in the props file I only had:
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
  <ClCompile>
    <PreprocessorDefinitions>MYMACRO1;MYMACRO2;%(PreprocessorDefinitions)</PreprocessorDefinitions>
  </ClCompile>
</ItemDefinitionGroup>
I just added a similar entry for 'DLL Release|x64' and that fixed it.
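Concretely, the fix was to clone the existing group and adjust its condition, roughly like this:

<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='DLL Release|x64'">
  <ClCompile>
    <PreprocessorDefinitions>MYMACRO1;MYMACRO2;%(PreprocessorDefinitions)</PreprocessorDefinitions>
  </ClCompile>
</ItemDefinitionGroup>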
Definitely the warning message is not very helpful in this case.

Setting Code Analysis settings in TFSBuild.proj

I am trying to set/override some settings in our TEST installation of TFS with regard to forcing Code Analysis and associated settings during the build process (regardless of the settings in the project file).
We currently use in our TEST TFS installation:
Visual Studio 2012 Ultimate on our developer machines AND build server
Have TFS 2012 installed on one server (application and data layer)
Have TFS 2012 build service (controller and agent) installed on another server
We can compile sample .NET 4.5 projects (class libraries (DLLs), web applications, etc.) as expected. This is solely about overriding the associated Code Analysis settings (hopefully).
Scenario 1 - In our sample applications on our developer machines, when you open the project settings (right click -> Properties in Solution Explorer), go to the Code Analysis tab, turn on "Enable Code Analysis on Build", and select a rule set from the dropdown, it performs as expected and generates some warnings. Technically this adds <RunCodeAnalysis>true</RunCodeAnalysis> to the *.csproj file if opened in Notepad. If the build is executed to compile the sample project/solution, then Code Analysis is performed as expected. I do NOT want to do this on every project because a developer could turn it off (although I am also looking at check-in policies and/or private/gated check-ins to force this anyway).
Scenario 2 - I can disable the "Enable Code Analysis on Build" checkbox and force code analysis in our TFSBuild.proj file (we (will) use the default upgradetemplate.xaml as our process definition because we will be upgrading from TFS 2008 on our LIVE TFS installation) by having:
<RunCodeAnalysis>Always</RunCodeAnalysis>
This works and this is how we will force (lessons still to be learned :-)) Code Analysis on our builds.
The problem then comes when setting other associated Code Analysis settings, for example which default rule set(s) to apply or whether to treat CA warnings as errors. Some of these settings can be set in VS, or all of them by editing the *.csproj in Notepad. If I edit the *.csproj then these values are used in the build as expected (as well as locally on the developer machine). This is not ideal, as I want to do it centrally in TFSBuild.proj without having to edit every project file. I believe I can use settings such as the following in my TFSBuild.proj file:
<PropertyGroup>
  <RunCodeAnalysis>Always</RunCodeAnalysis>
  <CodeAnalysisRuleSet>AllRules.ruleset</CodeAnalysisRuleSet>
  <CodeAnalysisTreatWarningsAsErrors>true</CodeAnalysisTreatWarningsAsErrors>
</PropertyGroup>
But they don't appear to work, or I am putting them in the wrong place. How do I fix/use them correctly?
FYI, I build my solutions in TFSBuild.proj like this:
<Project DefaultTargets="DesktopBuild" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
  <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" />
  <ItemGroup>
    <SolutionToBuild Include="/some folder/some solution.sln" />
    <ConfigurationToBuild Include="Debug|Any CPU">
      <FlavorToBuild>Debug</FlavorToBuild>
      <PlatformToBuild>Any CPU</PlatformToBuild>
    </ConfigurationToBuild>
  </ItemGroup>
</Project>
On the build server I did find a reference to the target file for Code Analysis at c:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\CodeAnalysis, but I don't want to change the default behaviour on the build server (although it does work when I do). The condition, for example for CodeAnalysisTreatWarningsAsErrors, must be getting evaluated as false. It's like my values are not read from TFSBuild.proj but come from the .csproj file.
If you have any questions feel free to ask, and thanks in advance.
I had what I think is a similar problem with Cruise Control not compiling using the CODE_ANALYSIS compilation symbol, even if "Enable Code Analysis on Build (defines CODE_ANALYSIS constant)" was checked in VS.Net.
It looks like whether it is checked or not, CODE_ANALYSIS is actually not explicitly added to the list of compilation symbols in the csproj (even if it appears in the "Conditional compilation symbols" text box); only <RunCodeAnalysis>true</RunCodeAnalysis> is added.
When compiling through VS.Net, CODE_ANALYSIS is automatically added, but not when using MSBuild, which is what Cruise Control uses.
I eventually changed the "Conditional compilation symbols" in VS.Net from "CODE_ANALYSIS;MySymbol" to "MySymbol;CODE_ANALYSIS". Doing that forced CODE_ANALYSIS to also appear in the csproj.
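For reference, the resulting csproj fragment should then contain something like this (the condition and MySymbol are illustrative):

<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|AnyCPU'">
  <!-- CODE_ANALYSIS now appears explicitly in the symbol list. -->
  <DefineConstants>MySymbol;CODE_ANALYSIS</DefineConstants>
  <RunCodeAnalysis>true</RunCodeAnalysis>
</PropertyGroup>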
I remember having a similar problem; not having the time to investigate it, I worked around it by calling FxCop directly using the Exec task. I'll just give you the highlights, omitting the specification of some properties; I hope the names are clear.
I created an ItemGroup of the output DLLs, FilesToAnalyze, and fed it to FxCop in a way similar to:
<PropertyGroup>
  <FxCopErrorLinePattern>: error</FxCopErrorLinePattern>
  <FxCopCommand>"$(FxCopPath)" /gac /rule:"$(FxCopRules)" /ruleset:="$(FxCopRuleSet)" @(FilesToAnalyze->'/file:"%(identity)"', ' ') /out:$(FullFxCopLog) /console | Find "$(FxCopErrorLinePattern)" > "$(FxCopLogFile)"</FxCopCommand>
</PropertyGroup>
<Exec Command="$(FxCopCommand)"
      ContinueOnError="true">
  <Output TaskParameter="ExitCode" PropertyName="FxCopExitCode"/>
</Exec>
<ReadLinesFromFile File="$(FxCopLogFile)">
  <Output TaskParameter="Lines" ItemName="AllErrorLines"/>
</ReadLinesFromFile>
I could then determine the number of errors in the output using an MSBuild Extension Pack task:
<MSBuild.ExtensionPack.Framework.MsBuildHelper TaskAction="GetItemCount" InputItems1="@(AllErrorLines)">
  <Output TaskParameter="ItemCount" PropertyName="FxErrorCount"/>
</MSBuild.ExtensionPack.Framework.MsBuildHelper>
and create a failing build step for each error:
<BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
           BuildUri="$(BuildUri)"
           Id="$(FxCopStep)"
           Status="Failed"
           Message="FxCop Failed: $(FxErrorCount) errors."/>
<BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
           BuildUri="$(BuildUri)"
           Status="Failed"
           Message="%(AllErrorLines.Identity)"/>
By doing code analysis on the build server this way, we also avoided having to configure each project separately. We isolated all this in a separate .targets file, so adding code analysis to a solution was a matter of importing that file, and perhaps adjusting the behavior by setting appropriate properties.
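As a sketch of that last point, consuming the shared file from a TFSBuild.proj then only takes an import plus whatever property overrides you need (the file name FxCopAnalysis.targets is hypothetical):

<!-- Hypothetical shared file containing the FxCop Exec/BuildStep logic shown above. -->
<Import Project="FxCopAnalysis.targets" />
<PropertyGroup>
  <!-- Example behavior tweak exposed by the shared targets file. -->
  <FxCopRuleSet>AllRules.ruleset</FxCopRuleSet>
</PropertyGroup>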

MSBuild: Custom.After.Microsoft.Common.targets for native C++ projects in VS2010

I've read about the use of "Custom.Before.Microsoft.Common.targets" and "Custom.After.Microsoft.Common.targets" in order to execute a custom target before/after every project build and I would like to use this technique in order to change version info while building on our TeamCity build server.
The problem is that although it works for C# projects, it doesn't seem to work for native C++ projects.
After some digging around in the Microsoft.Cpp.targets file I found out that for native C++ projects this seems to be implemented through setting $(ForceImportBeforeCppTargets) and $(ForceImportAfterCppTargets).
I can't seem to find a single piece of information on the web about this technique for native C++ apps though, so I'm asking if I'm looking in the right direction or not.
Any help is appreciated.
For VC++ projects it is a bit different. You define a file to be imported either at the beginning or at the end of the project. To use this approach you need to define values for the properties ForceImportBeforeCppTargets or ForceImportAfterCppTargets. For example, if you want a file to be included at the beginning of the project, you can pass the value in on the command line. I just created a dummy VC++ project named CppTest01 and then created the two sample files below.
Before.proj
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="CustomTargetInBefore" AfterTargets="Build">
    <Message Text="From CustomTargetInBefore" Importance="high"/>
  </Target>
</Project>
After.proj
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="CustomTargetInAfter" AfterTargets="Build">
    <Message Text="From CustomTargetInAfter" Importance="high"/>
  </Target>
</Project>
Then I executed the following command:
msbuild CppTest01.vcxproj
/p:ForceImportBeforeCppTargets="C:\Temp\_NET\ThrowAway\CppTest01\CppTest01\Before.proj";
ForceImportAfterCppTargets="C:\Temp\_NET\ThrowAway\CppTest01\CppTest01\After.proj"
The result was:
C:\Temp\_NET\ThrowAway\CppTest01\CppTest01>msbuild CppTest01.vcxproj /p:ForceImportBeforeCppTargets="C:\Temp\_NET\ThrowAway\CppTest01\C
ppTest01\Before.proj";ForceImportAfterCppTargets="C:\Temp\_NET\ThrowAway\CppTest01\CppTest01\After.proj"
Microsoft (R) Build Engine Version 4.0.30319.1
[Microsoft .NET Framework, Version 4.0.30319.1]
Copyright (C) Microsoft Corporation 2007. All rights reserved.
Build started 10/18/2010 8:32:55 AM.
Project "C:\Temp\_NET\ThrowAway\CppTest01\CppTest01\CppTest01.vcxproj" on node 1 (default targets).
InitializeBuildStatus:
Creating "Debug\CppTest01.unsuccessfulbuild" because "AlwaysCreate" was specified.
ClCompile:
All outputs are up-to-date.
All outputs are up-to-date.
ManifestResourceCompile:
All outputs are up-to-date.
Link:
All outputs are up-to-date.
Manifest:
All outputs are up-to-date.
FinalizeBuildStatus:
Deleting file "Debug\CppTest01.unsuccessfulbuild".
Touching "Debug\CppTest01.lastbuildstate".
CustomTargetInBefore:
From CustomTargetInBefore
CustomTargetInAfter:
From CustomTargetInAfter
Done Building Project "C:\Temp\_NET\ThrowAway\CppTest01\CppTest01\CppTest01.vcxproj" (default targets).
Build succeeded.
0 Warning(s)
0 Error(s)
Time Elapsed 00:00:00.21
As you can see from the output, the targets were successfully injected into the build process. If you want to relate this back to Custom.Before.Microsoft.Common.targets and Custom.After.Microsoft.Common.targets, then you should know that the technique used there is a bit different. Specifically, if you create those files they are automatically imported into every C#/VB.NET project; in this case you have to set the property yourself. You really have two options here:
You can set this property as an environment variable
You can use another technique, ImportBefore & ImportAfter which is specific to VC++
For #1, let me explain a bit. In MSBuild, when you access a property with the syntax $(PropName) and no property named PropName exists, MSBuild checks the environment variables for one with that name; if such a value exists, it is returned. So if you have a build server on which you want to include a file in every VC++ build, just create those properties as environment variables.
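For instance, on the build server you could set the properties once (paths hypothetical) and every subsequent msbuild invocation will pick them up:

set ForceImportBeforeCppTargets=C:\BuildExtras\Before.proj
set ForceImportAfterCppTargets=C:\BuildExtras\After.proj
msbuild CppTest01.vcxproj

Now for the other technique.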
ImportBefore/ImportAfter
In VC++ a new concept is introduced. In Microsoft.Cpp.Win32.targets you can see the declaration at the top of the .targets file.
<Import Project="$(VCTargetsPath)\Platforms\Win32\ImportBefore\*.targets"
        Condition="Exists('$(VCTargetsPath)\Platforms\Win32\ImportBefore')" />
Then there is one towards the bottom:
<Import Project="$(VCTargetsPath)\Platforms\Win32\ImportAfter\*.targets"
        Condition="Exists('$(VCTargetsPath)\Platforms\Win32\ImportAfter')" />
A similar import declaration exists for the other target platforms as well. Take a look at the files at %ProgramFiles32%\MSBuild\Microsoft.Cpp\v4.0\Platforms\ for the specific names.
With this technique, if you want a file to be imported then simply create a file that ends with .targets and place it in the appropriate folder. The advantage is that it will be imported into every VC++ build for that platform, and that you can create many different files. The drawback is that you have to place them in those specific folders. That's the main difference between the two techniques: with the first you can specify the file location via a property, but it is not automatically included in every build; with the second it is, but you cannot change the location.
You can also add project content to one of the *.props files in the directory %LOCALAPPDATA%\Microsoft\MSBuild\v4.0\. It has the same effect.
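For example, a file such as %LOCALAPPDATA%\Microsoft\MSBuild\v4.0\Microsoft.Cpp.Win32.user.props gets pulled into VC++ builds automatically; the target below is purely illustrative:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Illustrative target: runs after every VC++ build that imports this user props file. -->
  <Target Name="CustomTargetFromUserProps" AfterTargets="Build">
    <Message Text="From Microsoft.Cpp.Win32.user.props" Importance="high" />
  </Target>
</Project>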

Splitting a build into multiple output directories

I have a solution that uses NServiceBus and contains at least 3 projects of interest for this scenario: a publisher, a sweep, and a webservice. Basically the sweep(s) gather data for the publisher to store in a database and then publish to subscribers. The webservice gives access to the data stored in the publisher's database.
When I built this solution on my dev box and deployed to the test environment, everything was fine. Last week we started using automated builds on a build server, and while it builds successfully, the services would not start up in the test environment. I found this to be because NServiceBus uses marker interfaces. The NServiceBus generic host uses reflection to check the assemblies in the same directory as the host for those markers so it knows which to fire up. Unlike my local build, the build server does not build each project into its own bin directory; it just dumps all the assemblies into a single bin directory. Since there are now multiple classes that want to be started by the host, it doesn't work out. Also, the webservice has a lot more assemblies to include than the publisher and sweep need, so the end result is the same assemblies get deployed to three different directories. It's unnecessary and doesn't work.
I've been modifying the build like so to get around this, but it's tedious and not change-tolerant:
<CreateItem Include="$(OutDir)*.*" Exclude="$(OutDir)BOHSweep*">
  <Output ItemName="PublisherFilesToCopy" TaskParameter="Include" />
</CreateItem>
<CreateItem Include="$(OutDir)*.*" Exclude="$(OutDir)InventoryPublisher*">
  <Output ItemName="BOHSweepFilesToCopy" TaskParameter="Include" />
</CreateItem>
<Copy SourceFiles="@(PublisherFilesToCopy)" DestinationFolder="\\XXXX\Transmittals\BOHPublisher\Test\%(RecursiveDir)" />
<Copy SourceFiles="@(BOHSweepFilesToCopy)" DestinationFolder="\\XXXX\Transmittals\BOHSweep\Test\%(RecursiveDir)" />
Any elegant suggestions on how to tackle this issue?
You should be able to use the technique found here: http://blogs.msdn.com/b/aaronhallberg/archive/2007/06/07/preserving-output-directory-structures-in-orcas-team-build.aspx
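If I recall that post correctly, the gist is to opt out of Team Build's output flattening by setting CustomizableOutDir in TFSBuild.proj, after which each project's own output path is honored again; a hedged sketch:

<PropertyGroup>
  <!-- Tells TFS 2008/"Orcas" Team Build not to override each project's OutDir. -->
  <CustomizableOutDir>true</CustomizableOutDir>
</PropertyGroup>

Each project can then direct its output to its own folder (e.g. via its OutputPath), so the publisher, sweep, and webservice no longer land in one shared bin directory and the Exclude-based copying becomes unnecessary.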