Split a build into multiple outputs while maintaining webservice folder structure

A little over a year ago I posted a question about splitting a build into multiple outputs and followed the recommended solution. Now I need to build on this a little. The solution contains three projects that produce deployable bits, plus some other projects that contain code common to the deployable projects (business logic and data access type stuff). Of the three deployable projects, two are Windows services and one is a WCF webservice. I can build all of these locally just fine, but when I build this on our build server things get a little strange.
When I build my webservice project locally I get a folder structure like this:
-Published Websites
    Service file
    Config file
    -Bin
        Service DLLs
This is the desired output for the webservice; however, I don't want to build locally. When I originally built on the build server, everything was getting jumbled together like this:
-Published Websites
    Service file
    Config file
    -Bin
        Service DLLs
        All Windows service EXEs
        All DLLs needed by the service EXEs
I followed the suggestions in the article linked at the beginning of this post. In a nutshell, I made a change to the build template and also tweaked the output path in the project files (see the sketch after the structure below). This resulted in my build output looking like this:
Build Folder
    -WIN SERVICE 1
        EXEs
        DLLs
    -WIN SERVICE 2
        EXEs
        DLLs
    -WEBSERVICE
        DLLs
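For reference, the output-path tweak in each project file looked something like this (a sketch only; the $(TeamBuildOutDir) property comes from the customized build template in the linked approach and is empty for local builds, and the folder name is a placeholder):

<!-- In each deployable project's .csproj: redirect output only for server builds -->
<PropertyGroup>
    <OutDir Condition=" '$(TeamBuildOutDir)' != '' ">$(TeamBuildOutDir)\WinService1\</OutDir>
</PropertyGroup>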
The problem here is that the folder structure for the webservice is not intact (the .svc and config files and the Bin folder are all missing). I need this structure because I don't deploy directly to the web server; we use a staging location. I'd rather not split the webservice into its own solution, as it is logically related to the Windows services and all the common code in the solution.
So, the big question is: how do I set up a build that can output multiple directories, where one of those directories is a webservice containing the appropriate files and directory structure?

The only solution I've been able to come up with is to have two different builds for this solution: one builds the webservice and the other builds the Windows services. This allows me to keep the solution together. We'll just have to remember to run both builds when common code changes.
Any suggestions/refinements to this solution are welcome.


Crashing WCF WebService on Windows Phone

I am using 3 webservices in my project and it was running correctly. But recently it has started crashing when creating its client, and I haven't changed anything.
How can I solve this? Could you help me, please?
It's saying your service config file is not found. Are you referencing it correctly from the app.config?
It looks like you're using WPF or Silverlight, so find your app.config file.
You cannot apply configSource= to <system.serviceModel> itself, since that is a config section group, not a simple config section, and the configSource attribute is only available on simple configuration sections.
You should, however, absolutely be able to apply the configSource attribute to any of the nodes inside <system.serviceModel>. I do this all the time, in production systems, and it just works: for behaviors, services, clients, bindings, etc., each in a separate .config file.
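As a sketch of that split (file and binding names here are illustrative, not prescribed):

<!-- app.config -->
<configuration>
    <system.serviceModel>
        <bindings configSource="bindings.config" />
        <client configSource="client.config" />
    </system.serviceModel>
</configuration>

<!-- bindings.config: the root element must match the section name -->
<bindings>
    <basicHttpBinding>
        <binding name="MyBinding" />
    </basicHttpBinding>
</bindings>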
.ClientConfig is also a bad extension; all configuration files should have the .config extension.
In the worst-case scenario, if you can't get the external config source for the settings to work, move them back to the app.config file of the application!

WSO2 Identity Server remote debug source-binary mismatch

I am trying to put together an Eclipse project for remote debugging a standard WSO2 Identity Server. I have created a user library consisting of the dozens of WSO2 jar files and tried to manually identify, download, and attach the appropriate source files from SVN based on the platform-chunk-patch versioning scheme. The problem is that there is one class (and possibly others) where the source-binary mapping is out of sync, making debugging impossible.
An example:
https://svn.wso2.org/repos/wso2/carbon/kernel/tags/4.2.0/core/org.wso2.carbon.user.core/4.2.0/src/main/java/org/wso2/carbon/user/core/jdbc/JDBCUserStoreManager.java
The HEAD version of this java file in SVN does not match up with the level 4 patched class binary:
./wso2is-4.6.0/repository/components/patches/patch0004/org.wso2.carbon.user.core_4.2.0.jar#uzip/org/wso2/carbon/user/core/jdbc/JDBCUserStoreManager.class
I do not want to build WSO2 myself, so the nicest solution would be if someone could point me to a wso2-is-4.6.0 patch level 04 repository of binary-source bundles, either in the form of composite jars with classes+sources or Maven source artifacts.
Alternatively a URL and a revision number in SVN pointing to the correct source of JDBCUserStoreManager would suffice.
You can find the required source of JDBCUserStoreManager from here. You can find the sources of all patches for kernel here.

Is there an alternative for using a .testsettings file with TestCases and Microsoft Test Manager?

We have a peculiar situation here that is causing our automated tests to fail on a newly created lab environment, using TFS 2012.
We've always had a bunch of 'unit' tests that test our DAL code, which in turn uses the Enterprise Library Data Access Application Block to perform operations on the database. This was set up quite a few years ago to enable our clients to choose either SqlServer or Oracle databases alongside our product, taking advantage of the DatabaseFactory class and all the supporting generic interfaces and classes in entlib.data. I put 'unit' in quotes because these are actually not pure unit tests but integration ones, seeing as they require a real database to work.
To test the same SQL code against both databases, we maintain two separate .config files inside a 'Resources' folder in our TFS project branch, pointing to our test databases:
Resources\SqlServer\ConnectionStrings.config (SqlServer specific connection strings)
Resources\Oracle\ConnectionStrings.config (Oracle specific connection strings)
In the root Resources folder, there are two accompanying .testsettings files, responsible for deploying files specific to each database:
Resources\SqlServer.testsettings (which deploys the SqlServer\ConnectionStrings.config file)
Resources\Oracle.testsettings (which deploys the Oracle\ConnectionStrings.config file)
Since the whole structure is in source control, the .testsettings files are able to find the .config files using relative paths, allowing us to test everything without having to set up parameters manually.
On devs machines, we always select the SqlServer.testsettings file when running the tests, so that they don't need to have the whole oracle environment installed to validate their changes before checking in the code. The Oracle side of the validation always occurred in our build process, where we actually test every method twice: first using the same SqlServer.testsettings used by the developers, and then using the Oracle.testsettings.
This way, we can set up our test assemblies' app.config files to redirect the connectionStrings node to an external file, like this:
<configuration>
    <connectionStrings configSource="ConnectionStrings.config"/>
    ...
When the tests are run, MSTest copies the appropriate ConnectionStrings.config file to the test's working folder, based on which .testsettings file was used to initiate the run.
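The external file itself contains just the bare section, for example (a sketch; the name and connection string values are placeholders):

<!-- Resources\SqlServer\ConnectionStrings.config -->
<connectionStrings>
    <add name="TestDb"
         connectionString="Data Source=TESTSERVER;Initial Catalog=TestDb;Integrated Security=True"
         providerName="System.Data.SqlClient" />
</connectionStrings>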
This was working fine until today, when I discovered that tests started through Microsoft Test Manager ignore the Visual Studio .testsettings files. Now I'm trying to run these same tests in our lab environment but the ConnectionStrings.config files are not deployed (understandably) and the tests fail.
How can we achieve this without using .testsettings files? After having huge headaches trying to setup oracle correctly in our new x64 build server, we disabled Oracle tests in the build definition. Now that we started setting up our lab environment, we thought about having one of the machines in it configured with our whole system using Oracle, enabling us to again run these 'unit tests' with oracle-specific connection strings to validate our queries. At the same time, we want to keep testing everything locally and on the build server using SqlServer also.
I think using [DeploymentItem] in this case is impossible, since it is meant for static files and not selectable, dynamic ones like our current setup.
Is there any equivalent to the .testsettings deployment process that we could use with TestCases inside MTM/Lab Env? On the Properties tab for our TestPlan, I can see the Automated Runs -> Test Settings option, but that only seems to allow deployment by specifying absolute paths (which will actually be resolved on the target machines). Is there a way to specify a relative path there, pointing to our ConnectionStrings.config files checked in on TFS? Maybe yet another alternative exists that I'm missing, perhaps using multiple build configurations?
Create separate build configurations for each of the server types by going into Configuration Manager and clicking New under "Active solution configurations". Then edit the project file and add something like this:
<PropertyGroup Condition="'$(Configuration)' == 'Oracle'">
    <AppConfig>App.Oracle.config</AppConfig>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)' == 'SQL'">
    <AppConfig>App.SQL.config</AppConfig>
</PropertyGroup>
Then ensure you have the correct connection strings in each of the config files. You can then configure TFS to build using those build configurations.
More info: using PropertyGroup and Condition, MSBuild configurations, and MSBuild project properties.

How do I set up a TFS build definition when my local PC, source, build agent, and deployment are all on separate servers

I'm trying to set up a build definition in TFS 2010. The options for this seem very limited; for instance, I have 5 solution files in my source control and I don't seem to be able to specify which one to use. I've selected a workspace from my deployment server (which does a TF get every 10 minutes, so I know it's a valid workspace), but when the build runs it gives me an error complaining about the mapping, and it seems to have made up its own mapping from somewhere.
Mapping I set: $/InteractV4/Dev/IV4ProductionSR/
Error: There is no working folder mapping for $/InteractV4/Dev/IV4Support/iv4ProductionSR.sln.
There are two issues with this error: 1) it's not the workspace I was trying to use, and 2) it's wrong; there is a working folder mapping for this source, both on my local PC and on the deployment PC, but NOT on the build server. Do I need to set up a load of folders and mappings on the build agent server, or on the main TFS (source) server?
Thanks.
TFS builds operate on private workspaces that get generated during the build process, so using a custom workspace is impossible without tweaking. It is possible to keep TFS from regenerating a new workspace with each build by going to the Build Definition, editing "Process" > "2. Basic" > "Clean Workspace", and changing the default value All to either Outputs or None. The mappings are set for each Build Definition, where various pairs exist:
Source Control Folder | Build Agent Folder
$/foo/bar | $(SourceDir)\somewhere
The $(SourceDir) is substituted during the build and gets its value from the build agent settings. If you go to the TFS Admin Console and select "Build Configuration", you'll be presented with a list of build agents running on the server (there might be additional agents on other servers). Clicking "Properties" on an agent pops up a window; its "Working directory" entry is the one that resolves and substitutes $(SourceDir) during the build. For example, an entry $(SystemDrive)\Builds\$(BuildAgentId) could resolve into something like C:\Builds\88. So, for a TFS build running on this agent, you should expect all sources that stand in source control under $/foo/bar to be found under C:\Builds\88\somewhere.
EDIT: According to your comments, you now have a mapping like this:
$/InteractV4/Dev/IV4ProductionSR | $(SourceDir)
Your build fails, as "There is no working folder mapping for $/InteractV4/Dev/IV4Support/iv4ProductionSR.sln".
Is this source control directory $/InteractV4/Dev/IV4Support mapped in your Build Definition?

Is there an ideal way to move from Staging to Production for ColdFusion code?

I am trying to work out a good way to run a staging server and a production server for hosting multiple ColdFusion sites. Each site is essentially a fork of a repo, with site-specific changes made to each. I am looking for a good way to have this staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling the sites into EAR files to be run on the production server, but I cannot seem to wrap my head around ColdFusion archives, plus I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use Subversion as a go-between for a site: once a site is QA'd, the code is committed, and then the production server's working directory has an SVN update run on it, which then triggers a code copy from the working directory to the actual live code. This worked fine, but has many moving parts and still requires some form of server access to each server to run the commits and updates. Plus, this worked for an individual site; I think it may be a nightmare to set up and maintain this architecture for multiple sites.
Ideally I would want a group of developers to have FTP access with the ability to log into some control panel to mark a site for QA, then have a QA person check the site and mark it as stable/production-worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person, mind you.)
Sorry if that last part wasn't so much the question, just a framework to understand my current thought process.
I agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts.
You want a repeatable build process that minimizes opportunities for deltas. With that in mind:
- SVN export a build.
- Tag the build in SVN.
- Turn that export into a .zip, something with an installer, etc. The idea is one unit to validate, with a set of repeatable deployment steps.
- Send the build to QA.
- If QA approves, deploy that build into production.
Move whole code bases over as a build, rather than just changed files. This way you know what's put into place in production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
As for actual production deployment, I have not come across a tool to solve the multiple servers, different code bases challenge. So I think you're best served rolling your own.
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
Learning Ant: Ant in Action [book].
On source control: for the situation you describe, I would maintain a core code base plus overlays per site. Export core, then export the site-specific files over it. This ensures that core updates make it in, except where site-specific changes deliberately override them.
Call this combination a "build". Do builds with Ant. Maintain an Ant script, or perhaps more flexibly an Ant configuration file, per core & site combination. Track the version numbers of core and site as part of a given build.
If your software is stuffed inside an installer (a Nullsoft installer, for instance), that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but I haven't seen anyone actually do this with CF). The point being: one file that encompasses the whole build.
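A minimal Ant sketch of that core-plus-overlay packaging (directory names, property values, and version numbers are all placeholders):

<project name="site-build" default="package">
    <!-- Placeholder locations for the exported core and site-specific code -->
    <property name="core.dir" location="export/core"/>
    <property name="site.dir" location="export/site"/>
    <property name="build.dir" location="build"/>
    <!-- Track core and site versions as part of the build artifact name -->
    <property name="core.version" value="1.2"/>
    <property name="site.version" value="3.4"/>

    <target name="package">
        <!-- Lay down core first, then overlay the site-specific files on top -->
        <copy todir="${build.dir}">
            <fileset dir="${core.dir}"/>
        </copy>
        <copy todir="${build.dir}" overwrite="true">
            <fileset dir="${site.dir}"/>
        </copy>
        <!-- One file that encompasses the whole build -->
        <zip destfile="site-${site.version}-core-${core.version}.zip" basedir="${build.dir}"/>
    </target>
</project>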
This build file is what QA should validate, so validation includes deployment, configuration, and functionality testing. See the deployment notes below on how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it. Meaning QA would deploy/install builds using the same process on their servers before doing a staging-to-production deployment.
To do this I would create something that tracks which server receives which build file, along with whatever credentials and connection information are necessary to make that happen, most likely via FTP. Once the build is transferred, the tool would then extract the build file / run the installer. This last piece is an area I would have to research: how to let one server run commands such as extraction or installation remotely.
You should look into Ant as a migration tool. It allows you to package your build process with a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executes it the same way, every time.
Ant can handle zipping and unzipping, copying files around, making backups if needed, working with your Subversion repository, transferring via FTP, compressing JavaScript, and even calling a web address if you need to do something like flush the application memory or server cache once the build is installed. You may be surprised by the things you can do with Ant.
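For instance, the FTP transfer and cache-flush steps might look like this (a sketch; host, credentials, paths, and the flush URL are placeholders, and the ftp task requires the optional commons-net library on Ant's classpath):

<target name="deploy">
    <!-- Push the packaged build to the target server -->
    <ftp server="${deploy.host}" userid="${deploy.user}" password="${deploy.pass}"
         remotedir="/sites/mysite">
        <fileset dir="${build.dir}"/>
    </ftp>
    <!-- Hit a URL afterwards, e.g. to flush the ColdFusion application cache -->
    <get src="http://${deploy.host}/flushcache.cfm" dest="flush-response.html"/>
</target>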
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point. I have one on RIAForge, for example, that does some interesting stuff and calls a Groovy script to do some more processing on my files during the build. If you search RIAForge for build.xml files, you will come up with a great variety of them, many of which are directly for ColdFusion projects.