I am trying to settle on a base structure for adopting ALM practices.
First I structured my source folders so that each main branch contains a Builds folder, intending to store my builds (CI, nightly, manual) under each branch. However, while creating a new build definition I got stuck on one of the fields.
For the Build Agent Folder under the workspace definition, should I leave it as $(SourceDir)? My source control and build server reside on the same machine.
The drop folder is not the same thing as the Builds folder in my source control, right? I mean, should I keep new builds under source control, or will the CI server handle that itself?
Thanks.
There's no benefit to storing your build output in version control, but if your process dictates this there's nothing stopping you from doing it. You'd need to customize the build process template to achieve this as part of the automated build. Out of the box, each build will create a label of the source (accessible against several work item types) and, assuming everything is set up correctly, you can pull a copy of the build from the drop location on the build server.
The build server will have its own workspace definition, in which $(SourceDir) maps to the top-level directory into which the source is pulled from version control. The drop folder is where the final build output is placed; it can be any local path or UNC file share accessible to the account under which the builds run.
The drop folder isn't related to your (or the build server's) local workspace folder, so even if you point the drop at a mapped folder it won't get added to version control. Remember that your local workspace just provides a level of abstraction over the physical structure of version control on the server; mapping them one to one locally won't in any way cause artifacts added locally to be automatically added to the server.
Hope that makes sense.
We are updating our Sitecore instance to 8.2 and, in the process, I am trying to refine our source control and development workflow.
Goals
1. Have a single source of truth for support DLLs, configs, the license file, etc.
2. Have everything needed to recreate the entire site, from dev to prod, in source control (excluding packages).
In order to have all of the different configs needed for the various machines I have created gulp tasks that transform the configs on build (dev, staging, prod). Those transformed configs are placed in a folder in the project that is then used to replace the originals on the target machines. This folder publishes all of its contents and seems to be working well so far.
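For illustration, here is a minimal sketch of the kind of per-environment gulp task described above. The folder and task names are hypothetical and the actual transform logic is omitted; only the "put the environment-specific configs into a folder that publishes" shape is shown.

// gulpfile.js - hypothetical names; real transform logic omitted
var gulp = require('gulp');

// Copy the environment-specific configs into the folder that gets published
// and later replaces the originals on the target machines.
function copyConfigs(environment) {
    return gulp.src('Configs/' + environment + '/**/*.config')
        .pipe(gulp.dest('TransformedConfigs'));
}

gulp.task('configs:dev', function () { return copyConfigs('dev'); });
gulp.task('configs:staging', function () { return copyConfigs('staging'); });
gulp.task('configs:prod', function () { return copyConfigs('prod'); });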
What I don't know is how to deal with all of the config files that do not change.
Is it best to include all of those .config files in the project so that they publish? If not, then the target machine folders will have to be either manually managed (seems like a bad idea) or kept up to date by a script (more customization, which by default is not a great idea).
The only downside (that I see) to including all of the configs in the project is the weight that it would add to file searches (and that doesn't seem like a very strong argument).
Am I not seeing something?
How are you other Sitecore humans handling this?
Gregory
As a general rule of thumb, do not check any default Sitecore files into source control.
The main reasons are bloat (syncing/downloading from source control takes much longer) and upgrades, the latter being the more important reason.
If/when you upgrade in the future and you do not have any Sitecore files checked into source control, you can simply deploy a new/clean instance of Sitecore, fix any conflicts in your own code, and then deploy your code on top. You don't have to figure out what has changed in the default install files between releases.
Any changes you need to make to Sitecore configs or settings should be made using patch files and only those custom files added to your solution.
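For example, a small patch file of the kind described above might look like this; the file location, setting name and value here are purely illustrative.

<!-- App_Config/Include/zMyProject/MySettings.config (illustrative only) -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- Override a default Sitecore setting without editing the original file -->
      <setting name="Media.MaxSizeInDatabase">
        <patch:attribute name="value">50MB</patch:attribute>
      </setting>
    </settings>
  </sitecore>
</configuration>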
How to handle this for deployments?
There are a few options. You could go down the scripted route: take a clean Sitecore install, unzip it and make whatever modifications you need, then install/unzip the modules that you use in your solution one by one.
Another option may be to create a default install with all the modules and then zip it up; an install would then follow a similar process to the above, but be the simpler case of just unzipping a single file. You could use Sitecore SIM to install the instance and modules and then back it up, or do this manually.
Yet another alternative may be to check everything into source control, either under a separate repository or a different project, to ensure that all default files and configs are kept separate. If you need to upgrade in the future, simply delete the repo/project and add the new files back in.
I would also do the same (a separate project) to keep all Support patches/dlls separate, again to help easily identify what fixes have been applied and to easily remove them if a future version resolves the issue.
These may add an additional step to your deploy, but keeping this separation will make your life much much easier when it comes to upgrade time.
I have a single Cloud Source Repository with multiple projects. I am able to create a cloudbuild.yaml file in the repo root that builds all projects. However, I don't want to have a build trigger that rebuilds all of the projects since most commits will be for a single project. Ideally I would like to have a cloudbuild.yaml file in each project subdirectory and a build trigger that detects changes in the project subdirectory of the repository. Is something like this possible?
As a possible workaround, I believe I may be able to keep my cloudbuild.yaml in the repository root and create a custom step that gets the commit sha (via the COMMIT_SHA substitution) and then gets the list of files committed (via "git show --name-only --pretty=format: $COMMIT_SHA") to determine which project should be built and what image should be created. An alternative may be to adopt a tag naming convention that contains the project name and base the trigger on that, but I don't want to tag each commit.
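As a rough, hypothetical sketch of that custom step (it assumes the .git history is present in the build workspace; later steps would have to consume the result, since Cloud Build steps themselves are not conditional):

# Root cloudbuild.yaml - hypothetical detection step only
steps:
- name: 'gcr.io/cloud-builders/git'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    # List the top-level directories touched by this commit
    git show --name-only --pretty=format: $COMMIT_SHA | cut -d/ -f1 | sort -u > /workspace/changed_dirs.txt
    cat /workspace/changed_dirs.txt
# Subsequent steps would read /workspace/changed_dirs.txt and build/tag only the affected project.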
Note, it seems like build triggers work very well when you have multiple repos but we made the decision to go with a mono repo and I don't want to rehash that debate in this question. I'd like to understand how to best use the Build Triggers in a mono repo.
In DDE, if I import a Jar file into an NSF, either using the new Jar design element or via WEB-INF\lib, then as soon as I save an XPage the workspace goes into constant rebuild: it rebuilds the workspace, stops, rebuilds, stops, and so on.
It will only stop for good if I delete the jar design element, remove it from the build path or turn Build Automatically off.
I've tried this with a selection of different Jars, on a local database with no network connection and on a server copy; all result in the same constant rebuild.
Referencing an external Jar works fine, but I'd prefer to keep it in the NSF.
I am using DDE 9.0.
I'm guessing it's somehow related to this issue, which describes how Jars in NSFs have to be detached to compile. It's as if this detachment causes an update which makes DDE think it has to rebuild again:
https://stackoverflow.com/search?q=xpages+jar+build
What works for me:
Switch off automatic build
Import the Jar
Add the Jar to the build path
Link the NSF to an OnDisk project
Set DDE to monitor changes automatically (in Preferences)
Switch automatic build back on
Then, when you need to replace the Jar with a newer version, just copy it into the OnDisk project; you need to restart the HTTP preview after replacing the Jar.
I'm trying to set up a build definition in TFS 2010. The options for this seem very limited; for instance, I have 5 solution files in my source control and I don't seem to be able to specify which one to use. I've selected a workspace from my deployment server (which does a TF get every 10 minutes, so I know it's a valid workspace), but when the build runs it gives me an error complaining about the mapping, and it seems to have made up its own mapping from somewhere.
Mapping I set: $/InteractV4/Dev/IV4ProductionSR/
Error: There is no working folder mapping for $/InteractV4/Dev/IV4Support/iv4ProductionSR.sln.
There are 2 issues with this error. 1: It's not the workspace I was trying to use. 2: It's wrong; there is a working folder mapping for this source, both on my local PC and on the deployment PC, but NOT on the build server. Do I need to set up a load of folders and mappings on the build agent server? Or on the main TFS (source) server?
Thanks.
TFS builds operate on private workspaces that get generated during the build process, so using a custom workspace is impossible without tweaking. It's possible to keep TFS from regenerating a new workspace with each build by editing the build definition under "Process" : "2. Basic" : "Clean Workspace" and changing the default value All to either Outputs or None. The mappings are set for each build definition, where various pairs exist:
Source Control Folder | Build Agent Folder
$/foo/bar | $(SourceDir)\somewhere
$(SourceDir) is substituted during the build and gets its value from the build agent settings. If you go to the TFS Admin Console and select "Build Configuration", you'll be presented with a list of build agents running on the server (there might be additional agents on other servers). Clicking "Properties" on an agent pops up a properties window; the "Working directory" entry there is the one that resolves and substitutes $(SourceDir) during the build. For example, an entry $(SystemDrive)\Builds\$(BuildAgentId) could resolve into something like C:\Builds\88. So, for a TFS build running on this agent, you should expect all sources that sit in source control under $/foo/bar to be found under C:\Builds\88\somewhere.
EDIT: According to your comments, you now have a mapping like this:
$/InteractV4/Dev/IV4ProductionSR | $(SourceDir)
Your build fails, as "There is no working folder mapping for $/InteractV4/Dev/IV4Support/iv4ProductionSR.sln".
Is this source control directory $/InteractV4/Dev/IV4Support mapped in your Build Definition?
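If it isn't, adding a second mapping pair along these lines in the build definition's workspace should resolve that error (the local subfolder names here are only examples):

$/InteractV4/Dev/IV4ProductionSR | $(SourceDir)\IV4ProductionSR
$/InteractV4/Dev/IV4Support | $(SourceDir)\IV4Support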
Aim: Set up an ant/cmd script that will package the artifacts from several builds into a single zip. I plan to do this by setting up a final build configuration that will have a dependency on those several projects.
So all my build configurations build successfully and produce build artifacts on the build server under .BuildServer\system\artifacts\{PROJECT}\{several configurations}\... In my "Artifact Aggregating" configuration, I need to be able to reference what those artifacts are and where they live, using variables that can be used in my ant/cmd script. I.e. I have Project A with configurations w, x, and y; how would I define/construct variables for these configurations (w, x, y) that can be referenced by build configuration z? I looked at the current TeamCity documentation, i.e. http://www.jetbrains.net/confluence/display/TCD3/System+Properties+of+a+Build+Configuration#SystemPropertiesofaBuildConfiguration-ref, but I find this doesn't resolve my query.
Is there a way I can set up my artifact paths for configurations w, x and y to make the final task easier?
What would be the best way to accomplish this task? Any ideas are welcome.
This is how we do that.
Create an (n+1)th configuration (ZIP_ALL) and add a dependency on all n projects; see Dependency trigger
Create a network share \\server\Build for aggregating the projects' build output (you need a cleanup strategy for that folder - we simply drop everything; our teammates create subfolders named after the SVN revision, since TeamCity sets a variable with the revision value)
For each configuration, create an msbuild (or ant, or rake) script that builds and, if needed, zips all the build output
Copy the zip file or the complete output folder to the common location (\\server\Build); see Copy Task
Create an ant script for the ZIP_ALL configuration that simply zips all files in the common location (a sketch is shown after this list)
Publish that to TeamCity via Artifact Publishing
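For reference, a minimal sketch of the ant script mentioned in the steps above; the file names and paths are placeholders, and the service message at the end is just one way to publish the result (configuring artifact paths on the ZIP_ALL configuration works as well).

<!-- zip-all.xml - placeholder names and paths -->
<project name="zip-all" default="zip-all">
  <target name="zip-all">
    <!-- Zip everything the other configurations copied into the common share -->
    <zip destfile="all-artifacts.zip" basedir="\\server\Build"/>
    <!-- Ask TeamCity to publish the zip as a build artifact -->
    <echo message="##teamcity[publishArtifacts 'all-artifacts.zip']"/>
  </target>
</project>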