Common header and footer in fossil

I have a master remote repository, and two local repositories.
However, it appears that changes to header/footer/css do not propagate throughout the repos.
How can I achieve this?

Fossil treats the configuration of each of your repository's web interfaces (whatever you put in the header/footer/CSS etc.) as something unrelated to the versioning of your files. This makes sense when you are working with repositories that you are not the admin of: you are always in control of the UI on your local machine (where you are the admin by default) and can make it look and behave however you want, even if you do not have privileges to make the same changes to the central repo.
To propagate changes to the configuration (including the header/footer/CSS etc.) you can use the fossil configuration command. Just type fossil configuration --help to see how you can export, import and synchronize your configuration across repositories.
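As a rough sketch (the URL is a placeholder, and the exact configuration area names can vary between Fossil versions, so check fossil configuration --help first), the "skin" area covers the header, footer and CSS:

fossil configuration export skin skin-config.txt                     # dump the skin from one repository
fossil configuration import skin-config.txt                          # load it into another local repository
fossil configuration pull skin https://central.example.com/repo      # or pull it straight from the remote
fossil configuration push skin https://central.example.com/repo      # or push your local skin to the remote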

Related

Adding modules/themes to a platform after it has been built

New user here...
Installed D8+Civi by building a composer based git repo for the platform then stamping out a few test sites.
It worked really well.
But now I am at the point of realizing I missed a few modules and I want to add some themes to apply to the sites.
I can easily do it in the Git repository which was used to define the platform. But what is the proper way to manage the central platform data and files that are then used for the X number of sites?
I know the docs try to discuss this, but a tutorial walk-through would be very helpful.
As a guess, I could make the central platform files a Git clone and pull down the new stuff. But if there were a need for database updates, those wouldn't get done.
Ideas?
Thanks
It's not clear what you mean by "central platform data".
If you mean assets that are relevant for the entire platform, that can apply to all of the sites, you would do the following:
Add anything new to Git and push it (a sketch of the commands follows these steps).
Create a new platform to match the latest code in Git.
Run a Migrate task on the old platform to migrate the sites to the new one.
Database schema updates happen automatically.
The sites will now be running on the new codebase.
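As a rough sketch of that first step, assuming a Composer-managed platform repository (the module name here is only an example):

composer require drupal/admin_toolbar      # pull in the missing module (example name)
git add composer.json composer.lock
git commit -m "Add Admin Toolbar to the platform"
git push origin master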
If you're talking about site-specific assets that you don't want to be included in the platform's code, then you can enable Git for sites with the Aegir Hosting Git module.
It allows you to deploy site-specific Git repositories.
However, I don't recommend using that module for platforms, just sites, because it allows you to git pull on Production sites, which is a terrible idea. For that, see Aegir Deploy.
Both of these modules ship with Aegir so you won't need to install them. Some of the Hosting Git features may need to be enabled, however.

Sitecore Config Files + Project Setup

We are updating our sitecore to 8.2 and in the process I am trying to refine our source control and development workflow.
Goals
1. Have a single source of truth for support dlls, configs, lic, etc.
2. Have everything in source control that is needed to recreate the entire site from dev to prod. (excluding packages).
In order to have all of the different configs needed for the various machines I have created gulp tasks that transform the configs on build (dev, staging, prod). Those transformed configs are placed in a folder in the project that is then used to replace the originals on the target machines. This folder publishes all of its contents and seems to be working well so far.
What I don't know is how to deal with all of the config files that do not change.
Is it best to include all of those .config files in the project so that they publish? If not, then the target machine folders will have to be either manually managed (which seems like a bad idea) or kept up to date by a script (more customization, which by default is not a great idea).
The only downside (that I see) to including all of the configs in the project is the weight that it would add to file searches (and that doesn't seem like a very strong argument).
Am I not seeing something?
How are you other Sitecore humans handling this?
Gregory
As a general rule of thumb, do not check in any default files into Source Control.
The main reasons are bloat, which makes syncing/downloading from your source control take much longer, and upgrades, the latter being the more important reason.
If/when you upgrade in the future, if you do not have any Sitecore files checked into source control then you can simply deploy a new/clean instance of Sitecore, fix any conflicts in your own code and then deploy on top. You don't have to try and figure out what has changed in the default install files between releases.
Any changes you need to make to Sitecore configs or settings should be made using patch files and only those custom files added to your solution.
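For instance, a minimal patch file dropped into App_Config/Include might look like the following; the setting name and value here are purely illustrative:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <setting name="Mail.SmtpServer">
        <patch:attribute name="value">smtp.example.com</patch:attribute>
      </setting>
    </settings>
  </sitecore>
</configuration>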
How to handle this for deployments?
There are a few options. You could go down the scripted route, which takes a clean Sitecore install, unzips it, makes whatever modifications you need, then installs/unzips the modules that you use in your solution one by one.
Another option may be to create a default install with all the modules and then zip it up; an install is then a similar process to the above, but simpler, since it is just a case of unzipping a single file. You could use Sitecore SIM to install the instance and the modules and then make a backup, or do this manually.
Yet another alternative may be to check everything into source control, either in a separate repository or a different project, to ensure that all default files and configs are kept separate. If you need to upgrade in the future, simply delete the repo/project and add them back in again.
I would also do the same (a separate project) to keep all Support patches/dlls separate, again to help easily identify what fixes have been applied and to easily remove them if a future version resolves the issue.
These may add an additional step to your deploy, but keeping this separation will make your life much much easier when it comes to upgrade time.

Deploy multiple Content Delivery servers with the same configuration

I am building out a Sitecore farm with multiple Content Delivery servers. In the current process, I stand up the CD server and go through the manual steps of commenting out connection strings and enabling or disabling config files as detailed here per each virtual machine/CD server:
https://doc.sitecore.net/Sitecore%20Experience%20Platform/xDB%20configuration/Configure%20a%20content%20delivery%20server
But since I have multiple servers, is there any sort of global configuration file where I could dictate the settings I want (essentially a settings template for CD servers), or a tool where I could load my desired settings/template for which config files are enabled/disabled etc.? I have used the SIM tool for instance installation, but unsure if it offers the loading of a pre-determined "template" for a CD server.
It just seems inefficient to have to stand up a server and then configure each one manually, versus a more automated process (e.g. akin to Sitecore Azure, but in this case I need to install the VMs on-prem).
There's nothing directly in Sitecore to achieve what you want. Depending on what tools you are using then there are some options to reach that goal though.
Visual Studio / Build Server
You can make use of SlowCheetah config transforms to configure non-web.config files such as ConnectionStrings and AppSettings. You will need a different build profile for each environment you wish to create a build for, and add the appropriate config transforms and overrides. SlowCheetah is available as a NuGet package to add to your projects, and also as a Visual Studio plugin which provides additional tooling to help add the transforms.
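As a sketch, a ConnectionStrings.config transform for a production build profile could look roughly like this (server, database and credential values are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<connectionStrings xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <add name="web"
       connectionString="Data Source=PRODSQL;Initial Catalog=Sitecore_Web;User ID=scuser;Password=changeme"
       xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
</connectionStrings>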
Continuous Deployment
If you are using a continuous deployment tool like Octopus Deploy then you can substitute variables in files on a per environment and machine role basis (e.g. CM vs CD). You also have the ability to write custom PowerShell steps to modify/transform/delete files as required. Since this can also run on a machine role basis you can write a step to remove unnecessary connection strings (master, reporting, tracking.history) on CD environments as well as delete the other files specified in the Sitecore Configuration Guide.
Sitecore Config Overrides
Anything within the <sitecore> node in web.config can be modified and patched using the Include File Patching Facilities built into Sitecore. If you have certain settings which need to be modified or deleted for a CD environment then you can create a CD-specific override, which I place in /website/App_Config/Include/z.ProjectName/WebCD, and use a post-deployment PowerShell script in Octopus Deploy to delete this folder on CM environments. There are examples of patches within the Include folder, such as SwitchToMaster.config. In theory you could write a patch file to remove all the config sections mentioned in the deployment guide, but it would be easier to write a PowerShell step to delete those files instead.
I tend to use all the above to aid in deploying to various environments for different server roles (CM vs CD).
I strongly recommend you take a look at Desired State Configuration (DSC), which will do exactly what you're talking about. You need to set up the actual configuration at least once of course, but then it can be deployed to as many machines as you'd like. Changes to the config are automatically rolled out to all machines built from it, and any changes made directly to the machines (referred to as configuration drift) are automatically corrected. This can be combined with Azure, which now has the capability to act as a "pull server" through the Automation features.
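As a very small sketch (the node names, paths and the include file being removed are all hypothetical), a DSC configuration that keeps a CM-only config file off the CD boxes could look like:

Configuration SitecoreCdServer {
    Node 'CD01', 'CD02' {
        # make sure a (hypothetical) CM-only include file is absent on CD machines
        File RemoveCmOnlyInclude {
            DestinationPath = 'C:\inetpub\wwwroot\Website\App_Config\Include\Sitecore.CmOnlyFeature.config'
            Type            = 'File'
            Ensure          = 'Absent'
        }
    }
}

SitecoreCdServer                                            # compile the configuration into .mof files, one per node
Start-DscConfiguration -Path .\SitecoreCdServer -Wait -Verbose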
There's a lot of reading to do to get up to speed with this feature-set but it will solve your problem.
This is not a Sitecore tool per se.

Where is Appropriate to Put AWS Keys

I'm learning about Strongloop, it's pretty good so far.
Question: What is the appropriate place to put AWS keys? config.json? ..and how would I access them from my application?
Thanks
Ideally you would not put those credentials in any file that is committed. I usually find environment variables to be the best balance of convenience and security.
If you are using strong-pm, then you would do this with slc ctl env-set. If you are using some other supervisor, then you'll need to consult its docs.
A lot of times it is enough to use Upstart or systemd directly, which both make it fairly easy to set environment variables in the service process.
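For example, with strong-pm it would be something along these lines (the service name and key values are placeholders; check slc ctl --help for the exact syntax of your version):

slc ctl env-set my-app AWS_ACCESS_KEY_ID=AKIA-example AWS_SECRET_ACCESS_KEY=example-secret

With systemd you can set the same variables in the service unit instead:

[Service]
Environment=AWS_ACCESS_KEY_ID=AKIA-example
Environment=AWS_SECRET_ACCESS_KEY=example-secret

Either way, the application reads them at runtime through its environment (process.env.AWS_ACCESS_KEY_ID in Node) rather than from a committed config file.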
Besides the above answer, what you can do is handle these in your release procedure.
What we have done in our product is to keep all these entries in a config file which is deployed from a shared folder.
Let me elaborate.
We have local config files in Git, and separate config files on the production servers in a folder named 'shared'. Whenever a tagged release is deployed from Git, the config files from the shared folder overwrite the ones that came from the repository.

Mercurial: keep 2 branches in sync but with certain persistent differences?

I'm a web developer working on my own using django, and I'm trying to get my head round how best to deploy sites using mercurial. What I'd like to have is to be able to keep one repository that I can use for both production and development work. There will always be some differences between production/development (e.g. they might use different databases, development will always have debug turned on) but by and large they will be in sync. I'd also like to be able to make changes directly on the production server (tidying up html or css, simple bugfixes etc.).
The workflow that I intend to use for doing this is as follows:
Create 2 branches, prod and dev (all settings initially set to production settings)
Change settings.py and a few other things in the dev branch. So now I've got 2 heads, and from now on the repository will always have 2 heads.
(On dev machine) Make changes to dev, then use 'hg transplant' to copy relevant changesets to production.
push to master repository
(On production server) Pull from master repo, update to prod head
Note: you can also make changes straight to prod so long as you transplant the changes into dev.
This workflow has the drawback that whenever you make a change, not only do you have to commit it to whichever branch you make the change on, you also have to transplant it to the other branch. Is there a more sensible way of doing what I want here, perhaps using patches? Or failing that, is there a way of automating the commit process to automatically transplant the changeset to the other branch, and would this be a good idea?
I'd probably use Mercurial Queues for something like this. Keep the main repository as the development version, and have a for-production patch that makes any necessary changes for production.
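Roughly, assuming the mq extension is enabled in your hgrc (the patch name is arbitrary):

hg qnew production-settings     # start the for-production patch
                                # ...edit settings.py etc. with the production values...
hg qrefresh                     # record those edits into the patch
hg qpop                         # drop the patch while doing normal development work
hg qpush                        # re-apply it on the checkout you deploy from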
Here are two possible solutions, one using Mercurial and one not:
Use the hostname to switch between prod and devel. We have a single check at the top of our settings file that looks at the SERVER_NAME environment variable. If it's www.production.com it's the prod DB and otherwise it picks a specified or default dev/test/stage DB.
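A minimal sketch of that check at the top of settings.py, assuming the hostname is exposed as an environment variable and using placeholder database names:

import os

if os.environ.get('SERVER_NAME', '') == 'www.production.com':
    DEBUG = False
    DATABASES = {'default': {'ENGINE': 'django.db.backends.postgresql_psycopg2',
                             'NAME': 'myapp_prod'}}
else:
    DEBUG = True
    DATABASES = {'default': {'ENGINE': 'django.db.backends.sqlite3',
                             'NAME': 'dev.sqlite3'}}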
Using Mercurial, just have a clone that's dev and a clone that's prod, make all changes in dev, and at deploy time pull from dev to prod. After pulling you'll have 2 heads in prod diverging from a single common ancestor (the last deploy). One head will have a single changeset containing only the differences between dev and prod deployments, and the other will have all the new work. Merge them in the prod clone, selecting the prod changes on conflict of course, and you've got a deployable setup, and are ready to do more work on 'dev'. No need to branch, transplant, or use queues. So long as you never pull that changeset with the prod settings into 'dev' it will always need a merge after pulling from dev, and if it's just a few lines there's not much to do.
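In commands, that deploy looks roughly like this (paths are placeholders):

cd /path/to/prod
hg pull /path/to/dev      # brings the new dev changesets in as a second head
hg merge                  # merge them into the prod head, resolving conflicts in favour of the prod settings
hg commit -m "Merge dev into prod for deployment"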
I've solved this with local settings.
Append to settings.py:
try:
    from local_settings import *
except ImportError:
    pass
touch local_settings.py
Add ^local_settings.py$ to your .hgignore
Each deploy I do has its own local settings (typically different DB stuff and different origin email addresses).
PS: I only read the "minified versions of JavaScript" portion later. For this, I would suggest a post-update hook and a config setting (like JS_EXTENSION).
Example (from the top of my head! not tested, adapt as necessary):
Put JS_EXTENSION = '.raw.js' in your settings.py file;
Put JS_EXTENSION = '.mini.js' in your local_settings.py file on the production server;
Change JS inclusion from:
<script type="text/javascript" src="blabla.js"></script>
To:
<script type="text/javascript" src="blabla{{JS_EXTENSION}}"></script>
Make a post-update hook that looks for *.raw.js and generates .mini.js (minified versions of the raw files; a sketch of such a hook follows this list);
Add .mini.js$ to your .hgignore
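A possible shape for that post-update hook, assuming a minifier such as uglifyjs is installed and the JS lives under static/js (both assumptions). In the repository's .hg/hgrc:

[hooks]
post-update = ./minify.sh

And minify.sh next to it:

#!/bin/sh
# regenerate the .mini.js files from every .raw.js after each update
for f in static/js/*.raw.js; do
    uglifyjs "$f" -o "${f%.raw.js}.mini.js"
done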
Perhaps try something like this (I was just thinking about this issue; in my case it's a SQLite database):
Add settings.py to .hgignore, to keep it out of the repository.
Take your settings.py files from the two separate branches and move them into two separate files, settings-prod.py and settings-dev.py
Create a deploy script which copies the appropriate settings-X file to settings.py, so you can deploy either way (a minimal sketch follows these steps).
If you have a couple of additional files, do the same thing for them. If you have a lot of files but they're all in the same directory by themselves, you could just create a pair of directories: production and development, and then either copy or symlink the appropriate one into a deploy directory.
If you did something like this, you could dispense with the need for branching your repository.
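A minimal version of such a deploy script, with the file names suggested above and a hypothetical target argument:

#!/bin/sh
# usage: ./use-settings.sh prod   (or: ./use-settings.sh dev)
cp "settings-$1.py" settings.py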
I actually do this using named branches and straight merging instead of transplanting (which is more reliable, IMO). This usually works, although sometimes (when you've edited the different files on the other branch), you'll need to pay attention not to remove the differences again when you're merging.
So it works great if you're not changing the different files much.