How do I inspect the working directory structure of a GoCD job run?

I've got a job that's failing, and I think the problem is that I've misunderstood the layout of the directory structure for the running job.
How can I see what's actually on disk so I can diagnose what's happening?
Can I do it from the GoCD UI, or am I going to have to connect to the agent box and look at things that way?
In Jenkins, I'd just use the "workspace" link to eyeball the layout.

Currently, I'm adding directory-listing commands to the jobs themselves, then inspecting the output in the logs.
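For reference, the kind of task I'm adding looks something like this (using the YAML config plugin; the exact listing command is just an example):

# dump the job's working directory tree into the console log
tasks:
  - exec:
      command: /bin/sh
      arguments:
        - -c
        - find . -type f | sort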

A comprehensive way to inspect the structure is to define an "artifact" with a source of * - this declares the job's entire working directory as an artifact, which you can then browse from the job's Artifacts tab in the UI.
This is only suitable as a temporary diagnostic, though: it will use up a lot of disk space on the server, and creating the artifact takes long enough to noticeably slow down your pipeline.
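If you're using the YAML config plugin, that might look roughly like the sketch below (the destination name is just an example):

artifacts:
  - build:
      # capture the whole working directory so it can be browsed in the UI
      source: "*"
      destination: workspace-snapshot

Remember to remove it again once you've diagnosed the layout.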


Setting build priority in YAML or UI

Is there a way to set up a build's priority in a YAML-based pipeline? There seem to be references to build priority in the Azure DevOps API, but nothing on how to do this via YAML. I thought there might be some docs in the Triggers section, but no.
We need this because we have some fast-building NuGet packages that get starved by slow-building pipelines, making turnaround time for packages painful.
The closest workaround I could come up with is using agent demands in the YAML
demands:
- Agent.ComputerName -equals XYZ
to pin pipelines to separate agents, but this is a bit of a hack and doesn't use agents efficiently.
A way to set this in UI would be acceptable, but I couldn't seem to find anything.
Recently Azure DevOps introduced the ability to manually specify that a build/release runs next.
This manifests as a Run next button.
So while you can't say "this pipeline always takes priority" yet, you can manually force a specific run to the front of the queue.
If you need a specific pipeline to always take priority, then you likely want to set up a separate agent pool just for those pipelines, or use demands as Leo Liu mentioned.
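For example, pinning the fast NuGet pipelines to a dedicated pool in YAML might look like this (the pool name is made up):

pool:
  name: FastBuilds   # agent pool reserved for the quick package builds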
I'm afraid this feature is not supported in Azure DevOps at the moment.
There is a popular UserVoice request for it; you can upvote it and follow the feedback on that ticket.
Currently, as a workaround, set the demands in your build definitions to force building on specific agents, just as you did.
Hope this helps.

Can I sync two bazel-remote caches using rsync?

I have a build pipeline that builds and tests changes before they are merged to the main line. Once that happens, it would be great if the Bazel actions from that build are available to developers. Unfortunately, the build pipeline runs in the cloud and uses an in-cloud cache, but the developers use an on-premises cache.
I am using https://github.com/buchgr/bazel-remote
Does anyone know if I can just rsync the artifacts from the data directory of the cloud cache to the developers' cache in order to give them access to the pre-built artifacts? Normally, I would just try it out, but I'm concerned about subtle issues that might poison the cache or negatively affect the hit rate, so I'm hoping to hear from someone who understands the code before I go digging.
You can rsync the cache directory contents and use them from another location, but this won't work with a running bazel-remote: the new items will be ignored until bazel-remote is restarted.
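A one-way sync could be as simple as the sketch below (paths and host name are assumptions), keeping that restart caveat in mind:

# push new cache entries from the cloud cache's data dir to the on-prem host
rsync -av --ignore-existing /srv/bazel-cache/ dev-cache-host:/srv/bazel-cache/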
Another option would be to use the http_proxy configuration file setting to automatically put/get cache items to/from another bazel-remote instance. An example configuration file was recently added to README.md in the bazel-remote git repository.
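Based on that README example, the proxy section of the config file might look roughly like this (the URL is an assumption):

http_proxy:
  url: https://cloud-cache.example.com:8080   # put/get cache items to/from this backing instance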

How big is the Loggregator buffer on Swisscom Application Cloud?

The documentation says:
$ cf logs APP_NAME --recent
displays all the lines in the Loggregator buffer.
How big is this buffer? Can I change it myself?
Disclaimer: I don't work for Swisscom so I have no idea how their platform is configured. That said, I believe I can still answer your questions.
How big is this buffer?
I believe the value you're looking for is doppler.maxRetainedLogMessages which defaults to 100.
https://github.com/cloudfoundry/loggregator/wiki/Loggregator-Component-Properties
That said, I would make the case that the number of lines buffered shouldn't matter. The recent logs feature is purely a convenience; it's not meant for log storage. If you need to know specifically how much it will buffer, you're probably using it wrong and should consider a different option.
A couple of common cases where the --recent buffer might not be large enough, and their solutions:
Scenario 1: You're troubleshooting (perhaps a failed app start or stage) and cf logs --recent doesn't show you enough information.
Run cf logs in a second terminal and repeat your action. This will show you the entire log output, which you can also save locally by redirecting to a file with cf logs <app> > some-file.log.
Scenario 2: You're running a production app.
You will want reliable log storage for a sustained and predictable amount of time. In this case, you'll want to set up a log drain so that your logs are sent somewhere outside of the platform for long-term storage.
https://docs.cloudfoundry.org/devguide/services/log-management.html#user-provided
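As a sketch, wiring up a user-provided syslog drain looks something like this (the app name, drain name, and URL are all examples):

cf create-user-provided-service my-log-drain -l syslog-tls://logs.example.com:6514
cf bind-service my-app my-log-drain
cf restage my-app   # restage so the drain binding takes effect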
If you've got another scenario, feel free to add a comment. There is probably a different/better way to handle it.
Can I change it myself?
The setting I mentioned above is configured by your platform operator.
Hope that helps!
Thanks to Daniel Mikusa for the insights and helpful explanation. I can confirm that Swisscom Application Cloud uses the default value.
Quote from our deployment manifest:
loggregator:
(...)
maxRetainedLogMessages: 100
outgoing_dropsonde_port: 8081

How to simulate Fabric execution

Is there a way to tell Fabric to just print the commands it would execute instead of actually executing them?
I'm preparing an installation script, and if it fails I'll have to undo the steps performed before the error occurred.
I've checked the "fab" command's parameters but found nothing like this.
Thank you for your help.
There are tickets (including issue 26) open on GitHub that request such a feature. The challenge described in that thread is that you can't always be certain what the script would do, i.e. some behaviour may change depending on the state of the remote server.
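If a rough preview would still help despite that caveat, you can fake one with a thin wrapper in your fabfile; this is a hypothetical sketch for Fabric 1.x, not a built-in feature:

# fabfile.py - hypothetical dry-run wrapper, not a built-in Fabric feature
from fabric.api import env, run as fabric_run

DRY_RUN = True  # flip to False to actually execute the commands

def run(cmd):
    """Print what would run on the current host instead of executing it."""
    if DRY_RUN:
        print("[%s] would run: %s" % (env.host_string, cmd))
    else:
        fabric_run(cmd)

def install():
    run("apt-get update")
    run("apt-get install -y mypackage")

Keep in mind this only previews the literal commands; anything that depends on remote state will still differ, which is exactly the problem described above.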
As an alternative, you could look at reproducing your environment in a virtual machine (Vagrant makes doing this really easy) and testing whether your scripts run as expected there.
If you're really concerned about this, a configuration management system like Puppet or Chef (particularly one that can reverse changes) may make more sense.

About Sitecore Backup

I am trying to back up a whole Sitecore website.
I know that the package designer can do part of the job, but not all of it.
Having a backup is always good in case the site breaks accidentally.
Is there a way or a tool to backup the whole Sitecore website?
I am new to Sitecore, so any advice is welcome.
Thank you!
We've got a SQL job running to back up the databases nightly.
Apart from that, when I deploy a small bit of code I usually back up only the parts I'm going to replace. If it's a big code deploy, I back up the whole website (code-wise, anyway) before deploying the code package.
Apart from that we also run scheduled backups of the code (although I don't know the intervals), and of course we've got source control if everything else fails.
If you've got an automated deployment tool you could also automate the above of course.
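The nightly job itself can be as simple as this T-SQL sketch (the database name and backup path are examples):

-- back up the Sitecore master database, overwriting the previous file
BACKUP DATABASE [Sitecore_Master]
TO DISK = N'E:\Backups\Sitecore_Master.bak'
WITH INIT, COMPRESSION;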
Before a major deploy of content or code, I typically back up the master database and zip everything in the website directory minus the App_Data and temp directories. That way, if the deploy goes wrong, I can restore the code and database fairly quickly and be back to the previous state.
I have no knowledge of a tool that can do this for you, but there are a few easy ways to handle it:
1) You can create a backup of the master database, but this contains only content, not files such as media stored on disk or your compiled solution. It is always a good idea to schedule a database backup every night and keep the backups for at least a week or more.
2) With the package designer you can create dynamic packages that contain all your content, media files and solution files on disk. This is an easy way to deploy the site onto a fresh Sitecore installation all at once, but it requires a manual backup every time.
3) Another option is to serialize your entire content tree to XML files on disk from the Developer tab. Once serialized, the items can be reverted back into the content tree.
I'd suggest thinking of this in two parts. The first part is backing up the application, which is as simple as making sure your application is in some SCM system.
For that you can use Team Development for Sitecore. One of its features allows you to connect a Visual Studio project to your Sitecore instance.
You can select Sitecore items that you want to be stored in your solution and it will serialize them and place them into your solution.
You can then check them into your SCM system and sleep easier.
The thing to note is deciding which items to place in source control. Generally, you can think of Sitecore items as developer-owned or content-editor-owned. The items to place in your solution are the developer-owned ones; templates, sublayouts, layouts, and content items that the site needs to function are good examples.
This way if something goes bad a base restoration is quick and easy.
The second part is backing up the content in Sitecore that has been added since your deployment. For that, as Trayek said above, use a SQL job to do the backups at whatever interval you are comfortable with.
If you're bored I have a post on using TDS (Team Development for Sitecore) you can check out at Working with Sitecore, Part Nine: TDS
Expanding a bit more on what Trayek said, my suggestion would be to set up Continuous Integration (CI) with automated deploys using TeamCity.
A good answer is also given here on Stack Overflow.
Basically, in your case TeamCity would automatically:
1. Take a backup of the current website (i.e. the code) and deploy the new code on top of it.
2. Run scripts to take a differential backup of the SQL databases, if need be.
Hope this helps.
Take a look at the Sitecore Instance Manager module. It works really well for packaging an entire Sitecore instance.