AWS, OpsWorks and Chef dependencies: what's the cleanest solution?

I've got a Chef project that, locally with Vagrant, works really nicely. I'm using librarian-chef, which means I can specify my dependencies in a Cheffile like this:
site 'http://community.opscode.com/api/v1'
cookbook 'jenkins'
When I then run librarian-chef install, it pulls down jenkins and all the cookbooks it depends on into a cookbooks directory.
There's also another directory, site-cookbooks, which is where I'm writing all of my own custom cookbooks and recipes.
In the Vagrantfile, you can then tell it to look at two different paths for cookbooks:
config.vm.provision "chef_solo" do |chef|
chef.cookbooks_path = ["cookbooks", "site-cookbooks"]
# snip
end
This works perfectly when I run vagrant up. However, it doesn't seem to play nicely with AWS OpsWorks, which requires all cookbooks to be at the top level of the Chef repository.
My question then, is: what's the nicest way to use Chef with OpsWorks without including all of the dependencies at the top level of my repository?

OpsWorks doesn't play nicely with a number of tools created for Chef. I tried using it not long after it came out and gave up on it (OpsWorks was using Chef 0.9 at the time).
I suggest you either move from OpsWorks to Enterprise Chef, or try the following:
1. Create a separate repo for your cookbooks
Keep all cookbooks in a separate repo. If you want it to be part of a larger repository, you can include it as a git submodule. Git submodules are generally a bad thing, but cookbooks are a separate entity that can live independently of the rest of your project, so they actually work quite well in this case.
To add a cookbooks repository inside another repo, use:
git submodule add git://github.com/my/cookbooks.git ./cookbooks
2. Keep your cookbooks together with community cookbooks
You can either clone the cookbooks into your repository, add them as submodules, or try using librarian-chef/Berkshelf to manage them. You could try the method described here; it should work with librarian-chef as well: https://sethvargo.com/using-amazon-opsworks-with-berkshelf/
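As a rough illustration of that approach (a sketch, not something taken from the linked post): librarian-chef also supports :path sources, so your own site-cookbooks can be declared in the same Cheffile and vendored into the single top-level cookbooks directory that OpsWorks expects. The 'my-app' cookbook name here is a placeholder:
site 'http://community.opscode.com/api/v1'
cookbook 'jenkins'
# hypothetical local cookbook, vendored from site-cookbooks into cookbooks/
cookbook 'my-app', :path => 'site-cookbooks/my-app'
After librarian-chef install, everything should end up under cookbooks/, which you can then treat as the flat cookbook repository OpsWorks wants.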

More recently, OpsWorks has added support for Chef 11.10 and Berkshelf, which gives you a much nicer way of managing cookbook dependencies.
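If you go that route, the usual arrangement (a hedged sketch; check the OpsWorks documentation for your stack version) is to enable Berkshelf in the stack's custom cookbook settings and keep a Berksfile at the root of the cookbook repository, roughly:
# Berksfile at the root of the custom cookbook repository
source 'https://supermarket.chef.io'   # the public Supermarket
cookbook 'jenkins'
Your own cookbooks stay committed at the top level of the same repository, while the community dependencies listed in the Berksfile are resolved by Berkshelf on the instances.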

Related

What is the best practice to zip the build results: by build scripts or by build tools?

I'm writing my own build scripts and setting up Jenkins. Jenkins provides plugins to zip the build results, but I can also zip the results in my own build scripts and call those scripts from Jenkins. Which way is better?
If you are in a corporate environment where a number of teams share the same Jenkins master, each plugin you add increases the probability of a plugin failure when you upgrade Jenkins. In addition, a bad plugin can bring down your entire CI server. So, in a shared-master scenario, be very conservative about adding plugins: don't add one unless it is absolutely necessary. For something as simple as creating zips, any build tool worth its salt has a task that can zip the contents of a given folder. Read through Maven and Gradle for a start.

Chef environments in AWS OpsWorks

Since AWS OpsWorks added support for Chef 12, there seems to be support for Chef environments. I am fairly new to Chef. As I understand it, Chef environments are stored in the environments/ folder of my cookbook repo, so that is where I created a testing.json file whose name attribute uses that exact name. I took the template from the Chef documentation.
I defined a chef_environment attribute in the custom JSON of my testing stack, setting the environment to 'testing'.
I am using berks package to bundle the cookbooks into a tarball, which I pull into my example OpsWorks stack via S3. I ran update_custom_cookbooks on the stack, and it failed with the message that Chef could not find the environment 'testing'.
I first noticed that berks package does not include the environments/ folder, since it is not a cookbook, so I added the environments folder to the tarball myself. I tried updating the cookbooks again, and it failed with the same message.
So what is my misconception here? What is OpsWorks trying to tell me?
OpsWorks Stacks does not support Chef environments. As it is based on Chef Solo, there isn't really a ton of value in supporting them: the main difference between roles and environments in normal Chef is that environments can specify cookbook version constraints, but since Solo requires you to have handled dependency resolution beforehand (via berks package in your case), that feature cannot be used anyway. You can make a role with the same attribute information and use that instead. This can be slightly annoying when dealing with environment-aware cookbooks that also use Chef search, but since those rarely work on OpsWorks Stacks anyway, it doesn't come up much.
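For illustration, a role standing in for the 'testing' environment might look roughly like this (e.g. roles/testing.rb; the attribute keys are hypothetical placeholders, not something OpsWorks prescribes):
name 'testing'
description 'Stand-in for the testing Chef environment'
default_attributes(
  'myapp' => {
    'environment' => 'testing'   # read this from recipes instead of node.chef_environment
  }
)
Recipes can then read node['myapp']['environment'] wherever they would otherwise have consulted the Chef environment.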

How can I build my local git repo on an external server?

In our company we have really powerful Linux-based build servers (dual Xeon, 40 cores) and not-so-powerful Windows 7 laptops. We build our product in C/C++ for an esoteric CPU, and the compiler only exists on Linux. I can edit my git repo with Qt Creator, which works and is quite fast, but I can't build the source on the laptop. We have a main git repo, and I can clone the same repo to my laptop and to the build server. What I want is that when I press the build button, my code magically builds on the build server. I did a proof-of-concept solution where my build script does a git diff on my repo and scps it to the build server, then sshes to the build server, applies that diff to the server's repo, and starts and waits for the compilation. But that solution is not very foolproof, and I think a better method must exist. So how can I build my git repo on an external server?
If you can push to a bare repo on the build server, then you can attach to that bare repo a post-receive hook (hooks/post-receive inside the bare repo) which will:
check out the code:
#!/bin/sh
# check out the pushed code into the server's working tree
git --work-tree=/var/www/domain.com --git-dir=/var/repo/site.git checkout -f
then trigger the compilation (for instance by calling your build command at the end of the same hook).
That way, you don't have to handle the diff yourself.
You only have to wire the build button to the action of pushing your branch to the bare repo on the build server, and the post-receive hook will do the rest.
You could switch to a forking workflow, where each developer in the company has a personal public bare repo, which is a fork of the official central repository.
Then, when you want to build your changes, you push them to (a branch or the master of) your own personal public repo.
The build server not only clones the official central repository, but also your public repo. So when you push to your personal public repo, the build server merges the changes and does a personal build for you. Just like it probably already does for the official central repository?
Note that this is not too different from @VonC's answer; it just focuses a bit more on the workflow. The personal public repo may well be on the build server, as @VonC suggests, or it could be somewhere else, as long as it's some place public enough that the build server, you, and your colleagues can find it.
Consider integrating http://jenkins-ci.org/ into your workflow to take care of the build process, using a git post-receive hook to trigger the build, as suggested by @VonC.
If you want to use the forking workflow suggested by @flup, you can take a look at http://gitlab.com, which provides an easy way to manage pull/merge requests, fork repositories, and add hooks.

Searching for a project skeleton for Chef + Django on Linux

Is there a pre-existing, best practices project skeleton for Chef + Django web applications on Linux (Ubuntu preferably)?
For production Django systems, our preferred setup is Supervisor, Nginx, Ubuntu and uWSGI. Additionally, we use Chef for configuration management and Vagrant + Chef for development environment management.
While this setup is great once everything is up and running, it can be very time-consuming to set up properly.
My ideal solution would be a pre-made Chef GitHub repository serving as a skeleton for a best-practices Django deployment. (It would come with a chef-solo.rb ready to deploy to some cloud Ubuntu instance and a Vagrantfile ready to create a Vagrant dev machine.) Basically all you would have to do is add a Chef cookbook to deploy your application code and tweak a few settings.
Does anything like that ideal solution exist?
Here's a typical Chef-based configuration setup:
One git repo holds the chef-repo. You can use knife solo init <repo-name> to create it, or just clone the skeleton repo from Opscode.
One git repo per cookbook. You can use berks cookbook <your-cookbook-name> to generate a full cookbook skeleton, including the cookbook itself plus Test Kitchen, Vagrant and Berkshelf configuration. Install Berkshelf first via gem install berkshelf.
For any other cookbooks that come from the community site or a git repo, you can use Berkshelf to download them and manage them alongside your local cookbooks; a minimal Vagrantfile wiring for such a setup is sketched below.
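To tie those pieces together, here is a minimal Vagrantfile provisioning sketch in the spirit of the question (the box name, paths and the 'my-django-app' cookbook are placeholders; it assumes dependencies have already been vendored into cookbooks/, e.g. with berks vendor cookbooks):
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"            # placeholder box
  config.vm.provision "chef_solo" do |chef|
    chef.cookbooks_path = ["cookbooks", "site-cookbooks"]
    chef.add_recipe "my-django-app"            # hypothetical application cookbook
    chef.json = { "my-django-app" => { "env" => "development" } }  # example attributes
  end
end
The same cookbooks can then be reused for the production deployment with chef-solo, which is essentially what the questioner's skeleton would need to provide.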

Deploying Django with virtualenv inside a distribution package?

I have to deploy a Django application onto a SuSE Linux Enterprise 11 system. Corporate rules say I need to deploy using RPMs only. While I can use ./setup.py bdist_rpm for each dependency, it's not really sane, since RPM doesn't record all of the dependencies yet. Therefore I'd have no real advantage in using RPMs, and managing dependencies manually is cumbersome, which I would like to avoid.
Now I had the following idea: While building a package, I could create a virtualenv, install all my dependencies via pip there and then package it up with the rest of the code into one solid RPM.
How sensible is this approach?
I've been using this approach for about a year now and it has worked out pretty well.
One gotcha is that you'll want to check the shebang (#!) lines of any Python scripts written to the virtualenv's bin directory. These will end up containing the full paths used in your build environment, which probably won't be the same directory where you end up installing the virtualenv, so you may need to add some sed calls in your RPM's post-install scriptlet to adjust the paths.