Using CloudFormation with a Chef Server

I'm exploring some disaster recovery scenarios and how to come back from them quickly. Disasters like our root AWS account being hacked, or the entire Oregon region going down. Basically, situations where we need to recreate our entire infrastructure in another region or account.
Obviously CloudFormation is the best way to tackle this, but I have some questions on how to integrate it with Chef. My plan is to have a CloudFormation script create a new Chef server as well as all the other servers; the Chef server then pulls all its cookbooks from a repository and configures all the servers. Is this a reasonable process, or is there a better way to handle it?
I figured this was better than maintaining AMIs specific to applications and copying those over.
Thanks for the help in advance!

Chef has a chef-server cookbook that will provision a stand-alone instance. I'm actually writing a wrapper for it now as I'm dealing with the same situation.
Our plan is similar to yours but we're using Terraform to orchestrate the environments.
Have a repository of cookbooks gzipped and ready for deployment. After provisioning and configuring the Chef server, deploy said cookbooks to it. The Chef server would need to be fully bootstrapped with all cookbooks, environment configs, and data bags before attempting to create any other nodes. Once complete, bootstrapping the rest of the environment would look like any other deployment.
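As a rough sketch of that bootstrap step (the S3 bucket, archive name, and repo layout are placeholders, and this assumes knife on the new Chef server is already configured to point at it):
# Pull the prepared chef-repo archive down from S3 and unpack it (paths are illustrative).
aws s3 cp s3://example-dr-bucket/chef-repo.tar.gz /tmp/chef-repo.tar.gz
mkdir -p /tmp/chef-repo && tar -xzf /tmp/chef-repo.tar.gz -C /tmp/chef-repo
cd /tmp/chef-repo
# Push everything the rest of the environment will need before bootstrapping any nodes.
knife upload cookbooks environments data_bags roles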

Related

What should I use for configuration management on AWS

I am trying to find a solution for configuration management using AWS OpsWorks. As far as I can see, AWS offers three flavors of OpsWorks:
Chef Automate
Puppet
OpsWorks Stacks
I have read the basics of all three, but I am unable to compare them or understand when to use which solution.
I want to implement a solution for my multiple EC2 instances that lets me deliver updates to all of them from a central repository (GitHub), and roll back changes if needed.
So following are my queries:
Which of the three solutions is best for this use case?
What should I use if my instances are in different regions?
I am unable to find anything useful on these topics to help me make my decision. It would be great if I could get links to some useful articles as well.
Thanks in advance.
Terraform, Packer, and Ansible are a great combination; I use them every day to configure AMIs and build out all my infrastructure.
Terraform - configuration management for infrastructure. It allows you to provision all the AWS, Azure, and GCE components you need to run your application.
Packer - creates reusable images by pre-installing software that is common to your applications.
Ansible - pre- and post-provisioning configuration management. You can use Ansible with Packer to bake software into an AMI and then, if needed, use Ansible again to configure instances after provisioning. There is no need for a Chef server or Puppet master; you can run Ansible from your desktop if you have access to the cloud servers.
This example provisions all the infrastructure for a WordPress site and uses Ansible to configure it post-provisioning:
https://github.com/strongjz/tf-wordpress
All of this can also be automated in a Jenkins pipeline or with other continuous deployment tools like CircleCI.
Neither Ansible nor Terraform restricts which regions you can target. Packer runs locally or on a CD server.
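As a rough sketch of how the three fit together from the command line (the template file, variables, and AMI ID are illustrative, and web-ami.json is assumed to declare a region variable and use the Ansible provisioner):
# Bake an AMI with Packer + Ansible.
packer build -var 'region=us-west-2' web-ami.json
# Stand up the infrastructure with Terraform, pointing it at the freshly baked AMI.
terraform init
terraform apply -var 'ami_id=ami-0123456789abcdef0'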
Examples:
https://www.terraform.io/intro/examples/aws.html
https://github.com/ansible/ansible-examples
https://www.packer.io/intro/getting-started/build-image.html

Continuous Integration on AWS EMR

We have a long-running EMR cluster that has multiple libraries installed on it using bootstrap actions. Some of these libraries are under continuous development, and their codebase is on GitHub.
I've been looking to hook Travis CI up to AWS EMR in a way similar to how Travis works with CodeDeploy. The idea is to have the code on GitHub tested and deployed automatically to EMR, using bootstrap actions to install the updated libraries on all of the EMR cluster's nodes.
A solution I came up with is to use an EC2 instance in the middle: Travis and CodeDeploy first deploy the code to that instance, and a launch script on the instance is then triggered to create a new EMR cluster with the updated libraries.
However, the above solution means we need to create a new EMR cluster every time we deploy a new version of the system.
Any other suggestions?
You definitely don't want to maintain an EC2 instance to orchestrate a CI/CD process like that. It introduces a number of challenges: you have to manage an entire server instance, keep it patched, deal with networking, and set up monitoring and alerting for availability issues, and even then you have no availability guarantees, which can cause other problems. Most of all, maintaining an EC2 instance for a purpose like that is simply unnecessary.
I recommend that you investigate using AWS CodePipeline together with AWS Step Functions and Lambda.
Step Functions can be used to orchestrate the provisioning of your EMR cluster in a fully serverless environment. With CodePipeline, you can set up a webhook into your GitHub repo to pull your code and spin up a new deployment automatically whenever changes are committed to your master branch (or whatever branch you specify). You can use EMRFS to sync an S3 bucket or folder to your EMR file system for your cluster, and then gain the security benefits of IAM as well as the additional consistency guarantees that come with EMRFS. With Lambda, you also get seamless integration with other services, such as Kinesis, DynamoDB, and CloudWatch, among many others, which will simplify many administrative and development tasks and enable more sophisticated automation with minimal effort.
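For reference, the cluster-creation call that the Step Function (or a Lambda task within it) would ultimately make might look like the following, shown here as the equivalent AWS CLI command; the cluster name, release label, instance sizing, and bucket paths are placeholders:
aws emr create-cluster \
  --name "ci-spark-cluster" \
  --release-label emr-5.36.0 \
  --applications Name=Spark \
  --use-default-roles \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --bootstrap-actions Path=s3://example-ci-bucket/bootstrap/install-libs.sh \
  --log-uri s3://example-ci-bucket/emr-logs/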
There are some great resources and tutorials for using CodePipeline with EMR, as well as in general. Here are some examples:
https://aws.amazon.com/blogs/big-data/implement-continuous-integration-and-delivery-of-apache-spark-applications-using-aws/
https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html
https://chalice-workshop.readthedocs.io/en/latest/index.html
There are also great tutorials for orchestrating applications with Step Functions and Lambda, including with EMR. Here are some examples:
https://aws.amazon.com/blogs/big-data/orchestrate-apache-spark-applications-using-aws-step-functions-and-apache-livy/
https://aws.amazon.com/blogs/big-data/orchestrate-multiple-etl-jobs-using-aws-step-functions-and-aws-lambda/
https://github.com/DavidWells/serverless-workshop/tree/master/lessons-code-complete/events/step-functions
https://github.com/aws-samples/lambda-refarch-imagerecognition
https://github.com/aws-samples/aws-serverless-workshops
In the very worst case, if all of those options fail (for example, if you need very strict control over the startup process on the EMR cluster after it completes its bootstrapping), you can always create a Java JAR that is loaded as a final step and use it to execute a shell script or call the various Amazon Java libraries to run your provisioning commands. Even in this case, you still have no need to maintain your own EC2 instance for orchestration purposes (which, in my opinion, would be hard to justify even if it were running in a Docker container in Kubernetes), because you can maintain that deployment process with a fully serverless approach as well.
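As a sketch of that fallback, one documented way to run a script as a final step is AWS's script-runner JAR; the cluster ID, region prefix, and script location below are placeholders:
# Add a post-bootstrap provisioning step that downloads and runs a script from S3.
aws emr add-steps \
  --cluster-id j-EXAMPLECLUSTERID \
  --steps Type=CUSTOM_JAR,Name=PostBootstrapProvisioning,ActionOnFailure=CONTINUE,Jar=s3://us-west-2.elasticmapreduce/libs/script-runner/script-runner.jar,Args=[s3://example-ci-bucket/provision.sh]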
There are many great videos from the Amazon re:Invent conferences that you may want to watch to get a jump start before you dive into the workshops. For example:
https://www.youtube.com/watch?v=dCDZ7HR7dms
https://www.youtube.com/watch?v=Xi_WrinvTnM&t=1470s
Many more such videos are available on YouTube.
Travis CI also supports Lambda deployment, as mentioned here: https://docs.travis-ci.com/user/deployment/lambda/

chef provisioning recipe to make AWS security groups, how to run from server vs chef client

I need to keep track of my AWS security groups better.
The recipes that use chef/provisioning/aws_driver would let me write a recipe per SG and track the IPs added, etc.
I can run them just fine locally with chef-client -z -r
What I really want is to upload the cookbook to my Chef server and run it any time I need to change an SG. But Chef seems to require that recipes apply to nodes, not to AWS cloudiness.
Basically I want to run chef-client from my workstation and have it execute a cookbook that doesn't impact any running servers, or create them, but rather hits AWS and converges the resources specified.
If you create a client.rb for your workstation with the chef server URL and keys:
chef_server_url "http://servername/organizations/myorg"
validation_key "path/to/validation/key"
client_key "path/to/client/key"
you should be able to run provisioning recipes that have been uploaded to the server. E.g. if they're in a 'provisioning' cookbook:
chef-client -c client.rb -o provisioning::myrecipe
You probably want to create a provisioning node. Chef Server is essentially a glorified database and isn't intended to be an active controller. There is Chef Push Jobs, but even that is pushing to nodes.
Instead, create a node that is essentially a proxy for the resources that can't run chef-client themselves, and have that node run chef-client from a cron job. You don't need a separate node for every resource; one node can easily manage many of them. If you have a very large number, you might have to start partitioning these resources, or you might partition them for security reasons.
If everything is a declarative resource that behaves idempotently (as all good Chef things should be), then you can have two nodes with the same recipes to provide redundancy.
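For example, the proxy node's crontab might contain an entry like this (the interval, the recipe name, and the log path are hypothetical placeholders):
# Converge the AWS resources every 30 minutes via an override run list.
*/30 * * * * /usr/bin/chef-client -o 'recipe[provisioning::security_groups]' >> /var/log/chef-provisioning.log 2>&1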

efficient way to administer or manage an auto-scaling instances in aws

As a sysadmin, I'm looking for efficient ways or best practices for managing EC2 instances with Auto Scaling.
How do you manage/automate the following scenarios? (Our environment runs with Auto Scaling, Elastic Load Balancing, and CloudWatch.)
patching the server's RPM packages to the latest versions for security reasons (e.g. yum update/upgrade)?
making a configuration change to the Apache server, like a change to httpd.conf, and applying it to all instances in the Auto Scaling group?
how do you deploy the latest code for your app to the servers with minimal disruption in production?
how do you use Puppet or Chef to automate your admin tasks?
I would really appreciate it if you have anything to share on how you automate your administration tasks with AWS.
Check out Amazon OpsWorks, the new Chef-based DevOps tool for Amazon Web Services.
It gives you the ability to run custom Chef recipes on your instances in the different layers (load balancer, app servers, DB, ...), as well as to manage the deployment of your app from various source repositories (Git, Subversion, ...).
It supports auto-scaling based on load (like the auto-scaling that you are already using), as well as auto-scaling based on time, which is more complex to achieve with standard EC2 auto-scaling.
This is a relatively young service and not all functionality is available yet, but it might be useful for you.
patching the server's RPM packages to the latest versions for security reasons (e.g. yum update/upgrade)?
You can use Puppet or Chef to create a cron job that takes care of this for you (in its most basic form, the cron job would download and/or install updates via a bash script). You may want to upgrade automatically, or simply notify an admin via email so you can evaluate before applying updates.
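A minimal sketch of such a cron entry, assuming the yum security plugin is installed (the schedule, and the choice to auto-apply rather than just notify, are up to you):
# Apply security updates nightly at 03:00 and log the output.
0 3 * * * /usr/bin/yum -y --security update >> /var/log/auto-security-update.log 2>&1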
making a configuration change to the Apache server, like a change to httpd.conf, and applying it to all instances in the Auto Scaling group?
I usually handle all of my configuration files through my Puppet manifests. You could set up each EC2 instance to pull updates from a Puppet server, then roll out changes on demand. Part of this process should be updating the AMI stored in your Auto Scaling group's launch configuration (this can be done with the Amazon command line tools).
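Updating the AMI behind the Auto Scaling group might look roughly like this with the AWS CLI; the launch configuration name, AMI ID, instance type, and group name are placeholders:
# Register a launch configuration that points at the newly baked AMI.
aws autoscaling create-launch-configuration \
  --launch-configuration-name web-lc-v2 \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.medium
# Point the Auto Scaling group at the new launch configuration.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-lc-v2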
how do you deploy the latest code for your app to the servers with minimal disruption in production?
Test it in staging first! Also, a neat trick is to use versioned deployments: each deployment gets its own folder (/var/www/v1, /var/www/v2, etc.), and once you have verified the deployment was successful, you simply update a symlink to point to the latest version (/var/www/current points to /var/www/v2).
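A minimal sketch of that symlink cutover, using the paths from the example above:
# Deploy the new version alongside the old one, then flip the symlink.
mkdir -p /var/www/v2
# ... copy or check out the new release into /var/www/v2 ...
ln -sfn /var/www/v2 /var/www/current
# Roll back by pointing the symlink at the previous release.
ln -sfn /var/www/v1 /var/www/current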
OpsWorks handles all this sort of stuff for you so you can look into that if you don't want to do it all yourself.
how do you use Puppet or Chef to automate your admin tasks?
You can use Chef or Puppet to do all sorts of things, and anything they can't (or you don't know how to) do can be done via a bash/python script that you invoke from Chef or Puppet.
I normally do things like install packages, build custom packages, set permissions, download things, start services, manage configuration files, set up cron jobs, etc.
I would really appreciate it if you have anything to share on how you automate your administration tasks with AWS.
Look into CloudFormation. It can help you set up all your servers and related services (think EC2, ELB, CloudWatch) through configuration files, helping you automate your entire stack, not just the EC2 instances' operating systems.
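Once the template exists, standing the stack up (in any region) is a single call; the stack name, template file, and region below are placeholders:
aws cloudformation create-stack \
  --stack-name web-stack \
  --region us-east-1 \
  --template-body file://web-stack.yaml \
  --capabilities CAPABILITY_IAM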

code deployments on EC2

There are quite a few resources on deploying AMIs on EC2, but are there any solutions for incremental code updates to a PHP/Java-based website?
Suppose I have 10 EC2 instances all running PHP/Java-based websites with docroots local to the instance. I may want to do numerous code deployments to them throughout the day.
I don't want to create a new AMI copy and scale that up to new instances each time I have a code update.
Any leads on how best to do this would be greatly appreciated. We use Subversion as our main code repository, and in the past we've simply done an SVN update/co when we were on one or two servers.
Thanks.
You should check out Elastic Beanstalk. Essentially you just package up your WAR or other code artifact, upload it to a bucket via AWS's command line/Eclipse integration, and the deployment is performed automatically.
http://aws.amazon.com/elasticbeanstalk/
Elastic Beanstalk is designed to do exactly this for you. We use the Elastic Beanstalk Java/Tomcat flavor, but it also supports PHP, Ruby, and Python environments. It has a web console that lets you deploy code (it even keeps a history of deployments), and it also has a git tool to deploy code from the command line.
It also has monitoring, a load balancer, and auto scaling all built in, controlled with only a few web form entries.
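As a rough sketch of the command-line flow with the EB CLI (the application, environment, and platform names are placeholders):
# One-time setup: register the application and create an environment.
eb init my-app --platform php --region us-west-2
eb create my-app-prod
# For each incremental code change:
eb deploy my-app-prod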
Have you considered using a tool designed to manage this sort of thing for you? Puppet is well regarded in this area.
Have a look here:
https://puppetlabs.com/puppet/what-is-puppet/
(No I am not a Puppet Labs employee :))
Capistrano is a great tool for deploying code to multiple servers at once. Chef and Puppet are great tools for setting up those servers with databases, webservers, etc.
Go for Capistrano. It's a good way to deploy your code to multiple servers.
As already mentioned, Elastic Beanstalk is a good option if you just want a webserver and don't want to worry about the details.
Also, take a look at AWS CodeDeploy. You can have much more control over the lifecycle of your instances, and you'd be looking at something very similar to what you have now (a set of EC2 instances that you set up). You can even get automatic deployments on instance launch with Auto Scaling.
You can use either Capistrano or Travis CI.