Dynamically update AMI - amazon-web-services

I have a question regarding AWS: I have an AMI with Windows Server installed, IIS installed, and a site up and running.
My Auto Scaling group always maintains two instances created from this AMI.
However, whenever I need to change something on the site, I have to launch a new instance, make the changes, update the AMI, and update the Auto Scaling group, which is quite time consuming.
Is there any way to automate this by linking to a Git repository?

This is more of a CI/CD task than something achieved purely within AWS.
You can set up a CI/CD pipeline that detects any update in your SCM (Git) and triggers a build job (in Jenkins or a similar tool), which produces an artifact. You can then deploy that artifact to the respective application servers using CD tools (Ansible, or even Jenkins itself), whichever suits your infrastructure. In the deploy script itself you can connect to the EC2 service to create a new AMI once the deployment is complete.
You need a set of tools to achieve this: an SCM webhook/poll, Jenkins, and Ansible.
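As an illustration of that last step, here is a minimal boto3 sketch of baking an AMI at the end of a deploy job; the instance ID, region, and naming scheme are placeholders, not anything prescribed above:

import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def bake_ami(instance_id):
    """Create an AMI from the freshly deployed instance and wait until it is available."""
    name = f"site-release-{int(time.time())}"
    resp = ec2.create_image(
        InstanceId=instance_id,
        Name=name,
        Description="AMI baked by the CI/CD deploy job",
        NoReboot=True,  # skip rebooting the live instance; set False for filesystem consistency
    )
    image_id = resp["ImageId"]
    ec2.get_waiter("image_available").wait(ImageIds=[image_id])
    return image_id

if __name__ == "__main__":
    print(bake_ami("i-0123456789abcdef0"))

Once the new AMI exists, the same script (or a later pipeline stage) can point the Auto Scaling group's launch configuration at it.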

Related

How to automate Packer AMI builds?

I built and provisioned an AMI using Packer and the amazon-ebs builder.
I need to rebuild the AMI weekly. Is there a simple solution for this? Do I need a separate EC2 instance for Jenkins, or is that overkill? Would any CI tool be good for this, or is there a simpler approach? My Packer AMI code is hosted on GitHub.
In addition, I create a new EC2 instance from the AMI and tear down the old one weekly. What's the best way to schedule EC2 tear-downs and rebuilds automatically?
So, two issues:
Weekly rebuild of the AMI
Weekly rebuild of the EC2 instance based on the rebuilt AMI
I'm not experienced with DevOps, so please excuse me.
I'm assuming this is the only task for which you want an automation server; otherwise I would suggest setting up Jenkins or another automation server. It all depends on your needs.
To automate this single task, you don't necessarily need an automation server. The method I'm going to demonstrate is one among many ways you can do it. Below are the AWS resources you require:
A Docker image with Packer, the AWS CLI, and any other dependencies installed.
An ECS task configured using the image in #1.
A CloudWatch schedule expression to trigger the ECS task periodically, in this case weekly.
Your Docker image should be configured so that running the container rebuilds the AMI. You can write a bash script for this and configure it as the container entry point.
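As a rough illustration, a container entry point along those lines might look like the following Python stand-in for that bash script; the template path is an assumption about how the image is laid out:

import subprocess
import sys

def main():
    # Run the Packer build; AWS credentials come from the ECS task role.
    result = subprocess.run(["packer", "build", "/opt/build/template.json"])
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())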
As for the second point, rebuilding the EC2 server this way is not a best practice; you should have a separate process in place to apply AMI changes to the respective instances. That said, you can do it by scheduling a Lambda function that terminates the old instance and launches a new one.
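A hedged sketch of such a Lambda function follows; the tag names, filters, and instance type are purely illustrative, and launching the replacement before terminating the old instance is one reasonable ordering:

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Find the newest AMI produced by the weekly Packer build.
    images = ec2.describe_images(
        Owners=["self"],
        Filters=[{"Name": "tag:built-by", "Values": ["packer-weekly"]}],
    )["Images"]
    latest = max(images, key=lambda i: i["CreationDate"])

    # Find the currently running instance to replace.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:role", "Values": ["weekly-rebuild"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    # Launch the replacement first, then terminate the old instance.
    ec2.run_instances(
        ImageId=latest["ImageId"],
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "weekly-rebuild"}],
        }],
    )
    for res in reservations:
        for inst in res["Instances"]:
            ec2.terminate_instances(InstanceIds=[inst["InstanceId"]])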
I know this is a broad answer, and there are many other ways to do the same thing.

Build system when using an Auto Scaling group with ELB in AWS

I was using a free-tier AWS account with one EC2 machine (Linux). I have a simple website with a backend server running Django on port 8000 and a frontend written in Angular served over HTTP (port 80). I used nginx for HTTPS and for routing calls to the backend and frontend servers.
Now, for the backend build system, I did these three main steps (which I automated by running Jenkins on the same machine):
1) git pull (pull the latest code from the repo).
2) Run migrations (updating my DB with any new tables).
3) Restart the Django server (I was using gunicorn).
Now I have split my frontend and backend servers onto two different machines using Auto Scaling groups, and I am using ELB (AWS Elastic Load Balancer) to route the requests. I am done with the setup, but now I am having trouble with continuous deployment. The main thing is that the ELB uses Auto Scaling groups, which in turn use AMIs.
Since AMIs are created once, my first question is how to automate this process and deploy my latest code to the already running AWS servers.
Second, if I want to run a few steps just once for all the servers, like my second step of updating the DB with new tables, how do I achieve that?
Third, if these steps need to run on a machine, do I need another EC2 instance to automate the process of creating the AMI, updating the Auto Scaling groups with it, and then deploying the latest code?
So basically, I want to know the best practices people follow for deploying the latest code to AWS machines that were created by Auto Scaling groups from an AMI. Also, I use Bitbucket for code management.
First question: how to automate package-based deployment.
Instead of creating a new AMI for every release, create a baseline AMI that only changes when a new release requires OS changes, security patches, etc. Look into tools such as Packer to create AMIs automatically. To automate your code deployment when it changes, use a package-based deployment approach: create a package for every release (this should be part of your CI process) and store it in a repository such as Nexus, Artifactory, or even a simple S3 bucket.
When you deploy a new instance of your application, it should run a script to pull and unpack/install that package on the instance. This is the basic concept; there are many tools that can help you achieve it, for example Chef or AWS CloudFormation.
So essentially, step 1 should pull the code, create the package, and store it in some repository available to your application servers. This can be done offline.
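As a minimal sketch of that pull-and-unpack script, assuming the release package is a tarball in an S3 bucket (the bucket name, key, and paths are placeholders):

import tarfile
import boto3

BUCKET = "my-release-artifacts"      # assumed artifact bucket
KEY = "releases/app-latest.tar.gz"   # assumed "latest" package key
DEST = "/opt/app"

def deploy_latest():
    s3 = boto3.client("s3")
    s3.download_file(BUCKET, KEY, "/tmp/app.tar.gz")
    with tarfile.open("/tmp/app.tar.gz") as tar:
        tar.extractall(DEST)  # unpack the release over the app directory

if __name__ == "__main__":
    deploy_latest()

Run from the instance's user data (or by Chef/CloudFormation), this makes every new instance come up with the current release without a new AMI.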
Second question: how to run other tasks such as updating the database schema.
As mentioned above, this can also be part of your deployment automation: whether you are using Chef or even a simple bash script, it can update the database schema before unpacking the new code. This really depends on your database, how you manage it, and who orchestrates the deployment.
For example, you could have a Jenkins job that pulls the new schema and updates your database whenever you roll out a release.
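A hedged sketch of such a one-shot job, assuming the Django setup described in the question (the path and flags are illustrative):

import subprocess

def migrate():
    # Fetch the release that contains the new migrations, then apply them once.
    subprocess.run(["git", "pull", "--ff-only"], check=True, cwd="/opt/app")
    subprocess.run(["python", "manage.py", "migrate", "--noinput"], check=True, cwd="/opt/app")

if __name__ == "__main__":
    migrate()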
Your third question can be solved by Packer: it can spin up an instance, create an AMI from it, and terminate the instance.
Read more about CI/CD and CI/CD-related tools.

Remotely update an EC2 instance with a Docker image

I have a release of my project. I build a Docker image and deploy it on an EC2 instance.
Later, when I have a new release, I would like to update the Docker container on EC2 remotely (without accessing the machine, just by executing some service).
Is there a way to do this without ECS or Elastic Beanstalk?
If that's not possible, can I somehow re-run the cfn-init script?
My Research
https://aws.amazon.com/blogs/aws/new-ec2-run-command-remote-instance-management-at-scale/
You can manage your instances remotely (i.e., make changes without manually SSHing into the instance and typing commands) by using any of the many systems-management services out there. AWS offers Simple Systems Manager (SSM), of which the Run Command you linked is a part. AWS also offers the OpsWorks service, which uses Chef. There are also other products like Ansible and SaltStack, and you can optionally integrate those with the AWS SSM service.
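For the Run Command route specifically, a minimal boto3 sketch might look like this, assuming the instance runs the SSM agent; the container and image names are placeholders:

import boto3

ssm = boto3.client("ssm")

def redeploy_container(instance_id, image):
    resp = ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",  # built-in SSM document for shell commands
        Parameters={
            "commands": [
                f"docker pull {image}",
                "docker stop app || true",
                "docker rm app || true",
                f"docker run -d --name app -p 80:8080 {image}",
            ]
        },
    )
    return resp["Command"]["CommandId"]

if __name__ == "__main__":
    print(redeploy_container("i-0123456789abcdef0", "myrepo/myapp:latest"))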

Automate code deploy from GitLab to AWS EC2 instance

We're building an application for which we are using a GitLab repository. Manual deployment of code to the test server, which is an Amazon EC2 instance, is tedious. I'm planning to automate the deployment process so that when we commit code, it is reflected on the test instance.
To my knowledge, we can use the AWS CodeDeploy service to fetch code from GitHub, but CodeDeploy does not support GitLab repositories. Is there a way to automate the code deployment process to an AWS EC2 instance from GitLab, or is there a shell-scripting possibility to achieve this? Kindly educate me.
One way you could achieve this with AWS CodeDeploy is by using the S3 option in conjunction with GitLab CI: http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-w.html
Depending on how your project is set up, you may be able to generate a distribution zip (Gradle offers this through the application plugin). If your project does not offer such a capability, you may need to generate your "distribution" file manually.
GitLab does not offer a direct S3 integration; however, through the gitlab-ci.yml you can install the AWS CLI in the build container and run the necessary upload commands to put the generated zip file in the S3 bucket, per the AWS instructions, to trigger the deployment.
Here is an example of what your before_script could look like in the gitlab-ci.yml file:
before_script:
- apt-get update --quiet --yes
- apt-get install --quiet --yes python python-pip
- pip install -U pip
- pip install awscli
The AWS tutorial on how to use CodeDeploy with S3 is very detailed, so I will skip attempting to reproduce the contents here.
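If you prefer a small script over raw CLI commands in the job, though, here is a hedged boto3 sketch of the upload-and-deploy step; the bucket, application, and deployment group names are placeholders:

import boto3

def push_and_deploy(zip_path):
    bucket, key = "my-deploy-bucket", "releases/app.zip"

    # Upload the build artifact, then point CodeDeploy at it.
    boto3.client("s3").upload_file(zip_path, bucket, key)
    resp = boto3.client("codedeploy").create_deployment(
        applicationName="my-app",
        deploymentGroupName="test-servers",
        revision={
            "revisionType": "S3",
            "s3Location": {"bucket": bucket, "key": key, "bundleType": "zip"},
        },
    )
    return resp["deploymentId"]

if __name__ == "__main__":
    print(push_and_deploy("build/distributions/app.zip"))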
As for the actual deployment commands and actions that you are currently performing manually, AWS CodeDeploy can run such actions through scripts defined in the AppSpec file, keyed to the application's lifecycle event hooks:
http://docs.aws.amazon.com/codedeploy/latest/userguide/writing-app-spec.html
http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref-hooks.html
I hope this helps.
This is an old post of mine, but I happened to find an answer for it. Although my question was specifically about CodeDeploy, I would say there is no need for any AWS-specific services when using GitLab.
We don't require CodeDeploy at all. There is no need to use an external CI server like TeamCity or Jenkins to perform CI from GitLab anymore.
We just need to add a .gitlab-ci.yml file to the root of the branch's source tree and write a YAML script in it. GitLab pipelines will then perform the CI/CD automatically.
GitLab CI/CD pipelines work much like a Jenkins server: using the YAML script we can SSH into the EC2 instance and place the files on it.
An example of how to write the GitLab .yml file to SSH into an EC2 instance is here: https://docs.gitlab.com/ee/ci/yaml/README.html
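If you would rather keep the deploy logic out of the YAML, here is a hedged sketch of a Python deploy script the job could invoke instead of raw ssh/scp commands; it uses the third-party paramiko package, and the host, user, key path, and service name are all placeholders:

import paramiko

def deploy(host, artifact):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="ubuntu", key_filename="/secrets/deploy_key.pem")

    # Copy the build artifact onto the instance.
    sftp = client.open_sftp()
    sftp.put(artifact, "/opt/app/app.tar.gz")
    sftp.close()

    # Unpack and restart the application service.
    _, stdout, stderr = client.exec_command(
        "cd /opt/app && tar xzf app.tar.gz && sudo systemctl restart app"
    )
    print(stdout.read().decode(), stderr.read().decode())
    client.close()

if __name__ == "__main__":
    deploy("ec2-203-0-113-10.compute-1.amazonaws.com", "build/app.tar.gz")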

Efficient way to administer or manage auto-scaling instances in AWS

As a sysadmin, I'm looking for efficient ways or best practices for managing EC2 instances with auto-scaling.
How do you automate the following scenarios? (Our environment runs with auto-scaling, Elastic Load Balancing, and CloudWatch.)
Patching the server's RPM packages to the latest versions for security reasons (e.g., yum update/upgrade)?
Making a configuration change to the Apache server, like a change to httpd.conf, and applying it to all instances in the auto-scaling group?
How do you deploy the latest code for your app to the servers with minimal disruption in production?
How do you use Puppet or Chef to automate your admin tasks?
I would really appreciate anything you can share on how you automate your administration tasks with AWS.
Check out Amazon OpsWorks, the new Chef-based DevOps tool for Amazon Web Services.
It gives you the ability to run custom Chef recipes on your instances in the different layers (load balancer, app servers, DB...), as well as to manage the deployment of your app from various source repositories (Git, Subversion...).
It supports auto-scaling based on load (like the auto-scaling that you are already using), as well as auto-scaling based on time, which is more complex to achieve with standard EC2 auto-scaling.
It is a relatively young service and not all functionality is available yet, but it might be useful for you.
Patching the server's RPM packages to the latest versions for security reasons (e.g., yum update/upgrade)?
You can use Puppet or Chef to create a cron job that takes care of this for you (in its most basic form, the cron job would download and/or install updates via a bash script). You may want to upgrade automatically, or simply notify an admin via email so you can evaluate before applying updates.
Making a configuration change to the Apache server, like a change to httpd.conf, and applying it to all instances in the auto-scaling group?
I usually handle all of my configuration files through my Puppet manifest. You could set up each EC2 instance to pull updates from a Puppet server, so you can roll out changes on demand. Part of this process should be updating the AMI referenced by your Auto Scaling group (this can be done with the AWS command-line tools).
How do you deploy the latest code for your app to the servers with minimal disruption in production?
Test it in staging first! A neat trick is to use versioned deployments, so each deployment gets its own folder (/var/www/v1, /var/www/v2, etc.), and once you have verified the deployment was successful you simply update a symlink to point to the latest version (/var/www/current points to /var/www/v2).
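A minimal sketch of that symlink swap, with illustrative paths (the rename is what makes the cut-over atomic):

import os

DOCROOT = "/var/www"

def activate(version):
    target = os.path.join(DOCROOT, version)    # e.g. /var/www/v2
    current = os.path.join(DOCROOT, "current") # the path the web server serves
    tmp_link = current + ".tmp"

    # Build the new link beside the old one, then atomically swap it in.
    if os.path.lexists(tmp_link):
        os.remove(tmp_link)
    os.symlink(target, tmp_link)
    os.replace(tmp_link, current)

if __name__ == "__main__":
    activate("v2")  # /var/www/current now points at /var/www/v2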
OpsWorks handles all of this sort of thing for you, so you can look into that if you don't want to do it all yourself.
How do you use Puppet or Chef to automate your admin tasks?
You can use Chef or Puppet to do all sorts of things, and anything they can't do (or you don't know how to do with them) can be done via a bash/Python script that you invoke from Chef or Puppet.
I normally do things like install packages, build custom packages, set permissions, download things, start services, manage configuration files, set up cron jobs, etc.
I would really appreciate anything you can share on how you automate your administration tasks with AWS.
Look into CloudFormation. It can help you set up all your servers and related services (think EC2, ELBs, CloudWatch) through configuration files, thus helping you automate your entire stack (not just the EC2 instances' operating systems).
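For example, here is a hedged boto3 sketch of launching a stack from a template; the stack name, template URL, and parameters are placeholders, and the template itself is assumed to already exist in S3:

import boto3

cf = boto3.client("cloudformation")

def launch_stack():
    resp = cf.create_stack(
        StackName="web-tier",
        TemplateURL="https://s3.amazonaws.com/my-templates/web-tier.yaml",
        Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t3.micro"}],
    )
    # Block until the stack finishes creating.
    cf.get_waiter("stack_create_complete").wait(StackName="web-tier")
    return resp["StackId"]

if __name__ == "__main__":
    print(launch_stack())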