I'm trying to set up an Auto Scaling Group in combination with CodeDeploy. Everything works fine except that when a new instance is created, CodeDeploy starts before the user data script (defined in the Launch Configuration) finishes.
The default user data script downloads and installs the CodeDeploy agent, and I've extended it to also install a couple of Windows features, the IIS rewrite module, and msdeploy.
In my appspec.yml I'm using the AfterInstall hook to deploy my IIS website, which obviously fails when msdeploy is not installed yet.
Am I going about this the wrong way, or is there a way to make CodeDeploy wait for the user data script to finish?
Unfortunately, there's no way for CodeDeploy to know anything more than that the instance has booted its OS. The good thing is that for automatic deployments CodeDeploy gives the host agent one hour to start polling for commands. The easiest thing to do is install the host agent only after all the required dependencies are installed. The automatic deployment will still be created, but it can't proceed until the host agent has started.
This is explained in detail here - https://aws.amazon.com/blogs/devops/under-the-hood-aws-codedeploy-and-auto-scaling-integration/
Ordering execution of launch scripts – The CodeDeploy agent looks for and executes deployments as soon as it starts. There is no ordering between the deployment execution and launch scripts such as user data, cfn-init, etc. We recommend you install the host agent as part of (and maybe as the last step in) the launch scripts so that you can be sure the deployment won’t be executed until the instance has installed dependencies that are not part of your CodeDeploy deployment. If you prefer baking the agent into the base AMI, we recommend that you keep the agent service in a stopped state and use the launch scripts to start the agent service.
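A minimal sketch of that ordering for a Linux launch script (the same principle applies inside the <powershell> section of Windows user data; the package manager and the region in the install URL are assumptions for illustration):

    #!/bin/bash
    # Sketch: install every deployment dependency FIRST, and install/start
    # the CodeDeploy agent LAST, so the pending deployment cannot run
    # before the instance is ready.
    set -euo pipefail

    # 1. Install prerequisites and anything your appspec hooks rely on
    #    (on Windows this would be the IIS features, rewrite module, and
    #    msdeploy installs from the question).
    yum install -y ruby wget
    # ... install your own dependencies here ...

    # 2. Only now install and start the CodeDeploy agent
    #    (bucket name varies by region).
    cd /tmp
    wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
    chmod +x ./install
    ./install auto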
I am using AWS CodeDeploy to deploy a binary to an EC2 instance.
When I deploy a new version of the binary, do I need to tell CodeDeploy to kill the old running binary, and if so, what's the best way to do this?
Should I save the pid of the old process to a file & then kill it?
Or, does CodeDeploy automatically kill the old process?
does CodeDeploy automatically kill the old process?
It does not; you have to do it yourself. CodeDeploy provides a special hook for exactly this, where you can perform such an operation:
ApplicationStop - This deployment lifecycle event occurs even before the application revision is downloaded. You can specify scripts for this event to gracefully stop the application or remove currently installed packages in preparation for a deployment. The AppSpec file and scripts used for this deployment lifecycle event are from the previous successfully deployed application revision.
AWS also provides an example of how to use ApplicationStop to stop WordPress. Another example is SampleApp_Linux.zip, which you can download and inspect to see how ApplicationStop is implemented.
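As a rough illustration (the script path, PID file, and systemd unit name are hypothetical), you would map the ApplicationStop hook in appspec.yml to something like scripts/stop_application.sh and have that script shut the old binary down:

    #!/bin/bash
    # scripts/stop_application.sh -- referenced by the ApplicationStop hook
    # in appspec.yml; note it runs from the PREVIOUS revision's bundle.
    set -e

    # If the binary runs under systemd, stop the unit.
    if systemctl is-active --quiet myapp; then
        systemctl stop myapp
    fi

    # If you started it by hand and recorded its PID, kill that PID
    # (ignore errors if the process already exited).
    if [ -f /var/run/myapp.pid ]; then
        kill "$(cat /var/run/myapp.pid)" || true
        rm -f /var/run/myapp.pid
    fi

Because ApplicationStop uses the previous successfully deployed revision, it is skipped on the very first deployment to a fresh instance.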
I have a question regarding AWS. I have an AMI with Windows Server installed, IIS installed, and a site up and running.
My Auto Scaling group always maintains two instances created from this AMI.
However, whenever I need to change something on the site, I need to launch a new instance, make the changes, update the AMI, and update the Auto Scaling group, which is quite time consuming.
Is there any way to automate this by linking to a Git repository?
This is more of a CI/CD task than something achieved purely within AWS.
You can set up a CI/CD pipeline that detects any update in your SCM (Git) and triggers a build job (Jenkins or a similar tool), which produces an artifact. You can then deploy the artifact to the respective application servers using CD tools (Ansible, or even Jenkins itself), whichever suits your infrastructure. In the deploy script itself you can call the EC2 API to create a new AMI once the deployment is complete, as sketched below.
You need a set of tools to achieve this, for example an SCM webhook or poll, Jenkins, and Ansible.
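A hedged sketch of what that AMI-baking step could look like with the AWS CLI, assuming the new build was just deployed to a reference instance (the instance ID and naming are placeholders):

    #!/bin/bash
    # Sketch: after the new build has been deployed to a reference
    # instance, bake a fresh AMI from it for the Auto Scaling group.
    set -euo pipefail

    INSTANCE_ID="i-0123456789abcdef0"   # placeholder reference instance
    BUILD_NUMBER="${1:-manual}"

    # Create the AMI without rebooting the running instance.
    aws ec2 create-image \
        --instance-id "$INSTANCE_ID" \
        --name "website-${BUILD_NUMBER}" \
        --no-reboot

The resulting AMI ID is what you would then plug into the launch configuration or launch template used by the Auto Scaling group.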
I have an environment in AWS where EC2 instances are in an Auto Scaling group, i.e. new instances spin up according to the load on the deployed instances.
Now, if I want to integrate this environment with Jenkins, how can I push my code from GitHub to these EC2 instances where my application is deployed? With every change in my code, GitHub should trigger the EC2 instances to deploy the same version, and every new instance should also be created with this updated version of the code, i.e. every autoscaled instance must run the same code version. Please help.
I assume you have an executable version of your latest code on a deploy server. You can do this by having Jenkins deploy your code whenever a new commit is made to a specific branch in GitHub. Then all you need is an AMI for your Auto Scaling Group that has a job/task which runs, say, every 5 minutes (based on how long a single run takes). This job/task fetches (copies) the code from the deploy server and then starts the application. As an example, in Windows Task Scheduler you can add two actions to a task: one for updating the code (e.g. a simple robocopy) and one for running the app; a Linux equivalent is sketched below.
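If the instances are Linux rather than Windows, the same idea could be a cron entry plus a small update script (host name, paths, and service name are made up for illustration):

    # /etc/cron.d/fetch-latest-build -- run the update script every 5 minutes
    */5 * * * * root /opt/deploy/update-app.sh >> /var/log/update-app.log 2>&1

    #!/bin/bash
    # /opt/deploy/update-app.sh -- copy the latest build from the deploy
    # server and restart the app. Host, paths, and service are placeholders.
    set -euo pipefail
    rsync -az deploy-server.internal:/srv/builds/latest/ /opt/myapp/
    systemctl restart myapp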
I was using a free-tier AWS account with one EC2 machine (Linux). I have a simple website with a backend server running Django on port 8000 and a frontend written in Angular served over HTTP on port 80. I used nginx for HTTPS and for routing calls to the backend and frontend servers.
For the backend build, I did these three main steps (which I automated by running Jenkins on the same machine):
1) git pull (pull the latest code from the repo).
2) Run migrations (update my DB with any new tables).
3) Restart the Django server (I was using gunicorn).
Now I have split my frontend and backend servers onto two different machines using Auto Scaling groups, and I am using an ELB (AWS Elastic Load Balancer) to route the requests. I am done with the setup, but now I am having a problem with continuous deployment. The main thing is that the ELB uses Auto Scaling groups, which in turn use AMIs.
Now, since AMIs are created once, my first question is how to automate this process and deploy my latest code to the already-running AWS servers.
Second, if I want to run a few steps just once for all the servers, like my second step of updating the DB with new tables, how do I achieve that?
Third, if these steps need to run on a machine, do I need another EC2 instance to automate the process of creating the AMI, updating the Auto Scaling groups with it, and then deploying the latest code to it?
So basically, I want to know the best practices people follow for deploying the latest code to AWS machines that were created by Auto Scaling groups from an AMI. Also, I use Bitbucket for code management.
First question: how to automate package-based deployment.
Instead of creating a new AMI for every release, create a baseline AMI that only changes when a release requires OS changes, security patches, etc. Look into tools such as Packer to create AMIs automatically. To automate your code deployment when the code changes, you can use a package-based deployment approach: you create a package for every release (this should be part of your CI process) and store it in a repository such as Nexus, Artifactory, or even a simple S3 bucket.
When you deploy a new instance of your application, it should run some sort of script to pull and unpack/install that package on the instance. That is the basic concept; there are many tools that can help you achieve this, for example Chef or AWS CloudFormation.
So essentially, step 1 should pull the code, create the package, and store it in some repository available to your application servers; this can be done offline.
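A minimal sketch of the instance-side pull-and-install step, assuming the package is a tarball in an S3 bucket (bucket name, paths, and service name are placeholders):

    #!/bin/bash
    # Sketch: fetch the release package from S3, unpack it over the
    # application directory, and restart the app. All names are placeholders.
    set -euo pipefail

    VERSION="${1:?usage: deploy.sh <version>}"
    BUCKET="s3://my-release-bucket"

    aws s3 cp "${BUCKET}/backend-${VERSION}.tar.gz" /tmp/backend.tar.gz
    mkdir -p /opt/backend
    tar -xzf /tmp/backend.tar.gz -C /opt/backend
    systemctl restart gunicorn

Whatever launches new instances (user data, Chef, CloudFormation) just needs to run this script with the version you want live.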
Second question: how to run other tasks, such as updating the database schema.
As mentioned above, this can also be part of your deployment automation: if you are using Chef or even a simple bash script, it can update the database schema before unpacking the new code. This really depends on your database, how you manage it, and what orchestrates the deployment.
For example, you could have a Jenkins job that pulls the new schema and updates your database whenever you roll out a release.
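For the Django setup described in the question, that one-off step could be as small as running the migrations from a single place (the Jenkins job or one designated instance) before the app servers are restarted; the virtualenv path and project location below are assumptions:

    #!/bin/bash
    # Sketch: apply Django migrations once per release, before the new
    # code is rolled out to the rest of the fleet.
    set -euo pipefail

    cd /opt/backend
    source venv/bin/activate            # assumed virtualenv location
    python manage.py migrate --noinput  # apply any new schema changes

Running it from one place avoids every autoscaled instance trying to migrate the same database at once.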
Your third question can be solved by Packer: it can spin up an instance, create an AMI from it, and terminate the instance.
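A hedged sketch of how that could be wired together from a build job, assuming a Packer template and a launch-template-backed Auto Scaling group (the template file, launch template ID, and group name are placeholders):

    #!/bin/bash
    # Sketch: bake an AMI with Packer, then point the Auto Scaling group's
    # launch template at it. All IDs and names are placeholders.
    set -euo pipefail

    # Packer launches a temporary instance, provisions it, creates an AMI,
    # and terminates the instance; machine-readable output exposes the AMI ID.
    AMI_ID=$(packer build -machine-readable webserver.json \
             | awk -F, '/artifact,0,id/ {split($6, a, ":"); print a[2]}')

    # Register a new launch template version based on an existing one,
    # overriding only the image ID...
    aws ec2 create-launch-template-version \
        --launch-template-id lt-0123456789abcdef0 \
        --source-version 1 \
        --launch-template-data "{\"ImageId\": \"${AMI_ID}\"}"

    # ...and tell the Auto Scaling group to use the latest version.
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name my-web-asg \
        --launch-template "LaunchTemplateId=lt-0123456789abcdef0,Version=\$Latest"

New instances launched by the group then come up from the freshly baked AMI; existing instances can be rotated out with an instance refresh or by scaling.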
Read more about CI/CD and CI/CD-related tools.
I have configured an AWS ASG using Ansible to provision new instances and then install the CodeDeploy agent via the "user_data" script, in a similar fashion to what is suggested in this question:
Can I use AWS code Deploy for pulling application code while autoscaling?
CodeDeploy works fine and I can install my application onto the ASG once it has been created. When new instances are launched in the ASG via one of my rules (e.g. high CPU usage), the CodeDeploy agent is installed correctly. The problem is that CodeDeploy does not install the application on these new instances. I suspect it is trying to run before the user_data script has finished. Has anyone else encountered this problem, or does anyone know how to get CodeDeploy to automatically deploy the application to new instances spawned as part of the ASG?
Auto Scaling tells CodeDeploy to start the deployment before the user data has run. To get around this, CodeDeploy gives the instance up to an hour to start polling for commands for the first lifecycle event, instead of the usual 5 minutes.
Since you are having problems with automatic deployments but not manual ones, and assuming you didn't make any manual changes to your instances that you forgot about, there is most likely a dependency specific to your deployment that is not yet available at the time the instance launches.
Try listing all the things your deployment needs in order to succeed and make sure each of them is available before you install the host agent. If you can log onto the instance fast enough (before Auto Scaling terminates it), grab the host agent logs and your application's logs to find out where the deployment is failing.
If you think the host agent is failing to install entirely, make sure you have Ruby 2.0 installed. It is there by default on Amazon Linux, but Ubuntu and RHEL need it installed as part of the user data before you can install the host agent. There is an installer log in /tmp that you can check for problems with the initial install (again, you have to be quick to grab the log before the instance terminates).
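If it helps, these are the spots usually worth checking (paths are the agent's typical defaults on Linux):

    # Installer output written during the initial agent install:
    cat /tmp/codedeploy-agent.update.log

    # Host agent log -- shows polling, lifecycle events, and errors:
    tail -n 100 /var/log/aws/codedeploy-agent/codedeploy-agent.log

    # Per-deployment hook/script output:
    tail -n 100 /opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log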