I am figuring out the best approach to perform the following steps in an automated way:
Code is committed to GitLab, then a trigger implemented in AWS gets notified, and a script on the AWS instance pulls the new code from GitLab and deploys it.
I want this to work for instances launched through an Auto Scaling group.
I know I can use user-data commands, but that only ensures fresh code is pulled when the instance is launched. Once an instance is running, a new code check-in won't be pushed to that server.
I am not sure if this is the right platform to ask this type of question, but this is my last hope as I am new to AWS.
I have a requirement:
Given: AMI ID (has my_software installed, my_command.bat created to run my_software using CLI).
ToDo (Repeats every month):
Create an EC2 instance (Windows) from the given AMI-ID (Windows).
Run my_command.bat file in the instance which runs my_software which generates report.csv and log.csv files.
Send report.csv to my_s3_bucket and log.csv to CloudWatch.
Terminate the instance. (Stop is not enough).
Please suggest the architecture for the same.
I do something similar and it works as follows:
Create a Docker image which runs your command (it should log and write to S3 as required).
Create an AWS Batch environment which can pull the image and run the container.
Create a CloudWatch Events rule to run the job on your desired cron schedule.
The event target is an SQS queue.
Create a Lambda function which responds to events on the SQS queue.
The Lambda function submits the Batch job (a minimal sketch follows below).
Note that in theory you can call the Lambda function directly from the CloudWatch Events rule, but I have to use a queue due to a private subnet.
Batch Fargate is a good fit for this, because it will only use resources whilst your job is running. If you need to use EC2, ensure that MinvCpus is zero, so that the instance terminates after it has run. This can take a few minutes, which you will be charged for, so perhaps Fargate is a better fit.
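For reference, here is a minimal sketch of the Lambda that submits the Batch job; the job name, job queue and job definition are placeholder names, not the actual ones from this setup:

    import boto3

    batch = boto3.client("batch")

    def handler(event, context):
        # Each SQS record is one scheduled run forwarded by the CloudWatch
        # Events rule; submit one Batch job per record.
        for record in event.get("Records", []):
            response = batch.submit_job(
                jobName="monthly-report",            # placeholder job name
                jobQueue="monthly-report-queue",     # placeholder job queue
                jobDefinition="monthly-report-job",  # placeholder job definition
            )
            print("Submitted Batch job:", response["jobId"])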
We are launching an e-commerce store built on Magento. We are looking to deploy it on Amazon EC2 instances, using RDS as the database service and Amazon Auto Scaling with an Elastic Load Balancer to scale the application when needed.
What I don't understand is this:
I have installed and configured my production Magento environment on an EC2 instance (the database is in RDS). This much is working fine. But now, when I want to dynamically scale the number of instances,
how will I deploy the code on the dynamically generated instances each time?
Will AWS copy the whole instance, assign it a new IP and spawn it as a new instance, or will I have to write some code to automate this process?
Plus will it not be an overhead to pull code from git and deploy every time a new instance is spawned?
A detailed explanation or direction towards some resources on the topic will be greatly appreciated.
You do this in the Auto Scaling group's launch configuration. There is a UserData section in the LaunchConfiguration in CloudFormation where you write a script that is run whenever the ASG scales up and launches a new instance.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html#cfn-as-launchconfig-userdata
This is the same as the UserData section for an EC2 instance. You can use lifecycle hooks to tell the ASG not to put the EC2 instance into service until everything you want configured is set up.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-as-lifecyclehook.html
I linked the CloudFormation pages, but you may be using some other CI/CD tool for deploying your infrastructure; hopefully that gets you started.
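To make the UserData part concrete, here is a rough sketch of the same idea using boto3 instead of CloudFormation; the launch configuration name, AMI ID, instance type and repository URL are placeholders you would replace with your own:

    import boto3

    # Placeholder script run on every instance the ASG launches: pull the
    # latest application code and start the web server.
    USER_DATA = """#!/bin/bash
    git clone https://example.com/your-org/your-app.git /var/www/app
    systemctl restart httpd
    """

    autoscaling = boto3.client("autoscaling")
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-launch-config",  # placeholder name
        ImageId="ami-0123456789abcdef0",              # placeholder AMI ID
        InstanceType="t3.medium",
        UserData=USER_DATA,  # boto3 base64-encodes the script for this call
    )

In practice you would bake as much as possible into the AMI and keep the UserData script short, since it runs on every scale-up.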
To start, do check AWS CloudFormation. You will be creating templates to define how the infrastructure of your application works ~ infrastructure as code. With these templates in place, you can roll out an update to your infrastructure by pushing changes to your templates and/or to your application code.
In my current project, we have a GitHub repository dedicated to these infrastructure templates and a separate repository for our application code. Create a pipeline for creating AWS resources that rolls out an update to AWS every time you push to the repository on a specific branch.
Create an infrastructure pipeline
Have the first stage of the pipeline trigger a build whenever there are code changes to your infrastructure templates. See AWS CodePipeline and also AWS CodeBuild. These aren't the only AWS resources you'll need, but they are probably the main ones, aside from this being done in a CloudFormation template as mentioned earlier.
how will I deploy the code on the dynamically generated instances each time?
Check how containers work; it will greatly supplement your understanding of how launching a new version of an application works. To begin, see Docker, but feel free to check any resources at your disposal.
Continuing with my current project: we do have a separate pipeline dedicated to our application, which also gets triggered after an infrastructure pipeline update. Our application pipeline builds a new version of our application via AWS CodeBuild; this creates an image that becomes a container ~ see the Docker documentation.
We have two sources that trigger an update rollout through our application pipeline: one is when there are changes to the infrastructure pipeline and it builds successfully, and the second is when there are code changes in our GitHub repository connected via AWS CodeBuild.
Check AWS Auto Scaling; this area covers the dynamic launching of new instances, shutting down instances when needed, and replacing unhealthy instances. See also AWS CloudWatch; you can define criteria with it to trigger scaling up/down and/or in/out.
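To give one concrete example of such criteria, a target-tracking scaling policy (sketched here with boto3; the group name is a placeholder) keeps average CPU around a target value and lets Auto Scaling add or remove instances for you:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep average CPU across the group near 50%; Auto Scaling launches or
    # terminates instances to stay close to the target.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-app-asg",  # placeholder group name
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 50.0,
        },
    )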
Will AWS copy the whole instance, assign it a new IP and spawn it as a new instance, or will I have to write some code to automate this process?
See AWS Elastic Load Balancing and also check out more on AWS Auto Scaling. As for the automation process, if you go with CloudFormation, instances and/or containers (depending on your design) will be managed gracefully.
Plus will it not be an overhead to pull code from git and deploy every time a new instance is spawned?
As mentioned earlier, having a pipeline for rolling out new versions of your application via CodeBuild creates an image with the new code changes, and when everything is ready it is deployed ~ it becomes a container. The old EC2 instance or the old container (depending on how you want your application deployed) is gracefully shut down after the new version of your application is up and running. This gives you zero downtime.
I have an environment in AWS where EC2 instances are in autoscaling mode, i.e. new instances spin up as per the load on deployed instances.
Now, if I want to integrate this environment with Jenkins, how can I push my code from GitHub to these EC2 instances where my application is deployed? With every change in my code version, GitHub should get the EC2 instances to deploy that same version, and every new instance should also be created with this updated version of the code, i.e. every auto-scaled instance must run the same code version. Please help.
I assume you have an executable version of your latest code on a deploy server. You can do this by having Jenkins deploy your code whenever a new commit is made on a specific branch in GitHub. Then all you need is an AMI for your Auto Scaling group that has a job/task which runs, say, every 5 minutes (based on how long a single task takes). This job/task fetches (copies) the code from the deploy server and then starts the application. As an example, in Windows Task Scheduler you can add two actions to a task: one for updating the code (e.g. a simple robocopy) and one for running the app.
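As a rough sketch of what that scheduled job could do (the deploy-server share, local folder and batch file are placeholders; the answer uses two Task Scheduler actions, but a single script works the same way):

    import subprocess

    # Placeholders: a network share on the deploy server holding the latest
    # build, and the local folder the application runs from.
    DEPLOY_SHARE = r"\\deploy-server\latest-build"
    APP_DIR = r"C:\app"

    # Mirror the latest build from the deploy server. Robocopy exit codes
    # below 8 mean success, so a non-zero exit code is not necessarily a failure.
    subprocess.run(["robocopy", DEPLOY_SHARE, APP_DIR, "/MIR"])

    # Start (or restart) the application with the freshly copied code.
    subprocess.run(["cmd", "/c", r"C:\app\run_app.bat"], check=True)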
I'm trying to reduce our AWS costs. What I am doing right now is terminating EC2 instances at 8 PM and launching them again at 8 AM. I was able to do this via Skeddly (http://www.skeddly.com/).
The problem is the code is not updated every time I launch an instance, because I'm just using an AMI. What I want to find out is: are there any services I can use to automatically deploy code with CodeDeploy every day at 8 AM, so that the instances run the latest code?
One way to do this is to create a Lambda function with a scheduled event: https://docs.aws.amazon.com/lambda/latest/dg/with-scheduled-events.html
In that Lambda you can call CodeDeploy with the AWS SDK to kick off a deployment. How you determine the "latest code" is up to you. If it comes from GitHub you can just link to a zip of the head of master, for example. Otherwise you need something to kick off an upload of your latest code to S3.
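A minimal sketch of such a Lambda, assuming the latest build is uploaded as a zip to S3; the application, deployment group, bucket and key names are placeholders:

    import boto3

    codedeploy = boto3.client("codedeploy")

    def handler(event, context):
        # Kick off a CodeDeploy deployment of the latest build artifact.
        response = codedeploy.create_deployment(
            applicationName="my-app",              # placeholder application
            deploymentGroupName="my-app-fleet",    # placeholder deployment group
            revision={
                "revisionType": "S3",
                "s3Location": {
                    "bucket": "my-deploy-bucket",  # placeholder bucket
                    "key": "latest/app.zip",       # placeholder key
                    "bundleType": "zip",
                },
            },
        )
        print("Started deployment:", response["deploymentId"])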
I have an Auto Scaling group that works great, with a launch configuration where I defined a user-data script that is executed when a new instance launches.
The user-data script updates the code base and generates the cache, which takes a few seconds. But as soon as the instance is "created" (and not yet "ready"), Auto Scaling adds it to the load balancer.
This is a problem because while the user-data script is running, the instance does not return a good response (basically, 500 errors are thrown).
I would like to avoid that. Of course, I saw this documentation: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/InstallingAdd
As with a standalone EC2 instance, you have the option of configuring instances launched into an Auto Scaling group using user data. For example, you can specify a configuration script using the User data field in the AWS Management Console, or the --userdata parameter in the AWS CLI.
If you have software that can't be installed using a configuration script, or if you need to modify software manually before Auto Scaling adds the instance to the group, add a lifecycle hook to your Auto Scaling group that notifies you when the Auto Scaling group launches an instance. This hook keeps the instance in the Pending:Wait state while you install and configure the additional software.
It looks like I'm not in this case. Also, managing the Pending:Wait hook from the user-data script seems complicated. There must be a simpler solution to my problem.
Thank you for your help!
EC2 instance user data does not use a lifecycle hook, so a newly launched instance can be brought into service before the script has finished executing.
Stopping your web server at the start of your user data script sounds a little unreliable to me, and therefore I would urge you to utilize the features AutoScaling provides that were designed to solve this very problem.
I have two suggestions:
Option 1:
Using lifecycle hooks isn't at all complicated, once you read through the docs. And in your user data, you can easily use the CLI to control the hook, check this out. In fact, a hook can be controlled from any supported language or scripting language.
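For example, the very last step of the user-data script can complete the lifecycle action once everything is configured. A minimal sketch with boto3 (the hook name, group name and region are placeholders; the CLI equivalent is aws autoscaling complete-lifecycle-action):

    import urllib.request
    import boto3

    # Instance ID from the instance metadata service (IMDSv1 shown for
    # brevity; use a session token if your instances require IMDSv2).
    instance_id = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/instance-id"
    ).read().decode()

    autoscaling = boto3.client("autoscaling", region_name="eu-west-1")  # placeholder region

    # Tell the ASG the instance is fully configured, so it leaves the
    # Pending:Wait state and is attached to the load balancer.
    autoscaling.complete_lifecycle_action(
        LifecycleHookName="launch-hook",  # placeholder hook name
        AutoScalingGroupName="my-asg",    # placeholder group name
        LifecycleActionResult="CONTINUE",
        InstanceId=instance_id,
    )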
Option 2:
If manually taking care of lifecycle hooks doesn't appeal to you, then I would recommend scrapping your user-data script and doing a workaround with AWS CodeDeploy. You could have CodeDeploy deploy nothing (e.g. an empty S3 folder), but use the deployment hook scripts to replace your user-data script. CodeDeploy integrates with Auto Scaling seamlessly and handles lifecycle hooks automatically. A newly launched instance won't be brought into service by Auto Scaling until a deployment has succeeded. Read the docs here and here for more info.
However, I would urge you to go with option 1. Lifecycle hooks were designed to solve the very problem you have. They're powerful, robust, awesome and free. Use them.
@Brooks said the easiest way to "wait" before the ELB serves the instance is to work with the ELB health status.
I solved my problem by shutting down the HTTP server at the start of the user-data script, so the ELB can't get a green health status and does not send clients to the instance. I restart the HTTP server at the end of the script; the health status is good again, so the ELB serves the instance.