Scheduled Deployment for CodeDeploy - amazon-web-services

I'm trying to reduce our AWS costs. What I am doing right now is terminating EC2 instances at 8pm and launching them again at 8am. I was able to do this via Skeddly (http://www.skeddly.com/).
The problem is that the code is not updated every time I launch an instance, because I'm just using an AMI. What I want to find out is whether there are any services I can use to automatically deploy code with CodeDeploy every day at 8am, so that the instances are aligned with the latest code.

One way to do this is to create a Lambda function with a scheduled event: https://docs.aws.amazon.com/lambda/latest/dg/with-scheduled-events.html
In that Lambda you can call CodeDeploy with the AWS SDK to kick off a deployment. How you determine what the "latest code" is, is up to you. If it lives in GitHub you can simply point to a zip of the head of master, for example. Otherwise you need something to upload your latest code to S3 first.
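A minimal sketch of such a scheduled Lambda, assuming a GitHub-backed CodeDeploy revision (the application, deployment group, and repository names below are placeholders, not values from the question):

```python
import boto3

codedeploy = boto3.client("codedeploy")

def handler(event, context):
    # Invoked by the CloudWatch Events / EventBridge schedule, e.g. cron(0 8 * * ? *).
    response = codedeploy.create_deployment(
        applicationName="my-app",                    # placeholder application
        deploymentGroupName="my-deployment-group",   # placeholder deployment group
        revision={
            "revisionType": "GitHub",
            "gitHubLocation": {
                "repository": "my-org/my-repo",      # placeholder repository
                # CodeDeploy expects a specific commit SHA here; resolving the
                # current head of master (e.g. via the GitHub API) is up to you.
                "commitId": "<latest-commit-sha>",
            },
        },
    )
    return response["deploymentId"]
```

The 8am schedule itself lives on the CloudWatch Events rule that triggers the function, not in the function code.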

Related

AWS architecture to create an EC2 Windows instance (from an AMI), run my_command.bat, process, and terminate the EC2 instance monthly

I am not sure if this is the right platform to ask this type of question, but this is my last hope as I am new to AWS.
I have a requirement:
Given: AMI ID (has my_software installed, my_command.bat created to run my_software using CLI).
ToDo (Repeats every month):
Create an EC2 instance (Windows) from the given AMI-ID (Windows).
Run my_command.bat file in the instance which runs my_software which generates report.csv and log.csv files.
Send report.csv to my_s3_bucket and log.csv to CloudWatch.
Terminate the instance. (Stop is not enough).
Please suggest an architecture for this.
I do something similar and it works as follows:
Create a Docker image which runs your command (it should write logs and upload to S3 as required).
Create a Batch compute environment which can pull the image and run the container.
Create a CloudWatch Events rule to run the job on your desired cron schedule.
The event target is an SQS queue.
Create a Lambda function which responds to events on the SQS queue.
The Lambda function submits the Batch job (see the sketch after these steps).
Note that in theory you can call the Lambda function directly from the CloudWatch Events rule, but I have to use a queue due to a private subnet.
Batch on Fargate is a good fit for this, because it only uses resources while your job is running. If you need to use EC2, ensure that MinvCpus is zero so that the instance terminates after it has run. That can take a few minutes, which you will be charged for, so Fargate is probably the better fit.
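A minimal sketch of the Lambda that submits the Batch job (the job name, queue, and job definition below are placeholders):

```python
import boto3

batch = boto3.client("batch")

def handler(event, context):
    # Invoked by the SQS trigger; submit one Batch job per queued message.
    for record in event["Records"]:
        batch.submit_job(
            jobName="monthly-report",            # placeholder job name
            jobQueue="my-job-queue",             # placeholder job queue
            jobDefinition="my-job-definition",   # placeholder definition pointing at the Docker image
        )
```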

AWS CodePipeline deploy process

I am building a CI pipeline with AWS CodePipeline. I'm using CodeBuild to fetch my code from a repo, build a Docker image, and push the image to ECR. The source for my CodePipeline is my ECR repo, and the pipeline is triggered when an image is updated.
Now, here's the functionality I am looking for. When a new image is pushed to ECR, I want to create an EC2 instance and then deploy the new image to that instance. When the app in the image has completed its task, i.e., done something and pushed the results to S3, I want to terminate the instance. It could take hours to days before the task is complete.
Is CodeDeploy the right tool to use to deploy the ECR image to an EC2 instance for this use case? I see from the docs that CodeDeploy requires an already running instance to deploy to. I need to create one on the fly before CodeDeploy is initiated. Should I add a step in the CodePipeline to trigger a lambda that creates an instance before CodeDeploy gets run?
Any guidance would be much appreciated!
CloudTrail supports logging a PutImage event that you can use to drive your pipeline. I prefer producing artifacts after specific steps in the build pipeline and then having a Lambda function that reacts to an object-created event. Your Lambda function could then make the necessary calls to spin up EC2 instances. Your instance could then run the job and call Lambda again, which could tear it down. It sounds like you need an on-demand worker; services like AWS Batch or ECS might provide this functionality out of the box.
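As a rough sketch of that Lambda reaction (the AMI ID, instance type, and tag below are hypothetical, and the instance is assumed to tear itself down when its task finishes):

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Fired by the S3 object-created notification for the build artifact.
    for record in event["Records"]:
        artifact_key = record["s3"]["object"]["key"]
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder AMI with your runtime baked in
            InstanceType="t3.medium",          # placeholder instance type
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "artifact", "Value": artifact_key}],
            }],
        )
```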

How to deploy code to multiple instances in an Amazon EC2 Auto Scaling group?

So we are launching an ecommerce store built on Magento. We are looking to deploy it on Amazon EC2 instances, using RDS as the database service and Amazon Auto Scaling with an Elastic Load Balancer to scale the application when needed.
What I don't understand is this:
I have installed and configured my production Magento environment on an EC2 instance (the database is in RDS). This much is working fine. But now, when I want to dynamically scale the number of instances,
how will I deploy the code to the dynamically generated instances each time?
Will AWS copy the whole instance, assign it a new IP, and spawn it as a new instance, or will I have to write some code to automate this process?
Plus will it not be an overhead to pull code from git and deploy every time a new instance is spawned?
A detailed explanation or direction towards some resources on the topic will be greatly appreciated.
You do this in the Auto Scaling group's launch configuration. There is a UserData section in the LaunchConfiguration in CloudFormation where you write a script that is run whenever the ASG scales up and deploys a new instance.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html#cfn-as-launchconfig-userdata
This is the same as the UserData section on an EC2 instance. You can use lifecycle hooks that tell the ASG not to put the EC2 instance into service until everything you want configured is set up.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-as-lifecyclehook.html
I linked the CloudFormation pages, but you may be using some other CI/CD tool for deploying your infrastructure; hopefully that gets you started.
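To illustrate the same idea outside of CloudFormation (the names, AMI, and deploy commands in the UserData below are placeholders), a launch configuration with a UserData deploy script could be created with boto3 like this:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# UserData runs on every instance the ASG launches, so each new instance
# pulls the latest code before it starts serving traffic.
user_data = """#!/bin/bash
cd /var/www/magento
git pull origin master            # placeholder: fetch the latest application code
systemctl restart php-fpm nginx   # placeholder: restart the web stack
"""

autoscaling.create_launch_configuration(
    LaunchConfigurationName="magento-lc-v1",   # placeholder name
    ImageId="ami-0123456789abcdef0",           # placeholder AMI
    InstanceType="t3.large",                   # placeholder instance type
    UserData=user_data,
)
```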
To start, check out AWS CloudFormation. You will be creating templates that describe how the infrastructure of your application works, i.e. infrastructure as code. With these templates in place, you can roll out an update to your infrastructure by pushing changes to your templates and/or to your application code.
In my current project, we have a GitHub repository dedicated to these infrastructure templates and a separate repository for our application code. Create a pipeline for creating AWS resources that rolls out an update to AWS every time you push to the repository on a specific branch.
Create an infrastructure pipeline
Have the first stage of the pipeline trigger a build whenever there are code changes to your infrastructure templates. See AWS CodePipeline and AWS CodeBuild. These aren't the only AWS resources you'll need, but they are probably the main ones, aside from all of this being defined in a CloudFormation template as mentioned earlier.
how will I deploy the code on the dynamically generated instances each time?
Check how containers work; it will greatly supplement your understanding of how launching a new version of an application works. To begin, see Docker, but feel free to use any resources at your disposal.
Continuing with my current project: we do have a separate pipeline dedicated to our application, and it is also triggered after our infrastructure pipeline updates. Our application pipeline builds a new version of our application via AWS CodeBuild; this creates an image that will become a container, as described in the Docker documentation.
We have two sources that trigger an update through our application pipeline: one is when there are changes to the infrastructure pipeline and it builds successfully, and the other is when there are code changes in our GitHub repository connected via AWS CodeBuild.
Check AWS Auto Scaling; this area covers the dynamic launching of new instances, shutting down instances when needed, and replacing unhealthy instances. See also AWS CloudWatch: you can define criteria with it to trigger scaling up/down and/or in/out.
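For instance (the group name and target value below are made up), a target-tracking policy that scales on average CPU can be attached like this:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale the group out/in so that average CPU stays around 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="magento-asg",    # placeholder group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,
    },
)
```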
Will AWS copy the whole instance, assign it a new IP, and spawn it as a new instance, or will I have to write some code to automate this process?
See AWS Elastic Load Balancing and also check out more on AWS Auto Scaling. As for the automation process, if you do go with CloudFormation, instances and/or containers (depending on your design) will be managed gracefully.
Plus will it not be an overhead to pull code from git and deploy every time a new instance is spawned?
As mentioned earlier, having a pipeline that rolls out new versions of your application via CodeBuild will create an image with the new code changes, and when everything is ready it will be deployed as a container. The old EC2 instance or the old container (depending on how you want your application deployed) is gracefully shut down after the new version of your application is up and running. This gives you zero downtime.

Continuous Integration and Delivery in AWS

I am figuring out the best approach to perform the following steps in an automated way:
Code is committed in GitLab, then a trigger implemented in AWS gets notified, and a script on an AWS instance pulls the new code from GitLab and deploys it.
I want this to be implemented to the instances launched through an auto-scaling group.
I know I can make use of user-data commands, but that would only make sure the fresh code is pulled when the instance is launched. Once an instance is running, if a code check-in happens, the new code won't be pushed to the server.

AWS AMI Automation using Jenkins and Cloud Formation

Right now I'm creating an AWS AMI manually from an EC2 instance, and I would like to automate the process using a Jenkins build process.
I've configured the jenkins-cloudformation plugin with the credentials and tried to trigger the CloudFormation template to launch the EC2 instance. From here, how can I proceed with the automation process to create the AMI within the CloudFormation template?
Can someone help me with this?
This is an old question, but here is some info for anyone trying to do this kind of automation. You might use HashiCorp Packer for creating the image but, if you know your way around Lambdas and the AWS API, you do not need Packer.
You can create a new AMI by launching an instance from a source AMI, customizing it the way you want, and then calling the AWS API to make an AMI out of the instance. Here are the steps you might follow:
First, you need to find a source image. You can apply filters to aws ec2 describe_images to do this.
Once you have the image, you need to launch an instance from it. The boto3 run_instances API can make the call.
While launching the instance, you will want to pass UserData to it. Your user data may be a few simple lines that install packages, or it may do more advanced things. You can put it all into a script, host it in S3, and have UserData download and execute your script.
Once you are done with your work on the instance, it is time to capture it as a new AMI.
So, how would you do these and where is the glue?
You can use AWS Lambda to manage these steps. One Lambda can find the source AMI and launch an instance from it. Another Lambda can capture the image.
Once your instance is customized, you would trigger the Lambda that captures it as an AMI. You might do that by invoking the Lambda directly. Depending on your re-usability requirements, you might want to trigger that Lambda from SNS or CloudWatch; in that case you would send a message to your SNS topic or enable/trigger your CloudWatch rule.
Your CloudFormation template would install these Lambdas and any other components that trigger them (SNS and CloudWatch).
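A minimal sketch of the capture Lambda (the event payload field and AMI naming below are assumptions):

```python
import boto3

ec2 = boto3.client("ec2")

def capture_handler(event, context):
    # Invoked (e.g. via SNS or a CloudWatch rule) once the instance is customized.
    instance_id = event["instance_id"]  # hypothetical payload field
    image = ec2.create_image(
        InstanceId=instance_id,
        Name=f"custom-ami-{instance_id}",
        Description="AMI captured from a customized instance",
    )
    # Terminate the source instance only after the image becomes available,
    # e.g. from another Lambda or a waiter outside this function.
    return image["ImageId"]
```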