Currently, I'm creating an AWS AMI manually from an EC2 instance, and I would like to automate the process as part of a Jenkins build.
I've configured the Jenkins CloudFormation plugin with credentials and used it to trigger a CloudFormation template that launches the EC2 instance. From here, how can I continue the automation so that the AMI gets created as part of the CloudFormation workflow?
Can someone help me with this?
This is an old question, but here is some info for anyone trying to do this kind of automation. You could use HashiCorp Packer to create the image, but if you know your way around Lambda and the AWS API, you do not need Packer.
You can create a new AMI by launching an instance from a source AMI, customizing it the way you want, and then calling the AWS API to make an AMI out of the instance. Here are the steps you might follow:
First, you need to find a source image. You can do this with the EC2 describe_images call and filters.
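For example, a small boto3 sketch of that lookup (the Amazon Linux 2 name pattern is only an example; use whatever filters match your base image):

```python
import boto3

ec2 = boto3.client("ec2")

# Find available Amazon Linux 2 images owned by Amazon.
response = ec2.describe_images(
    Owners=["amazon"],
    Filters=[
        {"Name": "name", "Values": ["amzn2-ami-hvm-*-x86_64-gp2"]},
        {"Name": "state", "Values": ["available"]},
    ],
)

# Sort by creation date and pick the most recent image.
images = sorted(response["Images"], key=lambda i: i["CreationDate"], reverse=True)
source_ami_id = images[0]["ImageId"]
print(source_ami_id)
```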
Once you have the image, you need to launch an instance from it; the boto3 call for that is run_instances.
While launching the instance, you will want to pass 'UserData' to it. Your user data may be a few simple lines that install packages, or it may do more advanced things. You can put it all into a script, host it in S3, and have UserData download and execute that script.
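A sketch of launching the instance with user data that pulls a customization script from S3 (bucket, instance profile, and instance type are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

source_ami_id = "ami-0123456789abcdef0"  # or the result of the describe_images lookup above

# User data that downloads and runs a customization script from S3.
user_data = """#!/bin/bash
aws s3 cp s3://my-build-bucket/customize.sh /tmp/customize.sh
bash /tmp/customize.sh
"""

response = ec2.run_instances(
    ImageId=source_ami_id,
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
    IamInstanceProfile={"Name": "my-build-profile"},  # needs S3 read access
)
instance_id = response["Instances"][0]["InstanceId"]
```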
Once you are done with your work on the instance, it is time to capture it as a new AMI.
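A sketch of the capture step, stopping the instance first for a consistent snapshot and cleaning up afterwards (the AMI name is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

instance_id = "i-0123456789abcdef0"  # from the run_instances sketch above

# Stop the instance first so the snapshot is consistent (optional but safer).
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Capture the customized instance as a new AMI.
image = ec2.create_image(InstanceId=instance_id, Name="my-custom-ami-v1")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# Clean up the build instance once the AMI is available.
ec2.terminate_instances(InstanceIds=[instance_id])
```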
So, how would you run these steps, and what is the glue?
You can use AWS Lambda to manage them. One Lambda can find the source AMI and launch an instance from it. Another Lambda can capture the image.
Once your instance is customized, you would trigger the Lambda that captures it as an AMI. You might do that by invoking the Lambda directly. Depending on your reusability requirements, you might instead trigger it from SNS or CloudWatch; in that case you would publish a message to your SNS topic or enable/trigger your CloudWatch rule.
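For the SNS variant, the trigger can be as small as this (topic ARN and message shape are assumptions; the capture Lambda would parse the instance id back out of the message it receives):

```python
import json
import boto3

sns = boto3.client("sns")

# Publish the id of the customized instance to the topic that triggers
# the capture Lambda.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:capture-ami",
    Message=json.dumps({"instance_id": "i-0123456789abcdef0"}),
)
```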
Your CloudFormation template would install these Lambdas and any other components that trigger them (the SNS topics and CloudWatch rules).
Related
Problem:
I have an EC2 instance running and I have made some modifications to it: installed Docker, set up directories for certs, etc. Now I want to recreate the same instance using infrastructure-as-code principles. Instead of remembering all the additions I have made and creating a template by hand, I am trying to find a way to export my current EC2 instance into a JSON or YAML format, so that I can terminate this instance and create another one equivalent to it.
I have tried:
aws ec2 describe-instances
Reading through the AWS CLI EC2 docs
Reading through the CloudFormation docs
Searched Google
Searched SO
Since you have no record of how the instance was set up, the only choice is to create an Amazon Machine Image (AMI). This creates an exact copy of the disk, so everything you have installed will be available to any new instances launched from the AMI. The CloudFormation template can then be configured to launch instances using this AMI.
If, on the other hand, you knew all the commands that needed to be run to configure the instance, then you could provide a User Data script that would run when new instances first boot. This would configure the instances automatically and is the recommended way to configure instances because it is easy to modify and allows instances to launch with the latest version of the Operating System.
Such a script can be provided as part of a CloudFormation template.
See: Running commands on your Linux instance at launch - Amazon EC2
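For illustration, a minimal sketch of launching such a stack with boto3; the AMI id, instance type, and setup commands in the embedded template are all placeholders to adapt:

```python
import boto3

# Everything in this template is illustrative: swap in your own AMI id,
# instance type, and setup commands.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0
      InstanceType: t3.micro
      UserData:
        Fn::Base64: |
          #!/bin/bash
          yum install -y docker
          systemctl enable --now docker
          mkdir -p /etc/myapp/certs
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="app-instance", TemplateBody=TEMPLATE)
```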
One option would be to create an AMI from the live instance and spin up a new CloudFormation stack using that AMI.
The other would be to import the existing resource into a stack: https://aws.amazon.com/blogs/aws/new-import-existing-resources-into-a-cloudformation-stack/
There is a tool (still in beta) developed by AWS called CloudFormer:
CloudFormer is a template creation beta tool that creates an AWS CloudFormation template from existing AWS resources in your account. You select any supported AWS resources that are running in your account, and CloudFormer creates a template in an Amazon S3 bucket.
CloudFormer itself is an AWS-managed template. Once you launch it, the template creates an AWS::EC2::Instance for you along with a number of other related resources. You then access that instance through a URL in your browser, and a wizard guides you from there.
Its tutorial even shows how to create a CloudFormation template from an existing EC2 instance.
Import the EC2 instance into CloudFormation, then copy its template.
Read more: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html
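For reference, a hedged boto3 sketch of that import flow (the template behind the URL must already describe the instance, with DeletionPolicy: Retain; all names and ids are placeholders):

```python
import boto3

cfn = boto3.client("cloudformation")

# Create an IMPORT change set that binds the existing instance to the
# logical resource declared in the template.
cfn.create_change_set(
    StackName="imported-instance",
    ChangeSetName="import-ec2",
    ChangeSetType="IMPORT",
    TemplateURL="https://my-bucket.s3.amazonaws.com/template.yaml",
    ResourcesToImport=[{
        "ResourceType": "AWS::EC2::Instance",
        "LogicalResourceId": "AppInstance",
        "ResourceIdentifier": {"InstanceId": "i-0123456789abcdef0"},
    }],
)

# Wait for the change set to be ready, then execute it.
cfn.get_waiter("change_set_create_complete").wait(
    ChangeSetName="import-ec2", StackName="imported-instance"
)
cfn.execute_change_set(ChangeSetName="import-ec2", StackName="imported-instance")
```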
Background:
We have several legacy applications running in AWS EC2 instances while we develop a new suite of applications. Our company updates its approved AMIs on a monthly basis and requires all running instances to use the new AMIs. This forces us to regularly tear down the instances and rebuild them with the new AMIs. To comply with these requirements, all infrastructure and application deployment must be fully automated.
Approach:
To achieve automation, I'm using Terraform to build the infrastructure and Ansible to deploy the applications. Terraform creates the EC2 instances, security groups, SSH keys, load balancers, Route 53 records, and an inventory file for Ansible that includes the IP addresses of the created instances. Ansible then deploys the legacy applications to the hosts in the inventory file. I have a shell script that executes first the Terraform scripts and then the Ansible playbooks.
Question:
To achieve full automation, I need to run this process whenever an AMI is updated. The current AMI release is stored in Parameter Store, and Terraform can detect when there is a change, but I still need to trigger the job manually. We also have an AWS SNS topic I can subscribe to for notifications of new AMI releases. My initial thought was to simply put the Terraform/Ansible scripts on an EC2 instance and have a cron job run them monthly. This would likely work, but I wonder if it is the best approach. For starters, I would need an EC2 instance, which itself would need to be updated with new AMIs, so unless I have another process for that I would need to do it manually. Second, although our AMIs could potentially be updated monthly, sometimes they are not, so I would sometimes run the jobs unnecessarily. Of course I could somehow detect whether the AMI ID has changed and run the job accordingly, but it seems like a better approach would be to react to the AWS SNS topic.
Is it possible to run the Terraform/Ansible scripts without keeping them on a running EC2 instance? And how can I trigger the scripts in response to the SNS topic?
Here are some options I was testing to trigger an Ansible playbook in response to webhooks from Alertmanager, to get some form of self-healing (they might be useful for you):
Run Ansible in AWS Lambda and front it with API Gateway as the webhook that Alertmanager triggers: https://medium.com/@jacoelho/ansible-in-aws-lambda-980bb8b5791b
SNS receiver in AWS -> subscriber -> AWS Systems Manager, which supports Ansible (see the first sketch after this list):
https://aws.amazon.com/blogs/mt/keeping-ansible-effortless-with-aws-systems-manager/
Alertmanager targets a Jenkins webhook -> the Jenkins pipeline uses the Ansible plugin to execute playbooks:
https://medium.com/appgambit/ansible-playbook-with-jenkins-pipeline-2846d4442a31
Front the Ansible server with a webhook server that executes ansible commands as post actions. This can be a Flask-based web server (see the second sketch after this list) or the webhook tool below:
https://rubyfaby.medium.com/auto-remediation-with-prometheus-alert-manager-and-ansible-e4d7bdbb6abf
https://github.com/adnanh/webhook
You can also use AWX (Ansible Tower in its open-source form), which exposes the Ansible server as an API endpoint (webhook); currently only GitHub and GitLab webhooks are supported.
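Two sketches for the options above. First, the Systems Manager route: a hedged example of running a playbook on tagged instances via the AWS-ApplyAnsiblePlaybooks document (repo, tag key, and playbook name are placeholders):

```python
import boto3

ssm = boto3.client("ssm")

# Run a playbook from GitHub on all instances tagged Role=legacy-app.
ssm.send_command(
    Targets=[{"Key": "tag:Role", "Values": ["legacy-app"]}],
    DocumentName="AWS-ApplyAnsiblePlaybooks",
    Parameters={
        "SourceType": ["GitHub"],
        "SourceInfo": ['{"owner": "my-org", "repository": "playbooks", "getOptions": "branch:main"}'],
        "InstallDependencies": ["True"],
        "PlaybookFile": ["site.yml"],
    },
)
```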
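Second, the webhook-server route: a minimal Flask receiver that runs a placeholder playbook when Alertmanager posts an alert (no auth here, illustration only; add authentication before using anything like this for real):

```python
import subprocess
from flask import Flask, request

app = Flask(__name__)

# Alertmanager POSTs a JSON payload with an "alerts" list; we run a
# remediation playbook per alert. remediate.yml is a placeholder and
# ansible-playbook is assumed to be on PATH.
@app.route("/alert", methods=["POST"])
def handle_alert():
    payload = request.get_json(force=True)
    for alert in payload.get("alerts", []):
        name = alert.get("labels", {}).get("alertname", "unknown")
        subprocess.run(["ansible-playbook", "remediate.yml", "-e", f"alertname={name}"])
    return "ok", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```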
I have a use case in the Amazon cloud: I'm using a Fargate cluster and CloudFormation.
I want to do continuous deployment, i.e., on a new image upload trigger I want to update the CloudFormation stack with the new image, and I also want to run this automated deployment on a manual trigger when the client asks for it.
What should I use for continuous deployment, AWS CodeDeploy or AWS Lambda?
AWS CodeDeploy has a CloudFormation provider, but with limited options and less control, I believe.
AWS Lambda, on the other hand, has full control over the CloudFormation client through the boto3 API.
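For illustration, a rough sketch of such a Lambda reacting to an ECR push event and updating the stack (stack name, parameter name, and the EventBridge wiring are assumptions for this sketch):

```python
import boto3

cfn = boto3.client("cloudformation")

def lambda_handler(event, context):
    # Shape of an EventBridge "ECR Image Action" push event.
    detail = event["detail"]
    image_uri = (
        f"{event['account']}.dkr.ecr.{event['region']}.amazonaws.com/"
        f"{detail['repository-name']}:{detail['image-tag']}"
    )
    # Re-deploy the stack with the new image, keeping the existing template.
    cfn.update_stack(
        StackName="fargate-service",
        UsePreviousTemplate=True,
        Parameters=[{"ParameterKey": "ImageUri", "ParameterValue": image_uri}],
        Capabilities=["CAPABILITY_IAM"],
    )
    return {"deployed": image_uri}
```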
I also read somewhere that when you hit limitations in CodeDeploy or CodePipeline, you can integrate Lambda to work around them. So why not use Lambda in the first place for continuous deployment?
After doing some research I'm fairly convinced that AWS Lambda is the better fit here compared to AWS CodeDeploy; however, I'm open to comments and suggestions.
You can use both of them together to achieve a complete CI/CD implementation.
When an image gets uploaded, the Lambda is triggered; the Lambda holds your configuration and parameters.
Using those, it calls CodeDeploy, and the new image gets deployed to your Fargate cluster.
You can also cover your second need, a manual trigger when the client wants it, with this implementation: you can invoke the Lambda manually, passing parameters at runtime.
I hope this helps you.
I am building a CI pipeline with AWS CodePipeline. I'm using CodeBuild to fetch my code from a repo, build a Docker image, and push the image to ECR. The source for my CodePipeline is my ECR repo, and the pipeline is triggered when an image is updated.
Now, here's the functionality I am looking for. When a new image is pushed to ECR, I want to create an EC2 instance and then deploy the new image to that instance. When the app in the image has completed its task, i.e., done something and pushed the results to S3, I want to terminate the instance. It could take hours to days before the task is complete.
Is CodeDeploy the right tool to use to deploy the ECR image to an EC2 instance for this use case? I see from the docs that CodeDeploy requires an already running instance to deploy to. I need to create one on the fly before CodeDeploy is initiated. Should I add a step in the CodePipeline to trigger a lambda that creates an instance before CodeDeploy gets run?
Any guidance would be much appreciated!
CloudTrail logs a PutImage event that you can use to drive your pipeline. I prefer producing artifacts after specific steps in the build pipeline and then having a Lambda function react to the object-created event. Your Lambda function could then make the necessary calls to spin up EC2 instances. Your instance could then run its job and call Lambda again, which could tear it down (see the sketch below). It sounds like you need an on-demand worker; services like AWS Batch or ECS might provide this functionality out of the box.
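A rough sketch of that worker lifecycle as two Lambda handlers (AMI id, instance profile, and the event shapes are assumptions):

```python
import boto3

ec2 = boto3.client("ec2")

# Spin-up handler: reacts to the object-created event and starts a worker.
def start_worker(event, context):
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # AMI baked with your app image
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        IamInstanceProfile={"Name": "worker-profile"},
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": "on-demand-worker"}],
        }],
    )
    return response["Instances"][0]["InstanceId"]

# Teardown handler: the instance invokes this with its own id once the job
# is done and the results are in S3.
def stop_worker(event, context):
    ec2.terminate_instances(InstanceIds=[event["instance_id"]])
```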
I'm trying to reduce our AWS costs. What I am doing right now is terminating EC2 instances at 8pm and launching them again at 8am. I was able to do this via Skeddly (http://www.skeddly.com/).
The problem is that the code is not updated every time I launch an instance, because I'm just using an AMI. What I want to find out is: are there any services I can use to auto-deploy code with CodeDeploy every day at 8am, so that the instances are aligned with the latest code?
One idea for how you can do this: create a Lambda function with a scheduled event: https://docs.aws.amazon.com/lambda/latest/dg/with-scheduled-events.html
In that Lambda you can call CodeDeploy with the AWS SDK to kick off a deployment. How you determine what the "latest code" is, is up to you. If it comes from GitHub, you can just link to a zip of the head of master, for example. Otherwise you need something that uploads your latest code to S3.
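A minimal sketch of that scheduled Lambda deploying a bundle from S3 (application, deployment group, bucket, and key are placeholders):

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Runs on the 8am schedule and kicks off a deployment of the latest bundle.
def lambda_handler(event, context):
    codedeploy.create_deployment(
        applicationName="my-app",
        deploymentGroupName="production",
        revision={
            "revisionType": "S3",
            "s3Location": {
                "bucket": "my-deploy-bucket",
                "key": "latest/app.zip",
                "bundleType": "zip",
            },
        },
    )
```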