AWS CodeDeploy vs AWS Lambda - amazon-web-services

I have a use case in the Amazon cloud: I'm using a Fargate cluster and CloudFormation.
I want to do continuous deployment, i.e. when a new image is uploaded I want to update the CloudFormation stack with that new image, and I also want to run the same automated deployment on a manual trigger when the client wants it.
What should I use for continuous deployment, AWS CodeDeploy or AWS Lambda?
AWS CodeDeploy has a CloudFormation provider, but with limited options and less control, I believe.
AWS Lambda has full control over the CloudFormation client through the boto API.
I also read somewhere that when you hit limitations in CodeDeploy or CodePipeline you can integrate Lambda to work around them. So why not use Lambda in the first place, for continuous deployment only?
I'm fairly convinced about AWS Lambda over AWS CodeDeploy after doing some research; however, I'm open to comments and suggestions.

You can use both of them together to achieve a complete CI/CD implementation.
When an image gets uploaded, the Lambda is triggered; the Lambda holds your configuration and parameters.
Using those, it can call CodeDeploy to deploy the new ECR image to your Fargate cluster.
You can also cover your second requirement, the manual trigger when the client wants it, with this implementation:
a Lambda can be invoked manually, with parameters passed at runtime.
I hope this helps you.
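For the Lambda-to-CloudFormation variant described in the question (updating the stack directly with boto3 when a new image lands in ECR), a minimal sketch could look like the following. It assumes an EventBridge rule forwards ECR push events to the Lambda and that your stack exposes the container image URI as a parameter; STACK_NAME and ContainerImageUri are placeholder names:

```python
# Hypothetical Lambda: triggered by an EventBridge rule on ECR push events,
# updates an existing CloudFormation stack so Fargate picks up the new image.
import os
import boto3

cfn = boto3.client("cloudformation")
sts = boto3.client("sts")

STACK_NAME = os.environ.get("STACK_NAME", "my-fargate-stack")        # placeholder
IMAGE_PARAM = os.environ.get("IMAGE_PARAM", "ContainerImageUri")      # placeholder parameter name


def handler(event, context):
    detail = event.get("detail", {})              # EventBridge "ECR Image Action" detail
    repo = detail["repository-name"]
    tag = detail.get("image-tag", "latest")
    account = sts.get_caller_identity()["Account"]
    region = os.environ["AWS_REGION"]
    image_uri = f"{account}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"

    # Reuse the existing template; only swap the image parameter.
    cfn.update_stack(
        StackName=STACK_NAME,
        UsePreviousTemplate=True,
        Parameters=[{"ParameterKey": IMAGE_PARAM, "ParameterValue": image_uri}],
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    return {"stack": STACK_NAME, "image": image_uri}
```

The manual trigger is then just a direct Invoke of the same function with a hand-crafted event, which covers the "when the client wants" case.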

Related

AWS Lambda Container deployment without SAM

Is it possible to deploy AWS Lambda containers without using SAM?
Every article I found on the internet suggests using SAM to deploy.
Since SAM is a wrapper over AWS CloudFormation, I want to use only CloudFormation YAML to deploy Lambda containers.
As you already know, you need to build a Docker image of your Lambda application and push it to AWS's Docker registry, ECR. From there, there are several ways to deploy your Lambda:
1- Use the AWS Console: go to Lambda in the GUI, choose "Container image" as the option, and provide the ECR image URI (no CloudFormation is used this way).
2- Create a SAM template and then use the AWS CLI or the AWS Console to set up a CloudFormation stack; SAM is compiled to CloudFormation later in the process.
3- Directly create a CloudFormation template and then use the CLI or the AWS Console to deploy your Lambda (a sketch of this is below).
4- Use the CDK to do your deployment.
And there may be many other choices and methods; depending on what exactly you want to do, ask for more specific detail.
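For option 3, the template side boils down to an AWS::Lambda::Function resource with PackageType: Image. A rough sketch, driving a plain CloudFormation deployment from Python with boto3 (the image URI and role ARN are placeholders):

```python
# Hypothetical deployment of a container-image Lambda with plain CloudFormation (no SAM).
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MyContainerLambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: my-container-lambda                                          # placeholder
      PackageType: Image
      Code:
        ImageUri: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest    # placeholder
      Role: arn:aws:iam::123456789012:role/my-lambda-role                        # placeholder
      MemorySize: 512
      Timeout: 30
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="container-lambda-stack", TemplateBody=TEMPLATE)
cfn.get_waiter("stack_create_complete").wait(StackName="container-lambda-stack")
```

The same TemplateBody can of course live in a standalone .yaml file and be deployed with the aws cloudformation CLI instead.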

Terraform: find out if an AWS service is supported in the targeted region

We are using CodePipeline to deploy our application onto AWS EC2 nodes.
However, CodePipeline is not supported in all AWS regions, which causes our Terraform deployment to fail.
In the regions that lack AWS CodePipeline support, I would like to fall back to a user data script on the EC2 nodes.
Is there any way to detect, through Terraform, whether the CodePipeline service is supported in the targeted region?
AWS documents the CodePipeline endpoints here: https://docs.aws.amazon.com/general/latest/gr/codepipeline.html
My hypothetical solution is below:
Run a curl command via local-exec, or use the http data source, to hit the endpoint in the targeted region; the endpoints follow the pattern https://codepipeline.<InsertTargetedRegion>.amazonaws.com
Based on the result of step 1, make the decision: if the endpoint is reachable, create the AWS CodePipeline and its downstream resources; if it is not reachable, create the EC2 launch configuration with the user data script and drop the CodePipeline.
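For illustration, the probe I have in mind would be roughly the following (whether an unreachable endpoint is a reliable "not supported" signal is exactly what I am unsure about):

```python
# Hypothetical reachability probe for the regional CodePipeline endpoint.
# A hostname that does not resolve is treated as "service not available here".
import socket

def codepipeline_reachable(region: str) -> bool:
    host = f"codepipeline.{region}.amazonaws.com"
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

print(codepipeline_reachable("eu-west-1"))
```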
The other, somewhat clumsy, solution I can think of is to keep a Terraform list of the regions that do not support CodePipeline and make the decision based on that.
However, this clumsy solution requires human effort (checking whether a region supports AWS CodePipeline and updating the list) and touching the Terraform configuration every now and then.
I am wondering if there is any other way to know whether the targeted region supports CodePipeline.
Thank You.
I think that having a static list of supported regions is simply the easiest and most direct way of knowing where the script can run. Then the logic is quite easy: if the current region is supported, continue; if not, error out and stop. Any other logic will be cumbersome and unnecessary.
Yes, you can use a static list, but is it a scalable solution? How would you track when a new region is added? I think this link will help you:
https://aws.amazon.com/blogs/aws/new-query-for-aws-regions-endpoints-and-more-using-aws-systems-manager-parameter-store/
With the AWS CLI (or any SDK) you can query service availability by region.
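Concretely, the public parameters under /aws/service/global-infrastructure (described in the blog post above) list the regions for each service, so a check could look roughly like this with boto3; the aws ssm get-parameters-by-path CLI call works the same way:

```python
# Query the public "global infrastructure" SSM parameters to see
# which regions CodePipeline is available in.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

def regions_for_service(service: str) -> set:
    path = f"/aws/service/global-infrastructure/services/{service}/regions"
    regions = set()
    for page in ssm.get_paginator("get_parameters_by_path").paginate(Path=path):
        regions.update(p["Value"] for p in page["Parameters"])
    return regions

print("eu-west-1" in regions_for_service("codepipeline"))
```

In Terraform, the aws_ssm_parameter data source (or a by-path variant, if your provider version offers one) can read the same parameters, so the pipeline-vs-userdata decision no longer needs a hand-maintained region list.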

AWS CodePipeline deploy process

I am building a CI pipeline with AWS CodePipeline. I'm using CodeBuild to fetch my code from a repo, build a Docker image and push the image to ECR. The source for my CodePipeline is my ECR repo, and the pipeline is triggered when an image is updated.
Now, here's the functionality I am looking for. When a new image is pushed to ECR, I want to create an EC2 instance and then deploy the new image to that instance. When the app in the image has completed its task, i.e. done something and pushed the results to S3, I want to terminate the instance. It could take hours to days before the task is complete.
Is CodeDeploy the right tool to use to deploy the ECR image to an EC2 instance for this use case? I see from the docs that CodeDeploy requires an already running instance to deploy to. I need to create one on the fly before CodeDeploy is initiated. Should I add a step in the CodePipeline to trigger a lambda that creates an instance before CodeDeploy gets run?
Any guidance would be much appreciated!
CloudTrail supports logging a PutImage event that you can use to drive your pipeline. I prefer producing artifacts after specific steps in the build pipeline and then having a Lambda function that reacts to an object-created event. Your Lambda function could then make the necessary calls to spin up EC2 instances. Your instance could then run the job and call Lambda again, which could tear it down. It sounds like you need an on-demand worker. Services like AWS Batch or ECS might be able to provide this functionality out of the box.
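If you go the Lambda route, the "spin up a worker" part is a small boto3 call. A sketch, in which the AMI ID, instance profile, and bootstrap script are placeholders:

```python
# Hypothetical Lambda: reacts to an S3 "object created" event and launches
# a worker instance that pulls the new image and runs the job.
import boto3

ec2 = boto3.client("ec2")

USER_DATA = """#!/bin/bash
# placeholder bootstrap: log in to ECR, pull the image, run the job,
# push results to S3, then shut the machine down when finished.
shutdown -h +1
"""

def handler(event, context):
    record = event["Records"][0]                      # S3 event structure
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",              # placeholder AMI with Docker installed
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        IamInstanceProfile={"Name": "worker-profile"},   # placeholder
        UserData=USER_DATA,
        InstanceInitiatedShutdownBehavior="terminate",   # instance disappears when the job shuts it down
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "source-object", "Value": f"{bucket}/{key}"}],
        }],
    )
    return resp["Instances"][0]["InstanceId"]
```

With InstanceInitiatedShutdownBehavior set to terminate, the job can simply shut the machine down when it finishes, which removes the need for a separate tear-down Lambda (though AWS Batch, as mentioned, gives you this lifecycle out of the box).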

Configure serverless framework not to upload to S3?

I need to deploy a serverless function to AWS Lambda using the Serverless Framework. Serverless uses AWS CloudFormation to build the stack completely and uploads the package to S3. It uploads to S3 by default, but the intended file is less than 10 MB and could be attached to AWS Lambda directly. How do I configure serverless.yml to achieve this?
This is not possible.
You've asked Serverless to create a CloudFormation template that creates some Lambdas. When AWS executes the template, it executes it in the cloud, away from your computer's local files. That's why your code is packaged, uploaded to S3, and made available for CloudFormation to use.
CloudFormation does allow code to be inline in the template, but Serverless does not support this. And there is no way to ask CloudFormation to create a Lambda without code attached, for manual upload at a later date.
Frankly, the cost of the additional bucket and a few small files is minimal (if any). If the concern is the extra deployment bucket, you can specify one deployment bucket name and reuse it across multiple Serverless deployments.

AWS AMI Automation using Jenkins and CloudFormation

Currently, I'm creating an AWS AMI manually from an EC2 instance, and I would like to automate the process using a Jenkins build.
I've configured the jenkins-cloudformation plugin with the credentials and tried to trigger the CloudFormation template to launch the EC2 instance. From here, how can I continue the automation to create the AMI within the CloudFormation template?
Can someone help me with this?
This is an old question, but here is some info for anyone trying to do such automation. You might use HashiCorp Packer to create the image, but if you know your way around Lambdas and the AWS API, you do not need Packer.
You can create a new AMI by launching an instance from a source AMI, customizing it the way you want, and then calling the AWS API to make an AMI out of the instance. Here are the steps you might follow (a boto3 sketch follows the list):
First, you need to find a source image. You can filter with ec2 describe_images to do this.
Once you have the image, you need to launch an instance from it; boto3 provides the API to make that call.
While launching the instance, you will want to pass 'UserData' to it. Your user data may be a few simple lines that install packages, or it may do more advanced things. You can put it all into a script, host it in S3, and have the UserData download and execute your script.
Once you are done with your work on the instance, it is time to capture it as a new AMI.
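A rough boto3 sketch of the find-and-launch half of these steps (the filters, instance type, and setup-script location are placeholder choices):

```python
# Hypothetical script: find a source AMI and launch a builder instance from it.
import boto3

ec2 = boto3.client("ec2")

def find_source_ami() -> str:
    # Example filter: latest Amazon Linux 2 image owned by Amazon (placeholder criteria).
    images = ec2.describe_images(
        Owners=["amazon"],
        Filters=[
            {"Name": "name", "Values": ["amzn2-ami-hvm-*-x86_64-gp2"]},
            {"Name": "state", "Values": ["available"]},
        ],
    )["Images"]
    return max(images, key=lambda i: i["CreationDate"])["ImageId"]

def launch_builder(ami_id: str) -> str:
    user_data = """#!/bin/bash
# placeholder customization: download and run your setup script from S3
aws s3 cp s3://my-bucket/setup.sh /tmp/setup.sh && bash /tmp/setup.sh
"""
    resp = ec2.run_instances(
        ImageId=ami_id,
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=user_data,
    )
    return resp["Instances"][0]["InstanceId"]

instance_id = launch_builder(find_source_ami())
```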
So, how would you do these and where is the glue?
You can use AWS Lambda to manage these steps. One Lambda can find the source AMI and launch an instance from it. Another Lambda can capture the image.
Once your instance is customized, you would trigger the Lambda that captures it as an AMI. You might do that by directly invoking the Lambda. Depending on your re-usability requirements, you might want to trigger that Lambda from SNS or CloudWatch; in that case you would send a message to your SNS topic or enable/trigger your CloudWatch rule. (A sketch of such a capture Lambda is below.)
Your CloudFormation would install these Lambdas and any other components that trigger them (SNS and CloudWatch).
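The capture Lambda itself can be very small. A sketch, assuming the SNS message carries the builder instance id and a desired AMI name (those field names are just conventions you would define yourself):

```python
# Hypothetical "capture instance as AMI" Lambda, triggered from SNS.
import json
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # SNS event structure: the message body is whatever your publisher sent.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    instance_id = message["instance_id"]
    ami_name = message.get("ami_name", f"custom-{instance_id}")

    image_id = ec2.create_image(
        InstanceId=instance_id,
        Name=ami_name,
        NoReboot=False,   # reboot for a consistent filesystem snapshot
    )["ImageId"]

    # Wait until the AMI is available, then drop the builder instance.
    # For large images this wait may exceed the Lambda timeout; a second
    # trigger or a Step Functions wait state would be more robust there.
    ec2.get_waiter("image_available").wait(ImageIds=[image_id])
    ec2.terminate_instances(InstanceIds=[instance_id])
    return image_id
```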