AWS CloudFormation does not recreate my application

I followed the tutorial at http://docs.aws.amazon.com/lambda/latest/dg/automating-deployment.html
The tutorial demonstrates how to automatically deploy a Lambda function and an API Gateway using AWS CloudFormation.
After some time I was able to complete the tutorial successfully. This means that when I push a commit to the GitHub repository linked to the AWS CodePipeline, the changed code is uploaded/packaged to AWS, built, and deployed (i.e. I can see the code change).
My problem is that I deleted the Lambda function and then invoked the CodePipeline by pushing a git commit. This triggered the pipeline, and I could watch the source, build, and staging steps complete successfully. However, I cannot find the Lambda function. I thought CloudFormation would recreate the application? Can you help?

If you deleted the function manually then you're most likely running into this issue:
Resources that are created as part of an AWS CloudFormation stack must be managed from the same stack. Modifications to a resource must be done by a stack update. If a resource is deleted, a stack update is also necessary to remove the resource from the template. If a resource has been accidentally or purposely manually deleted, you can encounter errors when attempting to perform a stack update.
https://aws.amazon.com/premiumsupport/knowledge-center/failing-stack-updates-deleted/
You can resolve this by manually recreating the resource with the same name, then allowing CloudFormation to manage the resource in future.

The reason I did not see any Lambda function was that my pipeline only created the change set (the "create or update change set" action) and was missing the deploy action that actually applies it, "execute change set".
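For reference, a deploy stage that includes both actions might look roughly like this inside the pipeline's CloudFormation template. This is only a sketch: the stack name, change set name, artifact name, template file, and role are placeholders, not the tutorial's actual values.

    # Sketch of a CodePipeline deploy stage (an entry under the pipeline's Stages).
    # StackName, ChangeSetName, BuildArtifact, outputtemplate.yml and the role
    # are hypothetical names.
    - Name: Staging
      Actions:
        - Name: CreateChangeSet
          ActionTypeId:
            Category: Deploy
            Owner: AWS
            Provider: CloudFormation
            Version: '1'
          Configuration:
            ActionMode: CHANGE_SET_REPLACE      # creates or updates the change set
            StackName: my-lambda-stack
            ChangeSetName: my-lambda-changeset
            TemplatePath: BuildArtifact::outputtemplate.yml
            Capabilities: CAPABILITY_IAM
            RoleArn: !GetAtt CloudFormationRole.Arn
          InputArtifacts:
            - Name: BuildArtifact
          RunOrder: 1
        - Name: ExecuteChangeSet
          ActionTypeId:
            Category: Deploy
            Owner: AWS
            Provider: CloudFormation
            Version: '1'
          Configuration:
            ActionMode: CHANGE_SET_EXECUTE      # the step that actually deploys
            StackName: my-lambda-stack
            ChangeSetName: my-lambda-changeset
          RunOrder: 2

Without the second action the pipeline stops after creating the change set, which is why the stack never produced the Lambda function.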

Related

Can we do something like Terraform Plan in Serverless?

When I change something in my Lambda repo and redeploy it with the Serverless Framework, it applies the changes. I want to know what changes are going to happen before deploying the Lambda.
I tried the serverless changeset plugin, but it doesn't show a comparison between my current Lambda configuration and the changes that deploying would make after I modify my Lambda repo (e.g. the Lambda name, tags, etc.).
You can enable change sets with deploymentMethod: changesets so that serverless deploy doesn't actually execute the changes, but instead creates a change set inside CloudFormation, which you can inspect in the console and then execute from there.
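For reference, that setting goes under provider in serverless.yml. A minimal sketch follows; the service, runtime, and function names are made up, and the exact behaviour of deploymentMethod depends on your Serverless Framework version, so check its docs.

    # serverless.yml sketch -- service and function names are hypothetical
    service: my-service

    provider:
      name: aws
      runtime: nodejs18.x
      deploymentMethod: changesets   # deploy via CloudFormation change sets
                                     # instead of direct stack updates

    functions:
      hello:
        handler: handler.hello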

How does Terraform handle updating/deleting resources?

I am switching to GitLab and plan to use Terraform. I have used CloudFormation before and understand deploying a stack to AWS, creating change sets, and updating resources. How does updating/deleting work in Terraform?
It's similar to CFN. TF has a state file (local or remote) where it stores information about your currently deployed resources and their configuration.
After any changes to your TF config files, TF creates a plan of how to apply your changes in relation to what it has in the state. The plan is similar to a change set in CFN: it shows which resources have to be deleted, replaced, created, or modified.
Just like with a change set, you have the option to review the plan, and if you agree with the proposed actions, you can apply it.
The biggest difference is what happens on failure. CloudFormation will roll back the stack to the previous state, whereas Terraform will leave the resources in a partially deployed state.

Setting up CodePipeline with Terraform

I am new to Terraform and building a CI setup. When I want to create a CodePipeline that is going to be connected to a GitHub repo, do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me? Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me?
Yes, you use the aws_codepipeline resource, which will create a new pipeline in AWS.
Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
You can also import existing resources into Terraform with terraform import.
I see you submitted this eight months ago, so I am pretty sure you have your answer, but for those who come across this question later, here are my thoughts on it.
As most of you have researched, Terraform is infrastructure as code (IaC). As IaC, it needs to be executed somewhere: either locally or inside a pipeline. A pipeline typically consists of Docker containers that emulate a local environment and run the commands that deploy your code. There is more to it than that, but the premise of how Terraform runs remains the same.
So, to the main question: Terraform is code, and if you intend to use a pipeline (Jenkins, AWS, GitLab, and more), then you need a code repository to put all your code into, in this case a repository where you can store your Terraform code so the pipeline can consume it when deploying. There are other reasons to use a code repository, but your question is about Terraform and its usage with a pipeline.
Now for the chicken-or-egg argument: when to create your pipeline and how to do it. To your original question, you could do both. You could store all your Terraform code in a repository (which I recommend), clone it down, and run Terraform locally to create your pipeline. This is ideal for saving time and leveraging automation. Newcomers will have to research Terraform state files, which are something you need to back up in some shape or form once the pipeline has been deployed.
If you are not so comfortable with Terraform, the GUI in AWS is also fine, and you can easily configure it to hook your pipeline into GitHub to run jobs.
In both scenarios, you must set up Terraform and AWS credentials either locally on your machine or within the pipeline to deploy your code. This article is pretty good and will give you a basic understanding of setting up Terraform.
Don't forget to configure AWS credentials on your local machine. For newcomers using a pipeline, you can leverage some of the pipeline links to get you started. Remember one thing: within AWS CodePipeline, you have to use IAM roles and not access keys. That will make more sense once you have gone through the first link. Please also go to YouTube and search for "Terraform for beginners in AWS"; various videos can provide a lot more substance to help you get started.

Serverless keeps trying to create user pool domain when it already exists

I have an AWS Cognito user pool configured in my serverless.yml. Whenever I do a serverless deploy, it tries to create the same user pool domain even though it already exists, hence returning the error:
[aws-cognito-idp-userpool] domain already exist
The only workaround is for me to delete the user pool domain in the AWS console every time I want to do a serverless deploy. Has anyone faced this issue before?
I believe there's no way to skip it. Check this: https://github.com/serverless/serverless/issues/3183
You can try to break the serverless.yaml file into multiple files and deploy them separately for easier management, so that each file only creates/deploys the resources you need to create fresh.
The serverless.yaml gets converted into the vendor-specific infrastructure-as-code template, e.g. CloudFormation for AWS.
Hope this helps.
This is actually a CloudFormation issue rather than a Serverless issue. I ran into it in my Serverless app, but I had my UserPool* resources independently defined in the resources section of the serverless.yml file. I changed the domain prefix, and that requires the resource to be recreated. Here's the issue: on a replacement, CloudFormation creates the new resource before deleting the old one, which blocks the new domain from being associated with the user pool.
I've seen this behavior with other resources, and the recommended workaround is to:
1. Blank out the resource in the template
2. Update the stack (deletes the resource)
3. Restore the resource in the template
4. Update the stack (creates a new resource rather than replacing one).
This way you still leverage your automation tools without going to the console. It's not perfect, and it would be preferable if there were a way to force this replacement sequence in CloudFormation. If your setup has Serverless generating the resource, then deleting it via the console may be your only option.
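For illustration, the kind of resources section being described might look roughly like the sketch below; the names and domain prefix are made up. Step 1 above then amounts to commenting out or removing the UserPoolDomain block, deploying, restoring it with the new prefix, and deploying again.

    # Hypothetical resources section in serverless.yml -- blank out the
    # UserPoolDomain block for steps 1-2, restore it with the new prefix
    # for steps 3-4.
    resources:
      Resources:
        UserPool:
          Type: AWS::Cognito::UserPool
          Properties:
            UserPoolName: my-app-users
        UserPoolDomain:
          Type: AWS::Cognito::UserPoolDomain
          Properties:
            Domain: my-new-prefix        # changing this forces a replacement
            UserPoolId:
              Ref: UserPool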

Pipeline replaces previously deployed lambda when deploying new lambda

I'm referencing this AWS tutorial to deploy our Lambdas cross-account.
I'm able to get the Lambdas to deploy successfully, but I notice that if I deploy another Lambda (lambda_b), re-using the same pipeline but for a different Lambda, this new Lambda (lambda_b) replaces the Lambda that was deployed earlier (say lambda_a), so that at any time I only have a single Lambda in the AWS console.
Could this replacement be happening because of how I'm creating the change set?
I just don't know how to proceed, or where to look, to understand why it doesn't deploy lambda_b without replacing lambda_a, even though we're re-using the same pipeline for all Lambdas.
To deploy lambda_a, I went through all steps, 1-6, of the tutorial linked above.
However, to deploy lambda_b, I only reran steps 4 and 5; is that maybe why? When I try rerunning from the beginning again, it doesn't see the change set for step 1.
In the CodePipeline CloudFormation YAML file, is there a way to set a retain: true attribute, or something similar, so that I can keep all the Lambdas we've deployed so far? Right now I can only see the Lambda that was deployed last, since a new Lambda deployment (lambda_b) always replaces the old Lambda deployment (lambda_a).
I want the console to show both lambda_a and lambda_b.
Seeing as you're using CloudFormation to deploy the Lambda function, when a resource (lambda_a) is removed from the template, it will be deleted as part of the CloudFormation cleanup step.
You need to keep both functions in the template you're deploying in order to have lambda_a and lambda_b deployed at the same time.
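Concretely, the template the pipeline deploys would define one resource per function, along the lines of this sketch. It is SAM-style, with made-up logical names, handlers, runtimes, and code paths, and assumes a single stack holds both functions.

    # Hypothetical SAM template defining both functions in one stack, so
    # neither is removed (and therefore deleted) on the next deployment.
    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
    Resources:
      LambdaA:
        Type: AWS::Serverless::Function
        Properties:
          Handler: lambda_a.handler
          Runtime: python3.9
          CodeUri: ./lambda_a
      LambdaB:
        Type: AWS::Serverless::Function
        Properties:
          Handler: lambda_b.handler
          Runtime: python3.9
          CodeUri: ./lambda_b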