So, this is my first go at AWS Lambda containers (I need to deploy a "big-ish" Lambda). We use the Serverless Framework for all our other Lambdas, so I tried it for the container as well.
It all went fine and the Lambda was created with all the expected parameters following this blog/guide: https://www.serverless.com/blog/container-support-for-lambda
Of course I had messed up the code and forgotten a module, so the Lambda didn't run in AWS.
I added the module and did a re-deploy (sls deploy) from my laptop, but it reports everything as "success" in about half a second, so it is clearly not deploying anything (I am using the --force flag as well, but that makes no difference).
The only way to get it updated seems to be to alter some code and save it, so that Serverless "detects" a change and redeploys (for real).
This will cause a problem in our DevOps deploy pipeline, so is there any way to force a redeploy through a parameter or command?
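For what it's worth, a hedged idea rather than a confirmed fix: with container images, the Framework only pushes (and updates the function) when the locally built image's digest differs from what was last deployed, so one way to force a "real" redeploy from a pipeline is to make each build produce a new digest. This assumes your Framework version supports buildArgs for ECR images; CACHE_BUST and BUILD_ID are placeholder names:

```yaml
# serverless.yml (sketch, not a confirmed fix)
provider:
  name: aws
  ecr:
    images:
      appimage:
        path: ./
        buildArgs:
          # Hypothetical cache-buster: pass a value that changes on every
          # CI run so the built image (and its digest) always changes.
          CACHE_BUST: ${env:BUILD_ID}
```

The Dockerfile would need a matching ARG CACHE_BUST that is consumed in some layer (even a trivial RUN echo $CACHE_BUST) so the built image actually changes.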
Related
When I change something in my Lambda repo and redeploy the Lambda with the Serverless Framework, it makes the changes. I want to know what changes are going to happen prior to deploying the Lambda.
I tried the serverless changeset plugin, but it doesn't show a comparison between my current Lambda configuration and the changes that deploying would apply after I've modified something in my Lambda repo (e.g. the Lambda name, tags, etc.).
You can enable changesets with deploymentMethod: changesets so that serverless deploy doesn't actually execute the changes, but instead creates a changeset inside CloudFormation, which you can inspect in the console and then initiate from there.
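If I read that right, the option goes under provider in serverless.yml; a minimal sketch, assuming a Framework version that supports the setting:

```yaml
# serverless.yml
provider:
  name: aws
  deploymentMethod: changesets  # create a CloudFormation changeset to review
```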
I already have my whole AWS infrastructure set up in Terraform, and everything works fine. Now, instead of deploying it from my local machine by running terraform apply, I want to deploy my infrastructure with an AWS Lambda script, completely serverless. Does anyone know how to do this, or where to read about this concept? I haven't found anything useful on the internet so far.
I think my source code could live in an S3 bucket; the Lambda function would grab it and run Terraform on it, with Terraform itself also set up inside the function, which I guess is feasible since Terraform is such a small program.
I would attempt that as follows:
Create a Lambda container image that includes the official Terraform binary. The actual Lambda function code would use, say, Python's python-terraform package to interact with the binary, or invoke the binary directly with subprocess.run (see the sketch below).
Set up a Lambda execution role with all the permissions needed to create your resources.
Create a Lambda function from the container image.
I haven't tried this personally yet, but I think it should work.
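A minimal sketch of what step 1's function code might look like via the subprocess.run route; the file layout and flags are my assumptions, not something from a tested setup:

```python
# handler.py - hypothetical sketch of a Lambda driving a bundled
# Terraform binary with subprocess.run.
import os
import subprocess

# Assumed image layout (my invention, not from the answer above):
#   /opt/terraform   - the official Terraform binary
#   /var/task/infra  - the .tf configuration files baked into the image
TF_BIN = "/opt/terraform"
TF_DIR = "/var/task/infra"

def tf(*args):
    """Run one terraform subcommand and raise if it fails."""
    result = subprocess.run(
        [TF_BIN, f"-chdir={TF_DIR}", *args],
        capture_output=True,
        text=True,
        env={**os.environ,
             "TF_IN_AUTOMATION": "1",
             # Only /tmp is writable in Lambda, so keep Terraform's
             # working data (plugins, lock info) there.
             "TF_DATA_DIR": "/tmp/.terraform"},
    )
    print(result.stdout)
    print(result.stderr)
    result.check_returncode()

def handler(event, context):
    tf("init", "-input=false")
    tf("apply", "-auto-approve", "-input=false")
    return {"status": "applied"}
```

Note Lambda's 15-minute execution cap and read-only filesystem outside /tmp (hence the TF_DATA_DIR override); a remote state backend is also essential, since the container filesystem is thrown away between invocations.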
I am new to Terraform and am building a CI setup. When I want to create a CodePipeline connected to a GitHub repo, do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me? Or would I set this CodePipeline up manually in the AWS console and hook it up to Terraform after the fact?
do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me?
Yes, you use the aws_codepipeline resource, which will create a new pipeline in AWS.
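A trimmed sketch of that resource, assuming the IAM role, artifact bucket, CodeStar connection, and CodeBuild project it references are declared elsewhere in your configuration; all names are placeholders:

```hcl
resource "aws_codepipeline" "ci" {
  name     = "example-pipeline"
  role_arn = aws_iam_role.codepipeline.arn

  artifact_store {
    location = aws_s3_bucket.artifacts.bucket
    type     = "S3"
  }

  stage {
    name = "Source"
    action {
      name             = "GitHub"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeStarSourceConnection"
      version          = "1"
      output_artifacts = ["source_output"]
      configuration = {
        ConnectionArn    = aws_codestarconnections_connection.github.arn
        FullRepositoryId = "my-org/my-repo"   # placeholder repo
        BranchName       = "main"
      }
    }
  }

  stage {
    name = "Build"
    action {
      name            = "Build"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["source_output"]
      configuration = {
        ProjectName = aws_codebuild_project.build.name
      }
    }
  }
}
```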
Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
You can also import existing resources into Terraform.
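For example (resource address and pipeline name are placeholders), importing a console-created pipeline into state by its name:

```sh
# "aws_codepipeline.ci" must match a resource block in your config;
# "example-pipeline" is the pipeline's name in AWS.
terraform import aws_codepipeline.ci example-pipeline
```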
I see you submitted this eight months ago, so I am pretty sure you have your answer, but for those who come across this question later, here are my thoughts on it.
As most of you will have researched, Terraform is infrastructure as code (IaC). As IaC, it needs to be executed somewhere, which means you either run it locally or inside a pipeline. A pipeline consists of Docker containers that emulate a local environment and run commands for you to deploy your code. There is more to it than that, but the premise of how Terraform runs remains the same.
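To make that concrete, here is a sketch of such a pipeline job in GitLab CI (one of the options named below); the job name, image tag, and stage are my placeholders, and credentials are assumed to come from CI variables or an assumed role:

```yaml
# .gitlab-ci.yml (sketch): a container-based job that runs Terraform,
# just like you would locally.
stages:
  - deploy

terraform-apply:
  stage: deploy
  image:
    name: hashicorp/terraform:1.7
    entrypoint: [""]   # the image's default entrypoint is terraform itself
  script:
    - terraform init -input=false
    - terraform plan -input=false -out=tfplan
    - terraform apply -input=false tfplan
```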
So to the magic question: Terraform is code, and if you intend to use a pipeline (Jenkins, AWS, GitLab, and more), then you need a code repository to put all your code into. In this case, a repository where you can store your Terraform code so a pipeline can consume it when deploying. There are other reasons why you should use a code repository, but your question is about Terraform and its usage with the pipeline.
Now the magnificent chicken-or-egg argument: when to create your pipeline and how to do it. To your original question, you could do both. You could store all your Terraform code in a repository (which I recommend), clone it down, and locally run Terraform to create your pipeline. This would be ideal for saving time and leveraging automation. Newbies: you will have to research Terraform state files, which are something you need to back up in some form or shape once the pipeline is deployed for you.
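On the state-file point, the usual approach is a remote backend rather than manual backups, so the pipeline and your laptop share one state. A minimal sketch with placeholder bucket and table names:

```hcl
# Remote state in S3, with DynamoDB locking, instead of a local tfstate.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # placeholder bucket name
    key            = "pipeline/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"         # placeholder lock table
  }
}
```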
If you are not so comfortable with Terraform, the GUI in AWS is also fine, and you can easily configure it to hook your pipeline into GitHub and run jobs.
In both scenarios, you must set up Terraform and AWS locally on your machine or within the pipeline to deploy your code. This article is pretty good and will give you a basic understanding of setting up Terraform.
Don't forget to configure AWS on your local machine. For you newbies using a pipeline, you can leverage some of the pipeline links to get started. Remember one thing: within AWS CodePipeline, you have to use IAM roles and not access keys. That will make more sense once you have gone through the first link. Please also go to YouTube and search "Terraform for beginners in AWS"; various videos can provide a lot more substance to help you get started.
I'm referencing this AWS tutorial to deploy our Lambdas cross-account.
I'm able to get the Lambdas to deploy successfully, but I notice that if I go deploy another Lambda (lambda_b), re-using the same pipeline but for a different Lambda, this different Lambda (lambda_b) will replace the Lambda deployed earlier (say lambda_a), so that at any time I only have a single Lambda in the AWS console.
Could this replacement be happening because of how I'm creating the changeset?
I just don't know how to proceed, or where to look to understand why it doesn't deploy lambda_b without replacing lambda_a, even though we're re-using the same pipeline for all Lambdas.
To deploy lambda_a, I had to go through all the steps, 1-6, of the tutorial linked above.
However, to deploy lambda_b, I only reran steps 4 and 5 of the above; is that maybe why? When I try rerunning from the beginning again, it doesn't see the changeset for step 1.
In the CodePipeline CloudFormation YAML file, is there a way to set a retain: true attribute, or some other way to show all the Lambdas we've deployed so far? Right now, I'm only able to show the Lambda that was deployed last, since a new Lambda deployment (lambda_b) always replaces the old Lambda deployment (lambda_a).
I want the console to show both lambda_a and lambda_b.
Seeing as you're using CloudFormation to deploy the Lambda function: when a resource (lambda_a) is removed from the template, it will be deleted as part of the CloudFormation clean-up step.
You need to retain both functions in the template you're deploying to have both lambda_a and lambda_b deployed at the same time.
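A hedged sketch of what that means in practice: both functions declared side by side in the single template the pipeline deploys. Runtime, handlers, role, and artifact locations below are placeholders, not values from the tutorial:

```yaml
Resources:
  LambdaA:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: lambda_a
      Runtime: python3.12              # placeholder runtime
      Handler: lambda_a.handler
      Role: !GetAtt LambdaExecutionRole.Arn   # assumed to exist in the template
      Code:
        S3Bucket: my-artifacts-bucket   # placeholder bucket
        S3Key: lambda_a.zip
  LambdaB:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: lambda_b
      Runtime: python3.12
      Handler: lambda_b.handler
      Role: !GetAtt LambdaExecutionRole.Arn
      Code:
        S3Bucket: my-artifacts-bucket
        S3Key: lambda_b.zip
```

As long as both resources stay in the template, each deployment updates them rather than removing one to make room for the other.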
I am working to extend this solution, https://github.com/adieuadieu/serverless-chrome, to my needs.
I am using Serverless (on my laptop with Debian 9) to deploy it to AWS Lambda. I would like to use aws-sam-local, https://github.com/awslabs/aws-sam-local, to run it locally for development.
I would like to use aws-sam-local because I believe there is a difference between running this solution via serverless webpack serve --function run and via sam local start-api. The difference, I think, is the event object, which I want to contain POST or binary data (multipart file transfers). For that I have to allow binary transfer via API Gateway.
But correct me if I am wrong, because I am totally green in the AWS and Serverless field and this is my first time with these technologies.
The problem is that aws-sam-local needs a CloudFormation template to know how to run the serverless-chrome project. If I deploy to AWS and go to the CloudFormation console, I can copy that template after selecting it in the "Stacks" table and clicking the "Template" tab. Then I use cfn-flip to convert the JSON into YAML. In the end I get template.yml, but running sam local start-api gives me this error:
2017/10/06 11:03:23 Connected to Docker 1.32
ERROR: No Serverless functions were found in your SAM template.
Please tell me what to do to make serverless-chrome run locally as it would run on AWS Lambda.
The templates Serverless uses to deploy are available in two places:
remotely, in the S3 deployment bucket
locally, in .serverless/
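As for the error itself, a likely explanation: sam local (at least at that time) only recognizes resources of type AWS::Serverless::Function, while the template the Serverless Framework generates declares plain AWS::Lambda::Function resources, so a stack template copied from the console comes up empty. A minimal hand-written template.yml that sam local should accept might look like this (handler path, runtime, and route are my assumptions):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  RunFunction:
    # Declared with the SAM resource type so sam local can find it.
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/run.default   # placeholder handler path
      Runtime: nodejs6.10
      Timeout: 30
      Events:
        RunApi:
          Type: Api
          Properties:
            Path: /run
            Method: post
```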