I managed to deploy an AWS Lambda function using Travis; however, I also need to add a trigger to it (in my case it's Kinesis). Has anyone done this? If there is no out-of-the-box way to do it with Travis, I suppose I need to add a script using the AWS CLI? Has anyone done this who could share some advice, or point me to something I could use as a reference?
I primarily wanted to add the trigger with Travis, but Terraform makes it much simpler.
https://www.terraform.io/docs/providers/aws/r/lambda_event_source_mapping.html
So I can create my IAM roles, Kinesis stream, and the event source mapping between Kinesis and Lambda using Terraform (the lambda_event_source_mapping resource linked above).
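For anyone who would rather stay with a deploy script (e.g. a step Travis runs) instead of Terraform, the same mapping can be created through the AWS API. A minimal boto3 sketch, where the region, stream ARN, and function name are placeholders:

```python
import boto3

# Placeholder region/ARN/name -- replace with your own stream and function.
lambda_client = boto3.client("lambda", region_name="eu-west-1")

# Creates the Kinesis -> Lambda event source mapping (the same thing the
# Terraform lambda_event_source_mapping resource manages).
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:eu-west-1:123456789012:stream/my-stream",
    FunctionName="my-function",
    StartingPosition="LATEST",
    BatchSize=100,
)
```

The AWS CLI equivalent is `aws lambda create-event-source-mapping`; either way, the Lambda's execution role still needs permission to read from the stream.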
If you have a different approach that you believe is better, please don't hesitate to add it here. Thanks.
I am trying to understand the correct way to set up my project on AWS so that I can ultimately have CI/CD on the Lambda functions, and also to ingrain good practices.
My application is quite simple: an API that calls Lambda functions based on users' requests.
I have deployed the application using AWS SAM. For that, I used a SAM template that referenced local paths to the Lambda functions' code and created the necessary AWS resources (API Gateway and Lambda). Using local paths for the Lambda functions was necessary because SAM does not allow using existing S3 buckets for S3 event triggers (see here), and I deploy a Lambda function that watches the S3 bucket for updated code in order to trigger Lambda updates.
Now what I have to do is push my Lambda code to GitHub, and have the Lambda functions' code pushed from GitHub to the S3 bucket created during the SAM deploy, under the correct prefix. What I would like is a way to do that automatically upon a GitHub push.
What is the preferred way to achieve that? I could not find clear information in the AWS documentation. Also, if you see a clear flaw in my process, don't hesitate to point it out.
What you're looking for is a standard CI/CD pipeline.
The steps of your pipeline will be (more or less): Pull code from GitHub -> Build/Package -> Deploy
You want this pipeline to be triggered upon a push to GitHub; this can be done by setting up a webhook, which will then trigger the pipeline.
The last two steps are supported by SAM, which I think you have already implemented, so it will be a matter of triggering the same thing from the pipeline.
These capabilities are supported by most CI/CD tools. If you want to keep everything in AWS, you could use CodePipeline, which also supports GitHub integration. Jenkins is perfectly fine and suitable for your use case as well.
There are a lot of ways you can do this, so it ultimately depends on how you decide to do it and what tools you are comfortable with. If you want to use native AWS tools, then CodePipeline might be useful.
You can use the CDK for that:
https://aws.amazon.com/blogs/developer/cdk-pipelines-continuous-delivery-for-aws-cdk-applications/
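If you do go the CDK route, here is a minimal CDK Pipelines sketch in Python (the repo, branch, and build commands are placeholders; it assumes a GitHub token is available to CodePipeline via Secrets Manager):

```python
from aws_cdk import Stack, pipelines
from constructs import Construct

class PipelineStack(Stack):
    """Self-mutating pipeline that pulls from GitHub and redeploys on every push."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        pipelines.CodePipeline(
            self, "Pipeline",
            synth=pipelines.ShellStep(
                "Synth",
                # Placeholder repo/branch.
                input=pipelines.CodePipelineSource.git_hub("my-org/my-repo", "main"),
                commands=[
                    "pip install -r requirements.txt",
                    "npm install -g aws-cdk",
                    "cdk synth",
                ],
            ),
        )
```

You would then add stages that deploy your application stacks; the blog post linked above walks through that part.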
If you are not familiar with the CDK and would prefer CloudFormation, then this can get you started:
https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-github-gitclone.html
I'm trying to find out if it's possible to copy a snapshot from one account to another in a different region in one go, without an intermediate step (meaning sharing/copying to the other account and then copying from the new account to the other region), using a Lambda function and boto3.
I have searched the AWS documentation, but with no luck.
When you need such "complex" logic, it can be implemented with either CloudFormation or Terraform. The flow will be as the comments suggested: copy to another region and grant permission to the other account.
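If you end up scripting it inside the Lambda instead, a minimal boto3 sketch of that two-step flow, assuming an EBS snapshot (account IDs, regions, and snapshot IDs are placeholders; RDS snapshots have analogous modify_db_snapshot_attribute / copy_db_snapshot calls):

```python
import boto3

# --- In the source account/region: share the snapshot with the target account ---
src_ec2 = boto3.client("ec2", region_name="us-east-1")
src_ec2.modify_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",   # placeholder snapshot ID
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=["222222222222"],              # placeholder target account ID
)

# --- In the target account, using the destination region: copy the shared snapshot ---
dst_ec2 = boto3.client("ec2", region_name="eu-west-1")  # target-account credentials
dst_ec2.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",
    Description="Cross-account, cross-region copy",
)
```

There is no single API call that does both hops at once, and encrypted snapshots additionally require sharing the KMS key with the target account.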
This AWS blog speaks of a similar requirement with example CloudFormation templates here.
If you are unfamiliar with CloudFormation, you can get started with the docs, but it isn't something you can pick up in a hurry. It's just good practice to develop early on.
I would like to dynamically read an output parameter from a CloudFormation stack in a Lambda@Edge function (JavaScript).
Can this be achieved using the Amplify library or some other way? Are there any examples of how to do this?
I believe I need to give the function the cloudformation:DescribeStacks permission to do this.
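For reference, the underlying API is CloudFormation DescribeStacks (the JavaScript SDK exposes the same operation). A minimal boto3 sketch of reading one output value, with placeholder stack and output names:

```python
import boto3

def get_stack_output(stack_name: str, output_key: str, region: str = "us-east-1") -> str:
    """Return the value of a single CloudFormation stack output."""
    cfn = boto3.client("cloudformation", region_name=region)
    stack = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]
    for output in stack.get("Outputs", []):
        if output["OutputKey"] == output_key:
            return output["OutputValue"]
    raise KeyError(f"Output {output_key!r} not found on stack {stack_name!r}")

# Example usage (placeholder names):
# bucket_name = get_stack_output("my-edge-stack", "AssetsBucketName")
```

As suspected, the function's execution role needs the cloudformation:DescribeStacks permission for this call to succeed.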
I have a problem with my AWS RDS instance, which runs Postgres. I want to write a Lambda function that will validate the data for each new record, but I cannot find any way to create a trigger when a new record is inserted. Could somebody help me with this problem?
I guess the following links will work for you; I have never used a Lambda trigger on RDS myself, though.
https://aws.amazon.com/blogs/database/capturing-data-changes-in-amazon-aurora-using-aws-lambda/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.CommonDBATasks.CDC.html
I need your experience to determine the best way to invoke an AWS Lambda function from another one. Here are the options I know of; for the ones you know, please share their pros and cons and how efficient they are:
Invoke function
DynamoDB Stream
Kinesis Stream
SNS
SQS
S3 Bucket and Put Object
Any other proposal?
Thanks a lot for your help to determine the best strategy.
Note: I am using Serverless and Node.js, in case that opens up another compatible option.
In my case, I have no real problem; I just want to take advantage of your experience with these tools. I need both S3 for PDF files and DynamoDB for storage. I would simply like to use one of the available tools to communicate between the different components (Lambdas) of my API. Maybe some of you think SNS is the best option; why? Others S3? etc. This is not specific to my usage, but to yours, in fact ;-) I think it is just difficult for a newcomer to determine the best-suited choice. In my case, I would like to standardize the communication between my different services (for modularity and reproducibility) without any constraint on what each service actually does. A kind of universal Lambda communication tool.
You're thinking about this in the wrong way. You don't choose one mechanism over another like this. Instead, the mechanism is dictated by how events are being generated. Did an update happen to a DynamoDB table? Then a DynamoDB Streams event triggers a Lambda function. Did a file get uploaded to S3? Then an S3 object uploaded event triggers a Lambda function.
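That said, if one Lambda genuinely needs to call another directly (the "Invoke function" option in your list), here is a minimal boto3 sketch with a placeholder function name and payload; the Node.js SDK's Lambda invoke call is the equivalent:

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Fire-and-forget: "Event" queues the invocation asynchronously.
# Use InvocationType="RequestResponse" if you need the callee's return value.
response = lambda_client.invoke(
    FunctionName="my-other-function",   # placeholder name
    InvocationType="Event",
    Payload=json.dumps({"documentId": "1234"}).encode("utf-8"),
)
print(response["StatusCode"])  # 202 for asynchronous invocations
```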