What is the best way to invoke a lambda from another one?

I need your experience to determine the best tool for invoking an AWS Lambda from another one. Here are the tools I know of; for each one you have used, could you give me its pros, cons, and efficiency:
Invoke function
DynamoDB Stream
Kinesis Stream
SNS
SQS
S3 Bucket and Put Object
Any other proposal?
Thanks a lot for your help in determining the best strategy.
Note: I am using the Serverless Framework and NodeJS, in case that points to another compatible option.
In my case, I have no specific problem. I just want to benefit from your experience with these tools. I need both S3 (for PDF files) and DynamoDB (for storage). I would simply like to use one of the available tools to communicate between the different components (Lambdas) of my API. Maybe some of you think SNS is the best option; why? Others S3? And so on. This is not about my specific usage but about yours ;-) I think it is just difficult for a newcomer to determine the best-adapted choice. I would like to standardize the communication between my different services (modularity, a reproducible pattern) without being constrained by what each service actually does: a kind of universal Lambda communication tool.

You're thinking about this in the wrong way. You don't choose one mechanism over another like this. Instead, the mechanism is dictated by how events are being generated. Did an update happen to a DynamoDB table? Then a DynamoDB Streams event triggers a Lambda function. Did a file get uploaded to S3? Then an S3 object uploaded event triggers a Lambda function.
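For completeness, when one function genuinely needs to call another directly (the "Invoke function" option from the list above), a minimal sketch with the Node.js AWS SDK v2 looks like this; the target function name is a placeholder:

// Invoke another Lambda directly from a handler.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

exports.handler = async (event) => {
  const res = await lambda.invoke({
    FunctionName: 'my-other-function', // placeholder name
    InvocationType: 'Event',           // fire-and-forget; use 'RequestResponse' to wait for the result
    Payload: JSON.stringify({ data: event }),
  }).promise();
  return { statusCode: res.StatusCode };
};

Direct invocation couples the two functions tightly, which is exactly why an event source (S3, DynamoDB Streams, SNS, SQS) is usually the better choice when one naturally exists.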

Related

AWS Event Bridge schema types as lambda layer

We are building a notification service using AWS EventBridge, and different notifications will be sent depending on the event. We are looking for a good way to share the type bindings for the event schemas among different teams/Lambda functions. So we are considering including the types as a Lambda layer, but then the question is how to incorporate it into the local development workflow. Is anyone doing this sort of thing, or any advice? Thanks in advance.
A Lambda layer is simply a module/library in your application; this way you can use it in your local environment, include it in your Lambda like any other library, and share it easily with other Lambda functions in your account.
How you implement the layer depends on your runtime; see the per-runtime table in the documentation linked below.
Read more on this page: Creating and sharing Lambda layers
In your case, using a service like S3 or DynamoDB would be another easy way to manage these type bindings/schemas, but that depends on your data modeling/needs.
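As a concrete illustration, a minimal sketch of a Node.js layer carrying shared schema types could look like this; the module name event-schemas and the install path are assumptions, following the standard nodejs/node_modules layout that Lambda expects for Node.js layers:

layer/
  nodejs/
    node_modules/
      event-schemas/
        index.js

// In a Lambda function with the layer attached, the layer is extracted under
// /opt, and /opt/nodejs/node_modules is on the module resolution path, so the
// shared types resolve like any installed dependency:
const schemas = require('event-schemas');

For local development, the same directory can be installed as a file dependency (for example: npm install ../layer/nodejs/node_modules/event-schemas), so local code and the deployed Lambdas share one source of truth for the schemas.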

Track Roles/Identities which are deleting/updating/inserting Items in DynamoDB Table

I'm searching for a method to track the identities that are making modifications to my table, besides the application service itself. In the beginning I thought there could be two options, but:
CloudTrail - as far as I understood the documentation (Logging DynamoDB Operations by Using AWS CloudTrail), I'd only be able to track changes made to the infrastructure itself, but not the actual use of a table.
DynamoDB Streams - I'd guessed that the modifying identity is also passed in a stream event, but actually it's not. I'm using NEW_AND_OLD_IMAGES as the stream type.
Am I overlooking something, or is there another possibility somewhere else? The stream event does pass me an EventID. Is that of use somewhere?
Grateful for any tips on how to solve this, even if it's a completely different approach.
AWS CloudTrail now supports logging for DynamoDB actions!
AWS CloudTrail Adds Logging of Data Events for Amazon DynamoDB
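To sketch what enabling this looks like (the trail name and table ARN below are placeholders, using the Node.js AWS SDK v2): data-event logging is turned on with advanced event selectors, and afterwards each item-level call is logged together with the caller's userIdentity, which is exactly the information missing from the stream records:

// Enable CloudTrail data events for one DynamoDB table.
const AWS = require('aws-sdk');
const cloudtrail = new AWS.CloudTrail();

cloudtrail.putEventSelectors({
  TrailName: 'my-trail', // placeholder
  AdvancedEventSelectors: [{
    Name: 'DynamoDB data events',
    FieldSelectors: [
      { Field: 'eventCategory', Equals: ['Data'] },
      { Field: 'resources.type', Equals: ['AWS::DynamoDB::Table'] },
      { Field: 'resources.ARN', Equals: ['arn:aws:dynamodb:us-east-1:123456789012:table/MyTable'] }, // placeholder
    ],
  }],
}).promise().then(() => console.log('Data events enabled'));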

How to implement a Lambda trigger to fire once on a global dynamoDb table

I have a DynamoDB table that is set up as global (2019 version) between two regions.
I have a lambda function assigned as a trigger on the table.
When a record is inserted into, say, the east version of the table then the east version of the lambda is triggered. The record is then replicated to the west version of the table and the west version of the lambda is triggered.
I want one lambda triggered. But I also want both triggers to be enabled in case one region goes down.
How can I achieve this?
I would rather not make my trigger logic idempotent.
I don't know if this could be implemented without idempotency, unless you want to make it extremely brittle and complicated. It is difficult, if not impossible, to obtain exactly-once end-to-end delivery within a distributed system without some kind of idempotency filter.
Without knowing more about what you are doing: what about writing to a local Step Function first, instead of writing into DynamoDB directly? The data is processed in the Step Function and then written to DynamoDB, so by the time it hits DynamoDB it is already "processed" and replication happens with no problems. I am not sure this would work, as I do not know your full situation, but it might.
I think you can use a FIFO SQS queue for this. The Lambda that responds to your DynamoDB stream can write to the FIFO queue, and the queue's deduplication can drop the duplicate event coming from the replicated region.
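A minimal sketch of that idea (queue URL is a placeholder, AWS SDK v2 assumed): the stream Lambda in each region forwards records to the same FIFO queue with a deduplication ID derived from the item itself, so SQS drops the copy produced by global-table replication, provided both events arrive within the 5-minute deduplication window:

const crypto = require('crypto');
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

exports.handler = async (event) => {
  for (const record of event.Records) {
    const keys = JSON.stringify(record.dynamodb.Keys);
    const image = JSON.stringify(record.dynamodb.NewImage || {});
    // The ID must come out identical in both regions, so derive it from the
    // item data rather than the per-stream SequenceNumber. This assumes the
    // replicated image serializes identically in both regions.
    const dedupId = crypto.createHash('sha256').update(keys + image).digest('hex');
    await sqs.sendMessage({
      QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/items.fifo', // placeholder
      MessageGroupId: crypto.createHash('sha256').update(keys).digest('hex'),
      MessageDeduplicationId: dedupId,
      MessageBody: image,
    }).promise();
  }
};

Note that this is still deduplication; it just lives in SQS instead of in your own trigger logic.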

AWS Lambda environment

To reduce the cost of our instances, we have been looking at options.
AWS Lambda seems to be a good one for us.
We are still at the preliminary stage of searching for available alternatives.
My concern is that if we switch some of our applications to Lambda, we will be confined to AWS environments only, and in the future this might become a constraint in a scenario we can't predict at the moment.
So my question is: is there a way that we can still use Lambda in an environment that is not an AWS environment?
Thanks!
AWS Lambda functions are basically containers whose lifecycle is managed by Amazon.
When you use Lambda, there are several best practices you can follow to avoid full lock-in. One recommended practice is to separate the business logic from the Lambda handler. When you separate them, the handler only works as a controller that points to the executing code:
/handler.js
/lib
  /create-items
  /list-items
For example, if you design a web application API this way with NodeJS in Lambda, you can later move the business logic to an ExpressJS server by moving the handler code into ExpressJS routes, as in the sketch below.
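A minimal sketch of that separation (module names follow the layout above):

// handler.js - the Lambda handler is only a thin controller.
const createItem = require('./lib/create-items');

exports.handler = async (event) => {
  const result = await createItem(JSON.parse(event.body)); // framework-agnostic logic
  return { statusCode: 201, body: JSON.stringify(result) };
};

// Later, the same logic can back an ExpressJS route instead:
//   app.post('/items', async (req, res) => res.status(201).json(await createItem(req.body)));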
As you can see, moving an application from Lambda to another environment will still require additional effort; proper design can only reduce that effort.
As far as I know, since these are AWS Lambda functions, they are supposed to be deployed on AWS infrastructure only, because that is what provides the needed environment.
The AWS site lists a couple of options:
https://docs.aws.amazon.com/lambda/latest/dg/deploying-lambda-apps.html

What is the best way to work with environments in AWS API Gateway?

I am using AWS to build an API, and deploy this to multiple stages.
When a call is made to a specific environment, I need to read a stage variable in Lambda, and the data is then recorded in a DynamoDB table named something like "environment-Table".
Is this the best way to work with environments (like development, production etc) using AWS API Gateway, Lambda and DynamoDB?
It's difficult to say what the best approach is for your specific situation, given the limited information in your post. Managing multiple environments such as development and production is one of the intended uses of stages and stage variables. I don't see any obvious problems with what you are proposing.
Depending on your use case, you can call a Lambda function to record the data in DynamoDB, or you may be able to skip the Lambda function and record the data in DynamoDB directly by using API Gateway as an AWS service proxy (a direct service integration).
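A minimal sketch of the stage-variable approach with a Lambda proxy integration (the stage variable name env and the table-name pattern are assumptions based on your description):

const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  // API Gateway passes stage variables through on proxy-integration events,
  // e.g. env = "development" or "production".
  const env = event.stageVariables.env;
  await dynamo.put({
    TableName: `${env}-Table`,
    Item: JSON.parse(event.body),
  }).promise();
  return { statusCode: 200, body: JSON.stringify({ recorded: true }) };
};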