Is there a need for private link when using lambda destinations - amazon-web-services

In a project I'm currently working on, I have two configurations where I need to pass information from one Lambda function to another, and Lambda destinations looks like the ideal solution. In the first, two Lambda functions are deployed inside the same subnet; I would like Lambda A to pass to Lambda B on success and to SQS on failure. The second configuration is the same, except that Lambda B does not live inside a VPC. When I try to have A pass to B in the first configuration (where both live in the VPC), Lambda A does not time out as it would if it were trying to reach SQS without a private link configured, but Lambda B never gets invoked.

Calling one Lambda function from another is not a best practice. Try AWS Step Functions.

Related

How to do A/B deployment of Lambdas that are connected via SQS?

So, I have an application that invokes a URL from API Gateway, which triggers a Lambda function that writes to an SQS queue; the queue is polled by another Lambda function, which sends the transformed message to an SNS topic (this is configured in the Lambda code base).
So if I want to do A/B deployment on those Lambda functions, it doesn't seem to work.
What I tried was creating another stage variable in API Gateway and creating a different Lambda function and SQS queue, but in the end I was not able to point the application to the new version without redeploying my code (what I want is to send traffic to newly deployed changes after testing them).
I also tried creating different versions of the Lambda function and shifting traffic with a weighted alias, but that way we can't test the new version without affecting the older one.
Is there any way we can do A/B on this?

Is it possible for a Lambda function to access an RDS DB by calling another RDS DB connector Lambda function?

I know a Lambda function can both call another Lambda function and access an RDS DB. That is the case where both features are contained in one function.
This time, I want to create Lambda A, which manages access to the RDS DB, and Lambda B, which calls Lambda A to access the DB (either by invoking the Lambda function directly or via an API URL call).
I feel this can be done, but I couldn't find any hints, and I can't see anyone who has done it this way.
Added
To be more specific: I want Lambda A to connect to my RDS DB using pymysql and return the pymysql.connect output. Lambda B then calls Lambda A, gets the pymysql.connect result, attaches a cursor, and executes the queries I want.
The problem at the moment is that when a Lambda is invoked, the output is JSON, and reading that JSON gives bytes (or a string, by various methods). Besides, the original data type of pymysql.connect is <class 'pymysql.connections.Connection'>, so it does not work.
Yes, this can be done. The same AWS Lambda programming model still applies, so you would have to design your own API / model for how Lambda functions A and B communicate with each other. Note, however, that a live database connection cannot be serialized into the JSON invocation payload, so Lambda A should run queries on the caller's behalf and return only the results, not the connection object itself.
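A minimal sketch of such a split, with hedges: the function name `lambda-a`, the credentials, and the query are placeholders, and pymysql/boto3 are assumed to be packaged with the respective functions.

```python
import json

def db_proxy_handler(event, context):
    """Hypothetical 'Lambda A': owns the DB connection, runs the caller's
    query, and returns only JSON-serializable results."""
    import pymysql  # assumed to be bundled in the deployment package
    conn = pymysql.connect(host="...", user="...", password="...", db="...")
    try:
        with conn.cursor() as cur:
            cur.execute(event["query"])
            rows = cur.fetchall()
    finally:
        conn.close()
    # Tuples become lists so the payload round-trips cleanly as JSON.
    return {"rows": [list(r) for r in rows]}

def caller_handler(event, context):
    """Hypothetical 'Lambda B': invokes A synchronously and reads the rows."""
    import boto3
    resp = boto3.client("lambda").invoke(
        FunctionName="lambda-a",  # placeholder function name
        Payload=json.dumps({"query": "SELECT id, name FROM users"}),
    )
    return json.loads(resp["Payload"].read())["rows"]
```

The key design choice is that the connection never leaves Lambda A; only query text goes in and plain rows come out.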

How can an S3 event trigger a Lambda Function in a VPC?

I have one query. I tried to Google it but could not find an answer specific to it.
S3 is a global service. We can access it via the internet or using the VPC endpoint from our private network. That I understand.
If lambda functions are present inside VPC. Then how does s3 event trigger lambda functions?
You have to differentiate between the Lambda Service, a Lambda Function, and an Execution Context.
The Lambda service operates the Lambda functions, and an Execution Context is an instance of a Lambda Function. Only the Execution Context is located in the VPC. The rest of the components reside outside of it. The Lambda service can always communicate with the Execution Contexts of any particular Lambda Function to pass events to it and monitor the execution. It does that through a private channel and not through the VPC.
S3 is also not really a global service. The buckets and APIs reside in specific regions. It has a global namespace, meaning that bucket names have to be globally unique. This means some APIs will do "global checks", but when S3 acts, it acts inside of a region.
Let's talk through what happens in the S3-Lambda integration. When an event happens in a bucket (e.g. an object is created), the S3 service checks which endpoints are interested in this event. If you want to send an event to a Lambda function, it has to be in the same region as the bucket. S3 will then contact the Lambda service and tell it to invoke the Lambda function with this specific event. S3 doesn't care about the results here.
This is where Lambda takes over. The service checks whether S3 is permitted to invoke the function in question. If so, it looks for an existing Execution Context for that function that isn't busy. Once it finds one, it sends the event to that Execution Context, which executes inside the VPC and can access resources in the VPC.
Assuming everything goes well, that's the end of it; otherwise, Lambda retries the event in another Execution Context.
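The two halves of that wiring (S3's permission to invoke, and the bucket's notification configuration) can be sketched with boto3; the bucket, function, and account values are parameters you would supply, and in practice this is usually managed by CloudFormation or Terraform rather than by hand:

```python
def configure_s3_trigger(bucket, function_name, function_arn, account_id):
    """Grant S3 permission to invoke the function, then register the
    bucket notification; bucket and function must be in the same region."""
    import boto3
    # Resource-based policy: lets the S3 service call Lambda's Invoke API.
    boto3.client("lambda").add_permission(
        FunctionName=function_name,
        StatementId="s3-invoke",
        Action="lambda:InvokeFunction",
        Principal="s3.amazonaws.com",
        SourceArn=f"arn:aws:s3:::{bucket}",
        SourceAccount=account_id,
    )
    # Tell S3 which events should be forwarded to the Lambda service.
    boto3.client("s3").put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [{
                "LambdaFunctionArn": function_arn,
                "Events": ["s3:ObjectCreated:*"],
            }]
        },
    )
```

Note that neither call touches the VPC: both talk to the S3 and Lambda service endpoints, which is consistent with the explanation above.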
References
AWS Docs: Using AWS Lambda with Amazon S3
AWS Docs: Asynchronous Lambda Invocation

Find out who invoked an AWS Lambda function

My AWS Lambda function is getting invoked from multiple places, like API Gateway, SNS, and CloudWatch Events. Is there any way to figure out who invoked the Lambda function? My Lambda function's logic depends on the invoker.
Another way to achieve this is to have three different Lambda functions, but I don't want to go that way if I can find the invoker information in a single Lambda function.
I would look at the event object, as each of the three services sends an event with a different structure.
For example, for CloudWatch Events I would check whether there is a source field in the event; for SNS I would check for Records, and for API Gateway, httpMethod.
But you can check for any other attribute that is unique to a given service. If you are not sure, just print example events from your function to the logs for all three services and check which attribute is best suited to look for.
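A sketch of that event-shape check; the field names match the standard event formats for these services, but verify them against your own logged events before relying on this:

```python
def detect_invoker(event):
    """Best-effort classification of a Lambda event's origin, based on
    fields unique to each service's event shape."""
    records = event.get("Records")
    if isinstance(records, list) and records:
        if records[0].get("EventSource") == "aws:sns":
            return "sns"
    if "httpMethod" in event:  # API Gateway REST proxy integration
        return "api-gateway"
    if event.get("source") == "aws.events":  # CloudWatch scheduled event
        return "cloudwatch-event"
    return "unknown"

def handler(event, context):
    invoker = detect_invoker(event)
    # ...branch the function's logic on `invoker` here...
    return {"invoker": invoker}
```

For example, `detect_invoker({"httpMethod": "GET", "path": "/"})` returns `"api-gateway"`.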

How can an AWS Lambda function throw SubnetIPAddressLimitReachedException?

In one of my projects, an AWS Lambda function (usually called every minute) invokes another AWS Lambda function inside its handler (using AWSLambdaClient lambdaClient;). Sometimes, on invoking the Lambda function, lambdaClient throws SubnetIPAddressLimitReachedException (not frequently, say 4 to 5 times every hour):
2016-11-24 14 <---------------------> INFO xyzHandler:395 - Lambda was not able to set up VPC access for the Lambda function because one or more configured subnets has no available IP addresses. (Service: AWSLambda; Status Code: 502; Error Code: SubnetIPAddressLimitReachedException; Request ID: XXXX)
I searched here and here, but I didn't find any clear explanation of this exception.
When your Lambda function is configured to execute inside your VPC, you specify one or more subnet IDs in which the Lambda function will execute.
The subnets that you specify need to have enough free IP addresses to accommodate all of the simultaneous executions of your Lambda function.
For example, if you choose one subnet defined as a /24, you have 256 addresses, of which AWS reserves 5 per subnet, leaving 251 usable.
If your Lambda functions are called 300 times simultaneously, they need 300 individual IP addresses, which your subnet cannot accommodate. In this case, you will get the SubnetIPAddressLimitReachedException error.
When Lambda functions complete, their resources are reused, so the IP addresses they held are freed up and/or reused by subsequent Lambda executions.
At the time this was asked, there was no way to limit the number of simultaneous executions within Lambda itself; I've seen people use other services, such as Kinesis, to limit it. (Lambda has since added per-function reserved concurrency for this.)
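To see how quickly a small subnet runs out, the usable capacity can be computed with the standard-library ipaddress module; the figure of 5 reserved addresses per subnet is AWS's documented reservation:

```python
import ipaddress

def usable_lambda_ips(cidr, aws_reserved=5):
    """Addresses available to Lambda ENIs in a subnet. AWS reserves 5
    addresses per subnet (network, VPC router, DNS, future use, broadcast)."""
    return ipaddress.ip_network(cidr).num_addresses - aws_reserved

print(usable_lambda_ips("10.0.1.0/24"))  # 251
print(usable_lambda_ips("10.0.0.0/28"))  # 11
```

So a /24 caps out well below 300 simultaneous VPC executions, matching the failure scenario above.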
There are 3 avenues of resolution:
If your Lambda function does not need to execute within your VPC, and/or access resources from within your VPC, move it out of the VPC.
Specify more or different subnet IDs with more available IP addresses.
Modify your Lambda function to not call other Lambda functions. The root Lambda function and the subsequently called Lambda functions will each require an IP address.
Accessing Resources in a VPC
You can set this up when you create a new function, or update an existing function so that it has VPC access. You can configure this feature from the Lambda console or from the CLI.
That’s all you need to do! Be sure to read Configuring a Lambda Function to Access Resources in an Amazon VPC in the Lambda documentation if you have any questions.
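As a sketch of the non-console route, attaching an existing function to a VPC can be done with boto3's update_function_configuration; the function name, subnet IDs, and security-group IDs below are placeholders you would replace with your own:

```python
def attach_function_to_vpc(function_name, subnet_ids, security_group_ids):
    """Give an existing Lambda function access to a VPC by setting its
    VpcConfig (subnets it may place ENIs in, and security groups)."""
    import boto3
    boto3.client("lambda").update_function_configuration(
        FunctionName=function_name,
        VpcConfig={
            "SubnetIds": subnet_ids,
            "SecurityGroupIds": security_group_ids,
        },
    )
```

For example: `attach_function_to_vpc("my-function", ["subnet-0abc1234"], ["sg-0def5678"])`.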
Resource link:
Access Resources in a VPC from Your Lambda Functions
Accessing the Internet and other AWS Resources in your VPC from AWS Lambda