AWS X-Ray Sampling Graph not showing data - amazon-web-services

I have a Lambda function written in C# which is also accessible through API Gateway. I have enabled X-Ray tracing for both of them, i.e. Lambda and API Gateway. To get better traces I created a sampling rule in the AWS Console, as shown in the image below, but I am not getting any data in the graph for that sampling rule.
I have also tried to add sampling rules from code, as in the image below.
The JSON file is something like this
I would really appreciate it if you could guide me on getting trace data into the sampling rule graph.
A question I have in mind: if I create a new sampling rule, do I have to change the code as well? If yes, what changes are required in the Lambda (C#) code?

AWS Lambda adopts the default sampling rule, which is 1 request per second and 5 percent of additional requests, and the sampling rule used by Lambda can't be configured at the moment. https://docs.aws.amazon.com/lambda/latest/dg/services-xray.html
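For reference, the default behaviour the answer describes corresponds to this local sampling-rules document in the X-Ray SDK's version-2 format (the values shown are the documented defaults, not taken from the question's screenshots):

```json
{
  "version": 2,
  "rules": [],
  "default": {
    "fixed_target": 1,
    "rate": 0.05
  }
}
```

Here `fixed_target` is the reservoir (1 request per second) and `rate` is the fraction of additional requests (5%). Custom rules would go in the `rules` array, but as noted above, Lambda itself ignores them.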

Related

Can I trace every request using AWS X-Ray?

According to the docs,
the X-Ray SDK applies a sampling algorithm to determine which requests get traced. By default, the X-Ray SDK records the first request each second, and five percent of any additional requests.
Is it possible to trace all requests?
It is possible to set the sampling rate to 100%.
However, as noted in the FAQs:
X-Ray should not be used as an audit or compliance tool because it does not guarantee data completeness.
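As a sketch, a 100% rule could be created via the CLI with `aws xray create-sampling-rule --cli-input-json file://rule.json`; the rule name and priority below are placeholders:

```json
{
  "SamplingRule": {
    "RuleName": "trace-everything",
    "Priority": 1,
    "FixedRate": 1.0,
    "ReservoirSize": 0,
    "ServiceName": "*",
    "ServiceType": "*",
    "Host": "*",
    "HTTPMethod": "*",
    "URLPath": "*",
    "ResourceARN": "*",
    "Version": 1
  }
}
```

With `FixedRate` at 1.0, every request that matches the rule is sampled; even then, the FAQ caveat above still applies.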

How to auto-scale an AWS lambda using auto-scaling service

I am trying to add the lambda in the Auto-scaling target, and getting the error "No scalable resources found" while trying to fetch by tag.
Is it possible or allowed to add lambda to the auto-scaling target?
UPDATE:
I am trying to figure out how to change the provisioned concurrency during off-peak hours for the application, which would help save some cost, so I was exploring the auto-scaling option.
Lambda automatically scales out for incoming requests, if all existing execution contexts (lambda instances) are busy. There is basically nothing you need to do here, except maybe set the maximum allowed concurrency if you want to throttle.
As a result of that, there is no integration with AutoScaling, but you can still use an Application Load Balancer to trigger your Lambda Function if that's what you're after.
If you're building a purely serverless application, you might want to look into the API Gateway instead of the ALB integration.
Update
Since you've clarified what you want to use auto scaling for, namely changing the provisioned concurrency of the function, there are ways to build something like that. Clément Duveau has mentioned a solution in the comments that I can get behind.
You can create a Lambda function with two CloudWatch Events triggers using cron expressions: one for when you want to scale out and another for when you want to scale in.
Inside the Lambda function you can use the name of the rule that triggered the function to determine whether to scale out or scale in. You can then use the PutProvisionedConcurrencyConfig API call through one of the SDKs mentioned at the bottom of the documentation to adjust the provisioned concurrency as you see fit. (Note that PutFunctionConcurrency sets reserved concurrency, not provisioned concurrency.)
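A minimal sketch of that handler in Python; the function name, alias, concurrency values, and the "scale-out"/"scale-in" rule-naming convention are all assumptions:

```python
# Scheduled-scaling sketch: two CloudWatch Events rules (hypothetically named
# with "scale-out" and "scale-in") both trigger this one function.
FUNCTION_NAME = "my-function"  # placeholder: the function to scale
ALIAS = "live"                 # provisioned concurrency applies to a version/alias

def target_concurrency(rule_arn):
    """Map the name of the triggering rule to a concurrency value."""
    rule_name = rule_arn.split("/")[-1]
    return 50 if "scale-out" in rule_name else 5  # assumed peak/off-peak values

def handler(event, context):
    import boto3  # imported lazily so target_concurrency is testable offline
    boto3.client("lambda").put_provisioned_concurrency_config(
        FunctionName=FUNCTION_NAME,
        Qualifier=ALIAS,
        ProvisionedConcurrentExecutions=target_concurrency(event["resources"][0]),
    )
```

Scheduled-event payloads carry the triggering rule's ARN in `event["resources"]`, which is what the mapping above inspects.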
Update 2
spmdc has mentioned an interesting blog post that uses Application Auto Scaling to achieve this; I had missed that one. You might want to check it out, it looks promising.

What does `Active tracing` mean in Lambda with X-Ray?

I deployed a Lambda function with X-Ray enabled, and I am able to see all traces from my Lambda in the X-Ray console. But I can see a warning message in the screenshot below. It says active tracing requires permissions that are not configured for the Lambda, but I don't understand what active tracing means. I have read articles like this https://docs.aws.amazon.com/xray/latest/devguide/xray-services-lambda.html but they don't explain it very well.
So what does active tracing mean, and does it cost much?
I also had this warning under "Active tracing." If you click into Edit it gives a bit more explanation, saying it needs permission to send trace data.
You can find the documentation here, but the short version is that you'll want to add the AWSXRayDaemonWriteAccess policy to your lambda function's execution role.
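For example, with the AWS CLI (the role and function names are placeholders; this attaches the managed policy the answer mentions and switches the function to active tracing):

```shell
# Allow the function's execution role to upload trace data:
aws iam attach-role-policy \
  --role-name my-function-role \
  --policy-arn arn:aws:iam::aws:policy/AWSXRayDaemonWriteAccess

# Turn on active tracing for the function:
aws lambda update-function-configuration \
  --function-name my-function \
  --tracing-config Mode=Active
```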
The different levels of X-Ray integration with AWS services are explained here:
Active instrumentation – Samples and instruments incoming requests.
Passive instrumentation – Instruments requests that have been sampled by another service.
Request tracing – Adds a tracing header to all incoming requests and propagates it downstream.
Tooling – Runs the X-Ray daemon to receive segments from the X-Ray SDK.
AWS Lambda supports both active and passive instrumentation. So basically you use passive instrumentation if your function handles requests that have been sampled by some other service (e.g. API gateway). In contrast, if your function gets "raw" un-sampled requests, you should use active instrumentation, so that the sampling takes place.

Connect two different AWS Lambda layers?

Is there any way to connect two different AWS Lambda layers?
Usually, we could invoke one lambda function by another lambda function. Is that possible in the lambda layer as well?
Lambda layers are used for dependencies only and do not include application code that can be directly invoked. They provide the ability to create one set of dependencies and share it across Lambda functions, reducing the chance of dependency-versioning issues as well as reducing the overall amount of Lambda code storage used by your account in the region. Per this link, AWS provides 75GB of storage for Lambda layers and function code per region.
https://docs.aws.amazon.com/lambda/latest/dg/limits.html
You can attach more than one layer to a Lambda function; they are applied in order until all layers have been added. This can be done using the web console: there is a "Layers" button in the center of the console. Select it, then select a layer you have created and the version of the layer code.
To learn how to create a lambda layer for python, or see an example of lambda layers in use, please see these step by step video instructions: https://geektopia.tech/post.php?blogpost=Create_Lambda_Layer_Python
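As a rough sketch of the packaging step for a Python layer (the module and layer names are made up; the publish call is commented out because it needs AWS credentials):

```shell
# A Python layer zip must contain a top-level "python/" directory.
mkdir -p layer/python
cat > layer/python/mylib.py <<'EOF'
def greet():
    return "hello from the layer"
EOF

# Zip it so the archive contains python/mylib.py (stdlib zipfile, no zip needed):
python3 -m zipfile -c my-layer.zip layer/python

# Publish it (illustrative only; requires AWS credentials):
# aws lambda publish-layer-version --layer-name my-layer \
#   --zip-file fileb://my-layer.zip --compatible-runtimes python3.12
```

Functions using the layer can then simply `import mylib`, because layer contents under `python/` land on the runtime's import path.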

Trigger RDS lambda on CloudFront access

I'm serving static JS files from my S3 bucket through CloudFront, and I want to monitor whoever accesses them. I don't want it done through CloudWatch and such; I want to log it on my own.
For every request to the CloudFront I'd like to trigger a lambda function that inserts data about the request to my MySQL RDS instance.
However, CloudFront restricts Viewer Request and Viewer Response triggers too much: for example, a 1-second timeout (which is too little to connect to MySQL) and no VPC configuration for the Lambda (so I can't even reach the RDS subnet).
What is the optimal way to achieve this? Should I set up an API Gateway, and how would I send a request to it?
The typical method to process static content (or any content) accessed from CloudFront is to enable logging and then process the log files.
To intercept CloudFront edge events, which can include inspecting and changing a request or response, look into Lambda@Edge.
I would enable logging first and monitor the traffic for a while. When bad actors hit your web site (CloudFront distribution), they will generate massive traffic. This could result in some sizable bills using Lambda@Edge. I would also recommend looking into AWS WAF to help mitigate denial-of-service attacks, which may reduce the amount of Lambda processing.
This seems like a suboptimal strategy, since CloudFront suspends request/response processing while the trigger code is running -- the Lambda code in a Lambda@Edge trigger has to finish executing before processing of the request or response continues, hence the short timeouts.
CloudFront provides logs that are dropped multiple times per hour (depending on the traffic load) into a bucket you select, which you can capture from an S3 event notification, parse, and insert into your database.
However...
If you really need real-time capture, your best bet might be to create a second Lambda function, inside your VPC, that accepts the data structures provided to the Lambda@Edge trigger.
Then, inside the code for the viewer request or viewer response trigger, all you need to do is use the built-in AWS SDK to invoke your second Lambda function asynchronously, passing the event to it.
That way, the logging task is handed off, you don't wait for a response, and the CloudFront processing can continue.
I would suggest that if you really want to take this route, this will be the best alternative. One Lambda function can easily invoke a second one, even if the second function is not in the same account, region, or VPC, because the invocation is done by communicating with the Lambda service's endpoint API.
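A minimal sketch of that hand-off in Python; the logging function's name and region are placeholders, not from the answer:

```python
import json

LOGGER_FUNCTION = "access-logger"  # hypothetical 2nd function
LOGGER_REGION = "eu-west-1"        # assumed region where it lives

def build_payload(event):
    """Serialize the CloudFront trigger event for the logging function."""
    return json.dumps(event).encode("utf-8")

def hand_off(event):
    import boto3  # imported lazily so build_payload is testable without AWS
    boto3.client("lambda", region_name=LOGGER_REGION).invoke(
        FunctionName=LOGGER_FUNCTION,
        InvocationType="Event",  # asynchronous: returns without waiting
        Payload=build_payload(event),
    )
```

The key detail is `InvocationType="Event"`, which makes the invocation asynchronous so the edge trigger can return immediately.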
But, there's still room for some optimization, because you have to take another aspect of Lambda@Edge into account, and it's indirectly related to this:
no VPC configuration to the lambda
There's an important reason for this. Your Lambda@Edge trigger code is run in the region closest to the edge location that is handling traffic for each specific viewer. Your Lambda@Edge function is provisioned in us-east-1, but it's then replicated to all the regions, ready to run if CloudFront needs it.
So, when you are calling that 2nd Lambda function mentioned above, you'll actually be reaching out to the Lambda API in the 2nd function's region -- from whichever region is handling the Lambda@Edge trigger for this particular request.
This means the delay will be more, the further apart the two regions are.
Thus, your truly optimal solution (for performance purposes) is slightly more complex: instead of the L@E function invoking the 2nd Lambda function asynchronously by making a request to the Lambda API... you can create one SNS topic in each region, and subscribe the 2nd Lambda function to each of them. (SNS can invoke Lambda functions across regional boundaries.) Then, your Lambda@Edge trigger code simply publishes a message to the SNS topic in its own region, which will immediately return a response and asynchronously invoke the remote Lambda function (the 2nd function, which is in your VPC in one specific region). Within your Lambda@Edge code, the environment variable process.env.AWS_REGION gives you the region where you are currently running, so you can use this to identify how to send the message to the correct SNS topic, with minimal latency. (When testing, this is always us-east-1.)
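The answer implies Node.js (process.env.AWS_REGION); here is the same idea sketched in Python, with a hypothetical topic name and placeholder account id:

```python
import json
import os

TOPIC_NAME = "edge-access-log"  # hypothetical topic, created in every region
ACCOUNT_ID = "123456789012"     # placeholder account id

def topic_arn(region):
    """Build the ARN of the same-named SNS topic in the given region."""
    return f"arn:aws:sns:{region}:{ACCOUNT_ID}:{TOPIC_NAME}"

def publish_access_record(request):
    import boto3  # lazy import keeps topic_arn testable without AWS
    region = os.environ["AWS_REGION"]  # region running this edge replica
    boto3.client("sns", region_name=region).publish(
        TopicArn=topic_arn(region),
        Message=json.dumps({"uri": request["uri"], "ip": request["clientIp"]}),
    )
```

Because the topic name is identical in every region, the runtime's own region is all the trigger needs to build the right ARN.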
Yes, it's a bit convoluted, but it seems like the way to accomplish what you are trying to do without imposing substantial latency on request processing -- Lambda@Edge hands off the information as quickly as possible to another service that will assume responsibility for actually generating the log message in the database.
Lambda and relational databases pose a serious challenge around concurrency, connections and connection pooling. See this Lambda databases guide for more information.
I recommend using Lambda@Edge to talk to a service built for higher concurrency as the first step of recording access. For example, you could have your Lambda@Edge function write access records to SQS, and then have a background worker read from SQS into RDS.
Here's an example of Lambda@Edge interacting with STS to read some config. It could easily be refactored to write to SQS.
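A sketch of that SQS hand-off; the queue URL and the logged fields are assumptions:

```python
import json

# Placeholder queue URL; the queue would live in your own account.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/access-log"

def access_record(request):
    """Reduce a CloudFront viewer request to the fields worth logging."""
    return json.dumps({"uri": request["uri"], "clientIp": request["clientIp"]})

def enqueue(request):
    import boto3  # lazy import keeps access_record testable without AWS
    boto3.client("sqs").send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=access_record(request),
    )

# A separate worker (e.g. a Lambda with an SQS event source, inside the VPC)
# then drains the queue into RDS at its own pace, keeping the number of
# database connections bounded.
```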