Disabling AWS X-Ray in FastAPI Lambda application

We have a Lambda application written in FastAPI, with X-Ray enabled for it. We have since decided to move to Datadog, so we no longer need X-Ray, but we're finding it hard to actually stop X-Ray from recording traces.
These are the approaches we've tried:
- global_sdk_config.set_sdk_enabled(False) (disabling through the SDKConfig module)
- Setting the AWS_XRAY_SDK_ENABLED environment variable to false (the same variable the source code checks)
- Not calling the functions that start/end segments at all
But none of them worked. We are still getting traces in the AWS X-Ray console.
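For reference, a sketch of the two SDK-side routes above. The environment variable has to be exported before aws_xray_sdk is first imported, since the SDK reads it at import time, and the `global_sdk_config` route requires the aws-xray-sdk package to be installed:

```python
import os

# Route 1: environment variable. Must be set before aws_xray_sdk is
# imported anywhere in the process, because the SDK reads it at import time.
os.environ["AWS_XRAY_SDK_ENABLED"] = "false"

def disable_xray_via_sdk_config():
    # Route 2: the SDKConfig module (requires the aws-xray-sdk package).
    from aws_xray_sdk import global_sdk_config
    global_sdk_config.set_sdk_enabled(False)
```

Note that neither route touches API Gateway's own tracing setting, which is configured on the stage rather than in the function's code.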
We also found an open issue on the aws-xray-sdk-python library suggesting that it may not currently be possible to disable X-Ray for async code at all. And we use async extensively, since we're on FastAPI.
We wanted to understand from the community whether turning off X-Ray is indeed impossible for async flows (the GitHub issue is still open). We have considered reaching out to AWS technical support, but wanted to ask the Stack Overflow community first.

I figured this out.
X-Ray tracing is configured separately for API Gateway as well. I had to turn that off, along with tracing for the functions, in order to completely remove X-Ray from our architecture.
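For the API Gateway side, a sketch of flipping a REST API stage's tracing flag with boto3 (the API id and stage name are placeholders; for REST APIs the setting lives on the stage as tracingEnabled):

```python
def tracing_patch(enabled: bool):
    # Patch operation that flips a REST API stage's tracingEnabled flag.
    return [{
        "op": "replace",
        "path": "/tracingEnabled",
        "value": "true" if enabled else "false",
    }]

def disable_stage_xray(rest_api_id: str, stage_name: str):
    # boto3 is imported here so the pure helper above has no AWS dependency.
    import boto3
    client = boto3.client("apigateway")
    client.update_stage(
        restApiId=rest_api_id,
        stageName=stage_name,
        patchOperations=tracing_patch(False),
    )
```

The same flag can also be unticked in the console on the stage's settings page.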

Related

Can I write API code that runs on serverless (AWS Lambda) and have the same code run on EC2?

I am looking for a language/framework or a method by which I can build API / web application code so that it can run on serverless compute like AWS Lambda, and the same code can run on a dedicated compute service like Lightsail or EC2.
First I thought of using Docker for this, but the AWS Lambda entry point is a specific function signature, which is very different from Spring controllers. Is there a solution available currently?
So basically, when I run it on Lambda it will have the cold start issue; later, when the app is ready or gets popular, I would like to move it to an EC2 instance for better performance and higher traffic loads.
I want to start with this in mind so that it's easy to port later and resolve the performance issues.
I'd say no, this is not easily possible.
When you build an API that you want to run on Lambda, you will most likely use an API Gateway that takes care of routing to different Lambda functions (best practice). The moment you build an API like this, migrating to EC2 becomes a nightmare: you would need to rebuild the whole application as more of a monolith that can run on EC2.
I would honestly commit to either running it on EC2/containers or running it on Lambda. If cold start is your main issue with Lambdas, you might want to look into Lambda SnapStart for Java, or use another language like TypeScript/Python.
After trying some better keywords on Google, I finally found what I was looking for. Check out this blog post and code library shared by AWS, which converts the Lambda request and response into the HTTP request format the framework expects:
Running APIs Written in Java on AWS Lambda: https://aws.amazon.com/blogs/opensource/java-apis-aws-lambda/
Repo Code: https://github.com/awslabs/aws-serverless-java-container
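The core trick such container libraries implement is an adapter: translate the API Gateway proxy event into the request shape a web framework expects, and repack the framework's response into the proxy response format. A toy illustration of the idea in Python (all names here are illustrative, not the library's API):

```python
def hello(method, path):
    # An ordinary framework-style handler, unaware of Lambda.
    return 200, "hello from " + path

def lambda_handler(event, context):
    # Adapter: unpack the API Gateway proxy event into plain request data...
    status, body = hello(event["httpMethod"], event["path"])
    # ...and repack the handler's result as a proxy-integration response.
    return {"statusCode": status, "body": body}
```

The same `hello` handler could equally be wired into a normal HTTP server on EC2; only the thin adapter layer differs between the two deployments.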
Thanks Ricardo for your response - I will definitely check out Lambda SnapStart and try it as well. I have not tested this completely, but it looks promising to some extent.

When I deploy a LAMP stack on Google Cloud Platform, the deployment shows these warnings. How do I fix this?

When I deploy a LAMP stack on Google Cloud Platform, these warnings show after the deployment finishes:
This deployment has resources from the Runtime Configurator service,
which is in Beta. There is no planned date for moving this feature
into General Availability (GA). Examples of runtimeconfig types used:
runtimeconfig.v1beta1.config, runtimeconfig.v1beta1.waiter
How can I avoid this?
As the warning message states, the feature you would like to use is not fully available yet, and there is no planned GA date. To confirm this, I created the same deployment in my test environment and ended up with the same warning message even after it deployed successfully.
The warning is intended behavior, meant to notify you that the product you are using, Runtime Configurator, is still a Pre-GA offering, so you should not rely on it completely. If you want to keep using deployments that rely on Runtime Configurator, you can ignore the default warning messages.

What is the difference between using an API Gateway framework like Serverless versus using API Gateway without a framework, via the AWS Management Console?

I believe you mean using the Serverless Framework versus not using it. Considering this, the difference is mostly in the deployment process, maintenance, and "portability".
At the end of the day, both produce the same result, but in the console things tend to take much more time, and it's easy to introduce errors by forgetting to click somewhere or to configure something.
SAM, Chalice, the Serverless Framework, Zappa, etc. (infrastructure as code) will save you time, since one or two files can hold all your configuration, instead of you going page by page and setting things up.
To answer your question: yes, the difference is in the process of deployment, maintenance, and portability. As far as I know, everything that can be done through code can be done via "clickops" with extra effort. An API Gateway created in the console is the same as one created via code.
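For comparison, a minimal Serverless Framework config (service name, handler, and runtime are illustrative) that declares a function plus its API Gateway route in one file, instead of wiring each piece up in the console:

```yaml
# serverless.yml - one file describing the function and its HTTP route
service: demo-api          # illustrative service name
provider:
  name: aws
  runtime: python3.12
functions:
  hello:
    handler: handler.hello # module.function in the deployment package
    events:
      - httpApi:           # creates the API Gateway route on deploy
          path: /hello
          method: get
```

Running `serverless deploy` then creates both the function and the API Gateway route from this single file.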

Lambda AWS X-Ray. Python SDK - Deactivate Locally

I have a Flask app running as an AWS Lambda Function deployed with Zappa and would like to activate X-Ray to get more information for the different functions.
Activating X-Ray with Zappa was easy enough - it only requires adding this line in zappa_settings.json:
"xray_tracing": true
Further, I installed the AWS X-Ray Python SDK and added a few decorators to some functions, like this:
@xray_recorder.capture()
When I deploy this as a Lambda function, it all works well. The problem is using the system locally, both when running tests and when running Flask on a local server instead of as a Lambda function.
When I use any of the functions that are decorated either in a test or through the local server, the following exception is thrown:
aws_xray_sdk.core.exceptions.exceptions.SegmentNotFoundException: cannot find the current segment/subsegment, please make sure you have a segment open
Which of course makes sense, because AWS Lambda handles the creation of segments.
Are there any good ways to deactivate capturing locally? This would be useful e.g. for running unit tests locally on functions that I would like to watch in X-Ray.
One of the feature requests for this SDK is a "disabled" global flag so that everything becomes a no-op: https://github.com/aws/aws-xray-sdk-python/issues/26.
However, it still depends on what you are testing. It's good practice to test what will actually run on Lambda, and you can set some environment variables so the SDK thinks it is running on Lambda.
You can see the SDK looks for two env vars in https://github.com/aws/aws-xray-sdk-python/blob/master/aws_xray_sdk/core/lambda_launcher.py. One is LAMBDA_TASK_ROOT set to true, so the SDK knows to switch to lambda-mode. The other is _X_AMZN_TRACE_ID, which contains the tracing context normally passed by the Lambda container.
If you just want to test non-X-Ray code, you can instead set AWS_XRAY_CONTEXT_MISSING to LOG_ERROR so the SDK doesn't complain about a missing context and simply gives up capturing wrapped functions. This exercises much less of the code path than mimicking Lambda behavior. Ideally, the local Lambda testing tool would be X-Ray friendly. Are you using https://github.com/awslabs/aws-sam-cli? There is already an open issue for this feature: https://github.com/awslabs/aws-sam-cli/issues/217
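The two options above as a test-setup sketch. Both variables must be set before aws_xray_sdk is first imported; the _X_AMZN_TRACE_ID value is an illustrative dummy trace header, not a real trace:

```python
import os

# Option 1: pretend to be the Lambda environment so the SDK switches to
# lambda-mode and picks up a (dummy) trace context.
os.environ["LAMBDA_TASK_ROOT"] = "true"
os.environ["_X_AMZN_TRACE_ID"] = (
    # Illustrative dummy value in the X-Amzn-Trace-Id header format.
    "Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1"
)

# Option 2 (simpler, for non-X-Ray tests): log instead of raising
# SegmentNotFoundException when no segment is open.
os.environ["AWS_XRAY_CONTEXT_MISSING"] = "LOG_ERROR"
```

In a test suite, this would typically go in a conftest or setup module that runs before any application imports.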

Why might an AWS Lambda function run in docker-lambda, but crashes on AWS?

There is a set of Docker images called docker-lambda that aims to replicate the AWS Lambda environment as closely as possible. It's third party, but it's somewhat endorsed by Amazon and used in their SAM Local offering. I wrote a Lambda function that invokes an ELF binary, and it works when I run it in a docker-lambda container with the same runtime and kernel as the AWS Lambda environment. When I run the same function on AWS Lambda, it mysteriously crashes with no output. I've given the function more RAM than the process could possibly use, and the dependencies are clearly there. I know that it's not really possible to debug this without access to the binaries, but are there any common/possible reasons why this might happen?
Additionally, are there any techniques for debugging something like this? When running in Docker, I can add a --cap-add SYS_PTRACE flag, invoke the command under strace, and see what might be causing the error. I'm not aware of an equivalent debugging technique on AWS Lambda.
I'm willing to bet that the AWS Lambda execution environment isn't playing nicely with that third-party image. I've had no problems with Amazon's amazon/aws-lambda-python:3.8 image.
The only other thing I can think of is tweaking your Lambda's timeout.