How to integrate AWS X-Ray with other AWS services such as SQS and S3

In my app, I was able to trace all Lambda, API Gateway, and DynamoDB requests through AWS X-Ray.
I am doing the same as the answer in this question:
Adding XRAY Tracing to non-rest functions e.g., SQS, Cognito Triggers etc
However, what about the case of S3, SQS, or other services/non-REST functions?
I saw some old code that doesn't even use aws-sdk; the dependencies are imported directly, like:
import {S3Client, S3Address, RoleService, SQSService} from '#sws/aws-bridge';
So, in these cases, how do I integrate/activate AWS X-Ray?
Thank you very much in advance!
Cheers,
Marcelo

At the moment, Lambda doesn't support continuing traces from triggers other than REST APIs or direct invocation:
The upstream service can be an instrumented web application or another Lambda function. Your service can invoke the function directly with an instrumented AWS SDK client, or by calling an API Gateway API with an instrumented HTTP client.
https://docs.aws.amazon.com/xray/latest/devguide/xray-services-lambda.html
In every other case it will create its own, new Trace ID and use that instead.
You can work around this yourself by creating a new X-Ray segment inside the Lambda function and using the incoming trace ID from the event. This will result in two segments for your Lambda invocation: one that Lambda itself creates, and one that you create to extend the existing trace. Whether that's acceptable for your use case is something you'll have to decide for yourself! See the language-specific pointers below, and the sketch after them.
If you're working with Python you can do it with aws-xray-lambda-segment-shim.
If you're working with NodeJS you can follow this guide on dev.to.
If you're working with .NET there are some examples on this GitHub issue.
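If you're working with Java, the same trick looks roughly like the sketch below: parse the AWSTraceHeader attribute that SQS propagates, then open a second segment on the upstream trace. This is a hedged sketch under stated assumptions (handler shape, the segment name my-sqs-consumer, and the annotation are placeholders), not official guidance:

import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import com.amazonaws.xray.AWSXRay;
import com.amazonaws.xray.entities.Segment;
import com.amazonaws.xray.entities.TraceHeader;

public class SqsConsumer {
    public void handleMessage(SQSEvent.SQSMessage message) {
        // SQS carries the upstream trace context in the AWSTraceHeader attribute.
        TraceHeader traceHeader =
                TraceHeader.fromString(message.getAttributes().get("AWSTraceHeader"));

        // Open a second segment that belongs to the upstream trace.
        Segment segment = AWSXRay.getGlobalRecorder().beginSegment(
                "my-sqs-consumer",            // placeholder segment name
                traceHeader.getRootTraceId(), // reuse the upstream trace ID
                traceHeader.getParentId());   // link back to the producer's segment
        try {
            segment.putAnnotation("source", "sqs");
            // ... actual message processing; instrumented SDK calls become subsegments ...
        } finally {
            AWSXRay.endSegment();             // close and flush to the X-Ray daemon
        }
    }
}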

Related

Can I write API code that runs on serverless (AWS Lambda) and have the same code run on EC2?

I am looking for a language/framework or a method by which I can build API / web application code such that it can run on serverless compute like AWS Lambda, and the same code can run on a dedicated compute system like Lightsail or EC2.
First I thought of using Docker for this, but the AWS Lambda entry point is a specific function signature, which is very different from Spring controllers. Is there a solution available currently?
So basically, when I run it on Lambda it will have the cold start issue; later, when the app is ready or gets popular, I would like to move it to an EC2 instance for better performance and higher traffic load.
I want to start right in this situation so that it is easy to port later and resolve the performance issues.
I'd say no, this is not easily possible.
When you are building an API that you want to run on Lambdas, you will most likely be using an API Gateway, which takes care of routing to different Lambda functions (best practice). So the moment you start working on an API like this, migrating to EC2 would be a nightmare, as you would need to rebuild the whole application into more of a monolith that could run on EC2.
I would honestly commit to either running it on EC2/containers or running it on Lambda. If cold start is your main issue with Lambdas, you might want to look into Lambda SnapStart for Java, or use another language like TypeScript/Python.
After trying the right keywords in Google, I finally found what I was looking for. Check out this blog post and code library shared by AWS, which converts the Lambda request and response to and from the HTTP request format your framework expects.
Running APIs Written in Java on AWS Lambda: https://aws.amazon.com/blogs/opensource/java-apis-aws-lambda/
Repo Code: https://github.com/awslabs/aws-serverless-java-container
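For context, the core of that library is an adapter class that boots your Spring app inside the Lambda handler. A minimal sketch along the lines of the project's docs (Application.class stands in for your own Spring Boot application class):

import com.amazonaws.serverless.exceptions.ContainerInitializationException;
import com.amazonaws.serverless.proxy.model.AwsProxyRequest;
import com.amazonaws.serverless.proxy.model.AwsProxyResponse;
import com.amazonaws.serverless.proxy.spring.SpringBootLambdaContainerHandler;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class LambdaHandler implements RequestHandler<AwsProxyRequest, AwsProxyResponse> {
    private static final SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            // Boot the Spring application once per container, not per request.
            handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
        } catch (ContainerInitializationException e) {
            throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
    }

    @Override
    public AwsProxyResponse handleRequest(AwsProxyRequest request, Context context) {
        // API Gateway proxy event in, HTTP-style response out; the same Spring
        // controllers can still run unchanged on EC2 behind an embedded server.
        return handler.proxy(request, context);
    }
}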
Thanks Ricardo for your response; I will definitely check out Lambda SnapStart and try it as well. I have not tested this completely yet, but it looks promising.

What is the proper way to build many Lambda functions and update them later?

I want to make a bot that makes other bots on the Telegram platform. I want to use AWS infrastructure; it looks like their Lambda functions are a perfect fit, since you pay for them only when they are active. In my concept, each bot equals one Lambda function, and they all share the same codebase.
At first I thought of creating each new Lambda function programmatically, but I think this will bring problems later, like needing to attach many services programmatically via the AWS SDK: API Gateway, DynamoDB. But the main problem: how will I update the codebase for these 1000+ functions later? I think a bash script is a bad idea here.
So I moved on and found SAM (AWS Serverless Application Model) and CloudFormation, which should help me, I guess. But I can't understand the concept. I can make a stack with all the required resources, but how will I make new bots from this one stack? Or should I build a template and create new stacks for each new bot programmatically via the AWS SDK from this template?
Next, how do I update them later? For example, I want to update all bots that have version 1.1 to version 1.2. How will I replace them? Should I make a new stack, or can I update the older ones? I don't see any options for that in the CloudFormation UI or any related methods in the AWS SDK.
Thanks
But the main problem: how will I update the codebase for these 1000+ functions later?
You don't. You use a Lambda alias. This allows you to fully decouple your Lambda versions from your clients. It works because your clients' code (or API Gateway) references an alias of your function. The alias is fixed and does not change.
However, an alias is like a pointer: it can point to different versions of your Lambda function. So when you publish a new Lambda version, you just point the alias to it. This is fully transparent to your clients, and their alias does not require any change.
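To make that concrete, a hedged sketch with the AWS SDK for Java v1 (the function name telegram-bot and the alias live are made-up examples):

import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.PublishVersionRequest;
import com.amazonaws.services.lambda.model.UpdateAliasRequest;

public class AliasRollout {
    public static void main(String[] args) {
        AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();

        // Freeze the current code/config of the shared function as an immutable version.
        String newVersion = lambda.publishVersion(
                new PublishVersionRequest().withFunctionName("telegram-bot"))
                .getVersion();

        // Repoint the stable alias; clients keep invoking "telegram-bot:live" unchanged.
        lambda.updateAlias(new UpdateAliasRequest()
                .withFunctionName("telegram-bot")
                .withName("live")
                .withFunctionVersion(newVersion));
    }
}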
I agree with @Marcin. Also, it would be worth checking out the Serverless Framework. It seems like you are still experimenting, so most likely you are deploying using bash scripts with AWS SDK/SAM commands. This is fine, but once you start getting the gist of what your architecture looks like, I think you will appreciate what Serverless can offer. You can deploy/tear down CloudFormation stacks in a matter of seconds. You can also use serverless-offline to run a local build of your AWS Lambda architecture on your own machine.
All this has saved me hours of grunt work.

Is it possible to instruct AWS Custom Authorizers to call AWS Lambdas based on Stage Variables?

I am mapping Lambda integrations like this in API Gateway:
${stageVariables.ENV_CHAR}-somelambda
So I can have d-somelambda, s-somelambda, etc. Several versions for environments, all simultaneous. This works fine.
BUT, I am using Custom Authorizers, and I have d-authorizer-jwt and d-authorizer-apikey.
When I deploy the API to the DEV stage, it's all OK. But when I deploy to the PROD stage, all Lambda calls dynamically point to the proper p-* Lambdas, except the custom authorizer, which still points to "d" (DEV) and calls the dev backend for the needed validation (it caches, but sometimes checks the database).
Please note I don't necessarily want to pass the stage variables down, as others are asking; I just want to call the correct Lambda from a proper configuration, like the one Integration Request offers. If I had to fall back on reading stage variables to solve this, I would need to change my approach to a single Lambda for all environments that dynamically reaches the required backend based on stage variables... not that good.
Thanks
Solved. It works just as I described. There are some caveats:
a) You need to grant access to that Lambda beforehand.
b) You can't test the authorizer in the console due to a UI glitch: it doesn't ask for the stage variable, so you will never reach the Lambda.
c) You need to deploy the API to get the authorizers updated on a particular stage.
I can't tell why it didn't work on my first attempt.
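For reference, the working configuration is the same stage-variable pattern as the integration: give the authorizer a Lambda ARN containing the stage variable, along the lines of (region and account ID are placeholders):
arn:aws:lambda:us-east-1:123456789012:function:${stageVariables.ENV_CHAR}-authorizer-jwt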

Automatically instrumenting Java application using AWS X-Ray

I am trying to achieve automatic instrumentation of all calls made by AWS SDKs for Java using X-Ray.
The X-Ray SDK for Java automatically instruments all AWS SDK clients when you include the AWS SDK Instrumentor submodule in your build dependencies.
(from the documentation)
I have added these to my POM
aws-xray-recorder-sdk-core
aws-xray-recorder-sdk-aws-sdk
aws-xray-recorder-sdk-spring
aws-xray-recorder-sdk-aws-sdk-instrumentor
and am using e.g. aws-java-sdk-ssm and aws-java-sdk-sqs.
I expected to only have to add the X-Ray packages to my POM and provide adequate IAM policies.
However, when I start my application I get exceptions such as these:
com.amazonaws.xray.exceptions.SegmentNotFoundException: Failed to begin subsegment named 'AWSSimpleSystemsManagement': segment cannot be found.
I tried wrapping the SSM call in a manual segment, and that worked, but then the next call from another AWS SDK immediately threw a similar exception.
How do I achieve the automatic instrumentation mentioned in the documentation? Am I misunderstanding something?
It depends on how you make AWS SDK calls in your application. If you have added the X-Ray servlet to your Spring application per https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java-filters.html, then each time your application receives a request, the X-Ray servlet filter will open a segment and store it in the thread serving that request. Any AWS SDK calls you make as part of that request/response cycle will pick up that segment as the parent.
The error you got means that the X-Ray Instrumentor tried to record the AWS API call as a subsegment but could not find a parent (that is, which request this call belongs to).
Depending on your use case, you might want to explicitly instrument certain AWS SDK clients and leave others plain, if some of those clients make calls in a background worker.
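For completeness, registering the filter in a Spring app per the linked documentation looks roughly like this (MyServiceName is just whatever segment name you want to see in the trace map):

import javax.servlet.Filter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.amazonaws.xray.javax.servlet.AWSXRayServletFilter;

@Configuration
public class XRayConfig {
    @Bean
    public Filter tracingFilter() {
        // Opens a segment per incoming request; SDK calls made while serving
        // that request attach to it as subsegments.
        return new AWSXRayServletFilter("MyServiceName");
    }
}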

Emulating Amazon SQS during development

I'm quite interested in beginning some development using Amazon SQS, and perhaps SimpleDB too. My question is this: are there any open source solutions that mimic the functionality, just for development purposes? I've already encountered the Eucalyptus project (http://open.eucalyptus.com) for creating an EC2-esque cloud.
I've not had any success with Google. I suspect it's because the cost of entry is so low, but still, does anyone know of anything like this?
For SQS I wrote ElasticMQ, which you can run either embedded (it's written in Scala, so it runs on the JVM) or stand-alone. It has both persistent and in-memory modes; the first is good for dev, the second for testing.
If you need a test double for more than just SQS, you can try LocalStack.
To simulate SQS, it internally uses ElasticMQ mentioned by adamw.
You can start LocalStack via Docker, for example, and it will start the following services:
API Gateway at http://localhost:4567
Kinesis at http://localhost:4568
DynamoDB at http://localhost:4569
DynamoDB Streams at http://localhost:4570
Elasticsearch at http://localhost:4571
S3 at http://localhost:4572
Firehose at http://localhost:4573
Lambda at http://localhost:4574
SNS at http://localhost:4575
SQS at http://localhost:4576
Redshift at http://localhost:4577
ES (Elasticsearch Service) at http://localhost:4578
SES at http://localhost:4579
Route53 at http://localhost:4580
CloudFormation at http://localhost:4581
CloudWatch at http://localhost:4582
SSM at http://localhost:4583
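Once the containers are up, you point your SDK clients at the local endpoints instead of AWS. A minimal sketch with the AWS SDK for Java v1, using the SQS port from the list above (the region string is arbitrary for LocalStack, and dev-queue is a made-up name):

import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class LocalSqsDemo {
    public static void main(String[] args) {
        // Point the client at LocalStack's SQS endpoint instead of the real AWS one.
        AmazonSQS sqs = AmazonSQSClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "http://localhost:4576", "us-east-1"))
                .build();

        String queueUrl = sqs.createQueue("dev-queue").getQueueUrl();
        sqs.sendMessage(queueUrl, "hello from local dev");
        System.out.println(sqs.receiveMessage(queueUrl).getMessages());
    }
}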
Some of the Amazon SDKs have "mock" mode, which is:
The mock service is an alternate way to use the sample code. The service doesn't call AWS, but instead returns a set response that you can modify to suit your needs (the XML response files are in the Mock directory). The mock service makes it easy for you to test how your application handles different responses.
For SQS, it appears the Perl and PHP SDKs have mock mode. I know that the .NET SDK for Amazon RDS also has the mock mode.
The Java SDK doesn't contain mock implementations:
The client mock implementations have been removed. Instead, developers are encouraged to use more flexible and full featured mock libraries, such as EasyMock, jMock.
If the SDK you will be using doesn't have a mock mode available, you could probably create something similar yourself that returns preconfigured responses instead of actually hitting the service.
See here for more info
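As a sketch of that idea (every name here is made up): hide the real client behind a small interface of your own, and swap in an in-memory implementation that returns preconfigured or accumulated responses during development.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical abstraction over the queue operations your app actually uses.
interface MessageQueue {
    void send(String queueUrl, String body);
    List<String> receive(String queueUrl);
}

// In-memory stand-in: no network calls, just canned/accumulated responses.
class InMemoryMessageQueue implements MessageQueue {
    private final Map<String, List<String>> queues = new HashMap<>();

    public void send(String queueUrl, String body) {
        queues.computeIfAbsent(queueUrl, k -> new ArrayList<>()).add(body);
    }

    public List<String> receive(String queueUrl) {
        List<String> batch = new ArrayList<>(queues.getOrDefault(queueUrl, new ArrayList<>()));
        queues.remove(queueUrl); // roughly emulate consume-on-read
        return batch;
    }
}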
GoAws - https://github.com/p4tin/goaws - was just released as beta. (disclaimer - I am the developer).
If you are on .NET or Mono, you can try Stratosphere. It has local implementations that mimic SimpleDB, SQS, and S3. The SimpleDB mock implementation uses SQLite; for SQS and S3 it stores messages/objects in the file system.
If you need to simulate SNS as well as SQS, you can check out Yopa.