Is it possible to instruct AWS Custom Authorizers to call AWS Lambdas based on Stage Variables?

I am mapping Lambda Integrations like this on API Gateway:
${stageVariables.ENV_CHAR}-somelambda
So I can have d-somelambda, s-somelambda, etc.: several versions, one per environment, all running simultaneously. This works fine.
BUT, I am using Custom Authorizers, and I have d-authorizer-jwt and d-authorizer-apikey.
When I deploy the API in the DEV stage, it's all OK. But when I deploy to the PROD stage, all Lambda calls correctly point to the p-* Lambdas, except the custom authorizer, which is still pointing to the "d" (DEV) function and calling the dev backend for validation (it caches, but sometimes checks the database).
Please note I don't necessarily want to pass the Stage Variables into the function like others are asking; I just want to call the correct Lambda from configuration, the way Integration Request allows. If accessing Stage Variables inside the function were the only way to solve this, I would have to change my approach to a single Lambda for all environments that reaches the required backend dynamically based on Stage Variables... not that good.
Thanks

Solved. It works just as I described. There are some caveats:
a) You need to grant API Gateway permission to invoke that Lambda beforehand
b) You can't test the authorizer in the console due to a UI glitch... it doesn't ask for the stage variable, so you will never reach the Lambda
c) You need to deploy the API to get the authorizers updated on a particular stage
I can't tell why it didn't work on my first attempt.
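For reference, the authorizer's Lambda function can be referenced with the same stage-variable pattern used for the integrations. For example (the region, account id and function name below are placeholders following the question's naming scheme):

arn:aws:lambda:us-east-1:123456789012:function:${stageVariables.ENV_CHAR}-authorizer-jwt

With ENV_CHAR set to d on the DEV stage and p on the PROD stage, each stage resolves to its own authorizer Lambda, provided API Gateway has been granted invoke permission on each of those functions (caveat a).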

Related

Using CDK to Create a Step Function With Dependencies on Other AWS Resources (Like a Lambda) Owned By Different Projects

We're using AWS Step Functions in our application. We have one step function we're creating with the use of the CDK as part of a deployment of Application A from Repository A. That step function needs to include a lambda function as one of the steps. The problem we're having is that this lambda function is created and maintained independently in a different repository (Repository B). We're not sure the best way to connect one AWS resource (AWS Lambda) with another AWS resource (AWS Step Functions) when the creation of those two resources is happening independently in two different places.
We'd like to not manually create the lambda or step function (or both) in each environment. It's time consuming, prone to error and we're going to have a lot of these situations occur.
Our best thought at the moment is that we could maybe have Application A create the step function, but have it create and reference an empty lambda. Initially the step function won't be fully functional of course, but then when we deploy Application B it could look for that empty lambda function and upload new code to it.
And, so that we don't have an issue where deploying Application B first results in non-working code, we can also handle the opposite condition: Application B could create the lambda function before uploading the code to it if it doesn't already exist. Application A could then look to see if the lambda function already exists when creating the step function and just reference the lambda function in the step function directly.
Concerns with this approach:
This is extra work and adds a lot of complexity to the deployment, so more potential for failure
I'm not sure I can easily look up a lambda function like this anyway (I guess it would have to be by name since we couldn't know what the ARN would be when we're writing the code). But then we have issues if the name changes too, so maybe there's a pre-defined ID or something we could use to look it up instead.
Potential for code failing in production. If when deploying to QA for testing we deploy Application A, then Application B, we really only know that scenario works. If, then, when going to production we deploy them in the opposite order it might break.
What are some good options for this kind of thing? I can't think of anything great. My best idea involves not using Lambda at all, but instead having the step function step queue something up in SQS; then Application B can just read from that queue, no problem. It feels like this is a common enough scenario, though, that there must be some clean way to do it with Lambda, and I wouldn't want my decisions on which AWS services I can use to be stymied by deployment feasibility.
Thanks
You can easily include an existing Lambda function in a new CDK-created Step Function. Use the Function.fromFunctionArn static method to get a read-only reference to the Lambda using its ARN. The CDK uses the ARN to add the necessary lambda:InvokeFunction permission to the Step Function's assumed role.
import { aws_lambda as lambda, aws_stepfunctions_tasks as tasks } from 'aws-cdk-lib';

const importedLambdaTask = new tasks.LambdaInvoke(this, 'ImportedLambdaTask', {
  lambdaFunction: lambda.Function.fromFunctionArn(
    this,
    'ImportedFunc',
    'arn:aws:lambda:us-east-1:123456789012:function:My-Lambda5C096DFA-RLhGGzBJSnMN'
  ),
  resultPath: '$.importedLambdaTask',
});
If you prefer not to hard-code the Lambda ARN in the CDK stack, save the ARN as an SSM Parameter Store parameter. Then import it into the stack by name and pass its value to fromFunctionArn:
import { aws_ssm as ssm } from 'aws-cdk-lib';

const lambdaArnParam = ssm.StringParameter.fromStringParameterName(
  this,
  'ArnFromParamStore',
  'lambda-arn-saved-as-ssm-param'
);

// e.g. lambda.Function.fromFunctionArn(this, 'ImportedFunc', lambdaArnParam.stringValue)
Edit: Optionally add a Trigger construct to your CDK Application A to confirm the existence of the Application B Lambda dependency before deploying. Triggers are a newish CDK feature that let you run Lambda code during deployments. The Trigger Function should return an error if it cannot find the external Lambda, thereby causing Application A's deployment to fail.
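A rough sketch of such a trigger, assuming aws-cdk-lib v2 and a hypothetical name for the Application B Lambda:

import { aws_iam as iam, aws_lambda as lambda } from 'aws-cdk-lib';
import * as triggers from 'aws-cdk-lib/triggers';

// Runs once during Application A's deployment; a thrown error fails the deploy
new triggers.TriggerFunction(this, 'CheckExternalLambdaExists', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromInline(`
    const { LambdaClient, GetFunctionCommand } = require('@aws-sdk/client-lambda');
    exports.handler = async () => {
      // Throws ResourceNotFoundException if Application B's Lambda is missing
      await new LambdaClient({}).send(
        new GetFunctionCommand({ FunctionName: process.env.TARGET_FUNCTION_NAME })
      );
    };
  `),
  environment: { TARGET_FUNCTION_NAME: 'app-b-function-name' }, // hypothetical name
  initialPolicy: [
    new iam.PolicyStatement({ actions: ['lambda:GetFunction'], resources: ['*'] }),
  ],
});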

How to integrate AWS X-Ray with other AWS services such as SQS, S3

In my app, I was able to track all the Lambda, API Gateway and DynamoDB requests through AWS X-Ray.
I am doing the same as the answer in this question:
Adding XRAY Tracing to non-rest functions e.g., SQS, Cognito Triggers etc
However, what about the case of S3, SQS or other services/non-REST triggers?
I saw some old code that does not even use aws-sdk; the dependencies are imported directly, like:
import {S3Client, S3Address, RoleService, SQSService} from '#sws/aws-bridge';
So, in these cases, how do I integrate/activate AWS X-Ray?
Thank you very much in advance!
Cheers,
Marcelo
At the moment Lambda doesn't support continuing traces from triggers other than REST APIs or direct invocation.
The upstream service can be an instrumented web application or another Lambda function. Your service can invoke the function directly with an instrumented AWS SDK client, or by calling an API Gateway API with an instrumented HTTP client.
https://docs.aws.amazon.com/xray/latest/devguide/xray-services-lambda.html
In every other case it will create its own, new Trace ID and use that instead.
You can work around this yourself by creating a new X-Ray segment inside the Lambda function, using the incoming trace ID from the event. This will result in two segments for your Lambda invocation: one that Lambda itself creates, and one that you create to extend the existing trace. Whether that's acceptable or not for your use case is something you'll have to decide for yourself! A rough NodeJS sketch of this approach is shown further below.
If you're working with Python you can do it with aws-xray-lambda-segment-shim.
If you're working with NodeJS you can follow this guide on dev.to.
If you're working with .NET there are some examples on this GitHub issue.
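For illustration, here is a rough NodeJS/TypeScript sketch of that second-segment workaround for an SQS-triggered Lambda (the segment name and the header parsing are simplified, and aws-xray-sdk-core is assumed to be bundled with the function):

import * as AWSXRay from 'aws-xray-sdk-core';
import { SQSEvent } from 'aws-lambda';

export const handler = async (event: SQSEvent): Promise<void> => {
  // SQS propagates the upstream trace in the AWSTraceHeader system attribute,
  // e.g. "Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1"
  const attrs = event.Records[0].attributes as Record<string, string | undefined>;
  const parts = Object.fromEntries(
    (attrs.AWSTraceHeader ?? '').split(';').map((kv) => kv.split('=') as [string, string])
  );

  // Open a second segment that continues the upstream trace,
  // in addition to the one Lambda creates for the invocation
  const segment = new AWSXRay.Segment('my-sqs-consumer', parts['Root'], parts['Parent']);
  try {
    // ... process the messages ...
  } finally {
    segment.close(); // closing emits the segment to the X-Ray daemon
  }
};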

What is the proper way to build many Lambda functions and update them later?

I want to make a bot that makes other bots on the Telegram platform. I want to use AWS infrastructure; it looks like their Lambda functions are a perfect fit, since you pay for them only when they are active. In my concept, each bot equals one Lambda function, and they all share the same codebase.
At the starting point, I thought of creating each new Lambda function programmatically, but I think this will bring me problems later, like needing to attach many services programmatically via the AWS SDK: API Gateway, DynamoDB. But the main problem: how will I update the codebase for these 1000+ functions later? I think a bash script is a bad idea here.
So, I moved forward and found SAM (AWS Serverless Application Model) and CloudFormation, which should help me, I guess. But I can't understand the concept. I can make a stack with all the required resources, but how will I make new bots from this one stack? Or should I build a template and make new stacks for each new bot programmatically via the AWS SDK from this template?
Next, how do I update them later? For example, I want to update all bots that have version 1.1 to version 1.2. How will I replace them? Should I make a new stack, or can I update older ones? I don't see any options for that in the CloudFormation UI or any related methods in the AWS SDK.
Thanks
But the main problem: how will I update the codebase for these 1000+ functions later?
You don't. You use a Lambda alias. This allows you to fully decouple your Lambda versions from your clients. It works because your clients' code (or API Gateway) references an alias of your function. The alias is fixed and does not change.
However, an alias is like a pointer: it can point to different versions of your Lambda function. Therefore, when you publish a new Lambda version you just point the alias to it. It's fully transparent to your clients, and their alias reference does not require any change.
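Since you mentioned driving things programmatically with the AWS SDK, here is a minimal sketch of the publish-and-repoint step (AWS SDK for JavaScript v3; the function and alias names are made up for the example):

import { LambdaClient, PublishVersionCommand, UpdateAliasCommand } from '@aws-sdk/client-lambda';

const client = new LambdaClient({});

async function promote(functionName: string, aliasName: string): Promise<void> {
  // Freeze the currently deployed code as a new immutable version
  const { Version } = await client.send(new PublishVersionCommand({ FunctionName: functionName }));

  // Repoint the alias that clients (or API Gateway) reference to the new version
  await client.send(new UpdateAliasCommand({
    FunctionName: functionName,
    Name: aliasName,
    FunctionVersion: Version,
  }));
}

// e.g. promote('my-bot', 'live');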
I agree with @Marcin. Also, it would be worth checking out the Serverless Framework. It seems like you are still experimenting, so most likely you are deploying using bash scripts with AWS SDK/SAM commands. This is fine, but once you get the gist of what your architecture looks like, I think you will appreciate what Serverless can offer. You can deploy/tear down CloudFormation stacks in a matter of seconds. You can also use serverless-offline to run a local build of your AWS Lambda architecture on your machine.
All this has saved me hours of grunt work.

Specify API Gateway id instead of using 'random' id

When deploying an AWS Lambda function (via the Serverless Framework) and exposing it via an HTTPS endpoint in AWS API Gateway... is it possible to construct and set the API Gateway id, and thus determine the first part of the HTTP endpoint for that Lambda function?
When deploying an AWS Lambda function and adding an HTTP event, I currently get a random id as the first hostname part in https://klv5e3c8z5.execute-api.eu-west-1.amazonaws.com/v1/fizzbuzz. New/fresh deployments receive a new random string, that 10-character id.
Instead of using that, I would like to determine and set that id. (I will make sure that it's sufficiently unique myself, or deal with endpoint naming collisions myself.)
Reason for this is that in a separate Serverless project, I need to use that endpoint (and thus need to know that id). Instead of having it determined by project 1 and then reading/retrieving it in project 2, I want to construct and set the endpoint in project 1 so that I can use the known endpoint in project 2 as well.
(A suggestion was to use a custom domain as an alternative/alias for that endpoint... but if possible I don't want to introduce a new component into the mix, and a solution that does not include Cloud-it-might-take-30-minutes-to-create-a-domain-Front is better :-) )
If this isn't possible, I might need to use the approach as described at http://www.goingserverless.com/blog/api-gateway-url, mentioning that the endpoint is being exposed from one project via the CloudFormation stack, to be read from and used in the other project, but that introduces (a little latency and) a dependency in deploying the second project.
The "first hostname" you want to set is called "REST API id" and is generated by API Gateway when creating the API. The API used to create API's in API Gateway doesn't offer the ability to specify the REST API id, so no, there is no way to specify the id.
The reason for that is probably that these id's are used as part of a public facing domain name. As this domain name doesn't include an identifier for the AWS account it belongs to, the id's have to be globally unique, so AWS generates them to avoid collisions. As AWS puts it (emphasis by me):
For an edge-optimized API, the base URL is of the http[s]://*{restapi-id}*.execute-api.amazonaws.com/stage format, where {restapi-id} is the API's id value generated by API Gateway. You can assign a custom domain name (for example, apis.example.com) as the API's host name and call the API with a base URL of the https://apis.example.com/myApi format.
For the option to create a custom domain name you should consider that there is even more complexity associated with it, as you must provision a matching SSL certificate for the domain as well. While you can use ACM for that, there is currently the limitation that SSL certificates for CloudFront distributions (which edge-optimized API Gateway APIs use behind the scenes) need to be issued in us-east-1.
The option you already mentioned, to export the API endpoint in the CloudFormation stack as an output value and use that exported value in your other stack, would work well. As you noted, that'd create a dependency between the two stacks: once you deploy project 2, which uses the output value from project 1, you can only delete the CloudFormation stack for project 1 after the project 2 stack is either deleted or updated to not use the exported value anymore. That can be a feature, but from your description it sounds like it wouldn't be for your use case.
Something similar to exported stack output values would be to use some shared storage instead of making use of CloudFormation's exported output values feature. What comes to mind here is the SSM Parameter Store, which offers some integration with CloudFormation. The integration makes it easy to read a parameter from the SSM Parameter Store in the stack of project 2. For writing the value to the Parameter Store in project 1 you'd need to use a custom resource in your CloudFormation template. There is at least one sample implementation for that available on GitHub.
As you can see, there are multiple options available to solve your problem. Which one to choose depends on your project's needs.
Question: "is it possible to construct and set the API Gateway id?"
Answer: No (see the other answer to this question).
I was able to get the service endpoint of project 1 into the serverless.yml file of project 2 though, to finally construct the full URL of the service that I needed. I'm sharing this because it's an alternative solution that also works in my case.
In the serverless.yml of project 2, you can refer to the service endpoint of project 1 via service_url: "${cf:<service-name>-<stage>.ServiceEndpoint}". Example: "${cf:my-first-service-dev.ServiceEndpoint}".
CloudFormation exposes the ServiceEndpoint output that contains the full URL, including the API Gateway REST API id.
More information in Serverless Framework documentation: https://serverless.com/framework/docs/providers/aws/guide/variables/#reference-cloudformation-outputs.
It seems that the Serverless Framework adds this ServiceEndpoint as a stack output.

Handling different end points for AWS API Gateway Stages

I want to be able to change the endpoint defined in each API Gateway method so that a staging environment called "Dev" points to my internal Dev API, and the Prod stage of course routes to my Production API.
Right now I'd have to manually change each method and then deploy to the prod stage, but then to do any testing I'd have to change them all back again for the dev stage.
I am moving ahead with a DNS switch to move Dev to Prod but future development still requires a change on every method.
example:
I have a resource called User and a GET Method which maps to an end point (HTTP Proxy) -> http://dev.mytestapp.com/api/v1/user
I then deploy to a Stage called Dev - the Dev stage gives me a URL to call to request this resource, eg. https://xxxxobl.execute-api.us-east-1.amazonaws.com/dev/user
Now I test and it works as expected, so I want to move this to a production stage, just called prod. When I deploy to prod, my calling URL is now https://xxxxobl.execute-api.us-east-1.amazonaws.com/prod/user
but the problem is that the API is still mapping the end point to http://dev.mytestapp.com/api/v1/user and not something like http://prod.mytestapp.com/api/v1/user
So my stage and url have changed but the actual API being called is hard coded to dev.
Any ideas?
Thanks
You can take advantage of stage variables to have endpoints route to different APIs. The API Gateway documentation shows how to set up a stage variable for an HTTP proxy integration, and you can use stage variables for Lambda functions as well. See the example below.
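For example, the HTTP proxy integration endpoint can reference a stage variable (the variable name env and the hostnames follow the question's example), with each stage supplying its own value:

Endpoint URL: http://${stageVariables.env}.mytestapp.com/api/v1/user

Dev stage:  env = dev   ->  calls http://dev.mytestapp.com/api/v1/user
Prod stage: env = prod  ->  calls http://prod.mytestapp.com/api/v1/user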
Having different stages means having a different environment for the same Lambda using the same API, with stages like prod, qa and test.