Handling different endpoints for AWS API Gateway Stages

I want to be able to change the endpoint defined in each API Gateway method so that a staging environment called "Dev" points to my internal Dev API and the Prod stage routes to my Production API.
Right now I'd have to manually change each method and then deploy to the prod stage, but then to do any testing I'd have to change them all back again for the dev stage.
I am moving ahead with a DNS switch to move Dev to Prod, but future development still requires a change on every method.
example:
I have a resource called User and a GET method which maps to an endpoint (HTTP Proxy) -> http://dev.mytestapp.com/api/v1/user
I then deploy to a stage called Dev - the Dev stage gives me a URL to call to request this resource, e.g. https://xxxxobl.execute-api.us-east-1.amazonaws.com/dev/user
Now I test and it works as expected, so I want to move this to a production stage, just called prod. When I deploy to prod, my calling URL is now https://xxxxobl.execute-api.us-east-1.amazonaws.com/prod/user
but the problem is that the API is still mapping the endpoint to http://dev.mytestapp.com/api/v1/user and not something like http://prod.mytestapp.com/api/v1/user
So my stage and URL have changed, but the actual API being called is hard-coded to dev.
Any ideas?
Thanks

You can take advantage of stage variables to have endpoints route to different APIs. The API Gateway documentation on stage variables shows how to set one up for an HTTP proxy. You can use stage variables for Lambda functions as well.
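For example (the variable name env here is just illustrative), you could set the method's HTTP proxy endpoint URL to http://${stageVariables.env}.mytestapp.com/api/v1/user, then define the stage variable env as dev on the Dev stage and prod on the Prod stage. Each deployed stage then routes to its matching backend without touching any method. The same ${stageVariables...} syntax also works in a Lambda integration's function name, e.g. ${stageVariables.env}-somelambda.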

Having different stages means having different environments for the same Lambda using the same API - different stages like prod, qa, and test.

Related

Can I write API code that runs on serverless (AWS Lambda) and have the same code run on EC2?

I am looking for a language/framework or a method by which I can build API / web application code such that it can run on serverless compute like AWS Lambda, and the same code can run on a dedicated compute system like Lightsail or EC2.
First I thought of using Docker to do this, but the AWS Lambda entry point is a specific function signature, which is very different from Spring controllers. Is there a solution available currently?
So basically, when I run it in Lambda it will have cold-start issues; later, when the app is ready or gets popular, I would like to move it to an EC2 instance for better performance and higher traffic load.
I want to start right in this situation so that later it is easy to port and resolve the performance issues.
I'd say no, this is not easily possible.
When you are building an API that you want to run on Lambdas, you will most likely be using API Gateway, which takes care of routing to different Lambda functions (best practice). So the moment you are working on an API like this, migrating to EC2 would be a nightmare, as you would need to rebuild the whole application into more of a monolith that could run on EC2.
I would honestly commit to either running it on EC2/containers or running it on Lambda. If cold start is your main issue with Lambdas, you might want to look into Lambda SnapStart for Java or use another language like TypeScript/Python.
After hitting the right keywords in Google I finally found what I was looking for: check out this blog post and code library shared by AWS, which converts the Lambda request and response to and from the HTTP request format that the framework expects.
Running APIs Written in Java on AWS Lambda: https://aws.amazon.com/blogs/opensource/java-apis-aws-lambda/
Repo Code: https://github.com/awslabs/aws-serverless-java-container
Thanks Ricardo for your response - I will definitely check out Lambda SnapStart and try it as well. I have not tested this out completely, but it looks promising to some extent.

AWS CDK - How to run API and Lambdas locally?

edit: Turns out the solution is in the docs. I had the bog-standard 'sam' installed, but I needed what they call the 'public preview version', AKA 'sam-beta-cdk'. With this installed the API can be started locally with sam-beta-cdk start-api and works well. While I appreciate the answers which suggest that development should be done using purely TDD, I feel there is also value in this more interactive, manual mode, as it permits quicker exploration of the problem space.
I'm trying to build my first app with CDK + TypeScript using API Gateway, Lambdas and DynamoDB. I have built a couple of Lambdas and deployed them and they work fine live on the web. However I don't want a minute-long deploy cycle and the various associated AWS costs as part of my workflow. What I want is to be able to test my API locally.
I have struggled to find docs on how to do this. Amazon seems to recommend using the SAM CLI here, so that is what I've been trying.
The docs claim running sam local xyz runs cdk synth to make a "cloud assembly" in ./aws-sam/build, but I see no evidence of this. Instead what I get is a complaint that sam could not find a 'template.yml'. So I manually run cdk synth > template.yml, which creates one in the root folder. Then I run sam local start-api and it seems happy to start up.
Then I try to hit my test lambda using curl: curl 'http://127.0.0.1:3000/test'. I get {"message":"Internal server error"} and a huge ugly stack trace in the console that is running sam local start-api.
The lambda is this...
exports.handler = async function() {
  console.log("WooHoo! Test handler ran")
  return {statusCode: 200, headers: {"Content-Type": "application/json"}, body: "Test handler ran!"}
}
Start of the huge ugly stack trace...
Mounting /home/user/code/image-cache/asset.beeaa749e012b5921018077f0a5e4fc3ab271ef1c191bd12a82aa9a92148782e as /var/task:ro,delegated inside runtime container
START RequestId: 99f53642-b294-4ce5-a1b4-8c967db80ce1 Version: $LATEST
2021-09-15T12:33:37.086Z undefined ERROR Uncaught Exception {"errorType":"Runtime.ImportModuleError","errorMessage":"Error: Cannot find module 'test'\nRequire stack:\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js","stack":["Runtime.ImportModuleError: Error: Cannot find module 'test'","Require stack:","- /var/runtime/UserFunction.js","- /var/runtime/index.js"," at _loadUserApp (/var/runtime/UserFunction.js:100:13)"," at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)",
The end of the huge ugly stack trace...
Invalid lambda response received: Lambda response must be valid json
So it would seem sam local start-api can't find test and throws an error, which means API Gateway doesn't get a valid 'lambda response'. So far this has not helped me chase down the problem :/ It certainly seems aware that test is a route, as trying to hit other endpoints gives the classic {"message":"Missing Authentication Token"}, but it chokes hard trying to fulfil it despite me having both functions/test.ts and the compiled functions/test.js present.
I have the test route and handler defined in my CDK stack definition like so...
const testLambda = new lambda.Function(this, "testLambdaHandler", {
  runtime: lambda.Runtime.NODEJS_14_X,
  code: lambda.Code.fromAsset("functions"),
  handler: "test.handler"
})

api.root
  .resourceForPath("test")
  .addMethod("GET", new apigateway.LambdaIntegration(testLambda))
I considered posting my template.yml but that is even longer than the big ugly error message so I haven't.
So I have three questions (well actually a million but I don't want to be too cheeky!)
Is this actually the canonical way of locally testing apps made with CDK?
If so, where am I going wrong?
If not, what is the better/proper way?
Lambda handlers are just functions. They do not need any special environment to function - they are called at a specific point in the Lambda invocation process and provided an event (a JSON object) and a context (another JSON object).
You can (and should!) unit test them just like any other individual function in your language/testing framework.
As @Lucasz mentioned, you should rely on the fact that, if set up properly, API Gateway and Lambda will interact the same way every time. Once you have run one end-to-end test and you know that the basics work, any further implementation can be done through unit testing.
There are numerous libraries for mocking AWS service calls in unit testing, and there are plenty of good-practice workarounds for the services that are more difficult to mock (i.e. it's difficult to mock a Lambda call from inside another Lambda - but if you wrap that Lambda call in its own function, you can mock the function itself to return whatever you want it to - and this is good practice for testing as well!)
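A minimal sketch of that wrapping idea with Jest (the module, function and payload names here are hypothetical, not from the question):

// lambdaClient.ts - hypothetical wrapper around the nested Lambda invocation
import { Lambda } from "aws-sdk";
const lambda = new Lambda();
export const invokeWorker = (payload: object) =>
  lambda.invoke({ FunctionName: "worker", Payload: JSON.stringify(payload) }).promise();

// handler.test.ts - the wrapper is auto-mocked, so no real Lambda call happens
import * as lambdaClient from "./lambdaClient";
jest.mock("./lambdaClient");
(lambdaClient.invokeWorker as jest.Mock).mockResolvedValue({ status: "ok" });
// ...now call the handler under test and assert on its behaviour as usual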
Using Jest, in a coded unit test, you can call the Lambda handler, give it a stubbed or mocked event JSON as well as a context JSON (probably just blank, as you're not using it), and the Lambda handler will act just like any other function with two parameters you've ever written, including returning what you want it to return.
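As an illustration, a sketch of such a test for the handler shown in the question (the file name and ts-jest setup are assumptions):

// functions/test.spec.ts - tests the handler from functions/test.ts shown above
import { handler } from "./test";

test("returns a 200 with the expected body", async () => {
  // This particular handler ignores its arguments; for handlers that read the
  // request, you would pass a stubbed API Gateway event and context here instead.
  const response = await handler();
  expect(response.statusCode).toBe(200);
  expect(response.body).toBe("Test handler ran!");
});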
You must be doing something wrong with your file directory. Where is your index.js located? If you generate the template.json, is the directory correct?
Also, in what directory do you execute the sam local command?
The thing with testing your serverless application is that you don't have to test your full application. You can count on AWS that API Gateway, DynamoDB and Lambda are working perfectly.
The only thing you need to test is the logic you implemented.
Here you make sure your function prints out something and returns a 200. That's all you have to do.
Look into 'jest' for testing JS.
If you want to test CDK you should look into https://docs.aws.amazon.com/cdk/latest/guide/testing.html
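To illustrate, a sketch of such a test (this assumes CDK v2 and its aws-cdk-lib/assertions module plus ts-jest; with CDK v1 the equivalent lives in @aws-cdk/assertions, and the stack class name here is hypothetical):

import { App } from "aws-cdk-lib";
import { Template } from "aws-cdk-lib/assertions";
import { MyApiStack } from "../lib/my-api-stack"; // hypothetical stack containing the resources above

test("stack synthesizes the test lambda and a GET method", () => {
  const app = new App();
  const stack = new MyApiStack(app, "TestStack");
  const template = Template.fromStack(stack);

  // The Lambda function is synthesized with the expected runtime and handler
  template.hasResourceProperties("AWS::Lambda::Function", {
    Handler: "test.handler",
    Runtime: "nodejs14.x",
  });

  // API Gateway exposes a GET method
  template.hasResourceProperties("AWS::ApiGateway::Method", {
    HttpMethod: "GET",
  });
});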
Also "running Aws locally" is not good practice. it's never the same as how it's running in real life aka the cloud. You use plugins for this, tools for that... Local is not the same as in the cloud.
If you have any more questions, feel free to ask.

Specify API Gateway id instead of using 'random' id

When deploying an AWS Lambda function (via the Serverless Framework) and exposing it via an HTTPS endpoint in AWS API Gateway... is it possible to construct and set the API Gateway id, and thus determine the first part of the HTTP endpoint for that Lambda function?
When deploying an AWS Lambda function and adding an HTTP event, I now get a random id as the first hostname in https://klv5e3c8z5.execute-api.eu-west-1.amazonaws.com/v1/fizzbuzz. New/fresh deployments receive a new random string, that 10-character id.
Instead of using that, I would like to determine and set that id. (I will make sure that it's sufficiently unique myself, or deal with endpoint naming collisions myself.)
Reason for this is that in a separate Serverless project, I need to use that endpoint (and thus need to know that id). Instead of having it being determined by project 1 and then reading/retrieving that in project 2, I want to construct and set the endpoint in project 1 so that I can use the known endpoint in project 2 as well.
(A suggestion was to use a custom domain as an alternative/alias for that endpoint... but if possible I don't want to introduce a new component into the mix, and a solution that does not include Cloud-it-might-take-30-minutes-to-create-a-domain-Front is better :-) )
If this isn't possible, I might need to use the approach described at http://www.goingserverless.com/blog/api-gateway-url, where the endpoint is exposed from one project via the CloudFormation stack, to be read and used in the other project, but that introduces (a little latency and) a dependency when deploying the second project.
The "first hostname" you want to set is called "REST API id" and is generated by API Gateway when creating the API. The API used to create API's in API Gateway doesn't offer the ability to specify the REST API id, so no, there is no way to specify the id.
The reason for that is probably that these id's are used as part of a public facing domain name. As this domain name doesn't include an identifier for the AWS account it belongs to, the id's have to be globally unique, so AWS generates them to avoid collisions. As AWS puts it (emphasis by me):
For an edge-optimized API, the base URL is of the http[s]://{restapi-id}.execute-api.amazonaws.com/stage format, where {restapi-id} is the API's id value generated by API Gateway. You can assign a custom domain name (for example, apis.example.com) as the API's host name and call the API with a base URL of the https://apis.example.com/myApi format.
For the option to create a custom domain name you should consider that there is even more complexity associated with it, as you must provision a matching SSL certificate for the domain as well. While you can use ACM for that, there is currently the limitation that SSL certificates for CloudFront distributions (which edge-optimized API Gateway APIs use behind the scenes) need to be issued in us-east-1.
The option you already mentioned, exporting the API endpoint in the CloudFormation stack as an output value and using that exported value in your other stack, would work well. As you noted, that creates a dependency between the two stacks, so once you have deployed project 2, which uses the output value from project 1, you can only delete the CloudFormation stack for project 1 after the project 2 stack is either deleted or updated to no longer use the exported value. That can be a feature, but from your description it sounds like it wouldn't be for your use case.
Something similar to exported stack output values would be to use some shared storage instead of making use of CloudFormation's exported output values feature. What comes to mind here is the SSM Parameter Store, which offers some integration with CloudFormation. The integration makes it easy to read a parameter from the SSM Parameter Store in the stack of project 2. For writing the value to the Parameter Store in project 1 you'd need to use a custom resource in your CloudFormation template. There is at least one sample implementation for that available on GitHub.
As you can see there are multiple options available to solve your problem. Which one to choose depends on your project's needs.
Question: "is it possible to construct and set the API Gateway id?"
Answer: No (see the other answer to this question).
I was able to get the service endpoint of project 1 into the serverless.yml file of project 2 though, to finally construct the full URL of the service that I needed. I'm sharing this because it's an alternative solution that also works in my case.
In the serverless.yml of project 2, you can refer to the service endpoint of project 1 via service_url: "${cf:<service-name>-<stage>.ServiceEndpoint}". Example: "${cf:my-first-service-dev.ServiceEndpoint}".
CloudFormation exposes the ServiceEndpoint output that contains the full URL, including the API Gateway REST API id.
More information in Serverless Framework documentation: https://serverless.com/framework/docs/providers/aws/guide/variables/#reference-cloudformation-outputs.
It seems that the Serverless Framework adds this ServiceEndpoint as a stack output.

Is it possible to instruct AWS Custom Authorizers to call AWS Lambdas based on Stage Variables?

I am mapping Lambda Integrations like this on API Gateway:
${stageVariables.ENV_CHAR}-somelambda
So I can have d-somelambda, s-somelambda, etc. Several versions for environments, all simultaneous. This works fine.
BUT, I am using Custom Authorizers, and I have d-authorizer-jwt and d-authorizer-apikey.
When I deploy the API to the DEV stage, it's all OK. But when I deploy to the PROD stage, all Lambda calls dynamically point properly to the p-lambdas, except the custom authorizer, which is still pointing to "d" (DEV) and calling the dev backend for the needed validation (it caches, but sometimes checks the database).
Please note I don't necessarily want to pass the Stage Variables down like others are asking; I just want to call the correct Lambda from a proper configuration, like the Integration Request offers. If I had to fall back on reading Stage Variables to solve this, I would need to change my approach and have a single Lambda for all envs, dynamically hitting the required backend based on Stage Variables... not that good.
Tks
Solved. It works just as I described. There are some caveats:
a) You need to previously grant access to that Lambda
b) You can't test the authorizer due to a UI glitch ... it doesn't ask for the StageVar so you will never reach the Lambda
c) You need to deploy the API to get the Authorizers updated on a particular Stage
I can't tell why it didn't work on my first attempt.
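Regarding caveat (a): the grant is the usual resource-based policy on the authorizer Lambda allowing apigateway.amazonaws.com to invoke it. As a hedged sketch (the function name and statement id are illustrative), something like aws lambda add-permission --function-name p-authorizer-jwt --statement-id apigw-authorizer --action lambda:InvokeFunction --principal apigateway.amazonaws.com, ideally scoped with --source-arn to your API's authorizer ARN, should cover it.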

AWS API Gateway: New Stage does not work

I have created my DEV environment without any problem. It works fine.
But I'm trying to create a QA environment (or any other) and it does not work.
The only difference between the two environments is the variable that refers to the backend (I have tried using the same one and the problem persists).
If I try a method in the different environments by means of the "Test" function, both work. But when I try from Postman, only DEV works. The only error I see in CloudWatch is the following:
Execution failed due to configuration error: Invalid endpoint address.
Any idea? Thanks
The problem was the name of the variables in Stage Variables.
I think the problem you are having is that you need to deploy your stage, i.e.
API -> Resources -> Actions (on root of API) -> Deploy API
Then select the stage you want to deploy, get the new endpoint and test from Postman.