edit: Turns out the solution is in the docs. I had the bog-standard 'sam' installed, but I needed what they call the 'public preview version', AKA 'sam-beta-cdk'. With this installed, the API can be started locally with sam-beta-cdk start-api and works well. While I appreciate the answers suggesting that development should be done purely with TDD, I feel there is also value in this more interactive, manual mode, as it permits quicker exploration of the problem space.
I'm trying to build my first app with CDK + Typescript using API Gateway, Lambdas and DynamoDB. I have built a couple of Lambdas and deployed them and they work fine live on the web. However I don't want a minute long deploy cycle and various associated AWS costs as part of my workflow. What I want is to be able to test my API locally.
I have struggled to find docs on how to do this. Amazon seem to recommend using the SAM CLI here so that is what I've been trying.
The docs claim that running sam local xyz runs cdk synth to make a "cloud assembly" in .aws-sam/build, but I see no evidence of this. Instead, what I get is a complaint that sam could not find a 'template.yml'. So I manually run cdk synth > template.yml, which creates one in the root folder. Then I run sam local start-api and it seems happy to start up.
Then I try to hit my test Lambda using curl: curl 'http://127.0.0.1:3000/test'. I get {"message":"Internal server error"} and a huge, ugly stack trace in the console that is running sam local start-api.
The lambda is this...
exports.handler = async function() {
  console.log("WooHoo! Test handler ran")
  return {statusCode: 200, headers: {"Content-Type": "application/json"}, body: "Test handler ran!"}
}
Start of the huge ugly stack trace...
Mounting /home/user/code/image-cache/asset.beeaa749e012b5921018077f0a5e4fc3ab271ef1c191bd12a82aa9a92148782e as /var/task:ro,delegated inside runtime container
START RequestId: 99f53642-b294-4ce5-a1b4-8c967db80ce1 Version: $LATEST
2021-09-15T12:33:37.086Z undefined ERROR Uncaught Exception {"errorType":"Runtime.ImportModuleError","errorMessage":"Error: Cannot find module 'test'\nRequire stack:\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js","stack":["Runtime.ImportModuleError: Error: Cannot find module 'test'","Require stack:","- /var/runtime/UserFunction.js","- /var/runtime/index.js"," at _loadUserApp (/var/runtime/UserFunction.js:100:13)"," at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)",
The end of the huge ugly stack trace...
Invalid lambda response received: Lambda response must be valid json
So it would seem sam local start-api can't find the test module and throws an error, which means API Gateway doesn't get a valid 'Lambda response'. So far this has not helped me chase down the problem :/ It certainly seems aware that test is a route, as trying to hit other endpoints gives the classic {"message":"Missing Authentication Token"}, but it chokes hard trying to fulfil it, despite me having both functions/test.ts and the compiled functions/test.js present.
I have the test route and handler defined in my CDK stack definition like so...
const testLambda = new lambda.Function(this, "testLambdaHandler", {
  runtime: lambda.Runtime.NODEJS_14_X,
  code: lambda.Code.fromAsset("functions"),
  handler: "test.handler"
})

api.root
  .resourceForPath("test")
  .addMethod("GET", new apigateway.LambdaIntegration(testLambda))
I considered posting my template.yml but that is even longer than the big ugly error message so I haven't.
So I have three questions (well, actually a million, but I don't want to be too cheeky!):
1. Is this actually the canonical way of locally testing apps made with CDK?
2. If so, where am I going wrong?
3. If not, what is the better/proper way?
Lambda handlers are just functions. They do not need any special environment to function - they are called at a specific point in the Lambda invocation process and provided an event (a JSON object) and a context (another JSON object).
You can (and should!) unit test them just like any other individual function in your language/testing framework.
As @Lucasz mentioned, you should rely on the fact that, if set up properly, API Gateway and Lambda will interact the same way every time. Once you have run one end-to-end test and you know that the basics work, any further implementation can be done through unit testing.
There are numerous libraries for mocking AWS service calls in unit testing, and there are plenty of good-practice workarounds for the services that are more difficult to mock (e.g., it's difficult to mock a Lambda call from inside another Lambda - but if you wrap that Lambda call in its own function, you can mock the function itself to return whatever you want it to - and this is good practice for testing as well!)
Using jest, in a coded unit test, you can call the Lambda handler, give it a stubbed or mocked event JSON as well as a context JSON (probably just blank, as you're not using it), and the Lambda handler will act just like any other function with two parameters you've ever written, including returning what you want it to return.
You must be doing something wrong with your file directory. Where is your index.js located? If you generate the template.json, is the directory correct?
Also, in what directory do you execute the sam local command?
The thing with testing your serverless application is that you don't have to test your full application. You can count on AWS that API Gateway, DynamoDB and Lambda are working perfectly.
The only thing you need to test is the logic you implemented.
In your case, you make sure your function logs something and returns a 200. That's all you have to do.
Look into 'jest' for testing JS.
If you want to test CDK itself, you should look into https://docs.aws.amazon.com/cdk/latest/guide/testing.html
Also, "running AWS locally" is not good practice. It's never the same as how it runs in real life, aka the cloud. You use plugins for this, tools for that... local is not the same as the cloud.
If you have any more questions, feel free to ask.
Related
I am looking for a language/framework or a method by which I can build API/web application code such that it can run on serverless compute like AWS Lambda, and the same code also runs on a dedicated compute system like Lightsail or EC2.
First I thought of using Docker to do this, but the AWS Lambda entry point is a specific function signature, which is very different from Spring controllers. Is there a solution available currently?
So basically, when I run it in Lambda it will have the cold start issue; later, when the app is ready or gets popular, I would like to move it to an EC2 instance for better performance and higher traffic load.
I want to start right in this situation, so that later it is easy to port and resolve the performance issues.
I'd say: no, this is not easily possible.
When you are building an API that you'd want to run on Lambdas, you will most likely be using an API Gateway, which takes care of your routing to different Lambda functions (best practice). So the moment you would be working on an API like this, migrating to EC2 would be a nightmare, as you would need to rebuild the whole application into more of a monolith that could run on EC2.
I would honestly commit to either running it on EC2/containers or running it on Lambda. If cold start is your main issue with Lambdas, you might want to look into Lambda SnapStart for Java, or use another language like TypeScript/Python.
After some better keywords in Google I finally got what I was looking for. Check out this blog and code library shared by AWS, which helps you convert the Lambda request and response into the HTTP request/response format the framework expects:
Running APIs Written in Java on AWS Lambda: https://aws.amazon.com/blogs/opensource/java-apis-aws-lambda/
Repo Code: https://github.com/awslabs/aws-serverless-java-container
Thanks Ricardo for your response - I will definitely check out Lambda SnapStart and try it as well. I have not tested this completely, but it looks promising to some extent.
In my app, I was able to track all the Lambda, API Gateway and DynamoDB requests through AWS X-Ray.
I am doing the same as the answer in this question:
Adding XRAY Tracing to non-rest functions e.g., SQS, Cognito Triggers etc
However, what about the case of S3, SQS or other services/non-REST triggers?
I saw some old code that does not even use aws-sdk; the dependencies are imported directly, like:
import {S3Client, S3Address, RoleService, SQSService} from '#sws/aws-bridge';
So, in these cases, how do I integrate/activate AWS X-Ray?
Thank you very much in advance!
Cheers,
Marcelo
At the moment, Lambda doesn't support continuing traces from triggers other than REST APIs or direct invocation:
The upstream service can be an instrumented web application or another Lambda function. Your service can invoke the function directly with an instrumented AWS SDK client, or by calling an API Gateway API with an instrumented HTTP client.
https://docs.aws.amazon.com/xray/latest/devguide/xray-services-lambda.html
In every other case it will create its own, new Trace ID and use that instead.
You can work around this yourself by creating a new X-Ray segment inside the Lambda function and using the incoming trace ID from the event. This will result in two segments for your Lambda invocation: one that Lambda itself creates, and one that you create to extend the existing trace. Whether that's acceptable for your use case is something you'll have to decide for yourself!
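As a rough sketch of that workaround in Node: the upstream trace arrives as a header string of the form "Root=...;Parent=...;Sampled=..." (for SQS it is the AWSTraceHeader message attribute), which you can parse and then feed into a manually created segment. The parsing below is self-contained; the segment-creation step is only indicated in comments because the exact aws-xray-sdk API you need depends on your SDK version:

```javascript
// Parse an X-Ray trace header of the form "Root=...;Parent=...;Sampled=..."
// into its parts so a new segment can be attached to the upstream trace.
function parseTraceHeader(header) {
  const parts = {};
  for (const pair of header.split(";")) {
    const [key, value] = pair.split("=");
    if (key && value) parts[key.trim()] = value.trim();
  }
  return parts;
}

// In an SQS-triggered handler the header arrives on each record, e.g.:
//   const header = event.Records[0].attributes.AWSTraceHeader;
//   const { Root, Parent } = parseTraceHeader(header);
// With aws-xray-sdk-core you could then open a segment that continues the
// trace (roughly: new Segment("my-handler", Root, Parent)) - check the SDK
// docs for the exact call in your version.

module.exports = { parseTraceHeader };
```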
If you're working with Python you can do it with aws-xray-lambda-segment-shim.
If you're working with NodeJS you can follow this guide on dev.to.
If you're working with .NET there are some examples on this GitHub issue.
I want to make a bot that makes other bots on the Telegram platform. I want to use AWS infrastructure; it looks like their Lambda functions are a perfect fit, since you pay for them only when they are active. In my concept, each bot equals one Lambda function, and they all share the same codebase.
At the starting point, I thought of making each new Lambda function programmatically, but I think this will bring me problems later, like needing to attach many services programmatically via the AWS SDK: API Gateway, DynamoDB. But the main problem: how will I update the codebase for these 1000+ functions later? I think a bash script is a bad idea here.
So I moved forward and found SAM (AWS Serverless Application Model) and CloudFormation, which should help me, I guess. But I can't understand the concept. I can make a stack with all the required resources, but how will I make new bots from this one stack? Or should I build a template and make new stacks for each new bot programmatically via the AWS SDK from this template?
Next, how do I update them later? For example, I want to update all bots that have version 1.1 to version 1.2. How will I replace them? Should I make a new stack, or can I update the older ones? I don't see any options in the CloudFormation UI or any related methods in the AWS SDK for that.
Thanks
But the main problem: how will I update the codebase for these 1000+ functions later?
You don't. You use a Lambda alias. This allows you to fully decouple your Lambda versions from your clients. This works because you use an alias of your function in your clients' code (or API Gateway). The alias is fixed and does not change.
However, an alias is like a pointer - it can point to different versions of your Lambda function. Therefore, when you publish a new Lambda version, you just point the alias to it. It's fully transparent to your clients, and their alias does not require any change.
I agree with @Marcin. Also, it would be worth checking out the Serverless Framework. It seems like you are still experimenting, so most likely you are deploying using bash scripts with AWS SDK/SAM commands. This is fine, but once you start getting the gist of what your architecture looks like, I think you will appreciate what Serverless can offer. You can deploy/tear down CloudFormation stacks in a matter of seconds. Also, you can use serverless-offline so that you can have a local build of your AWS Lambda architecture on your local machine.
All this has saved me hours of grunt work.
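To give a feel for the Serverless Framework mentioned above, a minimal serverless.yml might look roughly like this. All names and paths here are illustrative, not from the question:

```yaml
# serverless.yml - one service describing the whole stack
service: my-bots                     # hypothetical service name

provider:
  name: aws
  runtime: nodejs14.x

functions:
  bot:
    handler: functions/bot.handler   # hypothetical handler path
    events:
      - http:                        # API Gateway route wired to the function
          path: /bot
          method: post

plugins:
  - serverless-offline               # enables local emulation of the stack
```

With this in place, `serverless deploy` creates the CloudFormation stack and `serverless offline` runs the API locally.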
I have a Flask app running as an AWS Lambda function, deployed with Zappa, and I would like to activate X-Ray to get more information about the different functions.
Activating X-Ray with Zappa was easy enough - it only requires adding this line in the zappa_settings.json:
"xray_tracing": true
Further, I installed the AWS X-Ray Python SDK and added a few decorators to some functions, like this:
@xray_recorder.capture()
When I deploy this as a Lambda function, it all works well. The problem is using the system locally, both when running tests and when running Flask on a local server instead of as a Lambda function.
When I use any of the decorated functions, either in a test or through the local server, the following exception is thrown:
aws_xray_sdk.core.exceptions.exceptions.SegmentNotFoundException: cannot find the current segment/subsegment, please make sure you have a segment open
Which of course makes sense, because AWS Lambda handles the creation of segments.
Are there any good ways to deactivate capturing locally? This would be useful e.g. for running unit tests locally on functions that I would like to watch in X-Ray.
One of the feature requests for this SDK is to have a "disabled" global flag so everything becomes a no-op: https://github.com/aws/aws-xray-sdk-python/issues/26.
However, it still depends on what you are testing against. It's good practice to test what will actually run on Lambda. You can set some environment variables so the SDK thinks it is running on Lambda.
You can see the SDK is looking for two env vars: https://github.com/aws/aws-xray-sdk-python/blob/master/aws_xray_sdk/core/lambda_launcher.py. One is LAMBDA_TASK_ROOT, which, when set, tells it to switch to Lambda mode. The other is _X_AMZN_TRACE_ID, which contains the tracing context normally passed by the Lambda container.
If you just want to test non-X-Ray code, you can set AWS_XRAY_CONTEXT_MISSING to LOG_ERROR so the SDK doesn't complain about missing context and simply gives up capturing wrapped functions. This exercises much less of the code path than mimicking Lambda behaviour. Ideally, it would be better for the Lambda local testing tool to be X-Ray friendly. Are you using https://github.com/awslabs/aws-sam-cli? There is already an open issue for this feature: https://github.com/awslabs/aws-sam-cli/issues/217
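Concretely, the two approaches above boil down to setting environment variables before running your tests or local server. The trace ID value here is illustrative only (it follows the documented Root=...;Parent=...;Sampled=... format):

```shell
# Option 1: make the X-Ray SDK behave as it does on Lambda.
export LAMBDA_TASK_ROOT=/var/task
export _X_AMZN_TRACE_ID="Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1"

# Option 2: just stop the SDK raising on missing context.
export AWS_XRAY_CONTEXT_MISSING=LOG_ERROR
```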
I am using AWS API Gateway with Lambda/ECS for compute and Cognito for users, but I find it really hard to test such applications. With AWS SAM Local I may be able to test simple Lambda and API Gateway functionality, but if I use things like API Gateway authorizers, I find it hard to test these end to end.
It looks like, to test such applications, I need an entirely new setup just for testing? I mean, a separate API Gateway with a Lambda/ECS cluster/Cognito user pool just to enable testing? This seems very slow, and I think I will not be able to get things like a code coverage report anymore?
Disclaimer: I'm fairly new to AWS Lambda/ECS/Cognito so take this with a grain of salt.
Unit tests: SAM Local or some other local Docker hosting with a unit testing library (mocha) would be good for this because of:
Speed. All your tests should execute quickly against a Lambda function.
Example: wildrydes with mocha
Integration tests: Once you stage your changes, there are a bunch of options for calling the API. I'd start off with Postman to run the API tests, and you can chain them together or run them from the command line if needed.
End-to-end (E2E) tests: If the API is your front end, then there might not be any difference between E2E and API tests. UI, voice and chat front ends differ significantly, as do the options, so I'll suggest some:
UI: Selenium (has the most support and options available to you, including Docker images: Selenium Hub or standalone)
Voice: Suggestions?
Text: Suggestions?
Step Functions:
allows you to visualize each step
retries when there are errors
allows you to diagnose and debug problems
X-Ray: collects data about requests that your app serves and provides tools you can use to view, filter and gain insights into that data
As for code coverage, I'm not sure how you currently measure it. Something like npm run coverage, maybe?
I am assuming you are using CloudFormation for deploying such an extensive stack, and the following answer is based on that assumption.
Thus, in addition to @lloyd's answer, I would like to add that you can add custom resources within your CloudFormation template for testing each individual Lambda or even API endpoints.
Also, for Lambda, you can use deployment preference hooks to test your serverless Lambdas before and after shifting traffic to the new version.
https://github.com/awslabs/serverless-application-model/blob/release/v1.8.0/docs/safe_lambda_deployments.rst