I want to test the resolution of variables inside my serverless.yaml file; e.g. some come from the command line, some from a file, and others from S3.
e.g.
environment:
  whitelist: ${file(config/forwardproxy.sit.yaml):Common.defaultWhitelist}
I want to do a deploy with a dry run. The --nodeploy option only seems to be available with the Azure provider.
Is there a way to do this with the AWS provider?
I use serverless package to test variable resolution, syntax checking, plugin configuration, etc. If you have no use for the zip it creates, you can just trash it afterwards.
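For example (the stage name and package directory here are illustrative):
serverless package --stage sit --package .serverless-dry-run
The resolved CloudFormation template and serverless state land in that directory, so you can inspect exactly how each variable was resolved.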
You can use the serverless-offline plugin to run lambdas (if that's what you are trying to do).
Usage: $ serverless offline
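For that to work, the plugin has to be declared in serverless.yml:
plugins:
  - serverless-offline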
The problem
I'm getting started with AWS, and the first test project will be a website, but I'm struggling with how to approach the resources and the tools to accomplish this.
AWS documentation is not really beginner-friendly, so to me it feels like being punched in the face at my first boxing session.
First attempt
I've installed both the AWS and SAM CLI tools, so what I would expect is to be able to create an empty stack at first and add resources one by one as the specifications are given/outlined. Instead, what I see is that I need to give the tool a template to create the new stack, which means I need to know how to write one beforehand, and therefore the template specifications for each resource type.
Second attempt
This led me to create the stack and the related resources from the online console to get the final stack template. But then I need to test every new or updated resource locally, so I have to copy the template from the online console to my machine and run the CLI tools against it, which is obviously not the desired development flow.
What I expected
Coming from a standard/classical web development, I would expect to be able to create the project locally, test the related resources locally, version it, and delegate the deployment to the pipeline.
So what?
All this made me understand that "probably" I'm missing something about how to use the AWS CLI tools and how development for an AWS-hosted application is meant to be done.
I'm not looking for a guide on specific resource types, like every single tutorial I've found online, but something at a higher level on how to handle project development on AWS, best practices, and things like that. I can then dig deeper into any resource later when needed.
AWS's Cloud Development Kit ticks the boxes on your specific criteria.
Caveat: the CDK has a learning curve in line with its power and flexibility. There are much easier ways to deploy a web app on AWS, like the higher-level AWS Amplify framework, with abstractions tailored to front-end devs who want to minimise the mental energy spent on the underlying infrastructure.
Each of the squillion AWS and 3rd Party deploy tools is great for somebody. Nevertheless, looking at your explicit requirements in "What I expected", we can get close to the CDK as an objective answer:
Coming from a standard/classical web development
So you know JS/Python. With the CDK, you code infrastructure as functions and classes rather than 500 lines of YAML as with SAM. The CDK's reference implementation is in TypeScript, and JS/Python are also supported. There are step-by-step AWS online workshops for these and the other supported languages.
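As an illustration, here is a minimal sketch of a CDK app in Python (assuming CDK v2 / aws-cdk-lib; the bucket and names are placeholders, not a prescription for your site):
# app.py - one stack containing one S3 bucket, defined as ordinary Python
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_s3 as s3
from constructs import Construct

class WebsiteStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # A single construct call stands in for the equivalent CloudFormation YAML
        s3.Bucket(self, "SiteBucket",
                  website_index_document="index.html",
                  removal_policy=RemovalPolicy.DESTROY)

app = App()
WebsiteStack(app, "WebsiteStack")
app.synth()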
create the project locally
Most of your work will be done locally in your language of choice, with a cdk deploy CLI command to bundle the deployment artefacts and send them up to the cloud.
test the related resources locally
The CDK has built-in testing and assertion support.
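For example, a unit test against the synthesized template might look like this (a sketch; WebsiteStack and the module name app are the hypothetical ones from the sketch above):
# test_stack.py - assert on the CloudFormation the stack synthesizes to
from aws_cdk import App
from aws_cdk.assertions import Template
from app import WebsiteStack  # hypothetical module from the sketch above

def test_site_bucket_created():
    template = Template.from_stack(WebsiteStack(App(), "TestStack"))
    template.resource_count_is("AWS::S3::Bucket", 1)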
version it
"Deterministic deploy" is a CDK design goal. Commit your code and the generated deployment artefacts so you have change control over your infrastructure.
delegate the deployment to the pipeline
The CDK has good pipeline support: i.e. a push to the remote main branch can kick off a deploy.
AWS SAM is actually a good option if you are just trying to get your feet wet with AWS. SAM is an open-source wrapper around the aws-cli that lets you create AWS resources like Lambda in, say, ~10 lines of code vs ~100 lines if you were to use the aws-cli directly. Yes, you'll need to learn SAM-specific things like the SAM template and the SAM CLI, but it is pretty straightforward using this doc.
Once you get the hang of it, it becomes easier to look under the hood at what SAM is doing and get into the weeds with the aws-cli if you want. That will then allow you to build out custom solutions (using the aws-cli) for complex use cases that SAM may not support. Caveat: SAM is still pretty new and has open issues that could be a blocker for advanced features/complex use cases.
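For a sense of scale, a complete SAM template for one Lambda function can be roughly this short (a sketch; the handler, runtime, and paths are illustrative):
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      CodeUri: src/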
AWS Step Functions may be run in a local Docker environment using Step Functions Local Docker. However, the step functions need to be defined using the JSON-based Amazon States Language. This is not at all convenient if your AWS infrastructure (Step Functions plus lambdas) is defined using AWS CDK/CloudFormation.
Is there a way to create the Amazon States Language definition of a state machine from the CDK or CloudFormation output, such that it’s possible to run the step functions locally?
My development cycle is currently taking me 30 minutes to build/deploy/run my Lambda-based step functions in AWS in order to test them and there must surely be a better/faster way of testing them than this.
We have been able to achieve this by the following:
Download Step Functions Local from:
https://docs.aws.amazon.com/step-functions/latest/dg/sfn-local.html
To run Step Functions Local, run this in the directory where you extracted the files:
java -jar StepFunctionsLocal.jar --lambda-endpoint http://localhost:3003
To create a state machine, you need a JSON definition. It can be pulled from the generated template, or you can get the AWS Toolkit plugin for VS Code, type "step functions", and select from a template as your starter. You can also get it from the Definition tab on the state machine in the AWS console.
Run this command in the same directory as the definition JSON:
aws stepfunctions --endpoint http://localhost:8083 create-state-machine --definition file://step-function.json --name "local-state-machine" --role-arn "arn:aws:iam::012345678901:role/DummyRole"
You should be able to hit the SF now (hopefully) :)
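Once created, you can exercise it against the local endpoint; use the state-machine ARN printed by the create-state-machine call above (the one below is illustrative, following the shape Step Functions Local returns with its dummy account ID):
aws stepfunctions --endpoint http://localhost:8083 start-execution --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:local-state-machine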
You can use cdk watch or the --hotswap option to deploy your updated state machine or Lambda functions without a CloudFormation deployment.
https://aws.amazon.com/blogs/developer/increasing-development-speed-with-cdk-watch/
If you want to test with Step Functions local, cdk synth generates the CloudFormation code containing the state machine's ASL JSON definition. If you get that and replace the CloudFormation references and intrinsic functions, you can use it to create and execute the state machine in Step Functions Local.
How some people have automated this:
https://nathanagez.com/blog/mocking-service-integration-step-functions-local-cdk/
https://github.com/kenfdev/step-functions-testing
Another solution that might help is LocalStack, which supports many tools such as the CDK and CloudFormation and lets developers run the stack locally.
There are a variety of ways to run it; one of them is to run it manually in a Docker container, following the Get Started instructions.
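For example (a sketch; 4566 is LocalStack's edge port, per its docs):
docker run --rm -it -p 4566:4566 localstack/localstack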
Next, following the What's Next section of the instructions, configure the aws-cli or use awslocal.
All subsequent steps and templates should be the same as for the AWS API in the cloud.
We would like to be able to understand the version of our software that is currently deployed to a particular AWS lambda. What are the best practices for tracking a code commit hash to an AWS lambda? We've looked at AWS tagging and we've also looked at AWS Lambda Aliasing but neither approach seems convenient or user-friendly. Are there other services that AWS provides?
Thanks
Without context and a better understanding of your use case around the commit hash, it's difficult to give a directly useful answer, and as other answers have shown, there are many ways you could accomplish this. That being said, the commit hash of particular code is ultimately metadata, and AWS has a solution for dealing with resource metadata: tags.
Best practice is to tag your resources with metadata. Almost all, if not all, AWS resources (including Lambda) support tags. As stated in the AWS documentation “tagging allows you to quickly search, filter, and manage resources” and subject to AWS limits your tags can be pretty much any key-value pair you like, including “commit: hash”.
The tagging strategy here would be to assign the commit hash to a tag, “commit” with the value “e63abe27”. You can tag resources manually through the console or you can apply tags as part of your build process.
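Applied from a build script, that is a single CLI call (the ARN and hash are illustrative):
aws lambda tag-resource --resource arn:aws:lambda:us-east-1:123456789012:function:myFunction --tags commit=e63abe27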
Once tagged, at a high level, you would then be able to identify which commit is being used by listing the tags for the lambda in question. The CLI command would be something like:
aws lambda list-tags --resource arn:aws:lambda:us-east-1:123456789012:function:myFunction
You can learn more about tags and tagging strategy by reviewing the AWS docs here and you can download the Tagging Best Practice whitepaper here.
One alternative could be to generate a file with the Git SHA as part of your build system that is packaged together with the other files in the build artifact. The following script generates a gitSha.json file in the ${outputDir}:
#!/usr/bin/env bash
gitSha=$(git rev-parse --short HEAD)
printf "{\"gitSha\": \"%s\"}" "${gitSha}" > "${outputDir}/gitSha.json"
Consequently, the gitSha.json may look like:
{"gitSha": "5c805638"}
This file can then be accessed by downloading the package. Alternatively, you can create a function that inspects the file at runtime and returns its value to the caller, writes it to a log, or similar, depending on your use case.
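A minimal sketch of such a function in Python (assuming the file is bundled at the package root under the name produced by the script above):
import json

def handler(event, context):
    # Return the commit SHA that was bundled into this deployment artifact
    with open("gitSha.json") as f:
        return json.load(f)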
This script was implemented using bash and git rev-parse, but you can use any scripting language in combination with a Git library that you are comfortable with.
The best way to version a Lambda is to use the Create Version option and add these versions to an Alias.
We use this mechanism extensively for mapping a single AWS Lambda into different API Gateway endpoints. We use the environment variables provided by AWS Lambda to move all configuration outside the Lambda function. In each version of the Lambda, we change these environment variables as required and create a new version. This version can be mapped to an alias, which helps keep the API Gateway or the integration points intact (without any change to the integration).
If you are using the Serverless Framework, try this in serverless.yml:
provider:
  versionFunctions: true
and
functions:
  MyFunction:
    tags:
      commit: e63abe27 # e.g. the Git commit hash
Or try Serverless plugins; one of them:
serverless plugin install --name serverless-version-tracker
It uses Git tags as versions; you then need to manage Git tags.
I have a Flask app running as an AWS Lambda Function deployed with Zappa and would like to activate X-Ray to get more information for the different functions.
Activating X-Ray with Zappa was easy enough - it only requires adding this line in the zappa-settings.json:
"xray_tracing": true
Further, I installed the AWS X-Ray Python SDK and added a few decorators to some functions, like this:
@xray_recorder.capture()
When I deploy this as a Lambda function, it all works well. The problem is using the system locally, both when running tests and when running Flask on a local server instead of as a Lambda function.
When I use any of the functions that are decorated either in a test or through the local server, the following exception is thrown:
aws_xray_sdk.core.exceptions.exceptions.SegmentNotFoundException: cannot find the current segment/subsegment, please make sure you have a segment open
Which of course makes sense, because AWS Lambda handles the creation of segments.
Are there any good ways to deactivate capturing locally? This would be useful e.g. for running unit tests locally on functions that I would like to watch in X-Ray.
One of the feature requests for this SDK is a "disabled" global flag so everything becomes a no-op: https://github.com/aws/aws-xray-sdk-python/issues/26.
However, it still depends on what you are testing against. It's good practice to test what actually will be run on Lambda. You can set some environment variables so the SDK thinks it is running on Lambda.
You can see the SDK is looking for two env vars: https://github.com/aws/aws-xray-sdk-python/blob/master/aws_xray_sdk/core/lambda_launcher.py. One is LAMBDA_TASK_ROOT, which must be set so the SDK knows to switch to lambda-mode. The other is _X_AMZN_TRACE_ID, which contains the tracing context normally passed by the Lambda container.
If you just want to test non-X-Ray code, you can set AWS_XRAY_CONTEXT_MISSING to LOG_ERROR so the SDK doesn't complain about missing context and simply gives up capturing wrapped functions. This exercises much less of the code path than mimicking Lambda behavior. Ideally, the Lambda local testing tool would be X-Ray friendly. Are you using https://github.com/awslabs/aws-sam-cli? There is already an open issue for this feature: https://github.com/awslabs/aws-sam-cli/issues/217.
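Both approaches boil down to environment variables; a sketch (the trace ID is an illustrative placeholder in the documented format):
# Option 1: make the SDK behave as if it were running on Lambda
export LAMBDA_TASK_ROOT=/var/task
export _X_AMZN_TRACE_ID="Root=1-5e1b4151-5ac6c58f1234567890abcdef;Sampled=1"
# Option 2: only log (instead of raising) when no segment is open
export AWS_XRAY_CONTEXT_MISSING=LOG_ERROR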
I have a Serverless Framework service with (say) five AWS Lambda functions written in Python. Using GitHub, I have created a CodePipeline for CI/CD.
When I push code changes, it deploys all the functions, even if only one function has changed.
I want to avoid deploying all the functions; the CI/CD should determine the changed function and deploy only it. The rest of the functions should not be deployed again.
Moreover, is there any way to deal with such problems using AWS SAM? At this stage I still have the option of switching to SAM and quitting the Serverless Framework.
Unfortunately, there is no "native" way to do it. You would need to write a bash script that loops through the changed files and calls sls deploy function -s production -f <functionName> for each one of them.
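A hedged sketch of such a script (it assumes each function lives in its own directory under functions/ and the directory name matches the function name in serverless.yml):
# Deploy only the functions whose source changed in the last commit
for fn in $(git diff --name-only HEAD~1 -- functions/ | cut -d/ -f2 | sort -u); do
  sls deploy function -s production -f "$fn"
done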
I also faced this issue, and eventually it drove me to create an alternative.
Rocketsam takes advantage of sam local to allow deploying only changed functions instead of the entire microservice.
It also supports other cool features such as:
Fetching live logs for each function
Sharing code between functions
Template per function instead of one big template file
Hope it solves your issue :)