Is there a way of running AWS Step Functions locally when defined by CDK? - amazon-web-services

AWS Step Functions may be run in a local Docker environment using Step Functions Local Docker. However, the state machines need to be defined using the JSON-based Amazon States Language. This is not at all convenient if your AWS infrastructure (Step Functions plus Lambdas) is defined using AWS CDK/CloudFormation.
Is there a way to create the Amazon States Language definition of a state machine from the CDK or CloudFormation output, such that it’s possible to run the step functions locally?
My development cycle currently takes 30 minutes to build, deploy, and run my Lambda-based step functions in AWS in order to test them. There must surely be a better/faster way of testing them than this.

We have been able to achieve this as follows:
Download Step Functions Local:
https://docs.aws.amazon.com/step-functions/latest/dg/sfn-local.html
To run Step Functions Local, run the following in the directory where you extracted the files:
java -jar StepFunctionsLocal.jar --lambda-endpoint http://localhost:3003
To create a state machine, you need a JSON definition. It can be pulled from the generated CloudFormation template, or you can install the AWS Toolkit plug-in for VS Code, type "step functions", and select from a template as your starter. You can also get it from the Definition tab of the state machine in the AWS console.
Run this command in the same directory as the definition json:
aws stepfunctions --endpoint-url http://localhost:8083 create-state-machine --definition "$(cat step-function.json)" --name "local-state-machine" --role-arn "arn:aws:iam::012345678901:role/DummyRole"
You should be able to hit the SF now (hopefully) :)
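You can then start an execution against the local endpoint. A hedged example (the ARN is the one Step Functions Local typically reports back from create-state-machine; yours may differ):
aws stepfunctions --endpoint-url http://localhost:8083 start-execution --state-machine-arn "arn:aws:states:us-east-1:123456789012:stateMachine:local-state-machine" --input '{}'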

You can use cdk watch or the --hotswap option to deploy your updated state machine or Lambda functions without a CloudFormation deployment.
https://aws.amazon.com/blogs/developer/increasing-development-speed-with-cdk-watch/
If you want to test with Step Functions Local, cdk synth generates the CloudFormation template containing the state machine's ASL JSON definition. If you extract that and replace the CloudFormation references and intrinsic functions, you can use it to create and execute the state machine in Step Functions Local.
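As a rough illustration, here is a minimal Python sketch of that extraction step (the template path, logical IDs, and replacement ARNs are assumptions, and only Ref, Fn::GetAtt and Fn::Join are handled):

import json

def resolve(node, replacements):
    # Crudely substitute CloudFormation intrinsics with supplied strings.
    if isinstance(node, dict):
        if len(node) == 1 and "Ref" in node:
            return replacements.get(node["Ref"], node["Ref"])
        if len(node) == 1 and "Fn::GetAtt" in node:
            key = ".".join(node["Fn::GetAtt"])
            return replacements.get(key, key)
        if len(node) == 1 and "Fn::Join" in node:
            sep, parts = node["Fn::Join"]
            return sep.join(resolve(p, replacements) for p in parts)
        return {k: resolve(v, replacements) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve(v, replacements) for v in node]
    return node

with open("cdk.out/MyStack.template.json") as f:  # assumed path
    template = json.load(f)

replacements = {  # map logical IDs / GetAtt keys to local Lambda ARNs
    "MyLambdaFunction.Arn": "arn:aws:lambda:us-east-1:123456789012:function:MyLambda",
}

for logical_id, resource in template["Resources"].items():
    if resource["Type"] == "AWS::StepFunctions::StateMachine":
        print(resolve(resource["Properties"]["DefinitionString"], replacements))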
How some people have automated this:
https://nathanagez.com/blog/mocking-service-integration-step-functions-local-cdk/
https://github.com/kenfdev/step-functions-testing

Another solution that might help is to use LocalStack, which supports many tools such as CDK and CloudFormation and lets developers run the stack locally.
There are a variety of ways to run it; one of them is to run it manually in a Docker container, following the Get Started instructions.
Next, following the What's Next instructions, configure the AWS CLI or use awslocal.
All subsequent steps and templates should be the same as for the AWS API in the cloud.
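For example (a hedged sketch; 4566 is LocalStack's default edge port):
docker run --rm -it -p 4566:4566 localstack/localstack
aws --endpoint-url=http://localhost:4566 stepfunctions list-state-machines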

Related

Can an Amazon States Language JSON file be generated from a CDK Step Functions TS file

If I have a CDK *.ts file that defines my AWS Step Functions, is it possible to generate an Amazon States Language asl.json file that I can use to visualize that step function process (using the AWS Toolkit for VS Code)?
I took a look at: Is there a way of running AWS Step Functions locally when defined by CDK?, Is there a way to create step functions graph using CDK?, and the AWS CDK for Step Functions: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-stepfunctions-readme.html but none of those resources describes a process to generate that asl.json file. The AWS Step Functions module has an import; what I am looking for is essentially the reverse, an export.
The AWS Toolkit for Visual Studio Code supports visualization of CDK state machines. Run cdk synth, and the state machine resource will appear in the CDK Explorer. Right-click on it to display the graph.
This may not be exactly what you're looking for, but check out this Question and Answer.
From that you should be able to get the Amazon States Language JSON of the state machine defined in your CDK *.ts file and do whatever you want with it upon deployment.
If you just want to see the JSON in order to visualize it, though, you can always go to the AWS Console > Step Functions > [Your State Machine] > Definition tab to see the asl.json.
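If you would rather script the export, here is a minimal hedged sketch using the CDK Python bindings (the same constructs exist in TypeScript; the construct names are invented, and stack.resolve() leaves unresolved references as CloudFormation intrinsics rather than real ARNs):

import json
import aws_cdk as cdk
from aws_cdk import aws_stepfunctions as sfn

app = cdk.App()
stack = cdk.Stack(app, "DemoStack")

# A toy two-state machine standing in for your real definition.
definition = sfn.Pass(stack, "Hello").next(sfn.Succeed(stack, "Done"))
state_machine = sfn.StateMachine(stack, "Demo", definition=definition)

# The underlying L1 construct holds the rendered ASL as a token;
# stack.resolve() turns the token into plain JSON.
cfn_sm = state_machine.node.default_child
print(json.dumps(stack.resolve(cfn_sm.definition_string), indent=2))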

Is there a way to do a dry run with serverless CLI?

I want to test the resolution of variables inside my serverless.yaml file, e.g. some come from the command line, some from a file, and others from S3.
e.g.
environment:
  whitelist: ${file(config/forwardproxy.sit.yaml):Common.defaultWhitelist}
I want to do a deploy as a dry run. The --nodeploy option only seems to be available with the Azure provider.
Is there a way to do this with the AWS provider?
I use serverless package for exactly this purpose: testing variable resolution, syntax checking, plugin configuration, etc. If you have no use for the zip it creates, you can just trash it afterwards.
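For example (the stage name is an assumption; the AWS provider writes the compiled template under .serverless/, where you can inspect the resolved values):
serverless package --stage sit
cat .serverless/cloudformation-template-update-stack.json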
You can use plugin serverless-offline to run lambdas (if that's what you are trying to do).
Usage: $ serverless offline

Deploy Lambda Functions in or via Bamboo

How can we deploy a set of Lambda functions using the Serverless stack via Bamboo? Is there any way to write scripts/tasks to deploy lambda functions or applications in Bamboo?
You can use the AWS Lambda Function task in Bamboo.
A description of how to use the task is here:
https://utoolity.atlassian.net/wiki/spaces/TAWS/pages/50004002/Using+the+AWS+Lambda+Function+task+in+Bamboo
A description of deploying the lambda function is here:
https://utoolity.atlassian.net/wiki/spaces/TAWS/pages/50003999/Deploying+to+AWS+Lambda
If the Lambda is written in Node.js, you can directly add a Node.js task and include your script files, or even write the script inline in the task. If it is written in C#, you can use MSBuild tasks to deploy it. I have used both. You can find the Node.js and MSBuild tasks in the Atlassian Marketplace. If neither is suitable, you can write your own task in Java and deploy with that.
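If you end up scripting it yourself, the deployment step can be as small as this hedged sketch (the function name and zip path are assumptions):

import boto3

# Upload a zip built by an earlier Bamboo task to an existing function.
client = boto3.client("lambda")
with open("dist/my-func.zip", "rb") as f:
    client.update_function_code(FunctionName="my-func", ZipFile=f.read())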

How to avoid deployment of all five functions in a Serverless Framework service if only one function is changed

I have a Serverless Framework service with (say) five AWS Lambda functions written in Python. Using GitHub, I have created a CodePipeline for CI/CD.
When I push code changes, it deploys all the functions, even if only one function has changed.
I want to avoid the deployment of all functions: the CI/CD should determine the changed function and deploy only that one. The rest of the functions should not be deployed again.
Moreover, is there any way to deal with such problems using AWS SAM? At this stage I still have the option of quitting the Serverless Framework and switching to SAM.
Unfortunately there is no "native" way to do it. You would need to write a script that loops through the changed files and calls sls deploy -s production -f <function name> for each one of them, for example like the sketch below.
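A minimal hedged sketch of such a script, in Python for readability (it assumes each function's code lives under functions/<name>/ and that the last commit is the unit of change):

import subprocess

# Files touched by the most recent commit.
changed = subprocess.check_output(
    ["git", "diff", "--name-only", "HEAD~1", "HEAD"], text=True
).splitlines()

# Derive the set of function names from the changed paths.
functions = {path.split("/")[1] for path in changed if path.startswith("functions/")}

for name in sorted(functions):
    # Deploy a single function's code without a full stack deployment.
    subprocess.check_call(["sls", "deploy", "-s", "production", "-f", name])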
I also faced this issue, and eventually it drove me to create an alternative.
Rocketsam takes advantage of sam local to allow deploying only changed functions instead of the entire microservice.
It also supports other cool features such as:
Fetching live logs for each function
Sharing code between functions
Template per function instead of one big template file
Hope it solves your issue :)

Is it possible to directly call docker run from AWS Lambda

I have a Java standalone application which I have dockerized. I want to run this container every time an object is put into S3 storage. One way is to do it via AWS Batch, which I am trying to avoid.
Is there a direct and easy way to call docker run from a Lambda?
Yes and no.
What you can't do is execute docker run to run a container within the context of the Lambda call. But you can trigger a task on ECS to be executed. For this to work, you need to have a cluster set up on ECS, which means you need to pay for at least one EC2 instance. Because of that, it might be better to not use Docker, but I know too little about your application to judge that.
There are a lot of articles out there about how to connect S3, Lambda and ECS. Here is a pretty in-depth article by Amazon that you might be interested in:
https://aws.amazon.com/blogs/compute/better-together-amazon-ecs-and-aws-lambda/
If you are looking for code, this repository implements what is discussed in the above article:
https://github.com/awslabs/lambda-ecs-worker-pattern
Here is a snippet we use in our Lambda function (Python) to run a Docker container from Lambda:
import boto3

# Launch the container as an ECS task; cluster, task_definition and
# overrides are set elsewhere in the handler (see below).
result = boto3.client('ecs').run_task(
    cluster=cluster,                  # ECS cluster to run on
    taskDefinition=task_definition,   # which container to run
    overrides=overrides,              # e.g. the command to execute
    count=1,
    startedBy='lambda'
)
We pass in the name of the cluster on which we want to run the container, as well as the task definition that defines which container to run, the resources it needs and so on. overrides is a dictionary/map with settings that you want to override in the task definition, which we use to specify the command we want to run (i.e. the argument to docker run). This enables us to use the same Lambda function to run a lot of different jobs on ECS.
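For illustration, a hedged example of what such an overrides map might look like (the container name and command are invented; the shape follows boto3's run_task API):

overrides = {
    'containerOverrides': [{
        # Must match the container name in the task definition.
        'name': 'worker',
        # Analogous to the arguments you would pass to docker run.
        'command': ['process', 's3://my-bucket/input.txt'],
    }]
}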
Hope that points you in the right direction.
Yes. It is possible to run containers out of Docker images stored in Docker Hub within AWS Lambda using SCAR.
For example, you can create a Lambda function to execute a container out of the ubuntu:16.04 image in Docker Hub as follows:
scar init ubuntu:16.04
And then you can run a command or a shell-script within that container upon each invocation of the function:
scar run scar-ubuntu-16-04 whoami
SCAR: Request Id: ed5e9f09-ce0c-11e7-8375-6fc6859242f0
Log group name: /aws/lambda/scar-ubuntu-16-04
Log stream name: 2017/11/20/[$LATEST]7e53ed01e54a451494832e21ea933fca
---------------------------------------------------------------------------
sbx_user1059
You can use your own Docker images stored in Docker Hub. Some limitations apply but it can be effectively used to run generic applications on AWS Lambda. It also features a programming model for file-processing event-driven applications. It uses uDocker under the hood.
Yes, try udocker.
udocker is a simple tool written in Python with a minimal set of dependencies, so that it can be executed in a wide range of Linux systems.
udocker does not make use of Docker nor require its installation.
udocker "executes" containers by simply providing a chroot-like environment over the extracted container. The current implementation uses PRoot to mimic chroot without requiring privileges.
Examples
Pull an image from Docker Hub.
udocker pull fedora
Create the container from a pulled image and run it.
udocker create --name=myfed fedora
udocker run myfed cat /etc/redhat-release
It is also worth reading the Hackernoon write-up on this approach, because in Lambda the only place you are allowed to write is /tmp, but udocker will attempt to write to the home directory by default (among other quirks).
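A hedged sketch of the usual workaround at the top of the handler (UDOCKER_DIR is udocker's environment variable for relocating its installation directory; the exact path is an assumption):

import os

# Point udocker at /tmp, the only writable path in the Lambda filesystem.
os.environ["UDOCKER_DIR"] = "/tmp/udocker"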