I want to either go to a specific state directly or have the flexibility to start from the beginning of the workflow, but I am not able to pass the next state as a dynamic variable. How can I achieve this?
workflow:

{
  "Comment": "A description of my state machine",
  "StartAt": "gotochoice",
  "States": {
    "gotochoice": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$$.Execution.Input.initial",
          "BooleanEquals": true,
          "Next": "$$.Execution.Input.startState"
        }
      ],
      "Default": "defaultState"
    }
    // Other states
  }
}
In the workflow above I want to specify the start state dynamically, but "Next" does not accept a variable from the execution context. Any workaround or suggestions to fix this?
Basically, I just want to restart my state machine from a certain failed state. I know the approach below can work, but I don't want to create a new state machine for that. Any other alternative?
https://aws.amazon.com/blogs/compute/resume-aws-step-functions-from-any-state/
In case anyone is still looking for an answer: this is not possible at this stage, but it may come in the future, according to AWS Support.
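Since "Next" must be a string literal, one common workaround is to enumerate each resumable state as its own Choice rule and match the execution input against it. A sketch (the StateA/StateB names are placeholders for your real states):

```json
"gotochoice": {
  "Type": "Choice",
  "Choices": [
    {
      "Variable": "$$.Execution.Input.startState",
      "StringEquals": "StateA",
      "Next": "StateA"
    },
    {
      "Variable": "$$.Execution.Input.startState",
      "StringEquals": "StateB",
      "Next": "StateB"
    }
  ],
  "Default": "defaultState"
}
```

This keeps every "Next" static, at the cost of one Choice rule per state you want to be able to resume from.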
I am using a serverless template to create a lambda function in AWS.
If I don't specify any value for the property "ReservedConcurrentExecutions", then the function gets created with Unreserved concurrency.
Now, I would like to use reserved concurrency (or unreserved) depending on an input parameter.
Function with Reserved Concurrency:
"MyFunction": {
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "MyFunctionHandler",
    "CodeUri": "myfunction.zip",
    "ReservedConcurrentExecutions": 2
  }
}
Function with Unreserved Concurrency: (just don't use the ReservedConcurrentExecutions property)
"MyFunction": {
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "MyFunctionHandler",
    "CodeUri": "myfunction.zip"
  }
}
I know I can declare the 2 functions separately and have a Condition to create one or the other.
What I would like to know is if it is possible to have just one function and conditionally add the ReservedConcurrentExecutions property.
Thank you!
The Serverless framework does not support conditional statements or properties on resources, but you can try this "ifelse" plugin.
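Alternatively, since SAM templates are CloudFormation under the hood, you can keep a single function and use a Condition with Fn::If plus AWS::NoValue, which removes the property entirely when the condition is false. A sketch (the UseReservedConcurrency condition and ConcurrencyMode parameter names are assumptions):

```json
"Conditions": {
  "UseReservedConcurrency": {
    "Fn::Equals": [ { "Ref": "ConcurrencyMode" }, "reserved" ]
  }
},
"Resources": {
  "MyFunction": {
    "Type": "AWS::Serverless::Function",
    "Properties": {
      "Handler": "MyFunctionHandler",
      "CodeUri": "myfunction.zip",
      "ReservedConcurrentExecutions": {
        "Fn::If": [ "UseReservedConcurrency", 2, { "Ref": "AWS::NoValue" } ]
      }
    }
  }
}
```

When the condition evaluates to false, Ref AWS::NoValue makes CloudFormation behave as if the property were never specified, so the function falls back to unreserved concurrency.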
I'm trying to do the following in AWS Step Functions:

If ExampleState fails, do "Next": "AnotherLambda".
If ExampleState completes successfully, end the execution.

How can I do that? The Choice state doesn't support ErrorEquals: States.TaskFailed.
In my case, when ExampleState fails, the state machine stops and gives me an error, but I want to continue, catch some info from the error, and save it with another lambda.
Thanks!
All I wanted AWS Step Functions to do is: if a state succeeds, finish the execution; if it fails, run another lambda. Like an if/else in programming.
Step Functions gives you this easily as a Catch block that activates only when an error is caught and then does what you want. Here is the solution:
{
  "StartAt": "ExampleLambda",
  "States": {
    "ExampleLambda": {
      "Type": "Task",
      "Resource": "xxx:function:ExampleLambda",
      "Catch": [
        {
          "ErrorEquals": ["States.TaskFailed"],
          "Next": "SendToErrorQueue"
        }
      ],
      "End": true
    }
  }
}
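The Catch above targets a SendToErrorQueue state that the answer does not show. A minimal sketch of what it might look like, assuming the SQS service integration and a hypothetical queue URL (by default, the error object that reaches this state contains Error and Cause fields):

```json
"SendToErrorQueue": {
  "Type": "Task",
  "Resource": "arn:aws:states:::sqs:sendMessage",
  "Parameters": {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/111111111/error-queue",
    "MessageBody.$": "$.Cause"
  },
  "End": true
}
```

You could equally make it a plain Lambda Task, as the question suggests, and process the error payload there.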
We are creating a workflow composed of multiple SQL operations (aggregations, transposes, etc.) via AWS Step Functions. Every operation is modelled as a separate Lambda which houses the SQL query.
Now, every query accepts its input parameters from the state machine, so every lambda task is as below:
"SQLQueryTask": {
  "Type": "Task",
  "Parameters": {
    "param1.$": "$$.Execution.Input.param1",
    "param2.$": "$$.Execution.Input.param2"
  },
  "Resource": "LambdaArn",
  "End": true
}
The Parameters block thus repeats for every SQLQuery node.
In addition, since Lambdas can fail intermittently and we would like to retry them, we also need the retry block below in every state:
"Retry": [
  {
    "ErrorEquals": ["Lambda.ServiceException", "Lambda.AWSLambdaException", "Lambda.SdkClientException"],
    "IntervalSeconds": 2,
    "MaxAttempts": 6,
    "BackoffRate": 2
  }
]
This makes the state definition very complex. Is there no way to extract the common part of the state definition into a reusable piece?
One solution could be using AWS CDK (https://aws.amazon.com/cdk/)
This allows developers to define higher-level abstractions of resources, which can easily be reused.
There are some examples here that could be helpful: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-stepfunctions-readme.html
I have created a simple AWS state machine with Lambda functions, like the one below:
{
  "Comment": "Validates data",
  "StartAt": "ChooseDocumentType",
  "States": {
    "ChooseDocumentType": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.documentType",
          "StringEquals": "RETURN",
          "Next": "ValidateReturn"
        },
        {
          "Variable": "$.documentType",
          "StringEquals": "ASSESSMENT",
          "Next": "ValidateAssessment"
        }
      ],
      "Default": "DefaultState"
    },
    "ValidateReturn": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-west-2:111111111:function:ValidateReturn",
      "Next": "DefaultState"
    },
    "ValidateAssessment": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-west-2:111111111:function:ValidateAssessment",
      "Next": "DefaultState"
    },
    "DefaultState": {
      "Type": "Pass",
      "End": true
    }
  }
}
Questions
1> How do I create stages for this state machine (like production, development, etc.)?
2> Each lambda function has aliases pointing to different versions. So the development alias always points to the $LATEST version and the production alias points to, let's say, version 2. How do I dynamically associate the state machine's stages with these lambda aliases? The state machine in the development stage should use the lambda function with the development alias, and so on.
I am using the AWS console to manage state machines and lambdas, and I don't see any action to create stages for a state machine.
You can declare the alias and the version in the Lambda ARN:
# default, $LATEST
arn:aws:lambda:us-west-2:111111111:function:ValidateAssessment
# using alias
arn:aws:lambda:us-west-2:111111111:function:ValidateAssessment:development
# using version
arn:aws:lambda:us-west-2:111111111:function:ValidateAssessment:2
Use these in the Step Function definition according to your needs.
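For example, pinning the ValidateAssessment task from the question to the development alias is just a matter of appending the alias to the ARN in the state definition:

```json
"ValidateAssessment": {
  "Type": "Task",
  "Resource": "arn:aws:lambda:us-west-2:111111111:function:ValidateAssessment:development",
  "Next": "DefaultState"
}
```

You would then keep one state machine definition per stage, each referencing the appropriate alias.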
Re: # 2, if your main concern is controlling which Lambda alias gets invoked, there is a way you can do that via a single step function.
Your step function state definition would be something like:
{
  "Type": "Task",
  "Resource": "arn:aws:states:::lambda:invoke",
  "Parameters": {
    "InvocationType": "RequestResponse",
    "FunctionName": "someFunction",
    "Qualifier.$": "$.lambdaAlias",
    "Payload": {}
  }
}
So where you execute the step function and would specify the stage if there were such a thing, you'd pass a lambdaAlias parameter. (There's nothing magical about that name; you can pull it from whatever step function input parameter you want.)
The request payload to your Lambda would go in Parameters.Payload.
https://docs.aws.amazon.com/step-functions/latest/dg/connect-lambda.html
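For example, starting an execution for the development stage would use input along these lines:

```json
{
  "lambdaAlias": "development"
}
```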
Just wondering: what's the best practice for determining what permissions I should give to my CloudFormation template?
After some time trying to give it the minimal permissions it requires, I find that this is really time-consuming and error-prone. I note that depending on the state of my stack (brand new vs. some updates vs. delete), I will need different permissions.
I guess it should be possible for there to be some parser that, given a CloudFormation template, can determine the minimum set of permissions it requires?
Maybe I can give ec2:* access to resources tagged Cost Center: My Project Name? Is this OK? But I wonder what happens when I change my project name, for example?
Alternatively, is it OK to give, say, ec2:* access, on the assumption that the CloudFormation parts are usually only executed from CodeCommit/GitHub/CodePipeline and are not something that is likely to be public or easy to hack? Though this sounds like a flawed assumption to me...
In the short term, you can use aws-leastprivilege. But it doesn't support every resource type.
For the long term: as mentioned in this 2019 re:invent talk, CloudFormation is working towards open sourcing and migrating most of its resource types to a new public resource schema. One of the benefits of this is that you'll be able to see the permissions required to perform each operation.
E.g. for AWS::ImageBuilder::Image, the schema says
"handlers": {
  "create": {
    "permissions": [
      "iam:GetRole",
      "imagebuilder:GetImageRecipe",
      "imagebuilder:GetInfrastructureConfiguration",
      "imagebuilder:GetDistributionConfiguration",
      "imagebuilder:GetImage",
      "imagebuilder:CreateImage",
      "imagebuilder:TagResource"
    ]
  },
  "read": {
    "permissions": [
      "imagebuilder:GetImage"
    ]
  },
  "delete": {
    "permissions": [
      "imagebuilder:GetImage",
      "imagebuilder:DeleteImage",
      "imagebuilder:UnTagResource"
    ]
  },
  "list": {
    "permissions": [
      "imagebuilder:ListImages"
    ]
  }
}