I created an AWS EventBridge rule for Slack. Now I would like to use a different input template based on a condition. The condition variable and its value are part of the event message. I declared a variable in InputPathsMap and used it as a condition parameter. I get an error when I deploy with SAM: it says the variable value is null, and the code is not deployed to AWS.
Partial info of my rule:
...
InputTransformer:
  InputPathsMap:
    "actionMsg": "$.detail.actionMsg"
    "actionValue": "$.detail.actionValue"
  InputTemplate: !Sub >
    !If [
      <actionValue>,
      {
        "channel": "slackChannelName",
        "text": "condition 1 : <actionMsg>"
        ...(more)
      },
      {
        "channel": "slackChannelName",
        "text": "condition 2 : <actionMsg>"
        ...(more)
      }
    ]
I searched on Google and saw the AWS condition info.
Can I set a condition using the variable I defined? Could you please give me an example, hints, or a link? I would appreciate it.
This won't be possible to do within a single EventBridge rule. You'll have to define two separate rules, with an event pattern for each unique $.detail.actionValue that you care about.
A possible solution might be to define one rule, with its own input transformer, whose event pattern matches your first condition:
{
  "...": "...",
  "detail": {
    "...": "...",
    "actionValue": ["foo"]
  }
}
Then define a "default" rule, with your default input transformer, whose pattern catches everything except actionValue == "foo":
{
  "...": "...",
  "detail": {
    "...": "...",
    "actionValue": [ { "anything-but": ["foo"] } ]
  }
}
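For example, a minimal SAM/CloudFormation sketch of the two rules, each with its own event pattern and input transformer. The "foo" value, channel name, target Id, and the SlackTargetArn parameter are placeholders, not from your template; each rule then carries a plain InputTemplate, because the pattern matching itself selects which template runs.

ActionFooRule:
  Type: AWS::Events::Rule
  Properties:
    EventPattern:
      detail:
        actionValue: [ "foo" ]
    Targets:
      - Id: slack-target
        Arn: !Ref SlackTargetArn
        InputTransformer:
          InputPathsMap:
            actionMsg: "$.detail.actionMsg"
          InputTemplate: >
            {"channel": "slackChannelName", "text": "condition 1 : <actionMsg>"}
ActionDefaultRule:
  Type: AWS::Events::Rule
  Properties:
    EventPattern:
      detail:
        actionValue:
          - anything-but: [ "foo" ]
    Targets:
      - Id: slack-target
        Arn: !Ref SlackTargetArn
        InputTransformer:
          InputPathsMap:
            actionMsg: "$.detail.actionMsg"
          InputTemplate: >
            {"channel": "slackChannelName", "text": "condition 2 : <actionMsg>"}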
I am running a step function with many different steps, yet I am still stuck on the 2nd step.
The first step is a Java Lambda that gets all the input parameters and does what it needs to do.
The lambda returns null as it doesn't need to return anything.
The next step is a call to API Gateway, which needs to use one of the parameters in the URL.
However, I see that neither the URL has the needed parameter nor do I actually get the parameters into the step ("input": null under TaskStateEntered).
The API Gateway step looks as follows (I also tried "Payload.$": "$" instead of "Input.$": "$"):
"API Gateway start": {
"Type": "Task",
"Resource": "arn:aws:states:::apigateway:invoke",
"Parameters": {
"Input.$": "$",
"ApiEndpoint": "aaaaaa.execute-api.aa-aaaa-1.amazonaws.com",
"Method": "GET",
"Headers": {
"Header1": [
"HeaderValue1"
]
},
"Stage": "start",
"Path": "/aaa/aaaa/aaaaa/aaaa/$.scenario",
"QueryParameters": {
"QueryParameter1": [
"QueryParameterValue1"
]
},
"AuthType": "IAM_ROLE"
},
"Next": "aaaaaa"
},
But when my step function gets to this stage it fails and I see the following in the logs:
{
  "name": "API Gateway start",
  "input": null,
  "inputDetails": {
    "truncated": false
  }
}
And eventually:
{
  "error": "States.Runtime",
  "cause": "An error occurred while executing the state 'API Gateway start' (entered at the event id #9). Unable to apply Path transformation to null or empty input."
}
What am I missing here? Note that part of the path is a value that I enter at the step function execution. ("Path": "/aaa/aaaa/aaaaa/aaaa/$.scenario")
EDIT:
As requested by @lynkfox, I am adding the Lambda definition that comes before the API Gateway step.
And to answer the question: yes, it's standard, and I see no input.
"Run tasks": {
"Type": "Task",
"Resource": "arn:aws:states:::lambda:invoke",
"OutputPath": "$.Payload",
"Parameters": {
"Payload.$": "$",
"FunctionName": "arn:aws:lambda:aaaaaa-1:12345678910:function:aaaaaaa-aaa:$LATEST"
},
"Retry": [
{
"ErrorEquals": [
"Lambda.ServiceException",
"Lambda.AWSLambdaException",
"Lambda.SdkClientException"
],
"IntervalSeconds": 2,
"MaxAttempts": 6,
"BackoffRate": 2
}
],
"Next": "API Gateway start"
},
So yes, as I commented, I believe the problem is the OutputPath of your Lambda task definition. What it says is: take whatever comes out of this Lambda (which is nothing!) and cut off everything other than the key Payload.
Well, you are returning nothing, so this causes nothing to be sent to the next task.
I am assuming your incoming event already has a key in the JSON named Payload, so what you want to do is remove the OutputPath from your Lambda task. The Lambda doesn't need to return anything, so the task doesn't need an OutputPath or ResultPath.
Next, on your API task, assuming again that your initializing event has a key of Payload, you would have "InputPath": "$.Payload". If you have your headers or parameters in the initializing JSON event, you can reference those keys in the Parameters section of the definition.
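A hedged sketch of how the two states might look after that change. Here I've used "ResultPath": null on the Lambda task, so its empty result is discarded and the original execution input flows through unchanged, and the States.Format intrinsic for the dynamic path segment, assuming the execution input has a top-level scenario key (the ARNs and endpoint are the placeholders from your question):

"Run tasks": {
  "Type": "Task",
  "Resource": "arn:aws:states:::lambda:invoke",
  "ResultPath": null,
  "Parameters": {
    "Payload.$": "$",
    "FunctionName": "arn:aws:lambda:aaaaaa-1:12345678910:function:aaaaaaa-aaa:$LATEST"
  },
  "Next": "API Gateway start"
},
"API Gateway start": {
  "Type": "Task",
  "Resource": "arn:aws:states:::apigateway:invoke",
  "Parameters": {
    "ApiEndpoint": "aaaaaa.execute-api.aa-aaaa-1.amazonaws.com",
    "Method": "GET",
    "Stage": "start",
    "Path.$": "States.Format('/aaa/aaaa/aaaaa/aaaa/{}', $.scenario)",
    "AuthType": "IAM_ROLE"
  },
  "Next": "aaaaaa"
}

Note that a plain "Path": "/.../$.scenario" is treated as a literal string; JSONPath is only applied to parameter keys that end in ".$", which is why the value never showed up in the URL.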
Every AWS service begins with an event and ends with an event, and each event is a JSON object (which I'm sure you know). With state machines this continues: the state machine/Step Function is just the controller for passing events from one task to the next.
So any given task can have an InputPath, OutputPath, or ResultPath. These three definition parameters decide what values go into the task and what is sent on to the next task. State machines are, by definition, for maintaining state between tasks, and these help control that 'state' (and there is pretty much only one 'state' at any given time: the event heading to the next task(s)).
The ResultPath is where, in that overall event, the task puts its data. If you set ResultPath: "$.MyResult" by itself, it appends this key to the incoming event.
If you add OutputPath, it ONLY passes that key from the task's output event on to the next step in the step function.
These three give you a lot of control.
Want to take an event into a Lambda and respond with something completely different, because you don't need the incoming data? Combine OutputPath and ResultPath with the same value (and have your Lambda respond with a JSON object) and you can replace the event wholesale.
If you have a ResultPath of some value and leave OutputPath at its default of "$", you keep the whole incoming event plus an extra key that contains the result of your task (the key being the one set in ResultPath).
InputPath lets you choose what goes into the task. I am not 100% certain, but I'm pretty sure it does not remove anything from the event passed on to the next task in the chain.
More information can be found here but it can get pretty confusing.
My quick guide (see the sketch after this list):
ResultPath by itself if you want to append the task's data to the event.
ResultPath + OutputPath with the same value if you want to cut off the input and only have the output of the task continue (and it must return a JSON-style object).
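As a tiny illustration of those two cases (Pass states stand in for any task; the state names, keys, and values are made up): given an input of {"order": 1}, the first state below outputs {"order": 1, "MyResult": {"ok": true}}, while the second outputs only {"ok": true}.

"Append result": {
  "Type": "Pass",
  "Result": { "ok": true },
  "ResultPath": "$.MyResult",
  "Next": "Replace event"
},
"Replace event": {
  "Type": "Pass",
  "Result": { "ok": true },
  "ResultPath": "$.MyResult",
  "OutputPath": "$.MyResult",
  "End": true
}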
I have a Step Function which triggers another Step Function.
Works fine.
But I would like to customize the name of the inner execution.
E.g. I have the following state in my outer Step Function:
"Trigger Inner StepFunction": {
"Type": "Task",
"Resource": "arn:aws:states:::states:startExecution.sync",
"Parameters": {
"StateMachineArn": "arn:aws:states:eu-west-1:xxxxx:stateMachine:InnerStepFunc",
"Name": "$.Name",
"Input": {
"StatePayload": "Hello from Step Functions!",
"AWS_STEP_FUNCTIONS_STARTED_BY_EXECUTION_ID.$": "$$.Execution.Id"
}
},
"Next": "NextStep"
}
Then I get the following error:
"Invalid Name: '$.Name' (Service: AWSStepFunctions; Status Code: 400; Error Code: InvalidName; Request ID: 6834b793-667b-4c58-ae19-a1b311a82ee9; Proxy: null)"
If I remove the Name property below the StateMachineArn property, it works, and the inner Step Function is triggered with a random id as the execution name.
But how can I set the name dynamically? I would like to either define it via the input of the outer Step Function, or use a prefix followed by a random id.
Does anyone have a tip for me?
P.S. I already thought about using a Lambda function that is called by the outer Step Function and triggers the inner Step Function.
But it's important to me that the outer Step Function waits for the inner one to complete before it finishes.
And the inner Step Function takes longer than 15 minutes, so a synchronous call from a Lambda is not an option.
You can set it by using the syntax below. Keep in mind that your Step Function will fail if the name you are passing already exists for that state machine.
"Name.$": "$.Name"
Is there any way for me to pass custom variables as input to an AWS Step Function?
processData:
  name: ingest-data
  StartAt: Execute
  States:
    Execute:
      Type: Task
      Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-ingestIntnlData"
      Next: Check
    Check:
      Type: Choice
      Choices:
        - Variable: "$.results['finished']"
          BooleanEquals: false
          Next: Wait
        - Variable: "$.results['finished']"
          BooleanEquals: true
          Next: Notify
    Wait:
      Type: Wait
      SecondsPath: "$.waitInSeconds"
      Next: Execute
    Notify:
      Type: Task
      Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-sendEMail"
      End: true
I have two different Step Functions which call the same Lambda. I'm looking to pass a custom variable to the Lambda to differentiate the calls made from the two Step Functions.
Something like a flag variable, or even a way to find out the name of the state machine that is invoking the Lambda, would suffice.
Please help me out.
We can build an object in a Pass state and pass it as input to the Lambda.
"Payload.$": "$" simply passes through all of the input.
{
  "StartAt": "Dummy Step 1 Output",
  "States": {
    "Dummy Step 1 Output": {
      "Type": "Pass",
      "Result": {
        "name": "xyz",
        "testNumber": 1
      },
      "ResultPath": "$.inputForMap",
      "Next": "invoke-lambda"
    },
    "invoke-lambda": {
      "End": true,
      "Retry": [
        {
          "ErrorEquals": [
            "Lambda.ServiceException",
            "Lambda.AWSLambdaException",
            "Lambda.SdkClientException"
          ],
          "IntervalSeconds": 2,
          "MaxAttempts": 6,
          "BackoffRate": 2
        }
      ],
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "arn:aws:lambda:us-east-1:111122223333:function:my-lambda",
        "Payload.$": "$"
      }
    }
  }
}
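Alternatively, a minimal sketch of setting the flag directly in the Task's Parameters instead of a separate Pass state (my-lambda and the "source" key/value are placeholders of mine): each of your two state machines would hard-code a different source, and the Lambda reads it from its event.

"invoke-lambda": {
  "Type": "Task",
  "Resource": "arn:aws:states:::lambda:invoke",
  "Parameters": {
    "FunctionName": "arn:aws:lambda:us-east-1:111122223333:function:my-lambda",
    "Payload": {
      "source": "state-machine-A",
      "input.$": "$"
    }
  },
  "End": true
}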
You can use the context object and pass the ExecutionId, like:
{
  "Comment": "A Catch example of the Amazon States Language using an AWS Lambda Function",
  "StartAt": "nextstep",
  "States": {
    "nextstep": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-central-1:1234567890:function:catcherror",
      "Parameters": {
        "executionId.$": "$$.Execution.Id"
      },
      "End": true
    }
  }
}
This gives you:
arn:aws:states:us-east-1:123456789012:execution:stateMachineName:executionName
As you can see, it contains your state machine name.
You can make decisions based on it.
You can create Lambda functions to use as steps within a workflow created with AWS Step Functions. For your first step (the Lambda function you use as the first step), you can pass values to it to meet your needs:
https://www.c-sharpcorner.com/article/passing-data-to-aws-lambda-function-and-invoking-it-using-aws-cli/
Then you can pass data between steps using Lambda functions, as discussed here:
https://medium.com/#tturnbull/passing-data-between-lambdas-with-aws-step-functions-6f8d45f717c3
You can create Lambda functions in other supported programming languages as well, such as Java, as discussed here:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/javav2/usecases/creating_workflows_stepfunctions
As you can see, there are a lot of development options when using Lambda and AWS Step Functions.
So I have this function I'm trying to declare, and it works and deploys just dandy unless you uncomment the logRetention setting. If logRetention is specified, the cdk deploy operation adds additional parameters to the stack. And, of course, this behavior is completely unexplained in the documentation.
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-readme.html#log-group
SingletonFunction.Builder.create(this, "native-lambda-s3-fun")
        .functionName(funcName)
        .description("")
        // .logRetention(RetentionDays.ONE_DAY)
        .handler("app")
        .timeout(Duration.seconds(300))
        .runtime(Runtime.GO_1_X)
        .uuid(UUID.randomUUID().toString())
        .environment(new HashMap<String, String>() {{
            put("FILE_KEY", "/file/key");
            put("S3_BUCKET", junk.getBucketName());
        }})
        .code(Code.fromBucket(uploads, functionUploadKey(
                "formation-examples",
                "native-lambda-s3",
                lambdaVersion.getValueAsString()
        )))
        .build();
"Parameters": {
"lambdaVersion": {
"Type": "String"
},
"AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aS3BucketB030C8A8": {
"Type": "String",
"Description": "S3 bucket for asset \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
},
"AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aS3VersionKey6A2AABD7": {
"Type": "String",
"Description": "S3 key for asset version \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
},
"AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aArtifactHashEDC522F0": {
"Type": "String",
"Description": "Artifact hash for asset \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
}
},
It's a bug. They're Working On It™. So, rejoice - we can probably expect a fix sometime within the next decade.
I haven't tried it yet, but I'm guessing the workaround is to manipulate the low-level CfnLogGroup construct, since it has the authoritative retentionInDays property. The relevant high-level Log Group construct can probably be obtained from the Function via its logGroup property. Failing that, the LogGroup can be created from scratch (which will probably be a headache all on its own).
I also encountered the problem described above. From what I can tell, we are unable to specify a log group name and thus the log group name is predictable.
My solution was to simply create a LogGroup with the same name as my Lambda function with the /aws/lambda/ prefix.
Example:
var function = new Function(
    this,
    "Thing",
    new FunctionProps
    {
        FunctionName = $"{Stack.Of(this).StackName}-Thing",
        // ...
    });

_ = new LogGroup(
    this,
    "ThingLogGroup",
    new LogGroupProps
    {
        LogGroupName = $"/aws/lambda/{function.FunctionName}",
        Retention = RetentionDays.ONE_MONTH,
    });
This does not create unnecessary "AssetParameters..." CF template parameters like the inline option does.
Note: I'm using CDK version 1.111.0 and 1.86.0 with C#
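For the Java CDK in the original question, a rough equivalent sketch under the same assumption (an explicitly set function name; the construct id here is made up, and funcName is the variable from the question):

import software.amazon.awscdk.services.logs.LogGroup;
import software.amazon.awscdk.services.logs.RetentionDays;

// Create the log group CDK would otherwise manage via logRetention,
// using the predictable /aws/lambda/<function-name> name.
LogGroup.Builder.create(this, "native-lambda-s3-fun-logs")
        .logGroupName("/aws/lambda/" + funcName)
        .retention(RetentionDays.ONE_MONTH)
        .build();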
I'm configuring a lambda function's API gateway integration with the Serverless Framework version 0.4.2.
My problem is with defining an endpoint's request parameters. The AWS API Gateway docs entry says:
requestParameters
Represents request parameters that can be accepted by Amazon API Gateway. Request parameters are represented as a key/value map, with a source as the key and a Boolean flag as the value. The Boolean flag is used to specify whether the parameter is required. A source must match the pattern method.request.{location}.{name}, where location is either querystring, path, or header. name is a valid, unique parameter name. Sources specified here are available to the integration for mapping to integration request parameters or templates.
As I understand it, the config in s-function.json is given directly to the AWS CLI, so I've specified the request parameters in the format "method.request.querystring.startYear": true. However, I'm receiving an "Invalid mapping expression specified: true" error. I've also tried specifying the config as "method.request.querystring.startYear": "true", with the same result.
s-function.json:
{
  "name": "myname",
  // etc...
  "endpoints": [
    {
      "path": "mypath",
      "method": "GET",
      "type": "AWS",
      "authorizationType": "none",
      "apiKeyRequired": false,
      "requestParameters": {
        "method.request.querystring.startYear": true,
        "method.request.querystring.startMonth": true,
        "method.request.querystring.startDay": true,
        "method.request.querystring.currentYear": true,
        "method.request.querystring.currentMonth": true,
        "method.request.querystring.currentDay": true,
        "method.request.querystring.totalDays": true,
        "method.request.querystring.volume": true,
        "method.request.querystring.userId": true
      },
      // etc...
    }
  ],
  "events": []
}
Any ideas? Thanks in advance!
It looks like the requestParameters in the s-function.json file is meant for configuring the integration request section, so I ended up using:
"requestParameters": {
"integration.request.querystring.startYear" : "method.request.querystring.startYear",
"integration.request.querystring.startMonth" : "method.request.querystring.startMonth",
"integration.request.querystring.startDay" : "method.request.querystring.startDay",
"integration.request.querystring.currentYear" : "method.request.querystring.currentYear",
"integration.request.querystring.currentMonth" : "method.request.querystring.currentMonth",
"integration.request.querystring.currentDay" : "method.request.querystring.currentDay",
"integration.request.querystring.totalDays" : "method.request.querystring.totalDays",
"integration.request.querystring.volume" : "method.request.querystring.volume",
"integration.request.querystring.userId" : "method.request.querystring.userId"
},
This ended up adding them automatically to the method request section on the dashboard as well.
I could then use them in the mapping template to turn them into a method post that would be sent as the event into my Lambda function. Right now I have a specific mapping template that I'm using, but in the future I may use Alua K's suggested method for mapping all of the inputs in a generic way, so that I don't have to configure a separate mapping template for each function.
You can pass query params to your Lambda like:
"requestTemplates": {
  "application/json": {
    "querystring": "$input.params().querystring"
  }
}
In the Lambda function, access the query string as event.querystring.
First, you need to execute a put-method command to create the method request with the query parameters:
aws apigateway put-method --rest-api-id "yourAPI-ID" --resource-id "yourResource-ID" --http-method GET --authorization-type "NONE" --no-api-key-required --request-parameters "method.request.querystring.paramname1=true","method.request.querystring.paramname2=true"
Only after this can you execute the put-integration command; otherwise it will give an invalid mapping error:
"requestParameters": {
"integration.request.querystring.paramname1" : "method.request.querystring.paramname1",
"integration.request.querystring.paramname2" : "method.request.querystring.paramname2",
Make sure you're using the right endpoints as well. There are a couple of different endpoint types in AWS (edge-optimized vs. regional, for example); a friend of mine got caught out by that in the past.