I have created a CloudWatch Events (EventBridge) rule that triggers an AWS Batch job, and I want to specify an environment variable and parameters. I'm trying to do so with the following Configured Input (Constant [JSON text]), but when the job is submitted, the environment variables I'm trying to set in the job are not included; the parameters, however, are working as expected.
{
  "ContainerProperties": {
    "Environment": [
      {
        "Name": "MY_ENV_VAR",
        "Value": "MyVal"
      }
    ]
  },
  "Parameters": {
    "one": "1",
    "two": "2",
    "three": "3"
  }
}
As I was typing out the question, I thought to look at the SubmitJob API to see what I was doing wrong (above, I had been referencing the CloudFormation templates for the job definition instead). In case it helps others: I found that I needed to use ContainerOverrides rather than ContainerProperties for it to work properly (a sketch of the equivalent SubmitJob call follows the JSON below).
{
  "ContainerOverrides": {
    "Environment": [
      {
        "Name": "MY_ENV_VAR",
        "Value": "NorthAmerica"
      }
    ]
  },
  "Parameters": {
    "one": "1",
    "two": "2",
    "three": "3"
  }
}
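For reference, here is a minimal boto3 sketch of the SubmitJob call that the rule is effectively making; the job, queue, and definition names are placeholders, and note that the SDK spells the same fields in camelCase:

import boto3

batch = boto3.client("batch")

# Mirrors the shape of the SubmitJob API: overrides belong under containerOverrides;
# containerProperties only exists on the job definition itself.
batch.submit_job(
    jobName="my-eventbridge-job",        # placeholder
    jobQueue="my-job-queue",             # placeholder
    jobDefinition="my-job-definition",   # placeholder
    parameters={"one": "1", "two": "2", "three": "3"},
    containerOverrides={
        "environment": [
            {"name": "MY_ENV_VAR", "value": "NorthAmerica"}
        ]
    },
)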
The ContainerOverrides solution above didn't work for me. The correct answer can be found here:
https://aws.amazon.com/premiumsupport/knowledge-center/batch-parameters-trigger-cloudwatch/
I was only able to pass parameters to the job like so:
{
  "Parameters": {
    "customers": "tgc,localhost"
  }
}
I wasn't able to get environment variables to work and didn't try ContainerOverrides.
I have a situation where I need to filter out certain events using event patterns in EventBridge.
I want to run the rule for all events except those where userName starts with abc or xyz.
I have tried the two syntaxes below, but neither worked:
"userIdentity": {
"sessionContext": {
"sessionIssuer": {
"userName": [
{
"anything-but": {
"prefix": [
"abc-",
"xyz-"
]
}
}
]
}
}
}
"userIdentity": {
"sessionContext": {
"sessionIssuer": {
"userName": [
{
"anything-but": [{
"prefix": "abc-",
"prefix": "xyz-"
}]
}
]
}
}
}
I get the following error when saving the rule:
"Event pattern is not valid. Reason: Inside anything but list, start|null|boolean is not supported."
Am I missing something in the syntax? Or, if this is a limitation, is there any alternative to this problem?
You can use prefix within an array in an event pattern. Here is an example pattern:
{
  "detail": {
    "alarmName": [
      {
        "prefix": "DemoApp1"
      },
      {
        "prefix": "DemoApp2"
      }
    ],
    "state": {
      "value": [
        "ALARM"
      ]
    },
    "previousState": {
      "value": [
        "OK"
      ]
    }
  }
}
This pattern will match any alarm whose name starts with either DemoApp1 or DemoApp2.
TL;DR: user #samtoddler is sort of correct.
Prefix matches only work on values, as called out in https://docs.aws.amazon.com/eventbridge/latest/userguide/content-filtering-with-event-patterns.html#filtering-prefix-matching. They do not work with arrays. You can file a feature request with AWS support, but if you'd like to unblock yourself, it's probably best to control the prefixes you have for userName (guessing this is something IAM-related and in your control).
If that's not possible, consider filtering as much as you can via other properties before sending the events over to a compute service (probably Lambda) to perform additional filtering, as sketched below.
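A minimal sketch of that fallback, assuming the CloudTrail-style event shape from the question (the handler name and return values are placeholders): the Lambda drops anything whose userName starts with one of the excluded prefixes and processes everything else.

EXCLUDED_PREFIXES = ("abc-", "xyz-")

def handler(event, context):
    # Dig the userName out of the CloudTrail-style detail, defaulting to "" if absent.
    user_name = (
        event.get("detail", {})
        .get("userIdentity", {})
        .get("sessionContext", {})
        .get("sessionIssuer", {})
        .get("userName", "")
    )

    # Apply the "anything-but prefix" rule that the event pattern cannot express today.
    if user_name.startswith(EXCLUDED_PREFIXES):
        return {"processed": False, "reason": f"excluded prefix on {user_name}"}

    # ... real processing for every other event goes here ...
    return {"processed": True}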
I'm creating a table in CloudFormation:
"MyStuffTable": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"TableName": "MyStuff"
"AttributeDefinitions": [{
"AttributeName": "identifier",
"AttributeType": "S"
]},
"KeySchema": [{
"AttributeName": "identifier",
"KeyType": "HASH",
}],
"ProvisionedThroughput": {
"ReadCapacityUnits": "5",
"WriteCapacityUnits": "1"
}
}
}
Then, later on in the CloudFormation template, I want to insert records into that table, something like this:
identifier: Stuff1
data: {My list of stuff here}
And insert that into Values in the code below. I had seen an example somewhere that used Custom::Install, but I can't find it now, or any documentation on it.
So this is what I have:
"MyStuff": {
  "Type": "Custom::Install",
  "DependsOn": [
    "MyStuffTable"
  ],
  "Properties": {
    "ServiceToken": {
      "Fn::GetAtt": ["MyStuffTable", "Arn"]
    },
    "Action": "fields",
    "Values": [{<insert records into this array>}]
  }
}
When I run that, I get this error: Invalid service token.
So I'm not doing something right when referencing the table to insert the records into. I can't find any documentation on Custom::Install, so I don't know for sure that it's the right way to insert records through CloudFormation, and I can't find documentation on inserting records through CloudFormation either. I know it can be done. I'm probably missing something very simple. Any ideas?
Custom::Install is a custom resource in CloudFormation.
This is a special type of resource which you have to develop yourself. It is usually backed by a Lambda function (it can also be an SNS topic).
So, to answer your question: to add data to your table, you would have to write your own custom resource backed by a Lambda function, and the Lambda would put the records into the table. The ServiceToken must point at that Lambda function's ARN, not at the table, which is why you are getting Invalid service token.
In the Custom::Install example, Action (with the value fields) and Values are custom properties which CloudFormation passes to the Lambda. They can be anything you want, as you are designing the custom resource tailored to your requirements.
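For illustration only, here is a minimal sketch of what such a Lambda-backed handler could look like, assuming the MyStuff table above and a Values list of plain items (the handler name, physical resource id, and item shape are made up for this example):

import boto3
import cfnresponse  # helper module available to inline (ZipFile) Lambda code in CloudFormation

dynamodb = boto3.resource("dynamodb")
TABLE_NAME = "MyStuff"  # matches the TableName in the template above

def handler(event, context):
    try:
        if event["RequestType"] in ("Create", "Update"):
            table = dynamodb.Table(TABLE_NAME)
            # "Values" is the custom property passed on the Custom::Install resource,
            # e.g. [{"identifier": "Stuff1", "data": "..."}]
            for item in event["ResourceProperties"].get("Values", []):
                table.put_item(Item=item)
        # Deletes are left as a no-op in this sketch.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, "MyStuffSeedData")
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(exc)}, "MyStuffSeedData")

The ServiceToken on the Custom::Install resource would then be the Arn of this Lambda function (via Fn::GetAtt), not the Arn of the table.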
The following CloudFormation script creates a task definition but does not seem to create the container definition correctly. Can anyone tell me why?
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Test stack for troubleshooting task creation",
  "Parameters": {
    "TaskFamily": {
      "Description": "The task family to associate the task definition with.",
      "Type": "String",
      "Default": "Dm-Testing"
    }
  },
  "Resources": {
    "TaskDefinition": {
      "Type": "AWS::ECS::TaskDefinition",
      "Properties": {
        "Family": {
          "Ref": "TaskFamily"
        },
        "RequiresCompatibilities": [
          "EC2"
        ],
        "ContainerDefinitions": [
          {
            "Name": "sample-app",
            "Image": "nginx",
            "Memory": 200,
            "Cpu": 10,
            "Essential": true,
            "Environment": [
              {
                "Name": "SOME_ENV_VARIABLE",
                "Value": "SOME_VALUE"
              }
            ]
          }
        ]
      }
    }
  }
}
When I view the created task definition, there is no container listed in the builder view in the AWS console.
The information is listed, however, under the JSON tab of the task definition (the screenshot shows only a subset of the info, not all of it).
The result of this is that, when the task is run in a cluster, it does run the image, but without the environment variables applied. In addition, CloudFormation does not report any errors when creating this stack or when running the created task.
Finally, this CloudFormation script is a cut-down example of the 'real' script, which has started exhibiting the same issue. That script had been working fine for around a year, and, as far as I can see, there have been no changes to it between working and breaking.
I would greatly appreciate any thoughts or suggestions on this because my face is beginning to hurt from smashing it against this particular wall.
Turns out this was a bug in CloudFormation that only occurred when creating a task definition from a script through the AWS console. Amazon has now resolved this.
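If you hit something similar, one way to confirm whether the container definition actually registered, independently of the console's builder view, is to describe the task definition through the API. A small boto3 sketch, assuming the Dm-Testing family from the template above:

import json
import boto3

ecs = boto3.client("ecs")

# Fetch the latest revision of the family registered by the stack above.
response = ecs.describe_task_definition(taskDefinition="Dm-Testing")

# Print the registered container definitions, including any Environment values.
print(json.dumps(response["taskDefinition"]["containerDefinitions"], indent=2, default=str))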
We have a generic Lambda function that we're trying to execute from a Step Functions state machine. The Lambda function expects 2 values: clusterId and policyJsonName.
We're able to fetch the clusterId from an earlier state, but now we would like to hard-code policyJsonName within the state machine. We have tried using the input and Parameters options of Step Functions, but that doesn't work and gives us a validation error.
https://docs.aws.amazon.com/step-functions/latest/dg/input-output-inputpath-params.html
{
  "Comment": "Job Orchestration EMR Step",
  "dataset1": {"policyJsonName": "lambdainput"},
  "StartAt": "EMRFetchClusterId",
  "States": {
    "EMRFetchClusterId": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:XXXXXX-fetch-clusterId",
      "ResultPath": "$.clusterId",
      "Next": "EMRAutoScaling"
    },
    "EMRAutoScaling": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:XXXX-add-auto-scaling",
      "Parameters": {
        "comment": "Provide input to autoscaling in lambda function",
        "InputPath": "$.dataset1"
      }
    }
  }
}
In the link you mentioned, this portion:
{
  "comment": "Example for InputPath.",
  "dataset1": {
    "val1": 1,
    "val2": 2,
    "val3": 3
  },
  "dataset2": {
    "val1": "a",
    "val2": "b",
    "val3": "c"
  }
}
This is actually the input to the state; it's not part of the state definition. You can confirm this because the docs say the following:
For example, suppose the input to your state includes the following.
Instead, if you want to hard-code values, you have to pass them directly in the Parameters field, like this:
"Parameters": {
"policyJsonName": "lambdainput"
}
It feels like you want something like this:
{
  "Comment": "Job Orchestration EMR Step",
  "StartAt": "EMRFetchClusterId",
  "States": {
    "EMRFetchClusterId": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:XXXXXX-fetch-clusterId",
      "Next": "EMRAutoScaling"
    },
    "EMRAutoScaling": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:XXXX-add-auto-scaling",
      "Parameters": {
        "comment": "Provide input to autoscaling in lambda function",
        "clusterId.$": "$.clusterId",
        "policyJsonName": "lambdainput"
      },
      "End": true
    }
  }
}
The "clusterId.$": "$.clusterId" special .$ suffix syntax will a json path lookup of $.clusterId in the payload coming into EMRAutoScaling , as per the documentation here
If any field within the Payload Template (however deeply nested) has a name ending with the characters ".$", its value is transformed according to rules below and the field is renamed to strip the ".$" suffix.
And "lambdainput" is just hard-coded as you specified.
One note:
I removed "dataset1": {"policyJsonName": "lambdainput"} because the only top-level fields allowed are "Comment", "StartAt", "States", "Version", and "TimeoutSeconds", according to Top level fields.
I have an existing AWS Step Functions orchestration that executes an AWS Batch job via Lambdas. However, AWS has recently added the ability to directly invoke other services like AWS Batch from a step. I am keen to use this new functionality but cannot get it working.
https://docs.aws.amazon.com/step-functions/latest/dg/connectors-batch.html
So here is the new step operation that I want to use to invoke Batch:
"File Copy": {
"Type": "Task",
"Resource": "arn:aws:states:::batch:submitJob.sync",
"Parameters": {
"JobName": "MyBatchJob",
"JobQueue": "MySecondaryQueue",
"ContainerOverrides.$": "$.lts_job_container_overrides",
"JobDefinition.$": "$.lts_job_job_definition",
},
"Next": "Upload Start"
}
Note that I am trying to use the $. JSONPath syntax so that parameters are passed through the steps dynamically.
When given the following input:
"lts_job_container_overrides": {
"environment": [
{
"name": "MY_ENV_VARIABLE",
"value": "XYZ"
},
],
"command": [
"/app/file_copy.py"
]
},
"lts_job_job_definition": "MyBatchJobDefinition"
I expected that the environment and command values would be passed through to the corresponding parameter (ContainerOverrides) in AWS Batch. Instead, it appears that Step Functions is trying to promote them as top-level parameters, and then complaining that they are not valid.
{
"error": "States.Runtime",
"cause": "An error occurred while executing the state 'File Copy'
(entered at the event id #29). The Parameters
'{\"ContainerOverrides\":{\"environment\":
[{\"name\":\"MY_ENV_VARIALBE\",\"value\":\"XYZ\"}],\"command\":
[\"/app/file_copy.py\"]},\"JobDefinition\":\"MyBatchJobDefinition\"}'
could not be used to start the Task: [The field 'environment' is not
supported by Step Functions, The field 'command' is not supported by
Step Functions]"
}
How can I stop Step Functions from attempting to interpret the values I am trying to pass through to AWS Batch?
I have tried taking the JSONPath syntax out of the mix and just specifying the ContainerOverrides statically (even though long term this won't be a solution). But even then I encounter issues.
"ContainerOverrides": {
"environment": [
{
"name": "RUN_ID",
"value": "xyz"
}
],
"command": "/app/file_copy.py"
}
In this case, Step Functions itself rejects the definition file on load:
Invalid State Machine Definition: 'SCHEMA_VALIDATION_FAILED: The field
'environment' is not supported by Step Functions at /States/File
Copy/Parameters, SCHEMA_VALIDATION_FAILED: The field 'command' is not
supported by Step Functions at /States/File Copy/Parameters'
So it just appears that ContainerOverrides is problematic full stop? Have I misunderstood how it is intended to be used in this scenario?
Update: the above issue has been addressed (as per the answer below) in the AWS Batch documentation; the following note has been added by AWS:
Note
Parameters in Step Functions are expressed in PascalCase, even when the native service API is camelCase.
This should work; I've tested it and it works fine for me. Both Environment (including its object keys) and Command should start with a capital letter. (A sketch of the dynamic-input variant from the question follows the definition below.)
{
  "StartAt": "AWS Batch: Manage a job",
  "States": {
    "AWS Batch: Manage a job": {
      "Type": "Task",
      "Resource": "arn:aws:states:::batch:submitJob.sync",
      "Parameters": {
        "JobName": "test",
        "JobDefinition": "jobdef",
        "JobQueue": "testq",
        "ContainerOverrides": {
          "Command": [
            "/app/file_copy.py"
          ],
          "Environment": [
            {
              "Name": "MY_ENV_VARIABLE",
              "Value": "XYZ"
            }
          ]
        }
      },
      "End": true
    }
  }
}
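Untested here, but if you still want the overrides to come from the state input, as in the original question, the same casing rule should apply to the input itself. A sketch of the Parameters block using the path syntax:

"Parameters": {
  "JobName": "MyBatchJob",
  "JobQueue": "MySecondaryQueue",
  "JobDefinition.$": "$.lts_job_job_definition",
  "ContainerOverrides.$": "$.lts_job_container_overrides"
}

with the input rewritten to use the PascalCase keys that Step Functions expects:

"lts_job_container_overrides": {
  "Environment": [
    {
      "Name": "MY_ENV_VARIABLE",
      "Value": "XYZ"
    }
  ],
  "Command": [
    "/app/file_copy.py"
  ]
}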