How can I create an RDS instance with the create-environment or another subcommand of aws elasticbeanstalk? I've tried several combinations of parameters to no avail. Below is an example.
APP_NAME="randall-railsapp"
aws s3api create-bucket --bucket "$APP_NAME"
APP_VERSION="$(git describe --always)"
APP_FILE="deploy-$APP_NAME-$APP_VERSION.zip"
git archive -o "$APP_FILE" HEAD
aws s3 cp "$APP_FILE" "s3://$APP_NAME/$APP_FILE"
aws --region us-east-1 elasticbeanstalk create-application-version \
--auto-create-application \
--application-name "$APP_NAME" \
--version-label "$APP_VERSION" \
--source-bundle S3Bucket="$APP_NAME",S3Key="$APP_FILE"
aws --region us-east-1 elasticbeanstalk create-environment \
--application-name "$APP_NAME" \
--version-label "$APP_VERSION" \
--environment-name "$APP_NAME-env" \
--description "randall's rails app environment" \
--solution-stack-name "64bit Amazon Linux 2014.03 v1.0.0 running Ruby 2.1 (Puma)" \
--cname-prefix "$APP_NAME-test" \
--option-settings file://test.json
And the contents of test.json:
[
{
"OptionName": "EC2KeyName",
"Namespace": "aws:autoscaling:launchconfiguration",
"Value": "a-key-is-here"
},
{
"OptionName": "EnvironmentType",
"Namespace": "aws:elasticbeanstalk:environment",
"Value": "SingleInstance"
},
{
"OptionName": "SECRET_KEY_BASE",
"Namespace": "aws:elasticbeanstalk:application:environment",
"Value": "HAHAHAHAHAHA"
},
{
"OptionName": "DBPassword",
"Namespace": "aws:rds:dbinstance",
"Value": "hunter2"
},
{
"OptionName": "DBUser",
"Namespace": "aws:rds:dbinstance",
"Value": "random"
},
{
"OptionName": "DBEngineVersion",
"Namespace": "aws:rds:dbinstance",
"Value": "9.3"
},
{
"OptionName": "DBEngine",
"Namespace": "aws:rds:dbinstance",
"Value": "postgres"
}
]
Does anyone know why this is failing? Anything I specify in the aws:rds:dbinstance namespace seems to get removed from the configuration.
Just setting the aws:rds:dbinstance options does not create an RDS database.
Currently you can create an RDS instance using one of the following techniques:
Create it using the AWS Console
Use the EB CLI
Use the Resources section of an ebextensions config to create an RDS resource
The first two approaches are the most convenient, as they do all the heavy lifting for you; the third requires some extra work. The third approach is the one to use if you are not working with the console or the EB CLI.
You can create an RDS resource for your beanstalk environment using the following ebextension snippet. Create a file called 01-rds.config in the .ebextensions directory of your app source.
Resources:
  AWSEBRDSDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      AllocatedStorage: 5
      DBInstanceClass: db.t2.micro
      DBName: myawesomeapp
      Engine: postgres
      EngineVersion: 9.3
      MasterUsername: myAwesomeUsername
      MasterUserPassword: myCrazyPassword
This file is in YAML format, so indentation is important. You could also use JSON if you like.
These are not option settings, so you cannot pass them via --option-settings test.json. You just need to bundle this file with your app source.
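For example, here is a minimal bundling sketch that reuses the variables from the question (note that git archive only packages committed files, so the config has to be committed first):

mkdir -p .ebextensions
# put the Resources snippet above into .ebextensions/01-rds.config
git add .ebextensions/01-rds.config
git commit -m "Add RDS ebextension"
APP_VERSION="$(git describe --always)"
APP_FILE="deploy-$APP_NAME-$APP_VERSION.zip"
git archive -o "$APP_FILE" HEAD   # now includes .ebextensions/01-rds.config
aws s3 cp "$APP_FILE" "s3://$APP_NAME/$APP_FILE"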
Read more about what properties you can configure on your RDS database here. On this page you can also find what properties are required and what properties are optional.
Let me know if the above does not work for you or if you have any further questions.
As of December 2017, we use the following ebextensions:
$ cat .ebextensions/rds.config
Resources:
  AWSEBRDSDBSecurityGroup:
    Type: AWS::RDS::DBSecurityGroup
    Properties:
      EC2VpcId:
        Fn::GetOptionSetting:
          OptionName: "VpcId"
      GroupDescription: RDS DB VPC Security Group
      DBSecurityGroupIngress:
        - EC2SecurityGroupId:
            Ref: AWSEBSecurityGroup
  AWSEBRDSDBSubnetGroup:
    Type: AWS::RDS::DBSubnetGroup
    Properties:
      DBSubnetGroupDescription: RDS DB Subnet Group
      SubnetIds:
        Fn::Split:
          - ","
          - Fn::GetOptionSetting:
              OptionName: DBSubnets
  AWSEBRDSDatabase:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Delete
    Properties:
      PubliclyAccessible: true
      MultiAZ: false
      Engine: mysql
      EngineVersion: 5.7
      BackupRetentionPeriod: 0
      DBName: test
      MasterUsername: toor
      MasterUserPassword: 123456789
      AllocatedStorage: 10
      DBInstanceClass: db.t2.micro
      DBSecurityGroups:
        - Ref: AWSEBRDSDBSecurityGroup
      DBSubnetGroupName:
        Ref: AWSEBRDSDBSubnetGroup

Outputs:
  RDSId:
    Description: "RDS instance identifier"
    Value:
      Ref: "AWSEBRDSDatabase"
  RDSEndpointAddress:
    Description: "RDS endpoint address"
    Value:
      Fn::GetAtt: ["AWSEBRDSDatabase", "Endpoint.Address"]
  RDSEndpointPort:
    Description: "RDS endpoint port"
    Value:
      Fn::GetAtt: ["AWSEBRDSDatabase", "Endpoint.Port"]
  AWSEBRDSDatabaseProperties:
    Description: Properties associated with the RDS database instance
    Value:
      Fn::Join:
        - ","
        - - Ref: AWSEBRDSDatabase
          - Fn::GetAtt: ["AWSEBRDSDatabase", "Endpoint.Address"]
          - Fn::GetAtt: ["AWSEBRDSDatabase", "Endpoint.Port"]
With the following custom options:
$ cat .ebextensions/custom-options.config
option_settings:
  "aws:elasticbeanstalk:customoption":
    DBSubnets: subnet-1234567,subnet-7654321
    VpcId: vpc-1234567
The only catch is that you have to explicitly pass the RDS_* environment variables to your application.
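A rough sketch of how that could be wired up in another .ebextensions config (my own illustration, not part of the original answer; the backtick-wrapped values are the syntax Elastic Beanstalk uses to resolve CloudFormation functions inside option_settings, so verify against the current docs):

option_settings:
  aws:elasticbeanstalk:application:environment:
    RDS_HOSTNAME: '`{"Fn::GetAtt": ["AWSEBRDSDatabase", "Endpoint.Address"]}`'
    RDS_PORT: '`{"Fn::GetAtt": ["AWSEBRDSDatabase", "Endpoint.Port"]}`'
    RDS_DB_NAME: test
    RDS_USERNAME: toor
    RDS_PASSWORD: "123456789"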
The other answers did not work in my environment as of Sept 2015. After much trial and error, the following worked for me:
config template snippet (YAML):
aws:rds:dbinstance:
  DBAllocatedStorage: '5'
  DBDeletionPolicy: Delete
  DBEngine: postgres
  DBEngineVersion: 9.3.9
  DBInstanceClass: db.t2.micro
  DBPassword: PASSWORD_HERE
  DBUser: USERNAME_HERE
  MultiAZDatabase: false
.ebextensions/rds.config file (JSON):
{
"Parameters": {
"AWSEBDBUser": {
"NoEcho": "true",
"Description": "The name of master user for the client DB Instance.",
"Default": "ebroot",
"Type": "String",
"ConstraintDescription": "must begin with a letter and contain only alphanumeric characters"
},
"AWSEBDBPassword": {
"NoEcho": "true",
"Description": "The master password for the DB instance.",
"Type": "String",
"ConstraintDescription": "must contain only alphanumeric characters"
},
"AWSEBDBName": {
"NoEcho": "true",
"Description": "The DB Name of the RDS instance",
"Default": "ebdb",
"Type": "String",
"ConstraintDescription": "must contain only alphanumeric characters"
}
},
"Resources": {
"AWSEBAutoScalingGroup": {
"Metadata": {
"AWS::ElasticBeanstalk::Ext": {
"_ParameterTriggers": {
"_TriggerConfigDeployment": {
"CmpFn::Insert": {
"values": [
{
"CmpFn::Ref": "Parameter.AWSEBDBUser"
},
{
"CmpFn::Ref": "Parameter.AWSEBDBPassword"
},
{
"CmpFn::Ref": "Parameter.AWSEBDBName"
}
]
}
}
},
"_ContainerConfigFileContent": {
"plugins": {
"rds": {
"Description": "RDS Environment variables",
"env": {
"RDS_USERNAME": {
"Ref": {
"CmpFn::Ref": "Parameter.AWSEBDBUser"
}
},
"RDS_PASSWORD": {
"Ref": {
"CmpFn::Ref": "Parameter.AWSEBDBPassword"
}
},
"RDS_DB_NAME": {
"Ref": {
"CmpFn::Ref": "Parameter.AWSEBDBName"
}
},
"RDS_HOSTNAME": {
"Fn::GetAtt": [
"AWSEBRDSDatabase",
"Endpoint.Address"
]
},
"RDS_PORT": {
"Fn::GetAtt": [
"AWSEBRDSDatabase",
"Endpoint.Port"
]
}
}
}
}
}
}
}
},
"AWSEBRDSDatabase": {
"Type": "AWS::RDS::DBInstance",
"DeletionPolicy": "Delete",
"Properties": {
"DBName": {
"Ref": {
"CmpFn::Ref": "Parameter.AWSEBDBName"
}
},
"AllocatedStorage": "5",
"DBInstanceClass": "db.t2.micro",
"Engine": "postgres",
"DBSecurityGroups": [
{
"Ref": "AWSEBRDSDBSecurityGroup"
}
],
"MasterUsername": {
"Ref": {
"CmpFn::Ref": "Parameter.AWSEBDBUser"
}
},
"MasterUserPassword": {
"Ref": {
"CmpFn::Ref": "Parameter.AWSEBDBPassword"
}
},
"MultiAZ": false
}
},
"AWSEBRDSDBSecurityGroup": {
"Type": "AWS::RDS::DBSecurityGroup",
"Properties": {
"DBSecurityGroupIngress": {
"EC2SecurityGroupName": {
"Ref": "AWSEBSecurityGroup"
}
},
"GroupDescription": "Enable database access to Beanstalk application"
}
}
}
}
I had the same problem, couldn't get it to work via .ebextensions, and I don't like the EB CLI tool.
EB CLI uses an undocumented API feature, and a customized version of the botocore library ('eb_botocore') to make this work. :(
So I went ahead and forked botocore, and merged in the API data file used by eb_botocore: https://github.com/boto/botocore/pull/396
Then I ran 'python setup.py install' on both my modified botocore and aws-cli (both at master), and aws-cli now accepts a --template-specification option on the 'aws elasticbeanstalk create-environment' command. Hooray!
Example usage:
aws elasticbeanstalk create-environment \
    ...various options... \
    --option-settings file://option-settings.json \
    --template-specification file://rds.us-west-2.json
where rds.us-west-2.json is:
{
"TemplateSnippets": [{
"SnippetName": "RdsExtensionEB",
"Order": 10000,
"SourceUrl":
"https://s3.amazonaws.com/elasticbeanstalk-env-resources-us-west-2/eb_snippets/rds/rds.json"
}]
}
(it appears you must select a snippet specific to your EB region).
and option-settings.json contains RDS-related settings similar to ones listed in the question (DBEngine, DBInstanceClass, DBAllocatedStorage, DBPassword).
It works great. I hope the AWS CLI team allows us to use this feature in the official tool in the future. I'm guessing it's not a trivial change or they would have done it already, but it's a pretty major omission functionality-wise from the Elastic Beanstalk API and AWS CLI tool, so hopefully they take a crack at it.
We use CloudFormation and SAM to deploy our Lambda (Node.js) functions. All of our Lambda functions have a layer set through Globals. When we make breaking changes in the layer code, we get errors during deployment because the new function code is rolled out to production with the old layer, and only after a few seconds (~40 seconds in our case) does it start using the new layer. For example, say we add a new class to the layer and import it in the function code: we then get an error that says NewClass is not found for a few seconds during deployment (this happens because the new function code still uses the old layer, which doesn't have NewClass).
Is it possible to ensure the new function code is always rolled out with the latest layer version?
Example CloudFormation template:
Globals:
  Function:
    Runtime: nodejs14.x
    Layers:
      - !Ref CoreLayer

Resources:
  CoreLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: core-layer
      ContentUri: packages/coreLayer/dist
      CompatibleRuntimes:
        - nodejs14.x
    Metadata:
      BuildMethod: nodejs14.x
  ExampleFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: example-function
      CodeUri: packages/exampleFunction/dist
Example CloudFormation deployment events. As you can see, the new layer (CoreLayer123abc456) is created before the Lambda function is updated, so it should be available to the new function code, but for some reason the function is updated and deployed with the old layer version for a few seconds:
Timestamp           | Logical ID         | Status                              | Status reason
2022-05-23 16:26:54 | stack-name         | UPDATE_COMPLETE                     | -
2022-05-23 16:26:54 | CoreLayer789def456 | DELETE_SKIPPED                      | -
2022-05-23 16:26:53 | v3uat-farthing     | UPDATE_COMPLETE_CLEANUP_IN_PROGRESS | -
2022-05-23 16:26:44 | ExampleFunction    | UPDATE_COMPLETE                     | -
2022-05-23 16:25:58 | ExampleFunction    | UPDATE_IN_PROGRESS                  | -
2022-05-23 16:25:53 | CoreLayer123abc456 | CREATE_COMPLETE                     | -
2022-05-23 16:25:53 | CoreLayer123abc456 | CREATE_IN_PROGRESS                  | Resource creation Initiated
2022-05-23 16:25:50 | CoreLayer123abc456 | CREATE_IN_PROGRESS                  | -
2022-05-23 16:25:41 | stack-name         | UPDATE_IN_PROGRESS                  | User Initiated
Example changeset:
{
"resourceChange": {
"logicalResourceId": "ExampleFunction",
"action": "Modify",
"physicalResourceId": "example-function",
"resourceType": "AWS::Lambda::Function",
"replacement": "False",
"moduleInfo": null,
"details": [
{
"target": {
"name": "Environment",
"requiresRecreation": "Never",
"attribute": "Properties"
},
"causingEntity": "ApplicationVersion",
"evaluation": "Static",
"changeSource": "ParameterReference"
},
{
"target": {
"name": "Layers",
"requiresRecreation": "Never",
"attribute": "Properties"
},
"causingEntity": null,
"evaluation": "Dynamic",
"changeSource": "DirectModification"
},
{
"target": {
"name": "Environment",
"requiresRecreation": "Never",
"attribute": "Properties"
},
"causingEntity": null,
"evaluation": "Dynamic",
"changeSource": "DirectModification"
},
{
"target": {
"name": "Code",
"requiresRecreation": "Never",
"attribute": "Properties"
},
"causingEntity": null,
"evaluation": "Static",
"changeSource": "DirectModification"
},
{
"target": {
"name": "Layers",
"requiresRecreation": "Never",
"attribute": "Properties"
},
"causingEntity": "CoreLayer123abc456",
"evaluation": "Static",
"changeSource": "ResourceReference"
}
],
"changeSetId": null,
"scope": [
"Properties"
]
},
"hookInvocationCount": null,
"type": "Resource"
}
I didn't understand why there are two target.name: Layers items in the details array. One of them has causingEntity: CoreLayer123abc456, which is expected because of the newly created layer, but the other has causingEntity: null, and I'm not sure why it is there.
Originally posted on AWS re:Post here
Edit:
After a couple of tests it turns out that the issue is caused by the order of the changes in the changeset. It looks like the changes are applied one by one. For example, for the following changeset it updates the old function code while still using the old layer, and only then updates the function's layers to the latest version, because the Layers change item comes after the Code change item.
{
"resourceChange":{
"logicalResourceId":"ExampleFunction",
"action":"Modify",
"physicalResourceId":"example-function",
"resourceType":"AWS::Lambda::Function",
"replacement":"False",
"moduleInfo":null,
"details":[
{
"target":{
"name":"Layers",
"requiresRecreation":"Never",
"attribute":"Properties"
},
"causingEntity":null,
"evaluation":"Dynamic",
"changeSource":"DirectModification"
},
{
"target":{
"name":"Code",
"requiresRecreation":"Never",
"attribute":"Properties"
},
"causingEntity":null,
"evaluation":"Static",
"changeSource":"DirectModification"
},
{
"target":{
"name":"Layers",
"requiresRecreation":"Never",
"attribute":"Properties"
},
"causingEntity":"CoreLayer123abc456",
"evaluation":"Static",
"changeSource":"ResourceReference"
}
],
"changeSetId":null,
"scope":[
"Properties"
]
},
"hookInvocationCount":null,
"type":"Resource"
}
But in some deployments the order is the other way around, such as:
{
"resourceChange":{
...
"details":[
...
{
"target":{
"name":"Layers",
"requiresRecreation":"Never",
"attribute":"Properties"
},
"causingEntity":"CoreLayer123abc456",
"evaluation":"Static",
"changeSource":"ResourceReference"
},
{
"target":{
"name":"Code",
"requiresRecreation":"Never",
"attribute":"Properties"
},
"causingEntity":null,
"evaluation":"Static",
"changeSource":"DirectModification"
}
],
...
}
In this case it first attaches the latest layer version to the old function code and only then updates the function code itself. So for a couple of seconds the old code is invoked with the latest layer version.
So is it possible to apply all these changes in a single step, similar to atomicity in databases?
I solved this by using Lambda versions, as suggested here. For each deployment I create a new Lambda version, so the old and the new function code each use the correct core layer.
Reposting my comment from the original post for reference:
I added a lambda version:
ExampleFunctionVersion:
  Type: AWS::Lambda::Version
  DeletionPolicy: Delete
  Properties:
    FunctionName: !Ref ExampleFunction
and replaced all function references with !Ref ExampleFunctionVersion.
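For instance, a minimal sketch of such a reference (hypothetical output name, my own addition; !Ref on an AWS::Lambda::Version resolves to the version-qualified ARN):

Outputs:
  ExampleFunctionVersionArn:
    Description: Version-qualified ARN of the deployed function
    Value: !Ref ExampleFunctionVersion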
I also tried AutoPublishAlias: live. It worked too, but I had to add an AWS::Lambda::Version resource to be able to set DeletionPolicy: Delete.
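For reference, a rough sketch of that AutoPublishAlias variant applied to the function from the question (my own illustration, not code from the original answer):

ExampleFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: example-function
    CodeUri: packages/exampleFunction/dist
    AutoPublishAlias: live   # SAM publishes a new version and repoints the alias on every deploy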
I am getting "Template contains errors.: [/Resources/CloudTrail/Type/EventSelectors] 'null' values are not allowed in templates" error when I am trying to validate my cloudformation template.
"Conditions":
"S3Enabled":
"Fn::Equals":
- "IsS3Enabled"
- "true"
"Parameters":
"IsS3Enabled":
"AllowedValues":
- "true"
- "false"
"Default": "true"
"Description": "whether you want cloudtrail enabled for S3"
"Type": "String"
"LambdaArns":
"Default": "arn:aws:lambda"
"Description": "The lambda arns of cloudtrail event selectors"
"Type": "CommaDelimitedList"
"S3Arns":
"Default": "'arn:aws:s3:::'"
"Description": "The S3 arns of cloudtrail event selectors"
"Type": "CommaDelimitedList"
"Resources":
"CloudTrail":
"DependsOn":
- "CloudTrailLogBucketPolicy"
"Properties":
"EnableLogFileValidation": "true"
"EventSelectors":
"DataResources": {"Fn::If" : ["S3Enabled", { "Type": "AWS::S3::Object", "Values": !Ref "S3Arns"}, {"Type": "AWS::Lambda::Function", "Values": !Ref "LambdaArns"}]}
"IncludeGlobalServiceEvents": "true"
"IsLogging": "true"
"IsMultiRegionTrail": "true"
"S3BucketName":
"Ref": "CloudTrailLogBucket"
"S3KeyPrefix": "sample"
"TrailName": "sample"
"Type": "AWS::CloudTrail::Trail"
Resources that I am using
CloudTrail CloudFormation :
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cloudtrail-trail.html
Fn::If documentation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-conditions.html#intrinsic-function-reference-conditions-if
I have gone through similar questions; both of them point to indentation issues, but I cannot find the fault in my template.
AWS Cloudformation [/Resources/PrivateGateway/Properties] 'null' values are not allowed in templates
AWS IAM Cloudformation YAML template errror: 'null' values are not allowed
The CloudFormation Linter catches this with:
E0000: Null value at line 31 column 24
DataResources isn't indented far enough, and EventSelectors and DataResources both need to be lists:
All members of a list are lines beginning at the same indentation
level starting with a "- " (a dash and a space)
I'd recommend getting that template snippet working without Fn::If first like this:
"Resources":
"CloudTrail":
"DependsOn":
- "CloudTrailLogBucketPolicy"
"Properties":
"EnableLogFileValidation": "true"
"EventSelectors":
- "DataResources":
- Type: AWS::S3::Object
Values: !Ref S3Arns
and then using Fn::If to set the first DataResource in the DataResources list, for example:
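A rough sketch of that Fn::If step (untested, my own addition; it reuses the S3Enabled condition and the S3Arns/LambdaArns parameters from the question):

"EventSelectors":
  - "DataResources":
      - !If
        - S3Enabled
        - Type: AWS::S3::Object
          Values: !Ref S3Arns
        - Type: AWS::Lambda::Function
          Values: !Ref LambdaArns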
The YAML would probably be as follows:
cloudtrail:
  Type: AWS::CloudTrail::Trail
  Properties:
    EnableLogFileValidation: Yes
    EventSelectors:
      - DataResources:
          - Type: AWS::S3::Object
            Values:
              - arn:aws:s3:::s3-event-step-bucket/
        IncludeManagementEvents: Yes
        ReadWriteType: All
    IncludeGlobalServiceEvents: Yes
    IsLogging: Yes
    IsMultiRegionTrail: Yes
    S3BucketName: s3-event-step-bucket-storage
    TrailName: xyz
I'm trying to deploy stepfunctions with CloudFormation, and I'd like to reference the actual stepfunction definition from an external file in S3.
Here's what the template looks like:
StepFunction1:
  Type: "AWS::StepFunctions::StateMachine"
  Properties:
    StateMachineName: !Ref StepFunction1SampleName
    RoleArn: !GetAtt StepFunctionExecutionRole.Arn
    DefinitionString:
      Fn::Transform:
        Name: AWS::Include
        Parameters:
          Location:
            Fn::Sub: 's3://${ArtifactsBucketName}/StepFunctions/StepFunction1/definition.json'
However, this doesn't seem to be supported, as we are getting the error:
Property validation failure: [Value of property {/DefinitionString} does not match type {String}]
I am doing something similar for APIs, referencing the actual API definition from an external swagger file, and that seems to be working fine.
Example:
SearchAPI:
  Type: "AWS::Serverless::Api"
  Properties:
    Name: myAPI
    StageName: latest
    DefinitionBody:
      Fn::Transform:
        Name: AWS::Include
        Parameters:
          Location:
            Fn::Sub: 's3://${ArtifactsBucketName}/ApiGateway/myAPI/swagger.yaml'
How can I make this work?
The trick is to escape the Step Function's DefinitionString property and include the actual property, DefinitionString, in the external file referenced by CloudFormation. Escaping only the step function definition string would fail, with CloudFormation complaining that the referenced Transform/Include template is not valid YAML/JSON.
Here's what it looks like:
Template:
StepFunction1:
  Type: "AWS::StepFunctions::StateMachine"
  Properties:
    StateMachineName: !Ref StepFunction1SampleName
    RoleArn: !GetAtt StepFunctionExecutionRole.Arn
    Fn::Transform:
      Name: AWS::Include
      Parameters:
        Location:
          Fn::Sub: 's3://${ArtifactsBucketName}/StepFunctions/StepFunction1/definition.json'
External stepfunction definition file:
{
"DefinitionString" : {"Fn::Sub" : "{\r\n \"Comment\": \"A Retry example of the Amazon States Language using an AWS Lambda Function\",\r\n \"StartAt\": \"HelloWorld\",\r\n \"States\": {\r\n \"HelloWorld\": {\r\n \"Type\": \"Task\",\r\n \"Resource\": \"arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${HelloWorldLambdaFunctionName}\", \r\n \"End\": true\r\n }\r\n }\r\n}"}
}
Now, although this solves the problem, it's a bit more difficult to maintain the Step Function definition in this form in source control.
So I've thought about using a CloudFormation custom resource backed by a Lambda function. The Lambda function would take care of the actual DefinitionString escaping.
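As a rough illustration only (my own sketch, not code from the original answer), such a handler could stringify the included definition object and hand it back to CloudFormation; it assumes the cfnresponse helper that AWS injects for inline (ZipFile) custom-resource Lambdas:

import json

import cfnresponse  # available automatically for inline ZipFile custom-resource Lambdas


def handler(event, context):
    # Return the included definition object as an escaped JSON string
    try:
        if event['RequestType'] == 'Delete':
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
            return
        definition = event['ResourceProperties']['DefinitionString']
        cfnresponse.send(event, context, cfnresponse.SUCCESS,
                         {'DefinitionString': json.dumps(definition)})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})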
And here's how it looks in CloudFormation:
Template:
StepFunctionParser:
Type: Custom::AMIInfo
Properties:
ServiceToken: myLambdaArn
DefinitionString:
Fn::Transform:
Name: AWS::Include
Parameters:
Location:
Fn::Sub: 's3://${ArtifactsBucketName}/StepFunctions/StepFunctionX/definition.json'
StepFunctionX:
Type: "AWS::StepFunctions::StateMachine"
Properties:
StateMachineName: StepFunction1SampleNameX
RoleArn: !GetAtt StepFunctionExecutionRole.Arn
DefinitionString: !GetAtt StepFunctionParser.DefinitionString
External StepFunction definition file:
{
"Comment": "A Retry example of the Amazon States Language using an AWS Lambda Function",
"StartAt": "HelloWorld",
"States": {
"HelloWorld": {
"Type": "Task",
"Resource": {"Fn::Sub" : "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${HelloWorldLambdaFunctionName}" },
"End": true
}
}
}
Here's the documentation for creating AWS Lambda-backed Custom Resources.
There's still a problem with this.
Transform/Include converts external template boolean properties into string properties.
Therefore, DefinitionString
"DefinitionString": {
"States": {
"HelloWorld": {
"Type": "Task",
"Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${HelloWorldLambdaFunctionName}",
**"End": true**
}
},
"Comment": "A Retry example of the Amazon States Language using an AWS Lambda Function",
"StartAt": "HelloWorld"
}
becomes
"DefinitionString": {
"States": {
"HelloWorld": {
"Type": "Task",
"Resource": _realLambdaFunctionArn_,
**"End": "true"**
}
},
"Comment": "A Retry example of the Amazon States Language using an AWS Lambda Function",
"StartAt": "HelloWorld"
}
CloudFormation then complains about the StepFunction definition not being valid:
Invalid State Machine Definition: 'SCHEMA_VALIDATION_FAILED: Expected value of type Boolean at /States/HelloWorld/End'
Is this a CloudFormation Transform/Include issue? Can someone from AWS give a statement on this?
In the meantime, one solution could be to use YAML instead of JSON to store the step function definition externally, since no string escaping is needed:
DefinitionString:
  Fn::Sub: |
    {
      "Comment": "A Retry example of the Amazon States Language using an AWS Lambda Function",
      "StartAt": "HelloWorld",
      "States": {
        "HelloWorld": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${HelloWorldLambdaFunctionName}",
          "End": true
        }
      }
    }
Here you can find another approach to this problem: https://github.com/aws-samples/lambda-refarch-imagerecognition
And here is a brief summary:
As part of your CloudFormation template file (let's say it's named template.yml) you define the state machine as follows:
LambdaA:
  ...
LambdaB:
  ...
LambdaC:
  ...
MyStateMachine:
  Type: AWS::StepFunctions::StateMachine
  Properties:
    StateMachineName: !Sub 'MyMachine-${SomeParamIfNeeded}'
    DefinitionString:
      !Sub
        - |-
          ##{{STATEMACHINE_DEF}}
        - {
            lambdaArnA: !GetAtt [ LambdaA, Arn ],
            lambdaArnB: !GetAtt [ LambdaB, Arn ],
            lambdaArnC: !GetAtt [ LambdaC, Arn ]
          }
    RoleArn: !GetAtt [ StatesExecutionRole, Arn ]
Then you will need a replacement script; you can find one here: inject_state_machine_cfn.py
Then you can create your state machine definition as a separate file, say state_machine.json, with the content:
{
"Comment": "This is an example of Parallel State Machine type",
"StartAt": "Parallel",
"States": {
"Parallel": {
"Type": "Parallel",
"Next": "Final State",
"Branches": [
{
"StartAt": "Run Lambda A",
"States": {
"Run Lambda A": {
"Type": "Task",
"Resource": "${lambdaArnA}",
"ResultPath": "$.results",
"End": true
}
}
},
{
"StartAt": "Run Lambda B",
"States": {
"Run Login Functionality Test": {
"Type": "Task",
"Resource": "${lambdaArnB}",
"ResultPath": "$.results",
"End": true
}
}
}
]
},
"Final State": {
"Type": "Task",
"Resource": "${lambdaArnC}",
"End": true
}
}
}
Once you have all these elements you can invoke the transformation function:
python inject_state_machine_cfn.py -s state_machine.json -c template.yml -o template.complete.yml
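If you'd rather not pull in the linked script, here is a bare-bones sketch of the same idea (my own approximation, not the linked code; the flags mirror the invocation above, and it simply splices the JSON into the ##{{STATEMACHINE_DEF}} placeholder while preserving indentation):

import argparse

PLACEHOLDER = '##{{STATEMACHINE_DEF}}'

def inject(state_machine_path, template_path, output_path):
    with open(state_machine_path) as f:
        definition = f.read().rstrip('\n')
    out_lines = []
    with open(template_path) as f:
        for line in f:
            if line.strip() == PLACEHOLDER:
                # Re-indent every definition line to the placeholder's depth
                indent = line[:len(line) - len(line.lstrip())]
                out_lines.extend(indent + d + '\n' for d in definition.splitlines())
            else:
                out_lines.append(line)
    with open(output_path, 'w') as f:
        f.writelines(out_lines)

if __name__ == '__main__':
    p = argparse.ArgumentParser(description='Splice a Step Functions definition into a CloudFormation template')
    p.add_argument('-s', dest='state_machine', required=True)
    p.add_argument('-c', dest='template', required=True)
    p.add_argument('-o', dest='output', required=True)
    args = p.parse_args()
    inject(args.state_machine, args.template, args.output)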
I'm trying to create a CloudFormation template to configure WAF with a geo location condition. I couldn't find the right template yet. Any pointers would be appreciated.
http://docs.aws.amazon.com/waf/latest/developerguide/web-acl-geo-conditions.html
Unfortunately, the actual answer (as of this writing, July 2018) is that you cannot create geo match sets directly in CloudFormation. You can create them via the CLI or SDK, then reference them in the DataId field of a WAFRule's Predicates property.
Creating a GeoMatchSet with one constraint via CLI:
aws waf-regional get-change-token
aws waf-regional create-geo-match-set --name my-geo-set --change-token <token>
aws waf-regional get-change-token
aws waf-regional update-geo-match-set --change-token <new_token> --geo-match-set-id <id> --updates '[ { "Action": "INSERT", "GeoMatchConstraint": { "Type": "Country", "Value": "US" } } ]'
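To confirm the constraint was added (an extra verification step on my part, not from the original answer), this should show the US entry:

aws waf-regional get-geo-match-set --geo-match-set-id <id>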
Now reference that GeoMatchSet id in the CloudFormation:
"WebAclGeoRule": {
"Type": "AWS::WAFRegional::Rule",
"Properties": {
...
"Predicates": [
{
"DataId": "00000000-1111-2222-3333-123412341234" // id from create-geo-match-set
"Negated": false,
"Type": "GeoMatch"
}
]
}
}
There is no documentation for it, but it is possible to create the Geo Match in serverless/cloudformation.
Used the following in serverless:
Resources:
  Geos:
    Type: "AWS::WAFRegional::GeoMatchSet"
    Properties:
      Name: geo
      GeoMatchConstraints:
        - Type: "Country"
          Value: "IE"
Which translates to the following in CloudFormation:
"Geos": {
"Type": "AWS::WAFRegional::GeoMatchSet",
"Properties": {
"Name": "geo",
"GeoMatchConstraints": [
{
"Type": "Country",
"Value": "IE"
}
]
}
}
That can then be referenced when creating a rule:
(serverless) :
Resources:
  MyRule:
    Type: "AWS::WAFRegional::Rule"
    Properties:
      Name: waf
      Predicates:
        - DataId:
            Ref: "Geos"
          Negated: false
          Type: "GeoMatch"
(cloudformation) :
"MyRule": {
"Type": "AWS::WAFRegional::Rule",
"Properties": {
"Name": "waf",
"Predicates": [
{
"DataId": {
"Ref": "Geos"
},
"Negated": false,
"Type": "GeoMatch"
}
]
}
}
I'm afraid that your question is too vague to solicit a helpful response. The CloudFormation User Guide (PDF) defines many different WAF / CloudFront / R53 resources that provide various forms of geo match / geo blocking capability. The link you provide seems to cover a subset of Web Access Control Lists (Web ACLs) - see AWS::WAF::WebACL on page 2540.
I suggest you have a look and if you are still stuck, actually describe what it is you are trying to achieve.
Note that the term you used: "geo location condition" doesn't directly relate to an AWS capability that I'm aware of.
Finally, if you are referring to https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographic-match/, then the latest Cloudformation User Guide doesn't seem to have been updated yet to reflect this.
I need to create an Elastic Beanstalk environment from boto3.
I guess the API sequence should be:
create_application()
From this API we get the "Application Name":
create_environment(kwargs)
Here I am passing the below JSON as kwargs to the API:
{
"ApplicationName": "APP-NAME",
"EnvironmentName": "ABC-Nodejs",
"CNAMEPrefix": "ABC-Neptune",
"SolutionStackName": "64bit Amazon Linux 2016.03 v2.1.1 running Node.js"
}
Questions:
How do I specify which VPC and subnet the environment's EC2 instances should be attached to?
In which subnet should its ELB be created?
Any sample code would be helpful.
Please note: I have one public and one private subnet; we can control where the EC2 instances and the ELB are created through the subnet IDs.
To set up dependent resources with your environment you have to use Elastic Beanstalk option settings. Specifically for VPCs you can use the aws:ec2:vpc namespace; I've linked the documentation for those settings.
The code example would be something like this:
kwargs = {
    "ApplicationName": "APP-NAME",
    "EnvironmentName": "ABC-Nodejs",
    "CNAMEPrefix": "ABC-Neptune",
    "SolutionStackName": "64bit Amazon Linux 2016.03 v2.1.1 running Node.js",
    "OptionSettings": [
        {
            "Namespace": "aws:ec2:vpc",
            "OptionName": "VPCId",
            "Value": "vpc-12345678"
        },
        {
            "Namespace": "aws:ec2:vpc",
            "OptionName": "ELBSubnets",
            "Value": "subnet-11111111,subnet-22222222"
        }
    ]
}
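A short usage sketch showing how this plugs into boto3 (my own addition; values as above):

import boto3

client = boto3.client('elasticbeanstalk')
response = client.create_environment(**kwargs)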
Thanks nbalas,
I am using the below code to create the EB environment.
Despite specifying already-created security groups in "aws:elb:loadbalancer" and "aws:autoscaling:launchconfiguration", it is creating new security groups and attaching them to the EC2 instance and the load balancer. So now both the new and the old security groups are attached to the resources. I don't want new security groups to be created at all and want to use the old ones only.
kwargs={
"ApplicationName": "Test",
"EnvironmentName": "ABC-Nodejs",
"CNAMEPrefix": "ABC-ABC",
"SolutionStackName": "64bit Amazon Linux 2016.03 v2.1.1 running Node.js",
"OptionSettings": [
{
"Namespace": "aws:ec2:vpc",
"OptionName": "Subnets",
"Value": "subnet-*******0"
},
{
"Namespace": "aws:ec2:vpc",
"OptionName": "ELBSubnets",
"Value": "subnet-********1"
},
{
"Namespace": "aws:elb:loadbalancer",
"OptionName": "SecurityGroups",
"Value": "sg-*********2"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "SecurityGroups",
"Value": "sg-**********3"
}
]
}
response = client.create_environment(**kwargs)