AWS + CloudFormation + Elastic Beanstalk

When I create a stack with the CloudFormation template below, using the input parameter EnvironmentType "dev", it creates the Elastic Beanstalk application, creates the environment inside the application, and deploys the sample-app.war file from the S3 bucket.
When I then update the stack with the same template and the input parameter EnvironmentType "stage", it removes the existing dev environment and creates the stage environment inside the application.
I also tried creating a stack again with the same template, entering the application name created in the first step, but this time it reports that the application already exists.
My requirement is to retain the dev environment and have the stage environment created alongside it inside the sample application, using CloudFormation.
Any suggestions, please? (A possible approach is sketched after the template.)
---
AWSTemplateFormatVersion: 2010-09-09
Description: 'Create an ElasticBeanstalk Application, Environment and deploy the war file from S3 bucket'
Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - Label:
          default: 'EBS Application Configuration'
        Parameters:
          - ApplicationName
          - ApplicationDescription
          - ApplicationVersion
      - Label:
          default: 'EBS Environment Configuration'
        Parameters:
          - EnvironmentName
          - EnvironmentType
          - EnvironmentDescription
          - EnvironmentCName
          - MinInstances
          - MaxInstances
Mappings:
  PropertiesMap:
    InstanceType:
      dev: 'SingleInstance'
      qa: 'SingleInstance'
      stage: 'LoadBalanced'
      prod: 'LoadBalanced'
Parameters:
  ApplicationName:
    Type: String
    Description: 'Name of the ElasticBeanstalk Application'
  ApplicationDescription:
    Type: String
    Description: 'ElasticBeanstalk Application Description'
  ApplicationVersion:
    Type: String
    Description: 'Application version description'
  EnvironmentName:
    Type: String
    Description: 'Name of the Environment'
    AllowedPattern: '^([A-Za-z]|[0-9]|-)+$'
  EnvironmentType:
    Type: String
    Description: 'Type of the Environment (dev, qa, stage, prod)'
    AllowedValues:
      - 'dev'
      - 'qa'
      - 'stage'
      - 'prod'
  EnvironmentCName:
    Type: String
    Description: 'CName Prefix for the ElasticBeanstalk environment'
    AllowedPattern: '^([A-Za-z]|[0-9]|-)+$'
  EnvironmentDescription:
    Type: String
    Description: 'Description of the ElasticBeanstalk environment'
  MinInstances:
    Type: Number
    Description: 'Minimum load balanced instances (Mandatory for stage/prod)'
    Default: 2
    MinValue: 2
    MaxValue: 10
  MaxInstances:
    Type: Number
    Description: 'Maximum load balanced instances (Mandatory for stage/prod)'
    Default: 2
    MinValue: 2
    MaxValue: 10
Conditions:
  IsStageOrProdEnvironment:
    !Or [!Equals [stage, !Ref EnvironmentType], !Equals [prod, !Ref EnvironmentType]]
Resources:
  EBSApplication:
    Type: AWS::ElasticBeanstalk::Application
    Properties:
      ApplicationName: !Ref ApplicationName
      Description: !Ref ApplicationDescription
  EBSApplicationVersion:
    Type: AWS::ElasticBeanstalk::ApplicationVersion
    Properties:
      ApplicationName: !Ref EBSApplication
      Description: !Ref ApplicationVersion
      SourceBundle:
        S3Bucket: deployable
        S3Key: artifacts/sample-app.war
  EBSApplicationConfigurationTemplate:
    Type: AWS::ElasticBeanstalk::ConfigurationTemplate
    Properties:
      ApplicationName: !Ref EBSApplication
      Description: 'ElasticBeanstalk Configuration Template'
      SolutionStackName: '64bit Amazon Linux 2018.03 v3.0.2 running Tomcat 8.5 Java 8'
      OptionSettings:
        - Namespace: aws:elasticbeanstalk:environment
          OptionName: EnvironmentType
          Value: !FindInMap [PropertiesMap, InstanceType, !Ref EnvironmentType]
        - Namespace: aws:autoscaling:asg
          OptionName: MinSize
          Value: !If [IsStageOrProdEnvironment, !Ref MinInstances, !Ref 'AWS::NoValue']
        - Namespace: aws:autoscaling:asg
          OptionName: MaxSize
          Value: !If [IsStageOrProdEnvironment, !Ref MaxInstances, !Ref 'AWS::NoValue']
  EBSEnvironment:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: !Ref EBSApplication
      CNAMEPrefix: !Ref EnvironmentCName
      Description: !Ref EnvironmentDescription
      EnvironmentName: !Ref EnvironmentName
      TemplateName: !Ref EBSApplicationConfigurationTemplate
      VersionLabel: !Ref EBSApplicationVersion
Outputs:
  ApplicationURL:
    Description: 'ElasticBeanstalk environment endpoint'
    Value: !Join
      - ''
      - - 'http://'
        - !GetAtt EBSEnvironment.EndpointURL
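One way to meet this requirement, sketched under the assumption that you are free to restructure the templates: create the Application once in its own stack and export its name, then move the ApplicationVersion, ConfigurationTemplate, and Environment into a second template that you instantiate once per environment (one stack for dev, another for stage). Updates to one environment stack then never touch the other. The export name sample-ebs-application-name below is an arbitrary choice for illustration:

# application-stack.yaml (created once)
Resources:
  EBSApplication:
    Type: AWS::ElasticBeanstalk::Application
    Properties:
      ApplicationName: !Ref ApplicationName
      Description: !Ref ApplicationDescription
Outputs:
  ApplicationName:
    Value: !Ref EBSApplication
    Export:
      Name: sample-ebs-application-name

# environment-stack.yaml (one stack per EnvironmentType; this template would
# also hold the ApplicationVersion and ConfigurationTemplate resources shown
# above, with their ApplicationName switched to the same !ImportValue)
Resources:
  EBSEnvironment:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: !ImportValue sample-ebs-application-name
      EnvironmentName: !Ref EnvironmentName
      CNAMEPrefix: !Ref EnvironmentCName
      Description: !Ref EnvironmentDescription
      TemplateName: !Ref EBSApplicationConfigurationTemplate
      VersionLabel: !Ref EBSApplicationVersion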


ReadEndpoint.Address was not found for DBCluster

I am adding Route 53 records to my DBCluster and keep running into the error: Attribute: ReadEndpoint.Address was not found for resource: <DBCluster-name>
The entire stack is created via CloudFormation.
Also, it should be noted that this is for Serverless Aurora, in case that matters.
Here is my code:
AWSTemplateFormatVersion: 2010-09-09
Description: RDS Aurora serverless template
Parameters:
  CustomFunctionArn:
    Default: arn:aws:lambda:us-west-2:123456789:function:vault-secrets-read-lambda-prod
    Description: The ARN of the lambda function to retrieve password from Vault
    Type: String
  DBName:
    AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
    Description: Name of the database
    Type: String
  DBMasterUsername:
    AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
    Description: The master user name for the DB instance
    Type: String
  DBScalingAutoPauseEnabled:
    AllowedValues:
      - 'true'
      - 'false'
    Default: 'true'
    Description: Pause all DB instances after some inactivity
    Type: String
  DBScalingMaxCapacity:
    AllowedValues:
      - 2
      - 4
      - 8
      - 16
      - 32
      - 64
      - 192
      - 384
    Default: 8
    Description: The maximum capacity for an Aurora DB cluster in serverless DB engine mode
    Type: Number
  DBScalingMinCapacity:
    AllowedValues:
      - 2
      - 4
      - 8
      - 16
      - 32
      - 64
      - 192
      - 384
    Default: 2
    Description: The minimum capacity for an Aurora DB cluster in serverless DB engine mode
    Type: Number
  DBScalingSecondsUntilAutoPause:
    Default: 300
    Description: Auto pause after consecutive seconds of inactivity
    MinValue: 300
    MaxValue: 86400
    Type: Number
  Env:
    AllowedValues:
      - prod
      - qa
      - dev
    Type: String
    Description: Environment
  VaultPath:
    Default: secret/dev/dbPassword
    Type: String
  SnapshotId:
    Description: snapshot ID to restore DB cluster from
    Type: String
Conditions:
  EnableAutoPause:
    !Equals [!Ref DBScalingAutoPauseEnabled, 'true']
  DoNotUseSnapshot: !Equals
    - !Ref SnapshotId
    - ''
Mappings:
  Configuration:
    prod:
      HostedZoneEnv: mydomain.com
      HostedZoneId: 'XXX'
      SecurityGroup: sg-123321
      SubnetGroups:
        - subnet-123
        - subnet-456
        - subnet-789
      VPCId: vpc-555
      Tags:
        - Key: Name
          Value: my-db
        - Key: environment
          Value: prod
        - Key: component
          Value: rds-aurora
        - Key: classification
          Value: internal
    qa:
      HostedZoneEnv: mydomain-qa.com
      HostedZoneId: 'XXX'
      SecurityGroup: sg-321123
      SubnetGroups:
        - subnet-098
        - subnet-765
        - subnet-432
      VPCId: vpc-345543
      Tags:
        - Key: Name
          Value: my-db
        - Key: environment
          Value: qa
        - Key: component
          Value: rds-aurora
        - Key: classification
          Value: internal
    dev:
      HostedZoneEnv: mydomain-dev.com
      HostedZoneId: 'XXX'
      SecurityGroup: sg-f3453f
      SubnetGroups:
        - subnet-dsf24327
        - subnet-82542gsda
        - subnet-casaf2344
      VPCId: vpc-23dfsf
      Tags:
        - Key: Name
          Value: my-db
        - Key: environment
          Value: dev
        - Key: component
          Value: rds-aurora
        - Key: classification
          Value: internal
Resources:
  AuroraSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allows access to RDS
      GroupName: !Sub '${AWS::StackName}-aurora-rds-${Env}'
      SecurityGroupIngress:
        - IpProtocol: -1
          CidrIp: 0.0.0.0/0
          FromPort: 5432
          ToPort: 5432
      Tags: !FindInMap [Configuration, !Ref Env, Tags]
      VpcId: !FindInMap [Configuration, !Ref Env, VPCId]
  GetValuefromVault:
    Type: Custom::CustomResource
    Properties:
      ServiceToken: !Ref CustomFunctionArn
      VaultKeyPath: !Ref VaultPath
  DBCluster:
    Type: 'AWS::RDS::DBCluster'
    DeletionPolicy: Snapshot
    UpdateReplacePolicy: Snapshot
    Properties:
      BackupRetentionPeriod: 7
      DBClusterParameterGroupName: default.aurora-postgresql10
      DBSubnetGroupName: !Ref DBSubnetGroup
      DatabaseName: !Ref DBName
      DeletionProtection: false
      # EnableHttpEndpoint: true
      Engine: aurora-postgresql
      EngineMode: serverless
      EngineVersion: '10.7'
      KmsKeyId: !If [DoNotUseSnapshot, !Ref KMSkey, !Ref 'AWS::NoValue']
      MasterUserPassword: !If [DoNotUseSnapshot, !GetAtt 'GetValuefromVault.ValueFromVault', !Ref 'AWS::NoValue']
      MasterUsername: !If [DoNotUseSnapshot, !Ref DBMasterUsername, !Ref 'AWS::NoValue']
      Port: 5432
      ScalingConfiguration:
        AutoPause: !If [EnableAutoPause, true, false]
        MaxCapacity: !Ref DBScalingMaxCapacity
        MinCapacity: !Ref DBScalingMinCapacity
        SecondsUntilAutoPause: !Ref DBScalingSecondsUntilAutoPause
      SnapshotIdentifier: !If [DoNotUseSnapshot, !Ref 'AWS::NoValue', !Ref SnapshotId]
      StorageEncrypted: true
      Tags: !FindInMap [Configuration, !Ref Env, Tags]
      VpcSecurityGroupIds:
        - !GetAtt [AuroraSG, GroupId]
        - !FindInMap [Configuration, !Ref Env, SecurityGroup]
  DBSubnetGroup:
    Type: 'AWS::RDS::DBSubnetGroup'
    Properties:
      DBSubnetGroupDescription: !Sub '${AWS::StackName}-${Env}'
      SubnetIds: !FindInMap [Configuration, !Ref Env, SubnetGroups]
      Tags: !FindInMap [Configuration, !Ref Env, Tags]
  KmsAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: !Sub 'alias/${AWS::StackName}-${Env}-aurora-rds'
      TargetKeyId: !Ref KMSkey
  KMSkey:
    Type: AWS::KMS::Key
    Properties:
      KeyPolicy:
        Id: key-consolepolicy-3
        Version: 2012-10-17
        Statement:
          - Sid: Enable IAM User Permissions
            Effect: Allow
            Principal:
              AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
            Action: 'kms:*'
            Resource: '*'
  RecordSet:
    Type: AWS::Route53::RecordSet
    DependsOn: DBCluster
    Properties:
      HostedZoneId: !FindInMap [Configuration, !Ref Env, HostedZoneId]
      Name: !Join ['', [!Ref DBName, -writer-db, ., !FindInMap [Configuration, !Ref Env, HostedZoneEnv], .]]
      ResourceRecords:
        - !GetAtt DBCluster.Endpoint.Address
      TTL: '60'
      Type: CNAME
  ReadRecordSet:
    Type: 'AWS::Route53::RecordSet'
    DependsOn:
      - DBCluster
    Properties:
      HostedZoneId: !FindInMap [Configuration, !Ref Env, HostedZoneId]
      Name: !Join ['', [!Ref DBName, -reader-db, ., !FindInMap [Configuration, !Ref Env, HostedZoneEnv], .]]
      ResourceRecords:
        - !GetAtt DBCluster.ReadEndpoint.Address
      TTL: '60'
      Type: CNAME
Outputs:
  AuroraHost:
    Value: !GetAtt [DBCluster, Endpoint.Address]
    Export:
      Name: !Join [":", [ !Ref "AWS::StackName", 'Host' ]]
  AuroraSG:
    Value: !GetAtt AuroraSG.GroupId
    Export:
      Name: !Join [":", [ !Ref "AWS::StackName", AuroraSG ]]
  KMS:
    Value: !GetAtt [KMSkey, Arn]
    Export:
      Name: !Join [":", [ !Ref "AWS::StackName", 'KMS' ]]
  DNSName:
    Description: 'The connection endpoint for the DB cluster.'
    Value: !GetAtt 'DBCluster.Endpoint.Address'
    Export:
      Name: !Sub '${AWS::StackName}-DNSName'
  ReadDNSName:
    Description: 'The reader endpoint for the DB cluster.'
    Value: !GetAtt 'DBCluster.ReadEndpoint.Address'
    Export:
      Name: !Sub '${AWS::StackName}-ReadDNSName'
Some things I have tried:
Create new stack: FAIL
Create new stack without ReadRecordSet: FAIL
Create new stack without RecordSet (old name for read recordset): FAIL
Create new stack without RecordSet (new name for read recordset): FAIL
Add a DependsOn to ReadRecordSet (for the first RecordSet): FAIL
Enabling the HTTP endpoint on the cluster: FAIL
Update TTL to 60: FAIL
Update TTL to 0: FAIL
The RecordSet seems to create okay (I tested that by adding DependsOn: RecordSet to the ReadRecordSet so that RecordSet creates first), so it's the ReadRecordSet that is failing because it can't find ReadEndpoint.Address.
Not sure what I am missing here; I've been googling like mad and don't see much about this error. Any help is appreciated!
It turns out that Aurora Serverless doesn't have a reader endpoint: the whole ReadRecordSet section only applies to provisioned clusters, so ReadEndpoint indeed doesn't exist here. Unfortunately, the AWS documentation doesn't mention that explicitly.
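If the same template must also support provisioned clusters, a conditional sketch could keep the reader resources without breaking serverless deployments. DBEngineMode below is a hypothetical new parameter (EngineMode on the DBCluster would be wired to it as well):

Parameters:
  DBEngineMode:
    Type: String
    AllowedValues:
      - serverless
      - provisioned
    Default: serverless
Conditions:
  IsProvisioned: !Equals [!Ref DBEngineMode, 'provisioned']
Resources:
  ReadRecordSet:
    Type: AWS::Route53::RecordSet
    Condition: IsProvisioned   # only provisioned clusters expose ReadEndpoint
    Properties:
      HostedZoneId: !FindInMap [Configuration, !Ref Env, HostedZoneId]
      Name: !Join ['', [!Ref DBName, -reader-db, ., !FindInMap [Configuration, !Ref Env, HostedZoneEnv], .]]
      ResourceRecords:
        - !GetAtt DBCluster.ReadEndpoint.Address
      TTL: '60'
      Type: CNAME
Outputs:
  ReadDNSName:
    Condition: IsProvisioned   # guard the output too, since it also reads ReadEndpoint
    Description: 'The reader endpoint for the DB cluster.'
    Value: !GetAtt 'DBCluster.ReadEndpoint.Address'
    Export:
      Name: !Sub '${AWS::StackName}-ReadDNSName'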

Cloudformation Unable to Use Outputted Parameters with Nested Stacks

I'm trying my hand at CloudFormation nested stacks. The idea is that I create a VPC, S3 bucket, CodeBuild project, and CodePipeline pipeline using CloudFormation.
My problem: CloudFormation is saying that the following parameters (outputted by child stacks) require values:
Vpc
PrivateSubnet1
PrivateSubnet2
PrivateSubnet3
BucketName
These parameters should have values, as the values exist when I look at a completed child stack in the console.
I'll just show the templates for the parent, s3, and codepipeline. With regard to these three templates, the problem is that I am unable to use the output BucketName from S3Stack in my CodePipelineStack.
My code:
cfn-main.yaml
AWSTemplateFormatVersion: 2010-09-09
Description: root template for codepipeline poc
Parameters:
  BucketName:
    Type: String
  VpcName:
    Description: name of the vpc
    Type: String
    Default: sandbox
  DockerUsername:
    Type: String
    Description: username for hub.docker
    Default: seanturner026
  DockerPassword:
    Type: String
    Description: password for hub.docker
    Default: /codebuild/docker/password
  Environment:
    Type: String
    Description: environment
    AllowedValues:
      - dev
      - prod
    Default: dev
  Vpc:
    Type: AWS::EC2::VPC::Id
  PrivateSubnet1:
    Type: AWS::EC2::Subnet::Id
  PrivateSubnet2:
    Type: AWS::EC2::Subnet::Id
  PrivateSubnet3:
    Type: AWS::EC2::Subnet::Id
  GithubRepository:
    Type: String
    Description: github repository
    Default: aws-codepipeline-poc
  GithubBranch:
    Type: String
    Description: github branch
    Default: master
  GithubOwner:
    Type: String
    Description: github owner
    Default: SeanTurner026
  GithubToken:
    Type: String
    Description: github token for codepipeline
    NoEcho: true
Resources:
  VpcStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      Parameters:
        VpcName: !Ref VpcName
      TemplateURL: resources/vpc.yaml
  S3Stack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: resources/s3.yaml
  CodeBuildStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      Parameters:
        Environment: !Ref Environment
        DockerUsername: !Ref DockerUsername
        DockerPassword: !Ref DockerPassword
        Vpc: !GetAtt VpcStack.Outputs.VpcId
        PrivateSubnet1: !GetAtt VpcStack.Outputs.PrivateSubnetId1
        PrivateSubnet2: !GetAtt VpcStack.Outputs.PrivateSubnetId2
        PrivateSubnet3: !GetAtt VpcStack.Outputs.PrivateSubnetId3
      TemplateURL: resources/codebuild.yaml
  CodePipelineStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      Parameters:
        Environment: !Ref Environment
        GithubRepository: !Ref GithubRepository
        GithubBranch: !Ref GithubBranch
        GithubOwner: !Ref GithubOwner
        GithubToken: !Ref GithubToken
        S3: !GetAtt S3Stack.Outputs.BucketName
      TemplateURL: resources/codepipeline.yaml
s3.yaml
AWSTemplateFormatVersion: 2010-09-09
Description: s3 bucket for aws codepipeline poc
Resources:
  S3:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketName: "aws-sean-codepipeline-poc"
Outputs:
  BucketName:
    Description: S3 bucket name
    Value: !Ref S3
codepipeline.yaml (please see ArtifactStore; this is where CloudFormation sees my parameter BucketName as value-less)
AWSTemplateFormatVersion: 2010-09-09
Description: codepipeline for aws codepipeline poc
Parameters:
  BucketName:
    Type: String
  Environment:
    Type: String
    Description: environment
    AllowedValues:
      - dev
      - prod
    Default: dev
  GithubRepository:
    Type: String
    Description: github repository
    Default: aws-codepipeline-poc
  GithubBranch:
    Type: String
    Description: github branch
    Default: master
  GithubOwner:
    Type: String
    Description: github owner
    Default: SeanTurner026
  GithubToken:
    Type: String
    Description: github token for codepipeline
    NoEcho: true
Resources:
  CodePipelineRole:
    Type: "AWS::IAM::Role"
    Properties:
      RoleName: !Join
        - ""
        - - !Ref AWS::StackName
          - "-code-pipeline-role-"
          - !Ref Environment
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          Effect: "Allow"
          Principal:
            Service: "codepipeline.amazonaws.com"
          Action: "sts:AssumeRole"
  CodePipelinePolicy:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyName: !Join
        - ""
        - - !Ref AWS::StackName
          - "-code-pipeline-policy-"
          - !Ref Environment
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          Effect: Allow
          Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
            - s3:putObject
            - s3:getObject
            - codebuild:*
          Resource:
            - "*"
      Roles:
        - !Ref CodePipelineRole
  Pipeline:
    Type: "AWS::CodePipeline::Pipeline"
    Properties:
      Name: !Join
        - ""
        - - "code-pipeline-poc-"
          - !Ref AWS::StackName
      ArtifactStore:
        Location: !Ref BucketName
        Type: S3
      RestartExecutionOnUpdate: true
      RoleArn: !Join
        - ""
        - - "arn:aws:iam::"
          - !Ref AWS::AccountId
          - ":role/"
          - !Ref CodePipelineRole
      Stages:
        - Name: checkout-source-code
          Actions:
            - Name: SourceAction
              RunOrder: 1
              ActionTypeId:
                Category: Source
                Owner: ThirdParty
                Provider: GitHub
                Version: 1
              Configuration:
                Owner: !Ref GithubOwner
                Repo: !Ref GithubRepository
                Branch: !Ref GithubBranch
                PollForSourceChanges: true
                OAuthToken: !Ref GithubToken
              OutputArtifacts:
                - Name: source-code
        - Name: docker-build-push
          Actions:
            - Name: build-push-job
              RunOrder: 1
              InputArtifacts:
                - Name: source-code
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: 1
              Configuration:
                ProjectName: !Ref BuildPushJob
              OutputArtifacts:
                - Name: build-push-job
Sorry if this is too verbose. In case it was missed above: the problem is that ArtifactStore in codepipeline.yaml sees my parameter BucketName as value-less, despite the value being output by S3Stack.
You pass the parameter as S3, but the child template expects it as BucketName, so the value never reaches it. Separately, Vpc, PrivateSubnet1-3, and BucketName are declared as parameters of the parent template with no defaults, which is why CloudFormation asks you for values; since they are filled from child-stack outputs via !GetAtt, they can simply be removed from the parent's Parameters section.
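A minimal sketch of the fix in cfn-main.yaml (with the redundant parent parameters deleted):

CodePipelineStack:
  Type: AWS::CloudFormation::Stack
  Properties:
    Parameters:
      Environment: !Ref Environment
      GithubRepository: !Ref GithubRepository
      GithubBranch: !Ref GithubBranch
      GithubOwner: !Ref GithubOwner
      GithubToken: !Ref GithubToken
      BucketName: !GetAtt S3Stack.Outputs.BucketName   # key renamed from S3 to BucketName
    TemplateURL: resources/codepipeline.yaml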

ComputeEnvironment went INVALID with error: The security group 'XXXX' does not exist

Below is my CloudFormation template.
I have included all of the resource code; please excuse any indentation issues (a copy-paste artifact). I assure you the template runs.
---
AWSTemplateFormatVersion: '2010-09-09'
Description: Sets up your AWS Batch Environment for running workflows
Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - Label:
          default: Compute Environment Config
        Parameters:
          - ComputeEnvironmentName
          - VpcId
          - SubnetIds
          - MinvCpus
          - MaxvCpus
          - DesiredvCpus
      - Label:
          default: Job Definition
        Parameters:
          - JobDefinitionName
          - DockerImage
          - Vcpus
          - Memory
          - Command
          - RetryNumber
      - Label:
          default: Job Queue
        Parameters:
          - JobQueueName
Parameters:
  VpcId:
    Type: 'AWS::EC2::VPC::Id'
    Description: >-
      VpcId of where the whole batch should be deployed. The VPC should have
      2 private subnets.
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
    Description: Subnets you want your batch compute environment to launch in. Recommend private subnets
  MinvCpus:
    Type: String
    Description: Minimum number of CPUs in the compute environment. Default 0.
    Default: 0
    AllowedPattern: "[0-9]+"
  DesiredvCpus:
    Type: String
    Description: Desired number of CPUs in the compute environment to launch with. Default 0.
    Default: 0
    AllowedPattern: "[0-9]+"
  MaxvCpus:
    Type: String
    Description: Maximum number of CPUs in the compute environment. Should be >= than MinCpus
    Default: 256
    AllowedPattern: "[0-9]+"
  RetryNumber:
    Type: String
    Default: "1"
    Description: Number of retries for each AWS Batch job. Integer required.
    MaxLength: 1
    AllowedPattern: "[1-9]"
    ConstraintDescription: Value between 1 and 9
  DockerImage:
    Type: String
    Description: Docker image used to run your jobs
  Vcpus:
    Type: Number
    Description: vCPUs available to Jobs. Default is usually fine
    Default: 2
  Memory:
    Type: Number
    Description: Memory (in MB) available to Jobs. Default is usually fine
    Default: 2000
  JobQueueName:
    Type: String
    Description: Enter job queue Name
  JobDefinitionName:
    Type: String
    Description: Enter JobDefinition Name for the batch
  ComputeEnvironmentName:
    Type: String
    Description: Enter name of the Compute Environment
  VPCCidr:
    Type: String
    Description: 'Cidr Block of the VPC, allows for ssh access internally.'
    Default: '10.0.0.0/8'
    MinLength: "9"
    MaxLength: "18"
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be valid CIDR notation (i.e. x.x.x.x/x)."
  Command:
    Type: CommaDelimitedList
    Description: The command that is passed to the container
  CreateNewRepository:
    Default: false
    Description: >-
      Set this to true if you want to create a new Repository, else
      it will not create a new one
    Type: String
    AllowedValues:
      - true
      - false
  RepositoryName:
    Type: String
    Description: Enter name of the new Repository.
Conditions:
  CreateRepository: !Equals
    - !Ref CreateNewRepository
    - true
  isCommandPresent: !Not [!Equals [!Ref CreateNewRepository, '']]
Resources:
  JobDefinition:
    Type: AWS::Batch::JobDefinition
    Properties:
      Type: container
      JobDefinitionName: !Ref JobDefinitionName
      ContainerProperties:
        Image: !Ref DockerImage
        Vcpus: !Ref Vcpus
        Memory: !Ref Memory
        Command: !Ref Command
        ReadonlyRootFilesystem: true
        Privileged: true
      RetryStrategy:
        Attempts: !Ref RetryNumber
  JobQueue:
    Type: AWS::Batch::JobQueue
    Properties:
      ComputeEnvironmentOrder:
        - Order: 1
          ComputeEnvironment: !Ref MyComputeEnv
      State: ENABLED
      Priority: 10
      JobQueueName: !Ref JobQueueName
  myVPCSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: Security group for batch process.
      SecurityGroupEgress:
        - CidrIp: 0.0.0.0/0
          IpProtocol: '-1'
      SecurityGroupIngress:
        - CidrIp: !Ref VPCCidr
          IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
      VpcId: !Ref VpcId
  MyComputeEnv:
    Type: AWS::Batch::ComputeEnvironment
    Properties:
      Type: MANAGED
      ServiceRole: !GetAtt awsBatchServiceRole.Arn
      ComputeEnvironmentName: !Ref ComputeEnvironmentName
      ComputeResources:
        MinvCpus: !Ref MinvCpus
        MaxvCpus: !Ref MaxvCpus
        DesiredvCpus: !Ref DesiredvCpus
        SecurityGroupIds: [!GetAtt myVPCSecurityGroup.GroupId]
        Type: EC2
        Subnets: !Ref SubnetIds
        InstanceRole: !GetAtt InstanceProfile.Arn
        InstanceTypes:
          - optimal
      State: ENABLED
  awsBatchServiceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - "batch.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSBatchServiceRole
  ecsInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: InstanceRole
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - "ec2.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
        - "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
  InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      InstanceProfileName: InstanceProfile
      Roles:
        - !Ref ecsInstanceRole
  MyRepository:
    Type: AWS::ECR::Repository
    Condition: CreateRepository
    Properties:
      RepositoryName: !Ref RepositoryName
      RepositoryPolicyText:
        Version: "2012-10-17"
        Statement:
          - Sid: AllowPushPull
            Effect: Allow
            Principal: "*"
            Action:
              - "ecr:*"
I am getting this error:
Operation failed, ComputeEnvironment went INVALID with error: CLIENT_ERROR - The security group 'sg-d9b85d91' does not exist
I don't know what is wrong with the code, but strangely, the security group created by myVPCSecurityGroup is sg-2869f263, while the ComputeEnvironment is trying to find sg-d9b85d91.
Taking a stab in the dark here (typing from my mobile phone), but I think it's possibly because you don't have a VPC attached to your compute environment.
Disabling the compute environment in the UI and re-enabling it fixed the issue.
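If toggling in the console isn't acceptable, one possible CloudFormation-side workaround (an untested sketch): properties under ComputeResources such as SecurityGroupIds were not updatable in place on this generation of AWS Batch, so the environment can keep referencing a security group that has since been replaced and deleted. Renaming the compute environment forces CloudFormation to create a replacement that picks up the current group:

MyComputeEnv:
  Type: AWS::Batch::ComputeEnvironment
  Properties:
    Type: MANAGED
    ServiceRole: !GetAtt awsBatchServiceRole.Arn
    # Changing the name requires replacement, so a fresh compute environment
    # is created with the security group that exists now.
    ComputeEnvironmentName: !Sub '${ComputeEnvironmentName}-v2'
    # ComputeResources and State unchanged from the template above.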

AWS Elastic Beanstalk launching into default VPC

I am trying to launch a Tomcat Beanstalk instance into my VPC, but for some reason the instance does not pick up my configuration template.
What I have done:
Created a VPC in a separate script, manually launched the requested instance, and connected to it via SSH.
Problem 1: If I use CloudFormation, for some reason the script (below) creates a new stack for the Beanstalk.
Problem 2: Upon successful launch, the Beanstalk is created in the default VPC, or crashes if I delete the default VPC.
How can I debug this?
Why is this happening, given that I pass the correct parameters to the script?
SampleApplication:
  Type: 'AWS::ElasticBeanstalk::Application'
  Properties:
    ApplicationName: !Ref ApplicationName
    Description: OCAP's AWS Elastic Beanstalk Sample Application
SampleApplicationVersion:
  Type: 'AWS::ElasticBeanstalk::ApplicationVersion'
  Properties:
    Description: Version 1.0
    ApplicationName: !Ref SampleApplication
    SourceBundle:
      S3Bucket: !Ref AppS3Bucket
      S3Key: !Ref AppS3Key
SampleIdentityEnvironment:
  Type: 'AWS::ElasticBeanstalk::Environment'
  Properties:
    ApplicationName: !Ref SampleApplication
    EnvironmentName: OCAPSampleIdentityManager
    VersionLabel: !Ref SampleApplicationVersion
    SolutionStackName: !FindInMap [ StackMap, !Ref StackType, stackName ]
  DependsOn:
    - ConfigurationTemplate
    - SampleApplicationVersion
ConfigurationTemplate:
  Type: AWS::ElasticBeanstalk::ConfigurationTemplate
  Properties:
    ApplicationName: !Ref SampleApplication
    Description: 64bit Amazon Linux running Tomcat 7
    SolutionStackName: !FindInMap [ StackMap, !Ref StackType, stackName ]
    OptionSettings:
      - Namespace: aws:autoscaling:launchconfiguration
        OptionName: EC2KeyName
        Value: !Ref KeyName
      - Namespace: aws:ec2:vpc
        OptionName: VPCId
        Value: vpc-0123456789
      - Namespace: 'aws:ec2:vpc'
        OptionName: Subnets
        Value: subnet-0123456789, subnet-0123456789
      - Namespace: 'aws:ec2:vpc'
        OptionName: ELBSubnets
        Value: subnet-0123456789, subnet-0123456789
      - Namespace: 'aws:ec2:vpc'
        OptionName: AssociatePublicIpAddress
        Value: 'true'
Solved Problem 2:
You need to have this defined in SampleIdentityEnvironment (in place of SolutionStackName, since an environment takes one or the other):
TemplateName: !Ref ConfigurationTemplate
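Sketch of the corrected resource (an environment takes either TemplateName or SolutionStackName, not both, which is why the configuration template was being ignored):

SampleIdentityEnvironment:
  Type: 'AWS::ElasticBeanstalk::Environment'
  Properties:
    ApplicationName: !Ref SampleApplication
    EnvironmentName: OCAPSampleIdentityManager
    VersionLabel: !Ref SampleApplicationVersion
    TemplateName: !Ref ConfigurationTemplate   # replaces SolutionStackName; the aws:ec2:vpc options now apply
  DependsOn:
    - ConfigurationTemplate
    - SampleApplicationVersion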

CloudFormation + EBS: How to create a static IP and route all outbound app server traffic through it?

I have the following CloudFormation config. It does the following:
Creates an Elastic Beanstalk app
Links a domain name to its load balancer
I need to be able to access an FTP server, but it only allows whitelisted IP addresses.
How would I go about creating a static (Elastic?) IP within this configuration, routing traffic through it, and having the IP remain the same if I run this CloudFormation multiple times?
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  S3Bucket:
    Type: String
    Description: S3 Bucket containing zip file
  RolePath:
    Type: String
    Description: RolePath
  HostedZoneName:
    Type: String
    Description: HostedZoneName
  QueueNamePrefix:
    Type: String
    Description: QueueNamePrefix
  AppDebug:
    Type: String
    Description: Debug
    Default: 'false'
  AppDnsCname:
    Type: String
    Description: AppDnsCname
  Environment:
    Type: String
    Description: Environment
  AppName:
    Type: String
    Description: AppName
  AWSRegion:
    Type: String
    Description: AWSRegion
  AppHealthCheckPath:
    Type: String
    Description: Path for container health check
Description: Elastic Beanstalk application & IAM policies
Resources:
  ElasticBeanstalkProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: !Ref 'RolePath'
      Roles:
        - !Ref 'ElasticBeanstalkRole'
  ElasticBeanstalkRole:
    Type: AWS::IAM::Role
    Properties:
      Path: !Ref 'RolePath'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
        - arn:aws:iam::aws:policy/AmazonEC2ContainerServiceFullAccess
        - arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker
        - arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier
        - arn:aws:iam::aws:policy/AmazonSQSFullAccess
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies: []
  ElasticBeanstalkApplication:
    Type: AWS::ElasticBeanstalk::Application
    Properties:
      Description: !Ref 'AppName'
  ElasticBeanstalkVersion:
    Type: AWS::ElasticBeanstalk::ApplicationVersion
    Properties:
      ApplicationName: !Ref 'ElasticBeanstalkApplication'
      Description: Source Code
      SourceBundle:
        S3Bucket: !Ref 'S3Bucket'
        S3Key: !Ref 'S3ZipKey'
  ElasticBeanstalkConfigurationTemplate:
    Type: AWS::ElasticBeanstalk::ConfigurationTemplate
    DependsOn:
      - ElasticBeanstalkProfile
    Properties:
      Description: my-app Configuration Template
      ApplicationName: !Ref 'ElasticBeanstalkApplication'
      SolutionStackName: 64bit Amazon Linux 2017.09 v2.8.4 running Multi-container Docker 17.09.1-ce (Generic)
      OptionSettings:
        - Namespace: aws:elasticbeanstalk:environment
          OptionName: EnvironmentType
          Value: LoadBalanced
        - Namespace: aws:elasticbeanstalk:application
          OptionName: Application Healthcheck URL
          Value: !Ref 'AppHealthCheckPath'
        - Namespace: aws:elasticbeanstalk:cloudwatch:logs
          OptionName: StreamLogs
          Value: true
        - Namespace: aws:elasticbeanstalk:cloudwatch:logs
          OptionName: DeleteOnTerminate
          Value: false
        - Namespace: aws:elasticbeanstalk:cloudwatch:logs
          OptionName: RetentionInDays
          Value: 180
        - Namespace: aws:autoscaling:launchconfiguration
          OptionName: IamInstanceProfile
          Value: !GetAtt 'ElasticBeanstalkProfile.Arn'
        - Namespace: aws:elasticbeanstalk:application:environment
          OptionName: DEBUG
          Value: !Ref 'AppDebug'
        - Namespace: aws:elasticbeanstalk:application:environment
          OptionName: AWS_REGION
          Value: !Ref 'AWSRegion'
        - Namespace: aws:autoscaling:launchconfiguration
          OptionName: InstanceType
          Value: "t2.small"
        - Namespace: aws:elasticbeanstalk:healthreporting:system
          OptionName: SystemType
          Value: "enhanced"
  MyAppDNS:
    Type: AWS::Route53::RecordSetGroup
    DependsOn: ElasticBeanstalkEnvironment
    Properties:
      HostedZoneName: !Ref 'HostedZoneName'
      RecordSets:
        - Name: !Ref 'AppDnsCname'
          Type: CNAME
          TTL: '60'
          ResourceRecords:
            - !GetAtt 'ElasticBeanstalkEnvironment.EndpointURL'
  ElasticBeanstalkEnvironment:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      Description: !Ref 'Environment'
      ApplicationName: !Ref 'ElasticBeanstalkApplication'
      TemplateName: !Ref 'ElasticBeanstalkConfigurationTemplate'
      VersionLabel: !Ref 'ElasticBeanstalkVersion'
      Tier:
        Type: Standard
        Name: WebServer
Use Elastic IP resource association through CloudFormation.
Create the Elastic IP resource:
Type: AWS::EC2::EIP
Properties:
  InstanceId: String
  Domain: String
Associate the Elastic IP resource with your EC2 instance resource:
Type: AWS::EC2::EIPAssociation
Properties:
  AllocationId: String
  EIP: String
  InstanceId: String
  NetworkInterfaceId: String
Don't forget to join the two using !Ref.
Finally, here's an official example of how to do this: Assigning an Amazon EC2 Elastic IP Using AWS::EC2::EIP Snippet
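Applied to this template, a sketch might look like the following. AppElasticIP, AppEIPAssociation, and MyInstance are names invented here for illustration; note that the instances behind a load-balanced Beanstalk environment are created by the environment itself rather than declared in the template, so attaching the EIP this way assumes you have an explicit AWS::EC2::Instance resource to route through:

AppElasticIP:
  Type: AWS::EC2::EIP
  Properties:
    Domain: vpc   # allocate the address for use inside a VPC

AppEIPAssociation:
  Type: AWS::EC2::EIPAssociation
  Properties:
    AllocationId: !GetAtt AppElasticIP.AllocationId   # join the association to the EIP
    InstanceId: !Ref MyInstance                       # hypothetical instance resource

Because the EIP is a first-class resource, repeated stack updates keep the same address for whitelisting as long as AppElasticIP itself is never replaced.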