I created an RDS instance in AWS through Serverless.
At first it did not have encryption, so I took a manual snapshot of the RDS instance and created an encrypted snapshot from it.
Then I changed the Serverless setup as follows:
1. I removed the RDS instance from the AWS stack (I added a condition on the RDS resource and redesigned the template without RDS).
2. I am now re-deploying the database with DBSnapshotIdentifier.
3. If I try to re-run the stack without DBSnapshotIdentifier, it fails.
My question is: once we use DBSnapshotIdentifier, do we need to provide it on every deployment?
Note: I created a fresh stack without providing DBSnapshotIdentifier and it was created fine, and re-running that deployment without DBSnapshotIdentifier also works. I assumed the case above would behave similarly.
Please go through the YAML below to see what I did with RDS:
createRDSInstance: controls whether the RDS instance is created in, or removed from, the template (the stack).
dbSnapshot: controls whether the deployment restores from a snapshot or is a plain update.
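For reference, the two conditions are declared roughly along these lines (only a sketch; the createRDS flag, the comparison values and the default-value syntax are illustrative, not copied from my real template):

Conditions:
  # create the RDS instance only when the flag is set to "true"
  createRDSInstance:
    Fn::Equals: [ "${self:custom.arguments.createRDS}", "true" ]
  # plain create/update when no snapshot identifier is supplied
  dbSnapshot:
    Fn::Equals: [ "${self:custom.arguments.dBSnapshotIdentifier, ''}", "" ]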
testCrid:
  Type: "AWS::RDS::DBInstance"
  DependsOn: "testVPC"
  Condition: createRDSInstance
  Properties:
    DBInstanceIdentifier: "Crid-${self:custom.arguments.stage}"
    DBName:
      Fn::If: [ dbSnapshot, "${self:custom.arguments.database}", Ref: "AWS::NoValue" ]
    Engine: "MySQL"
    EngineVersion: "5.7.16"
    DBInstanceClass: ${self:custom.arguments.dbInstanceClass}
    MasterUsername: test
    MasterUserPassword: ${self:custom.arguments.password}
    PubliclyAccessible: ${self:custom.arguments.public}
    DBParameterGroupName:
      Ref: "CridParameterGroup"
    VPCSecurityGroups:
      - Ref: "PrivateSubnetDBSecurityGroup"
    DBSubnetGroupName:
      Ref: "CridSubnetGroup"
    AllocatedStorage: '5'
    StorageType: "gp2"
    StorageEncrypted: true
    MultiAZ: true
    DBSnapshotIdentifier:
      Fn::If: [ dbSnapshot, Ref: "AWS::NoValue", "${self:custom.arguments.dBSnapshotIdentifier}" ]
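The custom arguments used above are fed from CLI options along these lines (again just a sketch; the option names and defaults are illustrative):

custom:
  arguments:
    stage: ${opt:stage, 'dev'}
    database: ${opt:database, 'crid'}
    dbInstanceClass: ${opt:dbInstanceClass, 'db.t2.micro'}
    password: ${opt:password}
    public: ${opt:public, 'false'}
    # the snapshot identifier I pass on deploys that restore from the encrypted snapshot
    dBSnapshotIdentifier: ${opt:dBSnapshotIdentifier, ''}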
Unable to create an Aurora PostgreSQL database using a CloudFormation YAML template.
Please help me with this.
From AWS::RDS::DBCluster - AWS CloudFormation:
The following example creates an Amazon Aurora PostgreSQL DB cluster that exports logs to Amazon CloudWatch Logs. See the AWS documentation for more information about exporting Aurora DB cluster logs to Amazon CloudWatch Logs.
AWSTemplateFormatVersion: 2010-09-09
Description: >-
  AWS CloudFormation Sample Template for sending Aurora DB cluster logs to
  CloudWatch Logs: Sample template showing how to create an Aurora PostgreSQL DB
  cluster that exports logs to CloudWatch Logs. **WARNING** This template
  enables log exports to CloudWatch Logs. You will be billed for the AWS
  resources used if you create a stack from this template.
Parameters:
  DBUsername:
    NoEcho: 'true'
    Description: Username for PostgreSQL database access
    Type: String
    MinLength: '1'
    MaxLength: '16'
    AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
    ConstraintDescription: must begin with a letter and contain only alphanumeric characters.
  DBPassword:
    NoEcho: 'true'
    Description: Password for PostgreSQL database access
    Type: String
    MinLength: '8'
    MaxLength: '41'
    AllowedPattern: '[a-zA-Z0-9]*'
    ConstraintDescription: must contain only alphanumeric characters.
Resources:
  RDSCluster:
    Type: 'AWS::RDS::DBCluster'
    Properties:
      MasterUsername: !Ref DBUsername
      MasterUserPassword: !Ref DBPassword
      DBClusterIdentifier: aurora-postgresql-cluster
      Engine: aurora-postgresql
      EngineVersion: '10.7'
      DBClusterParameterGroupName: default.aurora-postgresql10
      EnableCloudwatchLogsExports:
        - postgresql
  RDSDBInstance1:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      DBInstanceIdentifier: aurora-postgresql-instance1
      Engine: aurora-postgresql
      DBClusterIdentifier: !Ref RDSCluster
      PubliclyAccessible: 'true'
      DBInstanceClass: db.r4.large
  RDSDBInstance2:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      DBInstanceIdentifier: aurora-postgresql-instance2
      Engine: aurora-postgresql
      DBClusterIdentifier: !Ref RDSCluster
      PubliclyAccessible: 'true'
      DBInstanceClass: db.r4.large
I have just created an RDS Proxy with CloudFormation.
In the Proxies dashboard it shows the RDS proxy as available, but the target group is unavailable. I can't debug this, and the stack is stuck in a CloudFormation update state.
I used a security group that allows all inbound and outbound traffic for both the RDS proxy and the RDS instance, but it doesn't seem to work. Is there something wrong with my config? I have been stuck on this all day.
This is my CloudFormation config:
RDSInstance:
  DependsOn: DBSecurityGroup
  Type: AWS::RDS::DBInstance
  Properties:
    AllocatedStorage: '20'
    AllowMajorVersionUpgrade: false
    AutoMinorVersionUpgrade: true
    AvailabilityZone: ${self:provider.region}a
    DBInstanceClass: db.t2.micro
    DBName: mydb
    VPCSecurityGroups:
      - "Fn::GetAtt": [ DBSecurityGroup, GroupId ]
    Engine: postgres
    EngineVersion: '11.9'
    MasterUsername: postgres
    MasterUserPassword: Fighting001
    PubliclyAccessible: true
    DBSubnetGroupName:
      Ref: DBSubnetGroup
    # VPCSecurityGroups:
    #   Ref: VPC
DBSecretsManager:
  Type: AWS::SecretsManager::Secret
  Properties:
    Description: 'Secret Store for database connection'
    Name: postgres
    SecretString:
      'password'
RDSProxy:
  DependsOn: DBSecurityGroup
  Type: AWS::RDS::DBProxy
  Properties:
    Auth:
      - AuthScheme: SECRETS
        SecretArn:
          Ref: DBSecretsManager
        IAMAuth: DISABLED
    DBProxyName: ${self:provider.stackName}-db-proxy
    DebugLogging: true
    EngineFamily: 'POSTGRESQL'
    RoleArn: 'my role arn'
    VpcSecurityGroupIds:
      - "Fn::GetAtt": [ DBSecurityGroup, GroupId ]
    VpcSubnetIds:
      - Ref: PublicSubnetA
      - Ref: PublicSubnetB
RDSProxyTargetGroup:
  Type: AWS::RDS::DBProxyTargetGroup
  Properties:
    DBProxyName:
      Ref: RDSProxy
    DBInstanceIdentifiers: [ Ref: RDSInstance ]
    TargetGroupName: "default"
    ConnectionPoolConfigurationInfo:
      MaxConnectionsPercent: 45
      MaxIdleConnectionsPercent: 40
      ConnectionBorrowTimeout: 120
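For reference, the allow-all security group mentioned above is defined roughly like this (a sketch; the VPC reference name is an assumption):

DBSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow all inbound and outbound traffic (for debugging only)
    VpcId:
      Ref: VPC
    SecurityGroupIngress:
      - IpProtocol: "-1"      # all protocols and ports
        CidrIp: 0.0.0.0/0
    SecurityGroupEgress:
      - IpProtocol: "-1"
        CidrIp: 0.0.0.0/0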
A likely reason why your template fails is that your AWS::SecretsManager::Secret is not used and has incorrect values.
Your DB uses:
MasterUsername: postgres
MasterUserPassword: Fighting001
But your DBSecretsManager is:
SecretString:
'password'
which is incorrect. I would suggest setting everything up manually in the AWS console first. Then you can check what the correct form of the SecretString is for your use case.
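For example, an RDS Proxy expects the secret value to be a JSON document with username and password keys, so the secret would look more like this (a sketch reusing the credentials from the instance above; in practice you should not hard-code the password in the template):

DBSecretsManager:
  Type: AWS::SecretsManager::Secret
  Properties:
    Description: 'Credentials the proxy uses to connect to the database'
    Name: postgres
    # RDS Proxy reads the username and password keys from the secret value
    SecretString: '{"username": "postgres", "password": "Fighting001"}'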
While this isn't the cause of the original issue mentioned above, it may help someone who reaches this post in the future.
Make sure your RDS instance and the security group associated with it are using the same port.
I experienced the same outcome because my RDS security group was configured using a different port than the RDS instance.
In my case Aurora PostgreSQL defaulted to port 3306, but my security group was using 5432 (because it was copied from an old non-Aurora PostgreSQL RDS instance). I updated my RDS instance to use port 5432 by specifying the Port property, which resolved the issue.
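In CloudFormation terms that means keeping the database port and the security group rule in sync, roughly like this (a sketch with assumed resource names and CIDR):

RDSCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    Engine: aurora-postgresql
    Port: 5432                          # set explicitly so it matches the rule below
    MasterUsername: postgres
    MasterUserPassword: ChangeMe12345   # placeholder for illustration only
    VpcSecurityGroupIds:
      - Fn::GetAtt: [ DBSecurityGroup, GroupId ]

DBSecurityGroupIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId:
      Fn::GetAtt: [ DBSecurityGroup, GroupId ]
    IpProtocol: tcp
    FromPort: 5432                      # must match the Port above
    ToPort: 5432
    CidrIp: 10.0.0.0/16                 # adjust to your VPC CIDR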
I created my RDS Subnet Group via CloudFormation, referencing a ProjectName parameter, and reference it from my DB instance:
DB:
  Type: AWS::RDS::DBInstance
  Properties:
    DBSubnetGroupName: !Ref RDSSubnetGroup
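The subnet group itself is declared roughly like this (a sketch; the exact name built from ProjectName and the subnet references are assumptions):

RDSSubnetGroup:
  Type: AWS::RDS::DBSubnetGroup
  Properties:
    DBSubnetGroupDescription: !Sub "${ProjectName} DB subnet group"
    DBSubnetGroupName: !Sub "${ProjectName}SubnetGroup"   # mixed case here, but RDS stores the name lowercased
    SubnetIds:
      - !Ref PrivateSubnetA
      - !Ref PrivateSubnetB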
The problem now is that CloudFormation says it cannot find my subnet group:
DB subnet group 'AbcDef' does not exist
because it's actually abcdef... How can I resolve this?
I tried looking for a toLower function, but it seems there is none.
The other option appears to be to recreate the stack?
Unfortunately, everything you do in CloudFormation templates is case-sensitive, including property names and parameter values. You may have to recreate the stack.
As you correctly pointed out, there is no Fn::ToLower function. If you really want to achieve this, the only way to do it as of now is to create a Lambda-backed custom resource that converts your string to lower case and returns it, but it is probably not worth doing, as there are plenty of challenges you will come across when dealing with custom resources.
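For illustration only, the template side of such a custom resource could look roughly like this (Custom::Lowercase, LowercaseFunction and the Value attribute are hypothetical names, and the Lambda that does the lowercasing is not shown):

# Hypothetical Lambda-backed custom resource returning a lowercased copy of Input
LowercaseProjectName:
  Type: Custom::Lowercase
  Properties:
    ServiceToken: !GetAtt LowercaseFunction.Arn   # Lambda implementing the lowercasing
    Input: !Ref ProjectName

DBSubnetGroup:
  Type: AWS::RDS::DBSubnetGroup
  Properties:
    DBSubnetGroupDescription: Subnet group with a lowercased name
    DBSubnetGroupName: !GetAtt LowercaseProjectName.Value
    SubnetIds:
      - !Ref PrivateSubnet1
      - !Ref PrivateSubnet2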
I have also found that DB Subnet Groups have their name forcibly changed to lowercase when viewed in the RDS console. Very unusual behavior.
However, I have created them in CloudFormation and it has not caused the error you describe. Here are the bits from my CloudFormation template:
###########
# DB Subnet Group
###########
DBSubnetGroup:
  Type: AWS::RDS::DBSubnetGroup
  Properties:
    DBSubnetGroupDescription: Lab DB Subnet Group
    DBSubnetGroupName: Lab DB Subnet Group
    SubnetIds:
      - !Ref PrivateSubnet1
      - !Ref PrivateSubnet2
    Tags:
      - Key: Name
        Value: DBSubnetGroup
###########
# RDS Database
###########
RDSDatabase:
  Type: AWS::RDS::DBInstance
  Properties:
    DBName: inventory
    DBInstanceIdentifier: inventory-db
    AllocatedStorage: 5
    DBInstanceClass: db.t2.micro
    Engine: MySQL
    MasterUsername: master
    MasterUserPassword: lab-password
    MultiAZ: false
    DBSubnetGroupName: !Ref DBSubnetGroup
    VPCSecurityGroups:
      - !Ref DBSecurityGroup
    Tags:
      - Key: Name
        Value: inventory-db
I would suggest rewriting the function_name and the name of the DBSubnetGroup to dbsubnetgroup. I suppose this will fix the issue.
I had the same issue and tried everything possible; the only way to fix it was to create a new DB subnet group with a lowercase name:
rdssubnetgrouplower:
  Type: "AWS::RDS::DBSubnetGroup"
  Properties:
    DBSubnetGroupDescription: "Private subnet group to keep the cluster private"
    DBSubnetGroupName: rdssubnetgrouplower
    SubnetIds:
      - !Ref PrivateSubnet1
      - !Ref PrivateSubnet2
    Tags:
      - Key: Name
        Value: rdssubnetgrouplower
and then used it in the RDS definition:
MySQLABC:
  Type: "AWS::RDS::DBCluster"
  Properties:
    DBSubnetGroupName: !Ref rdssubnetgrouplower
    ...
    ...
This worked and brought the cluster up. In Terraform there is the lower() function :)
I have a working Elastic Beanstalk environment, which is launched by boto3. Unfortunately, when I try to launch an RDS instance with the environment, it fails and terminates with the error InvalidParameterValue: null, with no indication of which parameter is invalid.
The only thing I changed was adding the file 01_rds.config to .ebextensions:
Resources:
  AWSEBRDSDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      AllocatedStorage: 5
      CopyTagsToSnapshot: true
      DBInstanceClass: db.t2.micro
      DBSnapshotIdentifier: arn:aws:rds:us-west-2:xxxxxxxxxxxx:snapshot:env-qa-seed
      DBSubnetGroupName: "env-qa-staging"
Based on the documentation, this should be all I need.
I also tried with these additional properties, with the same result:
DBInstanceIdentifier: env-db
DBName: site
Engine: MySQL
EngineVersion: 5.6.19b
PubliclyAccessible: false
MasterUsername: dbuser
MasterUserPassword: xxxxxxxxxxxx
I am trying to create a read replica in the west region for an RDS database in the east region through a CloudFormation template.
I am getting an error:
Cannot create a cross region unencrypted read replica from encrypted source.
However, I have tried providing a KMS key ID and setting CopyTagsToSnapshot to true. Here is what my CloudFormation template looks like:
Resources:
  MyDB:
    Type: AWS::RDS::DBInstance
    Properties:
      SourceDBInstanceIdentifier: !Ref ReadReplicaURL
      AllocatedStorage: !Ref DBAllocatedStorage
      CopyTagsToSnapshot: true
      DBSubnetGroupName: !Ref DBSubnetGroup
      VPCSecurityGroups:
        - !Ref DBSG1
      KmsKeyId: !Ref DBEncryptionKey
      StorageEncrypted: true
      DBInstanceClass: !Ref DBInstanceClass
      DBInstanceIdentifier: !Ref DBInstanceIdentifier
      Iops: !Ref DBIops
      MonitoringInterval: !Ref DBMonitoringInterval
      Engine: !Ref Engine
      MonitoringRoleArn: !Ref DBMonitoringRoleARN
      Port: !Ref DBPort
      PreferredMaintenanceWindow: !Ref DBPreferredMaintenanceWindow
      StorageType: io1
The answer I got from an AWS rep:
Unfortunately, creation of encrypted RDS cross-region read replicas is not possible through CloudFormation currently. There is an active feature request to implement this functionality to which I have added your voice. Once the feature is implemented, it will be announced on this page:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/ReleaseHistory.html