I'm trying to create a read replica using the following command:
aws rds create-db-instance-read-replica --db-instance-identifier dbname-read --source-db-instance-identifier dbname --availability-zone us-east-1c
I'm getting the following error:
A client error (InvalidDBInstanceState) occurred when calling the CreateDBInstanceReadReplica operation: Automated backups are not enabled for this database instance. To enable automated backups, use ModifyDBInstance to set the backup retention period to a non-zero value.
I checked and the cluster is configured with automatic backups:
{
"DBInstances": [
{
"PubliclyAccessible": false,
"MasterUsername": "root",
"LicenseModel": "general-public-license",
"VpcSecurityGroups": [
{
"Status": "active",
"VpcSecurityGroupId": "sg"
}
],
"InstanceCreateTime": "2015-12-20T02:38:26.179Z",
"CopyTagsToSnapshot": false,
"OptionGroupMemberships": [
{
"Status": "in-sync",
"OptionGroupName": "default:aurora-5-6"
}
],
"PendingModifiedValues": {},
"Engine": "aurora",
"MultiAZ": false,
"DBSecurityGroups": [],
"DBParameterGroups": [
{
"DBParameterGroupName": "default.aurora5.6",
"ParameterApplyStatus": "in-sync"
}
],
"AutoMinorVersionUpgrade": true,
"PreferredBackupWindow": "03:44-04:14",
"DBSubnetGroup": {
"Subnets": [
{
"SubnetStatus": "Active",
"SubnetIdentifier": "subnet",
"SubnetAvailabilityZone": {
"Name": "us-east-1a"
}
},
{
"SubnetStatus": "Active",
"SubnetIdentifier": "subnet",
"SubnetAvailabilityZone": {
"Name": "us-east-1c"
}
}
],
"DBSubnetGroupName": "dev-subnet-group",
"VpcId": "vpc",
"DBSubnetGroupDescription": "dev-subnet-group",
"SubnetGroupStatus": "Complete"
},
"ReadReplicaDBInstanceIdentifiers": [],
"AllocatedStorage": 1,
"BackupRetentionPeriod": 7,
"PreferredMaintenanceWindow": "mon:10:11-mon:10:41",
"Endpoint": {
"Port": 3306,
"Address": "dbname.us-east-1.rds.amazonaws.com"
},
"DBInstanceStatus": "available",
"EngineVersion": "5.6.10a",
"AvailabilityZone": "us-east-1a",
"DBClusterIdentifier": "dbname",
"StorageType": "aurora",
"DbiResourceId": "db-**********",
"CACertificateIdentifier": "rds-ca-2015",
"StorageEncrypted": false,
"DBInstanceClass": "db.r3.large",
"DbInstancePort": 0,
"DBInstanceIdentifier": "dbname"
}
]
}
Any idea?
Thanks,
Roey
The Aurora engine doesn't support
create-db-instance-read-replica
Instead, create another DB instance using
create-db-instance
with the option --db-cluster-identifier.
The newly created instance automatically syncs with the writer/master and serves as a read-only Aurora replica.
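A minimal sketch of that approach, reusing the identifiers and instance class from the question (adjust them to your own cluster):

```shell
# Add a reader to an existing Aurora cluster by creating a new
# DB instance inside the cluster, instead of a classic read replica.
# "dbname" and "db.r3.large" are placeholders from the question.
aws rds create-db-instance \
    --db-instance-identifier dbname-read \
    --db-cluster-identifier dbname \
    --engine aurora \
    --db-instance-class db.r3.large \
    --availability-zone us-east-1c
```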
I created a Redshift cluster in the EU region. I also created a VPC and other artifacts, including route tables with an internet gateway (using the "VPC and More" option in the GUI), then attached this VPC to the Redshift cluster during creation.
However, I'm unable to connect to this Redshift cluster from CD or my local machine. I'm using the postgres CLI.
Ideally it should have worked. Any ideas?
I'm able to connect from the query editor in the AWS Redshift console.
Edit: Troubleshooting done so far
This psql command times out: psql -h redshift-cluster-1.xxxxx.eu-west-1.redshift.amazonaws.com -U awsuser -d dev -p 5497. The postgres installation is correct, as I'm able to connect to one of my other Redshift installations.
psql: could not connect to server: Connection timed out
Is the server running on host "redshift-cluster-1.xxxx.eu-west-1.redshift.amazonaws.com" (xx.xx.xx.xx) and accepting
TCP/IP connections on port 5497?
dig works but telnet fails
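The pattern above (DNS resolves, TCP never connects) can be reproduced with a quick reachability check; the hostname and port are the ones from the question:

```shell
# DNS resolves but the TCP handshake never completes, which points
# at a security group / network path issue rather than DNS.
HOST=redshift-cluster-1.xxxxx.eu-west-1.redshift.amazonaws.com
dig +short "$HOST"          # prints an IP -> DNS is fine
nc -vz -w 5 "$HOST" 5497    # times out -> port is unreachable
```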
Adding the relevant describe outputs:
VPC
{
"Vpcs": [
{
"CidrBlock": "10.0.0.0/16",
"DhcpOptionsId": "dopt-0f7cfde8258b431f5",
"State": "available",
"VpcId": "vpc-0a673f3e2399e0904",
"OwnerId": "xx",
"InstanceTenancy": "default",
"CidrBlockAssociationSet": [
{
"AssociationId": "vpc-cidr-assoc-0f738813e1a319934",
"CidrBlock": "10.0.0.0/16",
"CidrBlockState": {
"State": "associated"
}
}
],
"IsDefault": false,
"Tags": [
{
"Key": "Name",
"Value": "test-vpc"
}
]
}
]
}
Security Groups
{
"SecurityGroups": [
{
"Description": "default VPC security group",
"GroupName": "default",
"IpPermissions": [
{
"IpProtocol": "-1",
"IpRanges": [],
"Ipv6Ranges": [],
"PrefixListIds": [],
"UserIdGroupPairs": [
{
"GroupId": "sg-07c16a51da213b9a8",
"UserId": "xx"
}
]
}
],
"OwnerId": "xx",
"GroupId": "sg-07c16a51da213b9a8",
"IpPermissionsEgress": [
{
"IpProtocol": "-1",
"IpRanges": [
{
"CidrIp": "0.0.0.0/0"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [],
"UserIdGroupPairs": []
}
],
"VpcId": "vpc-0a673f3e2399e0904"
},
{
"Description": "SG1",
"GroupName": "SG1",
"IpPermissions": [
{
"FromPort": 5497,
"IpProtocol": "tcp",
                    "IpRanges": [
{
"CidrIp": "52.27.190.0/23"
},
{
"CidrIp": "64.39.96.0/20"
},
{
"CidrIp": "10.189.32.85/32"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [
{
"PrefixListId": "pl-6fa54006"
}
],
"ToPort": 6000,
"UserIdGroupPairs": []
}
],
"OwnerId": "xxxxx",
"GroupId": "sg-0fccd6f7706900e54",
"IpPermissionsEgress": [
{
"IpProtocol": "-1",
"IpRanges": [
{
"CidrIp": "0.0.0.0/0"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [],
"UserIdGroupPairs": []
}
],
"VpcId": "vpc-0a673f3e2399e0904"
}
]
}
Redshift cluster
{
"Clusters": [
{
"ClusterIdentifier": "redshift-cluster-1",
"NodeType": "dc2.large",
"ClusterStatus": "available",
"ClusterAvailabilityStatus": "Available",
"MasterUsername": "awsuser",
"DBName": "dev",
"Endpoint": {
"Address": "redshift-cluster-1.xxx.eu-west-1.redshift.amazonaws.com",
"Port": 5497
},
"ClusterCreateTime": "2022-08-09T14:51:58.527000+00:00",
"AutomatedSnapshotRetentionPeriod": 1,
"ManualSnapshotRetentionPeriod": -1,
"ClusterSecurityGroups": [],
"VpcSecurityGroups": [
{
"VpcSecurityGroupId": "sg-07c16a51da213b9a8",
"Status": "active"
},
{
"VpcSecurityGroupId": "sg-0fccd6f7706900e54",
"Status": "active"
}
],
"ClusterParameterGroups": [
{
"ParameterGroupName": "default.redshift-1.0",
"ParameterApplyStatus": "in-sync"
}
],
"ClusterSubnetGroupName": "cluster-subnet-group-1",
"VpcId": "vpc-0a673f3e2399e0904",
"AvailabilityZone": "eu-west-1a",
"PreferredMaintenanceWindow": "mon:03:30-mon:04:00",
"PendingModifiedValues": {},
"ClusterVersion": "1.0",
"AllowVersionUpgrade": true,
"NumberOfNodes": 2,
"PubliclyAccessible": true,
"Encrypted": false,
"ClusterPublicKey": "<>",
"ClusterNodes": [
{
"NodeRole": "LEADER",
"PrivateIPAddress": "10.0.2.114",
"PublicIPAddress": "52.208.40.55"
},
{
"NodeRole": "COMPUTE-0",
"PrivateIPAddress": "10.0.6.171",
"PublicIPAddress": "46.51.199.140"
},
{
"NodeRole": "COMPUTE-1",
"PrivateIPAddress": "10.0.8.205",
"PublicIPAddress": "18.200.92.114"
}
],
"ClusterRevisionNumber": "40496",
"Tags": [],
"EnhancedVpcRouting": false,
"IamRoles": [],
"MaintenanceTrackName": "current",
"ElasticResizeNumberOfNodeOptions": "[4]",
"DeferredMaintenanceWindows": [],
"NextMaintenanceWindowStartTime": "2022-08-15T03:30:00+00:00",
"AvailabilityZoneRelocationStatus": "disabled",
"ClusterNamespaceArn": "<>",
"TotalStorageCapacityInMegaBytes": 800000,
"AquaConfiguration": {
"AquaStatus": "disabled",
"AquaConfigurationStatus": "auto"
}
}
]
}
According to the AWS CLI docs, aws rds stop-db-cluster command returns an output containing the attribute "AutomaticRestartTime". But when I run the command, the returned output does not contain that attribute.
Command executed:
aws rds stop-db-cluster --db-cluster-identifier xxxxxxxxxxxxxxx --output json
Returned output:
{
"DBCluster": {
"AllocatedStorage": 1,
"AvailabilityZones": [
"us-east-1c",
"us-east-1b",
"us-east-1a"
],
"BackupRetentionPeriod": 7,
"DBClusterIdentifier": "xxxxxxxxxxxxxxx",
"DBClusterParameterGroup": "jjjjjjjjjjjjj",
"DBSubnetGroup": "xxxxxx-subnets-4839849389098",
"Status": "available",
"EarliestRestorableTime": "2022-08-04T05:02:13.522000+00:00",
"Endpoint": "xxxxxxxxxxxxx.cluster-cjdlcwljcnljwd.us-east-1.rds.amazonaws.com",
"ReaderEndpoint": "xxxxxxxxxx.cluster-ro-hjdhjhjhjhj.us-east-1.rds.amazonaws.com",
"MultiAZ": false,
"Engine": "aurora-mysql",
"EngineVersion": "5.7.mysql_aurora.2.10.2",
"LatestRestorableTime": "2022-08-11T06:27:19.824000+00:00",
"Port": 3306,
"MasterUsername": "yyyyyyyy",
"PreferredBackupWindow": "05:00-06:30",
"PreferredMaintenanceWindow": "sun:07:00-sun:09:30",
"ReadReplicaIdentifiers": [],
"DBClusterMembers": [
{
"DBInstanceIdentifier": "xxxxxxxxx",
"IsClusterWriter": true,
"DBClusterParameterGroupStatus": "in-sync",
"PromotionTier": 0
}
],
"VpcSecurityGroups": [
{
"VpcSecurityGroupId": "sg-0aj909bc",
"Status": "active"
}
],
"HostedZoneId": "JSKDJLKDKLDLK",
"StorageEncrypted": true,
"KmsKeyId": "arn:aws:kms:us-east-1:000000000000:key/hcdkjchjdhckjwhckj",
"DbClusterResourceId": "cluster-gggggggggg",
"DBClusterArn": "arn:aws:rds:us-east-1:000000000000:cluster:xxxxxxxxx",
"AssociatedRoles": [],
"IAMDatabaseAuthenticationEnabled": true,
"ClusterCreateTime": "2019-02-19T17:29:52.223000+00:00",
"EngineMode": "provisioned",
"DeletionProtection": false,
"HttpEndpointEnabled": false,
"CopyTagsToSnapshot": false,
"CrossAccountClone": false,
"DomainMemberships": []
}
}
What am I doing wrong here?
I am launching an EMR cluster using a Step Functions state machine. I need to change the cluster's timezone from UTC to IST. When launching the cluster through the AWS console, we can specify the configurations in JSON format, but in the case of Step Functions it is unclear which parameter is required to specify configurations such as the time zone, Hadoop heap size, etc.
The basic code to launch the EMR cluster that I am using is as follows -
{
"StartAt": "Create an EMR cluster",
"States": {
"Create an EMR cluster": {
"Type": "Task",
"Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
"Parameters": {
"Name": "Sample_Cluster",
"VisibleToAllUsers": true,
"ReleaseLabel": "emr-5.26.0",
"Applications": [
{ "Name": "Hive" },
{
"Name": "Hadoop"
}
],
"Tags": [
{
"Key": "Name",
"Value": "Testcluster_emr"
}
],
"Instances": {
"KeepJobFlowAliveWhenNoSteps": true,
"InstanceFleets": [
{
"Name": "MyMasterFleet",
"InstanceFleetType": "MASTER",
"TargetOnDemandCapacity": 1,
"InstanceTypeConfigs": [
{
"InstanceType": "m5.xlarge"
}
]
},
{
"Name": "MyCoreFleet",
"InstanceFleetType": "CORE",
"TargetOnDemandCapacity": 1,
"InstanceTypeConfigs": [
{
"InstanceType": "m5.xlarge"
}
]
}
        ]
}
},
"ResultPath": null,
"End": true
}
}
}
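For what it's worth, the elasticmapreduce:createCluster integration passes its Parameters through to the EMR RunJobFlow API, which accepts a Configurations array alongside Instances. A sketch of what that could look like for the timezone case; the hadoop-env/export classification and the TZ value here are assumptions to adapt, not a verified recipe:

```json
"Parameters": {
  "Name": "Sample_Cluster",
  "ReleaseLabel": "emr-5.26.0",
  "Configurations": [
    {
      "Classification": "hadoop-env",
      "Configurations": [
        {
          "Classification": "export",
          "Properties": {
            "TZ": "Asia/Kolkata"
          }
        }
      ]
    }
  ]
}
```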
I have a t2.2xlarge AWS EC2 instance whose type I need to change to t3.2xlarge.
But when I try to start it I get:
"Error starting instances The requested configuration is currently not
supported. Please check the documentation for supported
configurations."
When I run the check script, everything is fine:
https://github.com/awslabs/aws-support-tools/tree/master/EC2/NitroInstanceChecks
OK NVMe Module is installed and available on your instance
OK ENA Module with version is installed and available on your instance
OK fstab file looks fine and does not contain any device names.
And i also did all the checks described here
https://aws.amazon.com/premiumsupport/knowledge-center/boot-error-linux-nitro-instance/
aws ec2 describe-instances --instance-ids my-instance-id --query "Reservations[].Instances[].EnaSupport"
[
true
]
Is there anything else I should change to be able to start it as t3.2xlarge?
To reproduce:
Create a t2.2xlarge instance with default settings
Stop it and change type to t3.2xlarge
Try to start it
More detailed info about the instance:
aws ec2 describe-instances
{
"Reservations": [
{
"Groups": [],
"Instances": [
{
"AmiLaunchIndex": 0,
"ImageId": "ami-***********",
"InstanceId": "i-***********",
"InstanceType": "t2.2xlarge",
"KeyName": "***********",
"LaunchTime": "2020-11-24T06:11:41+00:00",
"Monitoring": {
"State": "disabled"
},
"Placement": {
"AvailabilityZone": "us-east-1e",
"GroupName": "",
"Tenancy": "default"
},
"PrivateDnsName": "ip-***********.ec2.internal",
"PrivateIpAddress": "***********",
"ProductCodes": [],
"PublicDnsName": "ec2-***********.compute-1.amazonaws.com",
"PublicIpAddress": "***********",
"State": {
"Code": 16,
"Name": "running"
},
"StateTransitionReason": "",
"SubnetId": "subnet-***********",
"VpcId": "vpc-***********",
"Architecture": "x86_64",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"AttachTime": "2020-10-06T05:07:35+00:00",
"DeleteOnTermination": true,
"Status": "attached",
"VolumeId": "vol-***********"
}
}
],
"ClientToken": "",
"EbsOptimized": false,
"EnaSupport": true,
"Hypervisor": "xen",
"NetworkInterfaces": [
{
"Association": {
"IpOwnerId": "amazon",
"PublicDnsName": "***********.compute-1.amazonaws.com",
"PublicIp": "***********"
},
"Attachment": {
"AttachTime": "2020-10-06T05:07:34+00:00",
"AttachmentId": "eni-attach-***********",
"DeleteOnTermination": true,
"DeviceIndex": 0,
"Status": "attached",
"NetworkCardIndex": 0
},
"Description": "",
"Groups": [
{
"GroupName": "launch-wizard-1",
"GroupId": "sg-***********"
}
],
"Ipv6Addresses": [],
"MacAddress": "***********",
"NetworkInterfaceId": "eni-***********",
"OwnerId": "***********",
"PrivateDnsName": "ip-***********.ec2.internal",
"PrivateIpAddress": "***********",
"PrivateIpAddresses": [
{
"Association": {
"IpOwnerId": "amazon",
"PublicDnsName": "ec2-***********.compute-1.amazonaws.com",
"PublicIp": "***********"
},
"Primary": true,
"PrivateDnsName": "ip-***********.ec2.internal",
"PrivateIpAddress": "***********"
}
],
"SourceDestCheck": true,
"Status": "in-use",
"SubnetId": "subnet-***********",
"VpcId": "vpc-***********",
"InterfaceType": "interface"
}
],
"RootDeviceName": "/dev/sda1",
"RootDeviceType": "ebs",
"SecurityGroups": [
{
"GroupName": "launch-wizard-1",
"GroupId": "sg-***********"
}
],
"SourceDestCheck": true,
"Tags": [
{
"Key": "Name",
"Value": ""
}
],
"VirtualizationType": "hvm",
"CpuOptions": {
"CoreCount": 8,
"ThreadsPerCore": 1
},
"CapacityReservationSpecification": {
"CapacityReservationPreference": "open"
},
"HibernationOptions": {
"Configured": false
},
"MetadataOptions": {
"State": "applied",
"HttpTokens": "optional",
"HttpPutResponseHopLimit": 1,
"HttpEndpoint": "enabled"
},
"EnclaveOptions": {
"Enabled": false
}
}
],
"OwnerId": "***********",
"ReservationId": "r-***********"
}
]
}
I tried to launch a t3.2xlarge in us-east-1e and got the following error:
Your requested instance type (t3.2xlarge) is not supported in your requested Availability Zone (us-east-1e). Please retry your request by not specifying an Availability Zone or choosing us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1f.
AWS probably doesn't have t3.2xlarge instances available in this AZ.
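You can confirm which AZs offer a given instance type before moving the instance, using the standard describe-instance-type-offerings call:

```shell
# List the Availability Zones in us-east-1 that offer t3.2xlarge.
# Moving the instance to a subnet in one of these AZs should let it start.
aws ec2 describe-instance-type-offerings \
    --location-type availability-zone \
    --filters Name=instance-type,Values=t3.2xlarge \
    --region us-east-1 \
    --query "InstanceTypeOfferings[].Location"
```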
I have a VPC (say vpc-a) with CIDR range 192.170.0.0/16.
I have created 3 subnets in the VPC, which are as follows:
> aws ec2 describe-subnets --filters Name=vpc-id,Values=vpc-05d932bbfd4bfe3c5
{
"Subnets": [
{
"AvailabilityZone": "ap-south-1b",
"AvailabilityZoneId": "aps1-az3",
"AvailableIpAddressCount": 57,
"CidrBlock": "192.170.80.0/26",
"DefaultForAz": false,
"MapPublicIpOnLaunch": true,
"State": "available",
"SubnetId": "subnet-0a4c7cc6faa094318",
"VpcId": "vpc-05d932bbfd4bfe3c5",
"OwnerId": "336282279309",
"AssignIpv6AddressOnCreation": false,
"Ipv6CidrBlockAssociationSet": [],
"Tags": [
...
],
"SubnetArn": "arn:aws:ec2:ap-south-1:336282279309:subnet/subnet-0a4c7cc6faa094318"
},
{
"AvailabilityZone": "ap-south-1a",
"AvailabilityZoneId": "aps1-az1",
"AvailableIpAddressCount": 48,
"CidrBlock": "192.170.0.0/26",
"DefaultForAz": false,
"MapPublicIpOnLaunch": true,
"State": "available",
"SubnetId": "subnet-0b6e7a1e1840713a9",
"VpcId": "vpc-05d932bbfd4bfe3c5",
"OwnerId": "336282279309",
"AssignIpv6AddressOnCreation": false,
"Ipv6CidrBlockAssociationSet": [],
"Tags": [
...
],
"SubnetArn": "arn:aws:ec2:ap-south-1:336282279309:subnet/subnet-0b6e7a1e1840713a9"
},
{
"AvailabilityZone": "ap-south-1c",
"AvailabilityZoneId": "aps1-az2",
"AvailableIpAddressCount": 49,
"CidrBlock": "192.170.160.0/26",
"DefaultForAz": false,
"MapPublicIpOnLaunch": true,
"State": "available",
"SubnetId": "subnet-0e45e8fc489794ea9",
"VpcId": "vpc-05d932bbfd4bfe3c5",
"OwnerId": "336282279309",
"AssignIpv6AddressOnCreation": false,
"Ipv6CidrBlockAssociationSet": [],
"Tags": [
...
],
"SubnetArn": "arn:aws:ec2:ap-south-1:336282279309:subnet/subnet-0e45e8fc489794ea9"
}
]
}
So basically the 3 subnets are:
subnet-0 CIDR: 192.170.0.0/26 Zone: ap-south-1a
subnet-1 CIDR: 192.170.80.0/26 Zone: ap-south-1b
subnet-2 CIDR: 192.170.160.0/26 Zone: ap-south-1c
The route tables are as follows:
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-05d932bbfd4bfe3c5
{
"RouteTables": [
{
"Associations": [
{
"Main": true,
"RouteTableAssociationId": "rtbassoc-02f438a98c50824f2",
"RouteTableId": "rtb-04a14541aaf44b1d1",
"AssociationState": {
"State": "associated"
}
}
],
"PropagatingVgws": [],
"RouteTableId": "rtb-04a14541aaf44b1d1",
"Routes": [
{
"DestinationCidrBlock": "192.170.0.0/16",
"GatewayId": "local",
"Origin": "CreateRouteTable",
"State": "active"
}
],
"Tags": [],
"VpcId": "vpc-05d932bbfd4bfe3c5",
"OwnerId": "336282279309"
},
{
"Associations": [
{
"Main": false,
"RouteTableAssociationId": "rtbassoc-047cce5bf22b50a76",
"RouteTableId": "rtb-08371ccc1f79ebfe6",
"SubnetId": "subnet-0e45e8fc489794ea9",
"AssociationState": {
"State": "associated"
}
},
{
"Main": false,
"RouteTableAssociationId": "rtbassoc-0fbf237d4b7af1b57",
"RouteTableId": "rtb-08371ccc1f79ebfe6",
"SubnetId": "subnet-0a4c7cc6faa094318",
"AssociationState": {
"State": "associated"
}
},
{
"Main": false,
"RouteTableAssociationId": "rtbassoc-066c66d94f1aa32a5",
"RouteTableId": "rtb-08371ccc1f79ebfe6",
"SubnetId": "subnet-0b6e7a1e1840713a9",
"AssociationState": {
"State": "associated"
}
}
],
"PropagatingVgws": [],
"RouteTableId": "rtb-08371ccc1f79ebfe6",
"Routes": [
{
"DestinationCidrBlock": "192.168.0.0/24",
"TransitGatewayId": "tgw-065d7ae5e846681b0",
"Origin": "CreateRoute",
"State": "active"
},
{
"DestinationCidrBlock": "192.170.0.0/16",
"GatewayId": "local",
"Origin": "CreateRouteTable",
"State": "active"
},
{
"DestinationCidrBlock": "0.0.0.0/0",
"GatewayId": "igw-0d37c7db290bf696c",
"Origin": "CreateRoute",
"State": "active"
}
],
"Tags": [
{
"Key": "Name",
"Value": "wqw"
}
],
"VpcId": "vpc-05d932bbfd4bfe3c5",
"OwnerId": "336282279309"
}
]
}
I have 2 EC2 instances:
instance-1 Subnet: subnet-0, IP: 192.170.0.57
instance-2 Subnet: subnet-1, IP: 192.170.80.6
I am unable to SSH from instance-1 to instance-2 or vice versa. However, I am able to SSH to both of them from another instance in another VPC with CIDR 192.168.0.0/16 via the transit gateway, which you can see in the routing information above.
Do I need to add additional routing between subnet-0 and subnet-1? If so, what would be the "target" of such a route? I tried enabling flow logs on the VPC, but nothing appeared in the CloudWatch logs.
Would appreciate some help here.
The local VPC route always allows traffic between subnets in the same VPC, so this is not a routing issue.
Check the following:
Security groups
NACLs
Also enable VPC Flow Logs on both subnets and look for REJECT records.
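A quick way to check the usual suspect: verify that the instances' security group allows SSH ingress from within the VPC, and add a rule if it is missing. The group ID sg-xxxxxxxx below is a placeholder, not a value from the question:

```shell
# Inspect the inbound rules of the instances' security group
# (sg-xxxxxxxx is a placeholder for your actual group ID).
aws ec2 describe-security-groups \
    --group-ids sg-xxxxxxxx \
    --query "SecurityGroups[].IpPermissions"

# Allow SSH between the two subnets by permitting port 22
# from the VPC CIDR range used in the question.
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp --port 22 \
    --cidr 192.170.0.0/16
```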