I am trying to create an RDS Aurora global database with autoscaling enabled based on a CPU threshold, but when I run a destroy, the autoscaled instances are not deleted by Terraform. Is this a bug? Is there a workaround, or am I missing something?
I used this module: https://github.com/umotif-public/terraform-aws-rds-aurora/blob/master/main.tf
Line 391 creates the auto-scaling resources for Aurora RDS.
However, the state of the autoscaled instances is not tracked by Terraform, and deleting the autoscaling resource did not remove them.
Error message: Error: error deleting RDS Cluster InvalidDBClusterStateFault: Cluster cannot be deleted, it still contains DB instances in non-deleting state. status code: 400, request id: b62f33ee-57d8-4887-9cad-3cbf6229b4ac
Error: Error deleting DB parameter group: InvalidDBParameterGroupState: One or more database instances are still members of this parameter group my-parameter-group, so the group cannot be deleted status code: 400, request id: 8a501e66-39e5-4365-ba33-7667894b9cf6
The only way I have gotten this to work is by manually deleting the autoscaled instances before running the Terraform command, but that does not make sense.
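One way to automate that manual cleanup is a destroy-time provisioner that removes the replicas created by Application Auto Scaling before Terraform tries to delete the cluster. The following is only a minimal sketch: it assumes the cluster resource is named aws_rds_cluster.this, that the AWS CLI is available where Terraform runs, and that the autoscaled replicas keep the default "application-autoscaling-" identifier prefix (verify the actual identifiers in your account first).

# Hypothetical cleanup helper: deletes autoscaled Aurora replicas on destroy
# so the cluster itself can be removed afterwards.
resource "null_resource" "cleanup_autoscaled_replicas" {
  # Destroy-time provisioners may only reference self, so the cluster id
  # is stashed in triggers.
  triggers = {
    cluster_id = aws_rds_cluster.this.id
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<-EOT
      for id in $(aws rds describe-db-instances \
          --filters Name=db-cluster-id,Values=${self.triggers.cluster_id} \
          --query "DBInstances[?starts_with(DBInstanceIdentifier, 'application-autoscaling-')].DBInstanceIdentifier" \
          --output text); do
        aws rds delete-db-instance --db-instance-identifier "$id"
        aws rds wait db-instance-deleted --db-instance-identifier "$id"
      done
    EOT
  }
}

Because the null_resource references the cluster, Terraform destroys it (and runs the provisioner) before it attempts to delete the cluster itself.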
Related
I have an RDS DatabaseCluster set up using CDK that's deployed and running with the default instance type. I want to upgrade the instance type to something larger and deploy it with (hopefully) no downtime. The RDS docs have a Scaling Your Amazon RDS Instance Vertically and Horizontally blog post, but it only gives steps for the console, not CloudFormation/CDK.
I tried modifying the instance type in the console first and then making the same change in CDK and deploying, but I still got the following error:
The specified DB Instance is a member of a cluster. Modify the DB engine version for the DB Cluster using the ModifyDbCluster API (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: [RequestID]; Proxy: null)
How do I update the instance types for an RDS cluster defined using CloudFormation/CDK?
I used Terraform to bring up an AWS RDS SQL Server DB with deletion_protection set to true. Now I am trying to delete the database, so I first ran terraform apply with deletion_protection set to false, and I got the following error:
Error: error deleting Database Instance "awsworkerdb-green": InvalidParameterCombination: Cannot delete protected DB Instance, please disable deletion protection and try again.
status code: 400, request id: 7e787deb-af03-4016-9baa-471ab9c0ae1c
Then I tried running terraform destroy directly, using the same Terraform code with deletion_protection set to false, and I got the following error:
Error: error deleting Database Instance "awsworkerdb-green": InvalidParameterCombination: Cannot delete protected DB Instance, please disable deletion protection and try again.
status code: 400, request id: 9a95ef70-8738-4a31-b0cd-cf10ef05bdec
How does one go about deleting this database instance using terraform?
This requires two distinct API invocations, and therefore two consecutive Terraform runs with two different config changes:
Modify deletion_protection to be false in your config, and apply your changes to the RDS instance.
Remove the RDS from the config and apply, or destroy the RDS resource directly. Either action will delete the RDS instance.
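As a minimal sketch of those two steps, assuming the instance is declared as aws_db_instance.awsworkerdb (use whatever resource address your config actually has):

# Step 1: flip the flag in the config and apply it; this issues
# ModifyDBInstance with DeletionProtection=false.
resource "aws_db_instance" "awsworkerdb" {
  # ... all existing arguments stay the same ...
  deletion_protection = false
}

# Step 2: only after that apply has completed, remove the resource and
# apply again, or run a destroy:
#   terraform apply
#   terraform destroy

Running terraform destroy on its own fails even with deletion_protection = false in the config, because destroy does not apply config changes; it only issues deletes, so the protection flag must be cleared by a normal apply first.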
You can't. You have to do it manually using the AWS console or the AWS CLI with modify-db-instance. The entire point of deletion protection is that the RDS instance is not easy to delete; you have to explicitly modify it before it can be removed.
From the CLI, use the following:
aws rds modify-db-instance --db-instance-identifier <DB_IDENTIFIER> --region <DB_REGION> --no-deletion-protection --apply-immediately
An AWS EKS cluster on 1.18 with the AWS EBS CSI driver. Some pods had statically provisioned EBS volumes, and everything was working.
Then, at some point, all the pods using EBS volumes stopped responding; services waited indefinitely, and the proxy pod killed the connections because of the timeout.
The CloudWatch logs for kube-controller-manager were filled with messages like these:
kubernetes.io/csi: attachment for vol-00c1763<removed-by-me> failed:
rpc error:
code = NotFound desc = Instance "i-0c356612<removed-by-me>" not found
and
event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"podname-65df9bc5c4-2vtj8", UID:"ad4d30b7-<removed-by-me>", APIVersion:"v1", ResourceVersion:"4414194", FieldPath:""}):
type: 'Warning'
reason: 'FailedAttachVolume' AttachVolume.Attach failed for volume "ebs-volumename" :
rpc error: code = NotFound desc = Instance "i-0c356<removed-by-me>" not found
The instance is there; we checked it about 20 times. We tried killing the instance so that CloudFormation would create a new one for us, but the error persists, just with a different instance ID.
Next, we started deleting pods, unmounting volumes, and deleting the sc/pvc/pv objects.
kubectl got stuck at the very end, while deleting the PVs.
We were only able to get them out of this state by patching the finalizers to null on both the PVs and the volume attachments.
The logs contain the following:
csi_attacher.go:662] kubernetes.io/csi: detachment for VolumeAttachment for volume [vol-00c176<removed-by-me>] failed:
rpc error: code = Internal desc = Could not detach volume "vol-00c176<removed-by-me>" from node "i-0c3566<removed-by-me>":
error listing AWS instances:
"WebIdentityErr: failed to retrieve credentials\n
caused by: ExpiredTokenException: Token expired: current date/time 1617384213 must be before the expiration date/time 1616724855\n
\tstatus code: 400, request id: c1cf537f-a14d<removed-by-me>"
I've read about tokens in Kubernetes, but in our case everything is managed by EKS. Googling ExpiredTokenException leads to pages on how to solve the issue in your own applications; again, we manage everything on AWS using kubectl.
I currently want to deploy a simple Django app in AWS using Elastic Beanstalk and RDS, following this tutorial: http://www.1strategy.com/blog/2017/05/23/tutorial-django-elastic-beanstalk/. To create the Beanstalk app I use the command eb create --scale 1 -db -db.engine postgres -db.i db.t2.micro.
In the creation process, the tool fails to create the [AWSEBRDSDBSecurityGroup]. Here is the output:
2018-07-28 06:07:51 ERROR Stack named 'awseb-e-ygq5xuvccr-stack' aborted
operation. Current state: 'CREATE_FAILED' Reason: The following resource(s)
failed to create: [AWSEBRDSDBSecurityGroup].
2018-07-28 06:07:51 ERROR Creating RDS database security group named:
awseb-e-ygq5xuvccr-stack-awsebrdsdbsecuritygroup-oj71kkwnaaag failed Reason:
Either the resource does not exist, or you do not have the required permissions.
I am using an access token with full administrator rights.
How can I solve this issue?
Are you sure you want to use a DB security group and not a VPC security group? AFAIK, DB security groups should no longer be needed in new accounts; you should just be able to attach an existing VPC security group directly to your instance.
If you do need it, then it's most likely one of these:
A badly worded error for hitting the limit on the maximum number of DB security groups
You don't actually have the admin permissions you claimed.
Do try it out and let us know what you find.
I am trying to set up an ALB using Terraform and a spot instance for a non-prod development workspace. The spot instance is created, but upon attempting to use the instance in the aws_alb_target_group_attachment, I receive an error:
* aws_alb_target_group_attachment.ui_servers: Error registering targets with target group: InvalidTarget: The following targets are not in a running state and cannot be registered: '[id]'
status code: 400, request id: [id]
This persists even if I add a depends_on directive to the attachment:
depends_on = ["data.aws_instance.workspace_gz"]
If I re-run terraform apply, it works, so it really is just a lifecycle problem. How can I instruct the attachment to wait until the instance is healthy?
You don't. What you ought to do is create the spot instance within an Auto Scaling group and attach the ASG to the target group.
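A rough sketch of that shape, with placeholder names (the AMI variable, the subnet list, and the aws_alb_target_group.ui resource are assumed to exist elsewhere in the config):

resource "aws_launch_template" "ui" {
  name_prefix   = "ui-spot-"
  image_id      = var.ami_id
  instance_type = "t3.medium"

  # Ask for spot capacity instead of on-demand.
  instance_market_options {
    market_type = "spot"
  }
}

resource "aws_autoscaling_group" "ui" {
  name_prefix         = "ui-spot-"
  min_size            = 1
  max_size            = 1
  desired_capacity    = 1
  vpc_zone_identifier = var.subnet_ids

  launch_template {
    id      = aws_launch_template.ui.id
    version = "$Latest"
  }

  # The ASG registers an instance with the target group only once it is
  # in service, which avoids the InvalidTarget race entirely.
  target_group_arns = [aws_alb_target_group.ui.arn]
}

With the aws_alb_target_group_attachment resource dropped, Terraform no longer races the instance boot; the ASG handles registration and also replaces the spot instance if it is reclaimed.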