I have an RDS database cluster. The deployed cluster in AWS has the following attributes:
Engine: aurora-postgresql
EngineVersion: '10.11'
My CloudFormation template specified EngineVersion: '10.7', but I believe the minor version was updated automatically on the deployed cluster. When I tried to deploy my CloudFormation stack, I ran into this error (something very similar; I don't have the exact message available right now):
The specified new engine version is different current version: 10.11 (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination)
I tried updating my CF template to match the deployed engine version, and now I get:
The specified new engine version is same as current version: 10.11 (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination)
I can't figure out what InvalidParameterCombination means here.
How do I get out of this predicament?
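For context, the relevant resource in the template looks roughly like this (the logical ID and other properties are illustrative, not my real template):

```yaml
Resources:
  MyDBCluster:                 # illustrative logical ID
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-postgresql
      EngineVersion: '10.7'    # the deployed cluster has drifted to 10.11 via auto minor version upgrade
```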
One option I see to work around this issue is to attach a deletion policy (Retain) to the cluster, update the stack, remove the cluster from the template, update the stack again, and finally import the DB cluster back into the template with the correct version.
This can be difficult with dependencies: for the !Ref calls, one could hard-code the ARN or cluster ID as a mapping, replace the references with the static mapping, and then follow the steps above. At the end, replace the hard-coded IDs with !Ref references to the newly imported DB cluster.
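A minimal sketch of the first step, assuming the cluster's logical ID is MyDBCluster (hypothetical name):

```yaml
Resources:
  MyDBCluster:
    Type: AWS::RDS::DBCluster
    DeletionPolicy: Retain     # keeps the physical cluster when the resource is removed from the stack
    Properties:
      Engine: aurora-postgresql
      EngineVersion: '10.11'
```

After this update succeeds, delete the resource from the template and update again (the cluster survives because of the Retain policy), then re-add it with the correct EngineVersion using a CloudFormation resource import (a change set of type IMPORT).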
Related
I'm trying to deploy a conformance pack stack via CloudFormation for AWS Config. I'm using https://github.com/awslabs/aws-config-rules/blob/master/aws-config-conformance-packs/Operational-Best-Practices-for-NIST-CSF.yaml as my template and I'm getting an error saying "The sourceIdentifier AWS_CONFIG_PROCESS_CHECK is invalid. Please refer to the documentation for a list of valid sourceIdentifiers that can be used when AWS is the Owner. (Service: AmazonConfig; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: cbaa077f-f932-4918-84a9-b38cecf8b1df; Proxy: null)", which causes a rollback and deletion of resources. I deployed this same template through AWS Config and it worked just fine. I also used a NIST CSF sample pack template through AWS Config and it worked as well. My question is why it doesn't deploy via CloudFormation with this template. Thank you.
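As I understand it, process-check rules (sourceIdentifier AWS_CONFIG_PROCESS_CHECK) are only valid inside a conformance pack, not as standalone AWS::Config::ConfigRule resources, which is why deploying the pack's rules individually through CloudFormation fails while the AWS Config console path works. One way to deploy the pack itself via CloudFormation is to wrap the template in an AWS::Config::ConformancePack resource; a sketch with a placeholder bucket and key:

```yaml
Resources:
  NistCsfPack:
    Type: AWS::Config::ConformancePack
    Properties:
      ConformancePackName: nist-csf-best-practices
      # placeholder S3 location; upload the conformance pack YAML there first
      TemplateS3Uri: s3://my-config-bucket/Operational-Best-Practices-for-NIST-CSF.yaml
```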
I have an RDS DatabaseCluster set up using CDK that's deployed and running with the default instance type. I want to upgrade the instance type to something larger and deploy it with (hopefully) no downtime. The RDS docs have a Scaling Your Amazon RDS Instance Vertically and Horizontally blog post, but it only covers modifying the instance through the console, not CloudFormation/CDK.
I tried modifying the instance type in the console, then made the same change in CDK and deployed, but still got the following error:
The specified DB Instance is a member of a cluster. Modify the DB engine version for the DB Cluster using the ModifyDbCluster API (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: [RequestID]; Proxy: null)
How do I update the instance types for an RDS cluster defined using CloudFormation/CDK?
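For reference, in the underlying CloudFormation model the instance class lives on the member AWS::RDS::DBInstance resources, not on the AWS::RDS::DBCluster itself. A sketch (logical IDs illustrative, cluster assumed to be defined elsewhere):

```yaml
Resources:
  ClusterInstance1:
    Type: AWS::RDS::DBInstance
    Properties:
      DBClusterIdentifier: !Ref MyDBCluster   # assumes a DBCluster resource named MyDBCluster
      Engine: aurora-postgresql
      DBInstanceClass: db.r5.large            # the property to change for vertical scaling
```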
I have a Django app running on AWS using Elastic Beanstalk. It has been running for quite a while without any problems.
Just now, upon deploying via the CLI (eb deploy), I ran into the following error:
ERROR: ...Reason: The following resource(s) failed to update: [AWSEBRDSDatabase].
ERROR: Updating RDS database named: [...] failed. Invalid storage size for engine name postgres and storage type standard: 15 (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: [...])
The error message contains the value 15, and indeed, the Django application's database configuration says:
When I try to change that number to an arbitrary 25 GB and apply the change, I run into yet another error:
... Cannot upgrade postgres from 9.5.10 to 9.5.4. ...
So there are two things I don't understand:
Why is the 15GB a problem all of a sudden?
Why would it attempt to "upgrade" from 9.5.10 to 9.5.4?
Explanations and solution suggestions much appreciated!
UPDATE
There seems to be a configuration mismatch in the database engine version.
Elastic Beanstalk config:
RDS instance details:
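If the Beanstalk saved configuration's engine version no longer matches the running instance, one way to pin the values explicitly is an .ebextensions option-settings file. A sketch, assuming the real running version is 9.5.10 (values are examples; the namespace aws:rds:dbinstance is the documented one for Beanstalk-coupled databases):

```yaml
option_settings:
  aws:rds:dbinstance:
    DBEngineVersion: '9.5.10'   # match the actually running version, not the stale saved config
    DBAllocatedStorage: '25'    # must be >= the currently allocated storage
```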
I'm trying to transfer data between S3 and DynamoDB with AWS Data Pipeline.
The error message is below:
Unable to create resource for #EmrClusterForLoad_2017-05-15T18:51:19
due to: The supplied ami version is invalid. (Service:
AmazonElasticMapReduce; Status Code: 400; Error Code:
ValidationException; Request ID: 7ebf0367-399f-11e7-b1d7-29efc4730e41)
But I cannot solve the problem. Help me.
[screenshot: AWS Data Pipeline error]
AMI version 3.9.0 is not supported in all regions:
https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-release-3x.html
Also make sure to select a supported EC2 instance type:
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-emr-supported-instance-types.html
Finally, you need to set "Resize Cluster Before Running" to false in the Table Load activity.
I got it running after making all these changes. Hopefully it helps you too.
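The relevant pipeline-definition fields can be sketched as follows (a pipeline definition is JSON; values are examples, pick an amiVersion supported in your region, and resizeClusterBeforeRunning is my understanding of how the "Resize Cluster Before Running" checkbox is exported):

```json
{
  "objects": [
    {
      "id": "EmrClusterForLoad",
      "type": "EmrCluster",
      "amiVersion": "3.8.0",
      "masterInstanceType": "m1.medium",
      "coreInstanceType": "m1.medium"
    },
    {
      "id": "TableLoadActivity",
      "type": "EmrActivity",
      "runsOn": { "ref": "EmrClusterForLoad" },
      "resizeClusterBeforeRunning": "false"
    }
  ]
}
```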
I've just followed the example process described in the PredictionIO docs to create a PredictionIO cluster on AWS CloudFormation, but my stack rolled back right after creation.
Did any of you successfully follow the docs?
I've looked through the error logs and found this self-explanatory error message:
Value (us-east-1a) for parameter availabilityZone is invalid. Subnets can currently only be created in the following availability zones: us-west-2b, us-west-2a, us-west-2c.
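The template hardcodes us-east-1a while the stack's region only offers us-west-2 availability zones. One way to avoid hardcoding zones in such a template, assuming the subnet resource accepts an AvailabilityZone property:

```yaml
  Subnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref Vpc                           # assumes a VPC resource defined elsewhere
      CidrBlock: 10.0.0.0/24
      AvailabilityZone: !Select [0, !GetAZs '']  # first AZ of whatever region the stack runs in
```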