Connection from Lambda to Aurora database fails

I've set up an Aurora PostgreSQL-compatible database. I can connect to the database via its public address, but I'm not able to connect from a Lambda function placed in the same VPC.
This is just a test environment and the security settings are deliberately loose. In the network settings I tried "no VPC" as well as my default VPC, where both the database and the Lambda live, but it makes no difference.
This is my Node.js code to run a simple SELECT statement:
var params = {
  awsSecretStoreArn: '{mySecurityARN}',
  dbClusterOrInstanceArn: 'myDB-ARN',
  sqlStatements: 'SELECT * FROM userrole',
  database: 'test',
  schema: 'user'
};

const aurora = new AWS.RDSDataService();
let userrightData = await aurora.executeSql(params).promise();
When I run my test in the AWS console I get the following error:
"errorType": "UnknownEndpoint",
"errorMessage": "Inaccessible host: `rds-data.eu-central- 1.amazonaws.com'. This service may not be available in the `eu-central-1' region.",
"trace": [
"UnknownEndpoint: Inaccessible host: `rds-data.eu-central-1.amazonaws.com'. This service may not be available in the `eu-central-1' region.",
I've already worked through the Amazon tutorial, but I can't find anything I didn't try.

The error message "This service may not be available in the `eu-central-1' region." is absolutely right, because Aurora Serverless is not available in eu-central-1.
I had configured a provisioned Aurora PostgreSQL cluster, not an Aurora Serverless one.
AWS.RDSDataService(), which I wanted to use to connect to the database, only works with Aurora Serverless. It's described here: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/RDSDataService.html.
That is why this error message appeared.
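For a provisioned cluster, the Lambda has to open a normal database connection instead of calling the Data API. Below is a minimal sketch using the node-postgres ("pg") client, assuming the Lambda's security group is allowed through to the cluster on port 5432; the host, credentials, and password variable are placeholders, not values from my setup:

    import { Client } from 'pg';

    // Query a provisioned Aurora PostgreSQL cluster directly from Lambda.
    export const handler = async () => {
      const client = new Client({
        host: 'mydb.cluster-xxxxxxxxxxxx.eu-central-1.rds.amazonaws.com', // cluster endpoint
        port: 5432,
        database: 'test',
        user: 'postgres',
        password: process.env.DB_PASSWORD, // e.g. injected from Secrets Manager
      });
      await client.connect();
      try {
        // "user" is a reserved word in Postgres, so the schema is quoted.
        const res = await client.query('SELECT * FROM "user".userrole');
        return res.rows;
      } finally {
        await client.end();
      }
    };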

Related

AWS CDK Athena Data Source

How can I create an Athena data source in AWS CDK which is a JDBC connection to a MySQL database using the AthenaJdbcConnector?
I believe I can use aws-sam's CfnApplication to create the AthenaJdbcConnector Lambda, but how can I connect it to Athena?
I notice a lot of Glue support in CDK which would transfer to Athena (data catalog), and there are several CfnDataSource types in other modules such as QuickSight, but I'm not seeing anything under Athena in CDK.
See the references below.
References:
https://docs.aws.amazon.com/athena/latest/ug/athena-prebuilt-data-connectors-jdbc.html
https://github.com/awslabs/aws-athena-query-federation/tree/master/athena-jdbc
https://serverlessrepo.aws.amazon.com/applications/us-east-1/292517598671/AthenaJdbcConnector
I have been playing with the same issue. Here is what I did to create the Lambda for federated queries (TypeScript):
const vpc = ec2.Vpc.fromLookup(this, 'my-project-vpc', {
  vpcId: props.vpcId
});

const cluster = new rds.ServerlessCluster(this, 'AuroraCluster', {
  engine: rds.DatabaseClusterEngine.AURORA_POSTGRESQL,
  parameterGroup: rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql10'),
  defaultDatabaseName: 'MyDB',
  vpc,
  vpcSubnets: {
    onePerAz: true
  },
  scaling: { autoPause: cdk.Duration.seconds(0) } // Optional. If not set, the cluster will pause after 5 minutes.
});

let password = cluster.secret!.secretValueFromJson('password').toString();
let spillBucket = new Bucket(this, "AthenaFederatedSpill");

let lambdaApp = new CfnApplication(this, "MyDB", {
  location: {
    applicationId: "arn:aws:serverlessrepo:us-east-1:292517598671:applications/AthenaJdbcConnector",
    semanticVersion: "2021.42.1"
  },
  parameters: {
    DefaultConnectionString: `postgres://jdbc:postgresql://${cluster.clusterEndpoint.hostname}/MyDB?user=postgres&password=${password}`,
    LambdaFunctionName: "crossref_federation",
    SecretNamePrefix: `${cluster.secret?.secretName}`,
    SecurityGroupIds: `${cluster.connections.securityGroups.map(value => value.securityGroupId).join(",")}`,
    SpillBucket: spillBucket.bucketName,
    SubnetIds: vpc.privateSubnets[0].subnetId
  }
});
This creates the Lambda with a default connection string, just as if you had used the AWS console wizard in Athena to connect to a data source. Unfortunately it is NOT possible to add an Athena-catalog-specific connection string via CDK. It has to be set as an environment variable on the Lambda, and I found no way to do that: the application template simply doesn't allow it, so this remains a manual post-processing step. I would sure like to hear from anybody who has a solution for that!
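One way to script that manual step is to patch the Lambda's environment after deployment with the AWS SDK. This is only a sketch: the catalog name "mycatalog" is a placeholder, and the <catalog>_connection_string variable naming is my reading of the connector's conventions, so verify it against the athena-jdbc documentation first.

    import {
      LambdaClient,
      GetFunctionConfigurationCommand,
      UpdateFunctionConfigurationCommand,
    } from '@aws-sdk/client-lambda';

    // Add a catalog-specific connection string to the connector Lambda
    // after deployment. Catalog name and connection string are placeholders.
    async function addCatalogConnectionString() {
      const lambda = new LambdaClient({ region: 'us-east-1' });
      const functionName = 'crossref_federation'; // LambdaFunctionName from above

      // Read the existing variables so they are preserved.
      const current = await lambda.send(
        new GetFunctionConfigurationCommand({ FunctionName: functionName })
      );

      await lambda.send(
        new UpdateFunctionConfigurationCommand({
          FunctionName: functionName,
          Environment: {
            Variables: {
              ...(current.Environment?.Variables ?? {}),
              mycatalog_connection_string:
                'postgres://jdbc:postgresql://<cluster-endpoint>/MyDB?user=postgres&password=<password>',
            },
          },
        })
      );
    }

    addCatalogConnectionString().catch(console.error);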
Also notice that I put the user/password directly in the JDBC URL. I wanted to use Secrets Manager instead, but because the Lambda is deployed in a VPC, it simply refuses to connect to Secrets Manager. I think this might be solvable by adding a private VPC endpoint for Secrets Manager. Again, I would like to hear from anybody who has tried that.
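The piece I believe is missing is an interface VPC endpoint for Secrets Manager, so the Lambda can reach the service without leaving the VPC. A sketch against the vpc object from the snippet above, assuming aws-cdk-lib (CDK v2) and untested in this exact setup:

    import * as ec2 from 'aws-cdk-lib/aws-ec2';

    // Private path from the VPC to Secrets Manager; no NAT gateway or
    // public internet access is required once this endpoint exists.
    vpc.addInterfaceEndpoint('SecretsManagerEndpoint', {
      service: ec2.InterfaceVpcEndpointAwsService.SECRETS_MANAGER,
    });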

RDS Proxy: PENDING_PROXY_CAPACITY and "DBProxy Target unavailable due to an internal error"

When deploying an RDS database via Terraform, my proxy's default target is unavailable.
Running the following command:
aws rds describe-db-proxy-targets --db-proxy-name <my_proxy_name_here>
I get two errors:
initially the target is in state PENDING_PROXY_CAPACITY
eventually that times out with the following error: DBProxy Target unavailable due to an internal error
After extensive research, a two-hour call with AWS support, and very few search results for the error PENDING_PROXY_CAPACITY, I stumbled across the following discussion: https://github.com/hashicorp/terraform-provider-aws/issues/16379
I had a couple of issues with my config:
The outbound rules for my RDS Proxy security group were limited to internal traffic only. This causes problems, as the proxy needs access to AWS Secrets Manager, which means public internet access unless you add a VPC endpoint!
At the time of writing, the Terraform documentation suggests you can pass a "username" option to the auth block of the aws_db_proxy resource (see: https://registry.terraform.io/providers/hashicorp/aws/4.26.0/docs/resources/db_proxy). This does not work and returns an error stating that the username option is not expected. That is because the proxy expects all of the auth information to be contained in one JSON object inside the secret whose ARN you provide. For this reason I created a second secret containing all the auth information, like so:
resource "aws_secretsmanager_secret_version" "lambda_rds_test_proxy_creds" {
secret_id = aws_secretsmanager_secret.lambda_rds_test_proxy_creds.id
secret_string = jsonencode({
"username" = aws_db_instance.lambda_rds_test.username
"password" = module.lambda_rds_secret.secret
"engine" = "postgres"
"host" = aws_db_instance.lambda_rds_test.address
"port" = 5432
"dbInstanceIdentifier" = aws_db_instance.lambda_rds_test.id
})
}
Fixing both issues still left me with an auth error for the credentials, which required fixing the IAM permissions (this is discussed in the GitHub issue above). Because I had created a new secret to hold all the required information, the proxy no longer had access to it, so I updated the IAM role to cover the newly created secret.
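For reference, the shape of that IAM change, sketched here in CDK/TypeScript rather than Terraform (in Terraform it is a policy attached to the proxy's role; the names and ARN below are hypothetical): the role RDS Proxy assumes needs read access to the new secret.

    import * as cdk from 'aws-cdk-lib';
    import * as iam from 'aws-cdk-lib/aws-iam';

    const app = new cdk.App();
    const stack = new cdk.Stack(app, 'ProxySecretAccess');

    // Hypothetical ARN of the secret holding the combined auth JSON.
    const proxySecretArn =
      'arn:aws:secretsmanager:eu-west-1:123456789012:secret:lambda_rds_test_proxy_creds-AbCdEf';

    // The role that RDS Proxy assumes to fetch its credentials.
    const proxyRole = new iam.Role(stack, 'RdsProxyRole', {
      assumedBy: new iam.ServicePrincipal('rds.amazonaws.com'),
    });

    proxyRole.addToPolicy(
      new iam.PolicyStatement({
        actions: ['secretsmanager:GetSecretValue', 'secretsmanager:DescribeSecret'],
        resources: [proxySecretArn],
        // If the secret is encrypted with a customer-managed KMS key,
        // kms:Decrypt on that key is needed as well.
      })
    );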
I am posting this here because the GitHub issue is archived, so I can't add a comment with my search terms to help others searching for the same problem find it quicker; there is very little information out there about these RDS Proxy errors.

rds_instance failing to create DB instances inside an Aurora cluster using Ansible

I am trying to create an Aurora DB cluster with 1 writer / reader node.
It does not appear that Ansible currently supports cluster creation for Aurora, so I am creating the cluster using the AWS CLI.
# NOTE - According to the official documentation, Ansible does not
# currently support creating an RDS cluster. This may change in the future.
- name: Create the DB cluster
  command: >
    aws rds create-db-cluster
    --db-cluster-identifier production-db
    --engine aurora-mysql
    --db-subnet-group-name webserver-connections
    --vpc-security-group-ids sg-dja17283
    --storage-encrypted
    --db-cluster-parameter-group-name my-parameter-group
    --master-username "my_username"
    --master-user-password "My_Password"
    --backup-retention-period 7
  when: aurora_cluster == ''

- name: Create instances inside of cluster
  rds_instance:
    engine: aurora
    engine_version: "5.7.mysql_aurora.2.07.2"
    db_instance_identifier: ansible-test-aurora-db-instance
    instance_type: db.t2.small
    cluster_id: production-db
    multi_az: yes
    storage_encrypted: yes
    # backup_retention_period: 7
    tags:
      Environment: "Production"
This returns:
"msg": "Unable to create DB instance: An error occurred (InvalidParameterCombination) when calling the CreateDBInstance operation: Cannot find version 5.7.mysql_aurora.2.07.2 for aurora",
If I set the engine to aurora-mysql, I see the following:
"msg": "Unable to create DB instance: An error occurred (InvalidParameterCombination) when calling the CreateDBInstance operation: VPC Multi-AZ DB Instances are not available for engine: aurora-mysql"
When I uncomment the backup retention period (it is defined both in the initial cluster-creation CLI call and in the play), I see the following:
"msg": "Unable to create DB instance: An error occurred (InvalidParameterCombination) when calling the CreateDBInstance operation: The requested DB Instance will be a member of a DB Cluster. Set backup retention period for the DB Cluster."
Is it possible to use Ansible to create an aurora-mysql multi-AZ RDS cluster? From reading the documentation, it doesn't appear to be supported yet.
Is it possible to use Ansible to manage the DB instances inside a cluster, such as the reader/writer nodes in a multi-AZ aurora-mysql deployment? If so, how can I do this? All of my testing has returned results similar to the above.
Thanks.
I'm not sure whether Ansible supports Aurora yet, but all of those error messages are valid.
You need to change engine to aurora-mysql, and remove multi_az or set it to false, since multi-AZ is not an Aurora feature.
Multi-AZ creates a second "standby" instance of an RDS server in another availability zone. Since Aurora is a cluster rather than a single-instance system, you would just create a second instance yourself instead of specifying multi_az.

Create a new DB instance in AWS CloudFormation

I developed an application using Java, with an Amazon RDS PostgreSQL database for data management, and I hosted the application in Elastic Beanstalk. Someone suggested that I use AWS CloudFormation, so I wrote the infrastructure code in JSON format, which also includes Amazon RDS, but I have some doubts.
CloudFormation will automatically create a new DB instance for my application, but I specified another DB instance name in my Java code, so how will the application communicate with the new instance?
Please help me clarify this.
Thanks in advance...
You can expose the DB URL in the Outputs section of your CloudFormation template so that you get the required URL (see CFN outputs).
The endpoint URL for your AWS::RDS::DBInstance is exposed through its return values:
Endpoint.Address - the connection endpoint for the database. For example: mystack-mydb-1apw1j4phylrk.cg034hpkmmjt.us-east-2.rds.amazonaws.com
Endpoint.Port - the port number on which the database accepts connections. For example: 3306
To get the Endpoint.Address out of your stack, you have to add an Outputs section to your template. An example would be:
"Outputs": {
"DBEndpoint": {
"Description": "Endpoint for my RDS Instance",
"Value": {
"Fn::GetAtt" : [ "MyDB", "Endpoint.Address" ]}
}
}
}
Then, using the AWS SDK for Java, you can query the outputs of your CloudFormation stack and use them in your Java application.
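For illustration, here is a rough equivalent using the AWS SDK for JavaScript v3 (the Java SDK's DescribeStacks call is analogous; the stack name "my-stack" is a placeholder):

    import {
      CloudFormationClient,
      DescribeStacksCommand,
    } from '@aws-sdk/client-cloudformation';

    // Read the DBEndpoint output declared in the template above.
    async function getDbEndpoint() {
      const cfn = new CloudFormationClient({ region: 'us-east-2' });
      const { Stacks } = await cfn.send(
        new DescribeStacksCommand({ StackName: 'my-stack' })
      );
      const endpoint = Stacks?.[0].Outputs?.find(
        (o) => o.OutputKey === 'DBEndpoint'
      )?.OutputValue;
      console.log(`Database endpoint: ${endpoint}`);
    }

    getDbEndpoint().catch(console.error);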

Unable to add an RDS instance to Elastic Beanstalk

Suddenly I can't add an RDS instance to my EB environment, and I'm not sure why. Here's the full error message:
Unable to retrieve RDS configuration options.
Configuration validation exception: Invalid option value: 'db.t1.micro' (Namespace: 'aws:rds:dbinstance', OptionName: 'DBInstanceClass'): DBInstanceClass db.t1.micro not supported for mysql db
I am not sure if this is due to the default AMI that I am using or something else.
Note that I didn't choose to launch a t1.micro RDS instance. It seems like EB tries to use that class by default, but it has been retired from the available RDS instance classes.
Just found this announcement in the community forum: https://forums.aws.amazon.com/ann.jspa?annID=4840. It looks like Elastic Beanstalk has not updated its CloudFormation templates yet.
I think it's resolved now. But as a side note, AWS should not bury things like this in a community announcement.
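If anyone hits this again before the templates catch up, one workaround (a sketch, assuming the environment already exists and that db.t3.micro is offered in your region) is to pin DBInstanceClass explicitly instead of relying on the default:

    import {
      ElasticBeanstalkClient,
      UpdateEnvironmentCommand,
    } from '@aws-sdk/client-elastic-beanstalk';

    // Override the RDS instance class on an existing EB environment.
    // "my-env" is a placeholder environment name.
    async function pinDbInstanceClass() {
      const eb = new ElasticBeanstalkClient({ region: 'us-east-1' });
      await eb.send(
        new UpdateEnvironmentCommand({
          EnvironmentName: 'my-env',
          OptionSettings: [
            {
              Namespace: 'aws:rds:dbinstance',
              OptionName: 'DBInstanceClass',
              Value: 'db.t3.micro', // a currently supported class
            },
          ],
        })
      );
    }

    pinDbInstanceClass().catch(console.error);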