I am fairly new to AWS.
I am trying to create a simple app using Aurora and AppSync. So far, I have been able to create an Aurora database, connect to it using MySQL Workbench, and create the tables that I need.
I have also created the AppSync API and set up the resolver (connected the resolver to the RDS Aurora DB).
Here is the problem I am facing: when I try to run queries from the AppSync Queries tab, I get the following error and message:
"errorType": "400 Bad Request",
"message": "RDSHttp:{\"message\":\"HttpEndPoint is not enabled for
arn:aws:rds:us***:cluster:***\"}" (I replaced some details with ***)
I have made my Aurora cluster publicly accessible, and I have tried adding inbound rules to the security group (i.e., allow all traffic).
However, the error persists. I have spent a few days on this and would appreciate any help resolving it.
Thanks in advance
AWS AppSync can connect to Aurora Serverless clusters. First, make sure that your Aurora cluster has an engine-mode of serverless. You can verify this via the CLI by using aws rds describe-db-clusters.
Once you've got a cluster that is serverless, enable the Data API for that cluster, which will allow queries via HTTP.
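As a hedged sketch of both steps with the Node.js SDK v2 (the region and the cluster identifier 'my-cluster' are placeholders):
const AWS = require('aws-sdk');
const rds = new AWS.RDS({ region: 'us-east-1' }); // placeholder region
(async () => {
  // Check the engine mode and whether the Data API (HTTP endpoint) is enabled.
  const { DBClusters } = await rds.describeDBClusters({ DBClusterIdentifier: 'my-cluster' }).promise();
  console.log(DBClusters[0].EngineMode, DBClusters[0].HttpEndpointEnabled);
  // Enable the Data API (the "HttpEndPoint" the AppSync error complains about).
  await rds.modifyDBCluster({ DBClusterIdentifier: 'my-cluster', EnableHttpEndpoint: true }).promise();
})();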
Keep in mind that as of now these features are in beta and not recommended for production usage.
I have a Node.js Express app that uses Sequelize to connect to the database. I want to deploy my app on Lambda (with API Gateway) and use an RDS Postgres database (serverless).
I created both an RDS instance and a serverless setup. From an EC2 instance, I am able to connect to both the RDS instance and the serverless DB without any issues.
However, when I deploy the same code on Lambda, I am unable to connect to either DB instance. In fact, I do not see any error messages anywhere.
const Sequelize = require('sequelize');

// Connection details come from environment variables set on the Lambda function.
const sequelize = new Sequelize(process.env.POSTGRES_DBNAME, process.env.POSTGRES_USERNAME, process.env.POSTGRES_PASSWORD, {
  host: process.env.POSTGRES_HOST,
  dialect: 'postgres',
  logging: false,
  operatorsAliases: false
});

// Test connection
(async function() {
  try {
    console.log('Connecting to: ', process.env.POSTGRES_DBNAME, process.env.POSTGRES_USERNAME, process.env.POSTGRES_PASSWORD, process.env.POSTGRES_HOST);
    await sequelize.authenticate();
    console.log('Connection has been established successfully.');
  } catch (error) {
    console.error('Unable to connect to the database:', error);
  }
})();
I even tried using a MySQL instance with RDS Proxy, but the result is the same: the test connection code doesn't execute, and neither success nor error messages appear in the logs. I wanted to understand if I am missing something. The DB has been configured to be accessible from outside.
My guess is that you have not configured the Lambda IAM permissions correctly. For Lambda to be able to access RDS, you can use the AWSLambdaVPCAccessExecutionRole; for CloudWatch Logs to work, you can add the AWSLambdaBasicExecutionRole to your Lambda function.
The AWS Lambda developer guide has a tutorial that gives an example of how this can be done.
For more details, please read the Configuring a Lambda function to access resources in a VPC chapter in the developer guide.
To connect to an Amazon RDS instance from a Lambda function, refer to this Amazon document: How do I configure a Lambda function to connect to an RDS instance?.
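A rough sketch of attaching those two managed policies with the Node.js SDK v2 ('my-lambda-role' is a hypothetical execution role name):
const AWS = require('aws-sdk');
const iam = new AWS.IAM();
const policies = [
  'arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole', // ENIs for VPC networking
  'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole' // CloudWatch Logs
];
(async () => {
  for (const PolicyArn of policies) {
    await iam.attachRolePolicy({ RoleName: 'my-lambda-role', PolicyArn }).promise(); // hypothetical role
  }
})();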
The problem turned out to be with my express package. My AWS configuration was correct: replacing the Lambda entry code with a vanilla DB connection that printed a list of values worked, but plugging it back into the Express code did not. I am not sure what the exact issue was; I found that upgrading the Express version fixed my problem.
Thank you everyone for taking the time out to answer my question.
Always watch out for the VPC security group where your DB runs. If your RDS DB is not public, you have to put your Lambda function in the same security group and the same subnets.
Here you can find more details: Connect to a MySQL - AWS RDS from an AWS Lambda hosted Express JS App
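As a minimal sketch of that placement with the Node.js SDK v2 (the function name, subnet IDs, and security group ID are placeholders):
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();
(async () => {
  await lambda.updateFunctionConfiguration({
    FunctionName: 'my-express-app', // hypothetical function name
    VpcConfig: {
      SubnetIds: ['subnet-aaaa1111', 'subnet-bbbb2222'], // the DB's subnets (placeholders)
      SecurityGroupIds: ['sg-cccc3333'] // a group the DB's security group allows (placeholder)
    }
  }).promise();
})();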
I've spun up an Aurora Serverless Postgres-compatible database and I'm trying to connect to it from a Lambda function, but I am getting AccessDenied errors:
AccessDeniedException:
Status code: 403, request id: 2b19fa38-af7d-4f4a-aaa5-7d068e92c901
Details:
I can connect to and query the database manually via the query editor if I use the same secret ARN and database name that the Lambda is trying to use. I've triple-checked that the ARNs are correct.
My Lambdas are not in the VPC but are using the Data API. The RDS cluster is in the default VPC.
I've temporarily given my Lambdas administrator access, so I know it's not a policy-based issue on the Lambda side of things.
CloudWatch does not contain any additional details on the error.
I am able to query the database from the command line of my personal computer (not on the VPC).
Any suggestions? Perhaps there is a way to get better details out of the error?
Aha! After trying to connect via the command line and being able to do so successfully, I realized this had to be something non-network-related. Digging into my code a bit, I eventually realized there wasn't anything wrong with the connection portions of the code, but rather with the user permissions being used to create the session/service that attempted to access the data. In hindsight, I suppose the explicit AccessDenied (instead of a timeout) should have been a clue that I was able to reach the database, just not able to do anything with it.
After digging in I discovered these two things are very different:
AmazonRDSFullAccess
AmazonRDSDataFullAccess
If you want to use the Data API, you have to have the AmazonRDSDataFullAccess (or similar) policy. AmazonRDSFullAccess is not, as one might assume, a superset of the AmazonRDSDataFullAccess permissions. (If you look at the JSON for the AmazonRDSFullAccess policy, you'll notice its permissions cover rds:* while the other policy covers rds-data:*, so these are just different permission namespaces entirely.)
TLDR: Use the AmazonRDSDataFullAccess policy (or similar) to access the Data API. AmazonRDSFullAccess will not work.
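To make the distinction concrete, here is a hedged sketch of a Data API call (the ARNs and database name are placeholders); it maps to the rds-data:ExecuteStatement action, which AmazonRDSFullAccess (rds:* only) does not grant:
const AWS = require('aws-sdk');
const rdsData = new AWS.RDSDataService();
(async () => {
  // Also needs secretsmanager:GetSecretValue for the secret referenced below.
  const result = await rdsData.executeStatement({
    resourceArn: 'arn:aws:rds:us-east-1:123456789012:cluster:my-cluster', // placeholder
    secretArn: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret', // placeholder
    database: 'mydb', // placeholder
    sql: 'SELECT 1'
  }).promise();
  console.log(result.records);
})();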
I think you need to put your Lambda in the same VPC as your serverless DB. I did a quick test and was able to connect to it from an EC2 instance in the same VPC.
ubuntu@ip-172-31-5-146:~$ telnet database-11.cluster-ckuv4ugsg77i.ap-northeast-1.rds.amazonaws.com 5432
Trying 172.31.14.180...
Connected to vpce-0403cfe830963dfe9-u0hmgbbx.vpce-svc-0445a873575e0c4b1.ap-northeast-1.vpce.amazonaws.com.
Escape character is '^]'.
^CConnection closed by foreign host.
This is my security group.
My DMS replication instance (which is in the same VPC as the Aurora Serverless DB instance) is not able to find the DB while creating an endpoint in DMS.
However, I am able to create a Cloud9 instance in the same VPC as the Aurora Serverless instance and connect to the DB from there.
Am I missing something here, or is it not possible to use AWS DMS for migrating data from Aurora Serverless as a source?
The above issue was resolved by explicitly specifying the connection details for the Aurora Serverless cluster (instead of selecting it from the dropdown). As for the original question of using an Aurora Serverless DB as a source in DMS replication:
Yes, if only one-time replication is required.
No, if ongoing replication is required. Ongoing replication requires changing the binlog_format parameter on the source database. Although Aurora Serverless allows changing the value of this parameter, the change has no actual effect; only a few parameters are supported for changes, and they are listed here.
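A sketch of what "explicitly specifying the connection details" can look like via the Node.js SDK v2 (all values are placeholders):
const AWS = require('aws-sdk');
const dms = new AWS.DMS();
(async () => {
  // Create the source endpoint with an explicit hostname instead of the console dropdown.
  await dms.createEndpoint({
    EndpointIdentifier: 'aurora-serverless-source', // hypothetical name
    EndpointType: 'source',
    EngineName: 'aurora', // MySQL-compatible Aurora
    ServerName: 'my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com', // placeholder endpoint
    Port: 3306,
    Username: 'admin', // placeholder
    Password: process.env.DB_PASSWORD,
    DatabaseName: 'mydb' // placeholder
  }).promise();
})();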
According to Moving data from S3 -> RDS using AWS Glue
I found that an instance is required to add a connection to a data target. However, my RDS is serverless, so there is no instance available. Does Glue support this case?
I recently tried to connect Aurora MySQL Serverless with AWS Glue, and I failed with a timeout error:
Check that your connection definition references your JDBC database with
correct URL syntax, username, and password. Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago.
The driver has not received any packets from the server.
I think the reason is that Aurora Serverless doesn't have any continuously running instances, so you cannot point the connection URL at an instance, and that's why Glue cannot connect.
So you need to make sure the DB is running; only then will your JDBC connection work.
If your DB runs in a private VPC, you can follow this link:
Nat Creation
EDIT:
Instead of NAT GW, you can also use the VPC endpoint for S3.
Here is a really good blog that explains step by step.
Or AWS documentation
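For the S3 endpoint route, a minimal sketch with the Node.js SDK v2 (the VPC ID, route table ID, and region are placeholders):
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' }); // placeholder region
(async () => {
  // A gateway endpoint lets jobs in a private VPC reach S3 without a NAT gateway.
  await ec2.createVpcEndpoint({
    VpcId: 'vpc-0123456789abcdef0', // placeholder
    ServiceName: 'com.amazonaws.us-east-1.s3', // must match your region
    VpcEndpointType: 'Gateway',
    RouteTableIds: ['rtb-0123456789abcdef0'] // placeholder
  }).promise();
})();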
AWS Glue supports this scenario, i.e., it works well for loading data from S3 into Aurora Serverless using an AWS Glue job. The engine version I'm currently using is 8.0.mysql_aurora.3.02.0.
Note: if you get an error saying Data source rejected establishment of connection, message from server: "Too many connections", you can increase the ACUs (mine is currently set to min 4 / max 8 ACUs, for reference), as the maximum number of connections depends on the ACU capacity.
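That engine version corresponds to Aurora Serverless v2, so raising the ACU range can be sketched like this with the Node.js SDK v2 (the cluster identifier is a placeholder):
const AWS = require('aws-sdk');
const rds = new AWS.RDS();
(async () => {
  // The maximum number of connections scales with the maximum ACUs.
  await rds.modifyDBCluster({
    DBClusterIdentifier: 'my-cluster', // placeholder
    ServerlessV2ScalingConfiguration: { MinCapacity: 4, MaxCapacity: 8 }
  }).promise();
})();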
I was able to build the connection using JDBC.
One very important thing: you should have at least one subnet that opens ALL TCP ports, though you can restrict the rule to the subnet.
With this setting, the connection test passes, and the crawler can also create tables.
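The usual way to satisfy the all-TCP requirement is a self-referencing inbound rule on the connection's security group; a hedged sketch (the group ID is a placeholder):
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2();
(async () => {
  // Allow all TCP traffic only from members of the same security group
  // (a self-reference), without opening the ports to the internet.
  await ec2.authorizeSecurityGroupIngress({
    GroupId: 'sg-0123456789abcdef0', // placeholder
    IpPermissions: [{
      IpProtocol: 'tcp',
      FromPort: 0,
      ToPort: 65535,
      UserIdGroupPairs: [{ GroupId: 'sg-0123456789abcdef0' }] // self-reference
    }]
  }).promise();
})();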
I created an AWS RDS MSSQL instance using the Management Console, but I cannot create a new database. Creating a table works fine, though.
Did I miss anything in the configuration? Do I need to execute a special schema?
According to the documentation, you can create up to 30 databases per RDS instance.
http://aws.amazon.com/rds/faqs/#2
We would need more details to debug your particular issue (the parameters used to create the RDS instance, the exact error message, etc.).