I have a Node.js Express app that uses Sequelize to connect to the database. I want to deploy the app on Lambda (with API Gateway) and use an RDS Postgres database (serverless).
I created an RDS instance and a serverless setup. From an EC2 instance, I am able to connect to both the RDS instance and the serverless DB without any issues.
However, when I deploy the same code on Lambda, I am unable to connect to either DB instance. In fact, I do not see any error messages anywhere.
const Sequelize = require('sequelize');

const sequelize = new Sequelize(process.env.POSTGRES_DBNAME, process.env.POSTGRES_USERNAME, process.env.POSTGRES_PASSWORD, {
  host: process.env.POSTGRES_HOST,
  dialect: 'postgres',
  logging: false,
  operatorsAliases: false // only valid on Sequelize v4; this option was removed in v5+
});

// Test connection
(async function() {
  try {
    console.log('Connecting to: ', process.env.POSTGRES_DBNAME, process.env.POSTGRES_USERNAME, process.env.POSTGRES_PASSWORD, process.env.POSTGRES_HOST);
    await sequelize.authenticate();
    console.log('Connection has been established successfully.');
  } catch (error) {
    console.error('Unable to connect to the database:', error);
  }
})();
I even tried using a MySQL instance with RDS Proxy, but the result is the same: the test-connection code doesn't appear to execute, and neither the success nor the error message shows up in the logs. I wanted to understand if I am missing something. The DB has been configured to be accessible from outside.
My guess is that you have not configured the Lambda IAM permissions correctly. For Lambda to access RDS inside a VPC, you can attach the AWSLambdaVPCAccessExecutionRole managed policy; for CloudWatch Logs to work, you can add the AWSLambdaBasicExecutionRole policy to your Lambda function's execution role.
The AWS Lambda developer guide has a tutorial that gives an example of how this can be done.
For more details, please read the Configuring a Lambda function to access resources in a VPC chapter in the developer guide.
To connect to an Amazon RDS instance from a Lambda function, refer to this Amazon document: How do I configure a Lambda function to connect to an RDS instance?.
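For illustration, here is a minimal sketch of attaching those two managed policies with the AWS SDK for JavaScript (v3); the role name is a placeholder, and in practice you would usually do this once in the console or in your infrastructure template:

const { IAMClient, AttachRolePolicyCommand } = require('@aws-sdk/client-iam');

const iam = new IAMClient({});

async function attachLambdaPolicies(roleName) {
  // Lets Lambda create the ENIs it needs to reach resources inside a VPC
  // (this managed policy also includes the CloudWatch Logs permissions).
  await iam.send(new AttachRolePolicyCommand({
    RoleName: roleName,
    PolicyArn: 'arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole',
  }));
  // CloudWatch Logs only; enough on its own if the function does not need VPC access.
  await iam.send(new AttachRolePolicyCommand({
    RoleName: roleName,
    PolicyArn: 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole',
  }));
}

attachLambdaPolicies('my-lambda-execution-role').catch(console.error); // hypothetical role name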
The problem turned out to be with my express package. My AWS configuration was correct: replacing the Lambda entry code with a vanilla DB connection that printed a list of values worked, but plugging the same connection into the Express code did not. I am not sure exactly what the issue was; I found that upgrading the express version fixed my problem.
Thank you everyone for taking the time out to answer my question.
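For reference, the usual way to wire an Express app into a Lambda handler behind API Gateway looks something like the sketch below. This assumes the serverless-http package, which the question doesn't actually name; any equivalent wrapper is wired the same way:

const serverless = require('serverless-http');
const express = require('express');

const app = express();

// A simple route; doing a DB call here (e.g. sequelize.authenticate()) surfaces
// connection errors in the HTTP response instead of failing silently at init time.
app.get('/health', (req, res) => {
  res.json({ ok: true });
});

// API Gateway invokes this handler; serverless-http translates the Lambda event
// into a normal HTTP request for Express.
module.exports.handler = serverless(app);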
Always watch out for the VPC and security group where your DB runs: if your RDS DB is not public, you have to put your Lambda function in the same VPC and subnets, and allow it in the DB's security group.
You can find more details here: Connect to a MySQL AWS RDS from an AWS Lambda hosted Express JS App
I have a simple C# Lambda function that inserts a record into a table using Entity Framework. When I run the test locally (from my desktop machine), I can connect to the remote database and the record gets inserted into the table at AWS just fine. When I upload the Lambda to AWS and then send it data, the function times out after 15 seconds. Since the code runs fine on my (external) desktop machine, I am assuming that Lambda does not have permission to connect to the internal RDS database from inside AWS.
I have added AmazonRDSFullAccess to the permissions of the Lambda function. The Lambda function still times out.
What am I missing?
The Lambda function needs to be deployed to the same VPC as the RDS server.
It does not need the AmazonRDSFullAccess IAM policy attached.
The security group for the RDS server needs to allow inbound connections from the security group assigned to the Lambda function.
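For illustration, a minimal sketch of that last rule using the AWS SDK for JavaScript (v3); the group IDs are placeholders, and the port depends on your engine (5432 for Postgres, 3306 for MySQL, 1433 for SQL Server):

const { EC2Client, AuthorizeSecurityGroupIngressCommand } = require('@aws-sdk/client-ec2');

const ec2 = new EC2Client({});

async function allowLambdaToReachRds(rdsSgId, lambdaSgId) {
  await ec2.send(new AuthorizeSecurityGroupIngressCommand({
    GroupId: rdsSgId, // security group attached to the RDS server
    IpPermissions: [{
      IpProtocol: 'tcp',
      FromPort: 1433, // adjust to your database engine's port
      ToPort: 1433,
      // The source is the Lambda function's security group, not a CIDR block.
      UserIdGroupPairs: [{ GroupId: lambdaSgId }],
    }],
  }));
}

allowLambdaToReachRds('sg-0aaaexample', 'sg-0bbbexample').catch(console.error); // placeholder IDs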
What I'm trying to do
I am working on a Lambda function which will simply register some metadata about files that are uploaded to an S3 bucket. This is not about actually processing the data in the files yet. To start with, I just want to record whether certain files have been uploaded or not. Then I want to connect that metadata to QuickSight just so that we can have a nice visual about which files have been uploaded.
What I've done so far
This part is fairly easy:
Some simple Python code with the pymysql module
Chalice to manage the process of creating and updating the lambda function
I created the database
Where I'm stuck
QuickSight is somehow external to the VPC, so I had to create the RDS (MySQL) instance in the DMZ of our VPC.
I have configured the security group so that the DB is accessible both from QuickSight and from my own laptop.
But the lambda function can't connect.
I configured the right policy for the role, so that the lambda can connect with IAM
I tested that policy with the simulator
But of course the Lambda function is going to have some kind of dynamic IP, and that would need to be allowed in the security group.
Any ideas?
Am I even thinking about this right?
Two things.
You shouldn't have to put your RDS in a DMZ. See this article about granting QuickSight access to your RDS: https://docs.aws.amazon.com/quicksight/latest/user/enabling-access-rds.html
In order for a Lambda to access something in a VPC (like an RDS instance), the Lambda must have a VPC configuration. https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html
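As a sketch of what that VPC configuration looks like via the AWS SDK for JavaScript (v3); the function name, subnet IDs, and security group ID are placeholders (with Chalice, the equivalent is the subnet_ids and security_group_ids settings in .chalice/config.json):

const { LambdaClient, UpdateFunctionConfigurationCommand } = require('@aws-sdk/client-lambda');

const lambda = new LambdaClient({});

async function putFunctionInVpc() {
  await lambda.send(new UpdateFunctionConfigurationCommand({
    FunctionName: 'register-file-metadata', // hypothetical function name
    VpcConfig: {
      SubnetIds: ['subnet-aaa', 'subnet-bbb'], // subnets that can route to the RDS instance
      SecurityGroupIds: ['sg-lambda'],         // group the RDS security group allows inbound from
    },
  }));
}

putFunctionInVpc().catch(console.error);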
I am fairly new to AWS.
I am trying to create a simple app using Aurora and AppSync. So far, I have been able to create the Aurora database, connect to it using MySQL Workbench, and create the tables that I need.
I have also created the AppSync API and set up the resolver (connected the resolver to the RDS Aurora DB).
Here is the problem I am facing: when I try to run queries from the AppSync Queries tab, it gives me the following error and message:
"errorType": "400 Bad Request",
"message": "RDSHttp:{\"message\":\"HttpEndPoint is not enabled for
arn:aws:rds:us***:cluster:***\"}" (I replaced some details with ***)
I have made my Aurora cluster accessible to the public, and I have tried adding a few inbound rules to the security group (i.e. allow all).
However, this error still persists. I have spent a few days on it and would appreciate any help I can get to resolve this.
Thanks in advance
AWS AppSync can connect to Aurora Serverless clusters. First, make sure that your Aurora cluster has an engine-mode of serverless. You can verify this via the CLI by using aws rds describe-db-clusters.
Once you've got a cluster that is serverless, enable the Data API for that cluster, which will allow queries via HTTP.
Keep in mind that as of now these features are in beta and not recommended for production usage.
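If it helps, a rough sketch of checking the engine mode and enabling the Data API with the AWS SDK for JavaScript (v3); the cluster identifier is a placeholder:

const { RDSClient, DescribeDBClustersCommand, ModifyDBClusterCommand } = require('@aws-sdk/client-rds');

const rds = new RDSClient({});

async function enableDataApi(clusterId) {
  // Verify the cluster is actually running in serverless engine mode.
  const { DBClusters } = await rds.send(new DescribeDBClustersCommand({
    DBClusterIdentifier: clusterId,
  }));
  if (!DBClusters.length || DBClusters[0].EngineMode !== 'serverless') {
    throw new Error('Cluster is not serverless, so the Data API cannot be enabled.');
  }
  // Enable the HTTP endpoint (Data API) that AppSync's RDS resolvers use.
  await rds.send(new ModifyDBClusterCommand({
    DBClusterIdentifier: clusterId,
    EnableHttpEndpoint: true,
  }));
}

enableDataApi('my-aurora-cluster').catch(console.error); // placeholder identifier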
I am getting this error when I execute my Lambda:
raises DaxClientError('Failed to configure cluster endpoints from {}'.format(seeds), DaxErrorCode.NoRoute)
I am trying to connect to my DAX cluster from AWS Lambda (written in Python).
I installed amazon-dax-client into a folder, placed my Lambda file there, built the package, and uploaded it as a zip file. When I test the Lambda, it throws the above error.
Your Lambda function needs to be in the same VPC as the DAX cluster. Otherwise, it won't be able to connect.
There is a blog post on using Amazon DAX from AWS Lambda. The sample code is for NodeJS but the Lambda/VPC configuration applies no matter what language is used.
Don't forget to add an inbound rule to your security group that allows TCP port 8111.
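To give a concrete picture, here is a minimal Node.js sketch of connecting to DAX from inside the VPC, mirroring the blog post's approach; the endpoint and table name are placeholders:

const AmazonDaxClient = require('amazon-dax-client');
const AWS = require('aws-sdk');

// e.g. 'my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com:8111' (note port 8111)
const dax = new AmazonDaxClient({
  endpoints: [process.env.DAX_ENDPOINT],
  region: process.env.AWS_REGION,
});

// Drop-in replacement for the regular DocumentClient, routed through DAX.
const docClient = new AWS.DynamoDB.DocumentClient({ service: dax });

exports.handler = async () => {
  const result = await docClient.get({
    TableName: 'MyTable', // hypothetical table
    Key: { id: '1' },
  }).promise();
  return result.Item;
};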
I am trying to make Serverless work with ElastiCache. I wrote a custom CloudFormation file based on the serverless-examples/serverless-infrastructure repo. I managed to put ElastiCache and Lambda in one subnet (checked with the CLI). I retrieve the host and the port from the Outputs, but whenever I try to connect with node-redis, the connection times out. Here are the relevant parts:
Resources
Serverless Config
I ran into this issue as well, but with Python. For me, there were a few problems that had to be ironed out:
The Lambda needs VPC permissions.
The ElastiCache security group needs an inbound rule from the Lambda security group that allows communication on the Redis port. I thought they could just be in the same security group.
And the real kicker: I had turned on encryption in transit. This meant that I needed to pass ssl=True when creating the client (redis.Redis(..., ssl=True)). The redis-py docs mention that ssl_cert_reqs needs to be set to None for use with ElastiCache, but that didn't seem to be true in my case; I did, however, need to pass ssl=True.
It makes sense that ssl=True needed to be set, but since the connection was just timing out, I went round and round trying to figure out what the problem with the permissions/VPC/SG setup was.
As Tolbahady pointed out, the only solution was to create a NAT within the VPC.
In my case I had TransitEncryptionEnabled: "true" with AuthToken: xxxxx for my Redis cluster.
I ensured that both my Lambda and Redis cluster belonged to the same private subnet.
I also ensured that my security group allowed traffic to flow on the desired ports.
The major issue I faced was that my Lambda was unable to fetch data from my Redis cluster; whenever it attempted to get the data, it would throw a timeout error.
I used Node.js with the node-redis client.
Setting the option tls: true worked for me. This is a mandatory setting if you have encryption in transit enabled.
Here is my config:
import { createClient } from 'redis';
import config from "../config";

let options: any = {
  url: `redis://${config.REDIS_HOST}:${config.REDIS_PORT}`,
  password: config.REDIS_PASSWORD,
  socket: { tls: true } // mandatory when in-transit encryption is enabled on the cluster
};

const redisClient = createClient(options);
Hope this answer is helpful to those who are using Node.js with the node-redis dependency in their Lambda.