I want to connect my RDS database table to my Lambda function. I created a Lambda function that uses knex.js to talk to a Postgres database in RDS. I can create the knex object, but none of my queries run.
Some more information about the services:
The RDS database server's security group allows access from anywhere.
I have specified the vpc in the serverless.yml file for the function.
The Lambda and the RDS instance are in different regions; I am not sure whether that is the problem.
My serverless function
Note: this knex code works when I run it on its own.
module.exports.storeTransaction = async (event) => {
  ...
  knex('Transactions')
    .select('*')
    .then(response => {
      console.log('response is ');
      console.log(response);
    })
  ...
};
Serverless.yml file
service: <service-name>

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1

package:
  exclude:
    - node_modules/**

plugins:
  - serverless-plugin-include-dependencies

functions:
  storeEmail:
    handler: handler.storeTransaction
    vpc:
      securityGroupIds:
        - <security-group-id-of-rds>
      subnetIds:
        - <subnet-id-of-rds>
        - <subnet-id-of-rds>
        ...
      region:
        - us-east-1a
    events:
      - http:
          path: email/store
          method: post
          cors: true
So can you identify why I can't connect my RDS database to the Lambda function, and let me know what I did wrong or what is missing?
I think the problem is that RDS and Lambda are in different regions, which means they are also in different VPCs, since a VPC cannot span multiple regions. You can, however, enable inter-VPC peering (https://aws.amazon.com/vpc/faqs/#Peering_Connections).
Consider that when you deploy a Lambda function into a VPC, it has no internet access unless you attach a NAT Gateway to that VPC/subnet.
If the RDS instance is open to the world (and does it really need to be?), you can try deploying the function in the same region, without a VPC, and verify whether that works.
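As a sketch of that suggestion (placeholder values, not the asker's actual config): deploy the function in the same region as the RDS instance and drop the vpc block entirely, so the function keeps its default internet access and can reach a publicly accessible RDS endpoint:

```yaml
# Sketch with placeholder values: function in the RDS instance's region,
# no vpc block, so it can reach a publicly accessible RDS endpoint.
provider:
  name: aws
  runtime: nodejs8.10
  region: <rds-region>        # whichever region the RDS instance lives in

functions:
  storeEmail:
    handler: handler.storeTransaction
    # no vpc block here: the function keeps its default internet access
```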
I am using the AWS CDK to create a CloudFormation stack with an RDS Aurora cluster database, VPC, subnet, route table and security group resources, and another stack with a couple of Lambdas, an API Gateway, IAM roles and policies, and many other resources.
The CDK deployment works fine and I can see both stacks created in CloudFormation with all the resources. However, I had issues connecting to the RDS database, so I added a CfnOutput to check the connection string and realised that the RDS port was not resolved from its original number-encoded token, while the hostname was resolved properly. I'm wondering why this is happening.
This is how I'm setting the CfnOutput:
new CfnOutput(this, "mysql-messaging-connstring", {
  value: connectionString,
  description: "Mysql connection string",
  exportName: `${prefix}-mysqlconnstring`
});
The RDS Aurora Database Cluster is created in a method called createDatabaseCluster:
const cluster = new rds.DatabaseCluster(scope, 'Database', {
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_5_7_12 }),
  credentials: dbCredsSecret,
  instanceProps: {
    instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.SMALL),
    vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_ISOLATED },
    vpc: vpc,
    publiclyAccessible: true,
    securityGroups: [ clusterSG ]
  },
  instances: 1,
  instanceIdentifierBase: dbInstanceName,
});
This createDatabaseCluster method returns the connection string:
return `server=${cluster.instanceEndpoints[0].hostname};user=${username};password=${password};port=${cluster.instanceEndpoints[0].port};database=${database};`;
In this connection string, the DB credentials are retrieved from a secret in AWS Secrets Manager and stored in username and password variables to be used in the return statement.
The actual observed value of the CfnOutput (screenshot omitted) shows the hostname fully resolved, while the port still renders as an unresolved token.
As a workaround, I can just hard-code the port, but I want to understand why this number-encoded token is not being resolved.
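One likely culprit, as a hedged sketch rather than a confirmed diagnosis: `instanceEndpoints[0].port` is a number-encoded token, and interpolating a number token directly into a template literal stringifies it into an opaque marker instead of a resolvable reference. Assuming aws-cdk-lib v2 and the `cluster` variable from the snippet above, passing the port through `Token.asString()` (or using the endpoint's `socketAddress`, which combines host and port for you) sidesteps this:

```javascript
// Sketch, assuming aws-cdk-lib v2 and the `cluster` from the snippet above.
const { Token } = require('aws-cdk-lib');

const endpoint = cluster.instanceEndpoints[0];

// endpoint.port is a number token; Token.asString() converts it into a
// string token that CloudFormation can resolve at deploy time.
const port = Token.asString(endpoint.port);

const connectionString =
  `server=${endpoint.hostname};user=${username};password=${password};` +
  `port=${port};database=${database};`;
```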
I have two CloudFormation templates:
one which creates a VPC, ALB and other shared resources
one which creates an Elastic Beanstalk environment and the listener rules that direct traffic to it using the imported shared load balancer (call this template Environment)
The problem I'm facing is that the Environment template creates an AWS::ElasticBeanstalk::Environment, which in turn creates a new CFN stack containing resources such as the ASG and target group (or "process", as Elastic Beanstalk calls it). These resources are not outputs of the AWS-owned CFN template used to create the environment.
When setting
- Namespace: aws:elasticbeanstalk:environment
  OptionName: LoadBalancerIsShared
  Value: true
in the option settings for my Elastic Beanstalk environment, a load balancer is not created, which is fine. I then try to attach a listener rule to my load balancer's listener.
ListenerRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    Priority: 1
    ListenerArn:
      Fn::ImportValue: !Sub '${NetworkStackName}-HTTPS-Listener'
    Actions:
      - Type: forward
        TargetGroupArn: WHAT_GOES_HERE
    Conditions:
      - Field: host-header
        HostHeaderConfig:
          Values:
            - mywebsite.com
  DependsOn:
    - Environment
The problem here is that, as far as I can tell, I don't have access to the ARN of the target group created by the Elastic Beanstalk environment resource. If I create a target group myself, it's not linked to Elastic Beanstalk and no instances are registered.
I found this page, which states:
The resources that Elastic Beanstalk creates for your environment have names. You can use these names to get information about the resources with a function, or modify properties on the resources to customize their behavior.
But because the resources are in a different stack (whose name I don't know in advance) and are not outputs of the template, I have no idea how to get hold of them.
--
Edit:
Marcin pointed me in the direction of a custom resource in their answer. I have expanded on it slightly and got it working. My implementation differs in a couple of ways:
it's in Node instead of Python
the describe_environment_resources API call in the provided example returns a list of resources, but seemingly not all of them. In my implementation I grab the Auto Scaling group and use its physical resource ID to look up the other resources in the stack it belongs to via the CloudFormation API.
const AWS = require('aws-sdk');
const cfnResponse = require('cfn-response');
const eb = new AWS.ElasticBeanstalk();
const cfn = new AWS.CloudFormation();

exports.handler = (event, context) => {
  if (event['RequestType'] !== 'Create') {
    console.log(event['RequestType'], 'is not Create');
    return cfnResponse.send(event, context, cfnResponse.SUCCESS, {
      Message: `${event['RequestType']} completed.`,
    });
  }
  eb.describeEnvironmentResources(
    { EnvironmentName: event['ResourceProperties']['EBEnvName'] },
    function (err, data) {
      if (err) {
        console.log('Exception', err);
        return cfnResponse.send(event, context, cfnResponse.FAILED, {});
      }
      // The ASG's physical ID identifies the stack that Elastic Beanstalk
      // created, which also contains the target group.
      const PhysicalResourceId =
        data.EnvironmentResources['AutoScalingGroups'][0]['Name'];
      cfn.describeStackResources(
        { PhysicalResourceId },
        function (err, data) {
          if (err) {
            console.log('Exception', err);
            return cfnResponse.send(event, context, cfnResponse.FAILED, {});
          }
          const TargetGroup = data.StackResources.find(
            (resource) =>
              resource.LogicalResourceId === 'AWSEBV2LoadBalancerTargetGroup'
          );
          cfnResponse.send(event, context, cfnResponse.SUCCESS, {
            TargetGroupArn: TargetGroup.PhysicalResourceId,
          });
        }
      );
    }
  );
};
The CloudFormation templates
LambdaBasicExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
    Path: /
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSCloudFormationReadOnlyAccess
      - arn:aws:iam::aws:policy/AWSElasticBeanstalkReadOnly
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

GetEBLBTargetGroupLambda:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Description: 'Get ARN of EB Load balancer'
    Timeout: 30
    Role: !GetAtt 'LambdaBasicExecutionRole.Arn'
    Runtime: nodejs12.x
    Code:
      ZipFile: |
        ... code ...

ListenerRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    Priority: 1
    ListenerArn:
      Fn::ImportValue: !Sub '${NetworkStackName}-HTTPS-Listener'
    Actions:
      - Type: forward
        TargetGroupArn:
          Fn::GetAtt: ['GetEBLBTargetGroupResource', 'TargetGroupArn']
    Conditions:
      - Field: host-header
        HostHeaderConfig:
          Values:
            - mydomain.com
Things I learned while doing this, which will hopefully help others:
using async handlers in Node is difficult with the default cfn-response library, which is not async-aware; getting it wrong results in the CloudFormation creation (and deletion) process hanging for many hours before rolling back.
the cfn-response library is included automatically by CloudFormation when you use ZipFile. The code is available in the AWS docs if you were so inclined to include it manually (you could also wrap it in a promise and then use async Lambda handlers). There are also packages on npm that achieve the same effect.
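The promise-wrapping idea mentioned above can be sketched generically. Here `fakeSend` is a hypothetical stand-in for a callback-style responder such as cfn-response's send (the real module signals completion through the Lambda context rather than a callback, so a real wrapper differs in detail); the point is only the pattern that lets an async handler await it:

```javascript
// Generic sketch of the pattern described above: wrapping a callback-style
// send() so that an async Lambda handler can await it. `fakeSend` is a
// hypothetical stand-in for cfn-response.send.
function promisifySend(sendFn) {
  return (...args) =>
    new Promise((resolve, reject) => {
      sendFn(...args, (err, result) => (err ? reject(err) : resolve(result)));
    });
}

// Illustrative callback-style responder:
function fakeSend(status, body, callback) {
  setTimeout(() => callback(null, { status, body }), 0);
}

const sendAsync = promisifySend(fakeSend);

// An async handler can now await the response being posted before returning:
async function handler() {
  const res = await sendAsync('SUCCESS', { Message: 'done' });
  return res.status;
}
```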
Node 14.x couldn't run; CloudFormation threw an error. Unfortunately, I didn't make a note of what it was.
The policy AWSElasticBeanstalkFullAccess used in the provided example no longer exists and has been replaced with AdministratorAccess-AWSElasticBeanstalk.
My example above could get by with less permissive policies, but I've not yet addressed that in my testing. It would be better if it could only read the specific Elastic Beanstalk environment, etc.
I don't have access as far as I can tell to the ARN of the target group created by the elastic beanstalk environment resource
That's true. The way to overcome this is through a custom resource. In fact, I developed a fully working, very similar resource for one of my previous answers, so you can have a look at it and adapt it to your templates. That resource returns the ARN of the EB load balancer, but you could modify it to get the ARN of EB's target group instead.
I am trying to connect AppSync to an Aurora Serverless data source, but the AWS console shows an error (screenshot omitted) when I try to create the data source.
My AppSync API is in ap-southeast-1 (Singapore) and my Aurora Serverless database is in the same region. According to the AWS docs, the Data API is available in that region. Here is my CloudFormation template to deploy the DB cluster:
DbCluster:
  Type: AWS::RDS::DBCluster
  DependsOn: DbSecret
  Properties:
    DatabaseName: !Ref DatabaseName
    DBClusterIdentifier: !Ref DbClusterId
    DeletionProtection: false
    EnableHttpEndpoint: true
    Engine: aurora
    EngineMode: serverless
    EngineVersion: 5.6.10a
    MasterUsername: !Join ['', ['{{resolve:secretsmanager:', !Ref DbSecret, ':SecretString:username}}']]
    MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref DbSecret, ':SecretString:password}}']]
    ScalingConfiguration:
      AutoPause: true
      MinCapacity: 1
      MaxCapacity: 2
      SecondsUntilAutoPause: 300
    StorageEncrypted: true
The CloudFormation template deploys fine and, as you can see, EnableHttpEndpoint is set to true, which means the Data API is enabled. I have also confirmed it is enabled by going into the AWS console and attempting to modify the database (screenshot omitted).
I have searched the internet for clues but could not find anything. I'm not sure whether this is a bug or I am doing something wrong. How do I get past this error and create my data source?
After creating a support case, I found that the Data API is available in the region; it's just that AppSync is not integrated with it there. In other words, the Data API is available, but AppSync can't use it in that region.
As an alternative, I am planning to use AppSync Lambda resolvers that call the Data API, only because I need the database to be in ap-southeast-1.
If you do not require your database to be in an unsupported region, you can use AppSync in ap-southeast-1 while keeping your database in a supported region (us-east-1 will most likely work).
I have a Lambda function deployed with the Serverless Framework. I created a security group and associated it with the Lambda function, and I included all three subnet IDs that the RDS instance uses in serverless.yml. I then allowed the new Lambda security group in the security group that the RDS instance is using: an inbound rule for MySQL/Aurora whose source is the new Lambda SG.
When I run the Lambda, I get a handshake inactivity timeout when my pool tries to connect to the instance. I use this same security group to give myself access to the database via IP, so I know it must be the right instance. What am I missing?
Here is the definition for the function in serverless.yml
consumer:
  handler: consumer/handler.testconsumer
  timeout: 29
  memorySize: 512
  vpc:
    securityGroupIds:
      - sg-lambda
    subnetIds:
      - subnet-1
      - subnet-2
      - subnet-3
I also have the following IAM role statement to allow the networking, though serverless does bark at it:
Warned - iamRoleStatement granting Resource='*'. Wildcard resources in iamRoleStatements are not permitted.
iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "ec2:CreateNetworkInterface"
      - "ec2:DescribeNetworkInterfaces"
      - "ec2:DeleteNetworkInterface"
    Resource: "*"
Is that causing an issue? I'm beating my head against a wall about this.
I would like to perform the following operations in order with CloudFormation.
Start up an EC2 instance.
Give it privileges to access the full internet using security group A.
Download particular versions of Java and Python.
Remove its internet privileges by removing security group A and adding security group B.
I observe that there is a DependsOn attribute for specifying the order in which to create resources, but I was unable to find a feature that would allow me to update the security groups on the same EC2 instance twice over the course of creating a stack.
Is this possible with CloudFormation?
Not in CloudFormation natively, but you could launch the EC2 instance with a user-data script that downloads Java/Python and the AWS CLI as necessary, and then uses the AWS CLI to switch security groups for the current EC2 instance.
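A rough sketch of that user-data approach (resource names and parameters are placeholders; it assumes Amazon Linux and an instance profile that permits ec2:ModifyInstanceAttribute):

```yaml
# Sketch with placeholder names; assumes an instance profile that allows
# ec2:ModifyInstanceAttribute.
MyInstance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: !Ref AmiId
    SecurityGroupIds:
      - !Ref SecurityGroupA          # internet access during bootstrap
    IamInstanceProfile: !Ref BootstrapInstanceProfile
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        yum install -y java-11-amazon-corretto python3 awscli
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        # Swap security group A for the locked-down group B
        aws ec2 modify-instance-attribute --region ${AWS::Region} \
          --instance-id "$INSTANCE_ID" --groups ${SecurityGroupB}
```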
However, if all you need is Java and Python pre-loaded then why not simply create an AMI with them already installed and launch from that AMI?
The best way out is to utilise a CloudFormation custom resource here. You can create a Lambda function that does exactly what you need, and call it as a custom resource from the CloudFormation template.
You can pass your new security group ID and instance ID to the Lambda function, and code the function to use the AWS SDK to make the modifications you need.
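A minimal sketch of such a custom-resource handler (property names are illustrative; it assumes the Node aws-sdk v2 and the inline cfn-response module, both available to ZipFile-deployed Lambdas):

```javascript
// Sketch only: illustrative property names; assumes aws-sdk v2 and the
// inline cfn-response module available to ZipFile-deployed Node Lambdas.
const AWS = require('aws-sdk');
const cfnResponse = require('cfn-response');
const ec2 = new AWS.EC2();

exports.handler = (event, context) => {
  if (event.RequestType !== 'Create') {
    return cfnResponse.send(event, context, cfnResponse.SUCCESS, {});
  }
  // InstanceId and SecurityGroupId arrive via the custom resource's
  // properties in the template. modifyInstanceAttribute with Groups
  // replaces the instance's security group set.
  ec2.modifyInstanceAttribute(
    {
      InstanceId: event.ResourceProperties.InstanceId,
      Groups: [event.ResourceProperties.SecurityGroupId],
    },
    (err) => {
      if (err) {
        console.log('Exception', err);
        return cfnResponse.send(event, context, cfnResponse.FAILED, {});
      }
      cfnResponse.send(event, context, cfnResponse.SUCCESS, {});
    }
  );
};
```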
I have leveraged this myself to post updates to my web server about the progress of the CloudFormation template. Below is a sample of the template.
EC2InstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Path: /
    Roles: [!Ref 'EC2Role']

MarkInstanceProfileComplete:
  Type: 'Custom::EC2InstanceProfileDone'
  Version: '1.0'
  DependsOn: EC2InstanceProfile
  Properties:
    ServiceToken: !Ref CustomResourceArn
    HostURL: !Ref Host
    LoginType: !Ref LoginType
    SecretId: !Ref SecretId
    WorkspaceId: !Ref WorkspaceId
    Event: 2
    Total: 3
Here the resource MarkInstanceProfileComplete is a custom resource that calls a Lambda function. It takes the event count and total count as input and uses them to calculate percentage progress; based on that, it sends a request to my web server. For all we care, this Lambda function could do practically anything you want it to do.