I am trying to connect to an RDS database from an AWS Lambda function (Java).
Which IP should I allow in the RDS security group rules?
You can't enable this via IP. First you will need to enable VPC access for the Lambda function, during which you will assign it a security group. Then, within the security group assigned to the RDS instance, you will enable access for the security group assigned to the Lambda function.
You can configure Lambda to access your RDS instance.
You can enable this using the Lambda management console.
Select the Lambda function that needs access to the RDS instance, then go to Configuration -> Advanced settings and select the VPC (the one your RDS instance is in) you need it to access.
Find out more here:
http://docs.aws.amazon.com/lambda/latest/dg/vpc.html
For anyone else searching for a more detailed solution, or a Lambda config provisioned via AWS SAM / CloudFormation, what worked for me was:
i. Create a security group (SG) allowing outbound traffic on the port you'd like to connect over (e.g. 5432 or 3306. Note: inbound rules currently have no effect on Lambda, I believe). Apply that SG to your Lambda.
ii. Create an SG allowing inbound traffic on the same port (say 5432 or 3306) which references the Lambda SG, so traffic is locked down to the Lambda only, and outbound on the same port (5432 or 3306). Apply that SG to your RDS instance.
Further detail:
Lambda SG:
Direction Protocol Port Source
Outbound TCP 5432 ALL
RDS SG:
Direction Protocol Port Source
Inbound TCP 5432 Lambda SG
Outbound TCP 5432 ALL
The SAM template.yaml below provisions the main resources you'll probably require: an RDS cluster (Aurora Postgres Serverless is shown in this example, to minimise running costs), a Postgres master user password stored in Secrets Manager, a Lambda, an SG applied to the Lambda allowing outbound traffic on port 5432, and an SG applied to the RDS cluster referencing the Lambda SG (locking traffic down to the Lambda). I have also shown, optionally, how you may wish to connect to the RDS from your local desktop machine using a desktop DB client (e.g. DBeaver) over an SSH tunnel via a bastion (e.g. a nano EC2 instance with an EIP attached, so it can be stopped and all config remains the same) to administer the RDS from your local machine.
(Please note: for a production system you may wish to provision your RDS into a private subnet for security; provisioning of subnets is not covered here for brevity. Also note that for a production system, passing a secure secret as an env variable is not best practice; the Lambda should really resolve the secret on each invocation. It is shown passed as an env var here for brevity.)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Provisions stack with Aurora Serverless
Parameters:
  AppName:
    Description: "Application Name"
    Type: String
    Default: RDS-example-stack
  DBClusterName:
    Description: "Aurora RDS cluster name"
    Type: String
    Default: rdsexamplecluster
  DatabaseName:
    Description: "Aurora RDS database name"
    Type: String
    Default: examplerdsdbname
  DBMasterUserName:
    AllowedPattern: "[a-zA-Z0-9_]+"
    ConstraintDescription: must be between 1 to 16 alphanumeric characters.
    Description: The database admin account user name, between 1 to 16 alphanumeric characters.
    MaxLength: '16'
    MinLength: '1'
    Type: String
    Default: aurora_admin_0
Resources:
  # lambdas
  someLambda:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Sub '${AWS::StackName}-someLambda'
      # Role: !GetAtt ExecutionRole.Arn # if you require a custom execution role and permissions
      VpcConfig:
        SubnetIds: [subnet-90f79cd8, subnet-9743e6cd, subnet-8bf962ed]
        SecurityGroupIds: [!Ref lambdaOutboundSGToRDS]
      Handler: index.handler
      CodeUri: ./dist/someLambda
      Runtime: nodejs14.x
      Timeout: 5 # ensure this matches your PG/MySQL connection pool timeout
      ReservedConcurrentExecutions: 5
      MemorySize: 128
      Environment: # optional env vars useful for your DB connection
        Variables:
          pgDb: !Ref DatabaseName
          # dbUser: '{{resolve:secretsmanager:some-stackName-AuroraDBCreds:SecretString:username}}'
          # dbPw: '{{resolve:secretsmanager:some-stackName-AuroraDBCreds:SecretString:password}}'
  # SGs
  lambdaOutboundSGToRDS: # outbound access for the lambda to reach the Aurora Postgres DB
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: !Sub ${AWS::StackName} access to Aurora PG DB
      GroupName: !Sub ${AWS::StackName} lambda to Aurora access
      SecurityGroupEgress:
        - CidrIp: '0.0.0.0/0'
          Description: lambda to Aurora access over 5432
          FromPort: 5432
          IpProtocol: tcp
          ToPort: 5432
      VpcId: vpc-f6c4ea91
  RDSSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: !Sub ${AWS::StackName} RDS ingress and egress
      SecurityGroupEgress:
        - CidrIp: '0.0.0.0/0'
          Description: lambda RDS access over 5432
          FromPort: 5432
          IpProtocol: tcp
          ToPort: 5432
      SecurityGroupIngress:
        - SourceSecurityGroupId: !Ref lambdaOutboundSGToRDS # ingress SG for the lambda to access RDS
          Description: lambda to Aurora access over 5432
          FromPort: 5432
          IpProtocol: tcp
          ToPort: 5432
        - # optional
          CidrIp: '172.12.34.217/32' # private IP of the bastion instance your EIP is assigned to; /32, i.e. a single IP address
          Description: EC2 bastion host providing access to Aurora RDS via SSH tunnel for DBeaver desktop access over 5432
          FromPort: 5432
          IpProtocol: tcp
          ToPort: 5432
      VpcId: vpc-f6c4ea91
  DBSubnetGroup: # just a logical grouping of subnets that you can apply as a group to your RDS
    Type: AWS::RDS::DBSubnetGroup
    Properties:
      DBSubnetGroupDescription: CloudFormation managed DB subnet group.
      SubnetIds:
        - subnet-80f79cd8
        - subnet-8743e6cd
        - subnet-9bf962ed
  AuroraDBCreds: # provisions a password for the DB master username, which we set in Parameters
    Type: AWS::SecretsManager::Secret
    Properties:
      Name: !Sub ${AWS::StackName}-AuroraDBCreds
      Description: RDS database auto-generated user password
      GenerateSecretString:
        SecretStringTemplate: !Sub '{"username": "${DBMasterUserName}"}'
        GenerateStringKey: "password"
        PasswordLength: 30
        ExcludeCharacters: '"#/\'
      Tags:
        - Key: AppName
          Value: !Ref AppName
  RDSCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      DBClusterIdentifier: !Ref DBClusterName
      MasterUsername: !Join ['', ['{{resolve:secretsmanager:', !Ref AuroraDBCreds, ':SecretString:username}}']]
      MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref AuroraDBCreds, ':SecretString:password}}']]
      DatabaseName: !Ref DatabaseName
      Engine: aurora-postgresql
      EngineMode: serverless
      EngineVersion: '10' # currently provisions '10.serverless_14', i.e. 10.14
      EnableHttpEndpoint: true
      ScalingConfiguration:
        AutoPause: true
        MaxCapacity: 2
        MinCapacity: 2
        SecondsUntilAutoPause: 300 # 5 min
      DBSubnetGroupName:
        Ref: DBSubnetGroup
      VpcSecurityGroupIds:
        - !Ref RDSSG
# optional outputs useful for importing into another stack or viewing in the terminal on deploy
Outputs:
  StackName:
    Description: Aurora Stack Name
    Value: !Ref AWS::StackName
    Export:
      Name: !Sub ${AWS::StackName}-StackName
  DatabaseName:
    Description: Aurora Database Name
    Value: !Ref DatabaseName
    Export:
      Name: !Sub ${AWS::StackName}-DatabaseName
  DatabaseClusterArn:
    Description: Aurora Cluster ARN
    Value: !Sub arn:aws:rds:${AWS::Region}:${AWS::AccountId}:cluster:${DBClusterName}
    Export:
      Name: !Sub ${AWS::StackName}-DatabaseClusterArn
  DatabaseSecretArn:
    Description: Aurora Secret ARN
    Value: !Ref AuroraDBCreds
    Export:
      Name: !Sub ${AWS::StackName}-DatabaseSecretArn
  DatabaseClusterID:
    Description: Aurora Cluster ID
    Value: !Ref RDSCluster
    Export:
      Name: !Sub ${AWS::StackName}-DatabaseClusterID
  AuroraDbURL:
    Description: Aurora Database URL
    Value: !GetAtt RDSCluster.Endpoint.Address
    Export:
      Name: !Sub ${AWS::StackName}-DatabaseURL
  DatabaseMasterUserName:
    Description: Aurora Database User
    Value: !Ref DBMasterUserName
    Export:
      Name: !Sub ${AWS::StackName}-DatabaseMasterUserName
Here is what I did:
I assigned the same subnets and VPC to both services, Lambda and RDS.
Then I created a NAT gateway in one of the subnets so that the Lambda can use it to interact with the outside world.
The last thing is to add an inbound entry to the security group attached to both the RDS instance and the Lambda functions: whitelist the DB port (5432 in my case, for PostgreSQL) and put the security group's own name in the source.
In effect, the security group whitelists itself via that inbound rule.
This worked for me pretty well.
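The self-whitelisting rule described above can be sketched in CloudFormation like this (resource names and the VPC ID are placeholders; a standalone ingress resource is used so the group can reference itself without a circular dependency):

```yaml
# One SG shared by the Lambda and the RDS instance; members of the
# group may reach each other on the DB port. Names/IDs are placeholders.
SharedSG:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Shared by Lambda and RDS
    VpcId: vpc-0123456789abcdef0 # placeholder VPC ID
SharedSGSelfIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref SharedSG
    SourceSecurityGroupId: !Ref SharedSG # the group whitelists itself
    IpProtocol: tcp
    FromPort: 5432 # PostgreSQL; use 3306 for MySQL
    ToPort: 5432
```

Attach SharedSG to both the Lambda (VpcConfig) and the RDS instance, and traffic between them on 5432 is allowed without any IP-based rules.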
The recommended way is still (1) VPC plus the Data API; however, you can also go with (2) RDS Proxy (https://aws.amazon.com/blogs/compute/using-amazon-rds-proxy-with-aws-lambda/), which has supported both MySQL and PostgreSQL since June 30, 2020.
You don't need to use an IP.
I assume that your RDS is in a private subnet of the VPC. This means your Lambda should also be in the VPC to communicate with the database.
Let's assume the credentials for your RDS are in Secrets Manager. You can grant the necessary permission on the secret so that your Lambda can access the decrypted secret within the function.
Add a proper ingress rule to the database and make sure your security groups are configured properly. You can also use RDS Proxy to reuse DB connections for better performance.
This post talks about how to communicate with RDS from Lambda: https://www.freecodecamp.org/news/aws-lambda-rds/
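The permission grant mentioned above could be sketched as an inline policy on the Lambda execution role (DbSecret is a placeholder name for your AWS::SecretsManager::Secret resource):

```yaml
LambdaExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole # ENI permissions needed by VPC-attached Lambdas
    Policies:
      - PolicyName: ReadDbSecret
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: secretsmanager:GetSecretValue
              Resource: !Ref DbSecret # placeholder: your secret resource
```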
I am now getting a Failure for CodeBuild on the DOWNLOAD_SOURCE phase.
CLIENT_ERROR: RequestError: send request failed caused by: Get "https://codepipeline-us-east-1-215861945190.s3.amazonaws.com/diag-upload-pipe/SourceArti/jiUJWyf": dial tcp 52.217.106.244:443: i/o timeout for primary source and source version arn:aws:s3:::codepipeline-us-east-1-215861945190/diag-upload-pipe/SourceArti/jiUJWyf
I have tried adding S3 permissions for full access to no avail. I've also tried following the advice from Ryan Williams in the comments here: DOWNLOAD_SOURCE Failed AWS CodeBuild
Still unable to get past this error.
I have my VPC:
- Main route table for the VPC (rtb05b). Routes: 10.0.0.0/16 with a local target and 0.0.0.0/0 with a nat-0ad target. Subnet associations: subnet-0a7.
- subnet-0a7 routes 10.0.0.0/16 with a local target and 0.0.0.0/0 with a nat-0ad target.
- Mixed route table (rtb-026) routes 10.0.0.0/16 with a local target and 0.0.0.0/0 with an internet gateway igw-0305 target. Associated subnets for the mixed route table are a private and a public subnet.
I feel like there has to be a problem with the routing since there's an i/o timeout, but I can't for the life of me figure out where I went wrong.
I faced exactly the same problem.
In my case, it was due to the Security Group Egress setting in CodeBuild.
Here is what I did when I built the resource using CloudFormation.
Step 1: Create a SecurityGroup for CodeBuild
CodeBuildSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Security group for CodeBuild # GroupDescription is a required property
    VpcId: !Ref VPC
Step 2: Set up an egress rule to allow all outbound traffic from the security group created in Step 1.
CodeBuildEgressAllAccess:
  Type: AWS::EC2::SecurityGroupEgress
  Properties:
    GroupId: !Ref CodeBuildSecurityGroup
    CidrIp: '0.0.0.0/0'
    FromPort: -1
    ToPort: -1
    IpProtocol: '-1'
Step 3: Set up an egress to allow outbound traffic to connect to RDS MySQL.
CodeBuildEgressToMySQL:
  Type: AWS::EC2::SecurityGroupEgress
  Properties:
    GroupId: !Ref CodeBuildSecurityGroup
    DestinationSecurityGroupId: !Ref RdsMySQLSecurityGroup
    FromPort: 3306
    ToPort: 3306
    IpProtocol: tcp
When I deployed the stack with this content, the only outbound traffic allowed from the CodeBuild security group was to RDS MySQL.
The allow-all egress rule created in Step 2 was ignored, so other outbound traffic, such as to the Internet, S3, and other services, was denied.
Your build project environment should belong ONLY to a private subnet, which has a 0.0.0.0/0 route to a NAT gateway in its route table. Also check its security group to allow HTTPS requests.
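The routing described above might look like this in CloudFormation (all resource names here are placeholders for existing resources in your template):

```yaml
PrivateRouteTable:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref VPC # placeholder
DefaultRouteToNat:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref PrivateRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    NatGatewayId: !Ref NatGateway # placeholder; the NAT gateway itself lives in a public subnet
PrivateSubnetRouteAssoc:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref PrivateRouteTable
    SubnetId: !Ref CodeBuildPrivateSubnet # placeholder; attach the CodeBuild project to this subnet
```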
I am trying to set up EC2 Instance Connect for an EC2 instance:
AWSTemplateFormatVersion: 2010-09-09
Description: Part 1 - Spawn EC2 instance with CloudFormation
Resources:
  WebAppInstance:
    Type: AWS::EC2::Instance
    Properties:
      AvailabilityZone: us-east-2a
      ImageId: ami-074cce78125f09d61
      InstanceType: t2.micro
Although the template above allows me to create an EC2 instance, it does not allow me to access it using EC2 Instance Connect.
How do I configure EC2 Instance Connect within the CloudFormation template?
Solution
AWSTemplateFormatVersion: 2010-09-09
Description: Part 1 - Build a webapp stack with CloudFormation
Resources:
  WebAppInstance:
    Type: AWS::EC2::Instance
    Properties:
      AvailabilityZone: us-east-2a
      ImageId: ami-074cce78125f09d61
      InstanceType: t2.micro
      SecurityGroupIds:
        - !Ref WebAppSecurityGroup
  WebAppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Join ["-", [webapp-security-group, dev]]
      GroupDescription: "Allow HTTP/HTTPS and SSH inbound and outbound traffic"
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0
  WebAppEIP:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
      InstanceId: !Ref WebAppInstance
      Tags:
        - Key: Name
          Value: !Join ["-", [webapp-eip, dev]]
Outputs:
  WebsiteURL:
    Value: !Sub http://${WebAppEIP}
    Description: WebApp URL
On Amazon Linux 2 (any version) and Ubuntu 16.04 or later, EC2 Instance Connect is installed and working by default, so you don't have to do anything.
For other AMIs, you have to use user data to install and set up EC2 Instance Connect yourself.
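For a yum-based AMI, the user-data installation might look like the sketch below (the package name ec2-instance-connect is the one used on Amazon Linux; other distributions may need a different procedure):

```yaml
WebAppInstance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: ami-074cce78125f09d61
    InstanceType: t2.micro
    UserData:
      Fn::Base64: |
        #!/bin/bash
        # install EC2 Instance Connect on first boot
        yum install -y ec2-instance-connect
```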
Ensure you have a public IP assigned.
As per the docs:
To connect using the Amazon EC2 console (browser-based client), the instance must have a public IPv4 address.
You can also connect to the EC2 instance via other methods if you do not want to, or cannot, assign a public IPv4 address:
If the instance does not have a public IP address, you can connect to the instance over a private network using an SSH client or the EC2 Instance Connect CLI. For example, you can connect from within the same VPC or through a VPN connection, transit gateway, or AWS Direct Connect.
FYI: for AMIs with Linux distributions other than Amazon Linux 2 or Ubuntu 16.04+, you will need extra configuration, as Marcin's answer points out.
ami-074cce78125f09d61 in us-east-2 is coming up for me as Amazon Linux 2 AMI (HVM), SSD Volume Type which supports EC2 Instance Connect by default, so your AMI should be fine.
Problem:
While creating a security group using a CloudFormation template, it fails with a VPCIdNotSpecified error even though I have provided the VPC ID as an input.
Error Message:
No default VPC for this user (Service: AmazonEC2; Status Code: 400; Error Code: VPCIdNotSpecified; Request ID: d45efd39-16ce-4c0c-9e30-746b39f4ff44; Proxy: null)
Background:
I have deleted the default VPC that comes with the account and created my own VPC. I am getting the VPC ID as a parameter input, and I used the AWS CLI to validate the template, which is fine.
All the input parameters were fetched correctly and shown on the summary page of the CloudFormation creation. It even shows the VPC ID, which matches.
Code :
Parameters:
  VPCName:
    Description: Enter the VPC that you want to launch the instance
    Type: AWS::EC2::VPC::Id
    ConstraintDescription: VPC must be already existing
Resources:
  HANASG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: XSASG
      GroupDescription: This will allow connections between your RDP instance & HANA Box
      **VpcId: !Ref VPCName**
      SecurityGroupIngress:
        - IpProtocol: tcp
          SourceSecurityGroupName: !Ref RdpSgName
          FromPort: 0
          ToPort: 65535
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: 0
          ToPort: 65535
          CidrIp: 0.0.0.0/0
      Tags:
        - Key: Name
          Value: HANAXSASG
I would suggest re-creating the default VPC in the VPC section of the console, per Amazon's instructions. It's a good idea NOT to use the default VPC and to create and configure your own, as you describe. Internally there is something special about the default VPC that is not exposed via the console or API. I suspect that is the root cause of your issue, and creating a new default VPC should fix it.
AFAIK there's no issue in renaming the default VPC (mine are named "Default VPC - DO NOT USE").
The scope of an SG is limited to a VPC, so VpcId is a mandatory field when creating a SecurityGroup.
It may be an item under EC2, but its scope is within a VPC. You cannot create an SG without specifying a VPC, just like you can't create an EC2 instance without specifying its subnet and VPC.
Can you remove the ** and try?
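One more thing worth checking: SourceSecurityGroupName is only valid for the default VPC (EC2-Classic); inside a custom VPC you must reference the source group by ID, and using the name is a common trigger for this exact "No default VPC" error. A sketch of the corrected ingress rule, assuming RdpSg is a security group resource defined in the same template:

```yaml
SecurityGroupIngress:
  - IpProtocol: tcp
    SourceSecurityGroupId: !GetAtt RdpSg.GroupId # use the group ID, not the name, inside a VPC
    FromPort: 0
    ToPort: 65535
```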
I have the following cloudformation stack which defines an ECS Service:
ApiService:
  Type: AWS::ECS::Service
  DependsOn:
    - LoadBalancerListener80
    - LoadBalancerListener443
  Properties:
    Cluster: !Ref EcsClusterArn
    DeploymentConfiguration:
      MaximumPercent: 200
      MinimumHealthyPercent: 100
    DeploymentController:
      Type: ECS
    DesiredCount: 1
    HealthCheckGracePeriodSeconds: 10
    LaunchType: FARGATE
    LoadBalancers:
      - ContainerName: !Join ['-', ['container', !Ref AWS::StackName]]
        ContainerPort: !Ref Port
        TargetGroupArn: !Ref LoadBalancerTargetGroup
    NetworkConfiguration:
      AwsvpcConfiguration:
        AssignPublicIp: ENABLED # <-- if disabled, pulling from ecr registry fails
        SecurityGroups:
          - !Ref ApiServiceContainerSecurityGroup
        Subnets: !Ref Subnets
    SchedulingStrategy: REPLICA
    ServiceName: !Ref AWS::StackName
    TaskDefinition: !Ref ApiServiceTaskDefinition
I've noticed that without enabling public IP auto-assignment, service tasks are unable to pull the Docker image from the ECR registry. I don't understand why the containers need a public IP to pull images from the registry... the service security group allows all outbound traffic, the subnets can access the internet through an internet gateway, and the IAM role allows pulling from ECR... so why the need for a public IP?
I don't want my containers to have a public IP; they should be reachable only inside the VPC. Or have I misunderstood, and it's only the task that receives a public IP (for whatever reason) while the containers remain private inside the VPC?
"the IAM role allows pulling from ECR"
The IAM role just gives it permission, it doesn't provide a network connection.
"the subnets can access the internet through an internet gateway"
I think you'll find that the Internet Gateway only provides Internet Access to resources with a public IP assigned to them.
ECR is a service that exists outside your VPC, so you need one of the following for the network connection to ECR to be established:
Public IP.
NAT Gateway, with a route to the NAT Gateway in the subnet.
ECR Interface VPC Endpoint, with a route to the endpoint in the subnet.
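The third option can be sketched as follows (parameter names are placeholders; note that an S3 gateway endpoint is also needed because ECR serves image layers from S3, and the endpoint security group must allow HTTPS from the task SG):

```yaml
EcrApiEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcId: !Ref VpcId # placeholder
    ServiceName: !Sub com.amazonaws.${AWS::Region}.ecr.api
    VpcEndpointType: Interface
    SubnetIds: !Ref Subnets
    SecurityGroupIds: [!Ref EndpointSG] # must allow 443 from the task SG
    PrivateDnsEnabled: true
EcrDkrEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcId: !Ref VpcId
    ServiceName: !Sub com.amazonaws.${AWS::Region}.ecr.dkr
    VpcEndpointType: Interface
    SubnetIds: !Ref Subnets
    SecurityGroupIds: [!Ref EndpointSG]
    PrivateDnsEnabled: true
S3GatewayEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcId: !Ref VpcId
    ServiceName: !Sub com.amazonaws.${AWS::Region}.s3
    RouteTableIds: [!Ref PrivateRouteTable] # placeholder route table of the task subnets
```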
My CloudFormation YAML for an Auto Scaling group keeps creating EC2 instances in the default VPC even after I specify a custom VPC. Here are the snippets of code:
TargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Port: 80
    Protocol: HTTP
    VpcId: !Ref VpcId
Parameters section:
VpcId:
  Description: Enter the VpcId
  Type: AWS::EC2::VPC::Id
  Default: vpc-0ed238eeecc11b493
I keep seeing termination of EC2 instances because the launch config is, for some reason, creating the instances in the default VPC even though I have specified the custom VPC in the parameters section. I don't know why it is not taking the custom VPC. When I check the security groups and launch config in the AWS console, they show the custom VPC, but when I check the EC2 instance launched by the Auto Scaling group, I see the default VPC.
My default VPC is vpc-6a79470d and my custom VPC is vpc-0ed238eeecc11b493.
The error I see in the Auto Scaling group section of the console is:
Description: Launching a new EC2 instance: i-041b680f6470379e3.
Status Reason: Failed to update target group arn:aws:elasticloadbalancing:us-west-1:targetgroup/ALBTe-Targe-7DMLWW46T1E6/f74a31d17bf3c4dc:
The following targets are not in the target group VPC 'vpc-0ed238eeecc11b493': 'i-041b680f6470379e3'. Updating load balancer configuration failed.
I hope someone can help point out what I am doing wrong. I see in the AWS documentation that an ASG launches in the default VPC by default, but there must be a way to do it in CloudFormation if it is possible to do it through the console.
=============================== After update ===============================
Here's how it looks now after adding VPCZoneIdentifier. Not sure what I am doing wrong; I'm now getting an issue with the security group.
AutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    AvailabilityZones: !GetAZs
    VPCZoneIdentifier: !Ref SubnetIds
    LaunchConfigurationName: !Ref LaunchConfiguration
    MinSize: 1
    MaxSize: 3
    TargetGroupARNs:
      - !Ref TargetGroup
LaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    KeyName: !Ref KeyName
    InstanceType: t2.micro
    SecurityGroups:
      - !Ref EC2SecurityGroup
    ImageId:
      Fn::FindInMap:
        - RegionMap
        - !Ref AWS::Region
        - AMI
LaunchConfiguration --region ${AWS::Region}
ALBSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: ALB Security Group
    VpcId: !Ref VpcId
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0
EC2SecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: EC2 Instance
In your ASG you usually would define VPCZoneIdentifier:
A list of subnet IDs for a virtual private cloud (VPC). If you specify VPCZoneIdentifier with AvailabilityZones, the subnets that you specify for this property must reside in those Availability Zones.
The example is as follows:
Parameters:
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
    Description: Subnet IDs for ASG
Resources:
  MyASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      # ... other properties
      VPCZoneIdentifier: !Ref SubnetIds
The snippet you provided is for the target group of the load balancer.
This error occurs because the subnets attached to your Auto Scaling group are not within the same VPC as your target group.
Use a parameter of type List<AWS::EC2::Subnet::Id> to specify the subnets for your Auto Scaling group, and assign that parameter's value to the group's VPCZoneIdentifier property.
More information on this parameter type is available here.
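Putting the pieces together, a minimal sketch (AMI ID and sizes are placeholders) that keeps the security group, the subnets, and therefore the instances all in the same custom VPC:

```yaml
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
Resources:
  EC2SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: EC2 Instance
      VpcId: !Ref VpcId # without this, the SG is created in the default VPC
  LaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-074cce78125f09d61 # placeholder AMI
      InstanceType: t2.micro
      SecurityGroups:
        - !Ref EC2SecurityGroup
  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      VPCZoneIdentifier: !Ref SubnetIds # these subnets determine the VPC the instances launch in
      LaunchConfigurationName: !Ref LaunchConfiguration
      MinSize: '1'
      MaxSize: '3'
```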