SQS Interface Endpoint in CDK

Working with AWS CDK, I had to move my Lambda that writes to SQS inside a VPC. I added an interface endpoint to allow a direct connection from the VPC to SQS with:
props.vpc.addInterfaceEndpoint('sqs-gateway', {
  service: InterfaceVpcEndpointAwsService.SQS,
  subnets: {
    subnetType: SubnetType.PRIVATE,
  },
})
The Lambda is deployed to that same VPC (to the same private subnets by default), and I pass the QUEUE_URL as an environment variable, as I did without the VPC:
const ingestLambda = new lambda.Function(this, 'TTPIngestFunction', {
  ...
  environment: {
    QUEUE_URL: queue.queueUrl,
  },
  vpc: props.vpc,
})
and the Lambda code sends messages simply with:
const sqs = new AWS.SQS({ region: process.env.AWS_REGION })

return sqs
  .sendMessageBatch({
    QueueUrl: process.env.QUEUE_URL as string,
    Entries: entries,
  })
  .promise()
Without the VPC, sending works, but now the Lambda just times out when sending the SQS messages. What am I missing here?

By default, interface VPC endpoints create a new security group and traffic is not
automatically allowed from the VPC CIDR.
You can do as follows if you want to allow traffic from your Lambda:
const sqsEndpoint = props.vpc.addInterfaceEndpoint('sqs-gateway', {
  service: InterfaceVpcEndpointAwsService.SQS,
});

sqsEndpoint.connections.allowDefaultPortFrom(ingestLambda);
Alternatively, you can allow all traffic:
sqsEndpoint.connections.allowDefaultPortFromAnyIpv4();
This default behavior is currently under discussion in https://github.com/aws/aws-cdk/pull/4938.
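If you prefer to manage the endpoint's security group yourself instead of wiring connections afterwards, you can also pass one explicitly when creating the endpoint. This is only a sketch: the construct IDs and the `scope`/`vpc` variables are illustrative, not from the question.

```typescript
import { aws_ec2 as ec2 } from 'aws-cdk-lib';
import { Construct } from 'constructs';

declare const scope: Construct;
declare const vpc: ec2.Vpc;

// An explicit security group that only allows HTTPS from inside the VPC.
const endpointSg = new ec2.SecurityGroup(scope, 'SqsEndpointSg', { vpc });
endpointSg.addIngressRule(
  ec2.Peer.ipv4(vpc.vpcCidrBlock),
  ec2.Port.tcp(443),
  'Allow HTTPS from within the VPC'
);

// Attach it to the interface endpoint instead of the auto-created group.
vpc.addInterfaceEndpoint('sqs-endpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.SQS,
  securityGroups: [endpointSg],
});
```

This keeps the allowed traffic scoped to the VPC CIDR without depending on the endpoint's default security group behavior.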

Related

CodeBuild in private subnets has a connection issue to S3

I added CodeBuild to my VPC's private subnets because I want to give it access to an RDS cluster in those subnets.
However, I got the following error in CodeBuild:
CLIENT_ERROR: RequestError: send request failed caused by: Get "https://abcdatabase-schema-abcdatabaseschemap-jatjfe01aqps.s3.amazonaws.com/abcDatabase-Sc/Artifact_S/LrJBqSR.zip": dial tcp 52.217.128.121:443: i/o timeout for primary source and source version arn:aws:s3:::abcdatabase-schema-abcdatabaseschemap-jatjfe01aqps/abcDatabase-Sc/Artifact_S/LrJBqSR.zip
It is a TCP 443 timeout.
It looks like CodeBuild is trying to download the pipeline artifact from the S3 bucket but hits a connection timeout, which means there is a network connection issue between my CodeBuild and S3. However, I added an S3 VPC endpoint to my VPC, which is supposed to provide that network connection: https://docs.aws.amazon.com/codebuild/latest/userguide/use-vpc-endpoints-with-codebuild.html.
According to CodeBuild unable to fetch an S3 object despite having Administrator Access, as long as I have an S3 VPC endpoint set up, I don't need a NAT.
The code below shows how I added the S3 VPC endpoint.
Code
VPC stack
private readonly coreVpc: EC2.Vpc;

constructor(scope: CDK.App, id: string, props?: VpcStackStackProps) {
  super(scope, id, props);
  const vpcName: string = "CoreVpc";

  // Create VPC
  this.coreVpc = new EC2.Vpc(this, "CoreVpc", {
    vpcName: vpcName,
    cidr: "10.0.0.0/16",
    enableDnsHostnames: true,
    enableDnsSupport: true,
    maxAzs: 3, // 3 availability zones
    // Each zone will have one public subnet and one private subnet.
    subnetConfiguration: [
      {
        cidrMask: 19,
        name: "PublicSubnet",
        subnetType: EC2.SubnetType.PUBLIC,
      },
      {
        cidrMask: 19,
        name: "PrivateSubnet",
        subnetType: EC2.SubnetType.PRIVATE_ISOLATED,
      },
    ],
  });

  // Create security group for the VPC
  const vpcEndpointSecurityGroup = new EC2.SecurityGroup(
    this,
    `${vpcName}-VPCEndpointSecurityGroup`,
    {
      securityGroupName: `${vpcName}-VPCEndpointSecurityGroup`,
      vpc: this.coreVpc,
      description: "Security group for granting AWS services access to the CoreVpc",
      allowAllOutbound: false,
    }
  );

  vpcEndpointSecurityGroup.addIngressRule(
    EC2.Peer.ipv4(this.coreVpc.vpcCidrBlock),
    EC2.Port.tcp(443),
    "Allow HTTPS ingress traffic"
  );

  vpcEndpointSecurityGroup.addEgressRule(
    EC2.Peer.ipv4(this.coreVpc.vpcCidrBlock),
    EC2.Port.tcp(443),
    "Allow HTTPS egress traffic"
  );

  const privateSubnets = this.coreVpc.selectSubnets({
    subnetType: EC2.SubnetType.PRIVATE_ISOLATED,
  });

  // Grant AWS CodeBuild service access to the VPC's private subnets.
  new EC2.InterfaceVpcEndpoint(this, 'CodeBuildInterfaceVpcEndpoint', {
    service: EC2.InterfaceVpcEndpointAwsService.CODEBUILD,
    vpc: this.coreVpc,
    privateDnsEnabled: true,
    securityGroups: [vpcEndpointSecurityGroup],
    subnets: privateSubnets,
  });

  // Grant VPC access to the S3 service.
  new EC2.GatewayVpcEndpoint(this, 'S3InterfaceVpcEndpoint', {
    service: EC2.GatewayVpcEndpointAwsService.S3,
    vpc: this.coreVpc,
  });
}
CodeBuild stack
export class CodeBuildStack extends CDK.Stack {
  constructor(scope: Construct, id: string, props: CodeBuildStackProps) {
    super(scope, id, props);

    const buildspecFile = FS.readFileSync("./config/buildspec.yml", "utf-8");
    const buildspecFileYaml = YAML.parse(buildspecFile, {
      prettyErrors: true,
    });

    // Grant write permissions to the DeploymentRole to the artifact S3 bucket.
    const deploymentRoleArn: string = `arn:aws:iam::${props.env?.account}:role/${props.pipelineName}-DeploymentRole`;
    const deploymentRole = IAM.Role.fromRoleArn(
      this,
      `CodeBuild${props.pipelineStageInfo.stageName}DeploymentRoleConstructID`,
      deploymentRoleArn,
      {
        mutable: false,
        // Causes CDK to update the resource policy where required, instead of the Role
        addGrantsToResources: true,
      }
    );

    const coreVpc: EC2.IVpc = EC2.Vpc.fromLookup(
      this,
      `${props.pipelineStageInfo.stageName}VpcLookupId`,
      {
        vpcName: "CoreVpc",
      }
    );

    const securityGroupForVpc: EC2.ISecurityGroup =
      EC2.SecurityGroup.fromLookupByName(
        this,
        "SecurityGroupLookupForVpcEndpoint",
        "CoreVpc-VPCEndpointSecurityGroup",
        coreVpc
      );

    new CodeBuild.Project(
      this,
      `${props.pipelineName}-${props.pipelineStageInfo.stageName}-ColdBuild`,
      {
        projectName: `${props.pipelineName}-${props.pipelineStageInfo.stageName}-ColdBuild`,
        environment: {
          buildImage: CodeBuild.LinuxBuildImage.STANDARD_5_0,
        },
        buildSpec: CodeBuild.BuildSpec.fromObjectToYaml(buildspecFileYaml),
        vpc: coreVpc,
        securityGroups: [securityGroupForVpc],
        role: deploymentRole,
      }
    );
  }
}
The issue in my code is this part. The security group's egress rule only allows TCP 443 traffic to my VPC's CIDR block, but the S3 addresses CodeBuild needs to reach are not inside my VPC's CIDR block.
vpcEndpointSecurityGroup.addEgressRule(
  EC2.Peer.ipv4(this.coreVpc.vpcCidrBlock),
  EC2.Port.tcp(443),
  "Allow TCP egress traffic"
);
It works after changing it to:
vpcEndpointSecurityGroup.addEgressRule(
  EC2.Peer.anyIpv4(),
  EC2.Port.allTcp(),
  "Allow TCP egress traffic"
);
There are other ways to make it work:
Alternative 1:
Changing the rules based on @gshpychka's comments also works.
Alternative 2:
Make allowAllOutbound true. In this way you don't need to specify any egress rule any more.
If this is set to true, there will only be a single egress rule which allows all outbound traffic. If this is set to false, no outbound traffic will be allowed by default and all egress traffic must be explicitly authorized.
const vpcEndpointSecurityGroup = new EC2.SecurityGroup(
  this,
  `${vpcName}-VPCEndpointSecurityGroup`,
  {
    securityGroupName: `${vpcName}-VPCEndpointSecurityGroup`,
    vpc: this.coreVpc,
    description: "Security group for granting AWS services access to the CoreVpc",
    allowAllOutbound: true,
  }
);
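A tighter alternative (not from the thread, offered here as a sketch): traffic to an S3 gateway endpoint matches the AWS-managed S3 prefix list for the region, so the egress rule can be scoped to that prefix list instead of being opened to any IPv4 address. The `pl-xxxxxxxx` ID below is a placeholder; the real ID is region-specific and can be found under "Managed prefix lists" in the VPC console.

```typescript
// Scope egress to the AWS-managed S3 prefix list instead of 0.0.0.0/0.
// 'pl-xxxxxxxx' is a placeholder for your region's S3 prefix list ID.
vpcEndpointSecurityGroup.addEgressRule(
  EC2.Peer.prefixList('pl-xxxxxxxx'),
  EC2.Port.tcp(443),
  'Allow HTTPS egress to S3 via the gateway endpoint'
);
```

This keeps `allowAllOutbound: false` meaningful while still letting CodeBuild fetch artifacts from S3.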

ECS task unable to pull secrets or registry auth

I have a CDK project that creates a CodePipeline which deploys an application on ECS. I had it all previously working, but the VPC was using a NAT gateway, which ended up being too expensive. So now I am trying to recreate the project without requiring a NAT gateway. I am almost there, but I have now run into issues when the ECS service is trying to start tasks. All tasks fail to start with the following error:
ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve secret from asm: service call has been retried 5 time(s): failed to fetch secret
At this point I've kind of lost track of the different things I have tried, but I will post the relevant bits here as well as some of my attempts.
const repository = ECR.Repository.fromRepositoryAttributes(
  this,
  "ecr-repository",
  {
    repositoryArn: props.repository.arn,
    repositoryName: props.repository.name,
  }
);

// vpc
const vpc = new EC2.Vpc(this, this.resourceName(props, "vpc"), {
  maxAzs: 2,
  natGateways: 0,
  enableDnsSupport: true,
});

const vpcSecurityGroup = new SecurityGroup(this, "vpc-security-group", {
  vpc: vpc,
  allowAllOutbound: true,
});

// tried this to allow the task to access secrets manager
const vpcEndpoint = new EC2.InterfaceVpcEndpoint(this, "secrets-manager-task-vpc-endpoint", {
  vpc: vpc,
  service: EC2.InterfaceVpcEndpointAwsService.SSM,
});

const secrets = SecretsManager.Secret.fromSecretCompleteArn(
  this,
  "secrets",
  props.secrets.arn
);

const cluster = new ECS.Cluster(this, this.resourceName(props, "cluster"), {
  vpc: vpc,
  clusterName: `api-cluster`,
});

const ecsService = new EcsPatterns.ApplicationLoadBalancedFargateService(
  this,
  "ecs-service",
  {
    taskSubnets: {
      subnetType: SubnetType.PUBLIC,
    },
    securityGroups: [vpcSecurityGroup],
    serviceName: "api-service",
    cluster: cluster,
    cpu: 256,
    desiredCount: props.scaling.desiredCount,
    taskImageOptions: {
      image: ECS.ContainerImage.fromEcrRepository(
        repository,
        this.ecrTagNameParameter.stringValue
      ),
      secrets: getApplicationSecrets(secrets), // returns
      logDriver: LogDriver.awsLogs({
        streamPrefix: "api",
        logGroup: new LogGroup(this, "ecs-task-log-group", {
          logGroupName: `${props.environment}-api`,
        }),
        logRetention: RetentionDays.TWO_MONTHS,
      }),
    },
    memoryLimitMiB: 512,
    publicLoadBalancer: true,
    domainZone: this.hostedZone,
    certificate: this.certificate,
    redirectHTTP: true,
  }
);

const scalableTarget = ecsService.service.autoScaleTaskCount({
  minCapacity: props.scaling.desiredCount,
  maxCapacity: props.scaling.maxCount,
});

scalableTarget.scaleOnCpuUtilization("cpu-scaling", {
  targetUtilizationPercent: props.scaling.cpuPercentage,
});

scalableTarget.scaleOnMemoryUtilization("memory-scaling", {
  targetUtilizationPercent: props.scaling.memoryPercentage,
});

secrets.grantRead(ecsService.taskDefinition.taskRole);
repository.grantPull(ecsService.taskDefinition.taskRole);
I read somewhere that it probably has something to do with Fargate version 1.4.0 vs 1.3.0, but I'm not sure what I need to change to allow the tasks to access what they need to run.
You need to create interface endpoints for Secrets Manager, ECR (two types of endpoints), and CloudWatch Logs, as well as a gateway endpoint for S3.
Refer to the documentation on the topic.
Here's an example in Python, it'd work the same in TS:
vpc.add_interface_endpoint(
    "secretsmanager_endpoint",
    service=ec2.InterfaceVpcEndpointAwsService.SECRETS_MANAGER,
)
vpc.add_interface_endpoint(
    "ecr_docker_endpoint",
    service=ec2.InterfaceVpcEndpointAwsService.ECR_DOCKER,
)
vpc.add_interface_endpoint(
    "ecr_endpoint",
    service=ec2.InterfaceVpcEndpointAwsService.ECR,
)
vpc.add_interface_endpoint(
    "cloudwatch_logs_endpoint",
    service=ec2.InterfaceVpcEndpointAwsService.CLOUDWATCH_LOGS,
)
vpc.add_gateway_endpoint(
    "s3_endpoint",
    service=ec2.GatewayVpcEndpointAwsService.S3,
)
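Since the question (and the rest of this thread) uses TypeScript, the same endpoints can be sketched there as well. Construct IDs are illustrative, and `vpc` is assumed to be the `EC2.Vpc` from the question:

```typescript
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

// Interface endpoints: Secrets Manager, ECR (API + Docker), CloudWatch Logs.
vpc.addInterfaceEndpoint('SecretsManagerEndpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.SECRETS_MANAGER,
});
vpc.addInterfaceEndpoint('EcrDockerEndpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.ECR_DOCKER,
});
vpc.addInterfaceEndpoint('EcrEndpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.ECR,
});
vpc.addInterfaceEndpoint('CloudWatchLogsEndpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.CLOUDWATCH_LOGS,
});

// Gateway endpoint for S3 (ECR stores image layers in S3).
vpc.addGatewayEndpoint('S3Endpoint', {
  service: ec2.GatewayVpcEndpointAwsService.S3,
});
```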
Keep in mind that interface endpoints cost money as well, and may not be cheaper than a NAT.

How to add AWS Lambda to VPC using javascript

I need to add my Lambda function to a VPC, but using JavaScript code, not from the AWS console.
I'm creating my lambda in AWS CDK stack like that:
const myLambda = new lambda.Function(this, 'lambda-id', {
  code: code,
  handler: handler,
  runtime: runtime,
  ...
  vpc: // <- what goes here?
})
I think I need to pass a VPC as an argument to this function. Is there a way to fetch this VPC by its ID and then pass it as an argument?
You can import an existing VPC by ID and provide it as an attribute on your lambda like:
const vpc = ec2.Vpc.fromLookup(this, 'VPC', {
  vpcId: 'your vpc id'
})

const myLambda = new lambda.Function(this, 'your-lambda', {
  code,
  handler,
  runtime,
  ...,
  vpc
})

AWS How to invoke SSM Param Store using a Private DNS endpoint from a Lambda function (Node.js)

I have a requirement where credentials need to be stored in SSM Param Store and read by a Lambda function that sits inside a VPC, and all the subnets in my VPC are public.
When I call SSM Param Store using the code below, I get a timed-out error.
const AWS = require('aws-sdk');

AWS.config.update({
  region: 'us-east-1'
})

const parameterStore = new AWS.SSM();

exports.handler = async (event, context, callback) => {
  console.log('calling param store');
  const param = await getParam('/my/param/name')
  console.log('param : ', param);
  // Send API Response
  return {
    statusCode: '200',
    body: JSON.stringify('able to connect to param store'),
    headers: {
      'Content-Type': 'application/json',
    },
  };
};

const getParam = param => {
  return new Promise((res, rej) => {
    parameterStore.getParameter({
      Name: param
    }, (err, data) => {
      if (err) {
        return rej(err)
      }
      return res(data)
    })
  })
}
So I created a VPC endpoint for Secrets Manager with Private DNS names enabled.
I am still getting a timed-out error with the above code.
Do I need to change the Lambda code to specify the Private DNS endpoint in the Lambda function?
(Images of the outbound rules for the subnet NACL and the security group omitted.)
I managed to fix this issue. The root cause of the problem was that all the subnets were public. Since VPC endpoints are accessed privately, without the internet, the subnets associated with the Lambda function should be private.
Here are the steps I took to fix the issue:
1. Created a NAT gateway inside the VPC and assigned an Elastic IP to it
2. Created a new route table and pointed all traffic to the NAT gateway created in step 1
3. Attached the new route table to a couple of subnets (which made them private)
4. Attached only those private subnets to the Lambda function
Other than this, the IAM role associated with the Lambda function needs the below two policies to access SSM Param Store:
AmazonSSMReadOnlyAccess
AWSLambdaVPCAccessExecutionRole
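For anyone reproducing this fix in CDK (the tool used in the other threads here), the same topology can be sketched as below. This is a sketch only: the construct IDs, runtime, and asset path are illustrative, not from the original post.

```typescript
import { Stack, aws_ec2 as ec2, aws_iam as iam, aws_lambda as lambda } from 'aws-cdk-lib';
import { Construct } from 'constructs';

// A VPC whose private subnets route out through a NAT gateway,
// with the Lambda attached to those private subnets.
class ParamReaderStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const vpc = new ec2.Vpc(this, 'Vpc', {
      natGateways: 1,
      subnetConfiguration: [
        { name: 'public', subnetType: ec2.SubnetType.PUBLIC },
        { name: 'private', subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
      ],
    });

    const fn = new lambda.Function(this, 'ParamReader', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'), // hypothetical asset path
      vpc,
      vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
    });

    // SSM read access; the VPC access execution policy is attached
    // automatically when a function is placed in a VPC.
    fn.role?.addManagedPolicy(
      iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonSSMReadOnlyAccess')
    );
  }
}
```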

Create an AWS RDS instance without NAT Gateway using CDK

Is it possible to create a serverless RDS cluster via CDK without a NAT gateway? The NAT gateway base charge is pretty expensive for a development environment, and I'm not interested in setting up a NAT instance either. I'm attaching a Lambda in the VPC with the RDS instance like this:
// VPC
const vpc = new ec2.Vpc(this, 'MyVPC');

// RDS
const dbCluster = new rds.ServerlessCluster(this, 'MyAuroraCluster', {
  engine: rds.DatabaseClusterEngine.AURORA_MYSQL,
  defaultDatabaseName: 'DbName',
  vpc,
});
Yes, you can. You may have to add some VPC endpoints (such as Secrets Manager, so password rotation can be done), but it is possible. You will also need to create a VPC whose subnets have no NAT gateway.
// VPC
const vpc = new ec2.Vpc(this, 'MyVPC', {
  natGateways: 0,
  subnetConfiguration: [
    {
      cidrMask: 24,
      name: 'public',
      subnetType: ec2.SubnetType.PUBLIC,
    },
    {
      cidrMask: 28,
      name: 'rds',
      subnetType: ec2.SubnetType.ISOLATED,
    },
  ],
});

// RDS
const dbCluster = new rds.ServerlessCluster(this, 'MyAuroraCluster', {
  engine: rds.DatabaseClusterEngine.AURORA_MYSQL,
  defaultDatabaseName: 'DbName',
  vpcSubnets: {
    subnetType: ec2.SubnetType.ISOLATED,
  },
  vpc,
});
If you want Secrets Manager controlled password, use:
vpc.addInterfaceEndpoint('SecretsManagerEndpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.SECRETS_MANAGER,
});
dbCluster.addRotationSingleUser();
Serverless clusters cannot be placed into a public subnet.
This is a hard and documented limitation of RDS Serverless.