How to add AWS Lambda to a VPC using JavaScript - amazon-web-services

I need to add my Lambda function to a VPC using JavaScript code, not from the AWS console.
I'm creating my Lambda in an AWS CDK stack like this:
const myLambda = new lambda.Function(this, 'lambda-id', {
  code: code,
  handler: handler,
  runtime: runtime,
  ...
  vpc: ???
})
I think I need to pass the VPC as an argument to this function. Is there any way to fetch this VPC by its ID and then pass it as an argument?

You can import an existing VPC by ID and provide it as a property on your Lambda like so:
const vpc = ec2.Vpc.fromLookup(this, 'VPC', {
  vpcId: 'your vpc id'
})

const myLambda = new lambda.Function(this, 'your-lambda', {
  code,
  handler,
  runtime,
  ...,
  vpc
})

Related

Cannot add ManagedPolicy to the lambda that is created in the same stack

I'm new to the AWS CDK and I'm trying to set up a Lambda with a few AWS managed policies.
Lambda configuration:
this.lambdaFunction = new Function(this, 'LambdaName', {
  functionName: 'LambdaName',
  description: `Timestamp: ${new Date().toISOString()}`,
  code: ...,
  handler: '...',
  memorySize: 512,
  timeout: Duration.seconds(30),
  vpc: ...,
  runtime: Runtime.PYTHON_3_8,
});
I want to add the AmazonRedshiftDataFullAccess managed policy to the Lambda role, but I couldn't find a way to do it, as addToRolePolicy supports only a PolicyStatement and not a ManagedPolicy.
I tried the following, but it errored out saying the role may be undefined:
this.lambdaFunction.role
  .addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName("service-role/AmazonRedshiftDataFullAccess"));
Could anyone help me understand what is the right way to add a ManagedPolicy to the default role that gets created with the lambda function?
OK, I made a couple of mistakes:
It is AmazonRedshiftDataFullAccess, not service-role/AmazonRedshiftDataFullAccess.
As the role is optional here, I should have used optional chaining (?.).
The following worked for me,
this.lambdaFunction.role
  ?.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName("AmazonRedshiftDataFullAccess"));
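To see why the `?.` matters: in the CDK typings, `lambdaFunction.role` can be undefined, and optional chaining simply skips the call in that case instead of throwing. A minimal plain-JavaScript sketch of that behavior (the objects below are illustrative stand-ins, not real CDK constructs):

```javascript
// Stand-ins for a Lambda construct whose `role` may or may not be defined.
const withRole = {
  role: {
    policies: [],
    addManagedPolicy(p) { this.policies.push(p); },
  },
};
const withoutRole = {}; // role is undefined, as the TypeScript error warned

function addPolicyTo(fn) {
  // Without `?.`, this line would throw a TypeError when fn.role is undefined.
  fn.role?.addManagedPolicy('AmazonRedshiftDataFullAccess');
}

addPolicyTo(withRole);    // policy recorded on the role
addPolicyTo(withoutRole); // silently does nothing, no TypeError

console.log(withRole.role.policies); // ['AmazonRedshiftDataFullAccess']
```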
It's a three-step process:
Create a role for the Lambda.
Create the Lambda and attach the role to it.
Add the AWS managed policy (make sure its name is correct) to the role.
Example:
const myRole = new iam.Role(this, 'My Role', {
  assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
});

const fn = new lambda.Function(this, 'MyFunction', {
  runtime: lambda.Runtime.NODEJS_16_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset(path.join(__dirname, 'lambda-handler')),
  role: myRole, // user-provided role
});

myRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName("AmazonRedshiftDataFullAccess"));

AWS How to Invoke SSM Param Store using Private DNS Endpoint from Lambda function Nodejs

I have a requirement where credentials need to be stored in the SSM Parameter Store and read by a Lambda function that sits inside a VPC, and all the subnets in my VPC are public subnets.
When I call the SSM Parameter Store using the code below, I get a timeout error.
const AWS = require('aws-sdk');

AWS.config.update({
  region: 'us-east-1'
})
const parameterStore = new AWS.SSM();

exports.handler = async (event, context, callback) => {
  console.log('calling param store');
  const param = await getParam('/my/param/name')
  console.log('param : ', param);
  // Send API response
  return {
    statusCode: '200',
    body: JSON.stringify('able to connect to param store'),
    headers: {
      'Content-Type': 'application/json',
    },
  };
};

const getParam = param => {
  return new Promise((res, rej) => {
    parameterStore.getParameter({
      Name: param
    }, (err, data) => {
      if (err) {
        return rej(err)
      }
      return res(data)
    })
  })
}
So I created a VPC endpoint for Secrets Manager with the private DNS name enabled.
I am still getting a timeout error with the code above.
Do I need to change the Lambda code to specify the private DNS endpoint in the Lambda function?
[Images of the outbound rules for the subnet NACL and the security group omitted.]
I managed to fix this issue. The root cause of the problem was that all the subnets were public. Since VPC endpoints are accessed privately, without the internet, the subnets associated with the Lambda function should be private subnets.
Here are the steps I took to fix it:
Created a NAT Gateway inside the VPC and assigned an Elastic IP to it.
Created a new route table and pointed all traffic to the NAT Gateway created in step 1.
Attached the new route table to a couple of subnets (which made them private).
Attached only the private subnets to the Lambda function.
Other than this, the IAM role associated with the Lambda function should have the following two policies to access the SSM Parameter Store:
AmazonSSMReadOnlyAccess
AWSLambdaVPCAccessExecutionRole

AWS SAM CLI cannot access DynamoDB when function is invoked locally

I am building an AWS lambda with aws-sam-cli. In the function, I want to access a certain DynamoDB table.
My issue is that the function comes back with this error when I invoke it locally with the sam local invoke command: ResourceNotFoundException: Requested resource not found
const axios = require('axios')
const AWS = require('aws-sdk')

AWS.config.update({ region: <MY REGION> })
const dynamo = new AWS.DynamoDB.DocumentClient()

exports.handler = async (event) => {
  const scanParams = {
    TableName: 'example-table'
  }
  const scanResult = await dynamo.scan(scanParams).promise().catch((error) => {
    console.log(`Scan error: ${error}`)
    // => Scan error: ResourceNotFoundException: Requested resource not found
  })
  console.log(scanResult)
}
However, if I actually sam deploy it to AWS and test it in the actual Lambda console, it logs the table info correctly.
{
Items: <TABLE ITEMS>,
Count: 1,
ScannedCount: 1
}
Is this expected behavior? Or is there some additional configuration I need to do for it to work locally? My template.yaml looks like this:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 'Example SAM stack'
Resources:
  ExampleFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      Policies:
        - DynamoDBCrudPolicy:
            TableName: 'example-table'
I believe that when you invoke your Lambda locally, SAM does not recognise which profile to use for the remote resources, e.g. DynamoDB.
Try passing the credentials profile for your remote DynamoDB, for example:
sam local invoke --profile default
You can check the command documentation here: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-invoke.html
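Another option worth mentioning (an addition here, not part of the answer above) is to run DynamoDB Local and point the client at it only when running under sam local, which sets the AWS_SAM_LOCAL environment variable to "true" inside the container. A sketch of that selection logic; the port and the host.docker.internal hostname are assumptions about a typical DynamoDB Local setup:

```javascript
// Choose DynamoDB client options based on whether the function is running
// under `sam local invoke` (SAM sets AWS_SAM_LOCAL=true in the container).
function dynamoOptions(env) {
  if (env.AWS_SAM_LOCAL === 'true') {
    // Assumed local setup: DynamoDB Local on the host machine, port 8000.
    return { region: 'us-east-1', endpoint: 'http://host.docker.internal:8000' };
  }
  return { region: env.AWS_REGION || 'us-east-1' };
}

// The result would then be passed to `new AWS.DynamoDB.DocumentClient(...)`.
console.log(dynamoOptions(process.env));
```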

SQS Interface Endpoint in CDK

Working with the AWS CDK. I had to move my Lambda that writes to SQS inside a VPC. I added an interface endpoint to allow a direct connection from the VPC to SQS with:
props.vpc.addInterfaceEndpoint('sqs-gateway', {
  service: InterfaceVpcEndpointAwsService.SQS,
  subnets: {
    subnetType: SubnetType.PRIVATE,
  },
})
The Lambda is deployed to that same VPC (to the same private subnet by default), and I pass the QUEUE_URL as an environment variable as I did without the VPC:
const ingestLambda = new lambda.Function(this, 'TTPIngestFunction', {
  ...
  environment: {
    QUEUE_URL: queue.queueUrl,
  },
  vpc: props.vpc,
})
and the Lambda code sends messages simply with:
const sqs = new AWS.SQS({ region: process.env.AWS_REGION })

return sqs
  .sendMessageBatch({
    QueueUrl: process.env.QUEUE_URL as string,
    Entries: entries,
  })
  .promise()
Without the VPC, this sending works, but now the Lambda just times out when sending SQS messages. What am I missing here?
By default, interface VPC endpoints create a new security group, and traffic is not automatically allowed from the VPC CIDR.
You can do as follows if you want to allow traffic from your Lambda:
const sqsEndpoint = props.vpc.addInterfaceEndpoint('sqs-gateway', {
  service: InterfaceVpcEndpointAwsService.SQS,
});

sqsEndpoint.connections.allowDefaultPortFrom(ingestLambda);
Alternatively, you can allow all traffic:
sqsEndpoint.connections.allowDefaultPortFromAnyIpv4();
This default behavior is currently under discussion in https://github.com/aws/aws-cdk/pull/4938.

How to import existing VPC in aws cdk?

Hi, I am working with the AWS CDK. I am trying to get an existing non-default VPC. I tried the options below.
vpc = ec2.Vpc.from_lookup(self, id = "VPC", vpc_id='vpcid', vpc_name='vpc-dev')
This results in below error
[Error at /LocationCdkStack-cdkstack] Request has expired.
[Warning at /LocationCdkStack-cdkstack/TaskDef/mw-service] Proper policies need to be attached before pulling from ECR repository, or use 'fromEcrRepository'.
Found errors
Other method I tried is
vpc = ec2.Vpc.from_vpc_attributes(self, 'VPC', vpc_id='vpc-839227e7', availability_zones=['ap-southeast-2a','ap-southeast-2b','ap-southeast-2c'])
This results in
[Error at /LocationCdkStack-cdkstack] Request has expired.
[Warning at /LocationCdkStack-cdkstack/TaskDef/mw-service] Proper policies need to be attached before pulling from ECR repository, or use 'fromEcrRepository'.
Found errors
Other method I tried is
vpc = ec2.Vpc.from_lookup(self, id="VPC", is_default=True)  # This will get the default VPC, and it works
Can someone help me get a non-default VPC in the AWS CDK? Any help would be appreciated. Thanks.
Take a look at aws_cdk.aws_ec2 documentation and at CDK Runtime Context.
If your VPC is created outside your CDK app, you can use
Vpc.fromLookup(). The CDK CLI will search for the specified VPC in the
stack's region and account, and import the subnet configuration.
Looking up can be done by VPC ID, but more flexibly by searching for a
specific tag on the VPC.
Usage:
# Example automatically generated. See https://github.com/aws/jsii/issues/826
from aws_cdk.core import App, Stack, Environment
from aws_cdk import aws_ec2 as ec2

# Information from the environment is used to get context information,
# so it has to be defined for the stack
stack = MyStack(
    app, "MyStack", env=Environment(account="account_id", region="region")
)

# Retrieve VPC information
vpc = ec2.Vpc.from_lookup(stack, "VPC",
    # This imports the default VPC, but you can also
    # specify a 'vpc_name' or 'tags'.
    is_default=True
)
Update with a relevant example:
vpc = ec2.Vpc.from_lookup(stack, "VPC",
    vpc_id = VPC_ID
)
Update with typescript example:
import ec2 = require('@aws-cdk/aws-ec2');

const getExistingVpc = ec2.Vpc.fromLookup(this, 'ImportVPC', { isDefault: true });
More info here.
For AWS CDK v2 or v1 (latest), you can use:
// You can use either vpcId OR vpcName to fetch the desired VPC
const getExistingVpc = ec2.Vpc.fromLookup(this, 'ImportVPC', {
  vpcId: "VPC_ID",
  vpcName: "VPC_NAME"
});
Here is a simple example:
// Get VPC info from the AWS account; FYI we are not rebuilding, we are referencing
const DefaultVpc = Vpc.fromVpcAttributes(this, 'vpcdev', {
  vpcId: 'vpc-d0e0000b0',
  availabilityZones: core.Fn.getAzs(),
  privateSubnetIds: ['subnet-00a0de00'],
  publicSubnetIds: ['subnet-00a0de00'],
});

const yourService = new lambda.Function(this, 'SomeName', {
  code: lambda.Code.fromAsset("lambda"),
  handler: 'handlers.your_handler',
  role: lambdaExecutionRole,
  securityGroup: lambdaSecurityGroup,
  vpc: DefaultVpc,
  runtime: lambda.Runtime.PYTHON_3_7,
  timeout: Duration.minutes(2),
});
We can do it easily using ec2.Vpc.fromLookup.
https://kuchbhilearning.blogspot.com/2022/10/httpskuchbhilearning.blogspot.comimport-existing-vpc-in-aws-cdk.html
The link above describes how to use the method.