Using AWSAppSyncClient inside an ECS container (Fargate) with AWS_IAM auth mode returns 403 UnrecognizedClientException

We have the following code in an ECS Fargate container, however it constantly returns an error.
When running identical code in a Lambda function with IAM authentication and the correct role setup, I am able to run it successfully.
Error
Network error: Response not successful: Received status code 403
UnrecognizedClientException
The security token included in the request is invalid.
Code
import 'isomorphic-fetch';
import AWSAppSyncClient, { AUTH_TYPE } from 'aws-appsync';
import AWS from 'aws-sdk';

// Setup variables for client
const graphqlEndpoint = process.env.GRAPHQL_ENDPOINT;
const awsRegion = process.env.AWS_DEFAULT_REGION;

const client = new AWSAppSyncClient({
  url: graphqlEndpoint,
  region: awsRegion,
  auth: {
    type: AUTH_TYPE.AWS_IAM,
    credentials: AWS.config.credentials,
  },
  disableOffline: true,
});
CloudFormation
TaskDefinition:
  Type: "AWS::ECS::TaskDefinition"
  Properties:
    ContainerDefinitions:
      - # Omitted
    Cpu: !FindInMap [CpuMap, !Ref Cpu, Cpu]
    ExecutionRoleArn: !GetAtt "TaskExecutionRole.Arn"
    Family: !Ref "AWS::StackName"
    Memory: !FindInMap [MemoryMap, !Ref Memory, Memory]
    NetworkMode: awsvpc
    RequiresCompatibilities: [FARGATE]
    TaskRoleArn: !GetAtt "TaskRole.Arn"
TaskRole:
  Type: "AWS::IAM::Role"
  Properties:
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: "ecs-tasks.amazonaws.com"
          Action: "sts:AssumeRole"
    ManagedPolicyArns:
      - "arn:aws:iam::aws:policy/AWSAppSyncInvokeFullAccess" # Invoke access for AppSync

I eventually discovered that this was a result of AWSAppSyncClient not being able to load the credentials in ECS correctly.
As per the AWS docs on IAM roles in ECS, credentials are loaded differently than in other AWS services. Instead of credentials being populated in environment variables, the Amazon ECS agent populates the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI variable with a path to the credentials. I was able to get AWSAppSyncClient working with IAM authentication in an ECS container by first loading the ECS credentials manually and passing them to AWSAppSyncClient. The below example worked.
// AWSAppSyncClient needs to be provided ECS IAM credentials explicitly
const credentials = new AWS.ECSCredentials({
  httpOptions: { timeout: 50000 },
  maxRetries: 10,
});
AWS.config.credentials = credentials;

// Setup AppSync config
const AppSyncConfig = {
  url: graphqlEndpoint,
  region: awsRegion,
  auth: {
    type: AUTH_TYPE.AWS_IAM,
    credentials: AWS.config.credentials,
  },
  disableOffline: true,
};
const client = new AWSAppSyncClient(AppSyncConfig);
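The difference between the two environments can be sketched as a small helper that mimics, at a very high level, how a credential source gets picked. This is illustrative only; the function name and return labels are not part of the AWS SDK:

```javascript
// Illustrative only: in ECS, the agent sets AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
// instead of the usual AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY pair that Lambda provides.
function credentialSource(env) {
  if (env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI) {
    return 'ecs-container-endpoint'; // fetch via AWS.ECSCredentials
  }
  if (env.AWS_ACCESS_KEY_ID && env.AWS_SECRET_ACCESS_KEY) {
    return 'environment-variables'; // what Lambda gives you out of the box
  }
  return 'default-chain'; // shared config files, instance metadata, etc.
}

console.log(credentialSource({ AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: '/v2/credentials/abc' }));
console.log(credentialSource({ AWS_ACCESS_KEY_ID: 'AKIA...', AWS_SECRET_ACCESS_KEY: 's3cr3t' }));
```

In a Lambda function the environment-variables branch applies, which is why the identical code worked there; in Fargate only the container credentials endpoint is populated, so the credentials must be fetched from it explicitly (for example via AWS.ECSCredentials, as in the fix above) before handing them to AWSAppSyncClient.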

Related

AWS Cognito User Pool requires manual save in console after CF template deployment

When testing my TOKEN endpoint in Postman, I'm getting the error HTTP 400 - "invalid_grant".
In Postman, I've configured the Authorization header (with Basic clientId:secret) and the Content-Type header. In the URL-encoded form, I've set grant_type = client_credentials. All of these settings match the instructions here:
https://docs.aws.amazon.com/cognito/latest/developerguide/token-endpoint.html
After manual inspection, my CloudFormation template deploys all the settings correctly.
If I go into the Cognito settings, select App Clients from the navigation, and then "Save app client changes" without making any changes, I no longer get the error in Postman and I can retrieve a valid access token from then on. It's almost as if the changes aren't 'active' in AWS unless I re-save in the AWS Console for whatever reason.
Is something not fully committed on the AWS backend side unless I manually hit save in the console?
Again: this template, these settings, and the Postman test do work, but only after I go into Cognito, make an edit, save, undo my edit, and save again.
Here's my CF template
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: Integration for webSvc1 and webSvc2
Parameters:
  StageName:
    ...
Globals:
  Function:
    Timeout: 20
  Api:
    OpenApiVersion: 3.0.1
Resources:
  UserPool:
    Type: 'AWS::Cognito::UserPool'
    Properties:
      UserPoolName: !Sub ${CognitoUserPoolName}-${EnvironmentName}
  UserPoolResourceServer:
    Type: 'AWS::Cognito::UserPoolResourceServer'
    DependsOn:
      - UserPool
    Properties:
      Identifier: !Sub ${CognitoUserPoolName}-${EnvironmentName}
      Name: api-resource-server
      Scopes:
        - ScopeName: "api.read"
          ScopeDescription: "Read access"
      UserPoolId: !Ref UserPool
  UserPoolDomain:
    Type: AWS::Cognito::UserPoolDomain
    DependsOn:
      - UserPool
      - UserPoolResourceServer
    Properties:
      UserPoolId: !Ref UserPool
      Domain: !Sub id-${EnvironmentName}
  # Creates a User Pool Client to be used by the identity pool
  UserPoolClient:
    Type: 'AWS::Cognito::UserPoolClient'
    DependsOn:
      - UserPool
      - UserPoolResourceServer
    Properties:
      ClientName: !Sub ${CognitoUserPoolName}-client-${EnvironmentName}
      GenerateSecret: true
      UserPoolId: !Ref UserPool
      SupportedIdentityProviders:
        - COGNITO
      AllowedOAuthFlows:
        - client_credentials
      AllowedOAuthScopes:
        - !Sub ${CognitoUserPoolName}-${EnvironmentName}/api.read
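One property worth checking in this situation, since it is not in the template above: `AllowedOAuthFlowsUserPoolClient` must be `true` on an `AWS::Cognito::UserPoolClient` for its OAuth flow and scope settings to take effect, and re-saving the app client in the console can set it implicitly. A sketch of the client with that flag (otherwise unchanged from the template above):

```yaml
UserPoolClient:
  Type: 'AWS::Cognito::UserPoolClient'
  Properties:
    ClientName: !Sub ${CognitoUserPoolName}-client-${EnvironmentName}
    GenerateSecret: true
    UserPoolId: !Ref UserPool
    # Without this flag, AllowedOAuthFlows/AllowedOAuthScopes may not be active
    AllowedOAuthFlowsUserPoolClient: true
    SupportedIdentityProviders:
      - COGNITO
    AllowedOAuthFlows:
      - client_credentials
    AllowedOAuthScopes:
      - !Sub ${CognitoUserPoolName}-${EnvironmentName}/api.read
```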

Give an AWS Lambda the necessary permissions to create and delete alarms

How can I give an AWS Lambda function, in a CloudFormation template, the necessary permissions to manage alarms (create / delete them)? I'm struggling to understand the policies and how they work.
Role:
  Type: 'AWS::IAM::Role'
  Properties:
    AssumeRolePolicyDocument:
      {
        'Version': '2012-10-17',
        'Statement':
          [
            {
              'Effect': 'Allow',
              'Principal': { 'Service': ['lambda.amazonaws.com'] },
              'Action': ['sts:AssumeRole'],
            },
          ],
      }
    ManagedPolicyArns:
      - 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
      - 'arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess'
      - 'arn:aws:iam::aws:policy/AWSLambdaReadOnlyAccess'
      - 'arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole'
Lambda:
  Type: 'AWS::Lambda::Function'
  Properties:
    PackageType: Zip
    Handler: index.handler
    Runtime: nodejs12.x
    MemorySize: 512
    Timeout: 30
    Role:
      Fn::GetAtt:
        - Role
        - Arn
    Code:
      ZipFile: |
        const AWS = require('aws-sdk')
        AWS.config.update({region: 'us-east-2'});
        const cw = new AWS.CloudWatch({apiVersion: '2010-08-01'});
        //
You could attach the CloudWatchFullAccess managed policy (arn:aws:iam::aws:policy/CloudWatchFullAccess), but that probably grants more access than you need.
If you are okay with writing your own policy, you could grant:
cloudwatch:PutMetricAlarm
cloudwatch:DeleteAlarms
For details, see Actions, resources, and condition keys for Amazon CloudWatch in the Service Authorization Reference.
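If you go the custom-policy route, one way to wire it up in the template above is an inline policy attached to the role. A minimal sketch (the policy name is arbitrary, and `cloudwatch:DescribeAlarms` is added here on the assumption the function also needs to look alarms up):

```yaml
AlarmPolicy:
  Type: 'AWS::IAM::Policy'
  Properties:
    PolicyName: ManageAlarms
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action:
            - 'cloudwatch:PutMetricAlarm'
            - 'cloudwatch:DeleteAlarms'
            - 'cloudwatch:DescribeAlarms'
          Resource: '*'
    Roles:
      - !Ref Role
```

Alternatively, the same statement can go under `Policies:` inside the Role resource itself.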

How to access AWS CloudFront connected to an S3 bucket via the Bearer token of a specific user (JWT custom auth)

I am using the Serverless Framework to deploy a serverless stack to AWS. My stack consists of some Lambda functions, DynamoDB tables, and API Gateway.
I protected the API Gateway using what's called a Lambda authorizer. Also, I have a custom standalone self-hosted auth service that can generate tokens.
So the scenario is that the user can request a token from this service (it's IdentityServer4 hosted on Azure), then the user can send a request to the API Gateway with the bearer token, and the API Gateway will ask the Lambda authorizer to generate IAM policies if the token is valid. All of that works as expected.
Here is an example of the Lambda authorizer definition in my serverless.yml and how I use it to protect other API Gateway endpoints (you can see the addUserInfo function's API is protected using the custom authorizer):
functions:
  # =================================================================
  # API Gateway event handlers
  # =================================================================
  auth:
    handler: api/auth/mda-auth-server.handler
  addUserInfo:
    handler: api/user/create-replace-user-info.handler
    description: Create or replace user section
    events:
      - http:
          path: user
          method: post
          authorizer:
            name: auth
            resultTtlInSeconds: ${self:custom.resultTtlInSeconds}
            identitySource: method.request.header.Authorization
            type: token
          cors:
            origin: '*'
            headers: ${self:custom.allowedHeaders}
Now I wanted to extend my APIs to allow the user to add images, so I followed this approach: the user initiates what's called a signed S3 URL, and an image can be put into my bucket using that signed URL.
Also, the S3 bucket is not publicly accessible; instead, it's connected to a CloudFront distribution. Here's what I'm missing: I can't understand how I can protect my images. Is there any way to protect the images in the CloudFront CDN with my custom authentication service, so that only a user with a valid token can access those resources? How can I protect my CDN (CloudFront) using my custom authentication service, and configure that using the Serverless Framework?
This is a bit tricky, and it took me around a day to get it all set up.
First, we have options here:
Instead of authentication, we can sign the URL and return a signed CloudFront URL or a signed S3 URL. That's pretty straightforward, but obviously not what I was looking for.
The second option is to use Lambda@Edge to authorize the CloudFront requests, and that's what I followed.
So I ended up creating a separate stack to handle all the S3, CloudFront, and Lambda@Edge pieces. CloudFront is deployed at the edge, which means the region doesn't matter, but a Lambda@Edge function must be deployed in the main AWS region (us-east-1, N. Virginia), so I ended up creating one stack for all of them.
First, I have the below code in my auth-service.js (it's just some helpers that let me verify my custom JWT):
import * as jwtDecode from 'jwt-decode';
import * as util from 'util';
import * as jwt from 'jsonwebtoken';
import * as jwksClient from 'jwks-rsa';

export function getToken(bearerToken) {
  if (bearerToken && bearerToken.startsWith('Bearer ')) {
    return bearerToken.replace(/^Bearer\s/, '');
  }
  throw new Error('Invalid Bearer Token.');
}

export function getDecodedHeader(token) {
  return jwtDecode(token, { header: true });
}

export async function getSigningKey(decodedJwtTokenHeader, jwksclient) {
  const key = await util.promisify(jwksclient.getSigningKey)(decodedJwtTokenHeader.kid);
  const signingKey = key.publicKey || key.rsaPublicKey;
  if (!signingKey) {
    throw new Error('could not get signing key');
  }
  return signingKey;
}

export async function verifyToken(token, signingKey) {
  return await jwt.verify(token, signingKey);
}

export function getJwksClient(jwksEndpoint) {
  return jwksClient({
    cache: true,
    rateLimit: true,
    jwksRequestsPerMinute: 10,
    jwksUri: jwksEndpoint
  });
}
Then here is my serverless.yml file:
service: mda-app-uploads

plugins:
  - serverless-offline
  - serverless-pseudo-parameters
  - serverless-iam-roles-per-function
  - serverless-bundle

custom:
  stage: ${opt:stage, self:provider.stage}
  resourcesBucketName: ${self:custom.stage}-mda-resources-bucket
  resourcesStages:
    prod: prod
    dev: dev
  resourcesStage: ${self:custom.resourcesStages.${self:custom.stage}, self:custom.resourcesStages.dev}

provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}
  region: us-east-1
  versionFunctions: true

functions:
  oauthEdge:
    handler: src/mda-edge-auth.handler
    role: LambdaEdgeFunctionRole
    memorySize: 128
    timeout: 5

resources:
  - ${file(resources/s3-cloudfront.yml)}
Quick points here:
The us-east-1 region is important here.
It's a bit tricky and not practical to create a Lambda@Edge function with the serverless framework alone, so I used the framework just to define the function, and then added all the needed bits inside the CloudFormation template resources/s3-cloudfront.yml.
Then here is the content of resources/s3-cloudfront.yml:
Resources:
  AuthEdgeLambdaVersion:
    Type: Custom::LatestLambdaVersion
    Properties:
      ServiceToken: !GetAtt PublishLambdaVersion.Arn
      FunctionName: !Ref OauthEdgeLambdaFunction
      Nonce: "Test"
  PublishLambdaVersion:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      Role: !GetAtt PublishLambdaVersionRole.Arn
      Code:
        ZipFile: |
          const {Lambda} = require('aws-sdk')
          const {send, SUCCESS, FAILED} = require('cfn-response')
          const lambda = new Lambda()
          exports.handler = (event, context) => {
            const {RequestType, ResourceProperties: {FunctionName}} = event
            if (RequestType == 'Delete') return send(event, context, SUCCESS)
            lambda.publishVersion({FunctionName}, (err, {FunctionArn}) => {
              err
                ? send(event, context, FAILED, err)
                : send(event, context, SUCCESS, {FunctionArn})
            })
          }
  PublishLambdaVersionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: PublishVersion
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: lambda:PublishVersion
                Resource: '*'
  LambdaEdgeFunctionRole:
    Type: "AWS::IAM::Role"
    Properties:
      Path: "/"
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: "AllowLambdaServiceToAssumeRole"
            Effect: "Allow"
            Action:
              - "sts:AssumeRole"
            Principal:
              Service:
                - "lambda.amazonaws.com"
                - "edgelambda.amazonaws.com"
  LambdaEdgeFunctionPolicy:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyName: MainEdgePolicy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          Effect: "Allow"
          Action:
            - "lambda:GetFunction"
            - "lambda:GetFunctionConfiguration"
          Resource: !GetAtt AuthEdgeLambdaVersion.FunctionArn
      Roles:
        - !Ref LambdaEdgeFunctionRole
  ResourcesBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: ${self:custom.resourcesBucketName}
      AccessControl: Private
      CorsConfiguration:
        CorsRules:
          - AllowedHeaders: ['*']
            AllowedMethods: ['PUT']
            AllowedOrigins: ['*']
  ResourcesBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket:
        Ref: ResourcesBucket
      PolicyDocument:
        Statement:
          # Read permission for CloudFront
          - Action: s3:GetObject
            Effect: "Allow"
            Resource:
              Fn::Join:
                - ""
                - - "arn:aws:s3:::"
                  - Ref: "ResourcesBucket"
                  - "/*"
            Principal:
              CanonicalUser: !GetAtt CloudFrontOriginAccessIdentity.S3CanonicalUserId
          # Write permission for the Lambda@Edge role
          - Action: s3:PutObject
            Effect: "Allow"
            Resource:
              Fn::Join:
                - ""
                - - "arn:aws:s3:::"
                  - Ref: "ResourcesBucket"
                  - "/*"
            Principal:
              AWS: !GetAtt LambdaEdgeFunctionRole.Arn
          # Read permission for the Lambda@Edge role
          - Action: s3:GetObject
            Effect: "Allow"
            Resource:
              Fn::Join:
                - ""
                - - "arn:aws:s3:::"
                  - Ref: "ResourcesBucket"
                  - "/*"
            Principal:
              AWS: !GetAtt LambdaEdgeFunctionRole.Arn
  CloudFrontOriginAccessIdentity:
    Type: AWS::CloudFront::CloudFrontOriginAccessIdentity
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment:
          Fn::Join:
            - ""
            - - "Identity for accessing CloudFront from S3 within stack "
              - Ref: "AWS::StackName"
  # CloudFront distro backed by ResourcesBucket
  ResourcesCdnDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Origins:
          # S3 origin for private resources
          - DomainName: !Sub '${self:custom.resourcesBucketName}.s3.amazonaws.com'
            Id: S3OriginPrivate
            S3OriginConfig:
              OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/#{CloudFrontOriginAccessIdentity}'
          # S3 origin for public resources
          - DomainName: !Sub '${self:custom.resourcesBucketName}.s3.amazonaws.com'
            Id: S3OriginPublic
            S3OriginConfig:
              OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/#{CloudFrontOriginAccessIdentity}'
        Enabled: true
        Comment: CDN for public and private static content.
        DefaultRootObject: index.html
        HttpVersion: http2
        DefaultCacheBehavior:
          AllowedMethods:
            - DELETE
            - GET
            - HEAD
            - OPTIONS
            - PATCH
            - POST
            - PUT
          Compress: true
          TargetOriginId: S3OriginPublic
          ForwardedValues:
            QueryString: false
            Headers:
              - Origin
            Cookies:
              Forward: none
          ViewerProtocolPolicy: redirect-to-https
        CacheBehaviors:
          - PathPattern: 'private/*'
            TargetOriginId: S3OriginPrivate
            AllowedMethods:
              - DELETE
              - GET
              - HEAD
              - OPTIONS
              - PATCH
              - POST
              - PUT
            Compress: true
            LambdaFunctionAssociations:
              - EventType: viewer-request
                LambdaFunctionARN: !GetAtt AuthEdgeLambdaVersion.FunctionArn
            ForwardedValues:
              QueryString: false
              Headers:
                - Origin
              Cookies:
                Forward: none
            ViewerProtocolPolicy: redirect-to-https
          - PathPattern: 'public/*'
            TargetOriginId: S3OriginPublic
            AllowedMethods:
              - DELETE
              - GET
              - HEAD
              - OPTIONS
              - PATCH
              - POST
              - PUT
            Compress: true
            ForwardedValues:
              QueryString: false
              Headers:
                - Origin
              Cookies:
                Forward: none
            ViewerProtocolPolicy: redirect-to-https
        PriceClass: PriceClass_200
Some quick points related to this file:
Here I created the S3 bucket that will contain all my private and public resources.
This bucket is private and not publicly accessible; its bucket policy grants access only to the CloudFront origin access identity and the Lambda@Edge role.
I decided to create a CloudFront distribution with two origins: a public one pointed at the S3 bucket's public folder and a private one pointed at its private folder, and I configured the behavior of the private origin to use my Lambda@Edge function for authentication through the viewer-request event type.
You will also find code that creates the function version: a helper function called PublishLambdaVersion with its role, which gives the Lambda@Edge function the correct permissions while deploying.
Finally, here is the actual code for the Lambda@Edge function used for CDN auth:
import { getJwksClient, getToken, getDecodedHeader, getSigningKey, verifyToken } from '../../../../libs/services/auth-service';
import config from '../../../../config';

const response401 = {
  status: '401',
  statusDescription: 'Unauthorized'
};

exports.handler = async (event) => {
  try {
    const cfrequest = event.Records[0].cf.request;
    const headers = cfrequest.headers;
    if (!headers.authorization) {
      console.log('no auth header');
      return response401;
    }
    // CloudFront passes each header as an array of { key, value } pairs
    const jwtValue = getToken(headers.authorization[0].value);
    const client = getJwksClient(`https://${config.authDomain}/.well-known/openid-configuration/jwks`);
    const decodedJwtHeader = getDecodedHeader(jwtValue);
    if (decodedJwtHeader) {
      const signingKey = await getSigningKey(decodedJwtHeader, client);
      const verifiedToken = await verifyToken(jwtValue, signingKey);
      if (verifiedToken) {
        // Token is valid: let the request through to the origin
        return cfrequest;
      }
    } else {
      throw Error('Unauthorized');
    }
  } catch (err) {
    console.log(err);
    return response401;
  }
};
In case you are interested, I am using IdentityServer4, hosted as a Docker image on Azure, as the custom authorizer.
So the full scenario now is that we have an S3 bucket that is totally private and only accessible through the CloudFront origins. If a request is served through the public origin, no authentication is needed; if it's served through the private origin, the Lambda@Edge function is triggered to authenticate it and validate the bearer token.
I was totally new to the AWS stack before going deep into all of this, but AWS is quite approachable, so I ended up with everything configured the way I wanted. Please let me know if something is not clear or if there are any questions.

Cannot fix "The provided execution role does not have permissions to call CreateNetworkInterface on EC2"

I am trying to deploy with a VPC, and this is my serverless.yaml:
vpcSettings: &vpcSettings
  vpc: ${self:custom.allVpcSettings.${self:provider.stage}.vpc}

provider:
  name: aws
  runtime: nodejs10.x
  stage: ${opt:stage, 'local'}
  region: us-west-1
  memorySize: 256
  timeout: 30
  deploymentPrefix: fs-sls-${self:provider.stage}-deploy
  deploymentBucket: fs-serverless-deployment
  variables: ${file(.env.${opt:stage, self:provider.stage}.json)}
  environment:
    NODE_ENV: ${self:provider.variables.NODE_ENV}

functions:
  ping:
    handler: src/handler.ping
    description: Let us know if the service is up and running
    events:
      - http:
          path: ping
          method: get
          cors: true
  graphql:
    handler: src/handler.graphqlHandler
    <<: *vpcSettings
    description: One function where all GQL requests come
    memorySize: 1024
    events:
      - http:
          path: graphql
          method: post
          cors: true
      - http:
          path: graphql
          method: get
          cors: true

plugins:
  - serverless-offline

custom:
  serverless-offline:
    port: 6000
  allVpcSettings:
    local:
      vpc: 'This is a dummy value that should be ignored'
    dev:
      vpc:
        securityGroupIds:
          - sg-xxxxxxxxxxxxxxx
        subnetIds:
          - subnet-xxxxxxxxxxxxxxx
          - subnet-xxxxxxxxxxxxxxx
    prod:
      vpc:
        securityGroupIds:
          - sg-xxxxxxxxxxxxxxx
        subnetIds:
          - subnet-xxxxxxxxxxxxxxx
          - subnet-xxxxxxxxxxxxxxx
It fails with the following error
Serverless Error ---------------------------------------
The provided execution role does not have permissions to call CreateNetworkInterface on EC2
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: darwin
Node Version: 10.16.0
Framework Version: 1.52.0
Plugin Version: 2.0.0
SDK Version: 2.1.1
The user that I created for this purpose has AdministratorAccess as well as AWSLambdaVPCAccessExecutionRole in its permissions. What else is expected here?
So I fixed it. The error means the role used by the Lambda function doesn't have the required permission, so it boils down to granting that permission to the role.
First, confirm which role the function uses; you can find it in the Lambda console under the function's configuration.
Then go to IAM -> Roles, search for that role name, and attach AWSLambdaVPCAccessExecutionRole to the selected role.
This should give it the required permission.
Now try deploying with Serverless again and it should work.
Although the user that you've created to deploy this lambda function has Administrator access, the lambda function itself needs networking permissions if you're deploying it into a VPC.
Try adding these permissions in the provider block of your serverless.yml template:
- Effect: Allow
  Action:
    - ec2:DescribeNetworkInterfaces
    - ec2:CreateNetworkInterface
    - ec2:DeleteNetworkInterface
    - ec2:DescribeInstances
    - ec2:AttachNetworkInterface
  Resource:
    - '*'
If that works, you'll want to deploy a more limited permission structure for your production environment.
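For the Serverless Framework v1.x shown here, those statements belong under `iamRoleStatements` in the provider block. A minimal sketch, to be merged with the existing provider settings:

```yaml
provider:
  name: aws
  # ...existing provider settings...
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ec2:DescribeNetworkInterfaces
        - ec2:CreateNetworkInterface
        - ec2:DeleteNetworkInterface
        - ec2:DescribeInstances
        - ec2:AttachNetworkInterface
      Resource: '*'
```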

AWS Lambda-created ENI not deleted during stack deletion

CloudFormation creates a Lambda function. When the function is executed, an ENI is provisioned automatically by Lambda. The ENI seems to be left in existence after function execution to speed up subsequent executions. CloudFormation then deletes the Lambda function, but the ENI remains behind. When attempting to delete the VPC CloudFormation stack, stack deletion fails because the ENI is still using a security group and subnet.
In my Lambda role the delete permissions are there:
"Effect": "Allow",
"Action": [
  "ec2:CreateNetworkInterface",
  "ec2:DeleteNetworkInterface",
  "ec2:DescribeNetworkInterfaces"
],
"Resource": "*"
I am using a custom resource to run the Lambda from the CloudFormation template, so the Lambda is invoked on both stack creation and stack deletion, and the ENI is used in both. Now how do I handle the ENI deletion?
There is a known issue when using Lambda Functions in a VPC, as documented in Configuring a Lambda Function to Access Resources in an Amazon VPC:
There is a delay between the time your Lambda function executes and ENI deletion. If you do delete the role immediately after function execution, you are responsible for deleting the ENIs.
The documentation doesn't specify exactly how long this "delay" will be, but a forum post by Richard@AWS suggests it can last up to 6 hours(!). (In my observations using AWS CloudTrail, the delay between Lambda execution and ENI deletion was around one hour.)
Until AWS addresses this issue further, you can work around it by detaching and deleting the leftover ENIs between deleting the Lambda function and deleting the associated security group(s) and subnet(s). This is how Terraform currently handles the issue in its framework.
You can do this manually by separating the VPC/Subnet/SG layer and the Lambda-function layer into two different CloudFormation Stacks, or you can automate it by implementing a Custom Resource to delete the ENIs using the AWS SDK.
Here's a complete working example that creates a VPC-Lambda Custom Resource, cleaning up its ENIs when deleted using the VPCDestroyENI Custom Resource:
Description: Creates a VPC-Lambda Custom Resource, cleaning up ENIs when deleted.
Parameters:
  VPCId:
    Description: VPC Id
    Type: AWS::EC2::VPC::Id
  SubnetId:
    Description: Private Subnet Id
    Type: AWS::EC2::Subnet::Id
Resources:
  SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Lambda VPC security group
      VpcId: !Ref VPCId
  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal: {Service: [lambda.amazonaws.com]}
            Action: ['sts:AssumeRole']
      Path: "/"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
      Policies:
        - PolicyName: DetachNetworkInterface
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: ['ec2:DetachNetworkInterface']
                Resource: '*'
  AppendTest:
    Type: Custom::Split
    DependsOn: VPCDestroyENI
    Properties:
      ServiceToken: !GetAtt AppendItemToListFunction.Arn
      List: [1, 2, 3]
      AppendedItem: 4
  AppendItemToListFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Role: !GetAtt LambdaExecutionRole.Arn
      Code:
        ZipFile: !Sub |
          var response = require('cfn-response');
          exports.handler = function(event, context) {
            var responseData = {Value: event.ResourceProperties.List};
            responseData.Value.push(event.ResourceProperties.AppendedItem);
            response.send(event, context, response.SUCCESS, responseData);
          };
      Timeout: 30
      Runtime: nodejs4.3
      VpcConfig:
        SecurityGroupIds: [!Ref SecurityGroup]
        SubnetIds: [!Ref SubnetId]
  VPCDestroyENIFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Role: !GetAtt LambdaExecutionRole.Arn
      Code:
        ZipFile: !Sub |
          var response = require('cfn-response');
          var AWS = require('aws-sdk');
          exports.handler = function(event, context) {
            console.log("REQUEST RECEIVED:\n", JSON.stringify(event));
            if (event.RequestType != 'Delete') {
              response.send(event, context, response.SUCCESS, {});
              return;
            }
            var ec2 = new AWS.EC2();
            var params = {
              Filters: [
                {
                  Name: 'group-id',
                  Values: event.ResourceProperties.SecurityGroups
                },
                {
                  Name: 'description',
                  Values: ['AWS Lambda VPC ENI: *']
                }
              ]
            };
            console.log("Deleting attachments!");
            // Detach all network-interface attachments
            ec2.describeNetworkInterfaces(params).promise().then(function(data) {
              console.log("Got Interfaces:\n", JSON.stringify(data));
              return Promise.all(data.NetworkInterfaces.map(function(networkInterface) {
                var networkInterfaceId = networkInterface.NetworkInterfaceId;
                var attachmentId = networkInterface.Attachment.AttachmentId;
                return ec2.detachNetworkInterface({AttachmentId: attachmentId}).promise().then(function(data) {
                  return ec2.waitFor('networkInterfaceAvailable', {NetworkInterfaceIds: [networkInterfaceId]}).promise();
                }).then(function(data) {
                  console.log("Detached Interface, deleting:\n", networkInterfaceId);
                  return ec2.deleteNetworkInterface({NetworkInterfaceId: networkInterfaceId}).promise();
                });
              }));
            }).then(function(data) {
              console.log("Success!");
              response.send(event, context, response.SUCCESS, {});
            }).catch(function(err) {
              console.log("Failure:\n", JSON.stringify(err));
              response.send(event, context, response.FAILED, {});
            });
          };
      Timeout: 300
      Runtime: nodejs4.3
  VPCDestroyENI:
    Type: Custom::VPCDestroyENI
    Properties:
      ServiceToken: !GetAtt VPCDestroyENIFunction.Arn
      SecurityGroups: [!Ref SecurityGroup]
Outputs:
  Output:
    Description: output
    Value: !Join [",", !GetAtt AppendTest.Value]
Note: To create the VPC and Private Subnet required in the above example, you can use the AWS Quick Start Amazon VPC Architecture template.