I'm trying to modify this AWS-provided CDK example to instead use an existing bucket. Additional documentation indicates that importing existing resources is supported. So far I am unable to add an event notification to the existing bucket using CDK.
Here is my modified version of the example:
class S3TriggerStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # create lambda function
        function = _lambda.Function(self, "lambda_function",
                                    runtime=_lambda.Runtime.PYTHON_3_7,
                                    handler="lambda-handler.main",
                                    code=_lambda.Code.asset("./lambda"))

        # **MODIFIED TO GET EXISTING BUCKET**
        #s3 = _s3.Bucket(self, "s3bucket")
        s3 = _s3.Bucket.from_bucket_arn(self, 's3_bucket',
                                        bucket_arn='arn:<my_region>:::<my_bucket>')

        # create s3 notification for lambda function
        notification = aws_s3_notifications.LambdaDestination(function)

        # assign notification for the s3 event type (ex: OBJECT_CREATED)
        s3.add_event_notification(_s3.EventType.OBJECT_CREATED, notification)
This results in the following error when trying to add_event_notification:
AttributeError: '_IBucketProxy' object has no attribute 'add_event_notification'
The from_bucket_arn function returns an IBucket, and the add_event_notification function is a method of the Bucket class, but I can't seem to find any other way to do this. Maybe it's not supported. Any help would be appreciated.
I managed to get this working with a custom resource. It's TypeScript, but it should be easily translated to Python:
const uploadBucket = s3.Bucket.fromBucketName(this, 'BucketByName', 'existing-bucket');

const fn = new lambda.Function(this, 'MyFunction', {
    runtime: lambda.Runtime.NODEJS_10_X,
    handler: 'index.handler',
    code: lambda.Code.fromAsset(path.join(__dirname, 'lambda-handler'))
});

const rsrc = new AwsCustomResource(this, 'S3NotificationResource', {
    onCreate: {
        service: 'S3',
        action: 'putBucketNotificationConfiguration',
        parameters: {
            // This bucket must be in the same region you are deploying to
            Bucket: uploadBucket.bucketName,
            NotificationConfiguration: {
                LambdaFunctionConfigurations: [
                    {
                        Events: ['s3:ObjectCreated:*'],
                        LambdaFunctionArn: fn.functionArn,
                        Filter: {
                            Key: {
                                FilterRules: [{ Name: 'suffix', Value: 'csv' }]
                            }
                        }
                    }
                ]
            }
        },
        // Always update physical ID so function gets executed
        physicalResourceId: 'S3NotifCustomResource' + Date.now().toString()
    }
});

fn.addPermission('AllowS3Invocation', {
    action: 'lambda:InvokeFunction',
    principal: new iam.ServicePrincipal('s3.amazonaws.com'),
    sourceArn: uploadBucket.bucketArn
});

rsrc.node.addDependency(fn.permissionsNode.findChild('AllowS3Invocation'));
This is basically a CDK version of the CloudFormation template laid out in this example. See the docs on the AWS SDK for the possible NotificationConfiguration parameters.
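For reference, the raw SDK call behind that custom resource looks roughly like this (a sketch only, using the AWS SDK for JavaScript v2; the bucket name and ARNs are placeholders, and the QueueConfigurations entry is included just to show that other destination types are accepted as well):

import * as AWS from 'aws-sdk';

const s3Client = new AWS.S3();

// Sketch of the putBucketNotificationConfiguration call the custom resource issues.
// Besides LambdaFunctionConfigurations, the API also accepts QueueConfigurations and
// TopicConfigurations; all values below are placeholders.
async function putNotificationConfiguration(): Promise<void> {
    await s3Client.putBucketNotificationConfiguration({
        Bucket: 'existing-bucket',
        NotificationConfiguration: {
            LambdaFunctionConfigurations: [
                { Events: ['s3:ObjectCreated:*'], LambdaFunctionArn: 'arn:aws:lambda:...' },
            ],
            QueueConfigurations: [
                { Events: ['s3:ObjectRemoved:*'], QueueArn: 'arn:aws:sqs:...' },
            ],
        },
    }).promise();
}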
Since June 2021 there is a nicer way to solve this problem. Since approximately version 1.110.0 of the CDK, it is possible to use S3 notifications with TypeScript code:
Example:
const s3Bucket = s3.Bucket.fromBucketName(this, 'bucketId', 'bucketName');

s3Bucket.addEventNotification(s3.EventType.OBJECT_CREATED, new s3n.LambdaDestination(lambdaFunction), {
    prefix: 'example/file.txt'
});
CDK Documentation:
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-s3-notifications-readme.html
Pull Request:
https://github.com/aws/aws-cdk/pull/15158
Sorry I can't comment on James Irwin's excellent answer above due to low reputation, but I took it and made it into a Construct.
The comment about "Access Denied" took me some time to figure out too, but the crux of it is that the function is S3:putBucketNotificationConfiguration, but the IAM Policy action to allow is S3:PutBucketNotification.
Here's the [code for the construct](https://gist.github.com/archisgore/0f098ae1d7d19fddc13d2f5a68f606ab):
import * as cr from '@aws-cdk/custom-resources';
import * as logs from '@aws-cdk/aws-logs';
import * as s3 from '@aws-cdk/aws-s3';
import * as sqs from '@aws-cdk/aws-sqs';
import * as iam from '@aws-cdk/aws-iam';
import {Construct} from '@aws-cdk/core';

// You can drop this construct anywhere, and in your stack, invoke it like this:
// const s3ToSQSNotification = new S3NotificationToSQSCustomResource(this, 's3ToSQSNotification', existingBucket, queue);
export class S3NotificationToSQSCustomResource extends Construct {
    constructor(scope: Construct, id: string, bucket: s3.IBucket, queue: sqs.Queue) {
        super(scope, id);

        // https://stackoverflow.com/questions/58087772/aws-cdk-how-to-add-an-event-notification-to-an-existing-s3-bucket
        const notificationResource = new cr.AwsCustomResource(scope, id + "CustomResource", {
            onCreate: {
                service: 'S3',
                action: 'putBucketNotificationConfiguration',
                parameters: {
                    // This bucket must be in the same region you are deploying to
                    Bucket: bucket.bucketName,
                    NotificationConfiguration: {
                        QueueConfigurations: [
                            {
                                Events: ['s3:ObjectCreated:*'],
                                QueueArn: queue.queueArn,
                            }
                        ]
                    }
                },
                physicalResourceId: cr.PhysicalResourceId.of(id + Date.now().toString()),
            },
            onDelete: {
                service: 'S3',
                action: 'putBucketNotificationConfiguration',
                parameters: {
                    // This bucket must be in the same region you are deploying to
                    Bucket: bucket.bucketName,
                    // deleting a notification configuration involves setting it to empty.
                    NotificationConfiguration: {}
                },
                physicalResourceId: cr.PhysicalResourceId.of(id + Date.now().toString()),
            },
            policy: cr.AwsCustomResourcePolicy.fromStatements([new iam.PolicyStatement({
                // The actual function is PutBucketNotificationConfiguration.
                // The "Action" for IAM policies is PutBucketNotification.
                // https://docs.aws.amazon.com/AmazonS3/latest/dev/list_amazons3.html#amazons3-actions-as-permissions
                actions: ["S3:PutBucketNotification"],
                // allow this custom resource to modify this bucket
                resources: [bucket.bucketArn],
            })]),
            logRetention: logs.RetentionDays.ONE_DAY,
        });

        // allow S3 to send notifications to our queue
        // https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#grant-destinations-permissions-to-s3
        queue.addToResourcePolicy(new iam.PolicyStatement({
            principals: [new iam.ServicePrincipal("s3.amazonaws.com")],
            actions: ["SQS:SendMessage"],
            resources: [queue.queueArn],
            conditions: {
                ArnEquals: {"aws:SourceArn": bucket.bucketArn}
            }
        }));

        // don't create the notification custom-resource until after both the bucket and queue
        // are fully created and policies applied.
        notificationResource.node.addDependency(bucket);
        notificationResource.node.addDependency(queue);
    }
}
UPDATED: The source code from the original answer will overwrite the existing notification list for the bucket, which makes it impossible to add new lambda triggers. Here's a solution that uses event sources to handle the mentioned problem.
from aws_cdk import (
    core,
    aws_iam,
    aws_s3 as s3,
    aws_lambda as lambda_,
    aws_lambda_event_sources as event_src,
)
import os.path as path


class S3LambdaTrigger(core.Stack):
    def __init__(self, scope: core.Construct, id: str):
        super().__init__(scope, id)

        bucket = s3.Bucket(
            self, "S3Bucket",
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            bucket_name='BucketName',
            encryption=s3.BucketEncryption.S3_MANAGED,
            versioned=True
        )

        fn = lambda_.Function(
            self, "LambdaFunction",
            runtime=lambda_.Runtime.NODEJS_10_X,
            handler="index.handler",
            code=lambda_.Code.from_asset(path.join(path.dirname(__file__), "lambda-handler"))
        )

        fn.add_permission(
            's3-service-principal',
            principal=aws_iam.ServicePrincipal('s3.amazonaws.com')
        )

        fn.add_event_source(
            event_src.S3EventSource(
                bucket,
                events=[s3.EventType.OBJECT_CREATED, s3.EventType.OBJECT_REMOVED],
                filters=[s3.NotificationKeyFilter(prefix="subdir/", suffix=".txt")]
            )
        )
ORIGINAL:
I took ubi's solution in TypeScript and successfully translated it to Python. His solution worked for me.
#!/usr/bin/env python

from typing import List

from aws_cdk import (
    core,
    custom_resources as cr,
    aws_lambda as lambda_,
    aws_s3 as s3,
    aws_iam as iam,
)


class S3NotificationLambdaProps:
    def __init__(self, bucket: s3.Bucket, function: lambda_.Function, events: List[str], prefix: str):
        self.bucket = bucket
        self.function = function
        self.events = events
        self.prefix = prefix


class S3NotificationLambda(core.Construct):
    def __init__(self, scope: core.Construct, id: str, props: S3NotificationLambdaProps):
        super().__init__(scope, id)

        self.notificationResource = cr.AwsCustomResource(
            self, f'CustomResource{id}',
            on_create=cr.AwsSdkCall(
                service="S3",
                # the SDK method name, without a service prefix
                action="putBucketNotificationConfiguration",
                # Always update physical ID so function gets executed
                physical_resource_id=cr.PhysicalResourceId.of(f'S3NotifCustomResource{id}'),
                parameters={
                    "Bucket": props.bucket.bucket_name,
                    "NotificationConfiguration": {
                        "LambdaFunctionConfigurations": [{
                            "Events": props.events,
                            "LambdaFunctionArn": props.function.function_arn,
                            "Filter": {
                                "Key": {"FilterRules": [{"Name": "prefix", "Value": props.prefix}]}
                            }
                        }]
                    }
                }
            ),
            on_delete=cr.AwsSdkCall(
                service="S3",
                action="putBucketNotificationConfiguration",
                # Always update physical ID so function gets executed
                physical_resource_id=cr.PhysicalResourceId.of(f'S3NotifCustomResource{id}'),
                parameters={
                    "Bucket": props.bucket.bucket_name,
                    "NotificationConfiguration": {},
                }
            ),
            policy=cr.AwsCustomResourcePolicy.from_statements(
                statements=[
                    iam.PolicyStatement(
                        actions=["S3:PutBucketNotification", "S3:GetBucketNotification"],
                        resources=[props.bucket.bucket_arn]
                    ),
                ]
            )
        )

        props.function.add_permission(
            "AllowS3Invocation",
            action="lambda:InvokeFunction",
            principal=iam.ServicePrincipal("s3.amazonaws.com"),
            source_arn=props.bucket.bucket_arn,
        )

        # don't create the notification custom-resource until after both the bucket and lambda
        # are fully created and policies applied.
        self.notificationResource.node.add_dependency(props.bucket)
        self.notificationResource.node.add_dependency(props.function)


# Usage:
s3NotificationLambdaProps = S3NotificationLambdaProps(
    bucket=bucket_,
    function=lambda_fn_,
    events=['s3:ObjectCreated:*'],
    prefix='foo/'
)

s3NotificationLambda = S3NotificationLambda(
    self, "S3NotifLambda",
    s3NotificationLambdaProps
)
Here is a Python solution for adding / replacing a lambda trigger on an existing bucket, including the filter. @James Irwin your example was very helpful.
Thanks to @JørgenFrøland for pointing out that the custom resource config will replace any existing notification triggers, per the boto3 documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.BucketNotification.put
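For illustration only (this is not part of the solution below), one way to avoid clobbering existing triggers is to read the current configuration and merge it before writing it back, roughly along these lines (a sketch assuming the AWS SDK for JavaScript v2; error handling omitted):

import * as AWS from 'aws-sdk';

const s3Api = new AWS.S3();

// Sketch: fetch the existing notification configuration, append the new Lambda trigger,
// and write the merged result back so other triggers are preserved.
async function appendLambdaTrigger(bucketName: string, lambdaArn: string): Promise<void> {
    const current = await s3Api.getBucketNotificationConfiguration({ Bucket: bucketName }).promise();

    await s3Api.putBucketNotificationConfiguration({
        Bucket: bucketName,
        NotificationConfiguration: {
            TopicConfigurations: current.TopicConfigurations,
            QueueConfigurations: current.QueueConfigurations,
            LambdaFunctionConfigurations: [
                ...(current.LambdaFunctionConfigurations || []),
                { Events: ['s3:ObjectCreated:*'], LambdaFunctionArn: lambdaArn },
            ],
        },
    }).promise();
}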
One note: the access denied issue happens because the putBucketNotificationConfiguration action makes the generated policy use an s3:PutBucketNotificationConfiguration action, but that IAM action doesn't exist (https://github.com/aws/aws-cdk/issues/3318#issuecomment-584737465).
The same issue happens if you set the policy using AwsCustomResourcePolicy.fromSdkCalls.
I've added a custom policy that might need to be restricted further.
s3_bucket = s3.Bucket.from_bucket_name(
    self, 's3-bucket-by-name', 'existing-bucket-name')

trigger_lambda = _lambda.Function(
    self,
    '{id}-s3-trigger-lambda',
    environment=lambda_env,
    code=_lambda.Code.from_asset('./ladle-sink/'),
    runtime=_lambda.Runtime.PYTHON_3_7,
    handler='lambda_function.lambda_handler',
    memory_size=512,
    timeout=core.Duration.minutes(3))

trigger_lambda.add_permission(
    's3-trigger-lambda-s3-invoke-function',
    principal=iam.ServicePrincipal('s3.amazonaws.com'),
    action='lambda:InvokeFunction',
    source_arn=base_resources.incoming_documents_bucket.bucket_arn)

custom_s3_resource = _custom_resources.AwsCustomResource(
    self,
    's3-incoming-documents-notification-resource',
    policy=_custom_resources.AwsCustomResourcePolicy.from_statements([
        iam.PolicyStatement(
            effect=iam.Effect.ALLOW,
            resources=['*'],
            actions=['s3:PutBucketNotification']
        )
    ]),
    on_create=_custom_resources.AwsSdkCall(
        service="S3",
        action="putBucketNotificationConfiguration",
        parameters={
            "Bucket": s3_bucket.bucket_name,
            "NotificationConfiguration": {
                "LambdaFunctionConfigurations": [
                    {
                        "Events": ['s3:ObjectCreated:*'],
                        "LambdaFunctionArn": trigger_lambda.function_arn,
                        "Filter": {
                            "Key": {
                                "FilterRules": [
                                    {'Name': 'suffix', 'Value': 'html'}]
                            }
                        }
                    }
                ]
            }
        },
        physical_resource_id=_custom_resources.PhysicalResourceId.of(
            f's3-notification-resource-{str(uuid.uuid1())}'),
        region=env.region
    ))

custom_s3_resource.node.add_dependency(
    trigger_lambda.permissions_node.find_child(
        's3-trigger-lambda-s3-invoke-function'))
Thanks to the great answers above; see below for a construct for S3 -> Lambda notification. It can be used like this:
const fn = new SingletonFunction(this, "Function", {
    ...
});

const bucket = Bucket.fromBucketName(this, "Bucket", "...");

const s3notification = new S3NotificationLambda(this, "S3Notification", {
    bucket: bucket,
    lambda: fn,
    events: ['s3:ObjectCreated:*'],
    prefix: "some_prefix/"
})
Construct (drop-in to your project as a .ts file)
import * as cr from "#aws-cdk/custom-resources";
import * as logs from "#aws-cdk/aws-logs";
import * as s3 from "#aws-cdk/aws-s3";
import * as sqs from "#aws-cdk/aws-sqs";
import * as iam from "#aws-cdk/aws-iam";
import { Construct } from "#aws-cdk/core";
import * as lambda from "#aws-cdk/aws-lambda";
export interface S3NotificationLambdaProps {
bucket: s3.IBucket;
lambda: lambda.IFunction;
events: string[];
prefix: string;
}
export class S3NotificationLambda extends Construct {
constructor(scope: Construct, id: string, props: S3NotificationLambdaProps) {
super(scope, id);
const notificationResource = new cr.AwsCustomResource(
scope,
id + "CustomResource",
{
onCreate: {
service: "S3",
action: "putBucketNotificationConfiguration",
parameters: {
// This bucket must be in the same region you are deploying to
Bucket: props.bucket.bucketName,
NotificationConfiguration: {
LambdaFunctionConfigurations: [
{
Events: props.events,
LambdaFunctionArn: props.lambda.functionArn,
Filter: {
Key: {
FilterRules: [{ Name: "prefix", Value: props.prefix }],
},
},
},
],
},
},
physicalResourceId: <cr.PhysicalResourceId>(
(id + Date.now().toString())
),
},
onDelete: {
service: "S3",
action: "putBucketNotificationConfiguration",
parameters: {
// This bucket must be in the same region you are deploying to
Bucket: props.bucket.bucketName,
// deleting a notification configuration involves setting it to empty.
NotificationConfiguration: {},
},
physicalResourceId: <cr.PhysicalResourceId>(
(id + Date.now().toString())
),
},
policy: cr.AwsCustomResourcePolicy.fromStatements([
new iam.PolicyStatement({
// The actual function is PutBucketNotificationConfiguration.
// The "Action" for IAM policies is PutBucketNotification.
// https://docs.aws.amazon.com/AmazonS3/latest/dev/list_amazons3.html#amazons3-actions-as-permissions
actions: ["S3:PutBucketNotification", "S3:GetBucketNotification"],
// allow this custom resource to modify this bucket
resources: [props.bucket.bucketArn],
}),
]),
}
);
props.lambda.addPermission("AllowS3Invocation", {
action: "lambda:InvokeFunction",
principal: new iam.ServicePrincipal("s3.amazonaws.com"),
sourceArn: props.bucket.bucketArn,
});
// don't create the notification custom-resource until after both the bucket and queue
// are fully created and policies applied.
notificationResource.node.addDependency(props.bucket);
notificationResource.node.addDependency(props.lambda);
}
}
Based on the answer from @ubi, in case you don't need the SingletonFunction but a plain Function, plus some cleanup.
Call it like this:
const s3NotificationLambdaProps = <S3NotificationLambdaProps>{
    bucket: bucket,
    lambda: lambda,
    events: ['s3:ObjectCreated:*'],
    prefix: '', // or put some prefix
};

const s3NotificationLambda = new S3NotificationLambda(this, `${envNameUpperCase}S3ToLambdaNotification`, s3NotificationLambdaProps);
And the construct will look like this:
import * as cr from "#aws-cdk/custom-resources";
import * as s3 from "#aws-cdk/aws-s3";
import * as iam from "#aws-cdk/aws-iam";
import { Construct } from "#aws-cdk/core";
import * as lambda from "#aws-cdk/aws-lambda";
export interface S3NotificationLambdaProps {
bucket: s3.IBucket;
lambda: lambda.Function;
events: string[];
prefix: string;
}
export class S3NotificationLambda extends Construct {
constructor(scope: Construct, id: string, props: S3NotificationLambdaProps) {
super(scope, id);
const notificationResource = new cr.AwsCustomResource(
scope,
id + "CustomResource", {
onCreate: {
service: "S3",
action: "putBucketNotificationConfiguration",
parameters: {
// This bucket must be in the same region you are deploying to
Bucket: props.bucket.bucketName,
NotificationConfiguration: {
LambdaFunctionConfigurations: [{
Events: props.events,
LambdaFunctionArn: props.lambda.functionArn,
Filter: {
Key: {
FilterRules: [{
Name: "prefix",
Value: props.prefix
}],
},
},
}, ],
},
},
physicalResourceId: < cr.PhysicalResourceId > (
(id + Date.now().toString())
),
},
onDelete: {
service: "S3",
action: "putBucketNotificationConfiguration",
parameters: {
// This bucket must be in the same region you are deploying to
Bucket: props.bucket.bucketName,
// deleting a notification configuration involves setting it to empty.
NotificationConfiguration: {},
},
physicalResourceId: < cr.PhysicalResourceId > (
(id + Date.now().toString())
),
},
policy: cr.AwsCustomResourcePolicy.fromStatements([
new iam.PolicyStatement({
// The actual function is PutBucketNotificationConfiguration.
// The "Action" for IAM policies is PutBucketNotification.
// https://docs.aws.amazon.com/AmazonS3/latest/dev/list_amazons3.html#amazons3-actions-as-permissions
actions: ["S3:PutBucketNotification", "S3:GetBucketNotification"],
// allow this custom resource to modify this bucket
resources: [props.bucket.bucketArn],
}),
]),
}
);
props.lambda.addPermission("AllowS3Invocation", {
action: "lambda:InvokeFunction",
principal: new iam.ServicePrincipal("s3.amazonaws.com"),
sourceArn: props.bucket.bucketArn,
});
// don't create the notification custom-resource until after both the bucket and lambda
// are fully created and policies applied.
notificationResource.node.addDependency(props.bucket);
notificationResource.node.addDependency(props.lambda);
}
}
With the newer functionality, in Python this can now be done as:
bucket = aws_s3.Bucket.from_bucket_name(
self, "bucket", "bucket-name"
)
bucket.add_event_notification(
aws_s3.EventType.OBJECT_CREATED,
aws_s3_notifications.LambdaDestination(your_lambda),
aws_s3.NotificationKeyFilter(
prefix="prefix/path/",
),
)
At the time of writing, the AWS documentation seems to have the prefix arguments incorrect in its examples, so this was moderately confusing to figure out.
Thanks to @Kilian Pfeifer for starting me down the right path with the TypeScript example.
I used CloudTrail to resolve the issue; the code looks like the following and it's more abstract:
const trail = new cloudtrail.Trail(this, 'MyAmazingCloudTrail');
const options: AddEventSelectorOptions = {
    readWriteType: cloudtrail.ReadWriteType.WRITE_ONLY
};

// Adds an event selector to the bucket
trail.addS3EventSelector([{
    bucket: bucket, // 'Bucket' is of type s3.IBucket
}], options);

bucket.onCloudTrailWriteObject('MyAmazingCloudTrail', {
    target: new targets.LambdaFunction(functionReference)
});
This is a CDK solution.
Get hold of the existing bucket using fromBucketAttributes, then call addEventNotification on the bucket to trigger your lambda.
declare const myLambda: lambda.Function;

const bucket = s3.Bucket.fromBucketAttributes(this, 'ImportedBucket', {
    bucketArn: 'arn:aws:s3:::my-bucket',
});

// now you can just call methods on the bucket
bucket.addEventNotification(s3.EventType.OBJECT_CREATED, new s3n.LambdaDestination(myLambda), {prefix: 'home/myusername/*'});
More details can be found here
AWS now supports S3 EventBridge events, which allow adding a source S3 bucket by name, so this worked for me. Note that you need to enable EventBridge events manually for the triggering S3 bucket.
new Rule(this, 's3rule', {
    eventPattern: {
        source: ['aws.s3'],
        detail: {
            'bucket': {'name': ['existing-bucket']},
            'object': {'key': [{'prefix': 'prefix'}]}
        },
        detailType: ['Object Created']
    },
    targets: [new targets.LambdaFunction(MyFunction)]
});
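Enabling EventBridge delivery on an existing bucket can be done in the console or CLI; if you want it in CDK too, a sketch using the same custom-resource pattern as the earlier answers might look like the following (construct IDs and the bucket name are placeholders, `cr` and `iam` are the custom-resources and IAM modules used above, and the usual caveat applies that putBucketNotificationConfiguration replaces whatever configuration is already on the bucket):

// Sketch only: enable EventBridge notifications on an existing bucket via a custom resource.
const enableEventBridge = new cr.AwsCustomResource(this, 'EnableBucketEventBridge', {
    onCreate: {
        service: 'S3',
        action: 'putBucketNotificationConfiguration',
        parameters: {
            Bucket: 'existing-bucket',
            // An empty EventBridgeConfiguration turns on EventBridge delivery.
            // Caveat: this call replaces any notification configuration already on the bucket.
            NotificationConfiguration: { EventBridgeConfiguration: {} },
        },
        physicalResourceId: cr.PhysicalResourceId.of('EnableBucketEventBridge'),
    },
    policy: cr.AwsCustomResourcePolicy.fromStatements([
        new iam.PolicyStatement({
            actions: ['s3:PutBucketNotification'],
            resources: ['arn:aws:s3:::existing-bucket'],
        }),
    ]),
});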
I'm trying to implement custom authorization on API Gateway that checks the user's permissions for each particular endpoint behind it by reading them from DynamoDB.
I associated the authorizer with the method in question (screenshot below)
The authorizer seems to be working OK, and it returns a policy that looks fine to me (have a look underneath):
{
    "policyDocument": {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "execute-api:Invoke",
                "Effect": "Deny",
                "Resource": "arn:aws:execute-api:us-east-2:111111111111:mkvhd2q179/*/GET/api/Test"
            }
        ]
    },
    "principalId": "*"
}
However, regardless of the Effect the authorizer returns inside the policy document, API Gateway still lets all requests pass. I get status 200 as well as the result set from the API endpoint underneath.
Any ideas as to why the API Gateway would ignore the policy?
P.S.
I tried with the explicit principalID (the username/subject from the token) prior to putting an asterisk there. It behaved the same.
P.P.S.
Here's a completely dumbed-down version of my Lambda function, currently set up to always return Deny as the policy Effect...
public class Function
{
    public AuthPolicy FunctionHandler(TokenAuthorizerContext request, ILambdaContext context)
    {
        var token = request.AuthorizationToken;
        var stream = token;
        var handler = new JwtSecurityTokenHandler();
        var jsonToken = handler.ReadToken(stream);
        var tokenS = handler.ReadToken(token) as JwtSecurityToken;

        return generatePolicy(tokenS.Subject, "Deny", "arn:aws:execute-api:us-east-2:111111111111:mkvhd2q179/*");
    }

    private AuthPolicy generatePolicy(string principalId, string effect, string resource)
    {
        AuthPolicy authResponse = new AuthPolicy();
        authResponse.policyDocument = new PolicyDocument();
        authResponse.policyDocument.Version = "2012-10-17"; // default version
        authResponse.policyDocument.Statement = new Statement[1];
        authResponse.principalId = "*";

        Statement statementOne = new Statement();
        statementOne.Action = "execute-api:Invoke"; // default action
        statementOne.Effect = effect;
        statementOne.Resource = resource;
        authResponse.policyDocument.Statement[0] = statementOne;

        return authResponse;
    }
}

public class TokenAuthorizerContext
{
    public string Type { get; set; }
    public string AuthorizationToken { get; set; }
    public string MethodArn { get; set; }
}

public class AuthPolicy
{
    public PolicyDocument policyDocument { get; set; }
    public string principalId { get; set; }
}

public class PolicyDocument
{
    public string Version { get; set; }
    public Statement[] Statement { get; set; }
}

public class Statement
{
    public string Action { get; set; }
    public string Effect { get; set; }
    public string Resource { get; set; }
}
TL;DR; Remove/change/check the "Resource Policy" set in the Gateway.
I had a similar problem.
Somehow I had an "allow * principal access to * resources" policy set in the Resource Policy on the Gateway, which was being combined with whatever the Authorizer was returning. I ended up removing all resource policies and letting the Authorizer decide.
I had this problem as well. Turns out that making the request from the API Gateway console screen (e.g. https://us-west-2.console.aws.amazon.com/apigateway/) doesn't appropriately invoke the authorizer.
I'm guessing it's because your console session has its own IAM policy, which interferes with the authorizer policy.
The solution was to manually curl the endpoint outside of the API Gateway console.
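If you'd rather script that check than run curl by hand, a minimal sketch looks like this (the URL and token are placeholders; assumes Node 18+ so fetch is available globally):

// Minimal sketch: call the deployed stage directly so the TOKEN authorizer actually runs,
// passing the token in the Authorization header it is configured to read.
async function testAuthorizer(): Promise<void> {
    const res = await fetch(
        'https://<api-id>.execute-api.us-east-2.amazonaws.com/<stage>/api/Test',
        { headers: { Authorization: '<your-token>' } },
    );

    // A Deny policy from the authorizer should surface here as 403, not 200.
    console.log(res.status, await res.text());
}

testAuthorizer();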
Additionally, do not forget to deploy your API after you make your changes! Otherwise your changes won't take effect.
I had a similar issue. Our API Gateway resource policy was set up to allow executing any API at the account level (arn:aws:execute-api:us-east-1:xxxxx:*).
Even though we implemented fine-grained access where we return a policy allowing only a particular ARN, the API Gateway resource policy was taking precedence. So I removed the resource policy and redeployed the API, and it allowed that particular API and denied the others. Or you can try the reverse, depending on how you configure your Effect and policy statement.
Initial resource policy (I removed it and redeployed):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:xxxxx:*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "xx.xx.xx.xxx/24"
                    ]
                }
            }
        }
    ]
}
Final Lambda Auth Policy returned:
{
    "principalId": "xxxxxxxxxx",
    "policyDocument": {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "execute-api:Invoke"
                ],
                "Resource": [
                    "arn:aws:execute-api:us-east-1:xxxxx:bxxxx/*/POST/*/someresource"
                ]
            }
        ]
    }
}
The AWS documentation is confusing... it seems that you still need to use the "callback" to do the trick, and it is not enough to return a "Deny" policy...
exports.authorizer = (event, context, callback) => {
    if (invalidToken) {
        // reject the request by passing the "Unauthorized" error string to the callback
        callback("Unauthorized", null);
        return;
    }

    // create a valid policy and hand it back through the callback
    callback(null, validPolicy);
};
I am trying to implement a custom authorizer Lambda function via the Java SDK. Can somebody tell me the exact format of the JSON response that is expected from my Lambda function? Also, in which format should I return the output (JSON object or policy object)?
{
    "policyDocument": {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "execute-api:Invoke",
                "Resource": [
                    "arn:aws:execute-api:us-east-1:1234567:myapiId/staging/POST/*"
                ],
                "Effect": "Allow"
            }
        ]
    },
    "principalId": "User123"
}
This is the format I am providing as output (as a JSONObject), but I'm getting this error:
Mon Apr 10 09:42:35 UTC 2017 : Endpoint request body after
transformations:
{"type":"TOKEN","authorizationToken":"ABC123","methodArn":"arn:aws:execute-api:ap-southeast-1:007183653813:ohlqxu9p57/null/GET/"}
Mon Apr 10 09:42:36 UTC 2017 : Execution failed due to configuration
error: Authorizer function failed with response body:
{"errorMessage":"An error occurred during JSON serialization of
response","errorType":"java.lang.RuntimeException","stackTrace":[],"cause":{"errorMessage":"com.fasterxml.jackson.databind.JsonMappingException:
JsonObject (through reference chain:
com.google.gson.JsonObject[\"asString\"])","errorType":"java.io.UncheckedIOException","stackTrace":[],"cause":{"errorMessage":"JsonObject
(through reference chain:
com.google.gson.JsonObject[\"asString\"])","errorType":"com.fasterxml.jackson.databind.JsonMappingException","stackTrace":["com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:210)","com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:177)","com.fasterxml.jackson.databind.ser.std.StdSerializer.wrapAndThrow(StdSerializer.java:199)","com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:683)","com.f
[TRUNCATED] Mon Apr 10 09:42:36 UTC 2017 :
AuthorizerConfigurationException
Any help would be great. Thanks in advance
The issue you are facing is Lambda framework related.
Essentially, Lambda will invoke the handler function and pass it the serialized JSON.
public class LambdaCustomAuthorizer implements RequestHandler<AuthorizationRequestDO, Object> {

    public Object handleRequest(AuthorizationRequestDO input, Context context) { }
}
When you work with a custom authorizer, API Gateway passes the following JSON to your Lambda function:
{
    "type": "TOKEN",
    "authorizationToken": "<caller-supplied-token>",
    "methodArn": "arn:aws:execute-api:<regionId>:<accountId>:<apiId>/<stage>/<method>/<resourcePath>"
}
You should have a custom DO, AuthorizationRequestDO, which is a POJO:
public class AuthorizationRequestDO {

    String authorizationToken;
    String methodArn;

    public String getAuthorizationToken() {
        return authorizationToken;
    }

    public void setAuthorizationToken(String authorizationToken) {
        this.authorizationToken = authorizationToken;
    }

    public String getMethodArn() {
        return methodArn;
    }

    public void setMethodArn(String methodArn) {
        this.methodArn = methodArn;
    }

    @Override
    public String toString() {
        return "AuthorizationRequestDO [authorizationToken=" + authorizationToken + ", methodArn=" + methodArn
                + ", getAuthorizationToken()=" + getAuthorizationToken() + ", getMethodArn()=" + getMethodArn() + "]";
    }
}
Your Resource property should be a single string value.
{
    "policyDocument": {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "execute-api:Invoke",
                "Resource": "arn:aws:execute-api:us-east-1:1234567:myapiId/staging/POST/*",
                "Effect": "Allow"
            }
        ]
    },
    "principalId": "User123"
}