How to get GCP logs to show up in Cloud Run? - google-cloud-platform

I see instructions at https://www.npmjs.com/package/@google-cloud/logging to start logging, but I can't seem to see the logs in https://console.cloud.google.com/run/detail/<location>/<service>/logs?project=<project> for Cloud Run, and I'm not sure where they are (I invoked them with quickstart('my-project', 'my-log')).

To find the logs from the tutorial, use the Logs Explorer at https://console.cloud.google.com/logs/query with the query logName="projects/<project>/logs/<log_name>", so in your case logName="projects/my-project/logs/my-log"; alternatively, when you're editing the query you can find the log in the dropdown under log name.
To get logs to show up under Cloud Run you need to set the following in the metadata:
{
  ...,
  resource: {
    type: 'cloud_run_revision',
    labels: { service_name: 'my-service', location: 'us-east1' }
  },
}
You can use this sample function to test:
const { Logging } = require('@google-cloud/logging')

const log = async (text, name) => {
  const gcpLogger = new Logging({ projectId: 'smodin-dev' })
  const logSet = gcpLogger.log(name)
  const metadata = {
    severity: 'INFO',
    resource: {
      type: 'cloud_run_revision',
      labels: { service_name: 'my-service', location: 'us-east1' }
    },
  }
  const entry = logSet.entry(metadata, text)
  await logSet.write(entry)
}
PS: If there is a different recommended method I'm all ears
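For what it's worth, a commonly recommended alternative on Cloud Run is to skip the client library entirely and write structured JSON to stdout: Cloud Run forwards stdout/stderr to Cloud Logging, parses JSON lines, honors fields like severity and message, and attaches the correct monitored resource automatically. A minimal sketch (the logInfo helper is hypothetical):

// Minimal sketch: structured logging via stdout on Cloud Run, no client library.
// Cloud Run's logging agent parses each JSON line and maps severity/message.
const logInfo = (message, extra = {}) => {
  console.log(JSON.stringify({ severity: 'INFO', message, ...extra }))
}

logInfo('Hello from Cloud Run', { requestId: 'abc123' })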

Related

DynamoDB JavaScript PutItemCommand is neither failing nor working

Please note: although this question mentions AWS SAM, it is 100% a DynamoDB JavaScript SDK question at heart and can be answered by anyone with experience writing JavaScript Lambdas (or any client-side apps) against DynamoDB using the AWS DynamoDB client/SDK.
So I used AWS SAM to provision a new DynamoDB table with the following attributes:
FeedbackDynamoDB:
  Type: AWS::DynamoDB::Table
  Properties:
    TableName: commentary
    AttributeDefinitions:
      - AttributeName: id
        AttributeType: S
    KeySchema:
      - AttributeName: id
        KeyType: HASH
    ProvisionedThroughput:
      ReadCapacityUnits: 5
      WriteCapacityUnits: 5
    StreamSpecification:
      StreamViewType: NEW_IMAGE
This configuration successfully creates a DynamoDB table called commentary. However, when I view this table in the DynamoDB web console, I notice a few things:
it has a partition key of id (type S)
it has no sort key
it has no (0) indexes
it has a read/write capacity mode of "5"
I'm not sure if this raises any red flags with anyone but I figured I would include those details, in case I've configured anything incorrectly.
Now then, I have a JavaScript (TypeScript) Lambda that instantiates a DynamoDB client (using the JavaScript SDK) and attempts to add a record/item to this table:
// this code is in a file named app.ts:
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { User, allUsers } from './users';
import { Commentary } from './commentary';
import { PutItemCommand, DynamoDBClient } from '@aws-sdk/client-dynamodb';
export const lambdaHandler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  try {
    const ddbClient = new DynamoDBClient({ region: "us-east-1" });
    let status: number = 200;
    let responseBody: string = "\"message\": \"hello world\"";
    const { id, content, createdAt, providerId, receiverId } = JSON.parse(event.body);
    const commentary = new Commentary(id, content, createdAt, providerId, receiverId);
    console.log("deserialized this into commentary");
    console.log("and the deserialized commentary has content of: " + commentary.getContent());
    await provideCommentary(ddbClient, commentary);
    responseBody = "\"message\": \"received commentary -- check dynamoDb!\"";
    return {
      statusCode: status,
      body: responseBody
    };
  } catch (err) {
    console.log(err);
    return {
      statusCode: 500,
      body: JSON.stringify({
        message: err.stack,
      }),
    };
  }
};
const provideCommentary = async (ddbClient: DynamoDBClient, commentary: Commentary) => {
  const params = {
    TableName: "commentary",
    Item: {
      id: {
        S: commentary.getId()
      },
      content: {
        S: commentary.getContent()
      },
      createdAt: {
        S: commentary.getCreatedAt()
      },
      providerId: {
        N: commentary.getProviderId()
      },
      receiverId: {
        N: commentary.getReceiverId()
      }
    }
  };
  console.log("about to try to insert commentary into dynamo...");
  try {
    console.log("wait for it...")
    const rc = await ddbClient.send(new PutItemCommand(params));
    console.log("DDB response:", rc);
  } catch (err) {
    console.log("hmmm something awry. something....in the mist");
    console.log("Error", err.stack);
    throw err;
  }
};
Where commentary.ts is:
class Commentary {
  private id: string;
  private content: string;
  private createdAt: Date;
  private providerId: number;
  private receiverId: number;

  constructor(id: string, content: string, createdAt: Date, providerId: number, receiverId: number) {
    this.id = id;
    this.content = content;
    this.createdAt = createdAt;
    this.providerId = providerId;
    this.receiverId = receiverId;
  }

  public getId(): string {
    return this.id;
  }

  public getContent(): string {
    return this.content;
  }

  public getCreatedAt(): Date {
    return this.createdAt;
  }

  public getProviderId(): number {
    return this.providerId;
  }

  public getReceiverId(): number {
    return this.receiverId;
  }
}

export { Commentary };
When I update the Lambda with this handler code, and hit the Lambda with the following curl (the Lambda is invoked by an API Gateway URL that I can hit via curl/http):
curl -i --request POST 'https://<my-api-gateway>.execute-api.us-east-1.amazonaws.com/Stage/feedback' \
  --header 'Content-Type: application/json' -d '{"id":"123","content":"test feedback","createdAt":"2022-12-02T08:45:26.261-05:00","providerId":457,"receiverId":789}'
I get the following HTTP 500 response:
{"message":"SerializationException: NUMBER_VALUE cannot be converted to String\n
Am I passing it a bad request body (in the curl) or do I need to tweak something in app.ts and/or commentary.ts?
Interestingly, the DynamoDB API expects the numerical fields of items to be sent as strings. For example:
"N": "123.45"
The doc says:
Numbers are sent across the network to DynamoDB as strings, to maximize compatibility across languages and libraries. However, DynamoDB treats them as number type attributes for mathematical operations.
Have you tried sending your input with the numerical parameters as strings as shown below? (See providerId and receiverId)
{
  "id": "123",
  "content": "test feedback",
  "createdAt": "2022-12-02T08:45:26.261-05:00",
  "providerId": "457",
  "receiverId": "789"
}
You can convert these IDs into strings when you're populating your input Item:
providerId: {
  N: String(commentary.getProviderId())
},
receiverId: {
  N: String(commentary.getReceiverId())
}
You could also use .toString() but then you'd get errors if the field is not set (null or undefined).
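As an aside, if you'd rather not hand-marshal attribute types at all, the higher-level document client in @aws-sdk/lib-dynamodb accepts plain JavaScript values and handles the number-to-string wire conversion for you. A sketch under that assumption, reusing the commentary table from the question:

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({ region: "us-east-1" }));

// Plain JS values; the document client marshals attribute types (numbers included).
await docClient.send(new PutCommand({
  TableName: "commentary",
  Item: {
    id: "123",
    content: "test feedback",
    createdAt: "2022-12-02T08:45:26.261-05:00",
    providerId: 457,
    receiverId: 789,
  },
}));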
Try using a promise to see the outcome:
client.send(command).then(
  (data) => {
    // process data.
  },
  (error) => {
    // error handling.
  }
);
Everything seems alright with your table setup; I believe it's a Lambda async issue with the JS SDK. I'm guessing Lambda is not waiting on your code and exiting early. Can you include your full Lambda code?

NOT_FOUND(5): Instance Unavailable. HTTP status code 404

I got this error when the task attempts to process.
This is my Node.js code:
// Client setup (omitted from the original snippet):
const { CloudTasksClient, protos } = require("@google-cloud/tasks");
const client = new CloudTasksClient();

async function quickstart(message : any) {
  // TODO(developer): Uncomment these lines and replace with your values.
  const project = "";   // project id
  const queue = "";     // queue name
  const location = "";  // region
  const payload = JSON.stringify({
    id: message.id,
    data: message.data,
    attributes: message.attributes,
  });
  const inSeconds = 180;
  // Construct the fully qualified queue name.
  const parent = client.queuePath(project, location, queue);
  const task = {
    appEngineHttpRequest: {
      headers: { "Content-type": "application/json" },
      httpMethod: protos.google.cloud.tasks.v2.HttpMethod.POST,
      relativeUri: "/api/download",
      body: "",
    },
    scheduleTime: {},
  };
  if (payload) {
    task.appEngineHttpRequest.body = Buffer.from(payload).toString("base64");
  }
  if (inSeconds) {
    task.scheduleTime = {
      seconds: inSeconds + Date.now() / 1000,
    };
  }
  const request = {
    parent: parent,
    task: task,
  };
  console.log("Sending task:");
  console.log(task);
  // Send create task request.
  const [response] = await client.createTask(request);
  console.log(`Created task ${response.name}`);
  console.log("Created task");
  return true;
}
The task is created without issue. However, it didn't trigger my Cloud Function and I got a 404 or an unhandled exception in my cloud logs. I have no idea what's going wrong.
I also tested with the gcloud CLI without the issue; the gcloud CLI is able to trigger my Cloud Function with the provided URL.
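One thing worth double-checking (an observation, not a verified fix): appEngineHttpRequest tasks are routed to the App Engine app in the project, so if /api/download is actually served by a Cloud Function, a 404 would be expected. In that case an HTTP-target task carrying the handler's full URL is the usual shape; a hypothetical sketch with placeholder values, reusing payload, inSeconds and protos from the code above:

// Hypothetical: HTTP-target task instead of an App Engine target.
const task = {
  httpRequest: {
    httpMethod: protos.google.cloud.tasks.v2.HttpMethod.POST,
    url: "https://<region>-<project>.cloudfunctions.net/download",  // placeholder URL
    headers: { "Content-type": "application/json" },
    body: Buffer.from(payload).toString("base64"),
    // If the target requires authentication, attach an OIDC token:
    oidcToken: { serviceAccountEmail: "<invoker-sa>@<project>.iam.gserviceaccount.com" },
  },
  scheduleTime: { seconds: inSeconds + Date.now() / 1000 },
};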

How to set up EventHub disasterRecoveryConfigs using Azure Bicep?

I have created a module 'resources.bicep' to create an Event Hub namespace in two regions.
resource eventHubNamespace 'Microsoft.EventHub/namespaces@2021-11-01' = {
  name: resourceName
  location: location
  sku: {
    name: 'Standard'
    tier: 'Standard'
    capacity: 1
  }
}

resource eventHub 'Microsoft.EventHub/namespaces/eventhubs@2021-11-01' = if (shortRegion == 'wus2') {
  name: 'city-temprature'
  parent: eventHubNamespace
  properties: {
    messageRetentionInDays: 1
    partitionCount: 2
  }
}
From the parent Bicep file I run the module as:
module weatherWest 'resources.bicep' = {
  name: 'westResources'
  scope: resourceGroup('${name}-wus2')
  params: {
    name: name
    shortRegion: 'wus2'
    location: 'westus2'
  }
}

module weatherEast 'resources.bicep' = {
  name: 'eastResources'
  scope: resourceGroup('${name}-eus2')
  params: {
    name: name
    shortRegion: 'eus2'
    location: 'eastus2'
  }
}
How do I set up the GeoPairing?
I have not found a way to call Microsoft.EventHub/namespaces/disasterRecoveryConfigs@2021-11-01 from the parent Bicep file.
Code is located in this branch
https://github.com/xavierjohn/SearchIndexDisasterRecoverNearRealTime/blob/bicep/bicep/weatherResources.bicep
Per the docs, you need to specify a parent resource. You can look up an existing resource with the existing keyword. Something along these lines should work:
resource primary 'Microsoft.EventHub/namespaces@2021-11-01' existing = {
  name: 'primaryEventHubName'
}

resource secondary 'Microsoft.EventHub/namespaces@2021-11-01' existing = {
  name: 'secondaryEventHubName'
}

resource symbolicname 'Microsoft.EventHub/namespaces/disasterRecoveryConfigs@2021-11-01' = {
  name: 'foo'
  parent: primary
  properties: {
    alternateName: 'string'
    partnerNamespace: secondary.id
  }
}
I got help from the Azure Bicep team: currently there is no way to pass a resource as a module output, but they are working on a proposal. For now there is a trick that works until the elegant solution comes out: use existing and set dependsOn on the GeoPairing fragment.
The end code looks like below.
module allResources 'resources.bicep' = [for location in locations: {
  name: 'allResources'
  scope: resourceGroup('${name}-${location.shortRegion}')
  params: {
    name: name
    shortRegion: location.shortRegion
    location: location.region
  }
}]

resource primaryEventHubNamespace 'Microsoft.EventHub/namespaces@2021-11-01' existing = {
  name: '${name}wus2'
}

resource disasterRecoveryConfigs 'Microsoft.EventHub/namespaces/disasterRecoveryConfigs@2021-11-01' = {
  name: name
  parent: primaryEventHubNamespace
  properties: {
    partnerNamespace: resourceId('${name}-eus2', 'Microsoft.EventHub/namespaces', '${name}eus2')
  }
  dependsOn: [
    allResources
  ]
}
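The loop above assumes a locations parameter defined in the parent file; a hypothetical shape matching the two regions used earlier would be:

param locations array = [
  { shortRegion: 'wus2', region: 'westus2' }
  { shortRegion: 'eus2', region: 'eastus2' }
]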

Cloud Functions / Cloud Tasks UNAUTHENTICATED error

I am trying to get a Cloud Function to create a Cloud Task that will invoke a Cloud Function. Easy.
The flow and use case are very close to the official tutorial here.
I also looked at this article by Doug Stevenson and in particular its security section.
No luck, I am consistently getting a 16 (UNAUTHENTICATED) error in Cloud Tasks.
If I can trust what I see in the console, it seems that Cloud Tasks is not attaching the OIDC token to the request:
Yet, in my code I do have the oidcToken object:
const { v2beta3, protos } = require("@google-cloud/tasks");

import {
  PROJECT_ID,
  EMAIL_QUEUE,
  LOCATION,
  EMAIL_SERVICE_ACCOUNT,
  EMAIL_HANDLER,
} from "./../config/cloudFunctions";
export const createHttpTaskWithToken = async function (
  payload: {
    to_email: string;
    templateId: string;
    uid: string;
    dynamicData?: Record<string, any>;
  },
  {
    project = PROJECT_ID,
    queue = EMAIL_QUEUE,
    location = LOCATION,
    url = EMAIL_HANDLER,
    email = EMAIL_SERVICE_ACCOUNT,
  } = {}
) {
  const client = new v2beta3.CloudTasksClient();
  const parent = client.queuePath(project, location, queue);

  // Convert message to buffer.
  const convertedPayload = JSON.stringify(payload);
  const body = Buffer.from(convertedPayload).toString("base64");

  const task = {
    httpRequest: {
      httpMethod: protos.google.cloud.tasks.v2.HttpMethod.POST,
      url,
      oidcToken: {
        serviceAccountEmail: email,
        audience: new URL(url).origin,
      },
      headers: {
        "Content-Type": "application/json",
      },
      body,
    },
  };

  try {
    // Send create task request.
    const request = { parent: parent, task: task };
    const [response] = await client.createTask(request);
    console.log(`Created task ${response.name}`);
    return response.name;
  } catch (error) {
    if (error instanceof Error) console.error(Error(error.message));
    return;
  }
};
When logging the task object from the code above in Cloud Logging I can see that the service account is the one that I created for the purpose of this and that the Cloud Tasks are successfully created.
IAM:
And the function that the Cloud Task needs to invoke:
Everything seems to be there, in theory.
Any advice as to what I would be missing?
Thanks,
Your audience is incorrect. It must end with the function name. Here, you only have the region and the project (https://<region>-<projectID>.cloudfunctions.net/). Use the full Cloud Functions URL.
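In terms of the code above, that means setting the audience to the url itself rather than its origin; a minimal sketch of the change:

oidcToken: {
  serviceAccountEmail: email,
  // The audience must be the full function URL,
  // e.g. https://<region>-<projectID>.cloudfunctions.net/<functionName>
  audience: url,
},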

AWS CDK - How to add an event notification to an existing S3 Bucket

I'm trying to modify this AWS-provided CDK example to instead use an existing bucket. Additional documentation indicates that importing existing resources is supported. So far I am unable to add an event notification to the existing bucket using CDK.
Here is my modified version of the example:
class S3TriggerStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # create lambda function
        function = _lambda.Function(self, "lambda_function",
                                    runtime=_lambda.Runtime.PYTHON_3_7,
                                    handler="lambda-handler.main",
                                    code=_lambda.Code.asset("./lambda"))

        # **MODIFIED TO GET EXISTING BUCKET**
        # s3 = _s3.Bucket(self, "s3bucket")
        s3 = _s3.Bucket.from_bucket_arn(self, 's3_bucket',
                                        bucket_arn='arn:<my_region>:::<my_bucket>')

        # create s3 notification for lambda function
        notification = aws_s3_notifications.LambdaDestination(function)

        # assign notification for the s3 event type (ex: OBJECT_CREATED)
        s3.add_event_notification(_s3.EventType.OBJECT_CREATED, notification)
This results in the following error when trying to add_event_notification:
AttributeError: '_IBucketProxy' object has no attribute 'add_event_notification'
The from_bucket_arn function returns an IBucket, and the add_event_notification function is a method of the Bucket class, but I can't seem to find any other way to do this. Maybe it's not supported. Any help would be appreciated.
I managed to get this working with a custom resource. It's TypeScript, but it should be easily translated to Python:
const uploadBucket = s3.Bucket.fromBucketName(this, 'BucketByName', 'existing-bucket');

const fn = new lambda.Function(this, 'MyFunction', {
  runtime: lambda.Runtime.NODEJS_10_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset(path.join(__dirname, 'lambda-handler'))
});

const rsrc = new AwsCustomResource(this, 'S3NotificationResource', {
  onCreate: {
    service: 'S3',
    action: 'putBucketNotificationConfiguration',
    parameters: {
      // This bucket must be in the same region you are deploying to
      Bucket: uploadBucket.bucketName,
      NotificationConfiguration: {
        LambdaFunctionConfigurations: [
          {
            Events: ['s3:ObjectCreated:*'],
            LambdaFunctionArn: fn.functionArn,
            Filter: {
              Key: {
                FilterRules: [{ Name: 'suffix', Value: 'csv' }]
              }
            }
          }
        ]
      }
    },
    // Always update physical ID so function gets executed
    physicalResourceId: 'S3NotifCustomResource' + Date.now().toString()
  }
});

fn.addPermission('AllowS3Invocation', {
  action: 'lambda:InvokeFunction',
  principal: new iam.ServicePrincipal('s3.amazonaws.com'),
  sourceArn: uploadBucket.bucketArn
});

rsrc.node.addDependency(fn.permissionsNode.findChild('AllowS3Invocation'));
This is basically a CDK version of the CloudFormation template laid out in this example. See the docs on the AWS SDK for the possible NotificationConfiguration parameters.
Since June 2021 there is a nicer way to solve this problem. As of approximately version 1.110.0 of the CDK, it is possible to use S3 notifications with TypeScript code:
Example:
const s3Bucket = s3.Bucket.fromBucketName(this, 'bucketId', 'bucketName');

s3Bucket.addEventNotification(s3.EventType.OBJECT_CREATED, new s3n.LambdaDestination(lambdaFunction), {
  prefix: 'example/file.txt'
});
CDK Documentation:
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-s3-notifications-readme.html
Pull Request:
https://github.com/aws/aws-cdk/pull/15158
Sorry I can't comment on James Irwin's excellent answer above due to low reputation, but I took it and made it into a Construct.
The comment about "Access Denied" took me some time to figure out too, but the crux of it is that the SDK function is S3:putBucketNotificationConfiguration, while the IAM policy action to allow is S3:PutBucketNotification.
Here's the code for the construct: https://gist.github.com/archisgore/0f098ae1d7d19fddc13d2f5a68f606ab
import * as cr from '@aws-cdk/custom-resources';
import * as logs from '@aws-cdk/aws-logs';
import * as s3 from '@aws-cdk/aws-s3';
import * as sqs from '@aws-cdk/aws-sqs';
import * as iam from '@aws-cdk/aws-iam';
import { Construct } from '@aws-cdk/core';

// You can drop this construct anywhere, and in your stack, invoke it like this:
// const s3ToSQSNotification = new S3NotificationToSQSCustomResource(this, 's3ToSQSNotification', existingBucket, queue);
export class S3NotificationToSQSCustomResource extends Construct {
  constructor(scope: Construct, id: string, bucket: s3.IBucket, queue: sqs.Queue) {
    super(scope, id);

    // https://stackoverflow.com/questions/58087772/aws-cdk-how-to-add-an-event-notification-to-an-existing-s3-bucket
    const notificationResource = new cr.AwsCustomResource(scope, id + "CustomResource", {
      onCreate: {
        service: 'S3',
        action: 'putBucketNotificationConfiguration',
        parameters: {
          // This bucket must be in the same region you are deploying to
          Bucket: bucket.bucketName,
          NotificationConfiguration: {
            QueueConfigurations: [
              {
                Events: ['s3:ObjectCreated:*'],
                QueueArn: queue.queueArn,
              }
            ]
          }
        },
        physicalResourceId: <cr.PhysicalResourceId>(id + Date.now().toString()),
      },
      onDelete: {
        service: 'S3',
        action: 'putBucketNotificationConfiguration',
        parameters: {
          // This bucket must be in the same region you are deploying to
          Bucket: bucket.bucketName,
          // deleting a notification configuration involves setting it to empty.
          NotificationConfiguration: {
          }
        },
        physicalResourceId: <cr.PhysicalResourceId>(id + Date.now().toString()),
      },
      policy: cr.AwsCustomResourcePolicy.fromStatements([new iam.PolicyStatement({
        // The actual function is PutBucketNotificationConfiguration.
        // The "Action" for IAM policies is PutBucketNotification.
        // https://docs.aws.amazon.com/AmazonS3/latest/dev/list_amazons3.html#amazons3-actions-as-permissions
        actions: ["S3:PutBucketNotification"],
        // allow this custom resource to modify this bucket
        resources: [bucket.bucketArn],
      })]),
      logRetention: logs.RetentionDays.ONE_DAY,
    });

    // allow S3 to send notifications to our queue
    // https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#grant-destinations-permissions-to-s3
    queue.addToResourcePolicy(new iam.PolicyStatement({
      principals: [new iam.ServicePrincipal("s3.amazonaws.com")],
      actions: ["SQS:SendMessage"],
      resources: [queue.queueArn],
      conditions: {
        ArnEquals: { "aws:SourceArn": bucket.bucketArn }
      }
    }));

    // don't create the notification custom-resource until after both the bucket and queue
    // are fully created and policies applied.
    notificationResource.node.addDependency(bucket);
    notificationResource.node.addDependency(queue);
  }
}
UPDATED: The source code from the original answer will overwrite the existing notification list for the bucket, which makes it impossible to add new lambda triggers. Here's a solution that uses event sources to handle the mentioned problem.
from aws_cdk import (
    core,
    aws_iam,
    aws_s3 as s3,
    aws_lambda as lambda_,
    aws_lambda_event_sources as event_src,
)
import os.path as path
class S3LambdaTrigger(core.Stack):
    def __init__(self, scope: core.Construct, id: str):
        super().__init__(scope, id)

        bucket = s3.Bucket(
            self, "S3Bucket",
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            bucket_name='BucketName',
            encryption=s3.BucketEncryption.S3_MANAGED,
            versioned=True
        )

        fn = lambda_.Function(
            self, "LambdaFunction",
            runtime=lambda_.Runtime.NODEJS_10_X,
            handler="index.handler",
            code=lambda_.Code.from_asset(path.join(path.dirname(__file__), "lambda-handler"))
        )

        fn.add_permission(
            's3-service-principal',
            principal=aws_iam.ServicePrincipal('s3.amazonaws.com')
        )

        fn.add_event_source(
            event_src.S3EventSource(
                bucket,
                events=[s3.EventType.OBJECT_CREATED, s3.EventType.OBJECT_REMOVED],
                filters=[s3.NotificationKeyFilter(prefix="subdir/", suffix=".txt")]
            )
        )
ORIGINAL:
I took ubi's solution in TypeScript and successfully translated it to Python. His solution worked for me.
#!/usr/bin/env python
from typing import List

from aws_cdk import (
    core,
    custom_resources as cr,
    aws_lambda as lambda_,
    aws_s3 as s3,
    aws_iam as iam,
)


class S3NotificationLambdaProps:
    def __init__(self, bucket: s3.Bucket, function: lambda_.Function, events: List[str], prefix: str):
        self.bucket = bucket
        self.function = function
        self.events = events
        self.prefix = prefix


class S3NotificationLambda(core.Construct):
    def __init__(self, scope: core.Construct, id: str, props: S3NotificationLambdaProps):
        super().__init__(scope, id)

        self.notificationResource = cr.AwsCustomResource(
            self, f'CustomResource{id}',
            on_create=cr.AwsSdkCall(
                service="S3",
                action="putBucketNotificationConfiguration",
                # Always update physical ID so function gets executed
                physical_resource_id=cr.PhysicalResourceId.of(f'S3NotifCustomResource{id}'),
                parameters={
                    "Bucket": props.bucket.bucket_name,
                    "NotificationConfiguration": {
                        "LambdaFunctionConfigurations": [{
                            "Events": props.events,
                            "LambdaFunctionArn": props.function.function_arn,
                            "Filter": {
                                "Key": {"FilterRules": [{"Name": "prefix", "Value": props.prefix}]}
                            }
                        }]
                    }
                }
            ),
            on_delete=cr.AwsSdkCall(
                service="S3",
                action="putBucketNotificationConfiguration",
                # Always update physical ID so function gets executed
                physical_resource_id=cr.PhysicalResourceId.of(f'S3NotifCustomResource{id}'),
                parameters={
                    "Bucket": props.bucket.bucket_name,
                    "NotificationConfiguration": {},
                }
            ),
            policy=cr.AwsCustomResourcePolicy.from_statements(
                statements=[
                    iam.PolicyStatement(
                        actions=["S3:PutBucketNotification", "S3:GetBucketNotification"],
                        resources=[props.bucket.bucket_arn]
                    ),
                ]
            )
        )

        props.function.add_permission(
            "AllowS3Invocation",
            action="lambda:InvokeFunction",
            principal=iam.ServicePrincipal("s3.amazonaws.com"),
            source_arn=props.bucket.bucket_arn,
        )

        # don't create the notification custom-resource until after both the bucket and lambda
        # are fully created and policies applied.
        self.notificationResource.node.add_dependency(props.bucket)
        self.notificationResource.node.add_dependency(props.function)


# Usage:
s3NotificationLambdaProps = S3NotificationLambdaProps(
    bucket=bucket_,
    function=lambda_fn_,
    events=['s3:ObjectCreated:*'],
    prefix='foo/'
)

s3NotificationLambda = S3NotificationLambda(
    self, "S3NotifLambda",
    s3NotificationLambdaProps
)
Here is a Python solution for adding / replacing a lambda trigger to an existing bucket, including the filter. @James Irwin, your example was very helpful.
Thanks to @JørgenFrøland for pointing out that the custom resource config will replace any existing notification triggers, per the boto3 documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.BucketNotification.put
One note on the access-denied issue: with the putBucketNotificationConfiguration action, the generated policy contains an s3:PutBucketNotificationConfiguration action, but that IAM action doesn't exist (https://github.com/aws/aws-cdk/issues/3318#issuecomment-584737465).
The same issue happens if you set the policy using AwsCustomResourcePolicy.fromSdkCalls.
I've added a custom policy that might need to be restricted further.
s3_bucket = s3.Bucket.from_bucket_name(
    self, 's3-bucket-by-name', 'existing-bucket-name')

trigger_lambda = _lambda.Function(
    self,
    '{id}-s3-trigger-lambda',
    environment=lambda_env,
    code=_lambda.Code.from_asset('./ladle-sink/'),
    runtime=_lambda.Runtime.PYTHON_3_7,
    handler='lambda_function.lambda_handler',
    memory_size=512,
    timeout=core.Duration.minutes(3))

trigger_lambda.add_permission(
    's3-trigger-lambda-s3-invoke-function',
    principal=iam.ServicePrincipal('s3.amazonaws.com'),
    action='lambda:InvokeFunction',
    source_arn=base_resources.incoming_documents_bucket.bucket_arn)

custom_s3_resource = _custom_resources.AwsCustomResource(
    self,
    's3-incoming-documents-notification-resource',
    policy=_custom_resources.AwsCustomResourcePolicy.from_statements([
        iam.PolicyStatement(
            effect=iam.Effect.ALLOW,
            resources=['*'],
            actions=['s3:PutBucketNotification']
        )
    ]),
    on_create=_custom_resources.AwsSdkCall(
        service="S3",
        action="putBucketNotificationConfiguration",
        parameters={
            "Bucket": s3_bucket.bucket_name,
            "NotificationConfiguration": {
                "LambdaFunctionConfigurations": [
                    {
                        "Events": ['s3:ObjectCreated:*'],
                        "LambdaFunctionArn": trigger_lambda.function_arn,
                        "Filter": {
                            "Key": {
                                "FilterRules": [
                                    {'Name': 'suffix', 'Value': 'html'}]
                            }
                        }
                    }
                ]
            }
        },
        physical_resource_id=_custom_resources.PhysicalResourceId.of(
            f's3-notification-resource-{str(uuid.uuid1())}'),
        region=env.region
    ))

custom_s3_resource.node.add_dependency(
    trigger_lambda.permissions_node.find_child(
        's3-trigger-lambda-s3-invoke-function'))
Thanks to the great answers above, see below for a construct for s3 -> lambda notification. It can be used like:
const fn = new SingletonFunction(this, "Function", {
  ...
});

const bucket = Bucket.fromBucketName(this, "Bucket", "...");

const s3notification = new S3NotificationLambda(this, "S3Notification", {
  bucket: bucket,
  lambda: fn,
  events: ['s3:ObjectCreated:*'],
  prefix: "some_prefix/"
})
Construct (drop-in to your project as a .ts file)
import * as cr from "#aws-cdk/custom-resources";
import * as logs from "#aws-cdk/aws-logs";
import * as s3 from "#aws-cdk/aws-s3";
import * as sqs from "#aws-cdk/aws-sqs";
import * as iam from "#aws-cdk/aws-iam";
import { Construct } from "#aws-cdk/core";
import * as lambda from "#aws-cdk/aws-lambda";
export interface S3NotificationLambdaProps {
  bucket: s3.IBucket;
  lambda: lambda.IFunction;
  events: string[];
  prefix: string;
}

export class S3NotificationLambda extends Construct {
  constructor(scope: Construct, id: string, props: S3NotificationLambdaProps) {
    super(scope, id);

    const notificationResource = new cr.AwsCustomResource(
      scope,
      id + "CustomResource",
      {
        onCreate: {
          service: "S3",
          action: "putBucketNotificationConfiguration",
          parameters: {
            // This bucket must be in the same region you are deploying to
            Bucket: props.bucket.bucketName,
            NotificationConfiguration: {
              LambdaFunctionConfigurations: [
                {
                  Events: props.events,
                  LambdaFunctionArn: props.lambda.functionArn,
                  Filter: {
                    Key: {
                      FilterRules: [{ Name: "prefix", Value: props.prefix }],
                    },
                  },
                },
              ],
            },
          },
          physicalResourceId: <cr.PhysicalResourceId>(
            (id + Date.now().toString())
          ),
        },
        onDelete: {
          service: "S3",
          action: "putBucketNotificationConfiguration",
          parameters: {
            // This bucket must be in the same region you are deploying to
            Bucket: props.bucket.bucketName,
            // deleting a notification configuration involves setting it to empty.
            NotificationConfiguration: {},
          },
          physicalResourceId: <cr.PhysicalResourceId>(
            (id + Date.now().toString())
          ),
        },
        policy: cr.AwsCustomResourcePolicy.fromStatements([
          new iam.PolicyStatement({
            // The actual function is PutBucketNotificationConfiguration.
            // The "Action" for IAM policies is PutBucketNotification.
            // https://docs.aws.amazon.com/AmazonS3/latest/dev/list_amazons3.html#amazons3-actions-as-permissions
            actions: ["S3:PutBucketNotification", "S3:GetBucketNotification"],
            // allow this custom resource to modify this bucket
            resources: [props.bucket.bucketArn],
          }),
        ]),
      }
    );

    props.lambda.addPermission("AllowS3Invocation", {
      action: "lambda:InvokeFunction",
      principal: new iam.ServicePrincipal("s3.amazonaws.com"),
      sourceArn: props.bucket.bucketArn,
    });

    // don't create the notification custom-resource until after both the bucket and queue
    // are fully created and policies applied.
    notificationResource.node.addDependency(props.bucket);
    notificationResource.node.addDependency(props.lambda);
  }
}
Based on the answer from @ubi, in case you don't need the SingletonFunction but a plain Function, plus some cleanup.
Call it like this:
const s3NotificationLambdaProps = <S3NotificationLambdaProps>{
  bucket: bucket,
  lambda: lambda,
  events: ['s3:ObjectCreated:*'],
  prefix: '', // or put some prefix
};

const s3NotificationLambda = new S3NotificationLambda(this, `${envNameUpperCase}S3ToLambdaNotification`, s3NotificationLambdaProps);
and the construct will be like this:
import * as cr from "@aws-cdk/custom-resources";
import * as s3 from "@aws-cdk/aws-s3";
import * as iam from "@aws-cdk/aws-iam";
import { Construct } from "@aws-cdk/core";
import * as lambda from "@aws-cdk/aws-lambda";
export interface S3NotificationLambdaProps {
  bucket: s3.IBucket;
  lambda: lambda.Function;
  events: string[];
  prefix: string;
}

export class S3NotificationLambda extends Construct {
  constructor(scope: Construct, id: string, props: S3NotificationLambdaProps) {
    super(scope, id);

    const notificationResource = new cr.AwsCustomResource(
      scope,
      id + "CustomResource", {
        onCreate: {
          service: "S3",
          action: "putBucketNotificationConfiguration",
          parameters: {
            // This bucket must be in the same region you are deploying to
            Bucket: props.bucket.bucketName,
            NotificationConfiguration: {
              LambdaFunctionConfigurations: [{
                Events: props.events,
                LambdaFunctionArn: props.lambda.functionArn,
                Filter: {
                  Key: {
                    FilterRules: [{
                      Name: "prefix",
                      Value: props.prefix
                    }],
                  },
                },
              }],
            },
          },
          physicalResourceId: <cr.PhysicalResourceId>(
            (id + Date.now().toString())
          ),
        },
        onDelete: {
          service: "S3",
          action: "putBucketNotificationConfiguration",
          parameters: {
            // This bucket must be in the same region you are deploying to
            Bucket: props.bucket.bucketName,
            // deleting a notification configuration involves setting it to empty.
            NotificationConfiguration: {},
          },
          physicalResourceId: <cr.PhysicalResourceId>(
            (id + Date.now().toString())
          ),
        },
        policy: cr.AwsCustomResourcePolicy.fromStatements([
          new iam.PolicyStatement({
            // The actual function is PutBucketNotificationConfiguration.
            // The "Action" for IAM policies is PutBucketNotification.
            // https://docs.aws.amazon.com/AmazonS3/latest/dev/list_amazons3.html#amazons3-actions-as-permissions
            actions: ["S3:PutBucketNotification", "S3:GetBucketNotification"],
            // allow this custom resource to modify this bucket
            resources: [props.bucket.bucketArn],
          }),
        ]),
      }
    );

    props.lambda.addPermission("AllowS3Invocation", {
      action: "lambda:InvokeFunction",
      principal: new iam.ServicePrincipal("s3.amazonaws.com"),
      sourceArn: props.bucket.bucketArn,
    });

    // don't create the notification custom-resource until after both the bucket and lambda
    // are fully created and policies applied.
    notificationResource.node.addDependency(props.bucket);
    notificationResource.node.addDependency(props.lambda);
  }
}
With the newer functionality, in Python this can now be done as:
bucket = aws_s3.Bucket.from_bucket_name(
    self, "bucket", "bucket-name"
)

bucket.add_event_notification(
    aws_s3.EventType.OBJECT_CREATED,
    aws_s3_notifications.LambdaDestination(your_lambda),
    aws_s3.NotificationKeyFilter(
        prefix="prefix/path/",
    ),
)
At the time of writing, the AWS documentation seems to have the prefix arguments incorrect in its examples, so this was moderately confusing to figure out.
Thanks to @Kilian Pfeifer for starting me down the right path with the TypeScript example.
I used CloudTrail to resolve the issue; the code looks like the below and it's more abstract:
const trail = new cloudtrail.Trail(this, 'MyAmazingCloudTrail');

const options: AddEventSelectorOptions = {
  readWriteType: cloudtrail.ReadWriteType.WRITE_ONLY
};

// Adds an event selector to the bucket
trail.addS3EventSelector([{
  bucket: bucket, // 'bucket' is of type s3.IBucket
}], options);

bucket.onCloudTrailWriteObject('MyAmazingCloudTrail', {
  target: new targets.LambdaFunction(functionReference)
});
This is a CDK solution: get hold of the existing bucket using fromBucketAttributes, then call addEventNotification on the bucket to trigger your lambda.
declare const myLambda: lambda.Function;

const bucket = s3.Bucket.fromBucketAttributes(this, 'ImportedBucket', {
  bucketArn: 'arn:aws:s3:::my-bucket',
});

// now you can just call methods on the bucket
bucket.addEventNotification(s3.EventType.OBJECT_CREATED, new s3n.LambdaDestination(myLambda), { prefix: 'home/myusername/*' });
More details can be found here
AWS now supports S3 EventBridge events, which allow adding a source S3 bucket by name. So this worked for me. Note that you need to enable EventBridge events manually for the triggering S3 bucket.
new Rule(this, 's3rule', {
  eventPattern: {
    source: ['aws.s3'],
    detail: {
      'bucket': { 'name': ['existing-bucket'] },
      'object': { 'key': [{ 'prefix': 'prefix' }] }
    },
    detailType: ['Object Created']
  },
  targets: [new targets.LambdaFunction(MyFunction)]
});
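As a footnote to the manual-enablement caveat: if the triggering bucket is itself defined in your CDK app (recent CDK versions), the bucket props appear to expose an eventBridgeEnabled flag that turns this on at deploy time; a sketch, assuming the usual s3 module import:

// Sketch: enable EventBridge notifications on a CDK-managed bucket,
// so the Rule above can match its Object Created events.
const triggeringBucket = new s3.Bucket(this, 'TriggeringBucket', {
  eventBridgeEnabled: true,
});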