AWS - Moving data from one S3 bucket to another with CloudFormation

I'm trying to create a stack with CloudFormation. The stack needs to take some data files from a central S3 bucket and copy them to its own "local" bucket.
I've written a lambda function to do this, and it works when I run it in the Lambda console with a test event (the test event uses the real central repository and successfully copies the file to a specified repo).
My current CloudFormation script does the following things:
1. Creates the "local" S3 bucket
2. Creates a role that the Lambda function can use to access the buckets
3. Defines the Lambda function to move the specified file to the "local" bucket
4. Defines some Custom Resources to invoke the Lambda function.
It's at step 4 where it starts to go wrong - the CloudFormation execution seems to freeze here (CREATE_IN_PROGRESS). Also, when I try to delete the stack, it just gets stuck on DELETE_IN_PROGRESS instead.
Here's how I'm invoking the Lambda function in the CloudFormation script:
"DataSync": {
"Type": "Custom::S3DataSync",
"Properties": {
"ServiceToken": { "Fn::GetAtt" : [ "S3DataSync", "Arn" ] },
"InputFile": "data/provided-as-ip-v6.json",
"OutputFile": "data/data.json"
}
},
"KeySync1": {
"Type": "Custom::S3DataSync",
"Properties": {
"ServiceToken": { "Fn::GetAtt" : [ "S3DataSync", "Arn" ] },
"InputFile": "keys/1/public_key.pem"
}
},
"KeySync2": {
"Type": "Custom::S3DataSync",
"Properties": {
"ServiceToken": { "Fn::GetAtt" : [ "S3DataSync", "Arn" ] },
"InputFile": "keys/2/public_key.pem"
}
}
And the Lambda function itself:
exports.handler = function(event, context) {
    var buckets = {};
    buckets.in = {
        "Bucket": "central-data-repository",
        "Key": "sandbox" + "/" + event.ResourceProperties.InputFile
    };
    buckets.out = {
        "Bucket": "sandbox-data",
        "Key": event.ResourceProperties.OutputFile || event.ResourceProperties.InputFile
    };

    var AWS = require('aws-sdk');
    var S3 = new AWS.S3();

    S3.getObject(buckets.in, function(err, data) {
        if (err) {
            console.log("Couldn't get file " + buckets.in.Key);
            context.fail("Error getting file: " + err);
        }
        else {
            buckets.out.Body = data.Body;
            S3.putObject(buckets.out, function(err, data) {
                if (err) {
                    console.log("Couldn't write to S3 bucket " + buckets.out.Bucket);
                    context.fail("Error writing file: " + err);
                }
                else {
                    console.log("Successfully copied " + buckets.in.Key + " to " + buckets.out.Bucket + " at " + buckets.out.Key);
                    context.succeed();
                }
            });
        }
    });
}

Your Custom Resource function needs to send signals back to CloudFormation to indicate completion, status, and any returned values. You will see CREATE_IN_PROGRESS as the status in CloudFormation until you notify it that your function is complete.
The generic way of signaling CloudFormation is to post a response to a pre-signed S3 URL. But there is a cfn-response module to make this easier in Lambda functions. Interestingly, the two examples provided for Lambda-backed Custom Resources use different methods:
Walkthrough: Refer to Resources in Another Stack - uses the cfn-response module
Walkthrough: Looking Up Amazon Machine Image IDs - uses pre-signed URLs.
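For the function in the question, here's a minimal sketch of the cfn-response route (assuming the code is defined inline through the template's ZipFile property, which is what makes the cfn-response module available automatically; for code uploaded as a package you would bundle the module yourself):
var response = require('cfn-response');
var AWS = require('aws-sdk');

exports.handler = function(event, context) {
    // Custom resources also receive Delete events; signalling SUCCESS here
    // is what lets the stack delete instead of hanging in DELETE_IN_PROGRESS.
    if (event.RequestType === 'Delete') {
        response.send(event, context, response.SUCCESS);
        return;
    }
    var S3 = new AWS.S3();
    S3.getObject({
        Bucket: 'central-data-repository',
        Key: 'sandbox/' + event.ResourceProperties.InputFile
    }, function(err, data) {
        if (err) return response.send(event, context, response.FAILED);
        S3.putObject({
            Bucket: 'sandbox-data',
            Key: event.ResourceProperties.OutputFile || event.ResourceProperties.InputFile,
            Body: data.Body
        }, function(err) {
            // Every code path must signal, or CloudFormation waits until it times out.
            response.send(event, context, err ? response.FAILED : response.SUCCESS);
        });
    });
};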

Yup, I did the same thing. We need to upload (PUT request) the status of our request back to CloudFormation, i.e. send the status as SUCCESS.
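For reference, a sketch of that PUT in Node.js: you send a JSON status document to the pre-signed S3 URL that CloudFormation passes in event.ResponseURL (field names follow the documented custom resource response format):
var https = require('https');
var url = require('url');

function sendResponse(event, context, status, data) {
    var body = JSON.stringify({
        Status: status, // "SUCCESS" or "FAILED"
        Reason: 'See CloudWatch log stream: ' + context.logStreamName,
        PhysicalResourceId: context.logStreamName,
        StackId: event.StackId,
        RequestId: event.RequestId,
        LogicalResourceId: event.LogicalResourceId,
        Data: data || {}
    });
    var parsed = url.parse(event.ResponseURL);
    var request = https.request({
        hostname: parsed.hostname,
        path: parsed.path,
        method: 'PUT',
        headers: { 'Content-Type': '', 'Content-Length': Buffer.byteLength(body) }
    }, function() { context.done(); });
    request.on('error', function(err) { context.fail(err); });
    request.write(body);
    request.end();
}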

Related

AWS CDK - update existing lambda environment variables

I would love to be able to update an existing Lambda function via AWS CDK. I need to update the environment variable configuration. From what I can see this is not possible; is there something workable to make this happen?
I am using code like this to import the lambda:
const importedLambdaFromArn = lambda.Function.fromFunctionAttributes(
    this,
    'external-lambda-from-arn',
    {
        functionArn: 'my-arn',
        role: importedRole,
    }
);
For now, I have to manually alter a cloudformation template. Updating directly in cdk would be much nicer.
Yes, it is possible, although you should read @Allan_Chua's answer below before actually doing it. Lambda's UpdateFunctionConfiguration API can modify a deployed function's environment variables. The CDK's AwsCustomResource construct lets us call that API during stack deployment.*
Let's say you want to set TABLE_NAME on a previously deployed lambda to the value of a DynamoDB table's name:
// MyStack.ts
const existingFunc = lambda.Function.fromFunctionArn(this, 'ImportedFunction', arn);
const table = new dynamo.Table(this, 'DemoTable', {
partitionKey: { name: 'id', type: dynamo.AttributeType.STRING },
});
new cr.AwsCustomResource(this, 'UpdateEnvVar', {
onCreate: {
service: 'Lambda',
action: 'updateFunctionConfiguration',
parameters: {
FunctionName: existingFunc.functionArn,
Environment: {
Variables: {
TABLE_NAME: table.tableName,
},
},
},
physicalResourceId: cr.PhysicalResourceId.of('DemoTable'),
},
policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
resources: [existingFunc.functionArn],
}),
});
Under the hood, the custom resource creates a lambda that makes the UpdateFunctionConfiguration call using the JS SDK when the stack is created. There are also onUpdate and onDelete cases to handle.
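For instance, here's a sketch reusing the names from the snippet above: handling onUpdate with the same SDK call keeps the variable in sync on later deployments (onDelete could similarly clear or restore it, depending on what you want):
const call = {
    service: 'Lambda',
    action: 'updateFunctionConfiguration',
    parameters: {
        FunctionName: existingFunc.functionArn,
        Environment: { Variables: { TABLE_NAME: table.tableName } },
    },
    // A fixed physical ID tells CloudFormation this is an update in place,
    // never a replacement.
    physicalResourceId: cr.PhysicalResourceId.of('DemoTable'),
};

new cr.AwsCustomResource(this, 'UpdateEnvVar', {
    onCreate: call,
    onUpdate: call, // re-applied on every stack update
    policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
        resources: [existingFunc.functionArn],
    }),
});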
* Again, whether this is a good idea or not depends on the use case. You could always call UpdateFunctionConfiguration without the CDK.
The main purpose of CDK is to enable AWS customers to automatically provision resources. If we're attempting to update settings of pre-existing resources that are managed by other CloudFormation stacks, it is better to update the variable on its parent CloudFormation template instead of from the CDK (see the sketch after this list). This provides the following advantages on your side:
There's a single source of truth for what the variable should look like
There's no tug of war between the CDK and the CloudFormation template whenever an update is pushed from either source.
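For instance (a sketch; the resource names are placeholders), in the owning stack's template the variable simply lives on the function resource itself, so there is exactly one place to change it:
"MyFunction": {
    "Type": "AWS::Lambda::Function",
    "Properties": {
        "Environment": {
            "Variables": {
                "TABLE_NAME": { "Ref": "DemoTable" }
            }
        }
    }
}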
Otherwise, since this is a compute layer, just get rid of the lambda function from CloudFormation and go all in on the CDK!
Hope this advice helps
If you are using AWS Amplify, the accepted answer will not work. Instead, you can export a CloudFormation Output from your custom resource stack and then reference that output via an input parameter in the other stack.
With CDK
new CfnOutput(this, 'MyOutput', { value: 'MyValue' });
With CloudFormation Template
"Outputs": {
"MyOutput": {
"Value": "MyValue"
}
}
Add an input parameter to the cloudformation-template.json of the resource you want to reference your output value in:
"Parameters": {
"myInput": {
"Type": "String",
"Description": "A custom input"
},
}
Create a parameters.json file that passes the output to the input parameter:
{
    "myInput": {
        "Fn::GetAtt": ["customResource", "Outputs.MyOutput"]
    }
}
Finally, reference that input in your stack:
"Resources": {
"LambdaFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Environment": {
"Variables": {
"myEnvVar": {
"Ref": "myInput"
},
}
},
}
}
}

Execute Lambda Function on CloudFormation Stack delete

Is there a way to trigger a Lambda function declared in the very same CFN template that was used to create a stack when the given stack is being deleted?
Preferably I'd like to implement somewhat the opposite of THIS snippet (that is: a relatively simple solution that omits e.g. the need to create SNS topics etc.).
Thanks in advance for any advice. Best regards! //M
You can achieve the desired behaviour by creating a CustomResource and hooking it up with your lambda function.
From the documentation page:
With AWS Lambda functions and custom resources, you can run custom code in response to stack events (create, update, and delete)
The lambda function needs to react to a Delete event, so it has to be written to allow for that.
Example lambda function source code
Here's an example of the Lambda function's source code, which reacts to a Delete event:
import cfnresponse

def lambda_handler(event, context):
    response = {}
    if event['RequestType'] == 'Delete':
        ...  # do stuff
        response['output'] = 'Delete event.'
    # Always send a response, whatever the RequestType, or the stack will hang.
    cfnresponse.send(event, context, cfnresponse.SUCCESS, response)
Example lambda function
Here's a CloudFormation snippet for the lambda function:
"MyDeletionLambda": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"ZipFile": {
"Fn::Join": [
"\n",
[
"import cfnresponse",
"",
"def lambda_handler(event, context):",
" response = {}",
" if event['RequestType'] == 'Delete':",
" response['output'] = 'Delete event.'",
" cfnresponse.send(event, context, cfnresponse.SUCCESS, response)"
]
]
}
},
"Handler": "index.lambda_handler",
"Runtime": "python3.8"
}
}
}
Example custom resource
Here's a CloudFormation snippet for a custom resource:
"OnStackDeletion": {
"Type": "Custom::LambdaDependency",
"Properties": {
"ServiceToken": {
"Fn::GetAtt": [
"MyDeletionLambda",
"Arn"
]
}
}
}

Passing event data from Amazon EventBridge into an AWS Fargate task

Objective
I'd like to pass event data from Amazon EventBridge directly into an AWS Fargate task. However, it doesn't seem like this is currently possible.
Workaround
As a work-around, I've inserted an extra resource between EventBridge and AWS Fargate. AWS Step Functions lets you specify ContainerOverrides, in which the Environment property can populate environment variables for the Fargate task from the EventBridge event.
Unfortunately, this workaround increases the solution complexity and cost unnecessarily.
Question: Is there a way to pass event data from EventBridge directly into an AWS Fargate (ECS) task, that I am simply unaware of?
To pass data from an EventBridge event to an ECS task (e.g. with launch type FARGATE), you can use an input transformer. For example, let's say we have an S3 bucket configured to send all event notifications to EventBridge, and we have an EventBridge rule that looks like this.
{
    "detail": {
        "bucket": {
            "name": ["mybucket"]
        }
    },
    "detail-type": ["Object Created"],
    "source": ["aws.s3"]
}
Now let's say we would like to pass the bucket name, object key, and object version ID to our ECS task running on Fargate. You can create an aws_cloudwatch_event_target resource in Terraform with the input transformer below.
resource "aws_cloudwatch_event_target" "EventBridgeECSTaskTarget"{
target_id = "EventBridgeECSTaskTarget"
rule = aws_cloudwatch_event_rule.myeventbridgerule.name
arn = "arn:aws:ecs:us-east-1:123456789012:cluster/myecscluster"
role_arn = aws_iam_role.EventBridgeRuleInvokeECSTask.arn
ecs_target {
task_count = 1
task_definition_arn = "arn:aws:ecs:us-east-1:123456789012:task-definition/mytaskdefinition"
launch_type = "FARGATE"
network_configuration {
subnets = ["subnet-1","subnet-2","subnet-3"]
security_groups = ["sg-group-id"]
}
}
input_transformer {
input_paths = {
bucketname = "$.detail.bucket.name",
objectkey = "$.detail.object.key",
objectversionid = "$.detail.object.version-id",
}
input_template = <<EOF
{
"containerOverrides": [
{
"name": "containername",
"environment" : [
{
"name" : "S3_BUCKET_NAME",
"value" : <bucketname>
},
{
"name" : "S3_OBJECT_KEY",
"value" : <objectkey>
},
{
"name" : "S3_OBJ_VERSION_ID",
"value": <objectversionid>
}
]
}
]
}
EOF
}
}
Once your ECS task is running, you can access these variables to check which bucket the object was created in, what the object key and version are, and do a GetObject.
For example, in Go we can do it as follows (snippets only, not including imports and such, but you get the idea):
filename := aws.String(os.Getenv("S3_OBJECT_KEY"))
bucketname := aws.String(os.Getenv("S3_BUCKET_NAME"))
versionId := aws.String(os.Getenv("S3_OBJ_VERSION_ID"))

// You can print and verify the values in CloudWatch

// Prepare the s3 GetObjectInput
s3goi := &s3.GetObjectInput{
    Bucket:    bucketname,
    Key:       filename,
    VersionId: versionId,
}

s3goo, err := s3svc.GetObject(ctx, s3goi)
if err != nil {
    log.Fatalf("Error retrieving object: %v", err)
}

b, err := ioutil.ReadAll(s3goo.Body)
if err != nil {
    log.Fatalf("Error reading file: %v", err)
}
There's no current direct invocation between EventBridge and Fargate. You can find the list of supported targets at https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-targets.html
The workaround is to use an intermediary that supports launching Fargate tasks (like Step Functions) or to send the event to compute (like Lambda [the irony]) before sending it downstream.
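To illustrate the Lambda route, here's a minimal sketch of a relay function (Node.js with aws-sdk v2; the cluster, task definition, subnet and container names are placeholders) that forwards fields from the EventBridge event into the task as container overrides, mirroring the input transformer above:
var AWS = require('aws-sdk');
var ecs = new AWS.ECS();

exports.handler = function(event, context, callback) {
    // Copy selected EventBridge fields into the task's environment,
    // the same mapping the input transformer performs.
    var params = {
        cluster: 'myecscluster',
        taskDefinition: 'mytaskdefinition',
        launchType: 'FARGATE',
        count: 1,
        networkConfiguration: {
            awsvpcConfiguration: { subnets: ['subnet-1'] }
        },
        overrides: {
            containerOverrides: [{
                name: 'containername',
                environment: [
                    { name: 'S3_BUCKET_NAME', value: event.detail.bucket.name },
                    { name: 'S3_OBJECT_KEY', value: event.detail.object.key },
                    { name: 'S3_OBJ_VERSION_ID', value: event.detail.object['version-id'] }
                ]
            }]
        }
    };
    ecs.runTask(params, callback);
};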

Trying to use model within post-confirmation function

When a user registers with the Auth.signUp method and confirms the code received by email, I want the post-confirmation trigger to execute and update a User table created through the @model directive in the schema.graphql file.
I updated the Auth like this:
andres@DESKTOP-CPTOQVN:~/TeVi$ amplify update auth
Please note that certain attributes may not be overwritten if you choose to use defaults settings.
You have configured resources that might depend on this Cognito resource. Updating this Cognito resource could have unintended side effects.
Using service: Cognito, provided by: awscloudformation
What do you want to do? Walkthrough all the auth configurations
Select the authentication/authorization services that you want to use: User Sign-Up, Sign-In, connected with AWS IAM controls (Enables per-user Storage features for images or other content, Analytics, and more)
Allow unauthenticated logins? (Provides scoped down permissions that you can control via AWS IAM) Yes
Do you want to enable 3rd party authentication providers in your identity pool? No
Do you want to add User Pool Groups? No
Do you want to add an admin queries API? No
Multifactor authentication (MFA) user login options: OFF
Email based user registration/forgot password: Enabled (Requires per-user email entry at registration)
Please specify an email verification subject: Your verification code
Please specify an email verification message: Your verification code is {####}
Do you want to override the default password policy for this User Pool? No
Specify the app's refresh token expiration period (in days): 30
Do you want to specify the user attributes this app can read and write? No
Do you want to enable any of the following capabilities?
Do you want to use an OAuth flow? No
? Do you want to configure Lambda Triggers for Cognito? Yes
? Which triggers do you want to enable for Cognito Post Confirmation
? What functionality do you want to use for Post Confirmation Create your own module
Successfully added the Lambda function locally
? Do you want to edit your custom function now? No
Successfully updated resource tevi locally
Some next steps:
"amplify push" will build all your local backend resources and provision it in the cloud
"amplify publish" will build all your local backend and frontend resources (if you have hosting category added) and provision it in the cloud
And then I did amplify push. Once the function was created, I updated it like this:
andres@DESKTOP-CPTOQVN:~/TeVi$ amplify update function
Using service: Lambda, provided by: awscloudformation
? Please select the Lambda Function you would want to update teviPostConfirmation
? Do you want to update permissions granted to this Lambda function to perform on other resources in your project? Yes
? Select the category storage
? Storage has 12 resources in this project. Select the one you would like your Lambda to access User:@model(appsync)
? Select the operations you want to permit for User:@model(appsync) create, update
You can access the following resource attributes as environment variables from your Lambda function
API_TEVI_GRAPHQLAPIIDOUTPUT
API_TEVI_USERTABLE_ARN
API_TEVI_USERTABLE_NAME
ENV
REGION
? Do you want to invoke this function on a recurring schedule? No
? Do you want to edit the local lambda function now? No
Successfully updated resource
Then I did amplify push and I got this error:
andres@DESKTOP-CPTOQVN:~/TeVi$ amplify push
✔ Successfully pulled backend environment dev from the cloud.
Current Environment: dev
| Category | Resource name | Operation | Provider plugin |
| -------- | -------------------- | --------- | ----------------- |
| Function | teviPostConfirmation | Update | awscloudformation |
| Auth | tevi | No Change | awscloudformation |
| Api | tevi | No Change | awscloudformation |
| Storage | s3c1026a67 | No Change | awscloudformation |
? Are you sure you want to continue? Yes
⠼ Updating resources in the cloud. This may take a few minutes...Error updating cloudformation stack
✖ An error occurred when pushing the resources to the cloud
Circular dependency between resources: [functionteviPostConfirmation, authtevi, UpdateRolesWithIDPFunctionOutputs, apitevi, UpdateRolesWithIDPFunction]
An error occurred during the push operation: Circular dependency between resources: [functionteviPostConfirmation, authtevi, UpdateRolesWithIDPFunctionOutputs, apitevi, UpdateRolesWithIDPFunction]
This is the backend-config.json I have right now:
{
    "auth": {
        "tevi": {
            "service": "Cognito",
            "providerPlugin": "awscloudformation",
            "dependsOn": [
                {
                    "category": "function",
                    "resourceName": "teviPostConfirmation",
                    "triggerProvider": "Cognito",
                    "attributes": [
                        "Arn",
                        "Name"
                    ]
                }
            ]
        }
    },
    "api": {
        "tevi": {
            "service": "AppSync",
            "providerPlugin": "awscloudformation",
            "output": {
                "authConfig": {
                    "additionalAuthenticationProviders": [
                        {
                            "authenticationType": "AWS_IAM"
                        }
                    ],
                    "defaultAuthentication": {
                        "authenticationType": "AMAZON_COGNITO_USER_POOLS",
                        "userPoolConfig": {
                            "userPoolId": "authtevi"
                        }
                    }
                }
            }
        }
    },
    "storage": {
        "s3c1026a67": {
            "service": "S3",
            "providerPlugin": "awscloudformation"
        }
    },
    "function": {
        "teviPostConfirmation": {
            "build": true,
            "providerPlugin": "awscloudformation",
            "service": "Lambda",
            "dependsOn": [
                {
                    "category": "api",
                    "resourceName": "tevi",
                    "attributes": [
                        "GraphQLAPIIdOutput"
                    ]
                }
            ]
        }
    }
}
Amplify CLI Version
4.21.3
Expected behavior
Work with the post-confirmation function and create or update content on the User table with this one.
How can I fix it :/?
As discussed in the related GitHub ticket, we can solve this problem by invoking one function from another.
First, we assume that you used amplify update auth and added a Cognito Post Confirmation function to add the user to a group.
It also seems you followed this tutorial to make a function that adds the user to a custom model after the confirmation event. However, you cannot follow the last step of that tutorial, as only one function can be bound to the Post Confirmation trigger. So let us keep our first function bound to the Post Confirmation trigger and leave this new function unbound. I assume you named this new function addusertotable.
If you added the function with amplify update auth, you will have a file called add-to-group.js; modify it as follows:
/* eslint-disable-line */ const aws = require('aws-sdk');

// My defined function starts here
var addUserToTable = function (event, context) {
    // Call function to add user to table Users
    var lambda = new aws.Lambda({
        region: process.env.AWS_REGION
    });
    var params = {
        FunctionName: 'addusertotable-' + process.env.ENV,
        InvocationType: 'RequestResponse',
        Payload: JSON.stringify(event) // the event alone is the payload; context is not serializable
    };
    lambda.invoke(params, function (err, data) {
        if (err) {
            console.error(err);
        } else {
            console.log(params.FunctionName + ': ' + data.Payload);
        }
    });
};
// My defined function ends here

exports.handler = async (event, context, callback) => {
    // Call function to add user to table Users
    addUserToTable(event, context); // this is also my code to call my defined function

    // The rest is the original code from the amplify template
    // Add group to the user
    const cognitoidentityserviceprovider = new aws.CognitoIdentityServiceProvider({ apiVersion: '2016-04-18' });
    const groupParams = {
        GroupName: process.env.GROUP,
        UserPoolId: event.userPoolId,
    };
    const addUserParams = {
        GroupName: process.env.GROUP,
        UserPoolId: event.userPoolId,
        Username: event.userName,
    };
    try {
        await cognitoidentityserviceprovider.getGroup(groupParams).promise();
    } catch (e) {
        await cognitoidentityserviceprovider.createGroup(groupParams).promise();
    }
    try {
        await cognitoidentityserviceprovider.adminAddUserToGroup(addUserParams).promise();
        callback(null, event);
    } catch (e) {
        callback(e);
    }
};
As discussed here, we need to add a segment to the Lambda policy that allows us to invoke the secondary Lambda function (i.e. addusertotable).
Find the xyzPostConfirmation-cloudformation-template.json file and change its PolicyDocument entry like this:
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": {
"Fn::Sub": [
"arn:aws:logs:${region}:${account}:log-group:/aws/lambda/${lambda}:log-stream:*",
{
"region": {
"Ref": "AWS::Region"
},
"account": {
"Ref": "AWS::AccountId"
},
"lambda": {
"Ref": "LambdaFunction"
}
}
]
}
},
{
"Effect": "Allow",
"Action": "lambda:InvokeFunction",
"Resource": {
"Fn::Sub": [
"arn:aws:lambda:${region}:${account}:function:addusertotable-${env}",
{
"region": {
"Ref": "AWS::Region"
},
"account": {
"Ref": "AWS::AccountId"
},
"lambda": {
"Ref": "LambdaFunction"
},
"env": {
"Ref": "env"
}
}
]
}
What I have added is the second object inside the Statement array.
Please note that addusertotable is the name of the custom function.

AWS Serverless Application Model: Create S3 Event to Lambda

I would like to use the Serverless Application Model (SAM) and CloudFormation to create a simple Lambda function which gets triggered when an object is created in an S3 bucket (e.g. thescore-cloudfront-trial). How do I enable the trigger from the S3 bucket to the Lambda function? Below is my python3 boto3 code.
import json
import boto3

region = 'us-east-1'

test_lambda_template = {
    'AWSTemplateFormatVersion': '2010-09-09',
    'Transform': 'AWS::Serverless-2016-10-31',
    'Resources': {
        'CopyS3RajivCloudF': {
            'Type': 'AWS::Serverless::Function',
            'Properties': {
                'CodeUri': 's3://my-tmp/CopyS3Lambda',
                'Handler': 'lambda.handler',
                'Runtime': 'python3.6',
                'Timeout': 300,
                'Role': 'my_existing_role_arn'
            },
            'Events': {
                'Type': 'S3',
                'Properties': {
                    'Bucket': 'thescore-cloudfront-trial',
                    'Events': 's3:ObjectCreated:*'
                }
            }
        },
        'SrcBucket': {
            'Type': 'AWS::S3::Bucket',
            'Properties': {
                'BucketName': 'thescore-cloudfront-trial',
            }
        }
    }
}

conf = config.get_aws_config('development')
client = aws.client(conf, 'cloudformation', region_name=region)

response = client.create_change_set(
    StackName='RajivTestStack',
    TemplateBody=json.dumps(test_lambda_template),
    Capabilities=['CAPABILITY_IAM'],
    ChangeSetName='a',
    Description='Rajiv ChangeSet Description',
    ChangeSetType='CREATE'
)
response = client.execute_change_set(
    ChangeSetName='a',
    StackName='RajivTestStack',
)
I figured it out with caveats
Caveat 1: The trigger notification will show up in the S3 console but not in the Lambda console. My existing Python deploy scripts using boto3 s3 and lambda clients (which I want to replace) show the notification in both consoles.
Caveat 2: For monitoring, I see my Lambda trigger only when I switch to the Lambda alias view. But I haven't specified an alias for my Lambda, so I don't know why I don't see it in the non-alias view (just seeing the LATEST version).
I had to give the event a logical name and modify the Events key like this (note that Events belongs inside the function's Properties, not alongside it):
'Events': {
    'RajivCopyEvent': {
        'Type': 'S3',
        'Properties': {
            'Bucket': {'Ref': 'SrcBucket'},
            'Events': 's3:ObjectCreated:*'
        }
    }
}