I am creating two Stacks using AWS CDK. I use the first Stack to create an S3 bucket and upload a Lambda zip file to the bucket using the BucketDeployment construct, like this:
//FirstStack
const deployments = new BucketDeployment(this, 'LambdaDeployments', {
  destinationBucket: bucket,
  destinationKeyPrefix: '',
  sources: [
    Source.asset(path)
  ],
  retainOnDelete: true,
  extract: false,
  accessControl: BucketAccessControl.PUBLIC_READ,
});
I use the second Stack just to generate a CloudFormation template for my clients. In the second Stack, I want to create a Lambda function that takes as parameters the S3 bucket name and the key of the Lambda zip I uploaded in the first Stack.
//SecondStack
const lambdaS3Bucket = "??"; // TODO
const lambdaS3Key = "??"; // TODO
const bucket = Bucket.fromBucketName(this, "Bucket", lambdaS3Bucket);
const lambda = new Function(this, "LambdaFunction", {
  handler: 'index.handler',
  runtime: Runtime.NODEJS_16_X,
  code: Code.fromBucket(
    bucket,
    lambdaS3Key
  ),
});
How do I automatically reference these parameters for the Lambda in the second Stack?
In addition, lambdaS3Bucket needs to incorporate the AWS::Region pseudo parameter so that my clients can deploy the template in any region (I just need to run the first Stack in the region they require).
How do I do that?
I had a similar use case to this one.
The very simple answer is to hardcode the values. The bucket name is obvious.
The lambdaS3Key you can look up in the synthesized template of the first stack.
The more complex answer is to use pipelines for this. I did this: in the build step of the pipeline I extracted all the lambdaS3Keys and exported them as environment variables, so in the second stack I could reuse them in the code, like:
code: Code.fromBucket(
  bucket,
  process.env.MY_LAMBDA_KEY
),
I see you are aware of this PR, because you are using the extract flag.
Knowing that, you can probably reuse the property that PR introduced for the Lambda key.
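For example, assuming that PR exposes an objectKeys property on BucketDeployment when extract is false (I'm writing this from memory, so treat the property name as an assumption), the first stack could surface the key like this:
// First stack: export the deployed asset's object key so it can be copied into the client template.
// Assumption: `objectKeys` is the property added for the extract: false case.
import { CfnOutput, Fn } from 'aws-cdk-lib';

const lambdaObjectKey = Fn.select(0, deployments.objectKeys);
new CfnOutput(this, 'LambdaObjectKey', { value: lambdaObjectKey });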
The problem of sharing the names between stacks in different accounts remains nevertheless. My suggestion is to use pipelines and export the constants there in the different steps, but a local build script would also do the job.
Do not forget to update the BucketPolicy and the KeyPolicy if you use encryption, otherwise the customer account won't have access to the file.
You could also read about AWS Service Catalog. It would probably be an easier way to share your CDK products with your customers (the CDK team is planning to support this kind of Lambda sharing out of the box).
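For the region part of the question: if you deploy the first stack to every region your clients need and name its bucket with a region suffix (the my-lambda-assets-* naming below is purely an assumption for illustration), the second stack can build the bucket name from the AWS::Region pseudo parameter via Aws.REGION, so a single template works in any region:
import { Aws } from 'aws-cdk-lib';
import { Bucket } from 'aws-cdk-lib/aws-s3';

// Resolves to "my-lambda-assets-<region>" at deploy time in the client's account and region.
const lambdaS3Bucket = `my-lambda-assets-${Aws.REGION}`;
const bucket = Bucket.fromBucketName(this, 'Bucket', lambdaS3Bucket);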
I have a requirement to create AWS Lambda functions dynamically based on some input parameters like name, Docker image, etc.
I have been able to build this using Terraform (triggered via GitLab pipelines).
Now the problem is that for every unique name I want a new Lambda function to be created/updated, i.e. if I trigger the pipeline 5 times with 5 names then there should be 5 Lambda functions; instead, the older function is destroyed and a new one is created.
How do I achieve this?
I am using Resource: aws_lambda_function
Terraform code
resource "aws_lambda_function" "executable" {
function_name = var.RUNNER_NAME
image_uri = var.DOCKER_PATH
package_type = "Image"
role = role.arn
architectures = ["x86_64"]
}
I think there is a misunderstanding of how Terraform works.
Terraform maps 1 resource to 1 item in state, and the state file is used to manage all created resources.
The reason why your function keeps getting destroyed and recreated with the new values is that you have only 1 resource in your Terraform configuration.
This is the correct and expected behavior of Terraform.
Now, as mentioned by some people above, you could use count or for_each to add new Lambda functions without deleting the previous ones, as long as you keep track of the previously passed values (always adding the new values to the list).
Or, if there is no need to keep track of the state of the Lambda functions you have created, Terraform may not be the best solution for your needs. The result you are looking for can easily be implemented in Python, or even in shell with AWS CLI commands.
I need to copy a file from one folder to another inside a single Amazon S3 bucket. However, due to the file size, I can't simply call the copyObject method of the AWS SDK S3 class, since it times out my Lambda function.
That's why I'm trying to create an S3 Batch Operations job to move this file, but I'm getting an Invalid job operation error when trying to. I'm using the AWS SDK S3Control class and invoking the createJob method. I'm passing this object as the parameter:
{
  AccountId: '445084039568',
  Manifest: {
    Location: {
      ETag: 'dbe4a392892992491a7124c10f2fbf03',
      ObjectArn: 'arn:aws:s3:::amsp-media-bucket/manifest.csv'
    },
    Spec: {
      Format: 'S3BatchOperations_CSV_20180820',
      Fields: ['Bucket', 'Key']
    },
  },
  Operation: {
    S3PutObjectCopy: {
      TargetResource: 'arn:aws:s3:::amsp-media-bucket/bigtest'
    }
  },
  Report: {
    Enabled: false
  },
  Priority: 10,
  RoleArn: 'arn:aws:iam::445084039568:role/mehoasumsp-sandbox-asumspS3JobRole-64XWYA3CFZF3'
}
To be honest, I'm not sure if I'm specifying manifest correctly. This is manifest.csv content:
amsp-media-bucket, temp/37766a92-16ef-4ee2-8e79-3875679dad85.mkv
I'm not unsure about the file itself, but about the way I define the Spec property in the params object.
Single quotes might not be valid in the job spec JSON. I have only seen double quotes.
In boto3 (the Python SDK), using the managed .copy() function instead of .copy_object(), and tuning multipart_chunksize and the concurrency settings, the multiple UploadPartCopy requests may well complete within the Lambda runtime limit. The AWS JS SDK appears to lack an equivalent managed helper; you may want to try something like https://github.com/Zooz/aws-s3-multipart-copy
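That said, the JS SDK (v2) does expose the low-level multipart calls that such a library wraps, so a hand-rolled copy is also possible. A minimal, untested sketch (sequential parts, no error handling; the part size and function name are assumptions):
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3();

// Copy an object of any size by issuing UploadPartCopy requests instead of a single CopyObject.
async function multipartCopy(bucket: string, sourceKey: string, destKey: string): Promise<void> {
  const partSize = 256 * 1024 * 1024; // 256 MiB per part, tune as needed (minimum 5 MiB except the last part)
  const head = await s3.headObject({ Bucket: bucket, Key: sourceKey }).promise();
  const size = head.ContentLength ?? 0;

  const upload = await s3.createMultipartUpload({ Bucket: bucket, Key: destKey }).promise();
  const uploadId = upload.UploadId!;
  const parts: AWS.S3.CompletedPart[] = [];

  // Parts are copied sequentially here; they could be issued concurrently to speed this up.
  for (let partNumber = 1, offset = 0; offset < size; partNumber++, offset += partSize) {
    const end = Math.min(offset + partSize, size) - 1;
    const res = await s3.uploadPartCopy({
      Bucket: bucket,
      Key: destKey,
      UploadId: uploadId,
      PartNumber: partNumber,
      CopySource: `${bucket}/${sourceKey}`, // URL-encode the key if it contains special characters
      CopySourceRange: `bytes=${offset}-${end}`,
    }).promise();
    parts.push({ ETag: res.CopyPartResult?.ETag, PartNumber: partNumber });
  }

  await s3.completeMultipartUpload({
    Bucket: bucket,
    Key: destKey,
    UploadId: uploadId,
    MultipartUpload: { Parts: parts },
  }).promise();
}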
As John Rotenstein said, beware the space in the object key in your CSV file.
S3PutObjectCopy S3 Batch Operation jobs use CopyObject, which has a size limit of 5GiB.
Together with the operation costs, S3 Batch Operation jobs cost $0.25 each, which might be expensive if copying a small number of objects.
I am quite new to the CDK, but I'm adding a LogQueryWidget to my CloudWatch dashboard through the CDK, and I need a way to add all LogGroups ending with a given suffix to the query.
Is there a way to either loop through all existing LogGroups and find the ones with the correct suffix, or to search through LogGroups?
const queryWidget = new LogQueryWidget({
  title: "Error Rate",
  logGroupNames: ['/aws/lambda/someLogGroup'],
  view: LogQueryVisualizationType.TABLE,
  queryLines: [
    'fields #message',
    'filter #message like /(?i)error/'
  ],
})
Is there any way I can add it so that logGroupNames contains all LogGroups that end with a specific suffix?
You cannot do that dynamically (i.e. you can't make the query automatically adjust when you add a new LogGroup) without using something like an AWS Lambda function that periodically updates your Log Query.
However, because CDK is just code, there is nothing stopping you from making an AWS SDK API call inside the code to retrieve all the log groups (see https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudWatchLogs.html#describeLogGroups-property) and then populating logGroupNames accordingly.
That way, when the CDK app synthesizes, it will make an API call to fetch the LogGroups, and the generated CloudFormation will contain the log groups you need. Note that this list is only updated when you re-synthesize and re-deploy your stack.
Finally, note that there is a limit on how many log groups you can query with Logs Insights (20, according to https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html).
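A minimal sketch of that idea, assuming the AWS SDK for JavaScript v2 and an async CDK entrypoint (the '-errors' suffix is just a placeholder):
import * as AWS from 'aws-sdk';

// Fetch all log group names ending with `suffix`, paginating through describeLogGroups.
async function logGroupNamesWithSuffix(suffix: string): Promise<string[]> {
  const logs = new AWS.CloudWatchLogs();
  const names: string[] = [];
  let nextToken: string | undefined;
  do {
    const resp = await logs.describeLogGroups({ nextToken }).promise();
    for (const lg of resp.logGroups ?? []) {
      const name = lg.logGroupName;
      if (name && name.endsWith(suffix)) {
        names.push(name);
      }
    }
    nextToken = resp.nextToken;
  } while (nextToken);
  return names.slice(0, 20); // Logs Insights queries accept at most 20 log groups
}

// In the app entrypoint, resolve the names before constructing the widget:
// const logGroupNames = await logGroupNamesWithSuffix('-errors');
// new LogQueryWidget({ title: 'Error Rate', logGroupNames, view: LogQueryVisualizationType.TABLE, queryLines: [...] });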
If you want to achieve this as part of the deployment instead, you can create a custom resource using the AwsCustomResource and AwsSdkCall classes to make the AWS SDK API call (as mentioned by @Tofig above). You can read data from the API call response as well and act on it as you want.
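A rough sketch of that custom-resource approach, assuming CDK v2 (note that AwsCustomResource only exposes individual response fields via getResponseField, so you still need to know which entries you want to read; the prefix below is a placeholder):
import { AwsCustomResource, AwsCustomResourcePolicy, PhysicalResourceId } from 'aws-cdk-lib/custom-resources';

// Calls CloudWatchLogs.describeLogGroups at deploy time via a Lambda-backed custom resource.
const describeLogGroups = new AwsCustomResource(this, 'DescribeLogGroups', {
  onUpdate: {
    service: 'CloudWatchLogs',
    action: 'describeLogGroups',
    parameters: { logGroupNamePrefix: '/aws/lambda/' }, // placeholder prefix
    physicalResourceId: PhysicalResourceId.of(Date.now().toString()), // force a re-run on every deploy
  },
  policy: AwsCustomResourcePolicy.fromSdkCalls({
    resources: AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});

// Individual fields of the response can then be referenced in the stack, e.g.:
const firstLogGroupName = describeLogGroups.getResponseField('logGroups.0.logGroupName');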
I've created an S3 bucket for hosting my website. For that I've used the below code from the AWS CDK for Python docs:
self.bucket = s3.Bucket(
    self,
    "my-bucket-name",
    bucket_name="my-bucket-name",
    removal_policy=core.RemovalPolicy.DESTROY,
    website_index_document="index.html",
    public_read_access=True
)
For a reason, I want to send this bucket object as an argument to another object and get the bucket name from the argument. So, I've tried
self.bucket.bucket_name
self.bucket.bucket_arn
but nothing seems to work; instead, the object returns ${Token[TOKEN.189]}. Could anyone guide me through this?
If the bucket name is hardcoded like in the example you pasted above, you can always externalize it to the CDK context file. As you've seen, when you access the bucket name from the Bucket construct, it creates a reference to it; that way, if you need it in another resource, CloudFormation takes the value from the Bucket resource using its Ref/GetAtt capabilities, and it is guaranteed that the bucket actually exists before it is used downstream.
If you don't care about that and just want the actual bucket name in the CDK app code, then put the value in the cdk.json context file and use node.try_get_context to retrieve it wherever you need it.
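For example (shown in TypeScript; the Python equivalent is self.node.try_get_context), with a hypothetical bucketName key in cdk.json:
// cdk.json: { "context": { "bucketName": "my-bucket-name" } }
import * as s3 from 'aws-cdk-lib/aws-s3';

// Inside the stack: the context value is a plain string, not a Token.
const bucketName: string = this.node.tryGetContext('bucketName');
const bucket = new s3.Bucket(this, 'WebsiteBucket', { bucketName });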
There is a handy method called fromBucketName you can use if it wasn't defined in your current app:
const bucket = aws_s3.Bucket.fromBucketName(this, 'bucketLabel', 'nameYouGaveBucket');
Otherwise, I believe you are looking for bucket.bucketName (typescript) or bucket.bucket_name (python).
See the TypeScript docs and the Python docs. This is also available in the CDK wrappers in other languages.
Note that there are similar methods for all sorts of CDK constructs, so you should refer often to the API docs, as there is lots like this you can find easily there.
Context
I have created an AWS Logs SubscriptionFilter using CDK. I am now trying to create a metric/alarm for some of the metrics for this resource.
Problem
All the metrics I am interested in (see ForwardedLogEvents, DeliveryErrors, DeliveryThrottling in the Monitoring AWS Logs with CloudWatch Metrics docs) require these dimensions to be specified:
LogGroupName
DestinationType
FilterName
The first two are easy to specify since the LogGroupName is also required while creating the construct and DestinationType in my case is just Lambda. However, I see no way to get FilterName using CDK.
Using CloudWatch, I see that the FilterName is like MyStackName-MyLogicalID29669D87-GCMA0Q4KKALH. So I can't directly specify it using a Fn.ref (since I don't know the logical id). Using CloudFormation, I could have directly done Ref: LogicalId.
I also don't see any properties on the SubscriptionFilter object that will return this (unlike most other CDK constructs this one seems pretty bare and returns absolutely no information about the resource).
There are also no metric* methods on SubscriptionFilter object (unlike other standard constructs like Lambda functions, S3 buckets etc.), so I have to manually specify the Metric object. See for example: CDK metric objects docs.
The CDK construct (and the underlying CloudFormation resource, AWS::Logs::SubscriptionFilter) does not let me specify the FilterName, so I can't control it via a variable either; the name is dynamically generated.
Example code that is very close to what I need:
const metric = new Metric({
  namespace: 'AWS/Logs',
  metricName: 'ForwardedLogEvents',
  dimensions: {
    DestinationType: 'Lambda',
    // I know this value since I specified it while creating the SubscriptionFilter
    LogGroupName: 'MyLogGroupName',
    FilterName: Fn.ref('logical-id-wont-work-since-it-is-dynamic-in-CDK')
  }
})
Question
How can I acquire the FilterName value to construct the Metric object?
Or otherwise, is there another way to go about this?
I was able to work around this by using the Stack#getLogicalId method.
Example code
In Kotlin (as an extension function for any Construct):
fun Construct.getLogicalId() = Stack.of(this).getLogicalId(this.node.defaultChild as CfnElement)
... and then use it with any Construct:
val metric = Metric.Builder.create()
    .namespace("AWS/Logs")
    .metricName("ForwardedLogEvents")
    .dimensions(mapOf(
        "DestinationType" to "Lambda",
        "LogGroupName" to myLogGroup.logGroupName,
        "FilterName" to mySubscriptionFilter.getLogicalId()
    ))
    .statistic("sum")
    .build()
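For reference, the same workaround in TypeScript (CDK v2) would look roughly like this; subscriptionFilter and logGroup stand for whatever constructs you created earlier:
import { Stack, CfnElement } from 'aws-cdk-lib';
import { Metric } from 'aws-cdk-lib/aws-cloudwatch';

// Resolve the CloudFormation logical ID of the underlying CfnSubscriptionFilter.
const filterName = Stack.of(subscriptionFilter).getLogicalId(
  subscriptionFilter.node.defaultChild as CfnElement
);

const metric = new Metric({
  namespace: 'AWS/Logs',
  metricName: 'ForwardedLogEvents',
  dimensionsMap: {
    DestinationType: 'Lambda',
    LogGroupName: logGroup.logGroupName,
    FilterName: filterName,
  },
  statistic: 'Sum',
});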