I am trying to deploy a Lambda layer that is 99 MB in size, and I am getting this error:
│ Error: Error creating lambda layer: RequestEntityTooLargeException:
│ status code: 413, request id: 5a87d055-ba71-47bb-8c60-86d3b00e8dfc
│
│ with aws_lambda_layer_version.aa,
│ on layers.tf line 68, in resource "aws_lambda_layer_version" "aa":
│ 68: resource "aws_lambda_layer_version" "aa" {
This is the .tf:
resource "aws_lambda_layer_version" "aa" {
filename = "custom_layers/aa/a.zip"
layer_name = "aa"
compatible_runtimes = ["python3.8"]
}
The zip is in the right location.
According to the AWS Lambda quotas, a deployment package (.zip file archive) cannot be larger than:
50 MB (zipped, for direct upload)
250 MB (unzipped)
This quota applies to all the files you upload, including layers and
custom runtimes.
3 MB (console editor)
There's also a paragraph in the AWS Lambda docs for your exact error:
General: Error occurs when calling the UpdateFunctionCode
Error: An error occurred (RequestEntityTooLargeException) when calling the UpdateFunctionCode operation
When you upload a deployment package or layer archive directly to
Lambda, the size of the ZIP file is limited to 50 MB. To upload a
larger file, store it in Amazon S3 and use the S3Bucket and S3Key
parameters.
You should try to do one of the following:
Split your current lambda layer into multiple layers
Upload the layer zip to S3, and reference the object in your Terraform layer configuration (see the sketch below)
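For the second option, here is a minimal sketch, assuming the zip is uploaded with an aws_s3_object resource (aws_s3_bucket_object on older AWS provider versions); the bucket name and S3 key are placeholders:
resource "aws_s3_object" "layer_zip" {
  bucket = "my-layer-artifacts"     # placeholder: an existing bucket you control
  key    = "layers/aa/a.zip"
  source = "custom_layers/aa/a.zip" # local zip from the original config
}

resource "aws_lambda_layer_version" "aa" {
  layer_name          = "aa"
  s3_bucket           = aws_s3_object.layer_zip.bucket
  s3_key              = aws_s3_object.layer_zip.key
  compatible_runtimes = ["python3.8"]
}
Referencing the aws_s3_object attributes also gives Terraform the dependency ordering, so the layer version is only created after the upload. With the S3 reference the 50 MB direct-upload limit no longer applies, but the unzipped contents still have to stay within the 250 MB quota.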
I have the following problem with Terraform:
I need to create an archive (I use data.archive_file with output_path = ./archive/s3/my_test_archive.zip) and then I want to upload the archive to an AWS S3 bucket (using aws_s3_object).
To check the hash sum I use:
etag = filemd5("${data.archive_file.my_archive.output_path}")
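Put together, the relevant part of the configuration looks roughly like this (the source directory and bucket name are placeholders):
data "archive_file" "my_archive" {
  type        = "zip"
  source_dir  = "./src"   # placeholder: whatever is being archived
  output_path = "./archive/s3/my_test_archive.zip"
}

resource "aws_s3_object" "object" {
  bucket = "my-bucket"    # placeholder bucket name
  key    = "my_test_archive.zip"
  source = data.archive_file.my_archive.output_path
  etag   = filemd5("${data.archive_file.my_archive.output_path}")
}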
But when I run terraform apply I get this error:
on module\main.tf line 20, in resource "aws_s3_object" "object":
20:   etag = filemd5("${data.archive_file.my_archive.output_path}")
├────────────────
│ while calling filemd5(path)
│ data.archive_file.my_archive.output_path is "./archive/s3/my_test_archive.zip"
│ Call to function "filemd5" failed: open archive/s3/my_test_archive.zip: The system cannot find the file specified.
I guess it's because at the moment Terraform evaluates the code, the file doesn't exist yet.
If I disable the etag check it works, but I need to detect changes in the archive.
How can I resolve it?
If I wanted to create a bucket with a layout like this:
bucket/
├─ subdir-1/
│ ├─ subsubdir-1/
├─ subdir-2/
├─ subdir-3/
how could I do this using the CDK?
I know that you can just upload a file with the requisite prefix, because subdirectories don't really do anything (S3 isn't a real file system), but I have a use case where Spark expects a subdirectory to exist for some reason.
And having to create a file in the directory by hand is a really poor solution, because you lose the ability to configure your S3 bucket within the CDK (things like versioning, VPC access, replication controls, etc.).
Since folders don't really exist in S3 and objects only have a key prefix, which by convention results in an apparent directory structure, take a look at the BucketDeployment construct and upload zero-byte placeholder files named, for example, subdir-1/subsubdir-1/placeholder that Spark will ignore.
I'm working on a project in which the S3 bucket in AWS was already created via the console and the Lambda code is already there as an object. I'm writing a Terraform script that references that zip, creates a Lambda function, and publishes it. If any change is detected in the code (the zip can be changed from the console), it should publish the latest version. How can I do that? Currently I'm getting an error with this configuration:
module "student_lambda"{
source = "https://....." // I'm using a template which creates lambda function
handler..
s3_bucket = "SaintJoseph"
s3_key = "grade5/studentlist.zip"
source_code_hash = filebase64sha256("/grade5/studentlist.zip").etag
.....
}
My bucket structure:
SaintJoseph/          <- bucket name
├─ grade5/
│  ├─ studentlist.zip
│  ├─ subjectlist.zip
├─ grade6/
Errors I'm getting in plan:
Error in function call - Call to function filebase64sha256 failed: open grade5/studentlist.zip: no such file or directory
The bucket key or source is invalid.
Can someone please also let me know what to use (etag, source_code_hash, etc.) so that changes are picked up only when the zip file changes, and how to fix the existing error?
filebase64sha256 works only on the local filesystem. To reference the etag of an S3 object you have to use the aws_s3_object data source, which exposes the object's etag as an attribute.
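A sketch of what that could look like here; the module's exact input names depend on the template you use:
data "aws_s3_object" "studentlist" {
  bucket = "SaintJoseph"
  key    = "grade5/studentlist.zip"
}

module "student_lambda" {
  source = "https://....." // same template module as above

  s3_bucket        = data.aws_s3_object.studentlist.bucket
  s3_key           = data.aws_s3_object.studentlist.key
  source_code_hash = data.aws_s3_object.studentlist.etag
  .....
}
Note that the S3 etag is an MD5-style hash (and only for non-multipart uploads), so check what hash format your module's source_code_hash input actually expects; the key point is that the value comes from the S3 object rather than a local path, so it changes whenever the zip in the bucket changes.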
When I try to run terraform apply in my project, it throws the following error:
Error: Error Updating Kinesis Firehose Delivery Stream: "delivery"
│ InvalidArgumentException: Enabling or disabling Dynamic Partitioning is not supported at the moment
│
│ with module.shippeo-api.module.v1.aws_kinesis_firehose_delivery_stream.event-eta,
│ on ../../modules/api_gateway_v1/kinesis.tf line 12, in resource "aws_kinesis_firehose_delivery_stream" "event-eta":
│ 12: resource "aws_kinesis_firehose_delivery_stream" "event-eta" {
│
╵
because of this part:
resource "aws_kinesis_firehose_delivery_stream" "event-eta" {
name = local.firehose_delivery_stream
destination = "extended_s3"
extended_s3_configuration {
role_arn = var.integration_role_arn
#bucket_arn = aws_s3_bucket.jsonfiles.arn
bucket_arn = var.target_bucket_arn
prefix = "!{partitionKeyFromLambda:apiPath}/!{partitionKeyFromLambda:authorizerClientId}/!{timestamp:yyyy}/!{timestamp:MM}/!{timestamp:dd}/!{timestamp:HH}/"
#prefix = "!{timestamp:yyyy}/!{timestamp:MM}/!{timestamp:dd}/!{timestamp:HH}/"
error_output_prefix = "error/!{timestamp:yyyy}/!{timestamp:MM}/!{timestamp:dd}/!{timestamp:HH}/!{firehose:error-output-type}"
dynamic_partitioning_configuration {
enabled = true
}
I already have the latest version in my provider.tf files:
required_providers {
  archive = {
    source  = "hashicorp/archive"
    version = "2.2.0"
  }
  aws = {
    source  = "hashicorp/aws"
    version = "3.72.0"
  }
}
However, when I check terraform version in my terminal, I get this:
Terraform v1.0.7
on darwin_amd64
+ provider registry.terraform.io/hashicorp/archive v2.2.0
+ provider registry.terraform.io/hashicorp/aws v3.72.0
Your version of Terraform is out of date! The latest version
is 1.1.4. You can update by downloading from https://www.terraform.io/downloads.html
I already tried terraform init -upgrade, but that didn't make a difference either. I also manually downloaded Terraform's new version from the website, but my terminal still shows 1.0.7.
Yes, that's a limitation at the moment. Currently, you can enable dynamic partitioning only while creating a new delivery stream, not on existing delivery streams.
From AWS documentation:
Important: You can enable dynamic partitioning only when you create a
new delivery stream. You cannot enable dynamic partitioning for an
existing delivery stream that does not have dynamic partitioning
already enabled.
This means that if you want to use this feature now, you will have to create a new delivery stream.
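In Terraform terms (a suggestion beyond the quoted documentation), one way to do that is to have Terraform destroy and recreate the resource, for example:
terraform apply -replace='module.shippeo-api.module.v1.aws_kinesis_firehose_delivery_stream.event-eta'
Be aware this replaces the existing delivery stream, so it will be briefly unavailable while it is recreated.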
I have uploaded my layer zip file to AWS S3, and when I try to create a layer and add the link to the zip file from S3, I get this error:
Failed to create layer version: 1 validation error detected: Value 's3' at 'content.s3Bucket' failed to satisfy constraint: Member must have length greater than or equal to 3
Try specifying the S3 link to your file in the correct format:
https://s3.amazonaws.com/yourbucket/YourFile.zip