AWS: Creating a layer from a zip file uploaded to S3

I uploaded my layer zip file to AWS S3. When I try to create a layer and add the link to the zip file in S3, I get this error:
Failed to create layer version: 1 validation error detected: Value 's3' at 'content.s3Bucket' failed to satisfy constraint: Member must have length greater than or equal to 3

Try entering the S3 URL of your file in the correct format, e.g.:
https://s3.amazonaws.com/yourbucket/YourFile.zip
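If you would rather publish the layer from the AWS CLI than through the console, a minimal sketch (the layer name is a placeholder, and the bucket and key are taken from the example URL above):

aws lambda publish-layer-version \
  --layer-name my-layer \
  --content S3Bucket=yourbucket,S3Key=YourFile.zip \
  --compatible-runtimes python3.8

The console expects the full https://s3.amazonaws.com/... link, while the CLI and API take the bucket and key separately; the validation error above is likely complaining that the bucket name it extracted from the link was just "s3".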

Related

How to reference a zip file uploaded to an S3 bucket in your Terraform Lambda code?

I'm working on a project in which the S3 bucket in AWS was already created via the console and the Lambda code is already there as an object. I'm writing a Terraform script that will reference that zip, create a Lambda function, and publish it. In case any change is detected in the code (the zip can be changed from the console), it should publish the latest version. How can I do that? Currently I'm getting an error with this configuration:
module "student_lambda"{
source = "https://....." // I'm using a template which creates lambda function
handler..
s3_bucket = "SaintJoseph"
s3_key = "grade5/studentlist.zip"
source_code_hash = filebase64sha256("/grade5/studentlist.zip").etag
.....
}
My bucket structure:
SaintJoseph   -- bucket name
  grade5
    studentlist.zip
    subjectlist.zip
  grade6
Errors I'm getting from terraform plan:
Error in function call - Call to function filebase64sha256 failed: open grade5/studentlist.zip: no such file or directory
The bucket key or source is invalid.
Can someone also help me understand what to use (etag, source_code_hash, etc.) so that changes are picked up only when the zip file changes, and how to fix the existing error?
filebase64sha256 works only on the local filesystem. To reference the etag of an S3 object you have to use the aws_s3_object data source, which exports the object's etag.
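A minimal sketch of that approach, assuming the module really exposes s3_bucket, s3_key and source_code_hash inputs as in the question, and using the etag purely as an opaque change trigger:

data "aws_s3_object" "studentlist" {
  bucket = "SaintJoseph"
  key    = "grade5/studentlist.zip"
}

module "student_lambda" {
  source    = "https://....."   // same template as in the question
  s3_bucket = data.aws_s3_object.studentlist.bucket
  s3_key    = data.aws_s3_object.studentlist.key
  // Changes whenever the object in S3 is replaced, so the module republishes the function.
  source_code_hash = data.aws_s3_object.studentlist.etag
}

Keep in mind the etag is not a base64-encoded SHA-256 (and for multipart or KMS-encrypted uploads it is not even an MD5 of the file); it only serves as a value that changes when the zip in S3 changes. The data source requires AWS provider v4+; on older providers it is called aws_s3_bucket_object.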

Thumbnails from S3 Videos using FFMPEG - "No such file or directory: '/bin/ffmpeg'"

I am trying to generate thumbnails from videos in an S3 bucket every x frames by following this documentation: https://aws.amazon.com/blogs/media/processing-user-generated-content-using-aws-lambda-and-ffmpeg/
I am at the point where I'm testing the Lambda code provided in the documentation, but I receive the error from the title in CloudWatch Logs: "No such file or directory: '/bin/ffmpeg'".
Here is the portion of the Lambda code associated with this error:
Any help is appreciated. Thanks!

AWS Layer RequestEntityTooLargeException Terraform

I am trying to deploy a layer that is 99 MB in size, and I am getting this error.
│ Error: Error creating lambda layer: RequestEntityTooLargeException:
│ status code: 413, request id: 5a87d055-ba71-47bb-8c60-86d3b00e8dfc
│
│ with aws_lambda_layer_version.aa,
│ on layers.tf line 68, in resource "aws_lambda_layer_version" "aa":
│ 68: resource "aws_lambda_layer_version" "aa" {
This is the .tf file:
resource "aws_lambda_layer_version" "aa" {
filename = "custom_layers/aa/a.zip"
layer_name = "aa"
compatible_runtimes = ["python3.8"]
}
The zip is in the right location.
According to the AWS Lambda quotas, a deployment package (.zip file archive) cannot be larger than:
50 MB (zipped, for direct upload)
250 MB (unzipped); this quota applies to all the files you upload, including layers and custom runtimes
3 MB (console editor)
There's also a paragraph in the AWS Lambda docs for your exact error:
General: Error occurs when calling the UpdateFunctionCode
Error: An error occurred (RequestEntityTooLargeException) when calling the UpdateFunctionCode operation
When you upload a deployment package or layer archive directly to Lambda, the size of the ZIP file is limited to 50 MB. To upload a larger file, store it in Amazon S3 and use the S3Bucket and S3Key parameters.
You should try to do one of the following:
Split your current lambda layer into multiple layers
Upload the layer zip to S3 and specify the object in your Terraform layer config (see the sketch below)
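A rough sketch of the second option, assuming AWS provider v4+ and an existing artifact bucket (the bucket name and key here are placeholders):

resource "aws_s3_object" "aa_layer_zip" {
  bucket = "my-layer-artifacts"               # placeholder; an existing bucket you control
  key    = "layers/aa/a.zip"
  source = "custom_layers/aa/a.zip"           # the local zip from the question
  etag   = filemd5("custom_layers/aa/a.zip")  # re-upload when the local zip changes
}

resource "aws_lambda_layer_version" "aa" {
  layer_name          = "aa"
  s3_bucket           = aws_s3_object.aa_layer_zip.bucket
  s3_key              = aws_s3_object.aa_layer_zip.key
  compatible_runtimes = ["python3.8"]
}

Going through S3 lifts the 50 MB direct-upload limit, but the 250 MB unzipped quota quoted above still applies to the layer.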

WinSCP put to S3 bucket "folder" when path doesn't exist and user doesn't have access to list objects

Using the WinSCP client, how can I load a CSV file to an S3 bucket with the following conditions:
The only S3 access I have is to put an object at this example path: s3://my_bucket/folder1/folder2
This logical directory doesn't exist unless I load the file: when I upload my file, a Lambda function is fired to move the newly uploaded file, so this directory only "exists" for a split second on PutObject.
I'm trying to build a WinSCP script like so:
open s3://[my_id]:[my_key]#s3.amazonaws.com -rawsettings S3DefaultRegion="[my_region]"
put "[source_dir]/file1.csv" /[my_bucket]/[folder1]/[folder2]/
put "[source_dir]/file2.csv" /[my_bucket]/[folder1]/[folder2]/
exit
but this returns an error:
Connecting to host...
Access denied.
Access Denied
Connection failed.
I updated the open statement to include the bucket/prefix
open s3://[my_id]:[my_key]#s3.amazonaws.com/[my_bucket]/[folder1]/[folder2] -rawsettings S3DefaultRegion="[my_region]"
and get this error:
Connecting to host...
File or folder '[my_bucket]/[folder1]/[folder2]' does not exist.
Connection failed.
I simply want to load a file to [my_bucket]/[folder1]/[folder2] in the same way as this AWS CLI script, which works without issue:
aws s3 cp [source_dir]/file1.csv s3://[my_bucket]/[folder1]/[folder2]
aws s3 cp [source_dir]/file2.csv s3://[my_bucket]/[folder1]/[folder2]

AWS CLI sync from S3 to local fails with Errno 20

I'm using the following command
aws s3 sync s3://mys3bucket/ .
to download all the files AND directories from my S3 bucket "mys3bucket" into an empty folder. In this bucket there is a directory called "albums". However, instead of copying the files into an "albums" directory, I am receiving the following error message (an example):
download failed: s3://mys3bucket//albums/albums/5384 to albums/albums/5384 [Errno 20] Not a directory: u'/storage/mys3bucket//albums/albums/5384'
When I look in the folder to see what files, if any, did get copied into the albums folder, there is only one file in there called "albums", which, when I open it, contains the text "{E40327AD-517B-46e8-A6D2-AF51BC263F50}".
This behavior is similar for all the other directories in this bucket; I see far more Errno 20 errors than successful downloads. There is over 100 GB of image files in the albums folder, but not a single one is able to download.
Any suggestions?
I suspect the problem here is that you have both a 'directory' and a 'file' on S3 which have the same name. If you delete the 'file' from S3 then you should find that the directory will sync again.
I have found that this situation can occur when using desktop clients to view an S3 bucket, or something like s3sync.
http://www.witti.ws/blog/2013/12/03/transitioning-s3sync-aws-cli/
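If it helps, a quick way to confirm and clean this up with the AWS CLI (the conflicting key albums/albums is only inferred from the error above; adjust it to whatever the listing actually shows):

# List the objects directly under the prefix; look for a stray object whose
# key is the same as a "directory" name.
aws s3 ls s3://mys3bucket/albums/

# Remove only that single object, then retry the sync.
aws s3 rm s3://mys3bucket/albums/albums
aws s3 sync s3://mys3bucket/ .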