I was following this guide to deploy the AWS Serverless Image Handler. I used the given template, and I was able to successfully deploy it.
However, I want to customize the code slightly for my specific needs. I tried two different approaches, but neither of them worked.
Approach #1
I downloaded the .zip source code from the Lambda console, unarchived it, made my changes, and deployed it via S3 (because it was over 50 MB, I couldn't upload it directly from my machine).
However, this resulted in the following error: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'eu-central-1'
Approach #2
Then I tried following the guide from their site: Customizing Lambda Thumbor Package
The first problem is that they recommend Amazon Linux for the listed operations, which I don't have, and the instructions for installing it are rather complex.
At the end of the process, they say to use the command aws s3 cp . s3://mybucket-[region_name]/serverless-image-handler/v1.0/ --recursive --exclude "*" --include "*.zip". However, this results in the error upload failed: Unable to locate credentials.
To fix that, I tried running aws configure, but then I got the following error: ./serverless-image-handler-ui.zip to s3://my-bucket-eu-central-1/serverless-image-handler/v10.0/serverless-image-handler-ui.zip An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist. I suspect it's confused by the name of my bucket, which uses the same - separator that the command aws s3 cp . s3://mybucket-[region_name]/serverless-image-handler/v1.0/ ... uses between the bucket name and the region.
I just want a simple way to upload my customized code. How do I do that?
Related
I tried to run the aws lambda publish-layer-version command in my local console using my personal AWS credentials, but I got an Amazon S3 Access Denied error for the bucket in which the zip layer is stored.
aws lambda publish-layer-version --layer-name layer_name --content S3Bucket=bucket_name,S3Key=layers/libs.zip
An error occurred (AccessDeniedException) when calling the PublishLayerVersion operation: Your access has been denied by S3, please make sure your request credentials have permission to GetObject for {URI of layer in my S3 bucket}. S3 Error Code: AccessDenied. S3 Error Message: Access Denied
When I run the aws s3 cp command against the same bucket, it all works perfectly fine:
aws s3 cp s3://bucket_name/layers/libs.zip libs.zip
So I assume that the aws lambda command line is using a different role than the one used when I run the aws s3 cp command line? Or maybe it uses another mechanism that I just don't know about. But I couldn't find anything about it in the AWS documentation.
I've just read that AWS can return a 403 when it can't find the file. So maybe it's an issue with the command syntax?
Thank you for your help.
For your call to publish-layer-version you may need to specify the --content parameter with 3 parts:
S3Bucket=string,S3Key=string,S3ObjectVersion=string
It looks like you are missing S3ObjectVersion. I don't know exactly how AWS evaluates and applies the parts of that parameter, but since the version is not specified it could be attempting to do something more and hence giving you that error. Or it could be returning an error code that is not quite right and is misleading. Try adding S3ObjectVersion and let me know what you get.
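For example (the version ID below is just a placeholder, not a real value), something along these lines:
aws lambda publish-layer-version --layer-name layer_name --content S3Bucket=bucket_name,S3Key=layers/libs.zip,S3ObjectVersion=EXAMPLE_VERSION_ID
If versioning is enabled on the bucket, you can look up the object's version ID with aws s3api list-object-versions --bucket bucket_name --prefix layers/libs.zip.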
Otherwise, AWS permission evaluation can be complex. AWS publishes a policy evaluation flow diagram that is a good one to follow to track down permissions issues, but I suspect that AccessDenied is a bit of a red herring in this case. Things to check:
Your Lambda does not have privileges (S3:GetObject).
Try running aws sts get-caller-identity. This will show you which IAM identity (user or role) your command line is actually using.
Go to the IAM dashboard and check the role associated with your Lambda execution. If you used the AWS wizard, it automatically creates a role called oneClick_lambda_s3_exec_role. Click on Show Policy and make sure S3:GetObject is listed.
Also, AWS returns 403 (access denied) when the file does not exist. Be sure the target file is in the S3 bucket.
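As a quick sanity check from your terminal, something along these lines (using the bucket and key from your failing command, purely as an illustration):
aws sts get-caller-identity
aws s3api head-object --bucket bucket_name --key layers/libs.zip
The first command shows which identity the CLI is signing requests with; the second will fail if that identity can't read the object or if the key doesn't exist.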
I'm trying to access the Common Crawl news S3 bucket, but I keep getting a "fatal error: Unable to locate credentials" message. Any suggestions for how to get around this? As far as I was aware, Common Crawl doesn't even require credentials.
From News Dataset Available – Common Crawl:
You can access the data even without an AWS account by adding the command-line option --no-sign-request.
I tested this by launching a new Amazon EC2 instance (without an IAM role) and issuing the command:
aws s3 ls s3://commoncrawl/crawl-data/CC-NEWS/
It gave me the error: Unable to locate credentials
I then ran it with the additional parameter:
aws s3 ls s3://commoncrawl/crawl-data/CC-NEWS/ --no-sign-request
It successfully listed the directories.
AWS Lambda functions have an option to take their code from a file uploaded to S3. I have a successfully running Lambda function whose code came from a zip file in an S3 bucket. However, any time you want to update this code, you need to either manually edit it inline within the Lambda function, or upload a new zip file to S3 and then go into the Lambda function and manually re-upload the file from S3. Is there any way to link the Lambda function to a file in S3 so that it automatically updates its function code when you update the code file (or zip file) in S3?
Lambda doesn't actually reference the S3 code when it runs, only when it sets up the function. It essentially takes a copy of the code in your bucket and then runs that copy. So while there isn't a direct way to get the Lambda function to automatically run the latest code in your bucket, you can write a small script to update the function code using SDK methods. I don't know which language you might want to use, but for example, you could write a script that calls the AWS CLI to update the function code. See https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-code.html
Updates a Lambda function's code.
The function's code is locked when you publish a version. You can't
modify the code of a published version, only the unpublished version.
Synopsis
update-function-code
--function-name <value>
[--zip-file <value>]
[--s3-bucket <value>]
[--s3-key <value>]
[--s3-object-version <value>]
[--publish | --no-publish]
[--dry-run | --no-dry-run]
[--revision-id <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
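For example, a minimal call against code already sitting in S3 might look like this (the function name, bucket, and key are placeholders for your own values):
aws lambda update-function-code --function-name my-function --s3-bucket my-bucket --s3-key code/my-function.zip --publish
Dropping --publish updates the unpublished $LATEST version instead of publishing a new numbered version.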
You could do similar things using Python or PowerShell as well, such as using
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.update_function_code
You can set up an AWS CodeDeploy pipeline to get your code built and deployed on commit to your code repository (GitHub, Bitbucket, etc.).
CodeDeploy is a deployment service that automates application
deployments to Amazon EC2 instances, on-premises instances, serverless
Lambda functions, or Amazon ECS services.
Also, I wanted to add that if you want to go a more unattended route of deploying your updated code to the Lambda, use this flow in your CodePipeline:
Source -> CodeBuild (npm install, zipping, etc.) -> S3 Upload (sourcecode.zip in an S3 bucket) -> CodeBuild (another build just for aws lambda update-function-code)
Make sure the role for the last stage has both the s3:GetObject and lambda:UpdateFunctionCode permissions attached to it.
I'm using the upload-archive command in the AWS CLI in Windows PowerShell to upload a zip archive to a Glacier vault, and I keep getting an 'InvalidParameterException: Invalid Content-Length' error. I'm not sure what parameter I'm missing.
My aws-cli command:
aws glacier upload-archive --account-id - --vault-name sawsa.video.glacier --body saw-09-21-19.7z
Returns the following error:
An error occurred (InvalidParameterValueException) when calling the UploadArchive operation: Invalid ContentLength: 13769102233
I've ensured the account keys/secret and region are all saved in the AWS CLI config. I can list/read the contents of the vault without any problem. I'm providing the full account ID in my actual command, and am only using '-' here in the posted code sample.
Multipart upload is required when the object you are uploading is greater than 5 GB.
As stated in the AWS documentation for S3:
Depending on the size of the data you are uploading, Amazon S3 offers the following options:
Upload objects in a single operation—With a single PUT operation, you can upload objects up to 5 GB in size.
Upload objects in parts—Using the multipart upload API, you can upload large objects, up to 5 TB.
Example:
First, initiate the multipart upload:
$ aws glacier initiate-multipart-upload --account-id - --part-size 1048576 --vault-name my-vault --archive-description "multipart upload test"
This command outputs an upload ID when successful. Use the upload ID when uploading each part of your archive with aws glacier upload-multipart-part as shown next:
Then upload the parts. Assuming the returned upload ID is 19gaRezEXAMPLES6Ry5YYdqthHOC_kGRCT03L9yetr220UmPtBYKk-OssZtLqyFu7sY1_lR7vgFuJV6NtcV5zpsJ, repeat the following as many times as necessary, with the appropriate --range, until the whole object is consumed:
aws glacier upload-multipart-part --body saw-09-21-19-part1.7z --range 'bytes 0-1048575/*' --account-id - --vault-name my-vault --upload-id 19gaRezEXAMPLES6Ry5YYdqthHOC_kGRCT03L9yetr220UmPtBYKk-OssZtLqyFu7sY1_lR7vgFuJV6NtcV5zpsJ
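Once every part has been uploaded, you finish the archive with complete-multipart-upload, passing the total archive size in bytes and the SHA256 tree hash of the whole file (the upload ID below is the example one from above, and the size is the one from your error message):
aws glacier complete-multipart-upload --account-id - --vault-name my-vault --upload-id 19gaRezEXAMPLES6Ry5YYdqthHOC_kGRCT03L9yetr220UmPtBYKk-OssZtLqyFu7sY1_lR7vgFuJV6NtcV5zpsJ --archive-size 13769102233 --checksum SHA256_TREE_HASH_OF_ARCHIVE
The step-by-step CLI guide linked below also covers how to compute the tree hash.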
Here is step-by-step information on how to do it with the CLI:
https://docs.aws.amazon.com/cli/latest/userguide/cli-services-glacier.html
See here for even more information: https://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html
https://docs.aws.amazon.com/cli/latest/reference/glacier/upload-multipart-part.html
According to the CLI docs, it looks like you must provide a SHA256 tree hash.
https://docs.aws.amazon.com/cli/latest/reference/glacier/upload-archive.html (see paragraph 3)
If I'm reading this correctly, it looks like your file is about 13 GB. upload-archive only accepts archives up to 4 GB in a single operation, so for a file this size you'll want to use the multipart upload commands instead. https://docs.aws.amazon.com/cli/latest/reference/glacier/upload-multipart-part.html
As a side note, I also noticed that the corresponding Glacier REST API operation seems to require Content-Length as a header, but I don't see it mentioned in the CLI docs.
https://docs.aws.amazon.com/amazonglacier/latest/dev/api-archive-post.html
Can anyone explain this behaviour:
When I try to download a file from S3, I get the following error:
An error occurred (403) when calling the HeadObject operation: Forbidden.
Commandline used:
aws s3 cp s3://bucket/raw_logs/my_file.log .
However, when I use the S3 console website, I'm able to download the file without issues.
The access key used by the commandline is correct. I verified this, and other AWS operations via commandline work fine. The access key is tied to the same user account I use in the AWS console.
So I assume you're sure about the IAM policy of your user and that the file exists in your bucket.
If you have set a default region in your configuration but the bucket was not created in that region (yes, S3 buckets are created in a specific region), the CLI will not find it. Make sure to add the region flag to the command:
aws s3 cp s3://bucket/raw_logs/my_file.log . --region <region of the bucket>
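If you're not sure which region the bucket was created in, you can ask S3 directly (assuming your credentials are allowed to call GetBucketLocation):
aws s3api get-bucket-location --bucket bucket
A null or empty LocationConstraint in the response means us-east-1.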
Other notes:
Make sure to upgrade to the latest version of the AWS CLI.
This can also be caused by an unsynchronized system clock. I don't know the internals, but for some commands the CLI compares the system clock against S3 when signing requests, so if your clock is out of sync it might cause issues.
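If you want to rule the clock out, on a systemd-based Linux machine something like the following shows whether the system clock is being kept in sync (on Windows, w32tm /query /status is the rough equivalent):
timedatectl status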
I had a similar issue due to having multi-factor authentication (MFA) enabled on my account. Check out how to configure MFA for the AWS CLI here: https://aws.amazon.com/premiumsupport/knowledge-center/authenticate-mfa-cli/
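In short (the account ID, user name, and token code below are placeholders), you request temporary credentials with your MFA device and use those for the cp:
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/my-user --token-code 123456 --duration-seconds 3600
Then export AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN from the returned Credentials block (or put them in a named profile) and retry the command.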