Lambda PutObjectCommand failing with "Resolved credential object is not valid" - amazon-web-services

I have a lambda which is attempting to put an object in an S3 bucket.
The code to configure the s3 client is as follows:
const configuration: S3ClientConfig = {
region: 'us-west-2',
};
if (process.env.DEVELOPMENT_MODE) {
configuration.credentials = {
accessKeyId: process.env.AWS_ACCESS_KEY!,
secretAccessKey: process.env.AWS_SECRET_KEY!,
}
}
export const s3 = new S3Client(configuration);
And the code to upload the file is as follows:
s3.send(new PutObjectCommand({
Bucket: bucketName,
Key: fileName,
ContentType: contentType,
Body: body,
}))
This works locally. The lambda's role includes a policy which in turn includes the following statement:
{
"Action": [
"s3:DeleteObject",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::BUCKET_NAME/*"
],
"Effect": "Allow"
}
However, when I invoke this lambda, it fails with the following stack trace
Error: Resolved credential object is not valid
at SignatureV4.validateResolvedCredentials (webpack://backend/../node_modules/@aws-sdk/signature-v4-multi-region/node_modules/@aws-sdk/signature-v4/dist-es/SignatureV4.js?:307:19)
at SignatureV4.eval (webpack://backend/../node_modules/@aws-sdk/signature-v4-multi-region/node_modules/@aws-sdk/signature-v4/dist-es/SignatureV4.js?:50:30)
at step (webpack://backend/../node_modules/tslib/tslib.es6.js?:130:23)
at Object.eval [as next] (webpack://backend/../node_modules/tslib/tslib.es6.js?:111:53)
at fulfilled (webpack://backend/../node_modules/tslib/tslib.es6.js?:101:58)
I'm using (what is currently) the latest javascript aws sdk, version 3.165.0. What am I missing here?

The problem was that I was trying to load the credentials from environment variables instead of relying on the IAM role. It turns out process.env.DEVELOPMENT_MODE was resolving to the string 'true' rather than the boolean true; environment variables are always strings, so the bare truthiness check passed even in the deployed Lambda. Comparing against the string explicitly fixed it:
if (process.env.DEVELOPMENT_MODE === 'true') {
configuration.credentials = {
accessKeyId: process.env.AWS_ACCESS_KEY!,
secretAccessKey: process.env.AWS_SECRET_KEY!,
}
}
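In the deployed Lambda no explicit credentials are needed at all: when the credentials option is omitted, the v3 SDK's default provider chain picks up the execution role's temporary credentials. A minimal sketch of that case (the bucket, key and body below are placeholders):
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
// No credentials passed: in Lambda the default provider chain resolves
// the execution role's temporary credentials automatically.
const s3 = new S3Client({ region: 'us-west-2' });
export const handler = async () => {
  // Placeholder bucket/key/body, for illustration only.
  await s3.send(new PutObjectCommand({
    Bucket: 'my-bucket',
    Key: 'example.txt',
    Body: 'hello',
  }));
};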

Related

Unable to auth into AppSync GraphQL api from Lambda using IAM and AppSyncClient

I'm using an amplify stack and need to perform some actions to my graphql api which has dynamodb behind it. The request in my lambda function returns an Unauthorized error: "Not Authorized to access getSourceSync on type SourceSync", where getSourceSync is the gql query and SourceSync is the model name.
My schema.graphql for this particular model is set up as follows. Note the auth rule allow: private, provider: iam:
type SourceSync @model (subscriptions: { level: off }) @auth(rules: [
{allow: private, provider: iam}
{allow: groups, groups: ["Admins"], provider: userPools},
{allow: groups, groups: ["Users"], operations: [create], provider: userPools},
{allow: groups, groupsField: "readGroups", operations: [create, read], provider: userPools},
{allow: groups, groupsField: "editGroups", provider: userPools}]) {
id: ID! @primaryKey
name: String
settings_id: ID @index(name: "bySettingsId", queryField: "sourceSyncBySettingsId")
settings: Settings @hasOne(fields: ["settings_id"])
childLookup: String
createdAt: AWSDateTime!
updatedAt: AWSDateTime!
_createdBy: String
_lastChangedBy: String
_localChanges: AWSJSON
readGroups: [String]
editGroups: [String]
}
My lambda function's role has the following inline policy attached to it. (Actual ID values have been omitted for security purposes on this post):
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"appsync:GraphQL"
],
"Resource": [
"arn:aws:appsync:us-east-1:111myaccountID:apis/11mygraphqlapiID/*"
],
"Effect": "Allow"
},
{
"Action": [
"appsync:GetType"
],
"Resource": [
"*"
],
"Effect": "Allow"
}
]
}
And finally my lambda function is set up as follows with a simple query test:
/* stuff */
"use strict";
const axios = require("axios");
const awsAppSync = require("aws-appsync").default;
const gql = require("graphql-tag");
require("cross-fetch/polyfill");
const { PassThrough } = require("stream");
const aws = require("aws-sdk");
aws.config.update({
region: process.env.AWS_REGION,
});
const appSync = new aws.AppSync();
const graphqlClient = new awsAppSync({
url: process.env.API_GRAPHQLAPIENDPOINTOUTPUT,
region: process.env.AWS_REGION,
auth: {
type: "AWS_IAM",
credentials: aws.config.credentials,
},
disableOffline: true
});
exports.handler = async (event, context) => {
console.log('context :: '+JSON.stringify(context));
console.log('aws config :: '+JSON.stringify(aws.config));
const sourceSyncTypes = await appSync
.getType({
apiId: process.env.API_GRAPHQLAPIIDOUTPUT,
format: "JSON",
typeName: "SourceSync",
})
.promise();
console.log('ss = '+JSON.stringify(sourceSyncTypes));
try {
const qs = gql`query GetSourceSync {
getSourceSync(id: "ov3") {
id
name
}
}`;
const res = await graphqlClient.query({query: qs, fetchPolicy: 'no-cache'});
console.log(JSON.stringify(res));
}
catch(e) {
console.log('ERR :: '+e);
console.log(JSON.stringify(e));
}
};
Found the solution. There seems to be an issue with triggering a rebuild of the resolvers on the api after permitting a function to access the graphql api. However, there is a distinction to note:
If the graphql api is part of an amplify app stack, then only functions created through the amplify cli for that app (e.g. amplify add function) and granted access to the api there will be able to access it.
Additionally, when you create or update the function to give it permissions, you must make sure that the api stack also gets updated during the amplify push operation. You can trigger this by simply adding or removing a space in a comment inside your amplify/backend/api//schema.graphql file.
If the function was created "adhoc" directly through the aws console, but it is trying to access a graphql api that was created as part of an amplify app stack, then you will need to put that function's role in amplify/backend/api/<apiname>/custom-roles.json in the format
{
"adminRoleNames": ["<role name>", "<role name 2>", ...]
}
Documentation references here.
If neither your api nor your lambda function was created with the amplify cli as part of an app stack, then you just need to grant the lambda's role access to the graphql resources for query, mutation and subscription in IAM, via an inline policy or a pre-defined policy.

s3 pre-signed url for buckets in us-east-1, us-west-1, us-west-2 missing fields

I am trying to generate s3 pre-signed url so that I can PUT objects into my buckets from the client side. My regions of interest initially were:
ap-northeast-2 (seoul)
ap-south-1 (mumbai)
us-east-1
us-west-2
Below is the code I use to get the signed url:
const AWS = require("aws-sdk");
const { v4: uuidv4 } = require("uuid");
const region_name = "x-region";
const upload_bucket = "bucket-in-x-region";
AWS.config.update({ region: region_name });
const s3 = new AWS.S3();
async function getPresignedUrl() {
const random_id = uuidv4();
const obj_key = `${random_id}.png`;
const s3Params = {
Bucket: upload_bucket,
Key: obj_key,
Expires: 60,
ContentType: "image/png",
ACL: "public-read",
};
const presigned_url = await s3.getSignedUrlPromise("putObject", s3Params);
return presigned_url;
}
getPresignedUrl().then((data) => {
console.log(data);
});
Buckets in ap-northeast-2 and ap-south-1 return a WORKING signed url with the fields:
Content-Type, X-Amz-Algorithm, X-Amz-Credential, X-Amz-Date, X-Amz-Expires, X-Amz-Security-Token, X-Amz-Signature, X-Amz-SignedHeaders, x-amz-acl
However, all the buckets I tried in us-east-1, us-west-1 and us-west-2 return a NON-WORKING url with only the following fields:
Content-Type, Expires, Signature, x-amz-acl
When I run this code in lambda, the invocation role has a policy with the following statement:
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::bucket-name-mumbai/*",
"arn:aws:s3:::bucket-name-seoul/*",
"arn:aws:s3:::bucket-name-useast/*",
"arn:aws:s3:::bucket-name-uswest/*",
]
}
Any ideas will be appreciated.
Add signatureVersion: 'v4' in AWS config and it should work:
AWS.config.update({ region: region_name, signatureVersion: 'v4' });
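If you'd rather not touch the global config, the same option can also be passed to the S3 client constructor; a sketch using the same region_name variable as above:
const s3 = new AWS.S3({ region: region_name, signatureVersion: 'v4' });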

How can I use IAM to invoke AppSync within AWS Lambda?

I'm currently in the process of implementing a subscription mutation within AWS Lambda using AppSync. I want to use IAM and avoid using any other type of AUTH mechanism as I'm calling it within the AWS stack. Unfortunately, I'm receiving the following 403 error:
(Excerpt from the SQS-triggered Lambda's CloudWatch log)
{
"errorMessage": "Response not successful: Received status code 403",
"name": "ServerError",
"errorType": "UnrecognizedClientException",
"message": "The security token included in the request is invalid."
}
I've tried following these to no avail, but I don't know what I'm missing:
https://medium.com/@jan.hesters/how-to-use-aws-appsync-in-lambda-functions-e593a9cef1d5
https://www.edwardbeazer.com/using-appsync-client-from-lambda/
https://adrianhall.github.io/cloud/2018/10/26/backend-graphql-trigger-appsync/
How to send GraphQL mutation from one server to another?
AWS Appsync + HTTP DataSources + AWS IAM
AWS Appsync Invoke mutate from Lambda?
Here's the code that I'm currently calling it from:
import AWS from "aws-sdk";
import { AWSAppSyncClient } from "aws-appsync";
import { Mutation, mutations } from "./mutations/";
import "cross-fetch/polyfill";
/**
*
*/
AWS.config.update({
region: Config.region,
});
export class AppSyncClient {
client: AWSAppSyncClient<any>;
constructor() {
if (!env.APPSYNC_ENDPOINT) {
throw new Error("APPSYNC_ENDPOINT not defined");
}
/**
* We create the AppSyncClient with the AWS_IAM
* authentication.
*/
this.client = new AWSAppSyncClient({
url: env.APPSYNC_ENDPOINT,
region: Config.region,
auth: {
credentials: AWS.config.credentials!,
type: "AWS_IAM",
},
disableOffline: true,
});
}
/**
* Sends a mutation on the AppSync Client
* @param mutate The Mutation that will be sent with the variables.
* @returns
*/
sendMutation(mutate: Mutation) {
const mutation = mutations[mutate.type] as any;
const variables = mutate.variables;
console.log("Sending the mutation");
console.log("Variables is ", JSON.stringify(variables));
return this.client.mutate({
mutation,
fetchPolicy: "network-only",
variables,
});
}
}
Here's the current IAM policy for the SQS-triggered Lambda:
{
"Statement": [
{
"Action": [
"appsync:GraphQL"
],
"Effect": "Allow",
"Resource": [
"arn:aws:appsync:us-east-2:747936726382:apis/myapi"
]
}
],
"Version": "2012-10-17"
}
I know it is not an IAM problem from the lambda, because I've tried momentarily giving it full access, and I still got the 403 error.
I've also verified that AppSync has the IAM permission configured (as an additional provider).
Do you guys have any ideas? I'm surprised that this is such a ghost topic with so few configuration references.
I finally nailed it. I went and re-read Adrian Hall's post for the third time, and it led me to the solution.
Please note that I installed the AWS AppSync client, which is not strictly needed but simplifies the process (otherwise you'd have to sign the request yourself; for that, see Adrian Hall's post).
There are a couple of things:
You need to polyfill "fetch" by including cross-fetch (otherwise you're going to get hit by an Invariant Violation from the Apollo Client, which AppSync uses internally).
You need to pass the lambda's internal IAM credentials (which I didn't even know existed) to the configuration portion of the AWSAppSyncClient.
You need to add the proper permission to the IAM role of the lambda, in this case ["appsync:GraphQL"] for the action.
Here's some code:
This is the AppSync code.
// The code is written in TypeScript.
// https://adrianhall.github.io/cloud/2018/10/26/backend-graphql-trigger-appsync/
// https://www.edwardbeazer.com/using-appsync-client-from-lambda/
import { env } from "process";
import { Config, env as Env } from "../../../../shared";
// This is such a bad practice
import AWS from "aws-sdk";
import { AWSAppSyncClient } from "aws-appsync";
import { Mutation, mutations } from "./mutations/";
// Very important, otherwise it won't work!!! You'll have Invariant Violation
// from Apollo Client.
import "cross-fetch/polyfill";
/**
*
*/
AWS.config.update({
region: Config.region,
credentials: new AWS.Credentials(
env.AWS_ACCESS_KEY_ID!,
env.AWS_SECRET_ACCESS_KEY!,
env.AWS_SESSION_TOKEN!
),
});
export class AppSyncClient {
client: AWSAppSyncClient<any>;
constructor() {
// Your AppSync endpoint - The Full URL.
if (!Env.APPSYNC_ENDPOINT) {
throw new Error("APPSYNC_ENDPOINT not defined");
}
/**
* We create the AppSyncClient with the AWS_IAM
* authentication.
*/
this.client = new AWSAppSyncClient({
url: Env.APPSYNC_ENDPOINT,
region: Config.region,
auth: {
credentials: AWS.config.credentials!,
type: "AWS_IAM",
},
disableOffline: true,
});
}
/**
* Sends a mutation on the AppSync Client
* @param mutate The Mutation that will be sent with the variables.
* @returns
*/
// The mutation is an object that holds the mutation in
// the `gql` tag. You can omit this part.
sendMutation(mutate: Mutation) {
const mutation = mutations[mutate.type] as any;
const variables = mutate.variables;
// This is the important part.
return this.client.mutate({
mutation,
// Specify "no-cache" in the policy.
// network-only won't work.
fetchPolicy: "no-cache",
variables,
});
}
}
We need to enable IAM in the AppSync authorization mechanism. Yes, it is possible to have multiple authentication providers enabled. I'm currently using OPEN_ID and IAM simultaneously.
https://us-east-2.console.aws.amazon.com/appsync/home?region=us-east-2#/myappsync-id/v1/settings
Here's the Lambda's IAM policy that executes the GQL:
{
"Statement": [
{
"Action": [
"appsync:GraphQL"
],
"Effect": "Allow",
"Resource": [
"arn:aws:appsync:us-east-2:747936726382:apis/ogolfgja65edlmhkcpp3lcmwli/*"
]
}
],
"Version": "2012-10-17"
}
You can further restrict here in the following fashion:
arn:${Partition}:appsync:${Region}:${Account}:apis/${GraphQLAPIId}/types/${TypeName}/fields/${FieldName}
arn:aws:appsync:us-east-2:747936726382:apis/ogolfgja65edlmhkcpp3lcmwli/types/Mutation/fields/myCustomField
Note that we should restrict this further, as we are currently giving the lambda access to the entire API.
In your .gql file (AppSync GraphQL schema), add the @aws_iam directive to the mutation that is used to send the subscriptions, in order to restrict access from the front-end.
type Mutation {
addUsersMutationSubscription(
input: AddUsersSagaResultInput!
): AddUsersSagaResult @aws_iam
}

gulp-awspublish with AWS profile instead of AWS_ACCESS_KEY and secret

I am trying to deploy my nuxt static website to S3 using this guide.
https://nuxtjs.org/faq/deployment-aws-s3-cloudfront
The deploy script works when using the following exports, which I tried on a personal AWS account:
AWS_ACCESS_KEY_ID="key"
AWS_SECRET_ACCESS_KEY="secret"
It does not work when unsetting these exports and using the AWS_PROFILE export on a separate AWS account. On that AWS account I am not able to get an access key and secret because of company policy.
I also use these AWS profiles for other things so I am sure they are configured properly.
The error I am getting in the console is:
Error: Connect EHOSTUNREACH <EC2 IP address???>
The part in brackets is the IP address I am seeing. It is odd that it tries to connect to EC2, since the script only uses S3 and CloudFront.
The script I am using
#!/bin/bash
export AWS_PROFILE="profile_name"
export AWS_BUCKET_NAME="example.com"
export AWS_CLOUDFRONT="UPPERCASE"
# Load nvm (node version manager), install node (version in .nvmrc), and npm install packages
[ -s "$HOME/.nvm/nvm.sh" ] && source "$HOME/.nvm/nvm.sh" && nvm use
# Npm install if not already.
[ ! -d "node_modules" ] && npm install
npm run generate
gulp deploy
As for the gulpfile:
const gulp = require('gulp')
const awspublish = require('gulp-awspublish')
const cloudfront = require('gulp-cloudfront-invalidate-aws-publish')
const parallelize = require('concurrent-transform')
// https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html
const config = {
// Required
params: {
Bucket: process.env.AWS_BUCKET_NAME
},
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
signatureVersion: 'v3'
},
// Optional
deleteOldVersions: false, // NOT FOR PRODUCTION
distribution: process.env.AWS_CLOUDFRONT, // CloudFront distribution ID
region: process.env.AWS_DEFAULT_REGION,
headers: {
/* 'Cache-Control': 'max-age=315360000, no-transform, public', */
},
// Sensible Defaults - gitignore these Files and Dirs
distDir: 'dist',
indexRootPath: true,
cacheFileName: '.awspublish',
concurrentUploads: 10,
wait: true // wait for CloudFront invalidation to complete (about 30-60 seconds)
}
gulp.task('deploy', function () {
// create a new publisher using S3 options
// http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#constructor-property
const publisher = awspublish.create(config)
let g = gulp.src('./' + config.distDir + '/**')
// publisher will add Content-Length, Content-Type and headers specified above
// If not specified it will set x-amz-acl to public-read by default
g = g.pipe(
parallelize(publisher.publish(config.headers), config.concurrentUploads)
)
// Invalidate CDN
if (config.distribution) {
console.log('Configured with CloudFront distribution')
g = g.pipe(cloudfront(config))
} else {
console.log(
'No CloudFront distribution configured - skipping CDN invalidation'
)
}
// Delete removed files
if (config.deleteOldVersions) {
g = g.pipe(publisher.sync())
}
// create a cache file to speed up consecutive uploads
g = g.pipe(publisher.cache())
// print upload updates to console
g = g.pipe(awspublish.reporter())
return g
})
The gulp-awspublish docs mention it should be possible to connect with an AWS profile by adding it to the export (which I do in my deploy file).
They also mention using the aws js sdk, which I also tried by integrating the following snippet.
var AWS = require("aws-sdk");
var publisher = awspublish.create({
region: "your-region-id",
params: {
Bucket: "..."
},
credentials: new AWS.SharedIniFileCredentials({ profile: "myprofile" })
});
When I use the AWS_PROFILE export it does at least seem to authenticate. When using the SDK I receive an error mentioning:
CredentialsError: Missing Credentials in config, if using
AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
Adding the latter (AWS_SDK_LOAD_CONFIG=1) to my deployment script does not make any difference.
Any idea if I am missing something in the script to make it work?
My user policies were set as mentioned in the tutorial. Maybe they forgot something?
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::example.com"]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload"
],
"Resource": ["arn:aws:s3:::example.com/*"]
},
{
"Effect": "Allow",
"Action": [
"cloudfront:CreateInvalidation",
"cloudfront:GetInvalidation",
"cloudfront:ListInvalidations",
"cloudfront:UnknownOperation"
],
"Resource": "*"
}
]
}
Since awspublish uses the javascript sdk, I needed to export AWS_SDK_LOAD_CONFIG=true, which solved the issue!
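For anyone wiring this into the gulpfile rather than the shell, here is a rough sketch of two equivalent approaches, assuming a profile named myprofile (the profile name is a placeholder):
// Option 1: set the flag before the SDK is loaded, the programmatic
// equivalent of `export AWS_SDK_LOAD_CONFIG=true` in the deploy script.
process.env.AWS_SDK_LOAD_CONFIG = 'true'
const AWS = require('aws-sdk')
const awspublish = require('gulp-awspublish')
// Option 2: point the publisher at a shared-ini profile explicitly.
const publisher = awspublish.create({
  region: process.env.AWS_DEFAULT_REGION,
  params: { Bucket: process.env.AWS_BUCKET_NAME },
  credentials: new AWS.SharedIniFileCredentials({ profile: 'myprofile' })
})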

aws lambda function getting access denied when getObject from s3

I am getting an access denied error from the S3 AWS service in my Lambda function.
This is the code:
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true }); // Enable ImageMagick integration.
exports.handler = function(event, context) {
var srcBucket = event.Records[0].s3.bucket.name;
// Object key may have spaces or unicode non-ASCII characters.
var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
/*
{
originalFilename: <string>,
versions: [
{
size: <number>,
crop: [x,y],
max: [x, y],
rotate: <number>
}
]
}*/
var fileInfo;
var dstBucket = "xmovo.transformedimages.develop";
try {
//TODO: Decompress and decode the returned value
fileInfo = JSON.parse(key);
//download s3File
// get reference to S3 client
var s3 = new AWS.S3();
// Download the image from S3 into a buffer.
s3.getObject({
Bucket: srcBucket,
Key: key
},
function (err, response) {
if (err) {
console.log("Error getting from s3: >>> " + err + "::: Bucket-Key >>>" + srcBucket + "-" + key + ":::Principal>>>" + event.Records[0].userIdentity.principalId, err.stack);
return;
}
// Infer the image type.
var img = gm(response.Body);
var imageType = null;
img.identify(function (err, data) {
if (err) {
console.log("Error image type: >>> " + err);
deleteFromS3(srcBucket, key);
return;
}
imageType = data.format;
//foreach of the versions requested
async.each(fileInfo.versions, function (currentVersion, callback) {
//apply transform
async.waterfall([async.apply(transform, response, currentVersion), uploadToS3, callback]);
}, function (err) {
if (err) console.log("Error on excecution of watefall: >>> " + err);
else {
//when all done then delete the original image from srcBucket
deleteFromS3(srcBucket, key);
}
});
});
});
}
catch (ex){
context.fail("exception through: " + ex);
deleteFromS3(srcBucket, key);
return;
}
function transform(response, version, callback){
var imageProcess = gm(response.Body);
if (version.rotate!=0) imageProcess = imageProcess.rotate("black",version.rotate);
if(version.size!=null) {
if (version.crop != null) {
//crop the image from the coordinates
imageProcess=imageProcess.crop(version.size[0], version.size[1], version.crop[0], version.crop[1]);
}
else {
//find the bigger and resize proportioned the other dimension
var widthIsMax = version.size[0]>version.size[1];
var maxValue = Math.max(version.size[0],version.size[1]);
imageProcess=(widthIsMax)?imageProcess.resize(maxValue):imageProcess.resize(null, maxValue);
}
}
//finally convert the image to jpg 90%
imageProcess.toBuffer("jpg",{quality:90}, function(err, buffer){
if (err) callback(err);
callback(null, version, "image/jpeg", buffer);
});
}
function deleteFromS3(bucket, filename){
s3.deleteObject({
Bucket: bucket,
Key: filename
});
}
function uploadToS3(version, contentType, data, callback) {
// Stream the transformed image to a different S3 bucket.
var dstKey = fileInfo.originalFilename + "_" + version.size + ".jpg";
s3.putObject({
Bucket: dstBucket,
Key: dstKey,
Body: data,
ContentType: contentType
}, callback);
}
};
This is the error on Cloudwatch:
AccessDenied: Access Denied
This is the stack error:
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:329:35)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:596:14)
at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:21:10)
at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:37:9)
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:598:12)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
Without any other description or info
On the S3 bucket, permissions allow everyone to put, list and delete.
What can I do to access the S3 bucket?
PS: on Lambda event properties the principal is correct and has administrative privileges.
Interestingly enough, AWS returns 403 (access denied) when the file does not exist. Be sure the target file is in the S3 bucket.
If you are specifying the Resource don't forget to add the sub folder specification as well. Like this:
"Resource": [
"arn:aws:s3:::BUCKET-NAME",
"arn:aws:s3:::BUCKET-NAME/*"
]
Your Lambda does not have privileges (S3:GetObject).
Go to IAM dashboard, check the role associated with your Lambda execution. If you use AWS wizard, it automatically creates a role called oneClick_lambda_s3_exec_role. Click on Show Policy. It should show something similar to the attached image. Make sure S3:GetObject is listed.
I ran into this issue and after hours of IAM policy madness, the solution was to:
Go to S3 console
Click bucket you are interested in.
Click 'Properties'
Unfold 'Permissions'
Click 'Add more permissions'
Choose 'Any Authenticated AWS User' from dropdown. Select 'Upload/Delete' and 'List' (or whatever you need for your lambda).
Click 'Save'
Done.
Your carefully written IAM role policies don't matter, neither do specific bucket policies (I've written those too to make it work). Or they just don't work on my account, who knows.
[EDIT]
After a lot of tinkering the above approach is not the best. Try this:
Keep your role policy as in the helloV post.
Go to S3. Select your bucket. Click Permissions. Click Bucket Policy.
Try something like this:
{
"Version": "2012-10-17",
"Id": "Lambda access bucket policy",
"Statement": [
{
"Sid": "All on objects in bucket lambda",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::AWSACCOUNTID:root"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::BUCKET-NAME/*"
},
{
"Sid": "All on bucket by lambda",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::AWSACCOUNTID:root"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::BUCKET-NAME"
}
]
}
Worked for me and does not require for you to share with all authenticated AWS users (which most of the time is not ideal).
If you have encryption set on your S3 bucket (such as AWS KMS), you may need to make sure the IAM role applied to your Lambda function is added to the list of IAM > Encryption keys > region > key > Key Users for the corresponding key that you used to encrypt your S3 bucket at rest.
In my screenshot, for example, I added the CyclopsApplicationLambdaRole role that I have applied to my Lambda function as a Key User in IAM for the same AWS KMS key that I used to encrypt my S3 bucket. Don't forget to select the correct region for your key when you open up the Encryption keys UI.
Find the execution role you've applied to your Lambda function:
Find the key you used to add encryption to your S3 bucket:
In IAM > Encryption keys, choose your region and click on the key name:
Add the role as a Key User in IAM Encryption keys for the key specified in S3:
If all the other policy ducks are in a row, S3 will still return an Access Denied message if the object doesn't exist AND the requester doesn't have ListBucket permission on the bucket.
From https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html:
...If the object you request does not exist, the error Amazon S3
returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket permission on the bucket, Amazon S3 will
return an HTTP status code 404 ("no such key") error. if you don’t
have the s3:ListBucket permission, Amazon S3 will return an HTTP
status code 403 ("access denied") error.
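A quick way to tell which case you are hitting is to probe the object directly with a HEAD request. This is only a sketch with placeholder bucket and key names, and the error codes are what the v2 SDK reports in my experience:
const AWS = require('aws-sdk')
const s3 = new AWS.S3()
// 'NotFound' means the key is missing and you do have ListBucket;
// 'Forbidden' means the key is missing or ListBucket is not granted.
s3.headObject({ Bucket: 'my-bucket', Key: 'some/key.png' }, (err) => {
  if (!err) return console.log('object exists and is readable')
  if (err.code === 'NotFound') return console.log('missing object (404)')
  if (err.code === 'Forbidden') return console.log('access denied (403)')
  console.log('unexpected error: ' + err)
})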
I too ran into this issue; I fixed it by providing s3:GetObject* in the ACL, as it is attempting to obtain a version of that object.
I tried to execute a basic blueprint Python lambda function [example code] and I had the same issue. My execution role was lambda_basic_execution.
I went to S3 > (my bucket name here) > permissions .
Because I'm a beginner, I used the Policy Generator provided by Amazon rather than writing the JSON myself: http://awspolicygen.s3.amazonaws.com/policygen.html
my JSON looks like this:
{
"Id": "Policy153536723xxxx",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt153536722xxxx",
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::tokabucket/*",
"Principal": {
"AWS": [
"arn:aws:iam::82557712xxxx:role/lambda_basic_execution"
]
}
}
]
}
And then the code executed nicely.
I solved my problem by following all the instructions from the AWS article "How do I allow my Lambda execution role to access my Amazon S3 bucket?":
Create an AWS Identity and Access Management (IAM) role for the Lambda function that grants access to the S3 bucket.
Modify the IAM role's trust policy.
Set the IAM role as the Lambda function's execution role.
Verify that the bucket policy grants access to the Lambda function's execution role.
I was trying to read a file from s3 and create a new file by changing the content of the file read (Lambda + Node). Reading the file from S3 did not have any problem. As soon as I tried writing to the S3 bucket I got an 'Access Denied' error.
I tried everything listed above but couldn't get rid of 'Access Denied'. Finally I was able to get it working by giving 'List Object' permission to everyone on my bucket.
Obviously this not the best approach but nothing else worked.
After searching for a long time I saw that my bucket policy only allowed read access and not put access:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicListGet",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:List*",
"s3:Get*",
"s3:Put*"
],
"Resource": [
"arn:aws:s3:::bucketName",
"arn:aws:s3:::bucketName/*"
]
}
]
}
Another issue might be that in order to fetch objects from another region you need to initialize a new s3 client with that region name, like:
const getS3Client = (region) => new S3({ region })
I used this function to get an s3 client based on the region.
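For example (a sketch only; the region, bucket and key below are placeholders):
const { S3 } = require('aws-sdk')
const getS3Client = (region) => new S3({ region })
// Read an object from a bucket that lives in eu-west-1 using a client
// pinned to that region.
getS3Client('eu-west-1')
  .getObject({ Bucket: 'my-eu-bucket', Key: 'file.json' })
  .promise()
  .then((res) => console.log(res.ContentType))
  .catch((err) => console.error(err.code))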
I was struggling with this issue for hours. I was using AmazonS3EncryptionClient and nothing I did helped. Then I noticed that the client is actually deprecated, so I thought I'd try switching to the builder model they have:
var builder = AmazonS3EncryptionClientBuilder.standard()
.withEncryptionMaterials(new StaticEncryptionMaterialsProvider(encryptionMaterials))
if (accessKey.nonEmpty && secretKey.nonEmpty) builder = builder.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey.get, secretKey.get)))
builder.build()
And... that solved it. Looks like Lambda has trouble injecting the credentials in the old model, but works well in the new one.
I was getting the same error "AccessDenied: Access Denied" while cropping s3 images using lambda function. I updated the s3 bucket policy and IAM role inline policy as per the document link given below.
But still, I was getting the same error. Then I realised I was trying to give "public-read" access on a private bucket. After removing ACL: 'public-read' from S3.putObject, the problem was resolved.
https://aws.amazon.com/premiumsupport/knowledge-center/access-denied-lambda-s3-bucket/
I had this error message in aws lambda environment when using boto3 with python:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
It turns out I needed an extra permission because I was using object tags. If your objects have tags, you will need both s3:GetObject AND s3:GetObjectTagging to get the object.
I faced the same problem when creating a Lambda function that needed to read S3 bucket content. I created the Lambda function and the S3 bucket using AWS CDK. To solve this within AWS CDK, I used the magic from the docs:
Resources that use execution roles, such as lambda.Function, also
implement IGrantable, so you can grant them access directly instead of
granting access to their role. For example, if bucket is an Amazon S3
bucket, and function is a Lambda function, the code below grants the
function read access to the bucket.
bucket.grantRead(function);
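For reference, a minimal sketch of what that looks like in a stack, assuming CDK v2 import paths (the construct names and asset path are made up):
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as lambda from 'aws-cdk-lib/aws-lambda';
export class ExampleStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    const bucket = new s3.Bucket(this, 'DataBucket');
    const fn = new lambda.Function(this, 'ReaderFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
    });
    // Grants read access (s3:GetObject*, s3:List*, etc.) on the bucket
    // to the function's execution role.
    bucket.grantRead(fn);
  }
}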