I have a CDK script that creates an S3 bucket and a Lambda function, then adds an S3 trigger to the Lambda:
const up_bk = new s3.Bucket(this, 'cdk-st-in-bk', { // bucket for image resizing
  bucketName: `cdk-st-${targetEnv}-resource-in-bk`,
  removalPolicy: RemovalPolicy.DESTROY,
  autoDeleteObjects: true,
  cors: [{
    allowedMethods: [
      s3.HttpMethods.GET,
      s3.HttpMethods.POST,
      s3.HttpMethods.PUT,
      s3.HttpMethods.DELETE,
      s3.HttpMethods.HEAD,
    ],
    allowedHeaders: ["*"],
    allowedOrigins: ["*"],
    exposedHeaders: ["ETag"],
    maxAge: 3000
  }]
});
const resizerLambda = new lambda.DockerImageFunction(this, "ResizerLambda", {
  code: lambda.DockerImageCode.fromImageAsset("resizer-sam/resizer"),
});
resizerLambda.addEventSource(new S3EventSource(up_bk, {
  events: [ s3.EventType.OBJECT_CREATED ],
}));
Now, it automatically creates a role, st-dev-base-stack-ResizerLambdaServiceRoleAE27CE82-1LWJL0D35A0GW,
but that role only has AWSLambdaBasicExecutionRole.
So when I try to access the bucket from the Lambda, I get an error. For example,

obj = s3_client.get_object(Bucket=bucket_name, Key=obj_key)

fails with:

"An error occurred (AccessDenied) when calling the GetObject operation: Access Denied"

I guess I should add AmazonS3FullAccess to this role, but how can I do this?
You need to give the Lambda function permission to read from the bucket:
up_bk.grantRead(resizerLambda);
If you also need it to write to the bucket, do:
up_bk.grantReadWrite(resizerLambda);
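If you would rather attach the AmazonS3FullAccess managed policy you mentioned (much broader than the grants above), a minimal sketch, assuming iam is imported from aws-cdk-lib/aws-iam:

// Attach the AWS managed policy to the auto-created execution role.
// resizerLambda.role can be undefined (e.g. for imported functions), hence `?.`.
resizerLambda.role?.addManagedPolicy(
  iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonS3FullAccess'),
);

The grant methods are generally preferable because they scope permissions to the one bucket instead of all of S3.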
Related
I have a lambda which is attempting to put an object in an S3 bucket.
The code to configure the s3 client is as follows:
const configuration: S3ClientConfig = {
  region: 'us-west-2',
};

if (process.env.DEVELOPMENT_MODE) {
  configuration.credentials = {
    accessKeyId: process.env.AWS_ACCESS_KEY!,
    secretAccessKey: process.env.AWS_SECRET_KEY!,
  }
}

export const s3 = new S3Client(configuration);
And the code to upload the file is as follows:
s3.send(new PutObjectCommand({
  Bucket: bucketName,
  Key: fileName,
  ContentType: contentType,
  Body: body,
}))
This works locally. The lambda's role includes a policy which in turn includes the following statement:
{
  "Action": [
    "s3:DeleteObject",
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::BUCKET_NAME/*"
  ],
  "Effect": "Allow"
}
However, when I invoke this lambda, it fails with the following stack trace:
Error: Resolved credential object is not valid
    at SignatureV4.validateResolvedCredentials (webpack://backend/../node_modules/@aws-sdk/signature-v4-multi-region/node_modules/@aws-sdk/signature-v4/dist-es/SignatureV4.js?:307:19)
    at SignatureV4.eval (webpack://backend/../node_modules/@aws-sdk/signature-v4-multi-region/node_modules/@aws-sdk/signature-v4/dist-es/SignatureV4.js?:50:30)
    at step (webpack://backend/../node_modules/tslib/tslib.es6.js?:130:23)
    at Object.eval [as next] (webpack://backend/../node_modules/tslib/tslib.es6.js?:111:53)
    at fulfilled (webpack://backend/../node_modules/tslib/tslib.es6.js?:101:58)
I'm using (what is currently) the latest JavaScript AWS SDK, version 3.165.0. What am I missing here?
The problem was that I was trying to load the credentials from environment variables instead of relying on the IAM role. It turns out process.env.DEVELOPMENT_MODE was resolving to the string 'true' instead of the boolean true. The fix:
if (process.env.DEVELOPMENT_MODE === 'true') {
  configuration.credentials = {
    accessKeyId: process.env.AWS_ACCESS_KEY!,
    secretAccessKey: process.env.AWS_SECRET_KEY!,
  }
}
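The underlying gotcha: environment variables are always strings, and any non-empty string is truthy, so even 'false' would have enabled the branch. A quick illustration:

// Environment variables are always strings; any non-empty string is truthy.
console.log(Boolean('true'));  // true
console.log(Boolean('false')); // also true -- hence the explicit === 'true' check
console.log(Boolean(''));      // false (unset vars are undefined, also falsy)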
Does anyone know if I can use a wildcard to give access to everything within an S3 bucket, instead of adding every location explicitly like I am currently doing?
const policyDoc = new PolicyDocument({
  statements: [
    new PolicyStatement({
      sid: 'Grant role to read/write to S3 bucket',
      resources: [
        `${this.attrArn}`,
        `${this.attrArn}/*`,
        `${this.attrArn}/emailstore`,
        `${this.attrArn}/emailstore/*`,
        `${this.attrArn}/attachments`,
        `${this.attrArn}/attachments/*`
      ],
      actions: ['s3:*'],
      effect: Effect.ALLOW,
      principals: props.allowedArnPrincipals
    })
  ]
});
You should be able to use:
resources: [
  `${this.attrArn}`,
  `${this.attrArn}/*`
],
The first one gives permission for actions on the bucket itself (e.g. ListBucket), while /* gives permission for actions on objects inside the bucket (e.g. GetObject).
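Put together, the trimmed statement might look like this (a sketch based on the question's code; note that Sid values are typically restricted to alphanumeric characters, so the original spaced string may be rejected):

new PolicyStatement({
  sid: 'GrantRoleReadWriteS3Bucket', // Sid is usually limited to [A-Za-z0-9]
  resources: [
    `${this.attrArn}`,    // bucket-level actions, e.g. s3:ListBucket
    `${this.attrArn}/*`,  // object-level actions, e.g. s3:GetObject
  ],
  actions: ['s3:*'],
  effect: Effect.ALLOW,
  principals: props.allowedArnPrincipals,
})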
I'm using Serverless Framework to handle my CloudFormation stuff. I'm building a User Pool with groups that have their own roles. I want to build my Identity Pool so that the Cognito provider setting for Authenticated role selection is set to Choose role from token with a Role resolution of DENY.
This is my relevant CloudFormation - ignore the ${self:custom....} stuff:
IdentityPool:
  Type: AWS::Cognito::IdentityPool
  Properties:
    IdentityPoolName: ${self:custom.identityPoolName}
    AllowUnauthenticatedIdentities: false
    CognitoIdentityProviders:
      - ClientId:
          Ref: UserPoolClient
        ProviderName:
          Fn::GetAtt: ["UserPool", "ProviderName"]
IdentityPoolRoleAttachment:
  Type: AWS::Cognito::IdentityPoolRoleAttachment
  Properties:
    IdentityPoolId:
      Ref: IdentityPool
    RoleMappings:
      CognitoProvider:
        IdentityProvider:
          Fn::Join:
            - ""
            - - "cognito-idp."
              - Ref: AWS::Region
              - ".amazonaws.com/"
              - Ref: UserPool
              - ":"
              - Ref: UserPoolClient
        Type: Token
        AmbiguousRoleResolution: Deny
This does not work because the IdentityPoolRoleAttachment requires a Roles section. But I do not want to use the authenticated and unauthenticated roles with the Identity Pool; I want the Identity Pool's Cognito provider to only check the tokens being passed in.
This is the error I'm getting:
ServerlessError: An error occurred: IdentityPoolRoleAttachment - 1 validation error detected: Value null at 'roles' failed to satisfy constraint: Member must not be null (Service: AmazonCognitoIdentity; Status Code: 400; Error Code: ValidationException; Request ID: 80026230-eaa9-4045-86d8-6fe4c07cce9d).
How can I do this? Do I need to create an empty role and assign it to the IdentityPoolRoleAttachment?
I am able to do this without Identity Pool roles in the console.
I was able to get this working without creating an empty role.
roles is not required according to the docs, but it seems CloudFormation can't handle null well.
You just need to set "roles": {}.
CDK code:
new CfnIdentityPoolRoleAttachment(
  this,
  'ExampleCognitoIdentityPoolRoleAttachment',
  {
    identityPoolId: identityPool.ref,
    roles: {},
    roleMappings: {
      mapping: {
        type: 'Token',
        ambiguousRoleResolution: 'Deny',
        identityProvider: `cognito-idp.${cdk.Stack.of(this).region}.amazonaws.com/${userPool.userPoolId}:${cognitoAppClient.ref}`,
      },
    },
  },
);
CloudFormation template output from the CDK:
"ExampleCognitoIdentityPoolRoleAttachment": {
"Type": "AWS::Cognito::IdentityPoolRoleAttachment",
"Properties": {
"IdentityPoolId": {
"Ref": "ExampleCognitoIdentityPool"
},
"RoleMappings": {
"mapping": {
"AmbiguousRoleResolution": "Deny",
"IdentityProvider": {
"Fn::Join": [
"",
[
"cognito-idp.eu-west-1.amazonaws.com/",
{
"Ref": "<UserPoolRef>"
},
":",
{
"Ref": "<UserPoolAppClientRef>"
}
]
]
},
"Type": "Token"
}
},
"Roles": { }
},
"Metadata": {
"aws:cdk:path": "example-stack/ExampleCognitoIdentityPoolRoleAttachment"
}
}
I am working with AWS Textract and I want to analyze a multipage document, so I have to use the async operations. I first called the startDocumentAnalysis function and got a JobId as the return value. It should then trigger a function that I have set up to run when the SNS topic receives a message.
These are my serverless file and handler file.
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: { "Fn::Join": ["", ["arn:aws:s3:::${self:custom.secrets.IMAGE_BUCKET_NAME}", "/*" ] ] }
    - Effect: "Allow"
      Action:
        - "sts:AssumeRole"
        - "SNS:Publish"
        - "lambda:InvokeFunction"
        - "textract:DetectDocumentText"
        - "textract:AnalyzeDocument"
        - "textract:StartDocumentAnalysis"
        - "textract:GetDocumentAnalysis"
      Resource: "*"

custom:
  secrets: ${file(secrets.${opt:stage, self:provider.stage}.yml)}

functions:
  routes:
    handler: src/functions/routes/handler.run
    events:
      - s3:
          bucket: ${self:custom.secrets.IMAGE_BUCKET_NAME}
          event: s3:ObjectCreated:*
  textract:
    handler: src/functions/routes/handler.detectTextAnalysis
    events:
      - sns: "TextractTopic"

resources:
  Resources:
    TextractTopic:
      Type: AWS::SNS::Topic
      Properties:
        DisplayName: "Start Textract API Response"
        TopicName: TextractResponseTopic
Handler.js
const AWS = require('aws-sdk');
const textract = new AWS.Textract();

module.exports.run = async (event) => {
  const uploadedBucket = event.Records[0].s3.bucket.name;
  const uploadedObject = event.Records[0].s3.object.key;
  var params = {
    DocumentLocation: {
      S3Object: {
        Bucket: uploadedBucket,
        Name: uploadedObject
      }
    },
    FeatureTypes: [
      "TABLES",
      "FORMS"
    ],
    NotificationChannel: {
      RoleArn: 'arn:aws:iam::<account-id>:role/qvalia-ocr-solution-dev-us-east-1-lambdaRole',
      SNSTopicArn: 'arn:aws:sns:us-east-1:<account-id>:TextractTopic'
    }
  };
  let textractOutput = await new Promise((resolve, reject) => {
    textract.startDocumentAnalysis(params, function (err, data) {
      if (err) reject(err);
      else resolve(data);
    });
  });
};
I manually published an SNS message to the topic, and that fires the textract Lambda, which currently has this:
module.exports.detectTextAnalysis = async (event) => {
  console.log('SNS Topic isssss Generated');
  console.log(event.Records[0].Sns.Message);
};
What is my mistake, and why is startDocumentAnalysis not publishing a message and triggering the Lambda?
Note: I haven't used startDocumentTextDetection before calling the startDocumentAnalysis function, though it shouldn't be necessary to call it first.
Make sure you have the following in the Trust Relationships of the role you are using:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "textract.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The SNS topic name must start with AmazonTextract.
In the end your ARN should look like this:
arn:aws:sns:us-east-2:111111111111:AmazonTextract
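For reference, a minimal CDK sketch of both requirements (TypeScript, aws-cdk-lib v2; construct and topic names are illustrative, not from this answer):

import * as iam from 'aws-cdk-lib/aws-iam';
import * as sns from 'aws-cdk-lib/aws-sns';

// Topic name must start with "AmazonTextract" so Textract can publish to it.
const topic = new sns.Topic(this, 'TextractTopic', {
  topicName: 'AmazonTextractResults',
});

// Role that Textract assumes to publish job-completion notifications.
const textractRole = new iam.Role(this, 'TextractPublishRole', {
  assumedBy: new iam.ServicePrincipal('textract.amazonaws.com'),
});
topic.grantPublish(textractRole);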
I was able to get this working directly via Serverless Framework by adding a Lambda execution role resource to my serverless.yml file:
resources:
  Resources:
    IamRoleLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
                  - textract.amazonaws.com
              Action: sts:AssumeRole
And then I just used the same role generated by Serverless (for the lambda function) as the notification channel role parameter when starting the Textract document analysis:
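A minimal sketch of that call, assuming the aws-sdk v2 Textract client; the bucket, key, and ARN sources are placeholders, not from the original answer:

import AWS from 'aws-sdk';

const textract = new AWS.Textract();

export const start = async (bucket: string, key: string) => {
  return textract.startDocumentAnalysis({
    DocumentLocation: { S3Object: { Bucket: bucket, Name: key } },
    FeatureTypes: ['TABLES', 'FORMS'],
    NotificationChannel: {
      // Placeholders: RoleArn is the Serverless-generated execution role.
      RoleArn: process.env.LAMBDA_ROLE_ARN!,
      SNSTopicArn: process.env.TEXTRACT_TOPIC_ARN!,
    },
  }).promise();
};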
Thanks to this post for pointing me in the right direction!
For anyone using the CDK in TypeScript, you will need to add Lambda as a ServicePrincipal as usual to the Lambda Execution Role. Next, access the assumeRolePolicy of the execution role and call the addStatements method.
The basic execution role, without any additional statements (add those later):
this.executionRole = new iam.Role(this, 'ExecutionRole', {
  assumedBy: new ServicePrincipal('lambda.amazonaws.com'),
});
Next, add Textract as an additional ServicePrincipal:
this.executionRole.assumeRolePolicy?.addStatements(
  new PolicyStatement({
    principals: [
      new ServicePrincipal('textract.amazonaws.com'),
    ],
    actions: ['sts:AssumeRole']
  })
);
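An equivalent approach, if you control the role's creation, is to declare both trusted services up front with a CompositePrincipal instead of mutating the assume-role policy afterwards:

this.executionRole = new iam.Role(this, 'ExecutionRole', {
  // Both services may assume the role; no addStatements call needed.
  assumedBy: new iam.CompositePrincipal(
    new iam.ServicePrincipal('lambda.amazonaws.com'),
    new iam.ServicePrincipal('textract.amazonaws.com'),
  ),
});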
Also, ensure the execution role has full permissions on the target SNS topic (note the topic is created already and accessed via the fromTopicArn method):
const stmtSNSOps = new PolicyStatement({
  effect: iam.Effect.ALLOW,
  actions: [
    "SNS:*"
  ],
  resources: [
    this.textractJobStatusTopic.topicArn
  ]
});
Add the policy statement to a global policy (within the active stack):
this.standardPolicy = new iam.Policy(this, 'Policy', {
  statements: [
    ...
    stmtSNSOps,
    ...
  ]
});
Finally, attach the policy to the execution role:
this.executionRole.attachInlinePolicy(this.standardPolicy);
If your bucket is encrypted, you should also grant KMS permissions, otherwise it won't work.
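In CDK terms, a minimal sketch, reusing the bucket and function names from the first question and assuming the bucket uses a KMS key defined in the same stack:

// encryptionKey is undefined for unencrypted buckets, hence the `?.`.
// Grants the Lambda kms:Decrypt (and Encrypt, for writes) on the bucket's key.
up_bk.encryptionKey?.grantEncryptDecrypt(resizerLambda);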
Access Denied for bucket: appdeploy-logbucket-1cca50r865s65.
Please check S3bucket permission (Service: AmazonElasticLoadBalancingV2; Status Code: 400; Error Code:
InvalidConfigurationRequest; Request ID: e5e2245f-2f9b-11e9-a3e9-2dcad78a31ec)
I want to store my ALB logs in an S3 bucket. I have added policies to the bucket, but it says access denied. I have tried a lot and worked with many configurations, but it failed again and again and my stack rolled back. I used Troposphere to create the template.
This is the policy I have tried, but it's not working:
BucketPolicy = t.add_resource(
    s3.BucketPolicy(
        "BucketPolicy",
        Bucket=Ref(LogBucket),
        PolicyDocument={
            "Id": "Policy1550067507528",
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "Stmt1550067500750",
                    "Action": [
                        "s3:PutObject",
                        "s3:PutBucketAcl",
                        "s3:PutBucketLogging",
                        "s3:PutBucketPolicy"
                    ],
                    "Effect": "Allow",
                    "Resource": Join("", [
                        "arn:aws:s3:::",
                        Ref(LogBucket),
                        "/AWSLogs/",
                        Ref("AWS::AccountId"),
                        "/*"]),
                    "Principal": {"AWS": "027434742980"},
                }
            ],
        },
    ))
Any help?
troposphere/stacker maintainer here. We have a stacker blueprint (which is a wrapper around a troposphere template) that we use at work for our logging bucket:
from troposphere import Sub
from troposphere import s3
from stacker.blueprints.base import Blueprint
from awacs.aws import (
    Statement, Allow, Policy, AWSPrincipal
)
from awacs.s3 import PutObject


class LoggingBucket(Blueprint):
    VARIABLES = {
        "ExpirationInDays": {
            "type": int,
            "description": "Number of days to keep logs around for",
        },
        # See the table here for account ids.
        # https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#attach-bucket-policy
        "AWSAccountId": {
            "type": str,
            "description": "The AWS account ID to allow access to putting "
                           "logs in this bucket.",
            "default": "797873946194"  # us-west-2
        },
    }

    def create_template(self):
        t = self.template
        variables = self.get_variables()

        bucket = t.add_resource(
            s3.Bucket(
                "Bucket",
                LifecycleConfiguration=s3.LifecycleConfiguration(
                    Rules=[
                        s3.LifecycleRule(
                            Status="Enabled",
                            ExpirationInDays=variables["ExpirationInDays"]
                        )
                    ]
                )
            )
        )

        # Give ELB access to PutObject in the bucket.
        t.add_resource(
            s3.BucketPolicy(
                "BucketPolicy",
                Bucket=bucket.Ref(),
                PolicyDocument=Policy(
                    Statement=[
                        Statement(
                            Effect=Allow,
                            Action=[PutObject],
                            Principal=AWSPrincipal(variables["AWSAccountId"]),
                            Resource=[Sub("arn:aws:s3:::${Bucket}/*")]
                        )
                    ]
                )
            )
        )

        self.add_output("BucketId", bucket.Ref())
        self.add_output("BucketArn", bucket.GetAtt("Arn"))
Hopefully that helps!
The principal is wrong in the CloudFormation template. You should use the proper principal AWS account ID for your region. Look up the correct value in this document:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html#access-logging-bucket-permissions
Also, you could narrow down your actions. If you just want to push ALB logs to S3, you only need:
Action: s3:PutObject
Here's a sample BucketPolicy CloudFormation snippet that works (you can easily translate it into the troposphere PolicyDocument element):
Resources:
  # Create an S3 logs bucket
  ALBLogsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "my-logs-${AWS::AccountId}"
      AccessControl: LogDeliveryWrite
      LifecycleConfiguration:
        Rules:
          - Id: ExpireLogs
            ExpirationInDays: 365
            Status: Enabled
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
    DeletionPolicy: Retain

  # Grant access for the load balancer to write the logs
  # For the magic number 127311923021, refer to https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html#access-logging-bucket-permissions
  ALBLoggingBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref ALBLogsBucket
      PolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              AWS: 127311923021  # Elastic Load Balancing Account ID for us-east-1
            Action: s3:PutObject
            Resource: !Sub "arn:aws:s3:::my-logs-${AWS::AccountId}/*"