Every time I run the command
aws rekognition detect-labels --image "S3Object={Bucket=BucketName,Name=picture.jpg}" --region us-east-1
I get this error.
InvalidS3ObjectException: An error occurred (InvalidS3ObjectException) when calling the DetectLabels operation: Unable to get image metadata from S3. Check object key, region and/or access permissions.
I am trying to retrieve labels for a project I am working on, but I can't seem to get past this step. I configured the AWS CLI with my access key, secret key, the us-east-1 region, and JSON as my output format.
I have also tried the code below and I receive the exact same error (I did correctly replace BucketName with the name of my bucket):
import boto3

BUCKET = "BucketName"
KEY = "picture.jpg"

def detect_labels(bucket, key, max_labels=10, min_confidence=90, region="eu-west-1"):
    rekognition = boto3.client("rekognition", region)
    response = rekognition.detect_labels(
        Image={
            "S3Object": {
                "Bucket": bucket,
                "Name": key,
            }
        },
        MaxLabels=max_labels,
        MinConfidence=min_confidence,
    )
    return response['Labels']

for label in detect_labels(BUCKET, KEY):
    print "{Name} - {Confidence}%".format(**label)
I can see on my user account that Rekognition is being called.
Image showing it being called from IAM.
The issue seems to be somewhere with my S3 bucket, but I haven't been able to work out what it is.
The S3 bucket and Rekognition should be in the same region; Rekognition cannot read an image from a bucket in a different region, which produces exactly this InvalidS3ObjectException.
More info: https://forums.aws.amazon.com/thread.jspa?threadID=243999
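To double-check that the regions line up, you can ask S3 where the bucket actually lives and then point the Rekognition client at that same region. A minimal boto3 sketch, reusing the placeholder bucket and key names from the question:

import boto3

BUCKET = "BucketName"   # placeholder
KEY = "picture.jpg"

# get_bucket_location returns None for us-east-1, so default it explicitly.
location = boto3.client("s3").get_bucket_location(Bucket=BUCKET)
bucket_region = location.get("LocationConstraint") or "us-east-1"

rekognition = boto3.client("rekognition", region_name=bucket_region)
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": BUCKET, "Name": KEY}},
    MaxLabels=10,
)
for label in response["Labels"]:
    print("{Name} - {Confidence}%".format(**label))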
Check your IAM role policies/permissions, and also check the same for the role created for the Lambda function. It's best to verify the policy using the IAM Policy Checker.
I am facing a similar issue. This might be due to the permissions and policies attached to the IAM roles and to the S3 bucket. The metadata of the objects in the S3 bucket needs to be checked as well.
My S3 bucket Policy:
{
    "Version": "2012-10-17",
    "Id": "Policy1547200240036",
    "Statement": [
        {
            "Sid": "Stmt1547200205482",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::459983601504:user/veral"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::esp32-rekognition-459983601504/*"
        }
    ]
}
Cross-origin resource sharing (CORS):
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT",
            "POST",
            "GET",
            "DELETE"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
If you use server-side encryption on the bucket via KMS, remember to also give the IAM role access to decrypt using that KMS key.
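A rough sketch of one way to grant that with boto3; the user name and KMS key ARN below are placeholders, not values from the question:

import json
import boto3

iam = boto3.client("iam")

# Sketch only: attach an inline policy allowing decryption with the bucket's
# KMS key. Replace the user name and key ARN with your own.
iam.put_user_policy(
    UserName="your-user",
    PolicyName="AllowDecryptBucketKey",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
        }]
    }),
)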
I am trying to query an Athena view from my Lambda code. I created an Athena table for S3 files that are in a different account. The Athena query editor is giving me the error below:
Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied;
I tried accessing the Athena view from my Lambda code. I created a Lambda execution role and allowed this role in the bucket policy of the other account's S3 bucket as well, like below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::2222222222:role/BAccountRoleFullAccess"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::s3_bucket/*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111111111:role/A-Role",
                    "arn:aws:iam::111111111:role/B-Role"
                ]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::s3_bucket",
                "arn:aws:s3:::s3_bucket/*"
            ]
        }
    ]
}
From Lambda, I get the error below:
'Status': {'State': 'FAILED', 'StateChangeReason': 'com.amazonaws.services.s3.model.AmazonS3Exception:
Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 3A8953784EC73B17;
S3 Extended Request ID: LfQZdTCj7sSQWcBqVNhtHrDEnJuGxgJQxvillSHznkWIr8t5TVzSaUwNSdSNh+YzDUj+S6aOUyI=),
S3 Extended Request ID: LfQZdTCj7sSQWcBqVNhtHrDEnJuGxgJQxvillSHznkWIr8t5TVzSaUwNSdSNh+YzDUj+S6aOUyI=
(Path: s3://s3_bucket/Input/myTestFile.csv)'
This Lambda function uses the execution role arn:aws:iam::111111111:role/B-Role, which has full access to Athena and S3.
Could someone please guide me?
To reproduce this situation, I did the following:
In Account-A, created an Amazon S3 bucket (Bucket-A) and uploaded a CSV file
In Account-B, created an IAM Role (Role-B) with S3 and Athena permissions
Turned OFF Block Public Access on Bucket-A
Added a bucket policy to Bucket-A that references Role-B:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::[ACCOUNT-B]:role/role-b"
            },
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-a",
                "arn:aws:s3:::bucket-a/*"
            ]
        }
    ]
}
In Account-B, manually defined a table in the Amazon Athena console
Ran a query on the Athena table. As expected, received Access Denied because I was using an IAM User to access the console, not the IAM Role defined in the Bucket Policy on Bucket-A
Created an AWS Lambda function in Account-B that uses Role-B:
import boto3
import time

def lambda_handler(event, context):
    athena_client = boto3.client('athena')
    query1 = athena_client.start_query_execution(
        QueryString='SELECT * FROM foo',
        ResultConfiguration={'OutputLocation': 's3://my-athena-out-bucket/'}
    )
    time.sleep(10)
    query2 = athena_client.get_query_results(QueryExecutionId=query1['QueryExecutionId'])
    print(query2)
Ran the Lambda function. It successfully returned data from the CSV file.
Please compare your configurations against the above steps that I took. Hopefully you will find a difference that will enable your cross-account access by Athena.
Reference: Cross-account Access - Amazon Athena
I have an instance which needs to read data from two different accounts' S3 buckets:
Bucket in DataAccount with bucket name "dataaccountlogs"
Bucket in UserAccount with bucket name "userlogs"
I have console access to both accounts, so now I need to configure bucket policies to allow instances to read S3 data from the buckets dataaccountlogs and userlogs; my instance is running in UserAccount.
I need to access these two buckets both from the command line and from a Spark job.
You will need a role in UserAccount which will be used to access the mentioned buckets, say RoleA. The role should have permissions for the required S3 operations.
Then you will be able to configure a bucket policy for each bucket:
For DataAccount:
{
    "Version": "2012-10-17",
    "Id": "Policy1",
    "Statement": [
        {
            "Sid": "test1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::DataAccount:role/RoleA"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::dataaccountlogs",
                "arn:aws:s3:::dataaccountlogs/*"
            ]
        }
    ]
}
For UserAccount:
{
    "Version": "2012-10-17",
    "Id": "Policy1",
    "Statement": [
        {
            "Sid": "test1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::DataAccount:role/RoleA"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::userlogs",
                "arn:aws:s3:::userlogs/*"
            ]
        }
    ]
}
To access them from the command line:
You will need to set up the AWS CLI tool first:
https://docs.aws.amazon.com/polly/latest/dg/setup-aws-cli.html
Then you will need to configure a profile for using your role.
First, make a profile for your user to log in with:
aws configure --profile YourProfileAlias
And follow the instructions for setting up credentials.
Then you will need to edit the config and add a profile for the role:
~/.aws/config
Add a block at the end:
[profile YourRoleProfileName]
role_arn = arn:aws:iam::DataAccount:role/RoleA
source_profile = YourProfileAlias
After that you will be able to use aws s3api ... --profile YourRoleProfileName to access both buckets on behalf of the created role.
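The same role profile also works from boto3, if that is more convenient; a small sketch assuming the profile name from above and the two bucket names from the question:

import boto3

# Sketch: use the role profile defined in ~/.aws/config from Python.
session = boto3.Session(profile_name="YourRoleProfileName")
s3 = session.client("s3")

# List a few objects from each bucket on behalf of RoleA.
for bucket in ("dataaccountlogs", "userlogs"):
    resp = s3.list_objects_v2(Bucket=bucket, MaxKeys=5)
    for obj in resp.get("Contents", []):
        print(bucket, obj["Key"])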
To access them from Spark:
If you run your cluster on EMR, you should use a SecurityConfiguration and fill in the section for S3 role configuration. A different role can be specified for each bucket: use the "Prefix" constraint and list all destination prefixes after it, like "s3://dataaccountlogs/,s3://userlogs".
Note: you should strictly use the s3 protocol for this, not s3a. There are also a number of limitations, which you can find here:
https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-s3-optimized-committer.html
Another way with Spark is to configure Hadoop to assume your role, by setting
spark.hadoop.fs.s3a.aws.credentials.provider = org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
and configuring the role to be used:
spark.hadoop.fs.s3a.assumed.role.arn = arn:aws:iam::DataAccount:role/RoleA
This way is more general, since the EMR committer has various limitations. You can find more information on configuring this in the Hadoop docs:
https://hadoop.apache.org/docs/r3.1.1/hadoop-aws/tools/hadoop-aws/assumed_roles.html
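If you drive Spark from PySpark, those two Hadoop options can be set when building the session. A rough sketch under the same assumptions as above (placeholder role ARN, s3a filesystem, hadoop-aws on the classpath):

from pyspark.sql import SparkSession

# Sketch: have s3a assume RoleA when reading from either bucket.
spark = (
    SparkSession.builder
    .appName("cross-account-read")
    .config(
        "spark.hadoop.fs.s3a.aws.credentials.provider",
        "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider",
    )
    .config(
        "spark.hadoop.fs.s3a.assumed.role.arn",
        "arn:aws:iam::DataAccount:role/RoleA",
    )
    .getOrCreate()
)

df = spark.read.text("s3a://dataaccountlogs/")  # or s3a://userlogs/
df.show(5)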
I want to give read access on a bucket to all AWS-authenticated users. Note that I don't want my bucket to be publicly available. The old Amazon console seemed to offer that option, which I no longer see -
Old S3 bucket ACL -
New bucket Acl -
How can I achieve the old behavior? Can I do it using bucket policies?
Again, I don't want this:
{
    "Id": "Policy1510826508027",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1510826503866",
            "Action": [
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::athakur",
            "Principal": {
                "AWS": [
                    "*"
                ]
            }
        }
    ]
}
That support is removed in the new S3 console and has to be set via an ACL.
You can use the put-bucket-acl API to set Any Authenticated AWS User as the grantee.
The grantee for this is:
<Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group"><URI>http://acs.amazonaws.com/groups/global/AuthenticatedUsers</URI></Grantee>
Refer http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTacl.html for more info.
We can give the entire ACL string in the AWS CLI command as ExploringApple explained, or just do -
aws s3api put-bucket-acl --bucket bucketname --grant-full-control uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers
Docs - http://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-acl.html
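If boto3 is more convenient than the CLI, a rough equivalent looks like this; the bucket name is a placeholder, and a READ grant is used here since the goal is read access for authenticated users:

import boto3

s3 = boto3.client("s3")
bucket = "bucketname"  # placeholder

# Sketch: add a READ grant for the "Authenticated Users" group to the existing
# bucket ACL (put_bucket_acl replaces the ACL, so keep the current grants).
acl = s3.get_bucket_acl(Bucket=bucket)
grants = acl["Grants"] + [{
    "Grantee": {
        "Type": "Group",
        "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    },
    "Permission": "READ",
}]
s3.put_bucket_acl(
    Bucket=bucket,
    AccessControlPolicy={"Grants": grants, "Owner": acl["Owner"]},
)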
I have a bucket in Amazon S3 called 'data1'.
When I connect using Cyberduck to my S3, I want the user to only have access to 'data1' bucket and none of the others.
I also set up a new IAM user, called data1, and attached the 'AmazonS3FullAccess' policy to the permissions for that user - but that gives access to all of the buckets - which is what you would expect.
I guess I need to set up another policy for this - but what policy would that be?
First find the user's principal. This can be found by looking at the Arn field in the output of this command:
aws iam list-users
For instance
{
    "Users": [
        {
            "UserName": "eric",
            "Path": "/",
            "CreateDate": "2016-07-12T09:08:21Z",
            "UserId": "AIDAJXPI4SWK7X7PY4RX2",
            "Arn": "arn:aws:iam::930517348925:user/eric"
        },
        {
            "UserName": "bambi",
            "Path": "/",
            "CreateDate": "2015-07-15T11:07:16Z",
            "UserId": "AIDAJ2LEXFRXJI5AKUU7W",
            "Arn": "arn:aws:iam::930517348725:user/bambi"
        }
    ]
}
Then set up an S3 bucket policy. These apply to the bucket and are set per bucket. Normal IAM policies are set per IAM entity and are attached to the IAM entity, for instance the user. You already have IAM policies; for this requirement an S3 bucket policy is needed.
Just to emphasise: S3 policies apply to the bucket and are "attached" to S3, while IAM policies apply to IAM and are associated with IAM objects. When IAM entities try to use an S3 bucket, both S3 policies and IAM policies can apply. See http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
Once you know the ARN of the principal, add an S3 policy like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::930517348725:user/bambi"},
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::examplebucket/*"]
        },
        {
            "Sid": "block",
            "Effect": "Deny",
            "Principal": {"AWS": "arn:aws:iam::930517348725:user/bambi"},
            "Action": ["s3:*"],
            "Resource": ["arn:aws:s3:::*"]
        }
    ]
}
I haven't tested this but that is the general idea. Sorry I didn't use "data1" for both the principal and the bucket name in the example, but that would be too confusing. :)
For write-only access you can attach a policy like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*"
            ]
        }
    ]
}
but it reads like you want to do more than just write?
I am trying to give myself permission to download existing files in an S3 bucket. I've modified the Bucket Policy, as follows:
{
    "Sid": "someSID",
    "Action": "s3:*",
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
    "Principal": {
        "AWS": [
            "arn:aws:iam::123123123123:user/myuid"
        ]
    }
}
My understanding is that addition to the policy should give me full rights to "bucketname" for my account "myuid", including all files that are already in that bucket. However, I'm still getting Access Denied errors when I try to download any of those files via the link that comes up in the console.
Any thoughts?
Step 1
Click on your bucket name, and under the permissions tab, make sure that Block new public bucket policies is unchecked
Step 2
Then you can apply your bucket policy
Hope that helps
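If you prefer to flip that setting from code rather than in the console, a hedged boto3 sketch (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")

# Sketch: turn off "block new public bucket policies" so the bucket policy
# can be applied; the other public-access guards are left in place here.
s3.put_public_access_block(
    Bucket="bucketname",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": True,
    },
)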
David, you are right, but I found that, in addition to what bennie said below, you also have to grant view (or whatever access you want) to 'Authenticated Users'.
But a better solution might be to edit the user's policy to just grant access to the bucket:
{
    "Statement": [
        {
            "Sid": "Stmt1350703615347",
            "Action": [
                "s3:*"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::mybucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": ["arn:aws:s3:::mybucket"],
            "Condition": {}
        }
    ]
}
The first block grants all S3 permissions to all elements within the bucket. The second block grants list permission on the bucket itself.
Change the resource arn:aws:s3:::bucketname/AWSLogs/123123123123/* to arn:aws:s3:::bucketname/* to have full rights to bucketname.
To serve a static website from S3, this is the bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"]
        }
    ]
}
Use the method below to upload any file in publicly readable form using TransferUtility in Android.
transferUtility.upload(String bucketName, String key, File file, CannedAccessControlList cannedAcl)
Example
transferUtility.upload("MY_BUCKET_NAME", "FileName", your_file, CannedAccessControlList.PublicRead);
To clarify: It is really not documented well, but you need two access statements.
In addition to your statement that allows actions on the resource "arn:aws:s3:::bucketname/AWSLogs/123123123123/*", you also need a second statement that allows s3:ListBucket on "arn:aws:s3:::bucketname", because internally the AWS client will try to list the bucket to determine that it exists before doing its action.
With the second statement, it should look like:
"Statement": [
{
"Sid": "someSID",
"Action": "ActionThatYouMeantToAllow",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
"Principal": {
"AWS": [
"arn:aws:iam::123123123123:user/myuid"
]
},
{
"Sid": "someOtherSID",
"Action": "ListBucket",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname",
"Principal": {
"AWS": [
"arn:aws:iam::123123123123:user/myuid"
]
}
]
Note: If you're using IAM, skip the "Principal" part.
If you have an encrypted bucket, you will also need KMS permissions (e.g. kms:Decrypt) allowed.
Possible reason: if the files were put/copied by a user from another AWS account, then you cannot access them, because you are still not the object owner. The user from the other AWS account who placed the files in your bucket has to grant access during the put or copy operation.
For a put operation, the object owner can run this command:
aws s3api put-object --bucket destination_awsexamplebucket --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2 --acl bucket-owner-full-control
For a copy operation of a single object, the object owner can run this command:
aws s3api copy-object --bucket destination_awsexamplebucket --key myobject --copy-source source_awsexamplebucket/myobject --acl bucket-owner-full-control
ref : AWS Link
Giving public access to the bucket in order to add a policy is NOT the right way.
This exposes your bucket to the public, even if only for a short amount of time.
You will face this error even if you have admin access (the root user will not face it).
According to the AWS documentation, you have to grant "s3:PutBucketPolicy" to your IAM user.
So simply add an S3 policy to your IAM user as in the screenshot below; specify your bucket ARN to make it safer, and you won't have to make your bucket public again.
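In case the screenshot does not come through, here is a rough sketch of the same idea with boto3; the user name and bucket name are placeholders:

import json
import boto3

iam = boto3.client("iam")

# Sketch: let the IAM user edit the policy of one specific bucket instead of
# opening the bucket to the public. Replace the user and bucket names.
iam.put_user_policy(
    UserName="your-user",
    PolicyName="AllowPutBucketPolicy",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutBucketPolicy", "s3:GetBucketPolicy"],
            "Resource": "arn:aws:s3:::your-bucket-name"
        }]
    }),
)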
No one mentioned MFA. For AWS users who have enabled MFA, please use this:
aws s3 ls s3://bucket-name --profile mfa
And prepare the mfa profile first by running
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user-name --token-code 928371 --duration-seconds 129600 (replace 123456789012, user-name and 928371).
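If you would rather do the same from Python, here is a rough boto3 equivalent; the serial number, token code, and bucket name are placeholders:

import boto3

# Sketch: get temporary MFA credentials from STS and use them for S3 calls.
sts = boto3.client("sts")
creds = sts.get_session_token(
    SerialNumber="arn:aws:iam::123456789012:mfa/user-name",
    TokenCode="928371",          # the current code from your MFA device
    DurationSeconds=129600,
)["Credentials"]

session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
for obj in session.client("s3").list_objects_v2(Bucket="bucket-name").get("Contents", []):
    print(obj["Key"])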
This can also happen if the encryption algorithm is missing from the S3 parameters. If the bucket's default encryption is enabled, e.g. Amazon S3-managed keys (SSE-S3), you need to pass ServerSideEncryption: "AES256"|"aws:kms"|string in your bucket params.
const params = {
  Bucket: BUCKET_NAME,
  Body: content,
  Key: fileKey,
  ContentType: "audio/m4a",
  ServerSideEncryption: "AES256" // Here ..
}

await S3.putObject(params).promise()
Go to this link and generate a policy.
In the Principal field, give *.
In Actions, select GetObject.
Give the ARN as arn:aws:s3:::<bucket_name>/*.
Then add the statement and generate the policy; you will get a JSON document. Just copy it and paste it into the Bucket Policy.
For more details, go here.