AWS S3 IAM user can't access bucket

I have an IAM user called server that uses s3cmd to back up to S3:
s3cmd sync /path/to/file-to-send.bak s3://my-bucket-name/
Which gives:
ERROR: S3 error: 403 (SignatureDoesNotMatch): The request signature we calculated does not match the signature you provided. Check your key and signing method.
The same user can send email via SES so I know that the access_key and secret_key are correct.
I have also attached the AmazonS3FullAccess policy to the IAM user and clicked on Simulate policy. I added all of the Amazon S3 actions and then clicked Run simulation. All of the actions were allowed, so it seems that S3 thinks I should have access. The policy is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}
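(For reproducing that console simulation from code, boto3 exposes simulate_principal_policy on the IAM client; a minimal sketch, where the account ID in the ARN is a placeholder and the bucket name is taken from this question:)
# Sketch: reproduce the console's "Simulate policy" check with boto3.
import boto3

iam = boto3.client('iam')
result = iam.simulate_principal_policy(
    PolicySourceArn='arn:aws:iam::123456789012:user/server',  # placeholder account ID
    ActionNames=['s3:ListBucket', 's3:PutObject'],
    ResourceArns=['arn:aws:s3:::my-bucket-name', 'arn:aws:s3:::my-bucket-name/*'],
)
for r in result['EvaluationResults']:
    print(r['EvalActionName'], r['EvalDecision'])  # expect 'allowed'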
The only way I can get access is to use the root account's access_key and secret_key. I cannot get any IAM user to log in.
Using s3cmd --debug gives:
DEBUG: Response: {'status': 403, 'headers': {'x-amz-bucket-region': 'eu-west-1', 'x-amz-id-2': 'XXX', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'x-amz-request-id': 'XXX', 'date': 'Tue, 30 Aug 2016 09:10:52 GMT', 'content-type': 'application/xml'}, 'reason': 'Forbidden', 'data': '<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>XXX</AWSAccessKeyId><StringToSign>GET\n\n\n\nx-amz-date:Tue, 30 Aug 2016 09:10:53 +0000\n/XXX/</StringToSign><SignatureProvided>XXX</SignatureProvided><StringToSignBytes>XXX</StringToSignBytes><RequestId>490BE76ECEABF4B3</RequestId><HostId>XXX</HostId></Error>'}
DEBUG: ConnMan.put(): connection put back to pool (https://XXX.s3.amazonaws.com#1)
DEBUG: S3Error: 403 (Forbidden)
where I have replaced anything sensitive-looking with XXX.
Have I missed something in the permissions setup?

Explicitly pass the correct IAM access key and secret key to s3cmd, i.e.:
s3cmd --access_key=75674745756 --secret_key=F6AFHDGFTFJGHGH sync /path/to/file-to-send.bak s3://my-bucket-name/
The error shown is what you get for an incorrect access key and/or secret key.
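If in doubt about which identity a key pair actually resolves to, a quick check with boto3 and STS (a sketch; the placeholder keys are yours to substitute) confirms the principal before you dig further into bucket permissions:
# Sketch: confirm which IAM principal an access key pair belongs to.
import boto3

sts = boto3.client(
    'sts',
    aws_access_key_id='YOUR_ACCESS_KEY',      # placeholder
    aws_secret_access_key='YOUR_SECRET_KEY',  # placeholder
)

# Prints an ARN such as arn:aws:iam::123456789012:user/server
print(sts.get_caller_identity()['Arn'])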

Related

IAM Permissions Errors When Using boto3 for AWS Comprehend

I'm playing around with the command line to run some sentiment analysis through AWS and am running into some IAM issues. When running the "detect_dominant_language" function, I hit NotAuthorizedExceptions despite having a policy in place that allows all Comprehend actions. The policy for the account is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "comprehend:*",
                "s3:ListAllMyBuckets",
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "iam:ListRoles",
                "iam:GetRole"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
Any ideas of where I might be going wrong with this? I've triple-checked my access key to make sure that I'm referring to the correct account. When I check the policy, it's there, so I'm a little at a loss as to the disconnect. S3 seems to be working fine as well.
Steps already taken:
Resetting access key/secret access key.
Creating an IAM policy which explicitly refers to the needed actions and attaching it to the "Admin" user.
Calling this method from the CLI (I get the same error).
Below, I've included additional information that may be helpful...
Code to check IAM policies:
iam = boto3.client(
    'iam',
    aws_access_key_id='*********************',
    aws_secret_access_key='*************************************',
)
iam.list_attached_user_policies(UserName="Admin")
Output:
{'AttachedPolicies': [{'PolicyName': 'ComprehendFullAccess',
'PolicyArn': 'arn:aws:iam::aws:policy/ComprehendFullAccess'},
{'PolicyName': 'AdministratorAccess',
'PolicyArn': 'arn:aws:iam::aws:policy/AdministratorAccess'},
{'PolicyName': 'Comprehend-Limitied',
'PolicyArn': 'arn:aws:iam::401311205158:policy/Comprehend-Limitied'}],
'IsTruncated': False,
'ResponseMetadata': {'RequestId': '9094d8ff-1730-44b8-af0f-9222a63b32e9',
'HTTPStatusCode': 200,
'HTTPHeaders': {'x-amzn-requestid': '9094d8ff-1730-44b8-af0f-9222a63b32e9',
'content-type': 'text/xml',
'content-length': '871',
'date': 'Thu, 20 Jan 2022 21:48:11 GMT'},
'RetryAttempts': 0}}
Code that triggers the error:
comprehend = boto3.client(
    'comprehend',
    aws_access_key_id='*********************',
    aws_secret_access_key='********************************',
)
test_language_string = "This is a test string. I'm hoping that AWS Comprehend can interprete this as english..."
comprehend.detect_dominant_language(Text=test_language_string)
Output:
ClientError: An error occurred (NotAuthorizedException) when calling the DetectDominantLanguage operation: Your account is not authorized to make this call.
I encountered the same error and ended up creating a new user group and a user for that particular API access. Here are the steps in a nutshell:
Create a user group (e.g. Research)
Give it access to ComprehendFullAccess
Create a user (e.g. ComprehendUser) under the newly created user group (i.e. Research)
Bingo! It should work now.
Here is my code snippet:
# import packages
import boto3

# aws access credentials
AWS_ACCESS_KEY_ID = 'your-access-key'
AWS_SECRET_ACCESS_KEY = 'your-secret-key'

comprehend = boto3.client(
    'comprehend',
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
    region_name='us-east-1',
)

test_language_string = "This is a test string. I'm hoping that AWS Comprehend can interprete this as english..."
comprehend.detect_dominant_language(Text=test_language_string)
Expected Output
{'Languages': [{'LanguageCode': 'en', 'Score': 0.9753355979919434}],
'ResponseMetadata': {'RequestId': 'd2ab429f-6ff7-4f9b-9ec2-dbf494ebf20a',
'HTTPStatusCode': 200,
'HTTPHeaders': {'x-amzn-requestid': 'd2ab429f-6ff7-4f9b-9ec2-dbf494ebf20a',
'content-type': 'application/x-amz-json-1.1',
'content-length': '64',
'date': 'Mon, 07 Feb 2022 16:31:36 GMT'},
'RetryAttempts': 0}}
UPDATE: Thanks for all the feedback, y'all! It turns out us-west-1 doesn't support Comprehend. Switching to a different region did the trick, so I would recommend anyone with similar problems try different regions before digging too deep into permissions/access keys.
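If you want to check per-region availability up front, boto3 can list the regions its bundled endpoint data knows a service in; a minimal sketch (note this reflects the installed SDK's data, not a live query):
# Sketch: list regions where Comprehend is available per the SDK's endpoint data.
import boto3

print(boto3.session.Session().get_available_regions('comprehend'))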

I want to create a presigned POST URL, but it always fails

Thanks for the great package!
I have a problem when developing against LocalStack, using the S3 service to create a presigned POST URL.
I ran LocalStack with SERVICES=s3 DEBUG=1 S3_SKIP_SIGNATURE_VALIDATION=1 localstack start
My settings are AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1 AWS_ENDPOINT_URL=http://localhost:4566 S3_Bucket=my-bucket
I made sure the bucket exists:
> awslocal s3api list-buckets
{
    "Buckets": [
        {
            "Name": "my-bucket",
            "CreationDate": "2021-11-16T08:43:23+00:00"
        }
    ],
    "Owner": {
        "DisplayName": "webfile",
        "ID": "bcaf1ffd86f41161ca5fb16fd081034f"
    }
}
I try to create the presigned URL, running this in the console:
s3_client_sync.create_presigned_post(bucket_name=settings.S3_Bucket, object_name="application/test.png", fields={"Content-Type": "image/png"}, conditions=[["Expires", 3600]])
and it returns this:
{'url': 'http://localhost:4566/kredivo-thailand',
'fields': {'Content-Type': 'image/png',
'key': 'application/test.png',
'AWSAccessKeyId': 'test',
'policy': 'eyJleHBpcmF0aW9uIjogIjIwMjEtMTEtMTZUMTE6Mzk6MjNaIiwgImNvbmRpdGlvbnMiOiBbWyJFeHBpcmVzIiwgMzYwMF0sIHsiYnVja2V0IjogImtyZWRpdm8tdGhhaWxhbmQifSwgeyJrZXkiOiAiYXBwbGljYXRpb24vdGVzdC5wbmcifV19',
'signature': 'LfFelidjG+aaTOMxHL3fRPCw/xM='}}
And I tested it using Insomnia, and read this in the LocalStack logs:
2021-11-16T10:54:04:DEBUG:localstack.services.s3.s3_utils: Received presign S3 URL: http://localhost:4566/my-bucket/application/test.png?AWSAccessKeyId=test&Policy=eyJleHBpcmF0aW9uIjogIjIwMjEtMTEtMTZUMTE6Mzk6MjNaIiwgImNvbmRpdGlvbnMiOiBbWyJFeHBpcmVzIiwgMzYwMF0sIHsiYnVja2V0IjogImtyZWRpdm8tdGhhaWxhbmQifSwgeyJrZXkiOiAiYXBwbGljYXRpb24vdGVzdC5wbmcifV19&Signature=LfFelidjG%2BaaTOMxHL3fRPCw%2FxM%3D&Expires=3600
2021-11-16T10:54:04:WARNING:localstack.services.s3.s3_utils: Signatures do not match, but not raising an error, as S3_SKIP_SIGNATURE_VALIDATION=1
2021-11-16T10:54:04:INFO:localstack.services.s3.s3_utils: Presign signature calculation failed: <Response [403]>
What am I missing, such that I cannot create the presigned POST URL?
The problem is with your AWS configuration:
AWS_ACCESS_KEY_ID=test // should be an actual access key for the IAM user
AWS_SECRET_ACCESS_KEY=test // should be an actual secret key for the IAM user
AWS_DEFAULT_REGION=us-east-1
AWS_ENDPOINT_URL=http://localhost:4566 // endpoint seems wrong
S3_Bucket=my-bucket // actual bucket name in the AWS S3 console
For more information, read here and set up your environment with correct AWS credentials: Setup AWS Credentials
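For comparison, here is a minimal boto3 sketch that generates and exercises a presigned POST directly against the LocalStack endpoint; it assumes the my-bucket bucket from the question and a local test.png. Note the policy lifetime is set with ExpiresIn, not with an ["Expires", 3600] condition (that condition only constrains an Expires form field):
# Sketch: presigned POST against LocalStack, assuming my-bucket exists
# and a local file test.png. The 'requests' package does the upload.
import boto3
import requests

s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:4566',
    aws_access_key_id='test',
    aws_secret_access_key='test',
    region_name='us-east-1',
)

post = s3.generate_presigned_post(
    Bucket='my-bucket',
    Key='application/test.png',
    Fields={'Content-Type': 'image/png'},
    Conditions=[{'Content-Type': 'image/png'}],
    ExpiresIn=3600,  # policy lifetime in seconds
)

# Send the returned fields plus the file; the file part must come last.
with open('test.png', 'rb') as f:
    resp = requests.post(post['url'], data=post['fields'],
                         files={'file': ('test.png', f)})
print(resp.status_code)  # 204 on success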

AWS Pre-Signed Post URL suddenly stopped working

So I have been working with AWS S3 presigned POST URLs for a month now and it was working like a charm. All of a sudden (I didn't change any policies for my IAM user or bucket) it started giving me a forbidden request:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Invalid according to Policy: Policy expired.</Message>
</Error>
I found that AWS sent me an email informing me that my trial ended. Does this have anything to do with it?
Note: I can still upload files to my S3 bucket manually.
Edit
The code
const params = {
  Bucket: 'ratemycourses',
  Fields: {
    key: `profileImage/${userId}/profile.jpeg`,
    acl: 'public-read',
    'Content-Type': 'multipart/form-data',
  },
  Expires: 60,
};

const data = await s3.createPresignedPost(params); // I made the callback function promisified
return data;
The expiration element in your POST policy specifies the expiration date/time of the policy, and it looks like your policy has expired. Correct the policy expiration and then re-create your signed URL.
Here's an example of a POST policy:
{
    "expiration": "2021-07-10T12:00:00.000Z",
    "conditions": [
        {"bucket": "mybucket"},
        ["starts-with", "$key", "user/shahda/"]
    ]
}
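The same pitfall is easy to reproduce in boto3; a sketch using the bucket and key shape from the question, with USER_ID standing in for the template variable. The expiry counts from the moment of generation, so a URL cached for more than ExpiresIn seconds fails with "Policy expired":
# Sketch: the POST policy expires ExpiresIn seconds after *this* call,
# so regenerate the presigned POST per upload instead of caching it.
import boto3

s3 = boto3.client('s3')
post = s3.generate_presigned_post(
    Bucket='ratemycourses',
    Key='profileImage/USER_ID/profile.jpeg',  # USER_ID is a placeholder
    Fields={'acl': 'public-read', 'Content-Type': 'multipart/form-data'},
    Conditions=[{'acl': 'public-read'},
                {'Content-Type': 'multipart/form-data'}],
    ExpiresIn=60,
)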

AWS boto3 InvalidAccessKeyId when using IAM role

I upload to and download from S3 with presigned POST/URLs. The presigned URL/POST is generated with boto3 in a Lambda function (deployed with Zappa).
While my AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID were set as environment variables, everything worked perfectly. Then I removed my credentials and attached an IAM role to the Lambda with full access to the S3 bucket. After that, the Lambda still returns the presigned URL and getObject works well; however, when I try to upload an object through the URL, it returns an InvalidAccessKeyId error. The used key ID starts with ASIA..., which means those are temporary credentials.
It seems that the Lambda does not use the IAM role. What is the problem?
class S3Api:
    def __init__(self):
        self.s3 = boto3.client(
            's3',
            region_name='eu-central-1'
        )

    def generate_store_url(self, key):
        return self.s3.generate_presigned_post(FILE_BUCKET,
                                               key,
                                               Fields=None,
                                               Conditions=None,
                                               ExpiresIn=604800)

    def generate_get_url(self, key):
        return self.s3.generate_presigned_url('get_object',
                                              Params={'Bucket': FILE_BUCKET,
                                                      'Key': key},
                                              ExpiresIn=604800)
My result for sts:GetCallerIdentity:
{
'UserId': '...:dermus-api-dev',
'Account': '....',
'Arn': 'arn:aws:sts::....:assumed-role/dermus-api-dev-ZappaLambdaExecutionRole/dermus-api-dev',
'ResponseMetadata': {
'RequestId': 'a1bd7c31-0199-472e-bff7-b93a4f855450',
'HTTPStatusCode': 200,
'HTTPHeaders': {
'x-amzn-requestid': 'a1bd7c31-0199-472e-bff7-b93a4f855450',
'content-type': 'text/xml',
'content-length': '474',
'date': 'Tue, 09 Mar 2021 08:36:30 GMT'
},
'RetryAttempts': 0
}
}
The dermus-api-dev-ZappaLambdaExecutionRole role is attached to the dermus-api-dev Lambda.
Presigned URLs and Lambda credentials interact in a non-obvious way.
From the docs, emphasis mine:
Anyone with valid security credentials can create a presigned URL. However, in order to successfully access an object, the presigned URL must be created by someone who has permission to perform the operation that the presigned URL is based upon.
The credentials that you can use to create a presigned URL include:
IAM instance profile: Valid up to 6 hours
AWS Security Token Service: Valid up to 36 hours when signed with permanent credentials, such as the credentials of the AWS account root user or an IAM user
IAM user: Valid up to 7 days when using AWS Signature Version 4
To create a presigned URL that's valid for up to 7 days, first designate IAM user credentials (the access key and secret access key) to the SDK that you're using. Then, generate a presigned URL using AWS Signature Version 4.
If you created a presigned URL using a temporary token, then the URL expires when the token expires, even if the URL was created with a later expiration time.
Bottom line: the URL might be expired if you wait too long, because the Lambda function's temporary credentials have already expired.
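If uploads must stay valid longer than the role session, one workaround (a sketch, and a security trade-off to weigh; the environment variable names and bucket/key are hypothetical) is to sign with a dedicated IAM user's long-lived keys instead of the Lambda role:
# Sketch: sign with long-lived IAM user keys so the presigned URL is not
# capped by the Lambda role's temporary-session lifetime.
import os
import boto3

s3 = boto3.client(
    's3',
    region_name='eu-central-1',
    aws_access_key_id=os.environ['SIGNER_ACCESS_KEY_ID'],          # hypothetical env var
    aws_secret_access_key=os.environ['SIGNER_SECRET_ACCESS_KEY'],  # hypothetical env var
)

url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-file-bucket', 'Key': 'some/key'},  # placeholders
    ExpiresIn=604800,  # up to 7 days, honored only with SigV4 + IAM user keys
)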

Running on EC2 using an instance profile using Terraform

I'm trying to run Terraform on an AWS EC2 instance, which is set up with an instance profile. However, Terraform doesn't seem to be implicitly using the instance profile, and as such I'm getting an "access denied" error whenever it tries to access my S3 remote state.
From the docs, I'm having trouble telling whether I am required to specify an AWS_METADATA_URL, or if there's anything else I'm explicitly required to do to make this work.
Per the Terraform docs:
EC2 Role: If you're running Terraform from an EC2 instance with an IAM Instance Profile using an IAM Role, Terraform will just ask the metadata API endpoint for credentials.
This is a preferred approach over any other when running in EC2, as you can avoid hard-coding credentials. Instead these are leased on-the-fly by Terraform, which reduces the chance of leakage.
You can provide the custom metadata API endpoint via the AWS_METADATA_URL variable, which expects the endpoint URL, including the version, and defaults to http://169.254.169.254:80/latest
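To confirm the instance profile is actually serving credentials, you can query the metadata endpoint yourself; a sketch using the IMDSv2 token handshake (on older IMDSv1 setups a plain GET without the token also works):
# Sketch: fetch the role credentials Terraform would lease from the
# EC2 instance metadata service (IMDSv2).
import requests

BASE = 'http://169.254.169.254/latest'

token = requests.put(
    f'{BASE}/api/token',
    headers={'X-aws-ec2-metadata-token-ttl-seconds': '21600'},
).text
headers = {'X-aws-ec2-metadata-token': token}

role = requests.get(f'{BASE}/meta-data/iam/security-credentials/',
                    headers=headers).text
creds = requests.get(f'{BASE}/meta-data/iam/security-credentials/{role}',
                     headers=headers).json()
print(creds['AccessKeyId'], creds['Expiration'])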
Here's an example of what I'm trying to run:
# main.tf
provider "aws" {
  region = "${var.region}"
}

terraform {
  backend "s3" {}
}

module "core" {
  // ....
}
# init.sh
terraform init -force-copy -input=false \
-backend-config="bucket=$TERRAFORM_STATE_BUCKET" \
-backend-config="key=$ENVIRONMENT/$SERVICE" \
-backend-config="region=$REGION" \
-upgrade=true
# AWS policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
UPDATE
It seems the S3 ListObjects call is failing in Terraform, though my policies should allow it:
-----------------------------------------------------
2018/02/20 21:09:37 [DEBUG] [aws-sdk-go] DEBUG: Response s3/ListObjects Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 403 Forbidden
Connection: close
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Tue, 20 Feb 2018 21:09:36 GMT
Server: AmazonS3
X-Amz-Id-2: OVK5E3d5R+Jgj3if5lxAXkwuERPZWsJNFJ7NeMYFbSrhQ/h4FfpV4z2mlgXFKT1Hg7lsqJ/jE6Q=
X-Amz-Request-Id: FE6B77C5C74BCFFF
-----------------------------------------------------
2018/02/20 21:09:37 [DEBUG] [aws-sdk-go] <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>FE6B77C5C74BCFFF</RequestId><HostId>OVK5E3d5R+Jgj3if5lxAXkwuERPZWsJNFJ7NeMYFbSrhQ/h4FfpV4z2mlgXFKT1Hg7lsqJ/jE6Q=</HostId></Error>
2018/02/20 21:09:37 [DEBUG] [aws-sdk-go] DEBUG: Validate Response s3/ListObjects failed, not retrying, error AccessDenied: Access Denied
status code: 403, request id: FE6B77C5C74BCFFF, host id: OVK5E3d5R+Jgj3if5lxAXkwuERPZWsJNFJ7NeMYFbSrhQ/h4FfpV4z2mlgXFKT1Hg7lsqJ/jE6Q=
2018/02/20 21:09:37 [DEBUG] plugin: waiting for all plugin processes to complete...
Error inspecting state in "s3": AccessDenied: Access Denied
status code: 403, request id: FE6B77C5C74BCFFF, host id: OVK5E3d5R+Jgj3if5lxAXkwuERPZWsJNFJ7NeMYFbSrhQ/h4FfpV4z2mlgXFKT1Hg7lsqJ/jE6Q=