I keep getting this error:
An error occurred (AccessControlListNotSupported) when calling the PutObject operation: The bucket does not allow ACLs
I'm switching to chunked uploads. Previously I could do the following and it uploaded fine:
original = models.FileField(
    storage=S3Boto3Storage(bucket_name='video-sftp', default_acl=None),
    upload_to='', blank=False, null=False
)
Now I'm using generate_presigned_url, and the ACL parameter seems to be ignored:
url = client.generate_presigned_url(
    ClientMethod="put_object",
    Params={
        "Bucket": "video-sftp",
        "Key": f"{json.loads(request.body)['fileName']}",
        "ACL": "None"
    },
    ExpiresIn=300,
)
How do I solve this?
I omitted the ACL parameter entirely and it works:
s3_client.generate_presigned_url(
    'put_object',
    Params={'Bucket': bucket_name, 'Key': key}
)
If you do want to use the ACL parameter, you probably shouldn't set it to the string "None"; use the Python value None instead.
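For completeness, here is a minimal sketch of the whole flow without any ACL, using the bucket name from the question and a placeholder key, with the client-side upload done as a plain HTTP PUT via requests (untested, just to illustrate the idea):

import boto3
import requests

s3_client = boto3.client("s3")

# Generate a presigned PUT URL with no ACL parameter at all.
url = s3_client.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "video-sftp", "Key": "example.mp4"},  # placeholder key
    ExpiresIn=300,
)

# The client then uploads the bytes directly to the presigned URL.
with open("example.mp4", "rb") as f:
    response = requests.put(url, data=f.read())
response.raise_for_status()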
My goal is to upload objects to S3. I have been trying with both the smart_open and boto3 libraries, with no success.
I don't know much about configuring IAM policies or access points in S3, and I'm finding it very hard to debug and understand how to pass the configuration.
IAM
This is my policy - it should be open and allow PUT. I don't have any access point set.
{
    "Version": "2012-10-17",
    "Id": "Policy1449009487903",
    "Statement": [
        {
            "Sid": "Stmt1449009478455",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::MY_BUCKET/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "https://s3-us-west-2.amazonaws.com/MY_BUCKET/*"
                    ]
                }
            }
        }
    ]
}
boto3
With boto3, I try to open a session, and then upload a file from local disk:
import boto3

session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
)
s3 = boto3.resource('s3')
s3.Bucket(S3_BUCKET).upload_file(path_to_my_file_on_disk, 'test.json')
But I got a (very long) error, which ends with:
EndpointConnectionError: Could not connect to the endpoint URL: "https://MY_BUCKET.s3.us-oregon.amazonaws.com/test.json"
Note that the URL is different from the URI of an object shared on S3, which should be:
s3://MY_BUCKET/test.json
Looking at:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html
I just tried:
import boto3

# Print out bucket names
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
And it yields an error: it fails to connect:
EndpointConnectionError: Could not connect to the endpoint URL: "https://s3.us-oregon.amazonaws.com/"
Smart_open
I tried with smart_open like this:
with smart_open.open('s3://{}:{}#{}/{}'.format(ACCESS_KEY, SECRET_KEY, S3_BUCKET, filename), 'wb') as o:
    o.write(json.dumps(template).encode('utf8'))
Here too it fails to connect, and it does not say why.
Reading on Stack Overflow, some threads reported that uploading with smart_open version >= 5.0.0 could be more complicated - see:
https://github.com/RaRe-Technologies/smart_open/blob/develop/howto.md
So I tried:
session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY)

with smart_open.open(
        's3://' + S3_BUCKET + '/robots.txt', mode='w', transport_params={'client': session.client('s3')}) as o:
    o.write("nothing to see here\n")
    o.close()
No success. Then I tried:
with smart_open.open(
    's3://' + S3_BUCKET + '/robots.txt',
    'w',
    transport_params={
        'client_kwargs': {
            'S3.Client.create_multipart_upload': {
                'ServerSideEncryption': 'aws:kms'
            }
        },
        'client': boto3.client('s3')
    }
) as o:
    o.write("nothing to see here\n")
    o.close()
No success either.
Can you help me debug and point me in the right direction?
I found a solution for boto3:
It turned out I had to specify the correct region when creating the client:
s3 = boto3.client('s3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name=MY_REGION
)
s3.upload_file(path_to_filename, S3_BUCKET, 'test.json')
That worked.
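Note also that in the very first boto3 attempt the Session was created but never used: boto3.resource('s3') builds its own default session, so the explicit credentials were ignored. A minimal sketch of the session-based variant, assuming the same ACCESS_KEY / SECRET_KEY / S3_BUCKET names and a placeholder region:

import boto3

session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name='us-west-2',  # placeholder: use the bucket's actual region
)
s3 = session.resource('s3')  # the resource now inherits the session's credentials and region
s3.Bucket(S3_BUCKET).upload_file(path_to_my_file_on_disk, 'test.json')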
However, with smart_open I could not find a solution:
Ref.: How to use Python smart_open module to write to S3 with server-side encryption
I tried to specify the boto3 session as above, and then:
session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name=MY_REGION)

client_kwargs = {'S3.Client.create_multipart_upload': {'ServerSideEncryption': 'AES256'}}

with smart_open.open(
    's3://{}:{}#{}/{}'.format(ACCESS_KEY, SECRET_KEY, S3_BUCKET, filename),
    'wb',
    transport_params={'client_kwargs': client_kwargs}
) as o:
    o.write(json.dumps(myfile).encode('utf8'))
Can someone show a correct way for smart_open as well?
I am using smart_open version 6.3.0.
I am posting this partial answer in case someone finds it useful. Debugging was cumbersome for me; I am not an expert in AWS IAM either.
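An untested sketch of what might work for smart_open as well, based on the transport_params style already shown above: build the boto3 client with the correct region (the same fix that worked for plain boto3) and pass it as the 'client' transport parameter; variable names are the same placeholders used earlier:

import json
import boto3
import smart_open

s3_client = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name=MY_REGION,  # same region fix as in the plain boto3 solution
)

with smart_open.open(
    's3://{}/{}'.format(S3_BUCKET, filename),
    'wb',
    transport_params={'client': s3_client},  # smart_open >= 5.0 accepts a ready-made client
) as o:
    o.write(json.dumps(myfile).encode('utf8'))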
I have tried everything but couldn't get any clue about what is wrong with my IAM policy for Cognito identity-ID (sub) based access.
I am using Lambda to get authentication details, then get_object from a folder separated per Cognito user, using boto3.
Here's my Lambda code:
import json
import urllib.parse
import boto3
import sys
import hmac, hashlib, base64

print('Loading function')

cognito = boto3.client('cognito-idp')
cognito_identity = boto3.client('cognito-identity')


def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))

    username = '{substitute_with_my_own_data}'        # authenticated user
    app_client_id = '{substitute_with_my_own_data}'   # cognito client id
    key = '{substitute_with_my_own_data}'             # cognito app client secret key
    cognito_provider = 'cognito-idp.{region}.amazonaws.com/{cognito-pool-id}'

    message = bytes(username + app_client_id, 'utf-8')
    key = bytes(key, 'utf-8')
    secret_hash = base64.b64encode(hmac.new(key, message, digestmod=hashlib.sha256).digest()).decode()
    print("SECRET HASH:", secret_hash)

    auth_data = {'USERNAME': username, 'PASSWORD': '{substitute_user_password}', 'SECRET_HASH': secret_hash}
    auth_response = cognito.initiate_auth(
        AuthFlow='USER_PASSWORD_AUTH',
        AuthParameters=auth_data,
        ClientId=app_client_id
    )
    print(auth_response)

    # From the response that contains the assumed role, get the temporary
    # credentials that can be used to make subsequent API calls
    auth_result = auth_response['AuthenticationResult']
    id_token = auth_result['IdToken']

    id_response = cognito_identity.get_id(
        IdentityPoolId='{sub_cognito_identity_pool_id}',
        Logins={cognito_provider: id_token}
    )
    print('id_response = ' + id_response['IdentityId'])  # up to this stage, the correct user cognito identity id is returned

    credentials_response = cognito_identity.get_credentials_for_identity(
        IdentityId=id_response['IdentityId'],
        Logins={cognito_provider: id_token}
    )
    secretKey = credentials_response['Credentials']['SecretKey']
    accessKey = credentials_response['Credentials']['AccessKeyId']
    sessionToken = credentials_response['Credentials']['SessionToken']
    print('secretKey = ' + secretKey)
    print('accessKey = ' + accessKey)
    print('sessionToken = ' + sessionToken)

    # Use the temporary credentials that AssumeRole returns to make a
    # connection to Amazon S3
    s3 = boto3.client(
        's3',
        aws_access_key_id=accessKey,
        aws_secret_access_key=secretKey,
        aws_session_token=sessionToken,
    )

    # Use the Amazon S3 resource object that is now configured with the
    # credentials to access your S3 buckets.
    # for bucket in s3.buckets.all():
    #     print(bucket.name)

    # Get the object from the event and show its content type
    bucket = '{bucket-name}'
    key = 'abc/{user_cognito_identity_id}/test1.txt'
    prefix = 'abc/{user_cognito_identity_id}'
    try:
        response = s3.get_object(
            Bucket=bucket,
            Key=key
        )
        # response = s3.list_objects(
        #     Bucket=bucket,
        #     Prefix=prefix,
        #     Delimiter='/'
        # )
        print(response)
        return response
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
What I have verified:
authentication OK
identity with correct assumed role (printed the cognito identity ID and verified it's the correct authenticated user with the ID)
when I removed ${cognito-identity.amazonaws.com:sub} and granted general access to the authenticated role, I was able to get the object; however, the ${cognito-identity.amazonaws.com:sub} condition does not seem to match correctly
So it seems there is an issue with the IAM policy.
IAM policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::bucket-name"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "*/${cognito-identity.amazonaws.com:sub}/*"
                    ]
                }
            }
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/",
                "arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/*"
            ]
        }
    ]
}
I tried listing the bucket / getting an object / putting an object; all are access denied.
I did try playing around with the policy, such as removing the ListBucket condition (obviously it allows access then, since I have authenticated) or changing "s3:prefix" to "${cognito-identity.amazonaws.com:sub}/" or "cognito/${cognito-identity.amazonaws.com:sub}/", but I can't make anything work.
The same goes for put and get object.
My S3 folder is bucket-name/cognito/{cognito-user-identity-id}/key
I referred to:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_cognito-bucket.html
https://aws.amazon.com/blogs/mobile/understanding-amazon-cognito-authentication-part-3-roles-and-policies/
Any insights on what might be wrong?
I managed to resolve this after changing the GetObject and PutObject policy Resources from
"arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/",
"arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/*"
to
"arn:aws:s3:::bucket-name/*/${cognito-identity.amazonaws.com:sub}/",
"arn:aws:s3:::bucket-name/*/${cognito-identity.amazonaws.com:sub}/*"
and it works. I don't quite get why access was denied, since my bucket has the cognito prefix right after the bucket root, but this is resolved now.
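For reference, the corrected GetObject/PutObject statement in full looks like this (bucket-name is a placeholder, as in the question):

{
    "Action": [
        "s3:GetObject",
        "s3:PutObject"
    ],
    "Effect": "Allow",
    "Resource": [
        "arn:aws:s3:::bucket-name/*/${cognito-identity.amazonaws.com:sub}/",
        "arn:aws:s3:::bucket-name/*/${cognito-identity.amazonaws.com:sub}/*"
    ]
}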
I have a versioned S3 bucket named protected-bucket, and I want to programmatically delete objects or versions (sometimes just specific versions). The bucket has the following policy attached, which requires MFA to be present when Delete* actions are executed:
{
    "Sid": "RequireMFAForDelete",
    "Effect": "Deny",
    "Principal": {
        "AWS": "*"
    },
    "Action": "s3:Delete*",
    "Resource": "arn:aws:s3:::protected-bucket/*",
    "Condition": {
        "Bool": {
            "aws:MultiFactorAuthPresent": "false"
        }
    }
}
I also tried using "Condition": { "Null": { "aws:MultiFactorAuthAge": true } } in the bucket policy, as suggested on the https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html#example-bucket-policies-use-case-7 page, and got the same problem described below.
Here is minimal Python 3 code that is supposed to delete an object version in the bucket mentioned above:
#!/usr/bin/env python3
import boto3
from datetime import datetime

mfa_totp = input("Enter the MFA code: ")

session_name = 'my-test-session-' + str(int(datetime.utcnow().timestamp()))
client = boto3.client('sts', 'us-east-1')
ar_res = client.assume_role(
    RoleArn='arn:aws:iam::123456789102:role/test-role',
    RoleSessionName=session_name,
    DurationSeconds=900,
    SerialNumber='arn:aws:iam::987654321098:mfa/my_user_name',
    TokenCode=mfa_totp,
)
print(ar_res)

tmp_creds = ar_res["Credentials"]
s3_client = boto3.client("s3", "us-east-1",
                         aws_access_key_id=tmp_creds["AccessKeyId"],
                         aws_secret_access_key=tmp_creds["SecretAccessKey"],
                         aws_session_token=tmp_creds["SessionToken"])

s3_bucket = "protected-bucket"
s3_key = "test/test4.txt"
s3_version = "XYZXbHbi3lpCNlOM8peIim6gi.IZQJqM"

# If I put code here that lists objects in
if s3_version:
    response = s3_client.delete_object(Bucket=s3_bucket,
                                       Key=s3_key,
                                       VersionId=s3_version)
else:
    response = s3_client.delete_object(Bucket=s3_bucket, Key=s3_key)
print(response)
The error I am getting follows:
Traceback (most recent call last):
  File "./del_test.py", line 37, in <module>
    response = s3_client.delete_object(Bucket=s3_bucket,
  File "/home/dejan/py/myproj/lib64/python3.8/site-packages/botocore/client.py", line 386, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/home/dejan/py/myproj/lib64/python3.8/site-packages/botocore/client.py", line 705, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied
A few things to note:
The role I am assuming is in a different account (some people may have noticed the different account numbers in the Python code).
That role has Delete* actions allowed in a policy attached to it. When I remove the MFA protection from the bucket policy, the Python 3 code above works - it can delete objects and versions.
It turned out I missed the key piece of information given in the https://docs.amazonaws.cn/en_us/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html document:
The temporary credentials returned by AssumeRole do not include MFA information in the context, so you cannot check individual API operations for MFA. This is why you must use GetSessionToken to restrict access to resources protected by resource-based policies.
In short, if I just call assume_role() with MFA, as I did in the Python code presented in the question, the MFA data will not be passed down, so get_session_token() is a must. The following refactored code (made with the help of my colleague Chadwick) works as expected:
#!/usr/bin/env python3
import boto3
from datetime import datetime

mfa_serial = "arn:aws:iam::987654321098:mfa/my_user_name"
role_to_assume = "arn:aws:iam::123456789102:role/test-role"

mfa_totp = input("Enter the MFA code: ")

mfa_sts_client = boto3.client("sts", "us-east-1")
mfa_credentials = mfa_sts_client.get_session_token(
    SerialNumber=mfa_serial,
    TokenCode=mfa_totp,
)["Credentials"]

session_name = 'my-test-session-' + str(int(datetime.utcnow().timestamp()))

# We now create a client with credentials from the MFA enabled session we created above:
ar_sts_client = boto3.client("sts", "us-east-1",
                             aws_access_key_id=mfa_credentials["AccessKeyId"],
                             aws_secret_access_key=mfa_credentials["SecretAccessKey"],
                             aws_session_token=mfa_credentials["SessionToken"])
ar_res = ar_sts_client.assume_role(
    RoleArn=role_to_assume,
    RoleSessionName=session_name,
    DurationSeconds=900
)
print(ar_res)

tmp_creds = ar_res["Credentials"]
s3_client = boto3.client("s3", "us-east-1",
                         aws_access_key_id=tmp_creds["AccessKeyId"],
                         aws_secret_access_key=tmp_creds["SecretAccessKey"],
                         aws_session_token=tmp_creds["SessionToken"])

s3_bucket = "protected-bucket"
s3_key = "test/test4.txt"
s3_version = "YYFMqnLaVEosoZ1Zk3Xy8dVbNGQVEF35"
# s3_version = None

if s3_version:
    response = s3_client.delete_object(Bucket=s3_bucket,
                                       Key=s3_key,
                                       VersionId=s3_version)
else:
    response = s3_client.delete_object(Bucket=s3_bucket, Key=s3_key)
print(response)
I am trying to upload an image file to one of my GCS buckets using the Python google-cloud-storage API. I am able to list the buckets from the program, but when uploading an image I get the error below:
google.api_core.exceptions.Forbidden: 403 POST https://storage.googleapis.com/upload/storage/v1/b/projbucket1/o?uploadType=multipart: {
  "error": {
    "code": 403,
    "message": "The account for bucket \"projbucket1\" has not enabled billing.",
    "errors": [
      {
        "message": "The account for bucket \"projbucket1\" has not enabled billing.",
        "domain": "global",
        "reason": "accountDisabled",
        "locationType": "header",
        "location": "Authorization"
      }
    ]
  }
}
: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>)
Here projbucket1 is the bucket to which I want to upload the file.
I am using the Python code below for that:
def upload_image(Imagedata, kind):
    image_filename = Imagedata.filename
    fullpath = os.path.join(app.root_path, 'static/images', image_filename)
    bucket_name = "projbucket1"
    if kind == "User":
        destination = "projbucket1/UserImages/" + image_filename
    else:
        destination = "projbucket1/PostImages/" + image_filename
    storage_client = storage.Client()
    buckets = storage_client.list_buckets()
    for bucket in buckets:
        print(bucket.name)
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination)
    blob.upload_from_filename(fullpath)
I have Storage Admin and Owner permissions for the service account I am using. Please help me with this case.
Thanks,
Pranamya
I have an AWS Elasticsearch domain in the eu-west-1 region and have taken a snapshot to an S3 bucket sub-folder, also in the same region.
I have also deployed a second AWS Elasticsearch domain in another AWS region - eu-west-2.
I added S3 bucket replication between the buckets, but when I try to register the repository on the eu-west-2 AWS ES domain, I get the following error:
500
{"error":{"root_cause":[{"type":"blob_store_exception","reason":"Failed to check if blob [master.dat] exists"}],"type":"blob_store_exception","reason":"Failed to check if blob [master.dat] exists","caused_by":{"type":"amazon_s3_exception","reason":"Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 14F0571DFF522922; S3 Extended Request ID: U1OnlKPOkfCNFzoV9HC5WBHJ+kfhAZDMOG0j0DzY5+jwaRFJvHkyzBacilA4FdIqDHDYWPCrywU=)"}},"status":500}
This is the code I am using to register the repository on the new cluster (taken from https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-snapshots.html#es-managedomains-snapshot-restore):
import boto3
import requests
from requests_aws4auth import AWS4Auth

host = 'https://search-**es-elk-prod**.eu-west-2.es.amazonaws.com/'  # include https:// and trailing /
region = 'eu-west-2'  # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

# Register repository
path = '_snapshot/es-elk-prod'  # the Elasticsearch API endpoint
url = host + path

payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        "region": "eu-west-2",
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
}

headers = {"Content-Type": "application/json"}

r = requests.put(url, auth=awsauth, json=payload, headers=headers)
print(r.status_code)
print(r.text)
From the logs, I get:
curl -X GET 'https://search-**es-elk-prod**.eu-west-2.es.amazonaws.com/_snapshot/es-mw-elk-prod/_all?pretty'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "amazon_s3_exception",
        "reason" : "Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 72A59132E2830D81; S3 Extended Request ID: o0XalToNp19HDJKSOVxmna71hx3LkwoSFEobm3HQGH1HEzxOrAtYHg+asnKxJ03iGSDDhUz5GUI=)"
      }
    ],
    "type" : "amazon_s3_exception",
    "reason" : "Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 72A59132E2830D81; S3 Extended Request ID: o0XalToNp19HDJKSOVxmna71hx3LkwoSFEobm3HQGH1HEzxOrAtYHg+asnKxJ03iGSDDhUz5GUI=)"
  },
  "status" : 500
}
The role ARN is able to access the S3 bucket, as it is the same ARN I use to snapshot the eu-west-2 domain to S3. Since the eu-west-1 snapshot is stored in a sub-folder of the S3 bucket, I added a path to the code, like this:
payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        "path": "es-elk-prod",
        "region": "eu-west-2",
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
}
But this didn't work either.
What is the correct way to restore a snapshot created in one AWS region into another AWS region?
Any advice is much appreciated.
I've had similar but not identical error messages, along the lines of "The bucket is in this region: eu-west-1. Please use this region to retry the request", when moving from eu-west-1 to us-west-2.
According to Amazon's documentation (under "Migrating data to a different domain"), you will need to specify an endpoint rather than a region:
If you encounter this error, try replacing "region": "us-east-2" with "endpoint": "s3.amazonaws.com" in the PUT statement and retry the request.
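Applied to the registration payload from the question, that would look something like this sketch (bucket name and role ARN copied from the question; untested):

payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        "endpoint": "s3.amazonaws.com",  # endpoint instead of "region", per the quoted docs
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
}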