boto3 generate_presigned_url with SSE encryption

I am looking for examples of generating a presigned URL using boto3 with SSE encryption.
Here is my code so far:
s3_client = boto3.client('s3',
                         region_name='ap-south-1',
                         endpoint_url='http://s3.ap-south-1.amazonaws.com',
                         config=boto3.session.Config(signature_version='s3v4'))
try:
    response = s3_client.generate_presigned_url('put_object',
                                                Params={'Bucket': bucket_name,
                                                        'Key': object_name},
                                                ExpiresIn=expiration)
except ClientError as e:
    logging.error("In client error exception code")
    logging.error(e)
    return None
I am struggling to find the right parameters to enable SSE encryption.
I am able to upload a file with a PUT call. I would also like to know which headers to send from the client side to adhere to SSE encryption.

import boto3

access_key = "..."
secret_key = "..."
bucket = "..."

s3 = boto3.client('s3',
                  aws_access_key_id=access_key,
                  aws_secret_access_key=secret_key)

return s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': bucket,
        'Key': filename,
        'SSECustomerAlgorithm': 'AES256',
    }
)
Also add the header
'x-amz-server-side-encryption': 'AES256'
in the front-end code when calling the presigned URL.
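For a presigned PUT, the encryption header can also be signed into the URL itself by adding ServerSideEncryption to Params. A minimal sketch, assuming SSE-S3 (AES256) and the requests library on the client side; the bucket and key names are placeholders:

import boto3
import requests

s3_client = boto3.client('s3', config=boto3.session.Config(signature_version='s3v4'))

# Signing the SSE parameter into the URL means an upload without the
# matching header will fail the signature check.
url = s3_client.generate_presigned_url(
    'put_object',
    Params={
        'Bucket': 'my-bucket',   # placeholder bucket
        'Key': 'my-object',      # placeholder key
        'ServerSideEncryption': 'AES256',
    },
    ExpiresIn=3600,
)

# Client side: send the same header with the PUT request.
with open('local-file', 'rb') as f:
    requests.put(url, data=f, headers={'x-amz-server-side-encryption': 'AES256'})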

You can add Conditions to the pre-signed URL that must be met for the upload to be valid. This could probably include x-amz-server-side-encryption.
See: Creating a POST Policy - Amazon S3
Alternatively, you could add a bucket policy that denies any request that is not encrypted.
See: How to Prevent Uploads of Unencrypted Objects to Amazon S3 | AWS Security Blog
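A minimal sketch of such a bucket policy, applied with boto3; the bucket name is a placeholder and the statement follows the pattern from the linked blog post:

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-bucket/*",  # placeholder bucket
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        }
    }]
}

boto3.client('s3').put_bucket_policy(Bucket='my-bucket', Policy=json.dumps(policy))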

Related

failed to download files from AWS S3

Scenario:
submit an Athena query with boto3, with the output written to S3
download the result from S3
Error: An error occurred (404) when calling the HeadObject operation: Not Found
It's weird that the file exists in S3 and I can copy it down with the aws s3 cp command, but I just cannot download it with boto3, and head-object fails too.
aws s3api head-object --bucket dsp-smaato-sink-prod --key /athena_query_results/c96bdc09-d545-4ee3-bc66-be3be928e3f2.csv
That command does work. I've also checked the account policies; the admin policy is granted.
# snippets
import os
import logging
from urllib.parse import urlparse

import boto3

logger = logging.getLogger(__name__)

def s3_download(url, target=None):
    # s3 = boto3.resource('s3')
    # client = s3.meta.client
    client = boto3.client("s3", region_name=constant.AWS_REGION,
                          endpoint_url='https://s3.ap-southeast-1.amazonaws.com')
    s3_file = urlparse(url)
    if target:
        target = os.path.abspath(target)
    else:
        target = os.path.abspath(os.path.basename(s3_file.path))
    logger.info(f"download {url} to {target}...")
    client.download_file(s3_file.netloc, s3_file.path, target)
    logger.info(f"download {url} to {target} done!")
Take a look at the value of s3_file.path -- does it start with a slash? If so, it needs to change because Amazon S3 keys do not start with a slash.
I suggest that you print the content of netloc, path and target to see what values it is actually passing.
It's a bit strange to use os.path with an S3 URL, so it might need some tweaking.
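For example, a minimal fix along those lines, stripping the leading slash that urlparse leaves on the path before using it as the key:

from urllib.parse import urlparse

s3_file = urlparse("s3://dsp-smaato-sink-prod/athena_query_results/result.csv")  # illustrative URL
bucket = s3_file.netloc           # "dsp-smaato-sink-prod"
key = s3_file.path.lstrip('/')    # "athena_query_results/result.csv", no leading slash

# client.download_file(bucket, key, target)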

How to access response from boto3 bucket.put_object?

Looking at the boto3 docs, I see that client.put_object has a response shown, but I don't see a way to get the response from bucket.put_object.
Sample snippet:
s3 = boto3.resource(
    's3',
    aws_access_key_id=redacted,
    aws_secret_access_key=redacted,
)
s3.Bucket(bucketName).put_object(Key="bucket-path/" + fileName,
                                 Body=blob,
                                 ContentMD5=md5Checksum)
logging.info("Uploaded to S3 successfully")
How is this accomplished?
put_object returns S3.Object, which in turn has the wait_until_exists method.
Therefore, something along these lines should be sufficient (my verification code is below):
import boto3

s3 = boto3.resource('s3')

with open('test.img', 'rb') as f:
    obj = s3.Bucket('test-ssss4444').put_object(
        Key='fileName',
        Body=f)

obj.wait_until_exists()  # optional
print("Uploaded to S3 successfully")
put_object is a blocking operation, so it will block your program until the file is uploaded; wait_until_exists is therefore not strictly needed. But if you want to make sure that the upload actually went through and the object is in S3, you can use it.
You have to use boto3.client instead of boto3.resource to get response information such as the ETag. It has a slightly different syntax.
import boto3

s3 = boto3.client('s3')  # the client, not the resource, returns the response dict
response = s3.put_object(Bucket='bucket-name', Key='fileName', Body=body)
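The returned response is a plain dict; for example, the ETag and the HTTP status code can be read directly from it (both are part of the documented put_object response):

print(response['ETag'])                                # entity tag of the stored object
print(response['ResponseMetadata']['HTTPStatusCode'])  # 200 on success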

boto3 s3 generate_presigned_url ExpiresIn doesn't work as expected

I have tried to generate a pre-signed URL with a 7-day expiration time.
(The maximum duration is 7 days; see AWS S3 pre signed URL without Expiry date.)
# It is called and returned in AWS Lambda
boto3.client('s3').generate_presigned_url(
    'get_object',
    Params={'Bucket': bucket, 'Key': object_key},
    ExpiresIn=(60 * 60 * 24 * 7)  # 7 days
)
However, the pre-signed URL does not stay valid for 7 days but only for several hours. After that, it just returns this XML error:
<Error>
  <Code>ExpiredToken</Code>
  <Message>The provided token has expired.</Message>
  ...
</Error>
The expiration time even seems to be different every time I try: sometimes 5 hours, sometimes 12 hours. I don't know why.
import boto3
from botocore.client import Config

# Get the service client with sigv4 configured
s3 = boto3.client('s3', config=Config(signature_version='s3v4'))

# Generate the URL to get 'key-name' from 'bucket-name'
# URL expires in 604800 seconds (seven days)
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'bucket-name',
        'Key': 'key-name'
    },
    ExpiresIn=604800)
print(url)
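Note that ExpiredToken refers to the session token, not the URL: a presigned URL signed with temporary credentials (such as a Lambda execution role's) stops working as soon as those credentials expire, which would explain the varying few-hour lifetimes seen above. A minimal sketch of signing with long-lived IAM user credentials instead; the key values are placeholders:

import boto3
from botocore.client import Config

s3 = boto3.client(
    's3',
    aws_access_key_id='AKIA...',   # placeholder long-lived access key
    aws_secret_access_key='...',   # placeholder secret key
    config=Config(signature_version='s3v4'),
)

url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'bucket-name', 'Key': 'key-name'},
    ExpiresIn=60 * 60 * 24 * 7,  # the full seven days now holds
)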

How to add encryption to boto3.s3.transfer.TransferConfig for s3 file upload

I am trying to upload a file to S3 using the boto3 upload_file method. This is pretty straightforward until server-side encryption is needed. In the past I have used put_object to achieve this.
Like so:
import boto3

s3 = boto3.resource('s3')
s3.Bucket(bucket).put_object(Key=object_name,
                             Body=data,
                             ServerSideEncryption='aws:kms',
                             SSEKMSKeyId='alias/aws/s3')
I now want to upload files directly to S3 using the upload_file method. I can't find how to add server-side encryption to it: upload_file can take a TransferConfig, but I do not see any arguments there that set encryption, though I do see them in S3Transfer.
I am looking for something like this:
import boto3

s3 = boto3.resource('s3')
tc = boto3.s3.transfer.TransferConfig(ServerSideEncryption='aws:kms',
                                      SSEKMSKeyId='alias/aws/s3')
s3.upload_file(file_name,
               bucket,
               object_name,
               Config=tc)
boto3 documentation:
upload_file
TransferConfig
I was able to come up with two solutions with jarmod's help.
Using boto3.s3.transfer.S3Transfer
import boto3

client = boto3.client('s3', 'us-west-2')
transfer = boto3.s3.transfer.S3Transfer(client=client)
transfer.upload_file(file_name,
                     bucket,
                     key_name,
                     extra_args={'ServerSideEncryption': 'aws:kms',
                                 'SSEKMSKeyId': 'alias/aws/s3'})
Using s3.meta.client
import boto3

s3 = boto3.resource('s3')
s3.meta.client.upload_file(file_name,
                           bucket, key_name,
                           ExtraArgs={'ServerSideEncryption': 'aws:kms',
                                      'SSEKMSKeyId': 'alias/aws/s3'})
You do not need to pass SSEKMSKeyId in the boto3 API if you want to use the S3-managed KMS key; by default, aws:kms uses the aws/s3 KMS key.
import boto3

s3 = boto3.client('s3')
content = '64.242.88.10 - - [07/Mar/2004:16:06:51 -0800] "GET /twiki/bin/rdiff/TWiki/NewUserTemplate?rev1=1.3&rev2=1.2 HTTP/1.1" 200 4523'
s3.put_object(Bucket=testbucket, Key='ex31/input.log', Body=content,
              ServerSideEncryption='aws:kms')
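The same applies to upload_file: omitting SSEKMSKeyId from ExtraArgs falls back to the default aws/s3 key. A minimal sketch, assuming file_name, bucket, and key_name are defined as above:

import boto3

s3 = boto3.resource('s3')
# No SSEKMSKeyId given, so S3 uses the default aws/s3 KMS key
s3.meta.client.upload_file(file_name, bucket, key_name,
                           ExtraArgs={'ServerSideEncryption': 'aws:kms'})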

Extract and save attachment from email (via SES) into AWS S3

I want to extract the attachment from an email and save it into my new S3 bucket. So far, I have configured AWS Simple Email Service to intercept incoming emails. Now I have an AWS Lambda Python function, which gets triggered on an S3 put.
Up to this point it is working, but my Lambda is giving an error: "[Errno 2] No such file or directory: 'abc.docx': OSError". I can see that an attachment with the name abc.docx is mentioned in the raw email in S3.
I assume the problem is in my upload_file call. Could you please help me here?
Please find below the relevant parts of my code.
s3 = boto3.client('s3')
s3resource = boto3.resource('s3')

waiterFlg = s3.get_waiter('object_exists')
waiterFlg.wait(Bucket=bucket, Key=key)

response = s3resource.Bucket(bucket).Object(key)
message = email.message_from_bytes(response.get()["Body"].read())

if len(message.get_payload()) == 2:
    attachment = message.get_payload()[1]
    # Fails: get_filename() is only the attachment's name, not a local path
    s3resource.meta.client.upload_file(attachment.get_filename(),
                                       outputBucket,
                                       attachment.get_filename())
else:
    print("Could not see file/attachment.")
You can download the attachment to the /tmp directory in Lambda and then upload it to S3.
The following code solved the issue:
with open('/tmp/newFile.docx', 'wb') as f:
    f.write(attachment.get_payload(decode=True))

s3resource.meta.client.upload_file('/tmp/newFile.docx', outputBucket,
                                   attachment.get_filename())
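For emails with more varied structures, walking all MIME parts is more robust than indexing into get_payload(). A minimal sketch using the standard library email API; bucket, key, and outputBucket are assumed to be defined as in the question:

import email
import os
import boto3

s3resource = boto3.resource('s3')

raw = s3resource.Bucket(bucket).Object(key).get()["Body"].read()
message = email.message_from_bytes(raw)

for part in message.walk():
    # Container parts (multipart/*) never carry attachments themselves
    if part.get_content_maintype() == 'multipart':
        continue
    filename = part.get_filename()
    if not filename:
        continue
    local_path = os.path.join('/tmp', filename)  # /tmp is Lambda's writable directory
    with open(local_path, 'wb') as f:
        f.write(part.get_payload(decode=True))
    s3resource.meta.client.upload_file(local_path, outputBucket, filename)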