S3 Images being downloaded automatically when linked - amazon-web-services

What permission am I missing, or is this something else?
We are using S3 for storage and to serve images to a site with moderate traffic.
We built out a custom thumbnail slider that links a small thumbnail image to a larger slider image at a different resolution.
Before S3 came into play, the images linked to each other as expected. Now, once a thumbnail is clicked, the larger image is downloaded automatically rather than simply opened in the browser. Any thoughts?
Here is the code, but this is just an S3 question really. Thanks!
<div class="thumbnails" itemscope itemtype="http://schema.org/ImageGallery">
<ul id="easy-slide">
<i id="prev" class="fa fa-arrow-circle-o-left" aria-hidden="true"></i>
{thumbnails}
<li itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<a href="{thumbnails:large}" itemprop="contentUrl" data-size="500x500">
<img src="{thumbnails:thumb}" height="100px" width="100px" itemprop="thumbnail" alt="Image description" />
</a>
</li>
{/thumbnails}
<i id="next" class="fa fa-arrow-circle-o-right" aria-hidden="true"></i>
</ul>
</div>

Likely a Content-Type problem: the correct MIME type was not set when you uploaded the images to S3.
Just to confirm, check the MIME type being returned:
curl -I -v http://www.example.com/image.jpg
Then you will need to set the correct content type in the S3 metadata. To update the metadata on an S3 object, you can copy the object to itself and specify the content type on the command line.
From StackOverflow: How can I change the content-type of an object using aws cli?:
$ aws s3api copy-object --bucket archive --content-type "image/jpeg" \
    --copy-source archive/test/image.jpg --key test/image.jpg \
    --metadata-directive "REPLACE"
To answer your question:
Is there a way to set that for an entire folder or a bucket policy?
S3 does not actually have folders/directories, so you need to touch each object to change its content type; see What is the difference between Buckets and Folders in Amazon S3?. The command referenced below will, however, do that operation across an entire bucket.
So you will need to use the S3 CLI to update the content type metadata. Here is another answer showing a variety of command-line methods that will change the content type for all files with a given extension (e.g. png), recursively:
aws s3 cp \
--exclude "*" \
--include "*.png" \
--content-type="image/png" \
--metadata-directive="REPLACE" \
--recursive \
--dryrun \
s3://mybucket/static/ \
s3://mybucket/static/
See https://serverfault.com/questions/725562/recursively-changing-the-content-type-for-files-of-a-given-extension-on-amazon-s
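If you would rather do the same copy-in-place from Python, a minimal boto3 sketch of the idea (the bucket and key names are placeholders, not taken from the question) looks like this:

import boto3

s3 = boto3.client('s3')

# Copy the object onto itself, replacing the metadata so the new Content-Type sticks.
s3.copy_object(
    Bucket='archive',
    Key='test/image.jpg',
    CopySource={'Bucket': 'archive', 'Key': 'test/image.jpg'},
    ContentType='image/jpeg',
    MetadataDirective='REPLACE',
)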

When uploading through the API without specifying a content type, S3 stores the object with a default content type (binary/octet-stream), which browsers treat as an attachment.
Sending the request with the content type set solves the problem. Include the ContentType parameter in the request, as below:
{
  Bucket: <BUCKET NAME>,
  Key: "",    // pass key
  Body: null, // pass file body
  ContentType: "image/jpeg" // a single MIME type matching the file, e.g. image/jpeg or image/png
}
The URL generated after uploading this way would allow browser access instead of an automatic download.
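The same idea in Python with boto3, in case that is what you are using (the bucket, key, and file names here are placeholders), is a put_object call with ContentType set explicitly:

import boto3

s3 = boto3.client('s3')

# Placeholder bucket/key/file; ContentType must match the actual file type.
with open('image.jpg', 'rb') as f:
    s3.put_object(
        Bucket='my-bucket',
        Key='images/image.jpg',
        Body=f,
        ContentType='image/jpeg',
    )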


Show Image stored in s3 on Web page using flask

I'm trying to get images from an S3 bucket and show them on a web page using Flask (and boto3 to access the bucket).
I currently have a list of all the pictures from the bucket, but I can't get the HTML to show them (it gives me a 404 error).
How do I do this without downloading the files?
This is what I have so far:
def list_files(bucket):
    contents = []
    for image in bucket.objects.all():
        contents.append(image.key)
    return contents

def files():
    list_of_files = list_files(bucket)
    return render_template('index.html', my_bucket=bucket, list_of_files=list_of_files)
and this is the html snippet:
<table class="table table-striped">
<br>
<br>
<tr>
<th>My Photos</th>
{% for f in list_of_files %}
<td> <img src="{{ f }}"></td>
{% endfor %}
Thanks a lot!
Loading an image into an HTML page requires an image the page can actually reach. Images from AWS S3 can be shown on an HTML page if you download them first to a local directory and then use that path as the source in an HTML <img> tag.
I found a solution to this, but you will need to adapt it to your needs.
Define a function that loads the image from S3 as follows:
import boto3
import tempfile

s3 = boto3.resource('s3', region_name='us-east-2')
bucket = s3.Bucket('bucketName')
object = bucket.Object('dir/subdir/2015/12/7/img01.jpg')
tmp = tempfile.NamedTemporaryFile()

def imageSource(bucket, object, tmp):
    # Download the S3 object into the temporary file and return its local path.
    with open(tmp.name, 'wb') as f:
        object.download_fileobj(f)
    src = tmp.name  # local path to the downloaded copy of the image
    return src
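To actually show the downloaded file on a page you still need something to serve it; one possible sketch uses Flask's send_file in a hypothetical route (the route name, bucket, and MIME type are assumptions, not part of the original answer):

from flask import Flask, send_file
import boto3
import tempfile

app = Flask(__name__)
s3 = boto3.resource('s3', region_name='us-east-2')
bucket = s3.Bucket('bucketName')

@app.route('/s3-image/<path:key>')
def s3_image(key):
    # Download the object to a temp file, then stream it back to the browser.
    tmp = tempfile.NamedTemporaryFile(delete=False)
    bucket.Object(key).download_fileobj(tmp)
    tmp.close()
    return send_file(tmp.name, mimetype='image/jpeg')

In the template you would then point the <img> tag at that route, e.g. src="{{ url_for('s3_image', key=f) }}".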
Just ran into this problem as well; it seems this hasn't been updated for a while, so I will try to add to it.
Your current approach below is right. The only issue is that, in order to render an image without downloading it to your server, you need a direct URL to the S3 file. Currently, you only have the object key, not the full URL.
def list_files(bucket):
    contents = []
    for image in bucket.objects.all():
        contents.append(image.key)
    return contents

def files():
    list_of_files = list_files(bucket)
    return render_template('index.html', my_bucket=bucket, list_of_files=list_of_files)
Currently, your items in the list of files will look like this:
['file_name1', 'file_name2', 'file_name3']
In order for them to render in your browser directly you need them to look like this:
['file_url1', 'file_url2', 'file_url3']
S3 file URLs look something like this: https://S3BUCKETNAME.s3.amazonaws.com/file_name1.jpg
Therefore, instead of the line below:
contents.append(image.key)
you need to append something that builds the full URL:
contents.append(f'https://{S3BUCKETNAME}.s3.amazonaws.com/{image.key}')
That should do it; the HTML you have should work correctly as is. The other big risk is that the files you uploaded are not public; for that you'll need to look at the settings of your bucket on AWS.
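Putting the change into the original helper, a sketch of the adjusted function (S3BUCKETNAME is a placeholder you would fill in with your bucket name) could look like:

S3BUCKETNAME = 'your-bucket-name'  # placeholder

def list_files(bucket):
    contents = []
    for image in bucket.objects.all():
        # Append the full public object URL instead of just the key.
        contents.append(f'https://{S3BUCKETNAME}.s3.amazonaws.com/{image.key}')
    return contents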
Additional Resources and Sources:
Adding a public policy to your AWS S3 Bucket: https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html
Uploading and downloading files with Flask & S3: https://stackabuse.com/file-management-with-aws-s3-python-and-flask/

Presigned POST URLs work locally but not in Lambda

I have some Python that can request a presigned POST URL to upload an object into an S3 bucket. It works running it locally, under my IAM user with Admin abilities, and I can upload things to the bucket using Postman and cURL. However, when trying to run the same code in Lambda, it says "The AWS Access Key Id you provided does not exist in our records.".
The only difference is that the Lambda function runs without Admin-rights (but it does have a policy that allows it to run any S3 action on the bucket) and is using a different (older) version of Boto3.
This is the code I'm trying to use: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-presigned-urls.html#generating-a-presigned-url-to-upload-a-file
I've tried to use the details returned from the Lambda function in exactly the same way as I'm using the details returned locally, but the Lambda details don't work.
Here is a fully working solution for AWS Lambda:
Attach the AmazonS3FullAccess policy
Do not use a multipart/form-data upload
Configure S3 CORS
Use the following Python code:
import uuid
import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    upload_key = 'myfile.pdf'
    download_key = 'myfile.pdf'
    bucket = 'mys3storage'

    # Generate the presigned URL for download
    presigned_download_url = s3.generate_presigned_url(
        ClientMethod='get_object',
        Params={
            'Bucket': bucket,
            'Key': download_key
        },
        ExpiresIn=3600
    )

    # Generate the presigned URL for upload
    presigned_upload_url = s3.generate_presigned_url(
        ClientMethod='put_object',
        Params={
            'Bucket': bucket,
            'Key': upload_key,
            'ContentType': 'application/pdf'
        },
        ExpiresIn=3600
    )

    # Return the result
    return {
        "upload_url": presigned_upload_url,
        "download_url": presigned_download_url
    }
This is a slight duplicate...
Essentially, the temporary Lambda execution role's credentials are expiring once the Lambda function is complete. Therefore, by the time your client uses the signed URL, the credentials are no longer valid.
The solution here is to use AWS STS in the Lambda to assume a different IAM role (AssumeRole) that has the necessary S3 permissions when creating the signed URL, so the URL stays valid for its intended lifetime.
See this example for further setup instructions.
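For illustration, a rough boto3 sketch of that approach (the role ARN, bucket, and key here are placeholders, not values from the question):

import boto3

def get_signed_url(bucket, key):
    # Assume a dedicated role that has the necessary S3 permissions (ARN is a placeholder).
    sts = boto3.client('sts')
    creds = sts.assume_role(
        RoleArn='arn:aws:iam::123456789012:role/presign-role',
        RoleSessionName='presign',
    )['Credentials']

    # Sign the URL with the assumed-role credentials instead of the function's own.
    s3 = boto3.client(
        's3',
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
    )
    return s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': bucket, 'Key': key},
        ExpiresIn=3600,
    )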
You need to post the x-amz-security-token value when you use a role.
I had the same issue and it was driving me crazy. Locally everything went smoothly, but once deployed to Lambda I got a 403 from both create_presigned_post and create_presigned_url.
It turns out the role the Lambda is using was different from the one my local AWS user had (the Lambda role was automatically created with AWS SAM in my case). After granting the Lambda role S3 permissions, the error was resolved.
Good question. You didn't describe how you are getting your credentials to the Lambda function. Your code, specifically this:
s3_client = boto3.client('s3')
expects to find default credentials using the ~/.aws/credentials file. You won't (nor should you) have that file in your Lambda execution environment, but you probably have it in your local environment. I suspect you are not getting your credentials to the Lambda function at all.
There are two options for getting credentials in place in Lambda:
Don't use credentials at all; instead, give the Lambda function an IAM role that provides the required access to S3. If you do this, you won't need credentials in your code. This is the best practice.
Set the credentials as environment variables for your Lambda function. You can directly define AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and the code above will pick those up and use them.
The official Python tutorial for this does not mention the x-amz-security-token when using Lambda functions, but it needs to be included as a form value when uploading a file to S3. So to recap: when using Lambda, make sure the role attached to the function has S3 access, and that the extra form field with the x-amz-security-token value is present.
<form action="URL HERE" method="post" enctype="multipart/form-data">
<input type="hidden" name="key" value="KEY HERE" />
<input type="hidden" name="AWSAccessKeyId" value="ACCESS KEY HERE" />
<!-- ADD THIS ONE -->
<input type="hidden" name="x-amz-security-token" value="SECURITY TOKEN HERE" />
<!-- ADD THIS ONE -->
<input type="hidden" name="policy" value="POLICY HERE" />
<input type="hidden" name="signature" value="SIGNATURE HERE" />
File:
<input type="file" name="file" /> <br />
<input type="submit" name="submit" value="Upload to Amazon S3" />
</form>
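If you generate the POST data with boto3, the returned fields already contain x-amz-security-token whenever the signing credentials are temporary (as they are under a Lambda role), so you can render them straight into the form. A small sketch with a placeholder bucket and key:

import boto3

s3 = boto3.client('s3')

# Placeholder bucket/key. Under temporary credentials, post['fields'] includes
# 'x-amz-security-token', which must be submitted along with the file.
post = s3.generate_presigned_post(
    Bucket='my-bucket',
    Key='uploads/myfile.pdf',
    ExpiresIn=3600,
)
print(post['url'])
for name, value in post['fields'].items():
    print(name, value)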
You can try the code below to generate a pre-signed URL for an object:
import json
import boto3
import logging
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
bucket = 'test1'
download_key = 'path/to/Object.txt'

def lambda_handler(event, context):
    try:
        response = s3.generate_presigned_url(
            'get_object',
            Params={'Bucket': bucket, 'Key': download_key},
            ExpiresIn=3600
        )
    except ClientError as e:
        logging.error(e)
        return None
    url = response
    print(url)
    return {
        'url': url
    }

AWS S3 HTTP POST - redirect to page with parameter in the URL

I have a form that uploads an image to S3, and it works perfectly fine. However, for the success_action_redirect key, I would like to have a URL that uses a template variable in the form, i.e.
{"success_action_redirect": "localhost/account/{{account.id}}"},
I can't seem to be able to use that in my post policy. Is there a way around it?
EDIT:
Here is the relevant line in the form:
<input type="hidden" name="success_action_redirect" value="http://localhost:8000/account/{{account.id}}">
I'm assuming that it does not work because {{account.id}} renders to a number, so when S3 compares the submitted value against the signed policy, it's comparing something like http://localhost:8000/accounts/2 with http://localhost:8000/accounts/{{account.id}}.
It returns an error:
Invalid according to Policy: Policy Condition failed: ["eq", "$success_action_redirect", "http://localhost:8000/account/{{account.id}}"]
Furthermore, I tried using {% templatetag %} to stop {{account.id}} from being a variable, and though it went through to S3, it (obviously) didn't redirect to the correct page.
In your policy you have to use starts-with rather than eq, so that your success_action_redirect URL only needs to match a prefix.
So your policy might look like:
["starts-with", "$success_action_redirect", "localhost/account/"],
See the Policy documentation:
https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html#sigv4-ConditionMatching
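If you happen to be generating the policy with boto3, the same prefix match can be expressed through the Conditions argument of generate_presigned_post; a sketch with placeholder values (not the asker's actual setup):

import boto3

s3 = boto3.client('s3')

post = s3.generate_presigned_post(
    Bucket='my-bucket',
    Key='uploads/image.jpg',
    Fields={'success_action_redirect': 'http://localhost:8000/account/2'},
    Conditions=[
        # Match any account id by prefix instead of requiring an exact URL.
        ['starts-with', '$success_action_redirect', 'http://localhost:8000/account/'],
    ],
    ExpiresIn=3600,
)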

Django: Issue with a Video file uploader to S3 bucket using Boto3

I want to upload a video file to an AWS S3 bucket using Boto3. I've already created a bucket named 'django-test' and given it the required permissions. I am using Django and working on a Windows 10 machine.
I've created a function called store_in_s3 in the views.py file of my Django app.
The expected file size is lower than 200 MB. I am a bit confused by the several approaches I've tried. Below is the existing code:
def store_in_s3(request):
    transfer = S3Transfer(boto3.client(
        's3',
        region_name=settings.AWS_S3_REGION_NAME,
        aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
        aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY
    ))
    client = boto3.client('s3')
    bucket = "django-test"
    file = request.FILES["file"]
    filename = file.name
    transfer.upload_file(filename, bucket, "test.mov")
At this point, I am getting the following error: FileNotFoundError: [WinError 2] The system cannot find the file specified: 'test.mov'
But test.mov is the file I've uploaded using the HTML form.
My code in HTML form is below:
<form method="post" enctype="multipart/form-data">
{% csrf_token %}
{{ form.file }}
<button type="submit">Submit</button>
</form>
Additional information: I was successful at uploading the video file at one point in this development process, but on S3 its size was ridiculously small, only 28 bytes. That's why I restarted building the uploader.
I'll be grateful for any help. Please feel free to ask if you need any more information. Thank you.
Since the file is larger than Django's in-memory upload threshold (2.5 MB by default), Django stores it in a temporary location on disk. From the error message, the bare filename can't be found, so try passing the temporary path instead, i.e. file.temporary_file_path(), as sketched below.
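A minimal sketch of the upload view with that in mind; as an alternative to passing temporary_file_path(), you can hand the uploaded file object to upload_fileobj, which works whether Django buffered the upload in memory or on disk (the settings names are taken from the question, the destination key is a placeholder):

import boto3
from django.conf import settings

def store_in_s3(request):
    s3 = boto3.client(
        's3',
        region_name=settings.AWS_S3_REGION_NAME,
        aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
        aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
    )
    uploaded = request.FILES["file"]
    # Stream the uploaded file straight to S3 without touching a local path.
    s3.upload_fileobj(uploaded, "django-test", "test.mov")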

S3 direct bucket upload: success but file not there

I'm uploading a file directly to an S3 bucket using a multipart form upload and a signed policy (with AWS Signature Version 2), as explained here and here.
The upload is successful (I get redirected to the success_action_redirect URL) but the file is not visible in the bucket, under the key it should be. Though the ACL of the uploaded file was set to public-read, I thought it might be a permission issue, but even the owner of the bucket does not see the file.
Does someone have a hint at what might be wrong?
Thank you.
Turns out that all I needed to do was to make sure that the uploaded filename is included in the key that was being uploaded to S3.
If you have a form like this:
<form action="http://johnsmith.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
<input type="input" name="key" value="user/eric/" /><br />
(...)
</form>
Then the file will be uploaded to user/eric. What tripped me up is that the key defined this way pointed at an existing S3 folder. AWS made it seem like the upload was successful but probably just dropped the uploaded files because the key already existed. The solution was to include the filename in the key, like this:
<form action="http://johnsmith.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
<input type="input" name="key" value="user/eric/${filename}" /><br />
(...)
</form>
Also see the Upload examples docs.
Whenever you upload the individual parts of a file using presigned URLs, S3 keeps those parts in a temporary location.
Once all parts of the file have been uploaded successfully, perform the CompleteMultipartUploadRequest and the assembled file will be stored in your S3 bucket.
I hope this works for you.
CompleteMultipartUploadResult multipartCompleteResult = null;
List<PartETag> partETags = new ArrayList<>();
partETags.add(new PartETag(partNumber1, eTag1));
partETags.add(new PartETag(partNumber2, eTag2));
partETags.add(new PartETag(partNumber3, eTag3));
CompleteMultipartUploadRequest multipartCompleteRequest =
    new CompleteMultipartUploadRequest(getAmazonS3BucketName(), objectKey, uploadId, partETags);
multipartCompleteResult = getAmazonS3Client().completeMultipartUpload(multipartCompleteRequest);
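If you are completing the upload from Python rather than Java, the boto3 equivalent is complete_multipart_upload; a rough sketch with placeholder names (the bucket, key, upload id, and part ETags would come from your own upload_part calls):

import boto3

def finish_multipart_upload(bucket, key, upload_id, part_etags):
    # part_etags: list of (part_number, etag) tuples returned by the individual part uploads.
    s3 = boto3.client('s3')
    return s3.complete_multipart_upload(
        Bucket=bucket,
        Key=key,
        UploadId=upload_id,
        MultipartUpload={
            'Parts': [{'PartNumber': n, 'ETag': e} for n, e in part_etags]
        },
    )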