I'm uploading a file directly to an S3 bucket using a multipart form upload and a signed policy (with AWS Signature Version 2), as explained here and here.
The upload succeeds (I get redirected to the success_action_redirect URL), but the file is not visible in the bucket under the key it should be. Since the ACL of the uploaded file was set to public-read, I thought it might be a permission issue, but even the owner of the bucket does not see the file.
Does someone have a hint at what might be wrong?
Thank you.
Turns out all I needed to do was make sure that the uploaded file's name is included in the key being uploaded to S3.
If you have a form like this:
<form action="http://johnsmith.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
<input type="input" name="key" value="user/eric/" /><br />
(...)
</form>
Then the file will be uploaded under the key user/eric/. What tripped me up is that the key defined this way was an existing S3 folder. AWS made it seem like the upload was successful, but it probably just dropped the uploaded files because the key already existed. The solution was to include the filename in the key, like so:
<form action="http://johnsmith.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
<input type="input" name="key" value="user/eric/${filename}" /><br />
(...)
</form>
Also see the Upload examples docs.
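If you build the form server-side with boto3, the same substitution variable can go straight into the signed policy. A minimal sketch, assuming placeholder bucket and prefix names (boto3 signs with its current signature version rather than the Version 2 scheme discussed above):
import boto3

s3 = boto3.client('s3')

# '${filename}' is replaced by the browser with the uploaded file's name,
# so every upload gets its own key under the prefix.
post = s3.generate_presigned_post(
    Bucket='johnsmith',              # placeholder bucket
    Key='user/eric/${filename}',
    ExpiresIn=3600,
)
# post['url'] is the form action; post['fields'] become the hidden inputs.
print(post['url'], post['fields'])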
When you upload the individual parts of a file using presigned URLs, S3 stores those parts in a temporary location.
Once all parts have been uploaded successfully, perform the CompleteMultipartUploadRequest and S3 will assemble and store the file in your bucket.
I hope it works for you.
// Collect the ETag returned for each uploaded part, keyed by part number
List<PartETag> partETags = new ArrayList<>();
partETags.add(new PartETag(partNumber1, eTag1));
partETags.add(new PartETag(partNumber2, eTag2));
partETags.add(new PartETag(partNumber3, eTag3));

// Tell S3 to assemble the parts into the final object
CompleteMultipartUploadRequest multipartCompleteRequest =
        new CompleteMultipartUploadRequest(getAmazonS3BucketName(), objectKey, uploadId, partETags);
CompleteMultipartUploadResult multipartCompleteResult =
        getAmazonS3Client().completeMultipartUpload(multipartCompleteRequest);
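For comparison, here is a minimal sketch of the same flow in Python with boto3 and requests; the bucket name, key, and part size are placeholders, and each part is sent through its own presigned URL:
import boto3
import requests

s3 = boto3.client('s3')
bucket, key = 'my-bucket', 'big-file.bin'  # placeholders
PART_SIZE = 8 * 1024 * 1024                # every part except the last must be >= 5 MB

# 1. Start the multipart upload
upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)['UploadId']

# 2. Upload each part through a presigned URL and record its ETag
parts = []
with open('big-file.bin', 'rb') as f:
    part_number = 1
    while chunk := f.read(PART_SIZE):
        url = s3.generate_presigned_url(
            'upload_part',
            Params={'Bucket': bucket, 'Key': key,
                    'UploadId': upload_id, 'PartNumber': part_number},
        )
        etag = requests.put(url, data=chunk).headers['ETag']
        parts.append({'PartNumber': part_number, 'ETag': etag})
        part_number += 1

# 3. Until this call succeeds, the parts only exist in S3's temporary storage
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={'Parts': parts},
)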
I'm trying to get images from an S3 bucket and show them on a web page using Flask (and boto3 to access the bucket).
I currently have a list of all the pictures from the bucket, but I can't get the HTML to show them (I get a 404 error).
How do I do this without downloading the files?
This is what I have so far:
def list_files(bucket):
    contents = []
    for image in bucket.objects.all():
        contents.append(image.key)
    return contents

def files():
    list_of_files = list_files(bucket)
    return render_template('index.html', my_bucket=bucket, list_of_files=list_of_files)
and this is the HTML snippet:
<table class="table table-striped">
  <tr>
    <th>My Photos</th>
    {% for f in list_of_files %}
    <td> <img src="{{ f }}"></td>
    {% endfor %}
  </tr>
</table>
Thanks a lot!
Loading an image into an HTML page requires a real image that exists in a directory. Images from AWS S3 can be loaded onto an HTML page if you first download them into a directory and then use that path as the source in an HTML <img> tag.
I found a solution to this, but you will need to adapt it to your needs.
Define a function that loads the image from S3 like this:
import tempfile

import boto3

s3 = boto3.resource('s3', region_name='us-east-2')
bucket = s3.Bucket('bucketName')
object = bucket.Object('dir/subdir/2015/12/7/img01.jpg')
tmp = tempfile.NamedTemporaryFile()

def imageSource(bucket, object, tmp):
    # Download the S3 object into the temporary file
    with open(tmp.name, 'wb') as f:
        object.download_fileobj(f)
    src = tmp.name  # local path to the downloaded image
    return src
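To get the downloaded image to the browser from Flask, one option is to serve the temporary path with send_file. A rough sketch, assuming the function above and a route name of my own choosing:
from flask import Flask, send_file

app = Flask(__name__)

@app.route('/image')  # hypothetical route
def image():
    # imageSource() returns the local temp path the object was downloaded to
    return send_file(imageSource(bucket, object, tmp), mimetype='image/jpeg')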
Just ran into this problem as well; it seems this hasn't been updated for a while, so I will try to add to it.
Your current approach below is right. The only issue is that, in order to render an image without downloading it to your server, you have to have a direct URL to your S3 file. Currently, you only have the image name, not the full URL.
def list_files(bucket):
    contents = []
    for image in bucket.objects.all():
        contents.append(image.key)
    return contents

def files():
    list_of_files = list_files(bucket)
    return render_template('index.html', my_bucket=bucket, list_of_files=list_of_files)
Currently, your items in the list of files will look like this:
['file_name1', 'file_name2', 'file_name3']
In order for them to render in your browser directly you need them to look like this:
['file_url1', 'file_url2', 'file_url3']
S3 file URLs look something like this: https://S3BUCKETNAME.s3.amazonaws.com/file_name1.jpg
Therefore, instead of the line below
contents.append(image.key)
you need to replace image.key with something that builds the full URL:
contents.append(f'https://{S3BUCKETNAME}.s3.amazonaws.com/{image.key}')
That should do it; the HTML you have should work correctly as is. The only other big risk is that the files you uploaded are not public; for that, you'll need to look at the settings of your bucket on AWS.
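Put together, the revised helper might look like this (the bucket name is a placeholder):
import boto3

S3BUCKETNAME = 'your-bucket-name'  # placeholder

s3 = boto3.resource('s3')
bucket = s3.Bucket(S3BUCKETNAME)

def list_files(bucket):
    # Return browser-loadable URLs instead of bare object keys
    return [
        f'https://{S3BUCKETNAME}.s3.amazonaws.com/{obj.key}'
        for obj in bucket.objects.all()
    ]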
Additional Resources and Sources:
Adding a public policy to your AWS S3 Bucket: https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html
Uploading and downloading files with Flask & S3: https://stackabuse.com/file-management-with-aws-s3-python-and-flask/
I have some Python that can request a presigned POST URL to upload an object into an S3 bucket. It works running it locally, under my IAM user with Admin abilities, and I can upload things to the bucket using Postman and cURL. However, when trying to run the same code in Lambda, it says "The AWS Access Key Id you provided does not exist in our records.".
The only difference is that the Lambda function runs without Admin-rights (but it does have a policy that allows it to run any S3 action on the bucket) and is using a different (older) version of Boto3.
This is the code I'm trying to use: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-presigned-urls.html#generating-a-presigned-url-to-upload-a-file
I've tried to use the details returned from the Lambda function in exactly the same way as I'm using the details returned locally, but the Lambda details don't work.
Here is a fully working solution for AWS Lambda:
Attach the AmazonS3FullAccess policy.
Do not use a multipart/form-data upload.
Configure S3 CORS.
Use the following Python code:
import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    upload_key = 'myfile.pdf'
    download_key = 'myfile.pdf'
    bucket = 'mys3storage'

    # Generate the presigned URL for download
    presigned_download_url = s3.generate_presigned_url(
        ClientMethod='get_object',
        Params={
            'Bucket': bucket,
            'Key': download_key
        },
        ExpiresIn=3600  # URL lifetime; ExpiresIn is an argument of generate_presigned_url, not a Param
    )

    # Generate the presigned URL for upload
    presigned_upload_url = s3.generate_presigned_url(
        ClientMethod='put_object',
        Params={
            'Bucket': bucket,
            'Key': upload_key,
            'ContentType': 'application/pdf'
        },
        ExpiresIn=3600
    )

    # Return both URLs
    return {
        "upload_url": presigned_upload_url,
        "download_url": presigned_download_url
    }
This is a slight duplicate...
Essentially, the temporary Lambda execution role's credentials expire once the Lambda function completes. Therefore, by the time your client uses the signed URL, the credentials are no longer valid.
The solution here is to use AWS STS to assume a different IAM role in the Lambda (aka AssumeRole), one that has the necessary S3 permissions, when creating the signed URL. This role's credentials will not expire with the function invocation, and thus the URL will remain valid.
See this example for further setup instructions.
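A rough sketch of what that can look like in the handler; the role ARN, bucket, and key are placeholders, and the role must trust the Lambda's execution role:
import boto3

ROLE_ARN = 'arn:aws:iam::123456789012:role/presign-role'  # placeholder

def lambda_handler(event, context):
    # Exchange the execution role's credentials for the signing role's
    creds = boto3.client('sts').assume_role(
        RoleArn=ROLE_ARN, RoleSessionName='presign'
    )['Credentials']
    s3 = boto3.client(
        's3',
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
    )
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'my-bucket', 'Key': 'my-key'},  # placeholders
        ExpiresIn=3600,
    )
    return {'url': url}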
You need to post the x-amz-security-token value when you use a role.
I had the same issue and it was driving me crazy. Locally everything went smoothly, but once deployed to Lambda I got a 403, whether I used create_presigned_post or create_presigned_url.
It turned out the role the Lambda was using was different from the one my local AWS user has. (The Lambda role was automatically created with AWS SAM in my case.) After granting the Lambda role S3 permissions, the error was resolved.
Good question. You didn't describe how you are getting your credentials to the Lambda function. Your code, specifically this:
s3_client = boto3.client('s3')
expects to find default credentials, e.g. via the ~/.aws/credentials file. You won't (nor should you) have that file in your Lambda execution environment, but you probably have it in your local environment. I suspect you are not getting your credentials to the Lambda function at all.
There are two options for getting the credentials in place in Lambda:
Don't use credentials at all; instead, give the Lambda function an IAM role that provides the required access to S3. If you do this, you won't need credentials. This is best practice.
Set the credentials as environment variables for your Lambda function. You can directly define AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and the code above will pick those up and use them.
The official Python tutorial for this does not mention the x-amz-security-token in the context of Lambda functions; however, it needs to be included as a form value when uploading a file to S3. To recap: when using Lambda, make sure the role attached to the function has S3 access, and that the extra form field with the x-amz-security-token value is present.
<form action="URL HERE" method="post" enctype="multipart/form-data">
<input type="hidden" name="key" value="KEY HERE" />
<input type="hidden" name="AWSAccessKeyId" value="ACCESS KEY HERE" />
<!-- ADD THIS ONE -->
<input type="hidden" name="x-amz-security-token" value="SECURITY TOKEN HERE" />
<!-- ADD THIS ONE -->
<input type="hidden" name="policy" value="POLICY HERE" />
<input type="hidden" name="signature" value="SIGNATURE HERE" />
File:
<input type="file" name="file" /> <br />
<input type="submit" name="submit" value="Upload to Amazon S3" />
</form>
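If you generate the POST fields with boto3 inside the Lambda, the token is included automatically whenever temporary credentials are in use. A small sketch with placeholder names:
import boto3

s3 = boto3.client('s3')

# With Lambda's temporary credentials, the returned fields include
# 'x-amz-security-token'; render every field as a hidden input in the form.
post = s3.generate_presigned_post(
    Bucket='my-bucket',          # placeholder
    Key='uploads/${filename}',
    ExpiresIn=3600,
)
print(post['fields'].keys())  # key, policy, signature/credential fields, and the token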
You can try the code below to generate a pre-signed URL for an object:
import logging

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
bucket = 'test1'
download_key = 'path/to/Object.txt'

def lambda_handler(event, context):
    try:
        response = s3.generate_presigned_url(
            'get_object',
            Params={'Bucket': bucket, 'Key': download_key},
            ExpiresIn=3600,
        )
    except ClientError as e:
        logging.error(e)
        return None
    url = response
    print(url)
    return {
        'url': url
    }
I want to upload a video file to an AWS S3 bucket using Boto3. I've already created a bucket named 'django-test' and given it the required permissions. I am using Django and working on a Windows 10 machine.
I've created a function called store_in_s3 in the views.py file of my Django app.
The expected file size is under 200 MB. I am a bit confused by the several approaches I've tried. Below is the existing code:
def store_in_s3(request):
    transfer = S3Transfer(boto3.client(
        's3',
        region_name=settings.AWS_S3_REGION_NAME,
        aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
        aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY
    ))
    client = boto3.client('s3')
    bucket = "django-test"
    file = request.FILES["file"]
    filename = file.name
    transfer.upload_file(filename, bucket, "test.mov")
At this point, I am getting the following error: FileNotFoundError: [WinError 2] The system cannot find the file specified: 'test.mov'
But test.mov is the file I uploaded using the HTML form.
My code in HTML form is below:
<form method="post" enctype="multipart/form-data">
{% csrf_token %}
{{ form.file }}
<button type="submit">Submit</button>
</form>
Additional information: I was successful at uploading the video file at one point in this development process, but on S3 its size was ridiculously small, only 28 bytes. That's why I restarted building the uploader.
I'll be grateful for any help. Please feel free to ask if you need any more information on the question. Thank you.
Since the file size is greater than 2.5 MB, Django stores it in a temporary location rather than in memory. From the error message, it seems the file can't be found by its bare name, so try passing the temporary file's path instead, i.e. file.temporary_file_path().
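A minimal sketch of that fix, assuming the bucket name from the question; falling back to upload_fileobj covers small uploads that Django keeps in memory:
import boto3
from django.core.files.uploadedfile import TemporaryUploadedFile

def store_in_s3(request):
    client = boto3.client('s3')
    file = request.FILES["file"]
    if isinstance(file, TemporaryUploadedFile):
        # Large uploads are streamed to disk by Django; hand boto3 the temp path
        client.upload_file(file.temporary_file_path(), "django-test", "test.mov")
    else:
        # Small uploads stay in memory; upload the file object directly
        client.upload_fileobj(file, "django-test", "test.mov")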
I was following LPTHW ex51 by Zed Shaw (http://learnpythonthehardway.org/book/ex51.html) and doing his study drills on web.py. I am a web.py beginner and was successful in uploading an image via a web page form and storing it in a local folder. The issue is that each image I store replaces the earlier one. Also, I can't figure out how to upload multiple images to the server and store them all.
Here is my class Upload in app.py:
class Upload(object):
    def GET(self):
        web.header("Content-Type", "text/html; charset=utf-8")
        return render.upload()

    def POST(self):
        x = web.input(myfile={})
        filedir = "C:/Users/tejas/Documents/filesave"
        if 'myfile' in x:
            fout = open(filedir + '/' + 'myfile.jpg', 'wb')  # creates the file where the uploaded file should be stored
            fout.write(x.myfile.file.read())  # writes the uploaded file to the newly created file
            fout.close()  # closes the file, upload complete
            return "Success! Your image has been saved in the given folder."
        raise web.seeother('/upload')
and my upload form, upload.html:
<html>
  <head><title>Upload image file</title></head>
  <body style="background-color: lightblue; font-family: verdana; font-size: 100%;">
    <div id="header"><h1 style="color:blue;">Upload image file</h1></div>
    <form method="POST" enctype="multipart/form-data" action="">
      <input type="file" name="myfile"/>
      <br/> <br/><br/>
      <input type="submit"/>
    </form>
  </body>
</html>
I searched a lot for similar questions, but they were all in PHP; I tried something similar in my code but could not get it working. Any suggestions to improve the code?
The reason your code replaces the earlier image is that you are hardcoding the filename it is saved under:
fout = open(filedir + '/' + 'myfile.jpg', 'wb')
Every time you upload, the file being written is the same one. This can be corrected by giving each uploaded file a new name, or by extracting the name from the web input:
fout = open(filedir + '/' + x.myfile.filename, 'wb')
According to the Python documentation, opening a file in write mode erases an existing file with the same name, so make sure each new file you upload has a different name from the previously uploaded ones.
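A small sketch of a POST handler along those lines; the uuid prefix is my own addition so that two uploads with the same filename never collide:
import os
import uuid
import web

class Upload(object):
    def POST(self):
        x = web.input(myfile={})
        filedir = "C:/Users/tejas/Documents/filesave"
        if 'myfile' in x:
            # Keep the client's filename but prefix a UUID to make it unique
            name = uuid.uuid4().hex + '_' + os.path.basename(x.myfile.filename)
            with open(os.path.join(filedir, name), 'wb') as fout:
                fout.write(x.myfile.file.read())
            return "Success! Your image has been saved as " + name
        raise web.seeother('/upload')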
I am trying to upload an image using a REST web service in my Symfony application. I have tried the following code, but it throws the error "undefined index photo". I want to know the right way to do it.
I followed "how to send / get files via web-services in php", but it didn't work.
Here is my HTML file, which hits the application URL:
<form action="http://localhost/superbApp/web/app_dev.php/upload" enctype='multipart/form-data' method="POST">
<input type="file" name="photo" ></br>
<input type="submit" name="submit" value="Upload">
</form>
And my controller method looks like this:
public function uploadAction() {
    $request = $this->getRequest();
    /*** get the request method ***/
    $RequestMethod = $request->getMethod();
    $uploads_dir = '/uploads';
    foreach ($_FILES["photo"]["error"] as $key => $error) {
        if ($error == UPLOAD_ERR_OK) {
            $tmp_name = $_FILES["photo"]["tmp_name"][$key];
            $name = $_FILES["photo"]["name"][$key];
            move_uploaded_file($tmp_name, $uploads_dir."/".$name);
        }
    }
}
If you are using Symfony, you should use Symfony forms to do this. In your example, you use a URL pointing to app_dev.php, but that URL doesn't work in production mode. The Symfony cookbook has an article explaining how to upload files, which you should read:
http://symfony.com/doc/current/cookbook/doctrine/file_uploads.html
Once you have done this, you can upload images via a REST web service using the route specified for your action, setting the Content-Type to multipart/form-data; the name of the field to which you add the image would be something like package_yourbundle_yourformtype[file].
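For example, a quick client-side test with Python's requests library; the route and field name are placeholders that must match your form type:
import requests

# requests sets the multipart/form-data Content-Type automatically
with open('photo.jpg', 'rb') as f:
    resp = requests.post(
        'http://localhost/superbApp/web/app_dev.php/upload',  # placeholder route
        files={'package_yourbundle_yourformtype[file]': f},
    )
print(resp.status_code, resp.text)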