Is it possible to upload a file to S3 from a remote server?
The remote server is basically a URL-based file server. For example, http://example.com/1.jpg serves the image. It doesn't do anything else, and I can't run code on that server.
Is it possible to have another server tell S3 to upload a file from http://example.com/1.jpg?
upload from http://example.com/1.jpg
server -------------------------------------------> S3 <-----> example.com
If you can't run code on the server or execute requests, then no, you can't do this. You will have to download the file to a server or computer that you own and upload it from there.
You can see the operations you can perform on Amazon S3 at http://docs.amazonwebservices.com/AmazonS3/latest/API/APIRest.html
Checking the operations for both the REST and SOAP APIs, you'll see there's no way to give Amazon S3 a remote URL and have it grab the object for you. All of the PUT requests require the object's data to be provided as part of the request, meaning the server or computer that initiates the web request needs to have the data.
I had a similar problem in the past where I wanted to download my users' Facebook thumbnails and upload them to S3 for use on my site. The way I did it was to download the image from Facebook into memory on my server, then upload it to Amazon S3; the whole thing took under two seconds. After the upload to S3 was complete, I wrote the bucket/key to a database.
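As an illustration, here is a minimal sketch of that download-then-upload flow, assuming Python with the requests and boto3 libraries; the URL, bucket, and key below are placeholders:

import io
import boto3
import requests

# Download the remote image into memory (no temp file on disk)
response = requests.get("http://example.com/1.jpg", timeout=10)
response.raise_for_status()

# Upload the in-memory bytes to S3
s3 = boto3.client("s3")
s3.upload_fileobj(
    io.BytesIO(response.content),
    "my-bucket",         # placeholder bucket name
    "thumbnails/1.jpg",  # placeholder object key
    ExtraArgs={"ContentType": response.headers.get("Content-Type", "image/jpeg")},
)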
Unfortunately there's no other way to do it.
I think the suggestion provided is quite good: you can SCP the file to the S3 bucket. Supplying the .pem file gives you passwordless authentication, and a PHP file can validate the extensions and pass the file as an argument to the SCP command.
The only problem with this solution is that you must have your instance in AWS. You can't use this solution if your website is hosted with another hosting provider and you are trying to upload files straight to an S3 bucket.
Technically it's possible using AWS Signature Version 4. Assuming your remote server is the customer in the image below, you could prepare a form on the main server and send the form fields to the remote server for it to curl. Detailed example here.
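As a rough sketch of that idea, the main server could generate the form fields with boto3's generate_presigned_post (the bucket and key here are placeholders), and the remote side would then curl the fields together with the file:

import boto3

s3 = boto3.client("s3")

# Generate a presigned POST policy; the returned url and fields can be
# handed to the remote server, which submits them along with the file.
post = s3.generate_presigned_post(
    Bucket="my-bucket",   # placeholder
    Key="uploads/1.jpg",  # placeholder
    ExpiresIn=3600,
)
print(post["url"])     # the S3 endpoint to POST to
print(post["fields"])  # key, policy, and signature fields for the form

On the remote server, each returned field would become a -F argument to curl, with the file itself added as the final file field.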
You can use the scp command from a terminal.
1) Using the terminal, go to the directory containing the file you want to transfer to the server.
2) Type this:
scp -i yourAmazonKeypairPath.pem fileNameThatYouWantToTransfer.php ec2-user@ec2-00-000-000-15.us-west-2.compute.amazonaws.com:
N.B. Add "ec2-user@" before the EC2 hostname you got from the EC2 console! This is an easy detail to miss.
3) Your file will be uploaded and the progress will be shown. When it reaches 100%, you are done!
I am trying to upload a file to the Shared Documents library of my SharePoint website. The files are of type PDF and HTML. I am running a ColdFusion development environment and using CFHTTP commands to execute HTTP requests. I have been able to push a POST command and a PUT command to the proper endpoints listed in the link below:
Link: https://learn.microsoft.com/en-us/graph/api/driveitem-createuploadsession?view=graph-rest-1.0#best-practices
I do not understand why, but the first section that mentions the HTTP requests for creating an upload session differs from what is used in the example a little further down. For my project, I am using the endpoint:
"/{variables.instance.microsoftGraphAPIURL}/drive/root:/{item-path}:/createUploadSession"
P.S. variables.instance.microsoftGraphAPIURL is a variable pointing to a Microsoft Graph endpoint for our SharePoint website.
I have had better luck using PUT commands than POST commands for creating an upload session, and I am able to receive an upload URL, but the issue comes when trying to upload the file. I am uploading a file from the same directory with a size of 114992 bytes, and I keep getting "The Content-Range header length does not match the provided number of bytes." whenever I run my PUT command to upload the file.
Thus, my Content-Range is "bytes 0-114991/114992" and my Content-Length is "114992". For the image below, I replaced the file with a PDF, but the original file was an HTML page of 114992 bytes. I want to use a resumable upload session so that one function can upload image, HTML, and PDF files.
If anyone could tell me whether there is an issue with my content headers, my upload session HTTP request, or anything else that is causing my problem, that would be amazing! Thank you.
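To make the flow concrete, here is roughly what I am attempting, sketched in Python with the requests library; the Graph URL, item path, token, and file name are placeholders:

import requests

token = "..."  # placeholder access token
graph_url = "https://graph.microsoft.com/v1.0/sites/{site-id}"  # stands in for variables.instance.microsoftGraphAPIURL
headers = {"Authorization": "Bearer " + token}

# Step 1: create the upload session
session = requests.post(
    graph_url + "/drive/root:/{item-path}:/createUploadSession",
    headers=headers,
)
upload_url = session.json()["uploadUrl"]

# Step 2: PUT the bytes; Content-Range and Content-Length must describe
# exactly the bytes sent in this request body
with open("page.html", "rb") as f:  # placeholder file
    data = f.read()

size = len(data)  # e.g. 114992
put_headers = {
    "Content-Length": str(size),
    "Content-Range": "bytes 0-{}/{}".format(size - 1, size),
}
requests.put(upload_url, headers=put_headers, data=data)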
I'm trying to host a static site through S3 and API Gateway (using S3 directly isn't an option due to security policy). One of the pages runs a client-side script to pull back a set of files from a specific folder on the server. I've set up the hosting following the Amazon tutorial.
For this to run, my script needs to be able to obtain the list of files for a specific folder.
If I were hosting the site on my own server using Apache, I could rely on the directory listing feature, where a GET on a folder with no index.html returns a file list. The tutorial suggests that this should be possible, but I can't seem to get it to work. If I submit a request to a particular {prefix}/{identifier}, I can retrieve the specific file, but sending a request to {prefix}/ returns an error.
Is there a way I can replicate directory listings so my JavaScript can pull the list down and read it, or is the only solution to write a server-side API in Lambda?
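If a Lambda does end up being the answer, I imagine something along these lines (Python with boto3; the bucket name is a placeholder):

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-static-site-bucket"  # placeholder

def handler(event, context):
    # Prefix taken from the query string, e.g. ?prefix=images/
    prefix = (event.get("queryStringParameters") or {}).get("prefix", "")
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix, Delimiter="/")
    keys = [obj["Key"] for obj in resp.get("Contents", [])]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(keys),
    }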
Our team is building a web app that allows users to download video files. We currently host our files on AWS S3, but since our site doesn't reside on AWS, we can't use <a href="blah"> to prompt a download. If we use that HTML element, users simply get redirected to a video player - which is fine, but Safari on mobile doesn't allow users to download the video file via the video player.
We found that manually setting the file's content disposition to attachment on S3 works, but we have not found a way to automate that. We tried adding a content-disposition: attachment key-value pair to our upload payload, but that only adds "User defined" metadata in the form of x-amz-meta-content-disposition, which doesn't help: the file still could not be downloaded as an attachment. It seems only "System defined" metadata works.
Has anyone ever encountered this issue before and found a workaround?
see screenshot for what I'm referencing
You can set the content disposition when the file is created.
This is done by uploading the file via a presigned url.
See https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html for details on the presigned urls.
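For example, a minimal boto3 sketch of a presigned upload URL that bakes the disposition in at creation time (bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")

# The uploading client must send the same Content-Disposition header
# with its PUT, since the value is part of the signature.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={
        "Bucket": "my-video-bucket",  # placeholder
        "Key": "videos/clip.mp4",     # placeholder
        "ContentDisposition": "attachment",
    },
    ExpiresIn=3600,
)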
Alternatively, you can return a presigned URL that gets the file from S3 and overrides the Content-Disposition header on the GET request.
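And a sketch of that second option, overriding the disposition on the GET itself (again with placeholder names):

import boto3

s3 = boto3.client("s3")

# Presigned GET URL that makes the browser download the object
# instead of playing it in a video player.
download_url = s3.generate_presigned_url(
    "get_object",
    Params={
        "Bucket": "my-video-bucket",  # placeholder
        "Key": "videos/clip.mp4",     # placeholder
        "ResponseContentDisposition": 'attachment; filename="clip.mp4"',
    },
    ExpiresIn=3600,
)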
I'm trying to upload files to AWS S3 using Newman for backend automation tests.
But I don't know how to do it, because the only working way I have found is using the binary body type with a local file selected from my desktop in the PUT request.
There are two requests for this:
The first request gets us a pre-signed URL:
Step 1
The second PUT request is used for uploading the file:
Step 2
But could you help me figure out a solution for Newman? A binary file body is not valid in this case.
My question came up when I experienced two different behaviors in the object URL for JSON files stored in an S3 bucket.
Consider a json file: mydata.json
If I upload this file using the S3 dashboard on the AWS website, I am able to see the data in the browser at //s3-us-west-2.amazonaws.com/bucket/folder/mydata.json. I am also able to read this data from a different application if I create a specific configuration in the S3 bucket.
On the other hand, if I use the boto3 library for Python and upload the same file to the same bucket (making the file public in the process), clicking the object URL downloads the file rather than opening the data in the browser.
This is the code I used:
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('bucket_name')
# upload json file (path and jsonkey defined elsewhere), then make it public
bucket.upload_file(path, jsonkey)
object_acl = s3.ObjectAcl('bucket_name', jsonkey)
bucket_response = object_acl.put(ACL='public-read')
I explored file properties such as metadata. When I upload the file via the dashboard, the metadata assigned is Content-Type: application/json; via boto3 it is Content-Type: binary/octet-stream. I don't really know if metadata affects the object URL behavior.
In this context, how can I properly configure files in json format to be downloaded or to be read? I mean, what is the main configuration that affects object URL behavior?
I couldn't find a significant difference between the two methods (dashboard and boto3) in properties or permissions, besides the Content-Type metadata. However, when I tried to change the Content-Type, the behavior was the same.
If there is any other information I can provide to clarify this question, feel free to ask. Thanks in advance.
The documentation for the S3 bucket resource's upload_file() method is not ideal as it simply refers you to the equivalent S3Transfer docs for how extra arguments can be used.
Try the following:
bucket.upload_file(path, jsonkey, ExtraArgs={'ContentType': "application/json"})
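To confirm the metadata took effect, a quick head_object check should now report the JSON content type (the bucket name and key below are placeholders):

import boto3

s3 = boto3.client("s3")
resp = s3.head_object(Bucket="bucket_name", Key="folder/mydata.json")
print(resp["ContentType"])  # should now be application/json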