I am trying to add a file to an S3 bucket in my AWS account using Postman. See the screenshot below.
I pass the Host header as divyesh.vkinds.com.s3.amazonaws.com, where divyesh.vkinds.com is my bucket name, and in the Body I attach index.html as a file, as in the image below.
But it gives me the error The provided 'x-amz-content-sha256' header does not match what was computed. I searched for it but couldn't find anything.
Please check the content headers. Add Content-Type as text/plain and a Date header in this format: XX-XX-XXXX
I have also faced the same problem. The issue is that Postman does not calculate the SHA. It defaults to the SHA-256 of an empty string: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
So in the Postman headers, add an explicit key x-amz-content-sha256. Calculate the SHA-256 of your file using a sha command and provide it as the value. The command below works on Linux flavors:
shasum -a 256 index.html
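If you prefer to compute it programmatically, here is a minimal Python sketch (file name taken from the question; assumes the file fits in memory):

import hashlib

# Hex-encoded SHA-256 of the file, equivalent to: shasum -a 256 index.html
with open("index.html", "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())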
A couple of other observations on the question:
You can change the Body to binary and choose the file you want to upload.
Provide the complete path, including the file name, in the upload URL. E.g. if you provide the URL as <your bucket name>.s3.<region>.amazonaws.com/test/index.html, the file will be copied to the test directory in the bucket with the name index.html.
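For comparison, if you script the same upload with boto3 instead of Postman (bucket name and region below are placeholders), the key carries the full path:

import boto3

# Key "test/index.html" places the file under the test/ prefix in the bucket
s3 = boto3.client("s3", region_name="us-east-1")  # placeholder region
s3.upload_file("index.html", "your-bucket-name", "test/index.html")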
I encountered this situation recently, and the issue was that I was copying an active log file which changed between when my side calculated the hash and when the file was actually uploaded. My solution was to copy the file to a temporary location, then upload that stable file.
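A rough Python sketch of that workaround (source path, bucket, and key are placeholders):

import shutil
import tempfile

import boto3

# Snapshot the live log to a stable temp file so the bytes cannot change mid-upload
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    snapshot = tmp.name
shutil.copyfile("/var/log/app.log", snapshot)

boto3.client("s3").upload_file(snapshot, "your-bucket-name", "logs/app.log")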
We are using an S3 bucket to hold zip files that customers have created, ready for them to download. We are using CloudFront only to handle the SSL. We have caching disabled.
The customer receives an email to download their zip file, and that works great. The S3 lifecycle removes the file after 2 weeks. Now, if they add more photos to their account and re-request their zip file, it overwrites the current zip file with the new version. So the link is exactly the same. But when they download, it's the previous zip file, not the new one.
Additionally, after the two weeks the file is removed, and if they try to download it they get an error that basically says they need to log in and re-request their photos. So they generate a new zip file, but their link still gives them the error message.
I could have the lambda that creates the zip file invalidate the file when it creates it, but I didn't think I needed to invalidate since we aren't caching?
Below is the screenshot of the caching policy I have selected in CloudFront
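If invalidation does turn out to be needed, a minimal boto3 sketch of what that Lambda call could look like (distribution ID and path are placeholders):

import time

import boto3

cloudfront = boto3.client("cloudfront")
cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/zips/customer-123.zip"]},  # placeholder path
        "CallerReference": str(time.time()),  # must be unique per request
    },
)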
Hi, I am downloading images stored in Amazon S3, but the downloaded names show some hash code. How do I fix this?
Please help, as I am new to AWS and no expert in it. How can I make changes so that it downloads with the original file name?
In your download code you should add the header:
Content-Disposition: attachment; filename="foo.bar"
Note that the filename in Content-Disposition is case-sensitive.
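If the downloads go through your own code, one common way to attach this header is a presigned URL; a minimal boto3 sketch (bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")
# Ask S3 to send Content-Disposition on this download only
url = s3.generate_presigned_url(
    "get_object",
    Params={
        "Bucket": "your-bucket-name",
        "Key": "a1b2c3d4e5.png",  # placeholder for the hashed object name
        "ResponseContentDisposition": 'attachment; filename="foo.bar"',
    },
    ExpiresIn=3600,  # URL valid for one hour
)
print(url)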
I am testing out the transfer function in GCP:
This is the open data in CSV: https://www.stats.govt.nz/assets/Uploads/Annual-enterprise-survey/Annual-enterprise-survey-2018-financial-year-provisional/Download-data/annual-enterprise-survey-2018-financial-year-provisional-csv.csv
My configuration in GCP:
The transfer failed as below:
Question 1: why the transfer failed?
Question 2: where is the error log?
Thank you very much.
[UPDATE]:
I checked log history, nothing was captured:
[Update 2]:
Error details:
Details: First line in URL list must be TsvHttpData-1.0 but it is: Year,Industry_aggregation_NZSIOC,Industry_code_NZSIOC,Industry_name_NZSIOC,Units,Variable_code,Variable_name,Variable_category,Value,Industry_code_ANZSIC06
I noticed that in the transfer service, if you choose the third option for the source, it reads the URL of a TSV file. Essentially TSV and PSV are just variants of CSV, and I have no problem retrieving the source CSV file. The error details seem to imply something unexpected there.
The problem is that in your example, you are pointing to a data file as the source of the transfer. If we read the documentation on GCS transfer, we find that we must specify a file that lists the URLs of the source objects we want to copy.
The format of this file is Tab-Separated Values (TSV), and each entry contains a number of fields, including:
The URL of the source of the file.
The size in bytes of the source file.
An MD5 hash of the content of the source file.
What you specified (just the URL of the source file) ... is not what is required.
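For reference, a minimal URL list for the CSV in the question would look like the following (fields are tab-separated; the size and MD5 shown are placeholders, not the real values):

TsvHttpData-1.0
https://www.stats.govt.nz/assets/Uploads/Annual-enterprise-survey/Annual-enterprise-survey-2018-financial-year-provisional/Download-data/annual-enterprise-survey-2018-financial-year-provisional-csv.csv	<size-in-bytes>	<base64-md5>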
One possible solution would be to use gsutil. It can take a stream as input and write that stream to a given object. For example:
curl http://[URL]/[PATH] | gsutil cp - gs://[BUCKET]/[OBJECT]
References:
Creating a URL list
Can I upload files to google cloud storage from url?
I have uploaded files to an S3 bucket with a UUID as the key for each file name.
I have a requirement to keep the file's key as the stored UUID, but on download I need the downloaded file name to be the actual file name, e.g. Foo.png.
Stored file on AWS S3: 0e8221b9-9bf4-49d6-b0c0-d99e86f91f8e.png
Downloaded file name should be: Foo.png
I have tried setting the Content-Disposition metadata, but the downloaded file name still contains the UUID.
Make the changes below and try again.
Update Content-Disposition = attachment; filename="abc.csv". Please note the file name is case-sensitive, and if you are using a CDN it will take some time after you apply the changes. After you update the metadata, download the file using the object URL; direct download does not work. If I download the file using the object URL, the downloaded file name is abc.csv instead of test.csv.
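In boto3 terms, updating the metadata in place is a self-copy; a minimal sketch using the names from the question (bucket is a placeholder):

import boto3

s3 = boto3.client("s3")
key = "0e8221b9-9bf4-49d6-b0c0-d99e86f91f8e.png"
# S3 object metadata is immutable, so copy the object onto itself with new headers
s3.copy_object(
    Bucket="your-bucket-name",
    Key=key,
    CopySource={"Bucket": "your-bucket-name", "Key": key},
    MetadataDirective="REPLACE",
    ContentDisposition='attachment; filename="Foo.png"',
)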
The question is: using a Lambda function, is it possible to look through an S3 bucket with user folders for specific file names (e.g. Test1.txt and Test2.txt)? Inside each file is just a random number. Then write back a text file into the matched file's respective folder, basically saying "Test1.txt and Test2.txt have been touched." If possible, in Python.
Yes! Use Amazon's AWS SDK for Python (boto3). Its examples for downloading a file from S3 carry over directly, since the API for listing files and uploading files is pretty similar.
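A minimal Python sketch of such a Lambda (bucket name is a placeholder; assumes keys look like <user-folder>/Test1.txt):

import boto3

s3 = boto3.client("s3")
BUCKET = "your-bucket-name"  # placeholder
TARGETS = {"Test1.txt", "Test2.txt"}

def lambda_handler(event, context):
    # Walk every key in the bucket and look for the target file names
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            folder, _, name = obj["Key"].rpartition("/")
            if name in TARGETS and folder:
                # Write a marker file back into the same user folder
                s3.put_object(
                    Bucket=BUCKET,
                    Key=f"{folder}/touched.txt",
                    Body=f"{name} has been touched.".encode(),
                )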