Zip file uploaded through Lambda+API gateway with POSTMAN is corrupted - amazon-web-services

I am doing the following steps:
I have an API Gateway (PUT method) which is integrated with an AWS Lambda function.
It is a direct mapping of multipart/form-data (so no big logic is happening here).
Now the file is uploaded through Postman, and the upload itself succeeds.
When I download this ZIP file, it says "End-of-central-directory signature not found. Either this file is not a Zip file, or it constitutes one disk of a multi-part Zip file."
Then I opened the ZIP in Notepad++ (yes, I did that), and I can see only a few lines of binary data, whereas my original file has a lot more.
Please help, and let me know if more information is needed.

I had the same issue, and it turned out I was sending the request without all the needed headers; it worked fine from Postman. Please check whether your request contains these Postman headers:
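As a supplementary illustration (not part of the original answer), here is a minimal Python sketch of the same upload done with the requests library, letting it build the multipart/form-data body and the matching Content-Type header (including the boundary) instead of setting them by hand; the endpoint URL and file name are placeholders:

import requests

# Placeholder API Gateway endpoint; replace with your own stage URL.
url = "https://example.execute-api.us-east-1.amazonaws.com/dev/upload"

with open("archive.zip", "rb") as f:
    # Passing the file via files= makes requests generate the multipart body
    # and the Content-Type header with the correct boundary for us.
    files = {"file": ("archive.zip", f, "application/zip")}
    response = requests.put(url, files=files)

print(response.status_code, response.text)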

Related

Why does my single part MP3 file fail to upload to AWS S3 due to an InvalidPart multipart file upload error?

I have a 121 MB MP3 file I am trying to upload to my AWS S3 bucket so I can process it via Amazon Transcribe.
The MP3 file comes from an MP4 file I stripped the audio from using FFmpeg.
When I try to upload the MP3, using the S3 object upload UI in the AWS console, I receive the below error:
InvalidPart
One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.
The error refers to the MP3 being a multipart file and the "next" part being missing, but it is not a multipart file.
I have re-run the MP4 file through FFmpeg 3 times in case the first output was corrupt, but that has not fixed anything.
I have searched a lot on Stack Overflow and have not found a similar case where anyone uploading a single 5 MB+ file received the error I am seeing.
I've also ruled out FFmpeg being the issue by saving the audio as an MP3 file with VLC, but I receive the exact same error.
What is the issue?
Here's the console in case it helps:
121 MB is below the 160 GB S3 console single-object upload limit, the 5 GB single-object upload limit of the REST API / AWS SDKs, and the 5 TB limit for multipart uploads, so I really can't see the issue.
Assuming the file exists and you have a stable internet connection (i.e. no corrupted uploads), you may somehow have incomplete multipart upload parts in your bucket that are conflicting with the upload. Either follow this guide to remove them and try again, or try creating a new folder/bucket and re-uploading.
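If you would rather clean those up programmatically than through the console, a small boto3 sketch along these lines should work (the bucket name is a placeholder, and pagination is ignored for brevity):

import boto3

# List any incomplete multipart uploads left in the bucket and abort them.
s3 = boto3.client("s3")
bucket = "my-bucket"

response = s3.list_multipart_uploads(Bucket=bucket)
for upload in response.get("Uploads", []):
    s3.abort_multipart_upload(
        Bucket=bucket,
        Key=upload["Key"],
        UploadId=upload["UploadId"],
    )
    print("Aborted", upload["Key"], upload["UploadId"])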
You may also have a browser caching issue or an extension conflict, so try incognito mode (with extensions disabled) or another browser if re-uploading to another bucket/folder doesn't work.
Alternatively, try the AWS CLI s3 cp command or a quick S3 file upload script in a supported SDK language to make sure it's not a console UI issue.
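As an illustration of the SDK route (not from the original answer), a minimal boto3 upload looks like this; the bucket name, file name, and key are placeholders, and upload_file switches to multipart upload automatically for large files:

import boto3

# Upload the file with the SDK to rule out a console UI issue.
s3 = boto3.client("s3")
s3.upload_file("audio.mp3", "my-bucket", "audio.mp3")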

Cannot add file to AWS S3 bucket using Postman

I am trying to add a file to an S3 bucket in my AWS account using Postman; see the screenshot below.
I pass Host in the header as divyesh.vkinds.com.s3.amazonaws.com, where divyesh.vkinds.com is my bucket name, and in the Body I attach the file index.html with the file type, as in the image below.
But it is giving me the error: The provided 'x-amz-content-sha256' header does not match what was computed.
I searched for it but can't find anything.
Please check the content headers. Add Content-Type as text/plain and the date in this format: XX-XX-XXXX
I have also faced the same problem. The issue was that Postman does not calculate the SHA; it defaults to the SHA-256 of an empty string, e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.
So in the Postman headers, add an explicit key x-amz-content-sha256. Calculate the SHA-256 of your file using a sha command and provide it as the value. The command below works on Linux flavors:
shasum -a 256 index.html
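If you would rather compute the hash in code (for example to script the whole request), a small Python sketch like this produces the same hex digest expected in the x-amz-content-sha256 header; the file name is a placeholder:

import hashlib

def sha256_of_file(path):
    # Hash the file in chunks so large files don't have to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of_file("index.html"))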
A couple of other observations on the question:
You can change the Body to binary and choose the file you want to upload.
Provide the complete path, including the file name, in the upload URL. E.g. if you provide the URL as <your bucket name>.s3.<region>.amazonaws.com/test/index.html, the file will be copied to the test directory in the bucket with the name index.html.
I encountered this situation recently, and the issue was that I was copying an active log file that changed between when my side calculated the hash and when the file was actually uploaded. My solution was to copy the file to a temporary location, then upload that stable copy.

Why did a Transfer in GCP fail on a CSV file, and where is the error log?

I am testing out the transfer function in GCP:
This is the open data in csv, https://www.stats.govt.nz/assets/Uploads/Annual-enterprise-survey/Annual-enterprise-survey-2018-financial-year-provisional/Download-data/annual-enterprise-survey-2018-financial-year-provisional-csv.csv
My configuration in GCP:
The transfer failed as below:
Question 1: why the transfer failed?
Question 2: where is the error log?
Thank you very much.
[UPDATE]:
I checked the log history; nothing was captured:
[Update 2]:
Error details:
Details: First line in URL list must be TsvHttpData-1.0 but it is: Year,Industry_aggregation_NZSIOC,Industry_code_NZSIOC,Industry_name_NZSIOC,Units,Variable_code,Variable_name,Variable_category,Value,Industry_code_ANZSIC06
I noticed that in the Transfer Service, if you choose the third option for the source, it reads the URL of a TSV file. Essentially, TSV and PSV are just variants of CSV, and I have no problem retrieving the source CSV file. The error details seem to imply that something unexpected is there.
The problem is that in your example, you are pointing at a data file as the source of the transfer. If we read the documentation on GCS Transfer, we find that we must specify a file which contains the identities of the URLs that we want to copy.
The format of this file is Tab-Separated Values (TSV), and it contains a number of parameters, including:
The URL of the source of the file.
The size in bytes of the source file.
An MD5 hash of the content of the source file.
What you specified (just the URL of the source file) ... is not what is required.
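As an illustration (the values below are placeholders, not taken from the question), a URL list file for this transfer would look roughly like the following: the first line identifies the format, and each subsequent tab-separated line gives the URL of a source object, optionally followed by its size in bytes and its base64-encoded MD5:

TsvHttpData-1.0
https://[HOST]/[PATH]/annual-enterprise-survey-2018-financial-year-provisional-csv.csv	[SIZE_IN_BYTES]	[BASE64_MD5]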
One possible solution would be to use gsutil. It has an option of taking a stream as input and writing that stream to a given object. For example:
curl http://[URL]/[PATH] | gsutil cp - gs://[BUCKET]/[OBJECT]
References:
Creating a URL list
Can I upload files to google cloud storage from url?

Boto3: Wait for the whole zip file to be uploaded to Lambda

I was trying to create a command-line tool to upload zip files to Lambda. boto3 provides a direct zip upload function, but the response is instantaneous; it doesn't wait until the zip file has been completely uploaded to Lambda. So is there any way to find out the status of the file upload, i.e. when it finished and whether the upload was successful?
The boto3 function code looks like this:
import boto3

client = boto3.client('lambda')

def run_lambda_upload():
    # Point Lambda at a zip that already sits in S3.
    response = client.update_function_code(
        FunctionName='test_command_line_zip_upload',
        S3Bucket='test',
        S3Key='test_lambda.zip',
        Publish=True
    )
    print(response)

run_lambda_upload()
It would be helpful if someone could provide guidance on this.
What you are doing is updating the function code from the S3 bucket 'test'; it doesn't pick up the file locally from your system, which is why the response is instantaneous. To upload a local file, you need to use the ZipFile parameter and pass the contents of the zip file to it.
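A minimal sketch of that approach (the function name and zip path are placeholders; the waiter call assumes boto3's Lambda function_updated waiter, which blocks until the code update has finished processing):

import boto3

client = boto3.client('lambda')

def run_lambda_upload():
    # Read the local zip and send its bytes in the ZipFile parameter.
    with open('test_lambda.zip', 'rb') as f:
        response = client.update_function_code(
            FunctionName='test_command_line_zip_upload',
            ZipFile=f.read(),
            Publish=True
        )
    # Block until Lambda reports the code update as complete.
    waiter = client.get_waiter('function_updated')
    waiter.wait(FunctionName='test_command_line_zip_upload')
    print(response)

run_lambda_upload()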

Amazon S3 PUT range header

I am currently using the Range header for GET requests on Amazon S3, but I can't find an equivalent for PUT requests.
Do I have to upload the entire file again or can I specify where in the file I want to update? Thanks
You need to upload it again. S3 does not have a concept of appending to or editing a file.
However, if it's a large file, you can do something called a "Multipart Upload" and send several pieces of the file, which are merged back together on the AWS side:
http://docs.amazonwebservices.com/AmazonS3/latest/dev/uploadobjusingmpu.html
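For illustration only (not from the original answer), a bare-bones multipart upload with boto3 looks roughly like this; the bucket, key, and file name are placeholders, and every part except the last must be at least 5 MB:

import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "big-file.bin"
part_size = 5 * 1024 * 1024

# Start the multipart upload and remember its id.
mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts = []

with open("big-file.bin", "rb") as f:
    part_number = 1
    while True:
        chunk = f.read(part_size)
        if not chunk:
            break
        result = s3.upload_part(
            Bucket=bucket, Key=key,
            UploadId=mpu["UploadId"],
            PartNumber=part_number,
            Body=chunk,
        )
        parts.append({"PartNumber": part_number, "ETag": result["ETag"]})
        part_number += 1

# Ask S3 to merge the uploaded pieces back into a single object.
s3.complete_multipart_upload(
    Bucket=bucket, Key=key,
    UploadId=mpu["UploadId"],
    MultipartUpload={"Parts": parts},
)

In practice, high-level helpers such as boto3's upload_file manage the parts for you automatically on large files.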