How to upload file to lambda function or API Gateway? - amazon-web-services

I'm trying to upload a file from iOS to AWS API Gateway and pass it through to a Lambda function. How can I implement this scenario?
I can use multipart/form-data to upload to AWS API Gateway, but how do I make the input model support binary data?
[Edit1] moved from answer by Spektre
Thanks for the response. After a little reading I figured out that there's no way to upload a file to Lambda directly (which makes sense, since it's event based), and that the only valid use case is to upload to S3 and have S3 notify Lambda.

I'd highly recommend a direct S3 upload using one of the AWS SDKs. AWS Lambda is best suited for processing events, not content transfers such as uploads. Check its billing and limits to make a more informed decision about whether it's really what you're looking for.

API Gateway has added support for an S3 Proxy. This allows you to expose file uploading directly to S3.
http://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html

If you want to upload a file through Lambda, one way is to open your AWS API Gateway console.
Go to
"API" -> {YourAPI} -> "Settings"
There you will find "Binary Media Types" section.
Add the following media type:
multipart/form-data
Save your changes.
Then go to "Resources" -> "proxy method" (e.g. "ANY") -> "Method Request" -> "HTTP Request Headers" and add the headers "Content-Type" and "Accept".
Finally, deploy your API.
For more info visit: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-console.html
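The console steps above can also be scripted. Here is a minimal sketch using the AWS CLI; the API id `abc123` and stage name `prod` are placeholders, and `~1` is the JSON-Pointer escape for the `/` inside `multipart/form-data`:

```shell
# Hypothetical API id; replace with your own.
REST_API_ID="abc123"
# API Gateway patch paths escape "/" inside a value as "~1",
# so multipart/form-data becomes multipart~1form-data.
MEDIA_TYPE_PATH="/binaryMediaTypes/multipart~1form-data"

if command -v aws >/dev/null 2>&1; then
  # Register multipart/form-data as a binary media type.
  aws apigateway update-rest-api \
    --rest-api-id "$REST_API_ID" \
    --patch-operations op=add,path="$MEDIA_TYPE_PATH"

  # A redeploy is required before the change takes effect.
  aws apigateway create-deployment \
    --rest-api-id "$REST_API_ID" \
    --stage-name prod
fi
```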

Related

AWS API Gateway POST Request for daily data load

I am totally new to REST APIs, so pardon the newbie mistakes.
My requirement is:
The source database people want to send JSON data on an hourly basis to an API endpoint that I publish. I am not sure what I need to build to make sure this happens seamlessly. My target is to receive the data, create CSV files, and save them in AWS S3 for further downstream processing.
My plan is to create an AWS API Gateway endpoint that accepts POST requests; whenever anyone sends data through POST, API Gateway triggers an AWS Lambda function that runs Python to parse the JSON data to CSV and store it in AWS S3. Is this idea valid? What am I missing? Are there best practices that need to be implemented?
This architecture seems to be what you want to do.
You want to make sure that your API is secured with a key or via Cognito (more complex) and that your Lambda has the IAM permissions needed to access your bucket.
This post will help you understand the Lambda blueprint that is triggered when an object is uploaded to S3. Just change the Lambda trigger, tweak the Python code a little, and you're done.
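The handler described in the plan above can be sketched roughly as follows. The bucket name and key prefix are hypothetical, and the JSON-to-CSV step is kept as a separate function so it can be exercised without AWS access (boto3 is imported lazily for the same reason):

```python
import csv
import io
import json
from datetime import datetime, timezone

BUCKET = "my-target-bucket"  # hypothetical bucket name


def json_to_csv(records):
    """Flatten a list of flat JSON objects into CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()


def handler(event, context):
    # With a Lambda proxy integration, API Gateway delivers the POST
    # body as a string in event["body"].
    records = json.loads(event["body"])
    csv_text = json_to_csv(records)

    key = "loads/%s.csv" % datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    import boto3  # lazy import: keeps the pure parts testable offline
    boto3.client("s3").put_object(Bucket=BUCKET, Key=key, Body=csv_text)
    return {"statusCode": 200, "body": json.dumps({"key": key})}
```

The Lambda's execution role would need `s3:PutObject` on the target bucket, per the IAM note above.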
Yes, this is a simple, typical serverless stack and it works perfectly fine.
Additionally, you may also want to focus on authentication for the API Gateway endpoint to make it secure.

AWS S3 Multipart Upload via API Gateway or Lambda

I'm trying to create a reusable large-file serverless upload service in AWS (we host a number of sites). What I would like to do is to set up an API Gateway in AWS and use CORS to control which sites can upload, allowing the sites to use client-side code. Here is what I've tried and the roadblocks I've run into. Wondering if anybody has any suggested workarounds?
Calling S3 directly from client-side code would require me to expose authentication information in client-side land, which seems bad
API Gateway does not appear to support calling S3 multipart through its AWS Service integration type (the URL is fixed to the generic S3 service URL, and IAM isn't supported in the HTTP integration type)
Leveraging Lambda to call the multipart API won't work, because Lambda can only take 6 MB of invoke request payload, and to reach the 5 MB minimum upload part size, base64 encoding pushes the data well over 6 MB
I could implement my own partial-upload functionality in Lambda, storing the chunks in S3, but I can't figure out how to merge them together within Lambda's memory and tmp storage limits (and PassThrough streams do not appear to work with the AWS SDK)
Any ideas? Is any of these worth digging into? Or is serverless a no-go for this use case?
So, after further follow-up with Amazon, it's sort-of possible to use pre-signed URLs with the multipart API, but it's not very practical. Steps involved would include the following:
Create a new file, and split it into parts.
Generate a presigned URL to initiate the multipart upload.
Use the presigned URL to initiate the upload.
Generate a presigned URL for each part, using a part number.
Use the URLs to send the PutPart requests. Keep track of the ETag that is returned for each part number.
Combine all of the part numbers and corresponding ETags to form the request body.
Generate a presigned URL to complete the multipart upload.
Complete the multipart upload by sending the request with the presigned complete multipart upload URL.
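A rough server-side sketch of those steps using boto3 (bucket and key names are whatever your service uses). The XML builder covers the "combine part numbers and ETags" step, producing the CompleteMultipartUpload request body; the presigning helper imports boto3 lazily since it needs AWS credentials:

```python
import xml.etree.ElementTree as ET


def complete_body(parts):
    """Build the CompleteMultipartUpload XML body from a list of
    (part_number, etag) pairs collected from the PutPart responses."""
    root = ET.Element("CompleteMultipartUpload")
    for number, etag in parts:
        part = ET.SubElement(root, "Part")
        ET.SubElement(part, "PartNumber").text = str(number)
        ET.SubElement(part, "ETag").text = etag
    return ET.tostring(root, encoding="unicode")


def presign_multipart(bucket, key, part_count, expires=3600):
    """Initiate a multipart upload and presign one URL per part,
    plus a URL for the final complete call."""
    import boto3  # lazy import: only needed when actually presigning
    s3 = boto3.client("s3")
    upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]
    part_urls = [
        s3.generate_presigned_url(
            "upload_part",
            Params={"Bucket": bucket, "Key": key,
                    "UploadId": upload_id, "PartNumber": n},
            ExpiresIn=expires,
        )
        for n in range(1, part_count + 1)
    ]
    complete_url = s3.generate_presigned_url(
        "complete_multipart_upload",
        Params={"Bucket": bucket, "Key": key, "UploadId": upload_id},
        ExpiresIn=expires,
    )
    return upload_id, part_urls, complete_url
```

The client then PUTs each chunk to its part URL, records the returned ETags, and POSTs `complete_body(...)` to the complete URL.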
I will accept Angelo's answer since it pointed in this direction, which technically seems possible.
You might be able to use presigned URLs for the upload. In this case the client would hit your API, which would do whatever validation is necessary and then generate a presigned URL to S3 that is returned to the client. The client then uploads directly to S3.
You can see some information here: https://sanderknape.com/2017/08/using-pre-signed-urls-upload-file-private-s3-bucket/

upload binary from api gateway to S3 bucket

I was trying to create a REST API which can take a zip file as input (PUT request) and store it on S3.
I'm following the tutorial on http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-content-encodings-examples-image-s3.html
I'm getting a 500 error and the CloudWatch logs are as follows:
Verifying Usage Plan for request:
c2140431-1a10-11e7-9f32-0df3853848fe. API Key: API Stage:
xjjd186a30/rd
API Key authorized because method 'PUT /s3' does not require API Key.
Request will not contribute to throttle or quota limits
Usage Plan check succeeded for API Key and API Stage xjjd186a30/rd
Starting execution for request: c2140431-1a10-11e7-9f32-0df3853848fe
HTTP Method: PUT, Resource Path: /s3
Successfully completed execution
Method completed with status: 500
when i try the api from post man i get
AccessDenied: Access Denied
(RequestId: F55D45C185A5BF11, HostId: HXopfmxAxGNvmdi7PRp4c1j/wPYmGVTrkKbGXfZwofLOn7TRBPs3uFjer/2UCIktynKtGeNU1Xw=)
For my roles I have granted the AmazonS3FullAccess permission and assigned the role in the API Gateway settings and the integration request.
Can anyone help, please?
It looks like you are attempting to put to the bucket named rest.
Is that the correct bucket?
This documentation will probably be a little more helpful for you:
Integrating API with AWS S3
In the example used in the documentation, a bucket and an object are provided in the path override for the PUT item method. These are mapped from the path parameters folder and item.
Here is a helpful screenshot:
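For reference, the integration settings from that walkthrough look roughly like this (the account id and role name are hypothetical):

```
Integration type:    AWS Service
AWS Service:         S3
HTTP method:         PUT
Path override:       {bucket}/{item}
Execution role:      arn:aws:iam::123456789012:role/apigw-s3-put

URL path parameters:
  bucket  <- method.request.path.folder
  item    <- method.request.path.item
```

A PUT to `/myBucket/myFile.zip` on the API then becomes a PUT of the request body to that bucket and key in S3.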
If you want to upload binary files such as mp3, audio, documents etc., you can add an entry with the value multipart/form-data under Binary Support in the AWS API Gateway settings and POST/PUT the binary file using the header Content-Type = multipart/form-data from Postman or an API client. It should work!
[screenshot: API Gateway binary support settings]

Upload file to s3 with custom response to client

I am trying to upload a file to s3 and then have lambda generate id, date.
I then want to return this data back to the client.
I want to avoid generating id and date on the client for security reasons.
Currently, I am trying to use API Gateway to invoke a Lambda that uploads to S3. However, I am having problems setting this up, and I know that this is not a preferred method.
Is there another way to do this without writing my own web server? (I would like to use Lambda.)
If not, how can I configure my API Gateway method to support file upload to lambda?
You have a couple of options here:
Use API Gateway as an AWS Service Proxy to S3
Use API Gateway to invoke a Lambda function, which uses the AWS SDK to upload to S3
In either case, you will need to base64 encode the file content before calling API Gateway, and POST it in the request body.
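As a sketch of the second option (a Lambda proxy integration; the bucket and key names are hypothetical), the handler mainly has to undo the base64 encoding before writing to S3 and can then return whatever custom response the client needs:

```python
import base64
import json

BUCKET = "my-upload-bucket"  # hypothetical bucket name


def decode_body(event):
    """API Gateway delivers binary bodies base64-encoded; recover raw bytes."""
    body = event["body"]
    if event.get("isBase64Encoded"):
        return base64.b64decode(body)
    return body.encode("utf-8")


def handler(event, context):
    data = decode_body(event)
    import boto3  # lazy import: keeps decode_body testable offline
    boto3.client("s3").put_object(
        Bucket=BUCKET, Key="uploads/incoming.bin", Body=data
    )
    # The custom payload (e.g. a generated id and date) goes back
    # to the client in the response body.
    return {"statusCode": 200, "body": json.dumps({"size": len(data)})}
```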
We don't currently have any documentation on this exact use case but I would refer you to the S3 API and AWS SDK docs for more information. If you have any specific questions we'd be glad to help.
Thanks,
Ryan

AWS Gateway API and file response

Is it possible for AWS API Gateway to respond with a file (a zip file) from an HTTP endpoint integration? I heard somewhere that API Gateway doesn't support binary formats, but I wasn't sure whether that applied to input only, or to both input and output.
I have an existing HTTP endpoint that currently returns a zip file in the response, and I want to put AWS API Gateway in front of it.
You cannot respond with a zip (or any binary) file using API Gateway so far, as stated in the official AWS forum.
As a workaround, you can store your file on S3 and return a link to the file via API Gateway.
Binary payloads are not yet natively supported as API Gateway currently encodes content as UTF-8. For the purposes of serving files, serving them via S3 may be an appropriate workaround. You could configure your API to return a link to S3 or to redirect to the public S3 URL.
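A minimal sketch of the redirect workaround (the bucket and key are hypothetical, and the presigned URL expires after five minutes):

```python
def redirect_response(url):
    """Lambda proxy response that redirects the caller to the S3 URL."""
    return {
        "statusCode": 302,
        "headers": {"Location": url},
        "body": "",
    }


def handler(event, context):
    import boto3  # lazy import: presigning needs AWS credentials
    url = boto3.client("s3").generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-archive-bucket", "Key": "exports/report.zip"},
        ExpiresIn=300,
    )
    return redirect_response(url)
```

The client's HTTP library follows the 302 and downloads the zip directly from S3, bypassing API Gateway's payload handling entirely.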
Thanks,
Ryan