Bit of an odd case: my API is sent a presigned URL to write to, and a presigned URL to download the file from.
The problem is that if they send a very large file, the presigned URL we need to write to can expire before we get to that step (some processing happens between the read and the write).
Is it possible to 'open' the connection for writing early to make sure it doesn't expire, and then start writing once the earlier processing is done? Or maybe there is a better way of handling this.
The order goes:
1. Receive an API request with a downloadUrl and an uploadUrl
2. Download the file
3. Process the file
4. Upload the file to the uploadUrl
TL;DR: How can I ensure the URL for step 4 doesn't expire before I get to it?
When generating the pre-signed URL, you have complete control over the time duration. For example, this Java code shows how to set the time when creating a GetObjectPresignRequest object:
GetObjectPresignRequest getObjectPresignRequest = GetObjectPresignRequest.builder()
        .signatureDuration(Duration.ofMinutes(10))
        .getObjectRequest(getObjectRequest)
        .build();
So you can increase the time limit in such situations.
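If the URLs are generated by the caller and you can't increase the duration yourself, a defensive alternative is to check how much validity is left before starting the slow processing step, and fail fast (or ask the caller for a fresh URL) when it isn't enough. A minimal sketch, assuming a V2-style query-string-authenticated URL whose Expires parameter is epoch seconds (SigV4 URLs carry X-Amz-Date and X-Amz-Expires instead); the URL, class, and method names here are made up for illustration:

```java
import java.net.URI;
import java.time.Instant;
import java.util.Optional;

public class PresignedUrlCheck {
    /** Extracts the epoch-seconds Expires query parameter, if present. */
    static Optional<Instant> expiresOf(String presignedUrl) {
        String query = URI.create(presignedUrl).getQuery();
        if (query == null) return Optional.empty();
        for (String pair : query.split("&")) {
            if (pair.startsWith("Expires=")) {
                return Optional.of(Instant.ofEpochSecond(
                        Long.parseLong(pair.substring("Expires=".length()))));
            }
        }
        return Optional.empty();
    }

    /** True if the URL stays valid for at least marginSeconds more seconds. */
    static boolean stillValid(String presignedUrl, Instant now, long marginSeconds) {
        return expiresOf(presignedUrl)
                .map(exp -> now.plusSeconds(marginSeconds).isBefore(exp))
                .orElse(false);  // no Expires parameter: assume we can't rely on it
    }

    public static void main(String[] args) {
        // hypothetical presigned URL with an epoch-seconds Expires parameter
        String url = "https://s3.amazonaws.com/mybucket/file.png"
                + "?AWSAccessKeyId=AKIAEXAMPLE&Expires=1513287500";
        System.out.println(stillValid(url, Instant.ofEpochSecond(1513287000), 60)); // true: 500s left
        System.out.println(stillValid(url, Instant.ofEpochSecond(1513287490), 60)); // false: only 10s left
    }
}
```

Checking before step 3 rather than step 4 lets you skip the expensive processing entirely when the upload would fail anyway.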
I'm building an Alexa skill that sends a request to my web server;
the web server then does some processing and uploads a file to Amazon S3.
While the web server is processing, I have the skill poll Amazon S3 every 10 seconds until it gets the file, and the response is based on the file's content.
But unfortunately, the web server's processing takes more than a minute, which means the skill has to wait more than a minute before it can get the file and respond.
For now I'm using a progressive response with async/await in my code,
and the skill does keep waiting for the file on S3.
But I found that the skill automatically sends a second request to Lambda after 50 seconds, so for the same skill I end up with two Lambda functions running at the same time.
The result: after the first progressive response plays, 50 seconds later I hear another response, also made by the progressive response, belonging to the second request.
And then nothing happens until the end.
I know it's bad to make the skill wait this long, but I still want to figure out a workable approach if the skill has to wait this long.
There are two points I want to figure out:
Is there any way to prevent the skill from sending the second request to Lambda?
Is there another way I can try to accomplish the goal?
Thanks
Eventually, I found that the second invocation of Lambda is not from Alexa; it is from AWS Lambda itself. Refer to the following article:
https://cloudonaut.io/your-lambda-function-might-execute-twice-deal-with-it/
So you have to deal with this kind of situation in your Lambda code. One thing you can use is that the two invocations share the same request id, so you can tell whether this is the first execution by checking your storage for the request id you stored during the first execution.
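A minimal sketch of that request-id check; note that ConcurrentHashMap stands in here for the durable storage (e.g. a DynamoDB table) a real Lambda would need, since a static field only survives within one warm container:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class DedupeByRequestId {
    // Stand-in for durable storage shared across invocations (e.g. DynamoDB);
    // in-memory state is only illustrative for a real Lambda deployment.
    private static final Set<String> seenRequestIds = ConcurrentHashMap.newKeySet();

    /** Returns true only the first time a given request id is seen. */
    static boolean firstExecution(String requestId) {
        // add() is atomic: it returns false if the id was already present,
        // so a duplicate invocation with the same request id is detected and skipped
        return seenRequestIds.add(requestId);
    }

    public static void main(String[] args) {
        System.out.println(firstExecution("req-123")); // true: first delivery, do the work
        System.out.println(firstExecution("req-123")); // false: duplicate delivery, skip it
    }
}
```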
Besides, I also found that once the Alexa skill waits for more than a minute, it crashes and speaks an error (tested with an Amazon Echo), yet there is nothing different in the AWS Lambda log compared to a normal execution. So the log looks fine even though the actual result is not.
Hope this can help someone who is also struggling with this problem.
I've set up an AWS API which obtains a pre-signed URL for uploading to an AWS S3 bucket.
The pre-signed url has a format like
https://s3.amazonaws.com/mahbukkit/background4.png?AWSAccessKeyId=someaccesskeyQ&Expires=1513287500&x-amz-security-token=somereallylongtokenvalue
where background4.png is the file I'm uploading.
I can successfully use this URL through Postman by:
configuring it as a PUT call,
setting the body to Binary so I can select the file,
setting the header to Content-Type: image/png
HOWEVER, I'm trying to make this call using BrightScript running on a BrightSign player. I'm pretty sure I'm supposed to be using the roUrlTransfer object and PutFromFile function described in this documentation:
http://docs.brightsign.biz/display/DOC/roUrlTransfer
Unfortunately, I can't find any good working examples showing how to do this.
Could anyone who has experience with BrightScript help me out? I'd really appreciate it.
You are on the right track.
I would do:
sub main()
    tr = CreateObject("roUrlTransfer")
    tr.SetUrl(<presignedUrl>) ' the full pre-signed URL, including its query string
    headers = {}
    headers.AddReplace("Content-Type", "image/png")
    tr.AddHeaders(headers)
    info = {}
    info.method = "PUT"
    info.request_body_file = <fileName>
    if tr.AsyncMethod(info)
        print "File put started"
    else
        print "File put did not start"
    end if
    delay(100000)
end sub
Note that I have used two different methods to populate the two associative arrays. You need to use the AddReplace method (rather than the dot shortcut) when the key contains special characters like '-'.
This script should work, though I don't have a unit on hand to do a syntax check.
Also, you should set up a message port and listen for the event that is generated, to confirm whether the PUT was successful and/or what the response code was.
Note that when you read responses from URL events, if the response code from the server is anything other than 200, the BrightSign will discard the response body and you cannot read it. This is unhelpful, because services like Dropbox like to return a 400 response with more information about what went wrong (bad API key, etc.) in the body, so in that case you are left doing trial and error in the dark to figure out what was wrong.
Good luck, and sorry I didn't see this question sooner.
While uploading a file to S3, we intermittently get this error message for one particular case:
"If the request involves an input stream, the maximum stream buffer size can be configured via request.getRequestClientOptions().setReadLimit(int)"
Source: https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/AmazonS3Client.java
As per AWS SDK for Java 1.8.10, the maximum stream buffer size can be configured per request via
request.getRequestClientOptions().setReadLimit(int)
We are using a com.amazonaws.services.s3.AmazonS3 object to upload data.
Can anyone suggest how we can set the read limit via com.amazonaws.services.s3.AmazonS3?
https://aws.amazon.com/releasenotes/0167195602185387
It sounds like you're uploading data from an InputStream, but some sort of transient error is interrupting the upload. The SDK isn't able to retry the request because InputStreams are not mark/resettable by default. The error message is trying to give guidance on buffer sizes, but for large data you probably don't want to load it all into memory anyway.
If you're able to upload from a File source, you shouldn't see this error again: because a File can always be re-read from the beginning, the SDK can retry your request if the first attempt fails.
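You can see the underlying mark/reset mechanics with nothing but the standard library; the limit passed to mark() is, as I understand it, what setReadLimit(int) feeds into. A sketch (class and method names are mine):

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadLimitDemo {
    /**
     * Marks the stream with the given read limit, consumes toRead bytes, then
     * tries to reset() -- the rewind a retrying client must perform.
     * Returns true if the rewind succeeded.
     */
    static boolean canRewindAfterReading(int bufferSize, int readLimit, int toRead) {
        InputStream in = new BufferedInputStream(
                new ByteArrayInputStream(new byte[4096]), bufferSize);
        try {
            in.mark(readLimit);     // remember this position, valid for readLimit bytes
            in.readNBytes(toRead);  // simulate (part of) the first upload attempt
            in.reset();             // rewind so the bytes could be re-sent
            return true;
        } catch (IOException invalidMark) {
            // reading past the read limit invalidated the mark: no retry possible
            return false;
        }
    }

    public static void main(String[] args) {
        // Staying within the read limit: the stream can be replayed for a retry.
        System.out.println(canRewindAfterReading(2048, 2048, 1024)); // true

        // Reading past the read limit invalidates the mark, so the stream
        // cannot be rewound -- the situation the SDK's error message warns about.
        System.out.println(canRewindAfterReading(16, 16, 1024));     // false
    }
}
```

This is why a read limit larger than the request body makes retries work, and why buffering the whole body in memory is the price you pay for it.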
A little bit of necroing, but: you need to create a PutObjectRequest and call setReadLimit on that:
PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, key, fileInputStream, objectMetadata);
putObjectRequest.getRequestClientOptions().setReadLimit(xxx);
s3Client.putObject(putObjectRequest);
If you look at the implementation of putObject(String, String, InputStream, ObjectMetadata), you can see that it just creates a PutObjectRequest and passes that to putObject(PutObjectRequest).
I have an application with very short-lived (5 s) access tokens and a paranoid client, and some of their users access the S3-stored files over mobile connections, so latency can be quite high.
I've noticed that Amazon forcibly sends the Accept-Ranges header on all responses, and I'd like to disable it for the files in question, so that a client always downloads the entire file up front instead of downloading it in chunks.
The main offender I've noticed is Chrome's built-in PDF viewer. It starts viewing the PDF and gets a 200 response, then reconnects and downloads the file in two chunks with 206 responses. If Chrome is too slow to start downloading all the chunks before the access token expires, it keeps spamming requests at S3 (600+ requests by the time I closed the window).
I've tried setting the header in the S3 console, but although it reports a successful save, the value is cleared instantly. I also tried setting the header via the signed request, as you can for Content-Disposition for example, but S3 ignored the passed-in header.
Or is there any other way to force a client to download the entire file at once?
Seems like it's not possible. I made the token expire later in the hope that it would take care of most cases.
But in case that doesn't make the client happy, I will try to proxy the files locally and strip the headers I don't like, following this guide: https://coderwall.com/p/rlguog.
I am using the TemporaryFileUploadHandler to upload files. If a user is uploading a large file and cancels the upload, the file remains in my temporary directory.
Is there a way to trap a cancelled upload (connection reset before a file was fully uploaded) in order to cleanup these files?
The only alternative I can think of is a cron job which looks at the temp directory and deletes files which have not been updated in some reasonable amount of time.
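That cron-job sweep would amount to something like the following (sketched in Java just to show the logic; the directory layout and the 30-minute threshold are made up):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class StaleUploadSweep {
    /**
     * Deletes regular files under dir whose last-modified time is before cutoff,
     * i.e. abandoned partial uploads. Returns how many files were removed.
     */
    static int sweep(Path dir, Instant cutoff) throws IOException {
        int removed = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir)) {
            for (Path f : files) {
                if (Files.isRegularFile(f)
                        && Files.getLastModifiedTime(f).toInstant().isBefore(cutoff)) {
                    Files.delete(f);
                    removed++;
                }
            }
        }
        return removed;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("uploads");
        Path stale = Files.createTempFile(tmp, "upload", ".tmp");
        // pretend this partial upload was abandoned an hour ago
        Files.setLastModifiedTime(stale, FileTime.from(
                Instant.now().minus(1, ChronoUnit.HOURS)));
        Path fresh = Files.createTempFile(tmp, "upload", ".tmp");

        // anything untouched for 30+ minutes is considered abandoned
        int removed = sweep(tmp, Instant.now().minus(30, ChronoUnit.MINUTES));
        System.out.println(removed);             // 1: the stale file is gone
        System.out.println(Files.exists(fresh)); // true: the in-progress file survives
    }
}
```

The age threshold just has to comfortably exceed the longest legitimate upload, so an in-progress file is never swept.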
Not sure if it helps, but you may try connecting to Django's request signals:
request_finished - Sent when Django finishes processing an HTTP request.
got_request_exception - This signal is sent whenever Django encounters an exception while processing an incoming HTTP request.
I think Django should raise an error if the connection is aborted, so using the second one is probably the solution. Please let me know if it helps.