I am trying to write an application that continuously uploads large data (multipart uploads) to Amazon's S3 storage. However, my application needs to be able to shut down mid-transfer and pick up from where it left off the next time it's restarted.
From playing around a little bit with the C++ SDK, the TransferManager class provides a RetryUpload function that requires a shared pointer to the TransferHandle object returned when the initial UploadFile call is issued. However, that transfer handle object will no longer exist if the application crashes or has to shut down mid-operation.
In such a case, is it possible to resume a multipart upload using the TransferManager class? In effect, this probably requires reconstructing the transfer handle object, which I am not quite sure how to do. It seems that the TransferManager class is just a nice wrapper around the S3Client, which looks clearer about how to resume the operation, but also more painful to use for general multipart uploading.
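To make this concrete, here is the kind of low-level resume flow I have in mind, sketched with boto3 only because it is compact; as far as I can tell the C++ S3Client exposes the same ListParts, UploadPart, and CompleteMultipartUpload operations. The bucket, key, local file path, persisted UploadId, and part size below are placeholders.

    # Sketch of resuming an interrupted multipart upload with the low-level
    # S3 API. The UploadId is assumed to have been persisted somewhere
    # durable when the upload was first started.
    import boto3

    s3 = boto3.client("s3")

    BUCKET = "my-bucket"            # placeholder
    KEY = "big-file.bin"            # placeholder
    LOCAL_FILE = "big-file.bin"     # placeholder
    UPLOAD_ID = "persisted-id"      # UploadId saved before the shutdown
    PART_SIZE = 8 * 1024 * 1024     # must match the part size used originally

    # 1. Ask S3 which parts it already has for this upload.
    uploaded = {}
    paginator = s3.get_paginator("list_parts")
    for page in paginator.paginate(Bucket=BUCKET, Key=KEY, UploadId=UPLOAD_ID):
        for part in page.get("Parts", []):
            uploaded[part["PartNumber"]] = part["ETag"]

    # 2. Walk the file again and upload only the parts that are missing.
    parts = []
    with open(LOCAL_FILE, "rb") as f:
        part_number = 1
        while True:
            chunk = f.read(PART_SIZE)
            if not chunk:
                break
            if part_number in uploaded:
                etag = uploaded[part_number]        # already on S3, skip upload
            else:
                resp = s3.upload_part(
                    Bucket=BUCKET, Key=KEY, UploadId=UPLOAD_ID,
                    PartNumber=part_number, Body=chunk,
                )
                etag = resp["ETag"]
            parts.append({"PartNumber": part_number, "ETag": etag})
            part_number += 1

    # 3. Complete the upload once every part is accounted for.
    s3.complete_multipart_upload(
        Bucket=BUCKET, Key=KEY, UploadId=UPLOAD_ID,
        MultipartUpload={"Parts": parts},
    )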
Related
I'm a little confused since AWS has a lot of features and I don't know which to use.
So, I was creating a Lambda function that does a lot of work against a remote website; the process could take at least a minute.
My idea was to create an API that calls this Lambda, have the Lambda create a unique ID, return a response right away to the client with a token, and save this token to a DB.
Then have the Lambda process all this work against the remote website and, when it finishes, save the results to the DB and to a bucket (a file), so the result is ready to deliver when the client makes another call to a second API that queries the DB for the status of this process.
The thing is, it seems that once a response is sent from the handler, Lambda terminates the execution, so I'm afraid the processing against the remote website will never finish.
I have read that Step Functions is the way to go, but I can't figure out which service would take over the processing. Maybe another Lambda?
Is there another service that is more suitable for this type of work? The process involves scraping a page and downloading files, and it is written in Python.
I have read that Step Functions is the way to go, but I can't figure out which service would take over the processing. Maybe another Lambda?
Yes, another Lambda function would do the background processing. You could also just have the first Lambda invoke a second Lambda directly, without using Step Functions. Or the first Lambda could place the task info in an SQS queue that another Lambda polls.
Is there another service that is more suitable for this type of work?
Lambda functions are fine for this. You just need to split your task into multiple functions somehow: either by having one Lambda invoke the other directly, by using Step Functions, or by putting the work on an SQS queue.
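As a rough illustration of the direct approach (the worker function name, payload shape, and status handling below are placeholders, not anything from your setup), the first Lambda can invoke the worker asynchronously with InvocationType="Event", which returns immediately while the worker runs in the background:

    # Sketch: the API-facing Lambda hands the long-running work to a second
    # Lambda asynchronously and returns a token right away.
    import json
    import uuid

    import boto3

    lambda_client = boto3.client("lambda")


    def handler(event, context):
        token = str(uuid.uuid4())

        # Fire-and-forget: InvocationType="Event" returns immediately and the
        # worker function keeps running in the background.
        lambda_client.invoke(
            FunctionName="process-task",      # hypothetical worker function
            InvocationType="Event",
            Payload=json.dumps({"token": token, "request": event}),
        )

        # Persist the token's "pending" status here (DynamoDB, etc.) and
        # return it to the client right away.
        return {
            "statusCode": 202,
            "body": json.dumps({"token": token, "status": "PENDING"}),
        }

The client then polls your second API with the token until the worker has written its results to the DB/bucket.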
I'm testing a new SWF workflow, and I've got an activity that makes a RESTful call out to another service. The problem is, I can see through logging that the actual call takes less than a second to complete, but the Activity always times out in SWF (START_TO_CLOSE of 5 mins). To be more specific, the RESTful call is a list call, and when I limit the batch size to a small number, the Activity completes and moves on very quickly. But at some seemingly arbitrary threshold, it chokes completely.
Does anyone have any insight into this? I've read that SWF calls have a size limitation of 1 MB; does anyone know how to find the size of the data my workers are trying to pass to SWF?
After some remote debugging, it turns out the response from the task is too big and the activity is failing silently. The failure occurs when the framework tries to report the response back to SWF, and the SDK calls RespondActivityTaskCompleted. That API has a length restriction on the internal result param:
Length Constraints: Maximum length of 32768.
This is a validation error: the exception it throws is swallowed internally, so nothing is reported and the Activity simply times out.
I wouldn't recommend using activity input and output parameters for passing large data sets. SWF is an orchestration technology, not a data-passing one. The standard workarounds are:
Storing the result in a separate store (S3, for example) and passing a reference to it (see the sketch after this list).
Caching the result locally on a machine and routing all subsequent activities to the same host so they can access the cached result. See the fileprocessing sample for the details of the routing approach.
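Here is a minimal sketch of the first workaround, shown with boto3 rather than the SWF Flow Framework, and with a hypothetical bucket name: the activity stores its large result in S3 and returns only a small JSON reference, which stays far below the 32,768-character limit.

    # Sketch of workaround 1: the activity uploads the large result to S3 and
    # returns only a tiny reference as its result. Bucket and key names are
    # illustrative placeholders.
    import json
    import uuid

    import boto3

    s3 = boto3.client("s3")
    RESULTS_BUCKET = "my-workflow-results"   # hypothetical bucket


    def run_activity(big_payload: dict) -> str:
        """Store the real result in S3; return a small reference string."""
        key = f"results/{uuid.uuid4()}.json"
        s3.put_object(
            Bucket=RESULTS_BUCKET,
            Key=key,
            Body=json.dumps(big_payload).encode("utf-8"),
        )
        return json.dumps({"bucket": RESULTS_BUCKET, "key": key})


    def next_activity(result_ref: str) -> dict:
        """A downstream activity resolves the reference back to the data."""
        ref = json.loads(result_ref)
        obj = s3.get_object(Bucket=ref["bucket"], Key=ref["key"])
        return json.loads(obj["Body"].read())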
BTW, have you checked out Cadence, which is an open-source version of SWF with much better client-side libraries?
I have a serverless backend running on Lambda. The runtime usually varies between 40-250s, which is over the API Gateway maximum allowed runtime (29s). As such, I think my only option is to resort to asynchronous processing. I get the idea behind it, but help online seems sparse, and I'd like to know if there are any best practices out there. Or what would be the simplest way for me to get around this timeout problem, whether with asynchronous processing or something else?
It really depends on your use case, but an asynchronous approach is probably the best fit for this scenario, given that it's not usually a good idea for the caller of your API to wait 250 seconds for a reply (which is probably why API Gateway has the 29s limit).
Asynchronous simply means that Lambda replies right away saying that it received the request and will work on it, but that the result will only be available later.
Then you change the logic on the client side, too, so that it checks back after some time, or polls in a loop, until the requested resource is ready.
Depending on what work needs to be done, you could create an S3 bucket on the fly and reply to the client with an S3 presigned URL. Your worker then uploads its results to that S3 bucket, and the client polls the bucket for the results until they are present.
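A minimal sketch of that hand-off, assuming a hypothetical bucket name and a one-hour expiry: the API-facing Lambda returns a presigned GET URL for a key that does not exist yet, the worker later uploads the result under that key, and the client polls the URL until it stops getting an error (404/403) and receives the object.

    # Sketch of the presigned-URL hand-off. Bucket name, key prefix, and
    # expiry are placeholders.
    import uuid

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-async-results"              # hypothetical bucket


    def start_request_handler(event, context):
        """API-facing Lambda: returns a presigned GET URL the client can poll."""
        key = f"results/{uuid.uuid4()}.json"
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": key},
            ExpiresIn=3600,                  # client has an hour to poll
        )
        # Kick off the background worker with the key, e.g. via an async
        # Lambda invoke or an SQS message (omitted here).
        return {"statusCode": 202, "body": url}


    def worker(key: str, result_bytes: bytes):
        """Background worker: once this object exists, the presigned URL
        starts returning 200 with the result instead of an error."""
        s3.put_object(Bucket=BUCKET, Key=key, Body=result_bytes)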
I'm currently implementing a C++ backend server using azure-storage-cpp to download blob files locally. Azure Storage Cpp works on top of cpprestsdk (casablanca), which provides parallel tasking.
The simple example from the documentation allows me to start a blob download. Fine. Now I'd like to know how I can cancel the download/task on demand.
I'm using this method to download into a file.
This method returns a pplx::task<void>. So my guess was I could use this to properly stop the download.
But the documentation for pplx::task constructor says:
The version of the constructor that takes a cancellation token creates a task that can be canceled using the cancellation_token_source the token was obtained from. Tasks created without a cancellation token are not cancelable.
Windows Azure Storage Cpp creates the task for us when calling download_to_file_async. So is there a way to cancel/stop a pplx::task created by Azure Storage Cpp?
If not, I think I'm going to use the REST API with libcurl.
I'm writing a server that regularly needs to change the format of the sent/received messages. When this happens, the server should send a notification that all future messages will have the new format, and keep reading everything it receives in the old format until the client sends its ack.
I thought about keeping a reference to the decoder, shared by all pipelines, and reconfiguring it from the outside as needed, but I'm worried about concurrency in this case.
How can I make sure that no writes are handled by the pipeline while I'm working on the decoder? And how can I be sure that the notification is the first message handled after the reconfiguration?
The only other way I see is to send a "notification" object through the pipeline (using channel.write), catch the object in the decoder, and do the reconfiguration there while forwarding the notification message. In this case there shouldn't be any concurrency issue in the pipeline.
Would this be the better/state-of-the-art way to do this?
I decided to use the second way: a StateHandler catches ConfigurationEvents and reconfigures the pipeline. Unfortunately, this means I can't be sure that all channels use the same configuration, because race conditions between the reconfiguration and extremely young channels can happen, but I'm pretty sure this won't matter in my case.