I'm downloading around 400 files asynchronously in my iOS app using Swift from my bucket in Amazon S3, but sometimes I get this error for several of these files. The maximum file size is around 4 MB, and the minimum is a few KB.
Error is Optional(Error Domain=NSURLErrorDomain Code=-1001 "The request timed out." UserInfo={NSUnderlyingError=0x600000451190 {Error Domain=kCFErrorDomainCFNetwork Code=-1001 "(null)" UserInfo={_kCFStreamErrorCodeKey=-2102, _kCFStreamErrorDomainKey=4}}, NSErrorFailingURLStringKey=https://s3.us-east-2.amazonaws.com/mybucket/folder/file.html, NSErrorFailingURLKey=https://s3.us-east-2.amazonaws.com/mybucket/folder/file.html, _kCFStreamErrorDomainKey=4, _kCFStreamErrorCodeKey=-2102, NSLocalizedDescription=The request timed out.})
How can I prevent it?
Try increasing the timeout:
let urlconfig = URLSessionConfiguration.default
urlconfig.timeoutIntervalForRequest = 300 // allow up to 300 seconds per request
let session = URLSession(configuration: urlconfig) // use this session for the downloads
Related
I have a requirement to read a folder in an S3 bucket, containing exactly 13.7 TB of data, into AWS Glue.
I used the code below to read all of that data from the folder:
datasource0 = glueContext.create_dynamic_frame_from_options(
    connection_type="s3",
    connection_options=s3_options,
    format="json"
)
Then, under Job Details, I set the Worker Type to G.2X with the requested number of workers set to 50, which is its maximum capacity of 100 DPUs.
But it ran for 13 hours trying to read the data and failed with the error below:
An error occurred while calling o94.getDynamicFrame. Cannot call methods on a stopped SparkContext. caused by Unable to execute HTTP request: Request did not complete before the request timeout configuration.
So, is it possible for AWS Glue to read data of this size from S3?
Thanks In Advance...
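For reference, Glue's S3 reader also supports documented file-grouping options aimed at exactly this kind of large, many-file read; the sketch below shows them, but the path and group size are placeholders and this was not something tried in this thread.
# Hedged sketch: the same read with Glue's documented S3 file-grouping options,
# which coalesce many small input files into larger read groups. The path and
# group size are placeholders; this was not tried in the thread above.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glueContext = GlueContext(SparkContext.getOrCreate())

s3_options = {
    "paths": ["s3://my-bucket/my-folder/"],   # placeholder path
    "recurse": True,
    "groupFiles": "inPartition",              # group input files while reading
    "groupSize": str(128 * 1024 * 1024)       # target ~128 MB per group, in bytes
}

datasource0 = glueContext.create_dynamic_frame_from_options(
    connection_type="s3",
    connection_options=s3_options,
    format="json"
)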
I'm having issues uploading files that take a while to transfer, using presigned PUT object URLs together with transfer acceleration. Sometimes it works and sometimes it doesn't.
I've performed the following tests using Java SDK 2.15.9 to generate the URLs and curl to upload. I'm uploading from Sweden to an S3 bucket located in the us-east-2 region, using transfer acceleration.
file size | url expire time | transfer speed | time to upload | status
20 MB     | 1 min           | 100 kB/s       | x              | failed after 1 min 23 sec with 403 Forbidden
42 MB     | 10 min          | 50 kB/s        | 13 min         | success
42 MB     | 10 min          | 10 kB/s        | x              | failed after 12 min with 403 Forbidden
42 MB     | 10 min          | 25 kB/s        | 28 min         | success
What is going on here? My first theory was that the expire time needed to be longer than the upload time. However, reading through the docs, it seems that the expire time is validated at the start of the request. Is that true when transfer acceleration is enabled as well? Also, an expire time of 10 minutes worked even though one upload took 28 minutes.
Do I need to set a longer expire time?
I used the following curl command:
curl -v -H "Content-Type: $contentType" --limit-rate $rateLimit --upload-file $file "$url"
Code to generate URL:
private val presigner by lazy {
    S3Presigner.builder()
        .credentialsProvider(DefaultCredentialsProvider.create())
        .region(s3Region)
        .serviceConfiguration(
            S3Configuration.builder()
                .accelerateModeEnabled(true)
                .checksumValidationEnabled(false)
                .build()
        )
        .build()
}

override fun run() {
    val url = presigner.presignPutObject { builder ->
        builder.putObjectRequest {
            it.bucket(s3Bucket)
            it.key(UUID.randomUUID().toString())
            it.contentType(contentType)
        }.signatureDuration(Duration.ofSeconds(expire))
    }.url()
    println(url)
}
I got a response from AWS support: CloudFront (which fronts the transfer acceleration endpoint) does not send the request to S3 until it has received the entire body. So the resolution is to increase the expire time of the upload URL.
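The same setup can be expressed as a short boto3 sketch (Python rather than the Java SDK v2 used above; the bucket, key, content type and expiry are placeholders):
# Hedged sketch in boto3: presign a PUT against the transfer acceleration
# endpoint with an expiry comfortably longer than the slowest expected upload.
# Bucket, key, content type and expiry are placeholders.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    region_name="us-east-2",
    config=Config(s3={"use_accelerate_endpoint": True}),
)

url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-bucket", "Key": "my-key", "ContentType": "application/octet-stream"},
    ExpiresIn=3600,  # 1 hour, longer than the slowest upload in the table above
)
print(url)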
I have a very strange issue with uploading to S3 from Boto. In our (Elastic-Beanstalk-)deployed instances, we have no problems uploading to S3, and other developers with the same S3 credentials also have no issues. However, when locally testing, using the same Dockerfile, I can upload files up to exactly 1391 bytes, but anything 1392 bytes and above just gives me a connection that times out and retries a few times.
2018-03-27 18:14:34 botocore.vendored.requests.packages.urllib3.connectionpool INFO Starting new HTTPS connection (1): xxx.s3.amazonaws.com
2018-03-27 18:14:34 botocore.vendored.requests.packages.urllib3.connectionpool INFO Starting new HTTPS connection (1): xxx.s3.xxx.amazonaws.com
2018-03-27 18:15:14 botocore.vendored.requests.packages.urllib3.connectionpool INFO Resetting dropped connection: xxx.s3.xxx.amazonaws.com
I've tried this with every variant of uploading to S3 from Boto, including boto3.resource('s3').meta.client.upload_file, boto3.resource('s3').meta.client.upload_fileobj, and boto3.resource('s3').Bucket('xxx').put_object.
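For reference, the simplest forms of those calls look like this (the bucket name, key and body below are placeholders, not values from my setup):
# Minimal sketch of the calls that hang locally; bucket name, key and body
# are placeholders. Anything at or above 1392 bytes reproduces the timeout.
import boto3

s3 = boto3.resource("s3")

# Managed upload from a local file
s3.meta.client.upload_file("payload.bin", "my-bucket", "payload.bin")

# Plain PutObject with an in-memory body just over the threshold
s3.Bucket("my-bucket").put_object(Key="payload.bin", Body=b"a" * 1392)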
Any ideas what could be wrong here?
I'm trying to upload a video from a Cordova app to an Amazon AWS S3 bucket from an Android/iPhone. But it's failing sometimes, giving sporadic reports of this error from the AWS bucket:
http_status:400,
<Code>EntityTooLarge</Code>
Some of the files are tiny, some around 300 MB or so.
What can I do to resolve this at the AWS end?
The 400 Bad Request error is sometimes used by S3 to indicate conditions that make the request in some sense invalid -- not just syntactically invalid, which is the traditional sense of 400 errors.
EntityTooLarge
Your proposed upload exceeds the maximum allowed object size.
400 Bad Request
http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
Note the word "proposed." This appears to be a reaction to the Content-Length request header you are sending. You may want to examine that. Perhaps the header is inconsistent with the actual size of the file, or the file is being detected as larger than it actually is.
Note that while the maximum object size in S3 is 5 TiB, the maximum upload size is 5 GiB. (Objects larger than 5 GiB have to be uploaded in multiple parts.)
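If the larger videos are in fact crossing that single-request limit, a managed multipart upload sidesteps it. Here is a minimal boto3 sketch (Python, since the question doesn't show which SDK the app uses to talk to S3; the file, bucket and key are placeholders):
# Minimal sketch of a managed multipart upload with boto3. The transfer manager
# splits the file into parts automatically once it crosses the configured
# threshold; the file, bucket and key are placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,  # upload in 64 MB parts
)

s3.upload_file("video.mp4", "my-bucket", "videos/video.mp4", Config=config)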
413 errors occur when the request body is larger than the server is configured to allow. I believe it's not an error that S3 itself is throwing, because S3 supports objects up to 5 TB.
If you are first accepting this video on your own server and from there making the request to Amazon S3, then your server is not configured to accept large entities in the request.
Refer to set-entity-size for different servers. If your server is not listed there, then you need to figure out how to increase the entity size for your server.
Update: keep-alive wasn't set on the AWS client. My fix was:
var aws = require('aws-sdk');
var https = require('https');

// Reuse TLS connections instead of opening a new one per request
aws.config.httpOptions.agent = new https.Agent({
  keepAlive: true
});
I finally managed to debug it using the Node --prof flag, and then node-tick-processor to analyze the output (it's a packaged version of a tool distributed in the Node/V8 source code). Most of the processing time was spent in SSL processing, and that's when I thought to check whether or not it was using keep-alive.
TL;DR: I'm getting throttled by AWS even though the number of requests is less than the configured DynamoDB throughput. Is there a request rate limit for all APIs?
I'm having a hard time finding documentation about the rate limiting of AWS APIs.
An application that I'm testing now is making about 80 requests per second to DynamoDB, a mix of PUTs and GETs. My DynamoDB table is configured with a throughput of 250 reads / 250 writes. In the table's CloudWatch metrics, the reads peak at 24 and the writes at 59 during the test period.
This is a sample of my response times. First, subsecond response times.
2015-10-07T15:28:55.422Z 200 in 20 milliseconds in request to dynamodb.us-east-1.amazonaws.com
2015-10-07T15:28:55.423Z 200 in 22 milliseconds in request to dynamodb.us-east-1.amazonaws.com
A lot longer, but fine...
2015-10-07T15:29:33.907Z 200 in 244 milliseconds in request to dynamodb.us-east-1.amazonaws.com
2015-10-07T15:29:33.910Z 200 in 186 milliseconds in request to dynamodb.us-east-1.amazonaws.com
The requests are piling up...
2015-10-07T15:32:41.103Z 200 in 1349 milliseconds in request to dynamodb.us-east-1.amazonaws.com
2015-10-07T15:32:41.104Z 200 in 1181 milliseconds in request to dynamodb.us-east-1.amazonaws.com
...no...
2015-10-07T15:41:09.425Z 200 in 6596 milliseconds in request to dynamodb.us-east-1.amazonaws.com
2015-10-07T15:41:09.428Z 200 in 5902 milliseconds in request to dynamodb.us-east-1.amazonaws.com
I went and got some tea...
2015-10-07T15:44:26.463Z 200 in 13900 milliseconds in request to dynamodb.us-east-1.amazonaws.com
2015-10-07T15:44:26.464Z 200 in 12912 milliseconds in request to dynamodb.us-east-1.amazonaws.com
Anyway, I stopped the test, but this is a Node.js application so a bunch of sockets were left open waiting for my requests to AWS to complete. I got response times > 60 seconds.
My DynamoDB throughput wasn't used much, so I assume that the limit is on API requests, but I can't find any information on it. What's interesting is that the 200 in the log entries is the response code from AWS, which I got by hacking a bit of the SDK. I think AWS is supposed to return 429s -- all their SDKs implement exponential backoff.
Anyway -- I assumed that I could make as many requests to DynamoDB as the configured throughput allows. Is that right? ...or what?
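For reference, the SDKs' retry and backoff behaviour can also be configured explicitly on the client. A minimal sketch with boto3 (Python, not the Node SDK the app above uses, so purely illustrative; the table name, region and limits are placeholders):
# Illustrative sketch only: configure the SDK's retry/backoff behaviour so that
# throttled calls back off instead of piling up. Table name, region and limits
# are placeholders.
import boto3
from botocore.config import Config

dynamodb = boto3.client(
    "dynamodb",
    region_name="us-east-1",
    config=Config(
        retries={"max_attempts": 10, "mode": "adaptive"},  # adaptive client-side rate limiting
        max_pool_connections=50,                           # cap concurrent connections
    ),
)

response = dynamodb.get_item(
    TableName="my-table",
    Key={"id": {"S": "example-id"}},
)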