How do I read data from an s3n:// path directly from AWS? At the moment I can only reach data under s3:// paths, not s3n://.
I'm getting the connection error below when reading data from Qubole into a Jupyter notebook:
ConnectionError: HTTPSConnectionPool(host='api.qubole.com', port=443): Max retries exceeded with url: /api/v1.2/commands (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f6bdb5e5650>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))
Thanks for your time.
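Note that "[Errno -3] Temporary failure in name resolution" means the DNS lookup for api.qubole.com failed before any S3/s3n access was even attempted, so this points at the notebook environment's network, not at the path scheme. A minimal stdlib sketch to check whether the endpoint resolves at all (the helper name `can_resolve` is made up here; the hostname is the one from the error):

```python
import socket

def can_resolve(host):
    """Return True if DNS resolution for `host` succeeds from this machine."""
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

# Run from the notebook environment; if this returns False, the problem is
# DNS/network egress from the notebook host, not s3 vs s3n access.
# can_resolve("api.qubole.com")
```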
I was able to retrieve data from the Baseline Surface Radiation Network (BSRN) after receiving a username and password. After a week, it started failing with the error shown below. Any suggestions on how to solve it?
import pandas as pd
import pvlib

data, metadata = pvlib.iotools.get_bsrn(
    start=pd.Timestamp(2020, 2, 1), end=pd.Timestamp(2020, 2, 2),
    station='BOS', username='redacted', password='redacted')
data
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
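WinError 10060 is a plain TCP connect/read timeout: if the BSRN server worked for a week and then stopped responding, the cause is often transient server load or a network/firewall change on either end. One generic mitigation is to retry with exponential backoff; the sketch below is not pvlib-specific, and `retry_call` and its parameters are invented here for illustration:

```python
import time

def retry_call(func, attempts=3, base_delay=2.0, retry_on=(OSError,)):
    """Call func(), retrying on timeout-like errors with exponential backoff."""
    for attempt in range(attempts):
        try:
            return func()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of retries; surface the original error
            time.sleep(base_delay * (2 ** attempt))  # wait 2s, 4s, 8s, ...

# Hypothetical usage, wrapping the flaky download:
# data, metadata = retry_call(
#     lambda: pvlib.iotools.get_bsrn(start=..., end=..., station='BOS',
#                                    username='redacted', password='redacted'))
```

If the failure persists across retries and days, it is worth checking from another network whether the BSRN FTP server is reachable at all, since credentials there are unchanged.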
We are making streaming inserts directly into a BigQuery table and we randomly receive timeouts. The Google Cloud status page doesn't report any problems, and we are respecting the quotas and limits.
Google\Cloud\Core\Exception\ServiceException: cURL error 7: Failed to connect to oauth2.googleapis.com port 443: Connection timed out (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
Google\Cloud\Core\Exception\ServiceException: cURL error 7: Failed to connect to bigquery.googleapis.com port 443: Connection timed out (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
Is anyone having the same problem?
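cURL error 7 means the TCP connection to port 443 was never established (as opposed to a slow response), which usually points at egress firewall/NAT/routing issues between your server and oauth2.googleapis.com / bigquery.googleapis.com rather than at BigQuery itself. A small stdlib probe to run from the affected machine when a timeout occurs (the helper name `can_connect` is made up; the hostnames are the ones from the errors):

```python
import socket

def can_connect(host, port=443, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From the machine doing the inserts, at the time of a failure:
# can_connect("oauth2.googleapis.com")
# can_connect("bigquery.googleapis.com")
```

If these intermittently return False, the problem is network-level connectivity, and retrying the insert with backoff is the usual client-side workaround.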
Currently, I have a problem downloading data from Google Cloud Storage. Here is the error: HTTPSConnectionPool(host='oauth2.googleapis.com', port=443): Read timed out. (read timeout=60)
Although I increased the timeout to 600, the error still shows 60. (Note that the failing host is oauth2.googleapis.com, i.e. the OAuth token request, so the download timeout parameter may not apply to that call.) Here is my code:
from google.cloud import storage

storage_client = storage.Client.from_service_account_json(SERVICE_ACCOUNT_JSON_FILENAME)
bucket = storage_client.get_bucket(BUCKET_NAME)
blob = bucket.blob('blob_name')
blob.download_to_filename('dest_name', timeout=600)

I also tried passing the timeout as a (connect_timeout, read_timeout) tuple:

blob.download_to_filename('dest_name', timeout=(600, 600))
but it still doesn't work.
Could you please help me solve this problem?
Thank you
I'm trying to upload a file to an AWS S3 bucket and I keep getting this exception: "SdkClientException: Unable to execute HTTP request: Connection reset".
I'm uploading from an InputStream:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.US_EAST_2)
        .withCredentials(new AWSStaticCredentialsProvider(cred))
        .build();
try {
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(bytesArray.length);
    metadata.setContentType("text/csv");
    s3Client.putObject(new PutObjectRequest(appConfig.getBucketName(), fileName, inStream, metadata)
            .withCannedAcl(CannedAccessControlList.PublicRead));
} catch (SdkClientException e) {
    // handle or log the failed upload
}
----
Below is the exception in detail. What could be the reason? My S3 bucket is accessed through an IAM user with the S3 full-access policy. Is this issue related to the network or some other setting?
com.amazonaws.SdkClientException: Unable to execute HTTP request: Connection reset
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1136)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1082)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:745)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:719)
-------
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:396)
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355)
at com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:142)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142
I found it was occurring due to a firewall. If you set logging.level.com.amazonaws to DEBUG, the logs will show the SSL handshake failing.
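Assuming a Spring Boot application (the logging.level.* property syntax suggests one), this would go in application.properties:

```properties
# application.properties - enable AWS SDK wire/handshake debug logs
logging.level.com.amazonaws=DEBUG
```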
If you are running the application behind a proxy, this can happen. You can set the proxy as follows:
public static void setProxy() {
    System.setProperty("http.proxyHost", "<http-proxy-url>");
    System.setProperty("http.proxyPort", "<port>");
    System.setProperty("https.proxyHost", "<https-proxy-url>");
    System.setProperty("https.proxyPort", "<port>");
}
I'm downloading around 400 files asynchronously from my Amazon S3 bucket in my iOS app, written in Swift, but sometimes I get this error for several of the files. The maximum file size is around 4 MB and the minimum is a few KB.
Error is Optional(Error Domain=NSURLErrorDomain Code=-1001 "The request timed out." UserInfo={NSUnderlyingError=0x600000451190 {Error Domain=kCFErrorDomainCFNetwork Code=-1001 "(null)" UserInfo={_kCFStreamErrorCodeKey=-2102, _kCFStreamErrorDomainKey=4}}, NSErrorFailingURLStringKey=https://s3.us-east-2.amazonaws.com/mybucket/folder/file.html, NSErrorFailingURLKey=https://s3.us-east-2.amazonaws.com/mybucket/folder/file.html, _kCFStreamErrorDomainKey=4, _kCFStreamErrorCodeKey=-2102, NSLocalizedDescription=The request timed out.})
How can I prevent it?
Try increasing the timeout, and make sure the downloads actually use the configured session:

let urlconfig = URLSessionConfiguration.default
urlconfig.timeoutIntervalForRequest = 300 // 300 seconds
let session = URLSession(configuration: urlconfig) // use this session for the S3 downloads