I am trying to connect to Athena using SQL Workbench. I followed all the instructions on pages 15 to 19 of this PDF file:
https://s3.amazonaws.com/athena-downloads/drivers/JDBC/SimbaAthenaJDBC_2.0.7/docs/Simba+Athena+JDBC+Driver+Install+and+Configuration+Guide.pdf
If I use the default Athena bucket name, I get this error:
S3://aws-athena-query-results-51346970XXXX-us-east-1/Unsaved
[Simba]AthenaJDBC An error has been thrown from the AWS SDK
client. Unable to execute HTTP request: No such host is known
(athena.useast-1.amazonaws.com) [Execution ID not available]
For any other bucket name, I get this error:
s3://todel162/testfolder-1
[Simba]AthenaJDBC An error has been thrown from the AWS SDK
client. Unable to execute HTTP request: athena.useast-1.amazonaws.com
[Execution ID not available]
How do I connect to Athena using a JDBC client?
Copy-pasting the string from page 16 was the problem:
jdbc:awsathena://AwsRegion=useast-1;
It should have a hyphen in the region name, like this:
jdbc:awsathena://AwsRegion=us-east-1;
Once I corrected this, I was able to connect.
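For reference, a minimal sketch of what the corrected URL looks like in practice, tested here via JayDeBeApi from Python rather than SQL Workbench (an assumption on my part, not the setup from the question; the jar path, credentials, and S3 output location are placeholders):

import jaydebeapi

# Placeholders: adjust the jar path, credentials, and S3OutputLocation for your account.
conn = jaydebeapi.connect(
    "com.simba.athena.jdbc.Driver",
    "jdbc:awsathena://AwsRegion=us-east-1;S3OutputLocation=s3://aws-athena-query-results-51346970XXXX-us-east-1/Unsaved;",
    {"User": "<aws-access-key-id>", "Password": "<aws-secret-access-key>"},
    "/path/to/AthenaJDBC42_2.0.7.jar",
)
cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchall())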
When I try to connect Snowflake to Power BI, I get an error:
"ODBC: ERROR [HY000] [Microsoft][Snowflake] (4)
REST request for URL https://eda87722.snowflakecomputing.com:443/session/v1/login-request?requestId=9eb99320-1fc6-49ee-859b-0fca3e70638b&request_guid=5f7d40b7-7025-410c-8905-da5b7e7e78cd&warehouse=COMPUTE_WH failed: CURLerror (curl_easy_perform() failed) - code=5 msg='Couldn't resolve proxy name' osCode=9 osMsg='Bad file descriptor'."
Interestingly, I'm able to establish the connection on another machine with no issues, but I can't find the reason for this one.
What I've tried:
Established the connection with the VPN on and off - no difference
Checked the environment variables http_proxy and https_proxy (http://proxyserver.internal) - they exist (see the snippet after this list)
Checked my access to data in Snowflake - I have access
Reinstalled Power BI
Tried to install the Snowflake ODBC driver, but it is not visible in my list of drivers
Tried to connect to different warehouses in Snowflake - same issue
The 'Auto resume' option for my warehouse is on
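Since the message is "Couldn't resolve proxy name", one more diagnostic I could run is to confirm that the proxy host in those variables actually resolves from this machine - a minimal sketch, assuming Python is available here:

import os
import socket
from urllib.parse import urlparse

# Check that the proxy host configured in the environment can be resolved.
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    value = os.environ.get(var)
    if not value:
        continue
    host = urlparse(value).hostname
    try:
        print(f"{var}={value} -> {socket.gethostbyname(host)}")
    except socket.gaierror as exc:
        print(f"{var}={value} -> cannot resolve: {exc}")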
Could you please advise what else I can do?
Thank you.
I want to create a one-way, real-time copy of a Salesforce (SF) object in Redshift, the idea being that when fields are updated in SF, those fields are updated in Redshift as well. The history of changes is irrelevant in AWS/Redshift - that's all being tracked in SF. I just need a real-time, read-only copy of that particular object to query, preferably without having to query the whole SF object, clear the Redshift table, and pipe the data back in.
I thought AWS AppFlow listening for SF Change Data Capture events might be a good setup for this:
When I try to create a flow, I don't have any issues with the SF source connection:
So I click "Connect" in the Destination details section to set up Redshift, fill out that page, and click "Connect" again:
About 5 seconds go by and I receive this error pop-up:
An error occurred while creating the connection
Error while communicating to connector: Failed to validate Connection while attempting "select distinct(table_schema) from information_schema.tables limit 1" with connector failure Can't connect to JDBC database with message: Amazon Error setting/closing connection: SocketTimeoutException. (Service: null; Status Code: 400; Error Code: Client; Request ID: null; Proxy: null)
I know my connection string, username, password, etc. are all good - I'm connected to Redshift in other apps. Any idea what the issue could be? Is this even the right solution for what I'm trying to do?
I solved this by adding the AppFlow IP ranges for my region to my Redshift VPC's security group inbound rules.
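In case it helps anyone, this is roughly the change I mean, sketched with boto3; the security group ID and CIDR below are placeholders, and the real AppFlow ranges for your region come from the AppFlow documentation:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholders: substitute your Redshift security group ID and the documented
# AppFlow CIDR ranges for your region.
appflow_cidrs = ["203.0.113.0/24"]

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5439,  # default Redshift port
            "ToPort": 5439,
            "IpRanges": [
                {"CidrIp": cidr, "Description": "Amazon AppFlow"} for cidr in appflow_cidrs
            ],
        }
    ],
)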
This is a really strange one, as it started throwing errors overnight - it had been working fine up until yesterday, and this morning it's been playing up all day.
I'm using illuminate/filesystem in my project and for the endpoint I was using:
https://s3.eu-west-2.amazonaws.com
This morning we started getting errors saying:
Error executing "ListObjects" on "bucket-01.https://s3.eu-west-2.amazonaws.com"; AWS HTTP error: cURL error 1: Protocol "bucket-01.https" not supported or disabled in libcurl (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
File: .../vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php
Line: 195
Seeing that it tries to prepend the bucket name before the protocol of the endpoint, I decided to remove the protocol from the endpoint, making it
s3.eu-west-2.amazonaws.com
Now I'm getting an error saying
Error executing "ListObjects" on "//bucket-01.s3.eu-west-2.amazonaws.com/bucket-01.s3.eu-west-2.amazonaws.com";
AWS HTTP error: Client error: GET http://bucket-01.s3.eu-west-2.amazonaws.com/bucket-01.s3.eu-west-2.amazonaws.com resulted in a 404 Not Found response
NoSuchKey The specified key does not exist.
As you can see, it now appends the endpoint after the initial endpoint.
Does anyone know what might have happened?
After hours of searching for a solution, I came across this issue in the laravel/framework repository: https://github.com/laravel/framework/issues/36694
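For what it's worth, the behaviour the SDKs aim for is that the endpoint is the bare regional host with a scheme, and the bucket gets prepended as a subdomain when virtual-hosted-style URLs are built. A quick illustration with boto3 rather than the PHP SDK from the question (the bucket name is the one above):

import boto3

# The endpoint carries the scheme but not the bucket; the SDK builds
# https://bucket-01.s3.eu-west-2.amazonaws.com/... from it.
s3 = boto3.client(
    "s3",
    region_name="eu-west-2",
    endpoint_url="https://s3.eu-west-2.amazonaws.com",
)
print(s3.list_objects_v2(Bucket="bucket-01", MaxKeys=1))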
Following the steps in https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries#setting_up_database_connections, after the API was activated I tried to set up a connection between our BigQuery project and our PostgreSQL Cloud SQL database. I entered everything correctly:
When I try to connect, I get the following error, without any error code or other information as to why it might have gone wrong:
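For context, the end goal once the connection is created would be a federated query along these lines - a sketch only; the connection ID, project, and table names are placeholders:

from google.cloud import bigquery

client = bigquery.Client()

# Placeholder connection ID and remote table; EXTERNAL_QUERY runs the inner
# statement against the Cloud SQL PostgreSQL instance.
sql = """
SELECT *
FROM EXTERNAL_QUERY(
  'my-project.us.my-postgres-connection',
  'SELECT id, name FROM public.customers;'
)
"""
for row in client.query(sql).result():
    print(row)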
I'm trying to connect to a kerberized Hadoop cluster via Livy to execute Spark code. The requests call I'm making is below.
import json, requests
from requests_kerberos import HTTPKerberosAuth, REQUIRED
kerberos_auth = HTTPKerberosAuth(mutual_authentication=REQUIRED, force_preemptive=True)
r = requests.post(host + '/sessions', data=json.dumps(data), headers=headers, auth=kerberos_auth)
This call fails with the following error:
GSSException: No valid credentials provided (Mechanism level: Failed
to find any Kerberos credentails)
Any help here would be appreciated.
When Hadoop service daemons run in secure mode, Kerberos tickets are decrypted with a keytab, and each service uses its keytab to determine the credentials of the user coming into the cluster. Without a keytab in place containing the right service principal, you will get this error message. Please refer to the Hadoop in Secure Mode documentation for further details on setting up the keytab.
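On the client side, this message often just means there is no ticket in the credential cache when requests-kerberos runs. A minimal sketch, assuming MIT Kerberos kinit is on the PATH and you have a keytab for your user principal (the keytab path and principal are placeholders):

import subprocess

# Obtain a TGT from a keytab before calling Livy; HTTPKerberosAuth then picks it
# up from the default credential cache. Principal and keytab path are placeholders.
subprocess.run(
    ["kinit", "-kt", "/etc/security/keytabs/myuser.keytab", "myuser@EXAMPLE.COM"],
    check=True,
)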