Error code 422 while connecting from superset to Athena - amazon-athena

I am getting a 422 UNPROCESSABLE ENTITY error while connecting from Superset to Athena (superset==1.0.1).
I am using the following format for the connection string:
awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com:443/{schema_name}?s3_staging_dir={s3_staging_dir}

Can you try this:
awsathena+rest://<aws_access_key_id>:<aws_secret_access_key>@athena.<region name>.amazonaws.com/<database name>?s3_staging_dir=s3://<s3 bucket>/&work_group=<work group>
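A common pitfall with this URI is that AWS secret keys often contain `/` or `+`, which must be percent-encoded or SQLAlchemy will fail to parse the string. A minimal sketch of a URI builder (the function name and parameters are my own, not part of Superset or PyAthena):

```python
from urllib.parse import quote_plus

def athena_uri(access_key: str, secret_key: str, region: str,
               schema: str, s3_staging_dir: str, work_group: str = "") -> str:
    """Build a PyAthena SQLAlchemy URI, percent-encoding the credentials
    and the staging dir so special characters don't break URI parsing."""
    uri = (
        f"awsathena+rest://{quote_plus(access_key)}:{quote_plus(secret_key)}"
        f"@athena.{region}.amazonaws.com:443/{schema}"
        f"?s3_staging_dir={quote_plus(s3_staging_dir)}"
    )
    if work_group:
        uri += f"&work_group={work_group}"
    return uri
```

Paste the resulting string into Superset's SQLAlchemy URI field; an unencoded `/` or `+` in the secret key is a frequent cause of otherwise mysterious connection errors.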

Related

Unable to refresh Power BI data connected to Databricks due to ODBC error

My Power BI dashboard is connected to Databricks.
When I try to refresh it, this error appears:
The error:
Error on OLE DB or ODBC: [DataSource.Error] ERROR [HY000] [Microsoft][Hardy] (35) Error from server: error code: '0' error message: 'Unable to continue fetch after reconnect. Retry limit exceeded.'
I believe it is related to ODBC.
Can anyone help me, please?
Thanks in advance.

AWS AppFlow Salesforce to Redshift Error Creating Connection

I want to create a one-way, real-time copy of a Salesforce (SF) object in Redshift. The idea is that when fields are updated in SF, those fields will be updated in Redshift as well. The history of changes is irrelevant in AWS/Redshift; that's all being tracked in SF. I just need a real-time, read-only copy of that particular object to query, preferably without having to query the whole SF object, clear the Redshift table, and pipe the data back in.
I thought AWS AppFlow listening for SF Change Data Capture events might be a good setup for this:
When I try to create a flow, I don't have any issues with the SF source connection:
So I click "Connect" in the Destination details section to set up Redshift, fill out the connection page, and click "Connect" again.
About 5 seconds go by and I receive this error pop-up:
An error occurred while creating the connection
Error while communicating to connector: Failed to validate Connection while attempting "select distinct(table_schema) from information_schema.tables limit 1" with connector failure Can't connect to JDBC database with message: Amazon Error setting/closing connection: SocketTimeoutException. (Service: null; Status Code: 400; Error Code: Client; Request ID: null; Proxy: null)
I know my connection string, username, password, etc are all good - I'm connected to Redshift in other apps. Any idea what the issue could be? Is this even the right solution for what I'm trying to do?
I solved this by adding the AppFlow IP ranges for my region to my Redshift VPC's security group inbound rules.
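For reference, the fix described above can be scripted with boto3. This is only a sketch: the security group ID and CIDR below are placeholders, and you would substitute your Redshift cluster's security group, its port, and the AppFlow IP ranges that AWS publishes for your region.

```python
import boto3

# Placeholder values: replace with your Redshift cluster's security
# group ID and the published AppFlow CIDR block(s) for your region.
ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5439,  # default Redshift port
        "ToPort": 5439,
        "IpRanges": [{
            "CidrIp": "203.0.113.0/24",  # placeholder, not a real AppFlow range
            "Description": "AppFlow ingress",
        }],
    }],
)
```

One rule per AppFlow CIDR block; without these, AppFlow's connection test times out exactly as in the SocketTimeoutException above.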

Connect to Athena using SQL workbench

I am trying to connect to Athena using SQL Workbench. I followed the instructions on pages 15 to 19 of this PDF:
https://s3.amazonaws.com/athena-downloads/drivers/JDBC/SimbaAthenaJDBC_2.0.7/docs/Simba+Athena+JDBC+Driver+Install+and+Configuration+Guide.pdf
If I use the default Athena bucket name, I get this error:
S3://aws-athena-query-results-51346970XXXX-us-east-1/Unsaved
[Simba]AthenaJDBC An error has been thrown from the AWS SDK
client. Unable to execute HTTP request: No such host is known
(athena.useast-1.amazonaws.com) [Execution ID not available]
For any other bucket name I get this error:
s3://todel162/testfolder-1
[Simba]AthenaJDBC An error has been thrown from the AWS SDK
client. Unable to execute HTTP request: athena.useast-1.amazonaws.com
[Execution ID not available]
How do I connect to Athena using a JDBC client?
Copy-pasting the connection string from page 16 of the guide introduced an issue:
jdbc:awsathena://AwsRegion=useast-1;
It should have a hyphen, like this:
jdbc:awsathena://AwsRegion=us-east-1;
Once I corrected this, I was able to connect.
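A quick sanity check on the region string catches this class of typo before it ever reaches the driver. This is a rough pattern for standard commercial region names only (GovCloud and some China regions have an extra segment), offered purely as a sketch:

```python
import re

# Standard AWS commercial regions look like <geo>-<direction>-<digit>,
# e.g. us-east-1 or ap-southeast-2. Partitions such as GovCloud
# (us-gov-west-1) would need a looser pattern.
REGION_RE = re.compile(r"^[a-z]{2,3}-[a-z]+-\d$")

def looks_like_aws_region(region: str) -> bool:
    """Return True if the string matches the common region-name shape."""
    return REGION_RE.fullmatch(region) is not None
```

Validating `useast-1` against this pattern fails immediately, whereas the JDBC driver only reports an unresolvable hostname much later.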

Adding a source in a gateway

This is the error I'm facing when trying to add a data source in a gateway:
Unable to connect: We encountered an error while trying to connect to
. Details: "We could not register this data source for any gateway
instances within this cluster. Please find more details below about
specific errors for each gateway instance.
Activity ID:
66610131-d0fc-4787-9432-36b2bbc95dbb
Request ID:
b9231dc4-dd80-8b86-6301-c171aad3b879
Cluster URI:
https://wabi-south-east-asia-redirect.analysis.windows.net
Status code:
400
Error Code:
DMTS_PublishDatasourceToClusterErrorCode
Time:
Wed Oct 17 2018 12:48:44 GMT-0700 (Pacific Daylight Time)
Version:
13.0.6980.207
Gateway:
Invalid connection credentials.
Underlying error code:
-2147467259
Underlying error message:
The credentials provided for the File source are invalid. (Source at c:\users\rohan\documents\latest 2018\sara\new folder\2018_sales.xls.)
DM_ErrorDetailNameCode_UnderlyingHResult:
-2147467259
Microsoft.Data.Mashup.CredentialError.DataSourceKind:
File
Microsoft.Data.Mashup.CredentialError.DataSourcePath:
c:\users\rohan\documents\latest 2018\sara\new folder\2018_sales.xls
Microsoft.Data.Mashup.CredentialError.Reason:
AccessUnauthorized
Microsoft.Data.Mashup.MashupSecurityException.DataSources:
[{"kind":"File","path":"c:\\users\\rohan\\documents\\latest 2018\\sara\\new folder\\2018_sales.xls"}]
Microsoft.Data.Mashup.MashupSecurityException.Reason:
AccessUnauthorized

DMS Source connection issue

Error Details: [errType=ERROR_RESPONSE, status=1022506,
errMessage=Failed to connect Network error has occurred, errDetails=
RetCode: SQL_ERROR SqlState: HYT00 NativeError: 0 Message:
[unixODBC][Microsoft][ODBC Driver 13 for SQL Server]Login timeout
expired ODBC general error.
I am getting this while creating a DMS source endpoint. I have also created firewall inbound rules. The target endpoint tested successfully.
My goal is to migrate an on-premises SQL Server database to a SQL Server instance installed on AWS EC2.
Can anyone help, please?
Thanks in advance.
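Since the message is a login timeout rather than a refusal, the first thing worth verifying is plain TCP reachability from the replication instance's network to the on-premises SQL Server (default port 1433). A minimal reachability probe, assuming nothing beyond the standard library:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within
    `timeout` seconds. A timeout here usually points to a firewall,
    security group, or routing problem rather than bad credentials."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False for the SQL Server host and port when run from a machine in the same subnet as the DMS replication instance, the problem is network-level (VPN, firewall, or security group) and no ODBC setting will fix it.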