Error Details: [errType=ERROR_RESPONSE, status=1022506,
errMessage=Failed to connect Network error has occurred, errDetails=
RetCode: SQL_ERROR SqlState: HYT00 NativeError: 0 Message:
[unixODBC][Microsoft][ODBC Driver 13 for SQL Server]Login timeout
expired ODBC general error.
I am getting this error while creating a DMS source endpoint. I have also created firewall inbound rules, and the target endpoint tested successfully.
My goal is to migrate an on-premises SQL Server database to a SQL Server instance installed on an AWS EC2 instance.
Can anyone please help me?
Thanks in advance.
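Because the target endpoint tests fine while the source times out, the first thing worth ruling out is plain TCP reachability from the replication instance's network to the SQL Server port. Here is a minimal sketch, assuming the default port 1433 and a placeholder hostname, that could be run from an EC2 host in the same VPC subnet as the replication instance:

import socket

# Placeholder on-premises SQL Server endpoint; replace with the real host and port.
HOST = "onprem-sql.example.internal"
PORT = 1433  # default SQL Server port

try:
    # A DMS "Login timeout expired" usually means this TCP handshake never completes.
    with socket.create_connection((HOST, PORT), timeout=10):
        print(f"TCP connection to {HOST}:{PORT} succeeded - routing and firewall look OK")
except OSError as exc:
    print(f"Could not reach {HOST}:{PORT}: {exc}")
    print("Check security groups, NACLs, the on-premises firewall, and VPN/Direct Connect routing.")

If this check fails from the replication instance's subnet, the problem is network routing or firewall rules rather than the endpoint definition itself.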
My Power BI dashboard is connected to Databricks.
When I try to refresh it, this error appears:
Error on OLE DB or ODBC: [DataSource.Error] ERROR [HY000] [Microsoft][Hardy] (35) Error from server: error code: '0' error message: 'Unable to continue fetch after reconnect. Retry limit exceeded.'
I believe it has something to do with ODBC.
Can anyone help me, please?
Thanks in advance.
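Not a definitive fix, but one way to narrow down whether the problem is on the Databricks side or in the Power BI/ODBC layer is to run a comparable fetch outside Power BI. Here is a minimal sketch using the databricks-sql-connector package; the hostname, HTTP path, token, and table name are all placeholders:

from databricks import sql  # pip install databricks-sql-connector

# Placeholder connection details - copy the real values from the cluster's JDBC/ODBC tab.
with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",
    http_path="/sql/protocolv1/o/1234567890123456/0123-456789-abcde123",
    access_token="dapiXXXXXXXXXXXXXXXX",
) as conn:
    with conn.cursor() as cur:
        # Fetch a large result set to see whether fetching (not connecting) is what fails.
        cur.execute("SELECT * FROM my_schema.my_table LIMIT 100000")
        rows = cur.fetchall()
        print(f"Fetched {len(rows)} rows without a reconnect error")

If the same fetch also fails here, the cluster (for example, auto-termination or losing nodes mid-query) is a more likely culprit than Power BI itself.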
When I try to connect Snowflake to Power BI, I get this error:
"ODBC: ERROR [HY000] [Microsoft][Snowflake] (4)
REST request for URL https://eda87722.snowflakecomputing.com:443/session/v1/login-request?requestId=9eb99320-1fc6-49ee-859b-0fca3e70638b&request_guid=5f7d40b7-7025-410c-8905-da5b7e7e78cd&warehouse=COMPUTE_WH failed: CURLerror (curl_easy_perform() failed) - code=5 msg='Couldn't resolve proxy name' osCode=9 osMsg='Bad file descriptor'."
Interestingly, I am able to establish the connection on another machine with no issues, but I can't find the reason why it fails on this one.
What I've tried:
Tried connecting with VPN on and VPN off - no difference
Checked the environment variables http_proxy and https_proxy (http://proxyserver.internal) - they exist
Checked my access to data in Snowflake - I have access
Reinstalled Power BI
Tried to install the Snowflake ODBC driver, but it does not appear in my list of drivers
Tried to connect to different warehouses in Snowflake - same issue
The 'Auto resume' option for my warehouse is on
Could you please advise what else I can do?
Thank you.
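Since the error is literally 'Couldn't resolve proxy name', the Snowflake ODBC driver is most likely picking up a proxy setting that this particular machine cannot resolve. A small diagnostic sketch, assuming the proxy host from the question (proxyserver.internal):

import os
import socket

# Show every proxy-related variable the driver might pick up.
for name in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY", "no_proxy", "NO_PROXY"):
    print(f"{name} = {os.environ.get(name)!r}")

# Try to resolve the proxy host those variables point at (placeholder from the question).
proxy_host = "proxyserver.internal"
try:
    print(f"{proxy_host} resolves to {socket.gethostbyname(proxy_host)}")
except OSError as exc:
    print(f"Cannot resolve {proxy_host}: {exc}")

If the proxy host does not resolve on this machine but does on the working one, the fix is on the DNS/VPN side; alternatively, unset the proxy variables (or add the Snowflake host to NO_PROXY) before launching Power BI.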
I am new to Redis Enterprise and can't fix this problem:
I have a Redis Enterprise cluster (v6.0) in AWS with two nodes. When I have only one node, I can log in to the UI, but after adding the second node, the UI always throws me back to the login page after I enter my credentials. Meanwhile, the cluster itself works fine (according to rladmin).
In what direction should I investigate this issue?
P.S.: Could this error from the logs be the cause?
ERROR redis_mgr MainThread: Connect failed: connect: connection failed: Error 2 connecting to unix socket: /var/opt/redislabs/run/ccs.sock. No such file or directory.: retrying
Perhaps this solution will help somebody:
The reason was that the ALB in front of the UI did not use sticky sessions.
The solution was to enable sticky sessions, and it works.
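For reference, the same fix can be applied programmatically. A minimal boto3 sketch, assuming the target group ARN below is replaced with the one fronting the Redis Enterprise UI:

import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN of the target group that the ALB uses for the Redis Enterprise UI.
target_group_arn = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/redis-enterprise-ui/0123456789abcdef"
)

elbv2.modify_target_group_attributes(
    TargetGroupArn=target_group_arn,
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)
print("Sticky sessions enabled on the UI target group")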
I'm trying to migrate and synchronize a PostgreSQL database using AWS DMS and I'm getting the following error.
Last Error Task error notification received from subtask 0, thread 0 [reptask/replicationtask.c:2673] [1020487]
RetCode: "SQL_ERROR SqlState: 42703 NativeError: 1
Message: ERROR: column "xlog_position" does not exist; No query has been executed with that handle; RetCode: SQL_ERROR SqlState: 42P01 NativeError: 1
Message: ERROR: relation "pglogical.replication_set" does not exist; No query has been executed with that handle; RetCode: SQL_ERROR SqlState: 42703 NativeError: 1 Message: ERROR: column "xlog_position" does not exist; No query has been executed with that handle;
Could not find any supported plugins available on source; Could not resolve default plugin; Could not assign a postgres plugin to use for replication; Failure in setting Postgres CDC agent control structure; Error executing command; Stream component failed at subtask 0, component st_0_JX7ONUUGB4A2AR2VQ4FMEZ7PFU ; Stream component 'st_0_JX7ONUUGB4A2AR2VQ4FMEZ7PFU' terminated [reptask/replicationtask.c:2680] [1020487] Stop Reason FATAL_ERROR Error Level FATAL
I'm using two PostgreSQL instances as source and target. I have already tested and verified that both database instances are accessible from the replication instance, and the target instance user has full access to the database. Do I need to install any plugins or do additional configuration to get this migration working?
I managed to resolve the issue by following the steps described at
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html.
The issue was that I was using DMS engine v3.1.4, which requires some additional configuration for the replication process to start. Those instructions can be found at https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.v10
If you are experiencing this issue, double-check the DMS replication engine version. It can be viewed under Replication Instances in Resource Management.
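If you'd rather check the engine version from code than from the console, a quick boto3 sketch (assuming AWS credentials and region are already configured):

import boto3

dms = boto3.client("dms")

# List every replication instance together with its DMS engine version.
for instance in dms.describe_replication_instances()["ReplicationInstances"]:
    print(
        instance["ReplicationInstanceIdentifier"],
        instance["EngineVersion"],
        instance["ReplicationInstanceClass"],
    )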
To enable logical decoding for an Amazon RDS for PostgreSQL DB instance:
The user account requires the rds_superuser role to enable logical replication. The user account also requires the rds_replication role to grant permissions to manage logical slots and to stream data using logical slots.
Set the rds.logical_replication static parameter to 1. As part of applying this parameter, we also set the parameters wal_level, max_wal_senders, max_replication_slots, and max_connections. These parameter changes can increase WAL generation, so you should only set the rds.logical_replication parameter when you are using logical slots.
Reboot the DB instance for the static rds.logical_replication parameter to take effect.
Create a logical replication slot as explained in the next section. This process requires that you specify a decoding plugin. Currently we support the test_decoding output plugin that ships with PostgreSQL.
The last item can be done with the following command:
SELECT * FROM pg_create_logical_replication_slot('test_slot', 'test_decoding');
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.FeatureSupport.LogicalReplication
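To confirm the settings actually took effect after the reboot, here is a small psycopg2 sketch (the connection details are placeholders) that checks wal_level and lists the existing replication slots:

import psycopg2

# Placeholder connection details for the RDS PostgreSQL source instance.
conn = psycopg2.connect(
    host="source-db.xxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="mydb",
    user="dms_user",
    password="change-me",
)
with conn, conn.cursor() as cur:
    cur.execute("SHOW wal_level;")
    print("wal_level =", cur.fetchone()[0])  # should report 'logical' once rds.logical_replication=1

    cur.execute("SELECT slot_name, plugin, active FROM pg_replication_slots;")
    for slot_name, plugin, active in cur.fetchall():
        print(slot_name, plugin, "active" if active else "inactive")
conn.close()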
Getting this error when connecting Power BI to Azure Databricks through the built-in Spark connector:
Details: "ODBC: ERROR [HY000] [Microsoft][DriverSupport] (1170)
Unexpected response received from server. Please ensure the server
host and port specified for the connection are correct."
I have checked the host and port of the Databricks cluster many times, and also tried again after restarting the cluster.
Guide I followed for the connection:
https://docs.azuredatabricks.net/user-guide/bi/power-bi.html
Got the same problem today. I followed these instructions and it worked.
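It is also worth confirming that the server hostname taken from the cluster's JDBC/ODBC tab actually resolves and that port 443 is reachable from the machine running Power BI. A tiny sketch (the hostname below is a placeholder):

import socket

# Placeholder workspace hostname - copy it from the cluster's JDBC/ODBC configuration page.
host = "adb-1234567890123456.7.azuredatabricks.net"
port = 443

try:
    with socket.create_connection((host, port), timeout=10):
        print(f"{host}:{port} is reachable; if the error persists, re-check the host and path values in the connector")
except OSError as exc:
    print(f"Cannot reach {host}:{port}: {exc}")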
The user was not able to import SQL data into Power BI and kept getting this error, even though testing the connection in ODBC was successful.
It turned out that he had old credentials stored in Power BI, and that caused authentication issues. Purging the cached data sources (Power BI: Home > Edit Queries > Data source settings) resolved the issue.