I have a Redshift cluster launched and running on AWS, and inbound access is authorized by configuring the VPC security group.
When I try to connect to Redshift with pgAdmin, I receive the following errors:
An error has occurred:
ERROR: permission denied to set parameter "client_min_messages" to "notice"
and
An error has occurred:
Column not found in pgSet: "datlastsysoid"
pgAdmin is primarily a PostgreSQL client and is not a supported client for Redshift. Because of this incompatibility, opening a connection always tries to set client_min_messages, but Redshift refuses to accept that parameter, which causes the error you saw.
Redshift supports only the parameters below, which are set on the cluster:
dev=# show all;
name | setting
---------------------------+----------------------
analyze_threshold_percent | 10
datestyle | ISO, MDY
extra_float_digits | 0
query_group | default
search_path | $user, public, admin
statement_timeout | 0
wlm_query_slot_count | 1
(7 rows)
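As a quick sanity check (a minimal sketch; the exact error text may differ slightly by driver), you can reproduce the problem from any SQL client connected to Redshift by trying to set the parameter yourself:

-- a parameter pgAdmin sets automatically, but Redshift does not support
SET client_min_messages TO notice;
-- ERROR: permission denied to set parameter "client_min_messages" to "notice"

-- one of the supported session parameters listed above works fine
SET statement_timeout TO 60000;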
Use other clients such as psql or SQL Workbench/J instead; pgAdmin deviates from plain PostgreSQL behaviour and doesn't support connections to Redshift. You can also refer to the issue reported on GitHub about this.
I want to create a one-way, real-time copy of a Salesforce (SF) object in Redshift: when fields are updated in SF, those fields should be updated in Redshift as well. The history of changes is irrelevant in AWS/Redshift, since that's all being tracked in SF - I just need a real-time, read-only copy of that particular object to query, preferably without having to query the whole SF object, clear the Redshift table, and pipe the data in.
I thought AWS AppFlow listening for SF Change Data Capture events might be a good setup for this:
When I try to create a flow, I don't have any issues with the SF source connection:
So I click "Connect" in the Destination details section to set up Redshift, fill out the page, and click "Connect" again:
About 5 seconds go by and I receive this error pop-up:
An error occurred while creating the connection
Error while communicating to connector: Failed to validate Connection while attempting "select distinct(table_schema) from information_schema.tables limit 1" with connector failure Can't connect to JDBC database with message: Amazon Error setting/closing connection: SocketTimeoutException. (Service: null; Status Code: 400; Error Code: Client; Request ID: null; Proxy: null)
I know my connection string, username, password, etc. are all good - I'm connected to Redshift in other apps. Any idea what the issue could be? Is this even the right solution for what I'm trying to do?
I solved this by adding the AppFlow IP ranges for my region to my Redshift VPC's security group inbound rules.
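If you want to check connectivity outside of AppFlow, you can run the same validation query that AppFlow attempts (it appears in the error message above) from any client that can reach the cluster; if it succeeds there but times out from AppFlow, the security group rules are the likely culprit:

-- the query AppFlow uses to validate the destination connection
select distinct(table_schema) from information_schema.tables limit 1;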
I have created a PostgreSQL (target) RDS instance on AWS, did the schema conversion using SCT, and now I am trying to move data with an AWS DMS data migration task from a database (DB2) on an EC2 instance (source) to the target DB. The data is not loading and the task gives the following error:
Last Error ODBC general error. Task error notification received from subtask 1, thread 0 [reptask/replicationtask.c:2800] [1022502] Error executing source loop; Stream component failed at subtask 1, component st_1_5D3OUPDVTS3BLNMSQGEXI7ARKY ; Stream component 'st_1_5D3OUPDVTS3BLNMSQGEXI7ARKY' terminated [reptask/replicationtask.c:2807] [1022502] Stop Reason RECOVERABLE_ERROR Error Level RECOVERABLE
I was getting the same error, and the issue was related to the database user's rights for REPLICATION CLIENT and REPLICATION SLAVE, as mentioned in the AWS documentation:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.Prerequisites
I resolved it by granting the above-mentioned REPLICATION rights with the following statements in MySQL (replacing {dbusername} with the actual database user name used in the DMS endpoint):
GRANT REPLICATION CLIENT ON *.* TO '{dbusername}'@'%';
GRANT REPLICATION SLAVE ON *.* TO '{dbusername}'@'%';
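You can confirm the grants took effect for the DMS user with a quick check (same {dbusername} placeholder as above):

-- should list the REPLICATION CLIENT and REPLICATION SLAVE privileges
SHOW GRANTS FOR '{dbusername}'@'%';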
I'm trying to migrate and synchronize a PostgreSQL database using AWS DMS and I'm getting the following error.
Last Error Task error notification received from subtask 0, thread 0 [reptask/replicationtask.c:2673] [1020487]
RetCode: "SQL_ERROR SqlState: 42703 NativeError: 1
Message: ERROR: column "xlog_position" does not exist; No query has been executed with that handle; RetCode: SQL_ERROR SqlState: 42P01 NativeError: 1
Message: ERROR: relation "pglogical.replication_set" does not exist; No query has been executed with that handle; RetCode: SQL_ERROR SqlState: 42703 NativeError: 1 Message: ERROR: column "xlog_position" does not exist; No query has been executed with that handle;
Could not find any supported plugins available on source; Could not resolve default plugin; Could not assign a postgres plugin to use for replication; Failure in setting Postgres CDC agent control structure; Error executing command; Stream component failed at subtask 0, component st_0_JX7ONUUGB4A2AR2VQ4FMEZ7PFU ; Stream component 'st_0_JX7ONUUGB4A2AR2VQ4FMEZ7PFU' terminated [reptask/replicationtask.c:2680] [1020487] Stop Reason FATAL_ERROR Error Level FATAL
I'm using two PostgreSQL instances, one as source and one as target. I have already tested and verified that both database instances are accessible from the replication instance. The target instance user has full access to the database. Do I need to install any plugins or do any additional configuration to get this migration setup working?
I managed to resolve the issue by following the steps mentioned at
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html.
The issue was that I was using DMS engine v3.1.4, which requires some additional configuration for the replication process to start. The instructions can be found at https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.v10
If you are experiencing this issue, double-check the DMS replication engine version. It can be viewed under Replication Instances in Resource Management.
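As a quick way to see whether the source instance is actually prepared for logical decoding (the error above is looking for pglogical.replication_set), you can run checks along these lines on the source database; this is only a sketch and does not replace the configuration steps in the linked documentation:

-- must return 'logical' for DMS CDC to work
SHOW wal_level;

-- check whether the pglogical extension is available/installed
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name = 'pglogical';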
To enable logical decoding for an Amazon RDS for PostgreSQL DB instance:
The user account requires the rds_superuser role to enable logical replication. The user account also requires the rds_replication role to grant permissions to manage logical slots and to stream data using logical slots.
1. Set the rds.logical_replication static parameter to 1. As part of applying this parameter, we also set the parameters wal_level, max_wal_senders, max_replication_slots, and max_connections. These parameter changes can increase WAL generation, so you should only set the rds.logical_replication parameter when you are using logical slots.
2. Reboot the DB instance for the static rds.logical_replication parameter to take effect.
3. Create a logical replication slot as explained in the next section. This process requires that you specify a decoding plugin. Currently we support the test_decoding output plugin that ships with PostgreSQL.
The last item can be done with the following command:
SELECT * FROM pg_create_logical_replication_slot('test_slot', 'test_decoding');
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.FeatureSupport.LogicalReplication
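To confirm the setup afterwards, here is a minimal verification sketch (dms_user is just a placeholder for your DMS endpoint user):

-- should return 'logical' once rds.logical_replication is set and the instance rebooted
SHOW wal_level;

-- grant the replication role to the DMS user
GRANT rds_replication TO dms_user;

-- verify the slot created above exists
SELECT slot_name, plugin, active FROM pg_replication_slots;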
I've installed a single-node Hadoop cluster on an EC2 instance. I then stored some test data on HDFS and I'm trying to load the HDFS data into SAP Vora. I'm using SAP Vora 2.0 for this project.
To create the table and load the data into Vora, this is the query I'm running:
drop table if exists dims;
CREATE TABLE dims(teamid int, team string)
USING com.sap.spark.engines.relational
OPTIONS (
hdfsnamenode "namenode.example.com:50070",
files "/path/to/file.csv",
storagebackend "hdfs");
When I run the above query, I get this error message:
com.sap.vora.jdbc.VoraException: HL(9): Runtime error.
(could not handle api call, failure reason : execution of scheduler plan failed:
found error: :-1, CException, Code: 10021 : Runtime category : an std::exception wrapped.
Next level: v2 HDFS Plugin: Exception at opening
hdfs://namenode.example.com:50070/path/to/file.csv:
HdfsRpcException: Failed to invoke RPC call "getFsStats" on server
"namenode.example.com:50070" for node id 20
with error code 0, status ERROR_STATUS
Hadoop and Vora are running on different nodes.
You should specify the HDFS NameNode RPC port, which is typically 8020; 50070 is the port of the NameNode web UI. See e.g. the question "Default Namenode port of HDFS is 50070. But I have come across at some places 8020 or 9000".
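Assuming the default RPC port of 8020, the CREATE TABLE statement from the question would then look like this (check fs.defaultFS in your core-site.xml for the actual port):

drop table if exists dims;
CREATE TABLE dims(teamid int, team string)
USING com.sap.spark.engines.relational
OPTIONS (
hdfsnamenode "namenode.example.com:8020",
files "/path/to/file.csv",
storagebackend "hdfs");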
I am getting an error while creating a table in Zeppelin: "Could not load table TEST_TABLE_1 : There are no Velocity servers up."
All services in Ambari are in green status. I tried restarting servers, instances, and all services, but the error still exists.
Please comment.
Please verify that the Vora servers are running on your Spark worker nodes:
ps -efa | grep v2server
Could you provide the command you are executing and a bit more detail about the error message (e.g. 1-2 lines before and 1-2 lines after)?
Where are you running your cluster (AWS, datacenter, laptop,...)?