SYNC with master failed: -ERR unknown command 'SYNC' - amazon-web-services

I was trying to dump my Redis data hosted on AWS. I can log in to interactive mode via redis-cli, but when I tried dumping the data to an RDB file I received the error shown in the title:
user@awshost:~/TaoRedisExtract$ redis-cli -h myawsredis.amazonaws.com --rdb redis.dump.rdb
SYNC with master failed: -ERR unknown command 'SYNC'
I'm not sure if this is a bug, a configuration issue, or known/expected behavior for AWS Redis. I've searched and found no other reports of users getting this error message.

According to a reply to a similar question on the AWS forum, SYNC has been disabled since Redis 2.8.22:
"To maintain enhanced replication performance in Multi-AZ replication groups and for increased cluster stability, non-ElastiCache replicas are no longer supported"

Related

AWS Spark images not supporting HDFS

I am currently working on running our daily ETLs on EKS instead of EMR. However, I see that a few of the jobs are failing (the ones that use HDFS), and the official Docker image provided by AWS doesn't appear to support HDFS.
I am using 895885662937.dkr.ecr.us-west-2.amazonaws.com/spark/emr-5.32.0 image.
bash-4.2$ hdfs
/usr/bin/hdfs: line 8: /usr/lib/hadoop-hdfs/bin/hdfs: No such file or directory
These are the logs from the failed job:
Jobrun failed. Main Spark container terminated with errors. Last error seen in logs -
Caused by: java.lang.Exception: java.io.IOException: Incomplete HDFS URI, no host: hdfs:/bucket-name/process-store/checkpoints/086461cd-df44-4ab4-a2ee-da2c5671f9b4
Caused by: java.io.IOException: Incomplete HDFS URI, no host: hdfs:/bucket-name/process-store/checkpoints/086461cd-df44-4ab4-a2ee-da2c5671f9b4
Please refer logs uploaded to S3/CloudWatch based on your monitoring configuration
Not sure how to proceed from here.
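Given that the image ships without HDFS, one way forward is to point the checkpoint location at S3 instead of the hdfs:/ path in the error. A sketch, assuming the data was meant to live in that same bucket and the remaining job arguments stay unchanged:

# Checkpoint to S3 rather than HDFS (bucket and path are placeholders)
spark-submit \
    --conf spark.sql.streaming.checkpointLocation=s3://bucket-name/process-store/checkpoints/ \
    ...rest of the job arguments...

Alternatively, if HDFS really is required, note that the URI in the error is also malformed: hdfs:/bucket-name has no host, so a working HDFS URI would need a namenode host, e.g. hdfs://namenode-host:8020/path.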

DMS task getting failed on Oracle on-going replication (Full load works fine)

We're using AWS DMS to migrate Oracle databases into S3 buckets. After successfully running the full load on an Oracle Database 19c Standard Edition 2 instance hosted in RDS, the ongoing replication is failing with the error:
Failed to add the REDO sequence xxxx; to LogMiner in thread 1;. Replication task could not find the required REDO log on the source database to read changes from. Please check redo log retention settings and retry
I already checked that the archive log retention was set to 24 hours.
Has anyone come across the same issue? Any help would be much appreciated.
We managed to fix the issue after rerunning the grants script as documented for AWS DMS. We could not find the root cause, but some privilege was not assigned at first, which blocked access to the redo logs: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.Amazon-Managed
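For reference, a representative excerpt of the Amazon-managed (RDS) Oracle grants from that page, assuming a DMS user named dms_user; the full script in the linked doc covers more objects:

-- Basic session and dictionary access for the DMS user
GRANT CREATE SESSION TO dms_user;
GRANT SELECT ANY TRANSACTION TO dms_user;

-- RDS exposes SYS objects through rdsadmin; LogMiner needs these views
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$ARCHIVED_LOG', 'DMS_USER', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOG', 'DMS_USER', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGFILE', 'DMS_USER', 'SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_LOGMNR', 'DMS_USER', 'EXECUTE');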

AWS DMS task error for Aurora PostgreSQL migration

I am trying to migrate all the data in my old RDS Aurora PostgreSQL cluster to a new RDS Aurora PostgreSQL cluster using AWS DMS. I have created the source and target endpoints and tested the connections successfully. However, when I try to create a migration task in DMS, it continuously fails with the error:
Last Error: ODBC general error. Error executing command; Stream component failed at subtask 0, component st_0_PWDKKAMFPUY2RHV; Stream component 'st_0_PWDKKAMFPUY2RHV' terminated [reptask/replicationtask.c:3171] [1022502]
Stop Reason: RECOVERABLE_ERROR Error Level: RECOVERABLE
Even after enabling CloudWatch logs, I am not able to figure out what's missing. What does the error signify, and what am I doing wrong?
I had faced the same error, and the issue seems related to the database user's rights for Replication Client and Replication Slave.
I fixed it by granting the replication rights with the SQL statements below:
GRANT REPLICATION CLIENT ON *.* TO {dbusername}@'%';
GRANT REPLICATION SLAVE ON *.* TO {dbusername}@'%';
Note: replace {dbusername} with the actual database user name used in the DMS endpoint.
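To confirm the grants took effect, a quick check with the same placeholder:

SHOW GRANTS FOR '{dbusername}'@'%';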

AWS ECS task fails to start because the daemon can't create a log stream

I have 2 versions of a service that run in the same cluster, using the awslogs driver.
The v2 service logs fine; however, the v1 task fails to start because it can't create a log stream.
The setup is identical between services except for the container being used.
The log group exists, and the role has permissions to create a log stream and put log events; this is pretty much the same setup as v2, just in a different group.
CannotStartContainerError: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: RequestError: send request failed caused by: Post https://logs.eu-west-1-v1.amazonaws.com/: dial tcp: lookup logs.eu-west-1
I've setup a new service and tried to spin it up again but it failed so I thought that this was to do with the container setup.
The official documentation recommends adding this to the ECS agent environment variables (typically in /etc/ecs/ecs.config):
ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]
After adding this, it still failed. I've been searching for a while and would appreciate any help or guidance.
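One thing worth checking, going by the endpoint in the error: logs.eu-west-1-v1.amazonaws.com is not a real CloudWatch Logs endpoint, which suggests the awslogs-region option in the v1 task definition may have picked up the version suffix. A sketch of what the logConfiguration block would look like, with placeholder group and prefix names:

"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "my-service-v1",
        "awslogs-region": "eu-west-1",
        "awslogs-stream-prefix": "v1"
    }
}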

Google Cloud SQL Read Replica Failing to replicate

I have created a new read replica from the GCP Cloud SQL Console using the create read replica option.
I am getting the following error after creating the replica: the replica instance is created successfully, but replication does not start as expected.
Here is the error message from the error log:
"2020-05-05T05:11:30.747872Z 4 [ERROR] Slave I/O for channel '': error
connecting to master 'cloudsqlreplica#172.17.112.4:3306' - retry-time:
60 retries: 1, Error_code: 2003"
binlog is already enabled on the master.
Database version is MySQL 5.7
Auto storage increase is enabled
Automated backups are enabled
Point-in-time recovery is enabled
Please let me know if you have come across this issue and know how to solve it.
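Error_code 2003 is MySQL's generic "can't connect to server" error, so this usually points to connectivity between the replica and the master rather than to the binlog or backup settings listed above. A few checks with gcloud, assuming master-instance and replica-instance are placeholder names:

# Inspect both instances; the master's replicaNames list should include the replica
gcloud sql instances describe master-instance
gcloud sql instances describe replica-instance

# Restarting the replica sometimes re-establishes the slave I/O connection
gcloud sql instances restart replica-instance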