Changes on primary are not getting replicated to standby in SAP ASE

Two SAP ASE servers are configured to replicate testdb through SAP Replication Server using a logical connection as a warm standby. In my case, all the threads and Rep Agents are running fine, but data changes on PDS.testdb are not getting replicated to RDS.testdb.
On Replication Server:
1> admin logical_status
2> go
Logical Connection Name:           [102] manvith6605t1.testdb
Active Connection Name:            [103] MANVITH6605T1.testdb
Active Conn State:                 Active/
Standby Connection Name:           [104] APMW2K8R2641.testdb
Standby Conn State:                Active/Waiting for Enable Marker
Controller RS:                     [16777317] SAMPLE_RS
Operation in Progress:             None
State of Operation in Progress:    None
Spid:
1> admin who_is_down
2> go
Spid Name State Info
---- --------------- ---------- ----------
On primary: deleted some rows of data.
1> select count(*) from mytable
2> go
-----------
24
(1 row affected)
On standby:
1> select count(*) from mytable
2> go
-----------
64
(1 row affected)
Feel free to ask for any clarifications.

Is this a new warm standby setup?
Have you been able to successfully replicate into the standby database in the past with this warm standby setup?
Did you by any chance run a switch active command recently?
Assuming this is a new setup, and switch active has not been run, my guess is that the problem lies in how the standby connection was added to the setup ...
If the database connections have been added successfully and SRS thinks it should be replicating, then admin logical_status should show both connections with a status of Active/, but that isn't the case here ... the standby connection is showing Active/Waiting for Enable Marker.
I'm guessing that when you added the standby connection you opted to initialize the standby database via a dump-n-load operation. If you created the standby connection via a resource file you probably had something like rs.rs_init_by_dump: yes in the resource file; if you ran rs_init from the command line there would've been a related question that you probably said yes to.
When you tell rs_init that the standby database will be initialized via a dump-n-load operation, the standard scenario looks like this (a rough command sketch follows the steps):
standby connection is created
standby DSI is configured to discard all incoming transactions until it sees a dump marker
admin logical_status should show the standby connection with a status of Active/Waiting for Enable Marker
operator performs a dump database in the active database (or dump transaction if this is a largish db and a db dump has already been dumped and loaded into the standby database)
the dump {database|transaction} command places a dump marker in the active database's transaction log
the repagent forwards this dump marker to SRS
SRS forwards the dump marker to the DSI
the DSI, upon receiving the dump marker, will suspend the connection into the standby database and start saving incoming transactions
operator loads the database (or transaction log) dump file into the standby database
operator issues the online database command against the standby database
operator resumes the DSI connection into the standby database
admin logical_status should show the standby connection with a status of Active/
the DSI starts applying transactions against the standby database
NOTE: I don't recall, off the top of my head, if the standby connection's status changes to Active/ a) upon receiving the dump marker (and shutting down the DSI) or b) upon resuming the standby DSI connection.
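For reference, here's a rough sketch of the commands involved; the server and database names are taken from your logical_status output, and the dump file path is just a placeholder:
On the active ASE (MANVITH6605T1):
1> dump database testdb to '/backups/testdb.dmp'
2> go
On the standby ASE (APMW2K8R2641):
1> load database testdb from '/backups/testdb.dmp'
2> go
1> online database testdb
2> go
On the Replication Server (SAMPLE_RS):
1> resume connection to APMW2K8R2641.testdb
2> go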
Your admin logical_status is showing us that the DSI is still waiting for a dump marker.
If this is a production database you'll likely need to perform a new database dump ... at which point the standby DSI should go down ... and then you would need to load the database dump into the standby database, online said database, then resume the DSI connection into the standby database. This is the only way to ensure your standby database will be brought online with no missing transactions (ie, the current setup is discarding transactions).
If this is a development/test environment and/or you don't care about the possibility of the active and standby databases being out of sync, you should be able to do the following (see the command sketch after this list):
suspend the DSI into the standby database
resume the DSI into the standby database
verify admin logical_status now shows a status of Active/ for the standby database and if so ... and assuming no other issues ...
newly performed transactions in the active database should start replicating
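In Replication Server terms, that would be something like the following (using the standby connection name from your logical_status output):
1> suspend connection to APMW2K8R2641.testdb
2> go
1> resume connection to APMW2K8R2641.testdb
2> go
1> admin logical_status
2> go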
NOTE: Your previous DELETE performed against the active database has probably been discarded by now so you'll probably want to start by manually running the same DELETE against the standby in order to get the table in sync (assuming that's the only difference in the two tables, eg, you haven't run any UPDATEs in the active database since turning on replication).

Related

What should I do if I cannot connect because the Cloud SQL storage is full?

The storage capacity of the Cloud SQL DB is set to a maximum of 64 TB.
About 58 TB of storage is currently in use.
However, the following error appears in the log, and connections cannot be established:
"2022-01-06 02:50:39.648 UTC [34]: [2-1] db=,user= FATAL: could not write to file "pg_wal/xlogtemp.34": No space left on device"
It seems the instance has run out of capacity because the storage is full.
The point-in-time recovery function seems to be unavailable because the DB is not running. For now, a new DB has been created and the backup data from a few days ago is being restored into it.
Is there any faster way to recover?

Database Migration Task fails to load the data into the target database

I have created a PostgreSQL (target) RDS instance on AWS, did the schema conversion using SCT, and now I am trying to move data using a Database Migration task from a DB2 database on an EC2 instance (source) to the target DB. The data is not loading, and the task is giving the following error:
Last Error ODBC general error. Task error notification received from subtask 1, thread 0 [reptask/replicationtask.c:2800] [1022502] Error executing source loop; Stream component failed at subtask 1, component st_1_5D3OUPDVTS3BLNMSQGEXI7ARKY ; Stream component 'st_1_5D3OUPDVTS3BLNMSQGEXI7ARKY' terminated [reptask/replicationtask.c:2807] [1022502] Stop Reason RECOVERABLE_ERROR Error Level RECOVERABLE
I was getting the same error and the issue was related to database user rights for REPLICATION CLIENT and REPLICATION SLAVE as mentioned in AWS Documentation:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.Prerequisites
I resolved it by setting the above-mentioned REPLICATION rights using the following statements in MySQL (replacing {dbusername} with the actual database user name used in the DMS endpoint):
GRANT REPLICATION CLIENT ON *.* TO {dbusername}@'%';
GRANT REPLICATION SLAVE ON *.* TO {dbusername}@'%';
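If in doubt, you can double-check that the grants took effect with the same placeholder user:
SHOW GRANTS FOR {dbusername}@'%';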

SQLite with in-memory and isolation

I want to create an in-memory SQLite DB. I would like to make two connections to this in-memory DB, one to make modifications and the other to read the DB. The modifier connection would open a transaction and continue to make modifications to the DB until a specific event occurs, at which point it would commit the transaction. The other connection would run SELECT queries reading the DB. I do not want the changes that are being made by the modifier connection to be visible to the reader connection until the modifier has committed (the specified event has occurred). I would like the reader's connection to be isolated from the writer's connection.
I am writing my application in C++. I have tried opening two connections like the following:
int rc1 = sqlite3_open_v2("file:db1?mode=memory", pModifyDb, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_URI, NULL);
int rc2 = sqlite3_open_v2("file:db1?mode=memory", pReaderDb, SQLITE_OPEN_READONLY | SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_URI, NULL);
I have created a table, added some rows and committed the transaction to the DB using 'pModifyDb'. When I try to retrieve the values using the second connection 'pReaderDb' by calling sqlite3_exec(), I receive a return code of 1 (SQLITE_ERROR).
I've tried specifying the URI as "file:db1?mode=memory&cache=shared". I am not sure whether the 'cache=shared' option would still preserve isolation. But that did not work either: when the reader connection tried to exec a SELECT query, the return code was 6 (SQLITE_LOCKED). Maybe because the shared-cache option unified both connections under the hood?
If I remove the in-memory requirement from the URI, by using "file:db1" instead, everything works fine. I do not want to use a file-based DB as I require high throughput, and the size of the DB won't be very large (~10 MB).
So I would like to know how to set up two isolated connections to a single SQLite in-memory DB?
Thanks in advance,
kris
This is not possible with an in-memory DB.
You have to use a database file.
To speed it up, put it on a RAM disk (if possible), and disable synchronous writes (PRAGMA synchronous=off) in every connection.
To allow a reader and a writer at the same time, you have to put the DB file into WAL mode.
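A minimal sketch of those settings, run against each connection after opening the file-based DB (the WAL setting persists in the database file; synchronous is per-connection):
PRAGMA journal_mode=WAL;
PRAGMA synchronous=off;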
This seems to be possible since version 3.7.13 (2012-06-11):
Enabling shared-cache for an in-memory database allows two or more database connections in the same process to have access to the same in-memory database. An in-memory database in shared cache is automatically deleted and memory is reclaimed when the last connection to that database closes.
Docs
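For example, a minimal sketch of the shared-cache variant, reusing the handles from the question (note that with a shared cache the reader can still get SQLITE_LOCKED while the writer's transaction is open, unless you relax isolation with PRAGMA read_uncommitted):
// both connections name the same shared-cache in-memory database
int rc1 = sqlite3_open_v2("file:db1?mode=memory&cache=shared", pModifyDb,
                          SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_URI, NULL);
int rc2 = sqlite3_open_v2("file:db1?mode=memory&cache=shared", pReaderDb,
                          SQLITE_OPEN_READONLY | SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_URI, NULL);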

Django: 2 connections to default DB?

In a long-running management command I'd like to have two connections to the same DB. One connection will hold a transaction to lock a certain row (select for update) and the other connection will record some processing info. If the process crashes, a new run of the management command can use the processing info to skip/simplify some processing steps, hence the need to record it in a different connection.
How do I go about creating a 2nd connection in the same thread? My first thought was to add a default2 entry to DATABASES with the same connection info as default and use .using("default2") in one of the queries, but I wasn't sure whether that would cause issues in Django (a rough sketch of that idea follows).
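For what it's worth, a minimal sketch of that idea; MyModel, ProgressLog, and some_pk are hypothetical, and as far as I know Django treats each alias as an independent connection, so writes on default2 commit independently of the transaction held open on default:
# settings.py
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        # host, user, password, ...
    },
}
DATABASES["default2"] = dict(DATABASES["default"])  # second alias, same connection info

# inside the management command
from django.db import transaction

with transaction.atomic(using="default"):
    # lock the row on the first connection
    row = MyModel.objects.using("default").select_for_update().get(pk=some_pk)
    # record progress on the second connection; committed immediately (autocommit)
    ProgressLog.objects.using("default2").create(step="locked row", row_id=row.pk)
    # ... long-running processing ...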

How to implement Resume from Last Checkpoint for PowerExchange CDC sessions?

We have 37 sessions, each containing between 20 and 25 tables; our target DB is Greenplum. Because huge queries run on the DB at different times, a few segment nodes go down, so some of our CDC sessions fail.
So we are planning to enable Resume from Last Checkpoint for the CDC sessions. Do we need to check "Enable HA recovery" at the workflow level to use Resume from Last Checkpoint for sessions?
The workflows should be started in warm mode; that way, resuming from the last point is handled automatically.
Please note that if you started the PowerExchange Logger in cold mode, you need to start the Logger in cold mode as well.
A Restart folder is created at $INFA_HOME/server/infa_shared/ which stores the init and term tokens the workflow uses to resume from the last failure.