We would like to back up a SQL Server cluster at the DC site to another standalone SQL Server at the DR site. We would like to use SymmetricDS and we want all DB objects from the source to be mirrored to the DR (including new tables, triggers and stored procedures). Some tables do not have primary keys.
We would like to know the type of architecture best suited to our needs.
The SymmetricDS configuration would be two nodes that sync with each other. You could use a single node group and link it to itself, so that "primary pushes to primary". With bi-directional replication you can use your mirror database when needed, and it will capture changes to bring the other node back in sync once it becomes available.
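As a rough sketch of that single-node-group configuration (assuming SymmetricDS 3.x; the group, router and trigger IDs below are invented names, and the wildcard table trigger would need refining for a real schema):

insert into sym_node_group (node_group_id) values ('primary');
insert into sym_node_group_link (source_node_group_id, target_node_group_id, data_event_action)
    values ('primary', 'primary', 'P');  -- 'P' = push
insert into sym_router (router_id, source_node_group_id, target_node_group_id, create_time, last_update_time)
    values ('primary_2_primary', 'primary', 'primary', current_timestamp, current_timestamp);
insert into sym_trigger (trigger_id, source_table_name, channel_id, create_time, last_update_time)
    values ('all_tables', '*', 'default', current_timestamp, current_timestamp);
insert into sym_trigger_router (trigger_id, router_id, initial_load_order, create_time, last_update_time)
    values ('all_tables', 'primary_2_primary', 1, current_timestamp, current_timestamp);

Both nodes would then register against the same 'primary' group, and the self-referencing link lets changes flow in both directions.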
SymmetricDS will replicate tables and data, but it does not replicate triggers and stored procedures. Also, the table replication works for most common cases, but misses details like computed columns and defaults that call functions.
Is there anything (system procedure, function, or other) in SQL Server that provides the functionality of Oracle's DBMS_ALERT package (and DBMS_PIPE, respectively)?
I work in a plant and I'm using an extension product of SQL Server called InSQL Server by Wonderware, which specializes in gathering data from plant controllers and Human-Machine Interface (SCADA) software.
This system can record events happening in the plant (like a high-temperature alarm, for example). It stores sensor values in extension tables of SQL Server, and other less dense information in normal SQL Server tables.
I want to be able to alert some applications running on operator PCs that an event has been recorded in the database.
An AFTER INSERT trigger on the events table seems to be a good place to put something equivalent to DBMS_ALERT (if it exists), to wake up other applications that are waiting for the specific alert and have the operators type in some data.
In other words - I want to be able to notify other processes (that have connection to SQL Server) that something has happened in the database.
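A minimal sketch of that trigger idea, assuming a hypothetical Events table with an EventId key and a PendingAlerts table that the operator applications poll (all names here are invented):

CREATE TABLE dbo.PendingAlerts (
    AlertId   int IDENTITY(1,1) PRIMARY KEY,
    EventId   int NOT NULL,
    CreatedAt datetime NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER dbo.trg_Events_Alert
ON dbo.Events
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- queue one alert row per inserted event; clients poll this table and delete what they process
    INSERT INTO dbo.PendingAlerts (EventId)
    SELECT EventId FROM inserted;
END;
GO

This is still polling rather than a true DBMS_ALERT-style wake-up; Query Notifications or Service Broker are the closer SQL Server equivalents if a push is required.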
All Wonderware Historian (formerly InSQL, now AVEVA) data is stored in the history blocks EXCEPT for the actual tag storage configuration and dedicated event data. The time-series data for analogs, discretes and strings is NOT in SQL tables at all, unless someone has done custom configuration to create tables of their own.
Where do you want these notifications to show up? Even though the historical data is NOT stored in SQL tables, Wonderware has extensive documentation on how to use SQL queries to retrieve the data appropriately (check for whatever condition you are looking for).
You can easily build a stored procedure and configure it for a maintenance plan.
But are you just trying to alarm (provide notification) on the scada itself?
Or are you truly utilizing historical data (looking for a data trend - average, etc.)?
Or trying to send the notification to non-scada interfaces?
Depending on your specific answer, the scada itself should probably be able to do it.
But there is software that already does this type of thing: Win-911, SeQent, and Scadatec are a couple in the OT space, and there are also things like HipLink or even DeskAlert, which can connect to any SQL database via its own API.
So where does the info need to go (email, text, phone, desktop app...), and what is the real source of the data?
We are considering using AWS Neptune as a graph DB solution.
I am coming from the Django world, so I am used to using DB migrations a lot.
I could not find any info about how AWS Neptune handles change management on the DB.
I.e., what happens if I want to reload a backup from a month ago and there have been schema changes since then? How do we track these changes?
Should we write custom scripts?
Unlike an RDBMS and some other data stores, Amazon Neptune, like many other graph DBs for that matter, is "schemaless", meaning there is no need to explicitly define or maintain a schema. The schema is implicitly defined by the data stored in the database. In the case you mentioned, restoring a backup, there is no need for a migration/change script to be run. When you restore the backup, the schema will be defined by the restored data.
This "schemaless" nature of the database allows applications to begin adding new entity types and data properties without any sort of ETL process. However, it also means that the application needs to manage some sort of schema internally to maintain sanity over the data being stored (e.g. first_name and firstName could both be used and would be separate properties).
I have an application that requires me to pull certain information from DB#1 and push it to DB#2 every time a certain entry in a table from DB#1 is updated. The polling rate doesn't need to be extremely fast, but it probably shouldn't be any slower than 1 second.
I was planning on writing a small service using the C++ Connector library, but I am worried about putting too much load on DB#1. Is there a more efficient way of doing this, such as built-in functionality within an SQL script?
There are many ways to accomplish this, so other factors you care about may end up driving the approach.
If the SQL Server databases are on the same server instance:
Triggers on the DB1 tables that push changes to the DB2 tables
A stored procedure (in DB1 or DB2) that uses MERGE to identify changes and sync them to DB2, with a SQL job that calls the procedure on your schedule
Enable Change Tracking on the database and the desired tables, then use a stored proc + SQL job to send changes without any queries on the source tables (see the sketch below)
If on different instances or servers (can also work if on same instance though):
SSIS package to identify changes and push to DB2 (bonus: it can work with Change Data Capture)
Merge Replication to synchronize changes
AlwaysOn Availability Groups to synchronize entire dbs
Microsoft Sync Framework
Knowing nothing about your preferences or comfort levels, I would probably start with Merge Replication - it can be a bit tricky and tedious to set up, but it performs very well.
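As a rough sketch of the Change Tracking option from the first list (assuming both databases are on the same instance; the table and column names are invented, and the last synced version would normally be persisted in a small control table rather than hard-coded):

-- one-time setup on DB1
ALTER DATABASE DB1 SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
USE DB1;
GO
ALTER TABLE dbo.SourceTable ENABLE CHANGE_TRACKING;
GO
-- run from a SQL Agent job on your schedule
DECLARE @last_version bigint = 0;  -- load the previously synced version here
MERGE DB2.dbo.TargetTable AS t
USING (SELECT ct.Id, s.Payload, ct.SYS_CHANGE_OPERATION
       FROM CHANGETABLE(CHANGES dbo.SourceTable, @last_version) AS ct
       LEFT JOIN dbo.SourceTable AS s ON s.Id = ct.Id) AS src
ON t.Id = src.Id
WHEN MATCHED AND src.SYS_CHANGE_OPERATION = 'D' THEN DELETE
WHEN MATCHED THEN UPDATE SET t.Payload = src.Payload
WHEN NOT MATCHED AND src.SYS_CHANGE_OPERATION <> 'D' THEN
    INSERT (Id, Payload) VALUES (src.Id, src.Payload);
SELECT CHANGE_TRACKING_CURRENT_VERSION() AS new_last_version;  -- persist this for the next run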
You can create a trigger in DB1 and a database link (in SQL Server terms, a linked server) between DB1 and DB2. That way the trigger is invoked natively within DB1 and transfers the data directly to DB2.
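A minimal sketch of that trigger approach, assuming both databases live on the same instance so a three-part name works (with a linked server you would use a four-part name instead; all table and column names are invented, and this only covers inserts):

USE DB1;
GO
CREATE TRIGGER dbo.trg_SourceTable_Push
ON dbo.SourceTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- copy the newly inserted rows straight into DB2
    INSERT INTO DB2.dbo.TargetTable (Id, Payload)
    SELECT Id, Payload FROM inserted;
END;
GO

Note that the cross-database insert runs inside the original transaction, so any failure or slowness on the DB2 side is felt directly by writers in DB1.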
We have a PostgreSQL server running in production and plenty of workstations with isolated development environments. Each one has its own local PostgreSQL server (with no replication with the production server). Developers need to receive updates stored in the production server periodically.
I am trying to figure out how to dump the contents of several selected tables from the server in order to update the tables on the development workstations. The biggest challenge is that the tables I'm trying to synchronize may have diverged (developers may add - but not delete - new fields to the tables through the Django ORM, while the schema of the production database remains unchanged for a long time).
Therefore the updated records and new fields of the tables stored on the workstations must be preserved from being overwritten.
I guess that direct dumps (e.g. pg_dump -U remote_user -h remote_server -t table_to_copy source_db | psql target_db) are not suitable here.
UPD: If possible, I would also like to avoid the use of a third (intermediate) database while transferring the data from the production database to the workstations.
I would recommend the following approach.
I'll outline an example based on a single table, customer.
We want to copy some entries of this table from production. Obviously, a full table dump would break the new stuff that exists in the development envs;
Therefore, create a table with a similar structure, but a different name, say customer_$. Another way is to create a dedicated schema for such “copying” tables. You might also want to include a couple of extra columns there, like copy_id and/or copy_stamp;
Now you can INSERT INTO customer_$ SELECT ... to populate your copying table with the wanted data (see the sketch after these steps). You might need to think about how to do this, though. In the tool we use here we can supply predicate data via the -w switch, like -w "customer_id IN (SELECT id FROM cust2copy)";
After you've populated your copying table(s), you can dump them. Make sure to use the following switches with pg_dump:
--column-inserts to explicitly list target columns, since on the development env the copying table might have a changed structure. This might be “slow” for big volumes though;
--table / -t to specify tables to dump.
On the target env, make sure to (1) empty copying tables and (2) prevent parallel activities of similar nature;
Load data into the copying tables;
Now comes the most interesting part: you need to check that the data you're about to INSERT into the main tables will not conflict with any of the constraints defined on the tables. You might have:
PRIMARY KEY violations. You can (1) replace existing entries or (2) merge entries together or (3) skip entries from the copying tables or (4) choose to assign different ID in the copying tables;
UNIQUE KEY violations, most likely you'll have to UPDATE some columns in the copying tables;
FOREIGN KEY violations, you'll have either to give up on such entries, or to copy over missing stuff from the production as well;
CHECK violations, you'll have to investigate these manually.
After checks are done and data in the copying tables is fixed, you can copy it into the main tables.
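Putting the main steps together, a minimal sketch for the customer example (the columns id and name are hypothetical, cust2copy comes from the predicate example above, and ON CONFLICT needs PostgreSQL 9.5+):

-- on production: a copying table with the same structure plus a stamp column
CREATE TABLE customer_$ (LIKE customer INCLUDING DEFAULTS);
ALTER TABLE customer_$ ADD COLUMN copy_stamp timestamptz NOT NULL DEFAULT now();

-- populate it with only the wanted rows
INSERT INTO customer_$ (id, name)
SELECT id, name
FROM customer
WHERE id IN (SELECT id FROM cust2copy);

-- dump it with: pg_dump --column-inserts -t 'customer_$' source_db, then load the dump on the workstation

-- on the workstation: resolve PRIMARY KEY conflicts, here by skipping rows that already exist
INSERT INTO customer (id, name)
SELECT id, name
FROM customer_$
ON CONFLICT (id) DO NOTHING;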
This is a very formal description of the approach. Say, for step #7 we have a huge pile of extra tools to do ID or ID range remapping, to manipulate data in the copying tables, adjust security settings, ownership, some defaults, etc.
Also, we have a so-called catalogue for this tool, which allows us to group logically tied tables under common names. Say, to copy customers from production we have to check around 50 tables in order to satisfy all possible dependencies.
I haven't seen similar tools in the wild though so far.
I work with a SQL Server database through ODBC and C++. I want to detect modifications in some tables of the database: another application inserts or updates rows, and I have to detect all these modifications. It does not have to be an immediate trigger; it is acceptable to use polling to periodically check database tables for modifications.
Below is the way I think this can be done, and I need your opinion on whether this is the standard/right way of doing it, or whether better approaches exist.
What I've thought of is this: I add triggers in SQL Server which, on any modification, will insert the identifiers of modified/added rows into a special table, which I will check periodically from my application. Suppose there are 3 tables: Customers, Products, Services. I will make three additional tables: Change_Customers, Change_Products, Change_Services, and will insert the identifiers of modified rows of the respective tables into them. Then I will read these Change_* tables from my application periodically and delete the processed records.
Now, if you agree that the above solution is right, I have another question: is it better to have separate Change_* tables for each of the tables I wish to monitor, or is it better to have one fat Changes table which will contain the changes from all tables?
Query Notifications is the technology designed to do exactly what you're describing. You can leverage Query Notifications from managed clients via the well known SqlDependency class, but there are native Ole DB and ODBC ways too. See Working with Query Notifications, the paragraphs about SSPROP_QP_NOTIFICATION_MSGTEXT (OleDB) and SQL_SOPT_SS_QUERYNOTIFICATION_MSGTEXT (ODBC). See The Mysterious Notification for an explanation how Query Notifications work.
This is the only polling-free solution that works with any kind of update. Triggers and polling for changes have severe scalability and performance issues. Change Data Capture and Change Tracking really cover a different topic (synchronizing datasets for occasionally connected devices, e.g. Sync Framework).
Change Data Capture (CDC) -- http://msdn.microsoft.com/en-us/library/cc645937.aspx
First you will need to enable CDC in the database:
USE db_name
GO
EXEC sys.sp_cdc_enable_db
GO
Then enable CDC on the table:
EXEC sys.sp_cdc_enable_table
Then you can query the changes.
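For example, a rough sketch assuming a source table dbo.Customers (so the default capture instance name is dbo_Customers):

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Customers',
     @role_name     = NULL;
GO
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_Customers');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();
-- returns inserted/updated/deleted rows together with the __$operation column
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Customers(@from_lsn, @to_lsn, N'all');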
If your version of SQL Server is 2005, you may use Notification Services.
If your SQL Server is 2008+, the most preferable way is to use triggers, log the changes to log tables, and periodically poll these tables from the application to see the changes (a minimal sketch follows).
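A minimal sketch of that trigger-and-log-table pattern for one of the tables from the question (assuming Customers has an integer key CustomerId; everything else here is an invented name):

CREATE TABLE dbo.Change_Customers (
    ChangeId   int IDENTITY(1,1) PRIMARY KEY,
    CustomerId int NOT NULL,
    ChangedAt  datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
CREATE TRIGGER dbo.trg_Customers_Log
ON dbo.Customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- record the key of every affected row; the application polls this table and deletes what it has processed
    INSERT INTO dbo.Change_Customers (CustomerId)
    SELECT CustomerId FROM inserted
    UNION
    SELECT CustomerId FROM deleted;
END;
GO

The application can then periodically run SELECT ... FROM dbo.Change_Customers ORDER BY ChangeId and delete the rows it has handled.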