Replay events from Akka.net Persistence Journal

I'm implementing a CQRS/ES solution with Akka.Net and Akka.Net.Persistence with a SQL Server Journal. So far everything seems to work great with the default sql-server plugin.
The last thing to verify was the ability to reload/replay events from a specific aggregate root (AR), e.g. to rebuild a read model or to fill a newly implemented projection for a read model. The way I would go about this is reading the events from the DB and putting them on the event bus or directly into the mailbox of the "projection actor".
I can't seem to find any examples of manually reloading events, and besides querying the Journal table myself (executing a SQL query) and using the built-in serializer, I'm basically stuck with this.
Is there anyone trying to do, more or less, the same thing?

Depending on your needs there are a few ways:
Using PersistentView - it's a dedicated actor, which is correlated with some specific persistent actor, and it's able to receive its events to build some different state from them. It's read-only. Pros: it keeps itself up to date with the events produced by the persistent actor (however some delay between updates applies). Cons: it's tied to a single actor, so it cannot be used to aggregate events from many of them. (A sketch follows below.)
Using journal query (SQL journals only) - it allows you to query the journal using some specific filters. Pros: it can be used across multiple aggregates. Cons: it's not automatically kept up to date; you need to send subsequent queries to get updates. I'm not sure if it has official documentation, but the flow itself is described here.
PS: some of the journal implementations have their own dedicated serializers, but not the SQL-based ones. And believe me, you never want to rely on the default serializer for persisting events.
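For illustration, here is a minimal PersistentView sketch in C#. The class, event type and identifiers are hypothetical, and it assumes the Akka.Persistence API of that generation, where PersistentView exposes PersistenceId, ViewId and IsPersistent:

```csharp
using Akka.Persistence;

// Hypothetical event persisted by the aggregate.
public class ItemAdded
{
    public int Quantity { get; set; }
}

// Read-only projection that follows the event stream of the persistent
// actor whose PersistenceId is "order-1234".
public class OrderReadModelView : PersistentView
{
    private int _totalItems;                                    // the projected state

    public override string PersistenceId => "order-1234";      // actor to follow
    public override string ViewId => "order-1234-read-model";  // identity of this view

    protected override bool Receive(object message)
    {
        // IsPersistent is true when the message was delivered from the journal.
        if (IsPersistent && message is ItemAdded added)
        {
            _totalItems += added.Quantity;                      // update the read model
            return true;
        }
        return false;
    }
}
```

The view could just as well forward the rebuilt state to an event bus or a "projection actor" instead of keeping it locally.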


What is the best practice to write an API for an action that affects multiple tables?

Consider the example use case as below.
You need to invite a Company as your connection. The sub-actions that need to happen in this situation are:
A Company needs to be created by adding an entry to the Company table.
A User account needs to be created for the staff member to log in, by creating an entry in the User table.
A Staff object is created to ensure that the User has access to the Company, by creating an entry in the Staff table.
The invited company is related to the inviting company, so a relation similar to friendship is created to connect the two companies, by creating an entry in the Connection table.
An Invitation object is created to store the information about who invited whom onto the system, along with other details like invitation time, invite message, etc. For this, an entry is created in the Invitation table.
An email needs to be sent to the user to accept invitation and join by setting password.
As you can see, entries are to be made in 5 Tables.
Is it a good practice to do all this in a single API call?
If not, what are the other options?
How do I maintain data integrity if it is to be split into multiple APIs?
If the actions need to be atomic, then it's definitely best to do this in a single API call. Otherwise, you run the risk of someone not completing all the tasks required and leaving the resources in a potentially conflicting state.
That said, you're not updating a single resource, so this isn't a good fit for a single RESTful resource creation call (e.g., POST /companyInvitations) -- as all these other things being created and stitched together might lead to quite a bit of confusion.
If the action you're doing is "inviting a Company", then one option is to use Google's "custom method" syntax (POST /resources/1234:action) as defined in AIP-136. In this case, you might do POST /companies/1234:invite which says "I want to invite Company #1234 to be my connection".
Under the hood, this might atomically upsert (create if resources don't already exist) all the right things that you've listed out.
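As a rough sketch of what that single atomic call could look like under the hood (the method names, table helpers and connection handling below are illustrative assumptions, not a prescribed design), all five inserts can share one database transaction so the invite either fully succeeds or fully rolls back:

```csharp
using Microsoft.Data.SqlClient;

public class CompanyInvitationService
{
    private readonly string _connectionString;

    public CompanyInvitationService(string connectionString) =>
        _connectionString = connectionString;

    // Handles e.g. POST /companies/{inviterId}:invite as a single atomic unit.
    public void InviteCompany(int inviterId, string companyName, string staffEmail)
    {
        using var connection = new SqlConnection(_connectionString);
        connection.Open();
        using var tx = connection.BeginTransaction();
        try
        {
            // Each helper issues its INSERT on the same connection/transaction.
            int companyId = InsertCompany(connection, tx, companyName);
            int userId    = InsertUser(connection, tx, staffEmail);
            InsertStaff(connection, tx, userId, companyId);
            InsertConnection(connection, tx, inviterId, companyId);
            InsertInvitation(connection, tx, inviterId, companyId);

            tx.Commit();                 // all five rows or none
        }
        catch
        {
            tx.Rollback();               // leave no partial invite behind
            throw;
        }

        // The email is sent only after the commit, since it cannot be rolled back.
        SendInvitationEmail(staffEmail);
    }

    // Placeholder helpers, kept empty for brevity.
    private int InsertCompany(SqlConnection c, SqlTransaction t, string name) => 0;
    private int InsertUser(SqlConnection c, SqlTransaction t, string email) => 0;
    private void InsertStaff(SqlConnection c, SqlTransaction t, int userId, int companyId) { }
    private void InsertConnection(SqlConnection c, SqlTransaction t, int inviterId, int companyId) { }
    private void InsertInvitation(SqlConnection c, SqlTransaction t, int inviterId, int companyId) { }
    private void SendInvitationEmail(string email) { }
}
```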
Something to consider when approaching an API call where multiple things happen is how long those downstream actions take. Leaving the API call blocked while things are processing in the background isn't the best idea in the world.
You could consider (depending on your use case) taking in the API request, immediately responding with a 200 status, and dropping the request onto an internal queue for processing. When your background service picks up the request, it can update whatever needs to be updated, manage the transactions appropriately, etc. This also caters for horizontal scaling scenarios where lots of "worker" services can be deployed to process the requests.
As part of this you could consider adding another "status" endpoint where requests can be made to find out how things are going. To avoid lots of polling status requests, you could also take in callback details as part of the original API call, which then get called when the background processing is complete. Or you could do both!
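A minimal sketch of that accept-then-process pattern (the queue, request type and status values are assumptions for illustration; in production the queue would usually be a durable broker rather than an in-memory collection):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Illustrative request dropped onto an internal queue by the API layer.
public record InviteRequest(Guid RequestId, int InviterId, string CompanyName);

public class InviteProcessor
{
    private readonly BlockingCollection<InviteRequest> _queue = new();
    private readonly ConcurrentDictionary<Guid, string> _status = new();

    // Called by the API endpoint: acknowledge immediately, process later.
    public Guid Accept(int inviterId, string companyName)
    {
        var request = new InviteRequest(Guid.NewGuid(), inviterId, companyName);
        _status[request.RequestId] = "Pending";
        _queue.Add(request);
        return request.RequestId;        // returned to the caller with the 200 response
    }

    // Backs the "status" endpoint, e.g. GET /invitations/{id}/status.
    public string GetStatus(Guid requestId) =>
        _status.TryGetValue(requestId, out var s) ? s : "Unknown";

    // Background worker loop; scale out by running more of these.
    public Task RunWorkerAsync() => Task.Run(() =>
    {
        foreach (var request in _queue.GetConsumingEnumerable())
        {
            _status[request.RequestId] = "Processing";
            // ... perform the transactional inserts and send the email here ...
            _status[request.RequestId] = "Completed";
        }
    });
}
```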

ODBC Equivalent of DBMS_ALERT in Oracle

Is there anything (system procedure, function or otherwise) in SQL Server that provides the functionality of the DBMS_ALERT package of Oracle (and DBMS_PIPE, respectively)?
I work in a plant and I'm using an extension product of SQL Server called InSQL Server by Wonderware, which is specialized in gathering data from plant controllers and Human-Machine Interface (SCADA) software.
This system can record events happening in the plant (like a high-temperature alarm, for example). It stores sensor values in extension tables of SQL Server, and other less dense information in normal SQL Server tables.
I want to be able to alert some applications running on operator PCs that an event has been recorded in the database.
An AFTER INSERT trigger on the events table seems to be a good place to put something equivalent to DBMS_ALERT (if it exists), to wake up other applications that are waiting for the specific alert and have the operators type in some data.
In other words - I want to be able to notify other processes (that have connection to SQL Server) that something has happened in the database.
All Wonderware (InSQL, now called AVEVA) Historian data is stored in the history blocks EXCEPT for the actual tag storage configuration and dedicated event data. The time-series data for analog, discrete and string tags is NOT in SQL tables at all - unless someone is doing custom configuration to create tables of their own.
Where do you want these notifications to come up? Even though the historical data is NOT stored in SQL tables, Wonderware has extensive documentation on how to use SQL queries to appropriately retrieve data (to check for whatever condition you are looking for).
You can easily build a stored procedure and configure it for a maintenance plan.
But are you just trying to alarm (provide notification) on the scada itself?
Or are you truly utilizing historical data (looking for a data trend - average, etc.)?
Or trying to send the notification to non-scada interfaces?
Depending on your specific answer, the scada itself should probably be able to do it.
But there is software that already does this type of thing: Win-911, SeQent and Scadatec are a couple of examples in the OT space. There are also things like Hip Link or even DeskAlert, which can connect to any SQL database via its own API.
So where does the info need to go (email, text, phone, desktop app...) and what is the real source of the data?

Beginner Questions about django-viewflow

I am working on my first django-viewflow project, and I have some very basic questions. I have looked at the docs and the cookbook examples.
My question is: which fields go into the "normal" Django models (models.Model) and which fields go into the Process models? For example, I am building a publishing model, so a document that is uploaded starts in a private state, then goes into a pending state after some processing, and then an editor can update the document's state to publish, and the document becomes available through the front-facing web site. I would assume the state field (private, pending, publish) is part of a process model, but what about the other fields related to the document (author, date, source, topic, etc.)? Do they go into the process model or the models.Model model? Does it matter? What are the considerations in building the models and flows for separating data between the two types of models?
Another example - why, in the Hello World example, is the text field in the Process model and not in a models.Model model? This field does not seem to have anything to do with the process, but I am probably not understanding how viewflow works.
Thanks!
Mark
That's your choice. Viewflow is a library and has no restriction on data alignment. The only thing that needs to be done is the link between process_pk and the process data. Hello World is the minimal working sample that demonstrates a workflow.
You can put everything in a separate model and provide an FK to it in the Process model.
But the state field itself is an antipattern, since eventually you can have several tasks executed in parallel. And even a sequential workflow could be constantly changed; new tasks could be added or deleted. You can have just a published Boolean or DateTime field in the Post model and filter on that on the front end.
The general rule could be: keep all human workflow decisions in the Process model, build all data models in a declarative way, and keep the workflow and the actual data separated.

Multiple SqlDependencies firing the same OnChange Event

I have a product catalog with a few hundred categories in it, and I am dynamically creating a SqlDependency for each category in the catalog. The SqlCommands that these dependencies are based on differ only in the categoryID. The problem I have is that I want all these dependencies to perform different actions depending on the SqlDependency that fired them. How can I do that? Do I have to create a different OnChange event handler for each SqlDependency? Is there a way for all these dependencies to fire the same OnChange event, and for that event to know which dependency fired it or to receive a parameter passed in during the dependency's creation?
This problem arose while trying to create a SqlDependency mechanism that will work with AppFabric Cache.
Thank you in advance.
See if you can look into the cache tables that ASP.NET is creating for you and the triggers that are being created on the original tables. Once you see what is going on, you can create the tables and triggers yourself and implement the caching through your own ASP.NET code. It really is not that hard. Then, instead of reacting whenever the table is updated (as you do when you use SqlDependency), you can react when the relevant rows in that table are updated, refresh the relevant cache, or write your own code to perform whatever unique actions you want. You're better off doing it yourself once you learn how!
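Separately from the cache-table approach above, the "receive a parameter" part of the question can also be handled by capturing the category in the lambda attached to OnChange, so a single handler serves every dependency. A minimal sketch (the table, columns and handler are assumptions; note that query notifications require two-part table names and an explicit column list):

```csharp
using System;
using Microsoft.Data.SqlClient;

public class CategoryWatcher
{
    private readonly string _connectionString;

    public CategoryWatcher(string connectionString) => _connectionString = connectionString;

    public void Subscribe(int categoryId)
    {
        using var connection = new SqlConnection(_connectionString);
        connection.Open();

        // Query notifications need a two-part table name and no SELECT *.
        var command = new SqlCommand(
            "SELECT ProductId, Name FROM dbo.Products WHERE CategoryId = @categoryId",
            connection);
        command.Parameters.AddWithValue("@categoryId", categoryId);

        var dependency = new SqlDependency(command);

        // categoryId is captured by the closure, so the shared handler
        // knows exactly which category's dependency fired.
        dependency.OnChange += (sender, args) => OnCategoryChanged(categoryId, args);

        // The command must be executed for the subscription to become active.
        using var reader = command.ExecuteReader();
    }

    private void OnCategoryChanged(int categoryId, SqlNotificationEventArgs args)
    {
        // Notifications are one-shot: refresh the cache for this category
        // and call Subscribe(categoryId) again to keep listening.
        Console.WriteLine($"Category {categoryId} changed ({args.Info}).");
    }
}

// Required once at application startup before any subscriptions:
// SqlDependency.Start(connectionString);
```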

How to monitor database updates from application?

I work with a SQL Server database from C++ via ODBC. I want to detect modifications in some tables of the database: another application inserts or updates rows, and I have to detect all these modifications. It does not have to be an immediate trigger; it is acceptable to use polling to periodically check the database tables for modifications.
Below is the way I think this can be done; I need your opinions on whether this is the standard/right way of doing it, or whether better approaches exist.
What I've thought of is this: I add triggers in SQL Server which, on any modification, will insert the identifiers of modified/added rows into a special table, which I will check periodically from my application. Suppose there are 3 tables: Customers, Products, Services. I will make three additional tables: Change_Customers, Change_Products, Change_Services, and will insert the identifiers of modified rows of the respective tables. Then I will read these Change_* tables from my application periodically and delete the processed records.
Now, if you agree that the above solution is right, I have another question: is it better to have separate Change_* tables for each of the tables I wish to monitor, or is it better to have one fat Changes table which will contain the changes from all tables?
Query Notifications is the technology designed to do exactly what you're describing. You can leverage Query Notifications from managed clients via the well-known SqlDependency class, but there are native OLE DB and ODBC ways too. See Working with Query Notifications, the paragraphs about SSPROP_QP_NOTIFICATION_MSGTEXT (OLE DB) and SQL_SOPT_SS_QUERYNOTIFICATION_MSGTEXT (ODBC). See The Mysterious Notification for an explanation of how Query Notifications work.
This is the only polling-free solution that works with any kind of update. Triggers and polling for changes have severe scalability and performance issues. Change Data Capture and Change Tracking really cover a different topic (synchronizing datasets for occasionally connected devices, e.g. the Sync Framework).
Change Data Capture (CDC) -- http://msdn.microsoft.com/en-us/library/cc645937.aspx
First you will need to enable CDC in the database:
USE db_name
GO
EXEC sys.sp_cdc_enable_db
GO
Then enable CDC on the table with sys.sp_cdc_enable_table.
Then you can query the changes.
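As a sketch of how the application could then poll those changes from C# (the dbo_Customers capture instance, the CustomerId column and the connection string are assumptions; CDC exposes a cdc.fn_cdc_get_all_changes_<capture_instance> function per tracked table):

```csharp
using System;
using Microsoft.Data.SqlClient;

public static class CdcPoller
{
    // Polls the CDC change table for dbo.Customers and prints the changed rows.
    public static void PollCustomerChanges(string connectionString)
    {
        const string sql = @"
            DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_Customers');
            DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();
            SELECT __$operation, CustomerId
            FROM cdc.fn_cdc_get_all_changes_dbo_Customers(@from_lsn, @to_lsn, 'all');";

        using var connection = new SqlConnection(connectionString);
        connection.Open();
        using var command = new SqlCommand(sql, connection);
        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            // __$operation: 1 = delete, 2 = insert, 4 = update (after image).
            Console.WriteLine($"op={reader.GetInt32(0)}, CustomerId={reader.GetInt32(1)}");
        }
        // In a real poller you would remember the last processed LSN and use it
        // as @from_lsn on the next call instead of the minimum LSN.
    }
}
```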
If your version of SQL Server is 2005, you may use Notification Services.
If your SQL Server is 2008+, the most preferable way is to use triggers, log the changes to log tables, and periodically poll these tables from the application to see the changes.