On some developer PCs at my organisation (which have local installs of AppFabric server and the underlying monitoring database), AppFabric is failing to populate the ASEventSourcesTable table, and as a result no events arrive in the ASWcfEventsTable table.
If I manually insert the required rows into the ASEventSourcesTable table (copying them from another AppFabric install where ASEventSourcesTable is populated automatically), then events arrive and are visible through the dashboard, which suggests all the moving parts are working (service, SQL Agent, etc.).
Any ideas on what could be stopping AppFabric from 'parsing' IIS to determine what is a valid event source? Something in the config?
In fact, the process is a bit different. The Event Collection service (installed on the hosting server) captures the WCF ETW event data and writes it to the staging table (ASStagingTable) in the monitoring database. A SQL Agent job continuously runs and checks for new event records in the staging table, parses the event data, and moves it to the long-term storage WCF event table.
So, first check the ASStagingTable, and verify that every AppFabric monitoring client has access to the monitoring database (network and connection string). The event logs can also give you more information.
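For example, a quick sanity check might look like this (table names as used in your question; run it against your monitoring database, and adjust the schema if your install uses a non-default one):

```sql
-- Are raw ETW events reaching the staging table at all?
SELECT COUNT(*) AS StagedEvents FROM ASStagingTable;

-- Which event sources has AppFabric registered?
SELECT COUNT(*) AS EventSources FROM ASEventSourcesTable;

-- Has anything made it into long-term storage?
SELECT COUNT(*) AS WcfEvents FROM ASWcfEventsTable;
```

If the staging table stays empty, the Event Collection service side is the problem; if it fills up but the event sources table does not, look at the SQL Agent job and the registration of the monitored applications.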
I would highly suggest you read this article.
I have a general question regarding Amazon SWF and a web application that has a reactive style. For example, I have a shopping website where the user adds products to the cart, the quantities are validated, the shipping and billing addresses are entered, and then payment processing, order shipping, and tracking take place. If I implement a workflow for order fulfillment, how should this be designed in SWF? Does the order fulfillment workflow begin only after all inputs are received? How does the workflow notify the customer about the progress of the order, any validation issues, etc.? How should this be distributed?
The simplest approach is to use SWF to perform backend order fulfillment and a separate data store to hold the order information and status. When an order is configured through the website, the data store is updated. Later, when the order is placed, a workflow instance is created for it. The workflow loads the information it needs from the data store (using activities), then updates the data store through activities as it progresses, and the website queries the status and other progress information of the workflow from the data store.
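As a rough illustration of the data-store side of this pattern (the table and column names here are invented for the example, and the SQL is T-SQL-flavoured; nothing about the schema is dictated by SWF):

```sql
-- Hypothetical order table shared by the website and the SWF activities.
CREATE TABLE Orders (
    OrderId      BIGINT       NOT NULL PRIMARY KEY,
    CustomerId   BIGINT       NOT NULL,
    Status       VARCHAR(32)  NOT NULL,   -- e.g. 'CART', 'PLACED', 'PAYMENT_OK', 'SHIPPED'
    StatusDetail VARCHAR(256) NULL,       -- last validation error / progress message
    UpdatedAt    DATETIME     NOT NULL
);

-- An activity in the fulfillment workflow records progress (or a validation problem)...
UPDATE Orders
SET Status = 'PAYMENT_OK', StatusDetail = NULL, UpdatedAt = GETDATE()
WHERE OrderId = 12345;

-- ...and the website simply reads it back to show the customer where the order is.
SELECT Status, StatusDetail FROM Orders WHERE OrderId = 12345;
```

The workflow itself never talks to the browser; the customer only ever sees whatever the activities have written to this table.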
Another option is to use the execution state feature of SWF. See Exposing Execution State in the SWF Developer Guide.
Cadence (an open-sourced version of SWF) is going to add a query feature in the near future. It will allow synchronously querying the workflow state through the service API. It differs from execution state in that it will allow multiple query types and query parameters.
Before asking the actual question, I will briefly explain my current situation:
I have a C++ application written using the Qt framework. The application runs on a network, and several instances of the same application can live on different machines; they all reflect the same data from a common shared database.
In this application I have several UIs and other internal processes that need to be aware of changes to the database (SQL Server 2012). But they don't watch the database, so they need to be told when something changes. The way it works now is that whenever one instance of the application executes a stored procedure in the database, it also creates an XML-based event that is broadcast to all other application instances. Once they receive this event, they update the corresponding UI or start the internal processing they need to do.
I'm working to extend this application across systems, which means I want another network somewhere else, running another database that should be identical, so that changes to the database in the first network are mirrored in the database in the second network. Whenever this happens, I want the same events to be fired on the second network so that all UIs and internal processing can start in the second system.
My solution so far is to use the SQL Server replication tool. This seems very nice for synchronizing the two databases (and admittedly looks quite smooth as a solution), but the problem I am facing now is that the other system, where the database is being synchronized, will not have the events fired. This is because the changes to the database do not happen through the application code, so no one on the second network is creating the XML-based event, and none of the application instances on the second network will know the database has changed.
The solution I've been considering for this problem is to add triggers to each table, so that when a table changes I populate another table that one application instance watches by polling it periodically (every second, let's say); every time a change appears in that table, it creates the XML event and broadcasts it on the network.
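A rough T-SQL sketch of that idea (the 'Customers' table, the change-log table, and the column names are all invented for the example):

```sql
-- Change-log table that one application instance polls (e.g. once per second).
CREATE TABLE ChangeLog (
    ChangeId  BIGINT IDENTITY(1,1) PRIMARY KEY,
    TableName SYSNAME   NOT NULL,
    ChangedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

-- One trigger per replicated table; it only records that something changed.
CREATE TRIGGER trg_Customers_Changed
ON Customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO ChangeLog (TableName) VALUES ('Customers');
END;
GO

-- The polling instance remembers the last ChangeId it has already broadcast
-- and turns anything newer into the XML event.
DECLARE @LastBroadcastId BIGINT = 0;  -- in practice, kept by the polling instance
SELECT ChangeId, TableName, ChangedAt
FROM ChangeLog
WHERE ChangeId > @LastBroadcastId
ORDER BY ChangeId;
```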
Questions: Is my solution maintainable over time? Are triggers really the only way to achieve what I want (firing events in the synchronized system's network)? Is there some other way (possibly a SQL Server built-in feature) to get updates from the database in a C++ application when something changes, so I don't have to use triggers and watch the tables they populate?
I just installed Sitecore Experience Platform and configured it according to the Sitecore scaling recommendations for processing servers.
But I want to know the following things:
1. How can I use the Sitecore processing server?
2. How can I check whether the processing server is working fine?
3. How is the collection DB data processed and sent to the reporting server?
The processing server is a piece of the whole analytics (xDB) part of the Sitecore solution. More info can be found here.
Snippet:
"The processing and aggregation component extracts information from
captured, raw analytics data and transforms it into a form suitable
for use in reporting applications. It also performs specific tasks on
the collection database that involve mass updates.
You implement processing and aggregation on a Sitecore application
server connected to both the collection and reporting databases. A
processing server can run independently on a dedicated server, or on
the same server together with other Sitecore components. By
implementing multiple processing or aggregation servers, it is
possible to achieve higher performance on high-traffic solutions."
In short: the processing server aggregates the data in Mongo and processes it into the reporting database. This can be put on a separate server in order to spare resources on your other servers. I'm not quite sure what it all does behind the scenes, or how to check exactly and only that part of the process, but you could check the reporting tools in the Sitecore backend, like Experience Analytics. If those are working, you are probably fine. Also, check the logs on the processing server - they will give you an indication of what it is doing and whether any errors occur.
Suppose I have a server setup with one load balancer that routes traffic between two web servers, both of which connect to a database that is a RAM cloud. For whatever reason I want to upgrade my database, and this will require it to be down temporarily. During this downtime I want to put an "upgrading" notice on the front page of the site. I have a specific web app that displays that message.
Should I:
(a) - spin up a new ec2 instance with the web app "upgrading" on it and point the LB at it
(b) - ssh into each web server and pull down the main web app, put up the "upgrading" app
(c) - I'm doing something wrong since I have to put an "upgrading" sign up in the first place
If you go the route of the "upgrading" (dummy/replacement) web app, I would be inclined to run that on a different machine so you can test and verify its behavior in isolation, point the ELB to it, and point the ELB back without touching the real application.
I would further suggest that you not "upgrade" your existing instances, but, instead, bring new instances online, copy as much as you can from the live site, and then take down the live site, finish synching whatever needs to be synched, and then cut the traffic over.
If I were doing this with a single MySQL-server-backed site (which I mention only because that is my area of expertise), I would bring the new database server online with a snapshot backup of the existing database, connect it to the live replication stream generated by the existing database server, beginning at the point in time where the snapshot backup was taken, and let it catch up to the present by executing the transactions that occurred since the snapshot. At this point, with the new server caught up to real time by playing back the replication events, I would have my live data set, essentially in real time, on new database hardware. I could then stop the application, reconfigure the application server settings to use the new database server, verify that all of the replication events had propagated, disconnect from the replication stream, and restart the app server against the new database, for a total downtime so short that it would be unlikely to be noticed if done during off-peak time.
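In MySQL terms, the "attach the restored snapshot to the live replication stream" step is roughly the following (host, account, and binlog coordinates are placeholders you would take from your own snapshot; newer MySQL versions spell these CHANGE REPLICATION SOURCE TO / START REPLICA):

```sql
-- On the new database server, after restoring the snapshot backup:
CHANGE MASTER TO
    MASTER_HOST     = 'old-db-host',        -- placeholder
    MASTER_USER     = 'repl',               -- placeholder replication account
    MASTER_PASSWORD = 'secret',             -- placeholder
    MASTER_LOG_FILE = 'mysql-bin.000123',   -- binlog file recorded with the snapshot
    MASTER_LOG_POS  = 4;                    -- binlog position recorded with the snapshot
START SLAVE;

-- Watch it catch up; Seconds_Behind_Master should drop to 0 before the cutover.
SHOW SLAVE STATUS\G
```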
Of course, with a Galera cluster, these gyrations would be unnecessary since you can just do a rolling upgrade, one node at a time, without ever losing synchronization of the other two nodes with each other (assuming you had the required minimum of 3 running nodes to start with) and each upgraded node would resync its data from one of the other two when it came back online.
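With Galera, confirming that a rejoined node has finished resyncing before you move on to the next one is just a status query (these wsrep variables are standard Galera status counters):

```sql
-- Run on the node that just came back; 'Synced' means it has caught up.
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';

-- And the cluster should still report the expected number of nodes (e.g. 3).
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
```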
To whatever extent the platform you are using doesn't have comparable functionality to what I've described (specifically, the ability to take database snapshots and play back a stream of a transaction log against a database restored from a snapshot... or quorum-based cluster survivability), I suspect that's the nature of the limitation that makes it feel like you're doing it wrong.
A possible workaround to help you minimize the actual downtime, if your architecture doesn't support these kinds of actions, would be to enhance your application with the ability to operate in a "read only" mode, where the web site can be browsed but the data can't be modified (you can see the catalog, but not place orders; you can read the blogs, but not edit or post comments; you don't bother saving "last login date" for a few minutes; certain privilege levels aren't available; etc.) -- like Stack Overflow has the capability of doing. This would allow you to stop the site just long enough to snapshot it, then restart it on the existing hardware in read-only mode while you bring up the snapshots on new hardware. Then, when you have the site back to available status on the new hardware, cut the traffic over at the load balancer and you'd be back to normal.
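If the database behind the site happens to be MySQL, as in the example above, the application-level read-only mode can also be backstopped at the server while you take the snapshot (this complements, rather than replaces, the application changes):

```sql
-- Blocks writes from ordinary accounts; accounts with SUPER (and the replication
-- applier) are unaffected, so the migration work can continue.
SET GLOBAL read_only = ON;

-- Back to normal once traffic has been cut over to the new hardware.
SET GLOBAL read_only = OFF;
```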
Microsoft Sync Framework with SQL Server 2005? Is it possible? The documentation seems to hint that the out-of-the-box providers use SQL Server 2008 functionality.
I'm looking for some quick wins in relation to a sync project.
The client app will be offline for a number of days.
There will be a central server that MUST be SQL Server 2005.
I can use .NET 3.5.
Basically, the client app could go offline for a week. When it comes back online it needs to sync its data. The good thing is that the data only needs to be pushed to the server. The stuff that syncs back to the client will just be lookup data which the client never changes, so I don't care about sync collisions.
To simplify the scenario for you: this smart client goes offline and the user collects survey data about some observations and enters it into the system. When the laptop is reconnected to the network, it syncs all that data back to the server. There will be other clients doing the same thing too, but no one ever touches anyone else's data. Then there are some reports on the server for viewing the data that has been pushed up. This also needs to use ClickOnce.
My biggest concern is that there is an interim release while a client is offline. This release might require a new field in the database, and a new field to fill in on the survey.
Obviously that new field will be nullable because we can't update old data; that's fine to set as an assumption. But when the client connects and its local data schema and the server schema don't match, will the Sync Framework be able to handle this? After the data is pushed to the server it is discarded locally.
Hope my problem makes sense.
I've been using the Microsoft Sync Framework for an application that has to deal with collisions, and it's miserable. The *.sdf file is locked for certain schema changes (like dropping a column, etc.).
Your scenario sounds like the Sync Framework would work for you out of the box... just don't enforce any referential integrity, since the Sync Framework will cause insert issues in those cases.
As far as updating the schema goes, if you follow the Sync Framework forums, you are instructed to create a temporary table shaped the way the table should look at the end, copy your data over, and then drop the old table. Once done, rename the new table to what it should be and re-hook up the pieces in SQL Server CE that allow you to handle the sync classes.
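In SQL terms the dance looks roughly like this (the 'Survey' table and its columns are invented for the example; whether sp_rename is available depends on your SQL Server Compact version, so treat the rename step as pseudocode if it isn't):

```sql
-- 1. Create the replacement table with the new shape (new nullable column included).
CREATE TABLE Survey_New (
    SurveyId    INT           NOT NULL PRIMARY KEY,
    Observation NVARCHAR(200) NOT NULL,
    NewField    NVARCHAR(50)  NULL      -- the field added by the interim release
);

-- 2. Copy the existing data across.
INSERT INTO Survey_New (SurveyId, Observation)
SELECT SurveyId, Observation FROM Survey;

-- 3. Drop the old table and move the new one into its place.
DROP TABLE Survey;
EXEC sp_rename 'Survey_New', 'Survey';

-- 4. Then re-hook the table into the sync provisioning as described above.
```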
IMHO, I'm going to be working on removing the Sync functionality from my application, because it is more of a hindrance than an aid.