NServiceBus Distributor Using a Clustered DB

We currently use NServiceBus with a distributor, but I'd like to make it HA without moving to clustered MSMQ.
We already have a clustered SQL Server. Is there a way to have the NServiceBus Distributor use the DB instead of MSMQ?

Since the MSMQ service is what actually receives the messages, making the distributor itself HA isn't enough unless MSMQ is highly available as well. That can be achieved with fault-tolerant hardware, which gives better failover times than a cluster. If you want to use the database, you'd need to implement a DB transport for NServiceBus and recompile the distributor - but that still wouldn't give you HA for the distributor, since you'd need something to fail over the distributor process itself.
Have you considered deploying the distributor to the cluster that is running your SQL Server? That might be the simplest solution for you.

Related

How to set up cloud with application and connected IoT devices via 3G/LTE

I am a beginner at cloud computing, and I'm hoping to get some guidance or advice as to how I can set up a cloud connected to IoT devices and a running application to control the behavior of these devices.
Firstly, there are 5 devices that have to be connected via 3G or LTE because of the distance between the devices, so the way I am thinking of doing it is connecting them to the internet using dynamic public IP addresses and a dynamic DNS server. It seems like I should be using the AWS IoT service to manage these devices. How should I go about doing that, or is there a better approach? The devices all use MQTT and/or a REST API.
The next step is to write an application, and it was suggested that I use AWS Lambda. Am I heading in the right direction? How do I link the devices connected to AWS IoT to AWS Lambda?
I know the question may sound vague but I am still new and exploring different solutions. Any guidance or recommendations for the right step forward is appreciated.
I assume your devices (or at least one of them) have a 64-bit CPU (x86 or ARM) and run Linux.
It's a kind of 70:30 balance where:
- 70% of the work needs to focus on building and testing edge-logic.
- 30% of the work on the rest (IoT Cloud, Lambda etc).
Here is what I suggest.
1/ Code your edge-logic first! (the piece of code that you want to execute ultimately on your devices).
2/ Test it on-the-edge by logging on to the devices (if you can) via SSH and running it.
3/ Once you have that done, 70% of the job is over.
4/ The remaining 30% is completing the jigsaw in the cloud. The best places to start: Lambda and Greengrass.
5/ To summarize: you will create Greengrass components in the cloud, install the AWS IoT Greengrass Core software on your device, and then deploy your configuration to the device over-the-air (OTA).
Now, you can use any MQTT client (or the built-in MQTT test client under AWS IoT -> Test) to send a message to your topic and trigger your edge-logic on the device!
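For illustration, a minimal stand-alone publish could look something like the sketch below, here using the Eclipse Paho Java client. The endpoint, topic, and certificate setup are placeholders - you would substitute your own AWS IoT endpoint and the TLS credentials AWS IoT requires.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class EdgeTriggerPublisher {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and topic - replace with your own AWS IoT endpoint and topic.
        String broker = "ssl://YOUR-ENDPOINT-ats.iot.eu-west-1.amazonaws.com:8883";
        String topic  = "devices/device1/commands";

        MqttClient client = new MqttClient(broker, "test-publisher", new MemoryPersistence());
        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);
        // options.setSocketFactory(...) - supply an SSLSocketFactory built from the
        // device certificate and keys registered with AWS IoT (omitted here).
        client.connect(options);

        // Publish a small JSON command; the subscriber on the device reacts to it.
        MqttMessage message = new MqttMessage("{\"action\":\"run-edge-logic\"}".getBytes());
        message.setQos(1);
        client.publish(topic, message);
        client.disconnect();
    }
}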
Good luck!
cheers,
ram

Changes to ignite cluster membership unexplainable

I am running a 12-node JVM Ignite cluster. Each JVM runs on its own VMware node. I am using ZooKeeper to keep these Ignite nodes in sync, with TCP discovery. I have been seeing a lot of node failures in the ZooKeeper logs.
Although the Java processes are running, I don't know why some Ignite nodes leave the cluster with "node failed"-style errors. VMware uses vMotion to do something they call "migration"; I am assuming that is some kind of filesystem sync process between VMware nodes.
I am also seeing fairly frequent "dumping pending object" and "Failed to wait for partition map exchange" messages in the Ignite JVM logs.
My env setup is as follows:
Apache Ignite 1.9.0
RHEL 7.2 (Maipo) runs on each of the 12 nodes
Oracle JDK 1.8
Zookeeper 3.4.9
Please let me know your thoughts.
TIA
There are generally two possible reasons:
- Memory issues. For example, if a node goes into a long GC pause, it can become unresponsive and therefore be removed from the topology. For more details, read here: https://apacheignite.readme.io/docs/jvm-and-system-tuning
- Network connectivity issues. Check whether the network between your VMs is stable. You may also want to try increasing the failure detection timeout: https://apacheignite.readme.io/docs/cluster-config#failure-detection-timeout
VM Migrations sometimes involve suspending the VM. If the VM is suspended, it won't have a clean way to communicate with the rest of the cluster and will appear down.
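If you go down the failure detection timeout route, a minimal sketch of how it is set on an Ignite node follows; the 30-second value is purely illustrative, and your existing ZooKeeper/TCP discovery configuration would sit alongside it.

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NodeStartup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Raise the failure detection timeout (default is 10 seconds) so that short
        // GC pauses or brief vMotion stalls don't get the node dropped from the topology.
        cfg.setFailureDetectionTimeout(30_000);
        // Discovery SPI (ZooKeeper/TCP) configuration omitted - keep whatever you use today.
        Ignition.start(cfg);
    }
}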

WSO2 - Clustering AS on Custom Polling Applications

We have developed a custom JAX-WS application that essentially achieves two things.
- Exposes a few web service methods to perform some functionality.
- Uses org.quartz.Scheduler to schedule and execute polling tasks that monitor and process data on a few database tables. (The logic here is slightly complex, hence a custom application was chosen over WSO2 DSS.)
This application is deployed on WSO2 AS 5.2.1 and runs quite seamlessly. However, I'm unsure what will happen if we have to cluster the application server. Logically, I would think that each node will have its own instance of the custom application running within it, and hence its own scheduler. Would this not increase the risk of processing the same record across both instances? Is my interpretation of the above scenario correct, from a clustering perspective?
Yes, you are correct. In a cluster of application server nodes, each node will have its own instance of the application, so in your case each node will have a separate scheduler. You may consider using tasks from ESB 4.9.0, where WSO2 has added coordination support for working in a clustered environment.
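As a point of comparison outside WSO2: if you keep Quartz itself, its JDBC-backed clustered job store lets all nodes coordinate through a shared database so that each trigger fires on only one node. A rough sketch, with placeholder data source details, might look like this:

import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredSchedulerFactory {
    public static Scheduler create() throws Exception {
        Properties props = new Properties();
        props.put("org.quartz.scheduler.instanceName", "PollingScheduler");
        props.put("org.quartz.scheduler.instanceId", "AUTO");
        props.put("org.quartz.threadPool.threadCount", "3");
        // JDBC-backed job store with clustering enabled: nodes take row locks in the
        // Quartz tables, so a given trigger fires on exactly one node at a time.
        props.put("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.put("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.put("org.quartz.jobStore.isClustered", "true");
        props.put("org.quartz.jobStore.clusterCheckinInterval", "20000");
        props.put("org.quartz.jobStore.dataSource", "appDS");
        // Placeholder connection details - point these at the shared database.
        props.put("org.quartz.dataSource.appDS.driver", "com.mysql.jdbc.Driver");
        props.put("org.quartz.dataSource.appDS.URL", "jdbc:mysql://dbhost:3306/quartz");
        props.put("org.quartz.dataSource.appDS.user", "quartz");
        props.put("org.quartz.dataSource.appDS.password", "secret");
        return new StdSchedulerFactory(props).getScheduler();
    }
}

Note that this requires creating the standard Quartz tables (QRTZ_*) in the shared database first.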

Best JMS implementation for AWS

I have a Java/Spring application running in the Amazon AWS cloud.
My server instances are behind a load balancer and run the same image of a Linux OS with a Tomcat application server.
They are also connected to S3 as a shared file system (s3fs) and to an RDS database.
My concern is to be sure the state of the different applications is synchronized. Today, the point of synchronization is the database, but when memory caching is needed, out of sync problems appear.
The solution I would like to use is to put in place a messaging system between the applications. For specific reasons, I cannot use the Amazon SQS service, so JMS seems to fit my needs. After some reading, HornetQ also seems like a very good implementation of it. Once an application's state changes, it communicates the change to all other applications. Each application is both producer and consumer of the same queue.
As we are in a dynamic system where servers and IPs are automatically created and deleted, the automatic discovery of instances seems to be the best solution to use.
But in AWS, broadcast is not possible!
For HornetQ, I saw a kind of workaround that uses JGroups in addition. But for me, that is a second framework to investigate and learn - twice the work - and no longer an out-of-the-box solution.
What is your opinion? Does anyone already build a solution for similar needs?
Maybe other out-of-the-box solutions exists?
Thanks in advance for your answer!
In my experience you could try TCPGOSSIP, a JGroups discovery protocol that HornetQ can be configured to use.
See https://docs.jboss.org/jbossclustering/cluster_guide/5.1/html/jgroups.chapt.html
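Independently of which discovery mechanism you settle on, the application-level pattern described in the question (every instance both publishes and consumes on the same destination) can be sketched with plain JMS as below. Obtaining the ConnectionFactory and Topic (JNDI lookup, HornetQ client API, Spring configuration) is broker-specific and left out, and evictFromLocalCache is a hypothetical hook into your own cache.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

public class StateChangeBus {

    private final Session producerSession;
    private final MessageProducer producer;

    public StateChangeBus(ConnectionFactory factory, Topic topic) throws JMSException {
        Connection connection = factory.createConnection();

        // One session for publishing our own state changes...
        producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        producer = producerSession.createProducer(topic);
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

        // ...and a separate session for receiving everyone else's changes
        // (noLocal = true means we don't receive our own messages back).
        Session consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = consumerSession.createConsumer(topic, null, true);
        consumer.setMessageListener(message -> {
            try {
                evictFromLocalCache(((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });

        connection.start();
    }

    // Called whenever this instance changes shared state.
    public void broadcastChange(String cacheKey) throws JMSException {
        producer.send(producerSession.createTextMessage(cacheKey));
    }

    private void evictFromLocalCache(String key) {
        // Hypothetical hook into the local in-memory cache.
        System.out.println("Evicting " + key + " from local cache");
    }
}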

Cache data on Multiple Hosts in AppFabric

Let me first explain that I am very new to using AppFabric to improve the responsiveness of your application. I am trying to configure a server cluster with 2 nodes, using the XML provider over a network shared location.
My requirement is that the cached data should be available on both hosts, so that if one of the hosts is down the other host in the cluster can serve the request and provide the cached data. As I said, I have 2 hosts in my cluster and one of them is defined as the lead host. When I save data to the cache, I cannot see the data on both hosts (I'm not sure whether there is a specific command to see the data on a specific host). So what I want to test is this: I'll stop one of the cache hosts and see if I am still able to get the data from the second cache host.
thanks in advance
-Nitin
What you're talking about here is High Availability. To enable this, you'll need to be running Windows Server Enterprise Edition - if you're on Standard Edition then you just can't do it. You also really need a minimum of three hosts, so that if one goes down there are still two copies of your cached data to provide failover. If you can meet these requirements then the only extra step to create a highly-available cache is to set the Secondaries flag when you call new-cache e.g.
new-cache myHACache -Secondaries 1
There's no programmatic way to query what data is held on a specific host, because you only ever address the logical cache, not an individual physical host.
From our experience, using SQL authentication to the database does not work. It's clearly stated that only the Integrated Security option is supported. We also faced issues with the service running under Integrated Security, since our SQL cluster was running under a domain account while AppFabric needs to run as Network Service, and we couldn't successfully connect to the SQL cluster from the AppFabric service.
This was a painful experience for us, and I hope AppFabric caching improves the way it reports error messages and error codes, and also lets us decide how we want to connect to SQL. It's kind of absurd having to go through the pain of "has to run as Network Service" and "no SQL authentication".