ZeroC ICE vs DDS - C++

I know this question may seem like a duplicate, but I think the newer versions of these RPC frameworks are worth comparing again; after all, I'm a newbie in RPC and HLA.
My requirements:
Real-time pub/sub messaging architecture: I have 12 nodes connected to each other, and I want each process of my application to run multiple times in different VM servers on each node.
Each process must know about its replicated processes too; if memory usage in one VM climbs, a replicated process must help that process in parallel.
A log of the errors that occur in each process, for tracing problems, plus the number of lost messages.
I need RTI and HLA support for my simulation objects.
Why is DDS used more for critical systems like military or air-traffic management? Is OpenSplice DDS really that good, or is it because it's OMG-backed and was created by military and DARPA people too :D ?
Do these frameworks provide such options (for DDS: the open-source DDS implementation based on TAO/ACE)?
What are my other options (like Thrift)?
Is there a good comparison of these frameworks? Thanks a lot.

Using ZeroMQ in a cross-platform desktop/mobile app-suite for architecture concerns

I need to make an architecture decision on a cross-platform app-suite. I basically want to try a new way of decoupling modules and implement network I/O using ZeroMQ, knowing it's a message queue for in-process, inter-process and networking applications. But I'm not sure how it can fit in with my case.
I'd appreciate it if anyone could clarify a few things before I spend the next week reading their intriguing book: http://zguide.zeromq.org/page:all
I've checked these questions but didn't get my answers:
How to use zeroMQ in Desktop application
How to use ZeroMQ in a GTK/QT/Clutter application?
My requirements:
Desktop hosts on Windows and macOS, as a separate console backend and GUI frontend; the backend must be written in C++;
Mobile guests on iOS and Android, backend written in C++;
Desktop talks with mobile using TCP;
Old Way
As for the desktop backend (the console app), a few years back, my first step would be writing a multithreaded architecture based on Observer/Command patterns:
Set the main thread for UI and spawn a few threads.
One "scheduler" thread for message handling: a queue to get notifications from other modules and another queue for commands. Each command type introduces its own dependencies. The scheduler pumps messages and issues commands accordingly.
Other "executor" threads for device monitoring, multiplex network I/O between one desktop and multiple mobile devices, all sending messages to scheduler to have real work scheduled.
I would then need to implement thread-safe message queues, and would inevitably end up with coupling between the scheduler and a bunch of Command classes that are essentially just function wrappers around the executors' behaviors. With C++, this would be a lot of boilerplate code.
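To make the boilerplate concrete, here is a minimal sketch (not from the original post; the class name is illustrative) of the kind of thread-safe queue the Old Way would need, built on a mutex plus a condition variable:

    // One of several hand-written pieces the Old Way requires:
    // a blocking, thread-safe FIFO for scheduler messages.
    #include <condition_variable>
    #include <mutex>
    #include <queue>

    template <typename T>
    class BlockingQueue {
    public:
        void push(T value) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                queue_.push(std::move(value));
            }
            cv_.notify_one();                 // wake one waiting consumer
        }
        T pop() {                             // blocks until an item arrives
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return !queue_.empty(); });
            T value = std::move(queue_.front());
            queue_.pop();
            return value;
        }
    private:
        std::queue<T> queue_;
        std::mutex mutex_;
        std::condition_variable cv_;
    };

Multiply this by the command classes, notification types and shutdown handling, and the boilerplate adds up quickly.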
New Way to Validate
But it's 2019, so I expect less hand-written code and would like to try something new. With ZeroMQ, I'd love to see if my expectation holds. I'd love to ...
Remove the need to write a scheduler and message/command queues from scratch, by just passing ZeroMQ requests between in-process modules across threads, because writing scheduling from scratch is tedious and unproductive.
Simplify network I/O between desktop and mobile devices. For this part I've tried ASIO, and it wasn't significantly more convenient than raw sockets and select(), plus it's C++-only.
Decouple the GUI and the console app with ZeroMQ-based IPC, so that the GUI can be rewritten using different technologies in various languages.
Get perceived low latency for both desktop and mobile users.
Is my expectation reasonable?
If you are new to the ZeroMQ domain, feel free to review this and first enjoy a quick look at "ZeroMQ Principles in less than Five Seconds" before diving into further details.
A post referred to above presented the assumption that:
ZeroMQ is based on the assumption that there is a while (1) ... loop that it is inserted into
This is completely wrong and would mislead any architecture planning / assessment efforts.
ZeroMQ is a feature-rich signaling/messaging metaplane, intended to provide a lot of services for the application-level code. The application can enjoy light-weight re-use of the smart, low-level, efficient handling of the signaling/messaging infrastructure, be it used in-process, inter-process, or in an inter-node multi-agent distributed fashion, using any of the many transport-class protocols already available for that goal:
{ inproc:// | ipc:// | tipc:// | vmci:// | tcp:// | pgm:// | epgm:// | udp:// }
This said, let's follow your shopping list:
My requirements:
c++ ZeroMQ: [PASSED] Desktop hosts on Windows and macOS, as separated console backend and GUI frontend; the backend must be written in C++;
c++ ZeroMQ: [PASSED] Mobile guests on iOS and Android, backend written in C++;
tcp ZeroMQ: [PASSED] Desktop talks with mobile using TCP;
I'd love to ...
Remove the need to write a scheduler and message/command queues from scratch, by just passing ZeroMQ requests between in-process modules across threads, because writing scheduling from scratch is tedious and unproductive.
Simplify network I/O between desktop and mobile devices. For this part I've tried ASIO, and it wasn't significantly more convenient than raw sockets and select(), plus it's C++-only.
Decouple the GUI and the console app with ZeroMQ-based IPC, so that the GUI can be rewritten using different technologies in various languages.
Get perceived low latency for both desktop and mobile users.
Is my expectation reasonable?
Well:
There is obviously no need to write a scheduler + queues from scratch. Queue management is built into ZeroMQ and is actually hidden inside the service-metaplane. Scheduling things among many actors is, on the other hand, your design decision, and has nothing to do with ZeroMQ or any other technology of choice. Given your system-design intentions, you decide the way ("autogenerated magics" are still more wishful thinking than any near-future system-design reality).
[PASSED] QUEUES : built-in ZeroMQ
[NICE2HAVE] SCHEDULER : auto-generated for any generic distributed many-agent ecosystem (yet hard to expect in any near future)
Network I/O (and any I/O, in principle) is simplified already in the ZeroMQ hierarchy of services.
[PASSED] : SIMPLIFIED NETWORK I/O - ZeroMQ already provides all the abstracted transport-class services, hidden behind the transparent use of the signaling/messaging metaplane, so the application code can "just" { .send() | .poll() | .recv() } (see the sketch after this list).
[PASSED] : DECOUPLED GUI - decoupling the GUI from any other part of the ParcPlace-Systems-pioneered MVC architecture works. I have been using this since ZeroMQ v2.11 for a (far) remote keyboard over a TCP/IP network, and it is even possible to integrate it into actor-based GUIs: Tkinter-GUI actors may well serve such a distributed local-Visual / remote-distributed-Controller / remote-distributed-Model. If a mobile-terminal O/S introduces more complex constraints on the local-Visual MVC component, proper adaptations ought to be validated with domain experts on that particular O/S's properties. The ZeroMQ signaling/messaging metaplane has, so far, not been considered to impose any constraints per se.
[PASSED] : LATENCY - ZeroMQ was designed from the very start with ultimately low latency as a must. Given that it can feed HFT-trading ecosystems, desktop/mobile systems are orders of magnitude less restrictive in the sense of the end-to-end accumulation of all the transport and O/S-handling latencies along the path.
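As a small illustration of the first two points, here is a minimal sketch (not from the original answer; it uses the plain libzmq C API, and the endpoint name is made up) of two threads talking over an inproc:// PAIR pipe instead of a hand-written thread-safe queue:

    // Replaces a hand-rolled BlockingQueue: messages flow over an
    // in-process ZeroMQ pipe; no explicit locks in application code.
    #include <zmq.h>
    #include <cassert>
    #include <cstdio>
    #include <cstring>
    #include <thread>

    int main() {
        void *ctx = zmq_ctx_new();

        // Bind the "scheduler" end before the worker connects.
        void *sched = zmq_socket(ctx, ZMQ_PAIR);
        int rc = zmq_bind(sched, "inproc://scheduler");   // endpoint name is illustrative
        assert(rc == 0);

        std::thread worker([ctx] {
            void *s = zmq_socket(ctx, ZMQ_PAIR);
            zmq_connect(s, "inproc://scheduler");
            zmq_send(s, "tick", 4, 0);                    // a message, not shared state
            zmq_close(s);
        });

        char buf[16] = {0};
        zmq_recv(sched, buf, sizeof buf - 1, 0);          // blocks until the worker sends
        std::printf("scheduler got: %s\n", buf);

        worker.join();
        zmq_close(sched);
        zmq_ctx_term(ctx);
        return 0;
    }

Swapping inproc:// for tcp:// in the endpoint strings is all it takes to move the same pattern across the network, which is where most of the I/O simplification comes from.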

TensorFlow Setup for Distributed Computing

Can anyone provide guidance on how to set up TensorFlow to work on many CPUs across a network? All of the examples I have found thus far use only one local box, with multiple GPUs at best. I have found that I can pass a list of targets in the session_opts, but I'm not sure how to set up TensorFlow on each box to listen for networked nodes/tasks. Any example would be greatly appreciated!
The open-source version (currently 0.6.0) of TensorFlow supports single-process execution only: in particular, the only valid target in the tensorflow::SessionOptions is the empty string, which means "current process."
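In code terms, that restriction looks roughly like this (a sketch against the 0.6.0 C++ API, with error handling trimmed):

    // In 0.6.0, only the in-process target (the empty string) is valid.
    #include "tensorflow/core/public/session.h"

    int main() {
        tensorflow::SessionOptions options;
        options.target = "";   // "" means "current process"; remote targets are rejected
        tensorflow::Session* session = nullptr;
        tensorflow::Status s = tensorflow::NewSession(options, &session);
        if (!s.ok()) return 1;
        // ... session->Run(...) on a locally built graph ...
        delete session;
        return 0;
    }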
The TensorFlow whitepaper describes the structure of the distributed implementation (see Figure 3) that we use inside Google. The basic idea is that the Session interface can be implemented using RPC to a master; and the master can partition the computation across a set of devices in multiple worker processes, which also communicate using RPC. Alas, the current version depends heavily on Google-internal technologies (like Borg), so a lot of work remains to make it ready for external consumption. We are currently working on this, and you can follow the progress on this GitHub issue.
EDIT on 2/26/2016: Today we released an initial version of the distributed runtime to GitHub. It supports multiple machines and multiple GPUs.

ZeroC ICE vs 0MQ/ZeroMQ vs Crossroads IO vs Open Source DDS

How does ZeroC ICE compare to 0MQ? I know that 0MQ/Crossroads and DDS are very similar, but I can't seem to figure out where ICE comes in.
I need to quickly implement a system that offloads real-time market data from C++ to C#, as the first phase of my project. The next phase will be to implement an event-based architecture with an underlying pub/sub design.
I am willing to use TCP, but the system is currently running on a single 24-core server, so an IPC option would be nice. From what I understand, ICE is TCP-only, while DDS and 0MQ have an IPC option.
Currently, I am leaning towards using Protobuf with either ICE or Crossroads IO. I got turned off by the OpenSplice DDS website. I've done lots of research on the various options and was originally considering OpenMPI + boost::mpi, but there does not seem to be an MPI for .NET.
My question is:
How does ICE compare to 0MQ? I can't wrap my head around this. I was unable to find anything online that compares the two.
Thanks in advance.
........
More about my project:
Currently using CMake and C++ on Windows, but the plan is to move to CentOS at some point. An additional desired feature is to store the tick data and all the messages in a "NoSQL" database such as HBase/Hadoop or HDF5. Do any of these middleware/messaging/pub-sub libraries have any database integration?
Some thoughts about ZeroC:
Very fast; able to have multiple endpoints; able to load-balance across the endpoints; able to reconnect to a different endpoint in case one of the nodes goes down, transparently to the end user; has a good tool chain (IceGrid, IceStorm, IceBox, etc.); distributed, high availability, multiple failover, etc.
Apart from that, I have used it for hot-swapping code modules (something similar to Erlang) by having the client create the proxy with multiple endpoints, and later bringing down each endpoint for a quick upgrade, one by one. With the transparent retry to a different endpoint, I could keep the system up and running the whole time I did an upgrade. Not sure if this is an advertised feature or an unadvertised side-effect :)
Overall, it is very easy to scale out your servers if needed using ZeroC Ice.
I know ZeroMQ provides a fantastic set of tools and messaging patterns, and I would keep using it for my pet projects. However, the problem that I see is that it is very easy to go overboard and lose track of all your distributed components, and keeping track of them is a must-have in a distributed environment. How will you know where your clients/servers are when you need to upgrade? If one of the components down the chain does not receive a message, how do you identify where the issue is? The publisher? The client? Or any one of the bridges (REP/REQ, XREP/XREQ, etc.) in between?
Overall, ZeroC provides a much better toolset and ecosystem for enterprise solutions.
And it is open source :)
Jaybny,
ZMQ:
If you want really good performance and the only job in Phase 1 of your project is to move data from C++ to C#, then ZMQ is the best option.
Having a pub/sub model for an event-driven architecture is also something that ZMQ can help you with, via its built-in messaging patterns.
ZMQ also supports your IPC requirements in this case. E.g., you can have one instance of your application that consumes the 24 cores by multithreading and communicating via IPC.
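For a feel of how small the pub/sub side stays, here is a sketch (not from the original answer) of a C++ publisher pushing ticks that a C# subscriber (via a .NET ZeroMQ binding) could filter by topic prefix; the endpoint, topic and payload are all made up, and Protobuf would replace the plain string payload in practice:

    #include <zmq.h>
    #include <cstring>

    int main() {
        void *ctx = zmq_ctx_new();
        void *pub = zmq_socket(ctx, ZMQ_PUB);
        zmq_bind(pub, "tcp://127.0.0.1:5556");              // loopback; ipc:// would do on non-Windows

        const char *topic   = "TICK.MSFT";                  // subscribers filter on this prefix
        const char *payload = "42.17";                      // serialize with Protobuf in practice
        zmq_send(pub, topic, strlen(topic), ZMQ_SNDMORE);   // frame 1: topic
        zmq_send(pub, payload, strlen(payload), 0);         // frame 2: data

        zmq_close(pub);
        zmq_ctx_term(ctx);
        return 0;
    }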
ZeroC Ice:
Ice is an RPC framework, very much like CORBA.
Eg.
Socket/ZMQ - you send a message over the wire, read it at the other end, parse the message, do some action, etc.
ZeroC Ice - you create a contract between client and server. The contract is nothing but a template of a class. The client calls a proxy method of that class, and the server implements/actions it and returns the value. Thus, int result = mathClass.Add(10,20) is what the client calls. The method, parameters, etc. are marshalled and sent to the server; the server implements the Add method and returns the result; and the client gets 30 back. Thus, on the client side, the API is nothing but a proxy for a servant running on a remote host.
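As a sketch of what that client side looks like with the Ice C++11 mapping (not from the original answer: Demo::MathPrx and Math.h stand for code the Slice compiler would generate from a hypothetical contract, and the proxy string and port are made up):

    #include <Ice/Ice.h>
    #include "Math.h"   // hypothetical slice2cpp output for the Math interface

    int main(int argc, char* argv[]) {
        Ice::CommunicatorHolder ich(argc, argv);          // RAII init/destroy
        auto base = ich->stringToProxy("Math:default -p 10000");
        auto math = Ice::checkedCast<Demo::MathPrx>(base);
        int result = math->add(10, 20);                   // marshalled, run on the server
        return result == 30 ? 0 : 1;                      // the client sees 30 come back
    }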
Conclusion:
ZeroC Ice has some nice enterprisey features which are really good. However, for your project's requirements, ZMQ is the right tool.
Hope this helps.
For me, the correct answer was Crossroads I/O. It does everything I need, though I am still unable to pub/sub when using protobufs... I'm sure ZeroC Ice is great for distributed IPC, but 0MQ/Crossroads gives you the added flexibility of inter-thread communication.
Note: on Windows, 0MQ does not have IPC.
So, all in all, the Crossroads fork of 0MQ is the best, but you will have to roll your own Windows IPC (or use tcp::127..) and publisher-side topic filtering for pub/sub.
nanomsg, from the guy who wrote Crossroads and 0MQ (I think):
http://nanomsg.org/

Register with the DeviceManager of Linux

I read some questions here but couldn't really find the specific problem I'm faced with...
I need to implement a "DeviceCache" in a particular project which caches all device names found in /proc/net/dev.
The language is C/C++.
So I thought about a separate thread polling the file mentioned above every X seconds, but was encouraged to find a more direct way.
How can I register a method of my process with the device manager of Linux?
Is there a mechanism like events/signals for this?
I looked on other sites but couldn't find any helpful code... I'm relatively new to Linux programming but willing to learn new things :)
Based on your comments, what you really want is to track which network interfaces are operational at any given time.
The only true way to determine if a network interface is up is to test it - after all, the router on the other end may be down. You could send pings out periodically, for example.
However, if you just want to know when the media goes down (i.e., the network cable is unplugged), take a look at these SO questions:
Linux carrier detection notification
Get notified about network interface change on Linux
If you just want to be notified of the actual hardware-level registration of interfaces (e.g., when a USB NIC is plugged in), you can use udev events if your platform has udev; otherwise, I believe there's another netlink category for hardware addition/removal events.
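To make the netlink (link-state) approach concrete, here is a minimal sketch; the header order and flag choice follow common usage, but treat it as a starting point rather than production code:

    // Subscribe to rtnetlink link events and print interface up/down changes.
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/netlink.h>
    #include <linux/rtnetlink.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

        sockaddr_nl sa;
        std::memset(&sa, 0, sizeof sa);
        sa.nl_family = AF_NETLINK;
        sa.nl_groups = RTMGRP_LINK;                    // multicast group: link changes
        bind(fd, reinterpret_cast<sockaddr *>(&sa), sizeof sa);

        char buf[8192];
        for (;;) {
            int len = static_cast<int>(recv(fd, buf, sizeof buf, 0));
            for (nlmsghdr *nh = reinterpret_cast<nlmsghdr *>(buf);
                 NLMSG_OK(nh, len); nh = NLMSG_NEXT(nh, len)) {
                if (nh->nlmsg_type == RTM_NEWLINK || nh->nlmsg_type == RTM_DELLINK) {
                    auto *ifi = static_cast<ifinfomsg *>(NLMSG_DATA(nh));
                    char name[IF_NAMESIZE] = "?";
                    if_indextoname(ifi->ifi_index, name);
                    std::printf("interface %s is %s\n", name,
                                (ifi->ifi_flags & IFF_RUNNING) ? "running" : "down");
                }
            }
        }
        close(fd);   // unreachable in this sketch
    }

The same socket can also report address changes (RTMGRP_IPV4_IFADDR and friends), which is handy if the cache should track more than names.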

DLL Injection/IPC question

I'm working on a build tool that launches thousands of processes (compiles, links, etc.). It also distributes executables to remote machines so that the build can be run across hundreds of slave machines. I'm implementing DLL injection to monitor the child processes of my build process so that I can see that they opened/closed the resources I expected them to. That way I can tell if my users aren't specifying dependency information correctly.
My question is:
I've got the DLL injection working, but I'm not all that familiar with Windows programming. What would be the best/fastest way to call back to the parent build process with all the millions of file I/O reports that the children will be generating? I've thought about having them write to a non-blocking socket, but I've been wondering if pipes, shared memory, or maybe COM would be better.
First, since you're apparently dealing with communication between machines, not just within one machine, I'd rule out shared memory immediately.
I'd think hard about trying to minimize the amount of data instead of worrying a lot about how fast you can send it. Instead of sending millions of individual file I/O reports, I'd batch together a few kilobytes of that data (or something on that order) and send a hash of that packet. With a careful choice of packet size, you should be able to reduce your data transmission to the point that you can simply use whatever method you find most convenient, rather than trying to pick the one that's the fastest.
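A sketch of the batching idea (not from the original answer: send_digest() and the batch size are placeholders, and std::hash stands in for whatever real digest gets picked):

    #include <cstdio>
    #include <functional>
    #include <string>

    constexpr std::size_t kBatchBytes = 4 * 1024;    // "a few kilobytes", tunable

    // Placeholder transport: socket, pipe, COM... whatever ends up convenient.
    static void send_digest(std::size_t digest, std::size_t reportCount) {
        std::printf("batch of %zu reports, digest %zx\n", reportCount, digest);
    }

    class ReportBatcher {
    public:
        void add(const std::string& report) {
            buffer_ += report;
            buffer_ += '\n';
            ++count_;
            if (buffer_.size() >= kBatchBytes) flush();
        }
        void flush() {
            if (buffer_.empty()) return;
            send_digest(std::hash<std::string>{}(buffer_), count_);  // one send per batch
            buffer_.clear();
            count_ = 0;
        }
    private:
        std::string buffer_;
        std::size_t count_ = 0;
    };

One send per few kilobytes of reports, instead of one per file operation, is what buys the freedom to pick the convenient transport.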
If you stay in the Windows world (none of your machines runs Linux or anything else), named pipes are a good choice, because they are fast and can be accessed across machine boundaries. I think shared memory is out of the race, because it can't cross a machine boundary. Distributed COM allows you to formulate the contract in IDL, but I think XML messages via pipes are also fine. XML messages have the benefit of working completely independently of the channel: if you need Linux later, you can switch to TCP/IP transport and still send your XML messages.
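A minimal server-side sketch of the named-pipe option (not from the original answer; the pipe name and buffer sizes are made up):

    #include <windows.h>
    #include <cstdio>

    int main() {
        HANDLE pipe = CreateNamedPipeA(
            "\\\\.\\pipe\\build_monitor",          // clients open \\server\pipe\build_monitor remotely
            PIPE_ACCESS_INBOUND,                   // children write, the parent reads
            PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
            PIPE_UNLIMITED_INSTANCES,
            0, 64 * 1024,                          // out/in buffer sizes
            0, nullptr);
        if (pipe == INVALID_HANDLE_VALUE) return 1;

        if (ConnectNamedPipe(pipe, nullptr) || GetLastError() == ERROR_PIPE_CONNECTED) {
            char buf[4096];
            DWORD read = 0;
            while (ReadFile(pipe, buf, sizeof buf, &read, nullptr) && read > 0) {
                std::printf("report: %.*s\n", static_cast<int>(read), buf);  // one message per read
            }
        }
        CloseHandle(pipe);
        return 0;
    }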
Some additional techniques with limitations:
Another forgotten but hot candidate is RPC (remote procedure calls). Lots of Windows services rely on this, but I think RPC is hard to program.
If you are on the same machine and you only need to send some status information, you can register a window message via RegisterWindowMessage() and send messages via SendMessage().
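A sketch of that same-machine signaling (not from the original answer; the message name and window class are hypothetical):

    #include <windows.h>

    int main() {
        // An application-unique message ID, shared by sender and receiver.
        UINT msg = RegisterWindowMessageA("BuildMonitor.Status");
        // Find the receiving window; the class name is made up for the example.
        HWND target = FindWindowA("BuildMonitorWndClass", nullptr);
        if (target != nullptr)
            SendMessageA(target, msg, /*wParam=*/1, /*lParam=*/0);
        return 0;
    }

Note this only carries two pointer-sized values per message, so it suits status pings, not bulk report data.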
Apart from all the suggestions from Thomas, you might also just use a common database to store the results. And if that is too slow, use one of the more modern (and fast) key/value databases (like Tokyo Cabinet / MemcacheDB / etc.).
This sounds like a lot of overkill for the task of verifying the files used in a build. How about just scanning the build files? Or capturing the output from the build tools?