Streaming, Asynchronous, Language Independent technology to transfer Object data - web-services

What are the best-practice / industry-standard technologies for the following requirements?
1. Allow transfer of business objects from one client / server to another
2. Language- and platform-independent
3. Supports streaming, to allow passing large data (e.g. a connected, stateful conversation)
4. Asynchronous in nature (doesn't block, allows monitoring progress)
SOAP workaround
Requirements 1 and 2 point to SOAP web services, but 3 and 4 make it a little hard to implement (don't they?)
I was thinking of the following "hack", but I don't like it, and I'm sure there is a better solution.
To support 3 and 4, the SOAP web service can have methods that pass the data in chunks, e.g.
void startObjTransfer(String objectId);
void addObjChunk(String objectId, ObjData currentChunk);
void endObjTransfer(String objectId);
Where ObjData contains a partial graph of the data, and knowledge of its location in the graph.
To better support 4, a method like the following can be used to ask how much progress has been made:
int getObjTransferProgress(String objectId);
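For illustration, here is a rough sketch of what the receiving end of such a chunked protocol might do; the chunk layout, the session bookkeeping and all type names are hypothetical and not tied to any particular SOAP stack:
#include <cstddef>
#include <map>
#include <vector>

// Hypothetical chunk type: a partial graph plus its position in the full graph.
struct ObjChunk {
    std::size_t offset;             // where this fragment belongs in the object graph
    std::vector<char> payload;      // serialized partial data
};

// Server-side reassembly state for one objectId (created by startObjTransfer).
class ObjTransferSession {
public:
    explicit ObjTransferSession(std::size_t expectedBytes)
        : expected_(expectedBytes) {}

    // Backing logic for addObjChunk(objectId, currentChunk).
    void addChunk(const ObjChunk& chunk) {
        chunks_[chunk.offset] = chunk.payload;
        received_ += chunk.payload.size();
    }

    // Backing logic for getObjTransferProgress(objectId): 0.0 .. 1.0.
    double progress() const {
        return expected_ == 0 ? 0.0
                              : static_cast<double>(received_) / expected_;
    }

    // Backing logic for endObjTransfer(objectId): stitch the fragments together.
    std::vector<char> assemble() const {
        std::vector<char> whole;
        for (const auto& entry : chunks_)
            whole.insert(whole.end(), entry.second.begin(), entry.second.end());
        return whole;
    }

private:
    std::size_t expected_ = 0;
    std::size_t received_ = 0;
    std::map<std::size_t, std::vector<char>> chunks_;   // ordered by offset
};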
What do you think about the above? Isn't there (hopefully) a better approach, even a non-SOAP one?
RMI / COM / .NET Remoting / DCOM
Not language-independent.
CORBA
Well, no.
REST
Doesn't address 3 and 4 (SOAP + buzz?)
AJAX / COMETD
Related to question: Asynchronous web service streaming
Not sure how this will work, please explain
Message Queue?
Will that work for this purpose?

I think Caucho Hessian should fulfill your needs (including streaming, platform independence, ...). You might also take a look at Thrift, from the Facebook guys.

Related

Is there any alternative approach to implement WebRTC SFU, to have only 1 upload stream?

I have a server which is able to relay WebRTC media data from A to B. For video conferencing, if we go with the P2P approach then a mesh network is created. Whenever P2P doesn't work, we can fall back to this relay server.
The main problem is that in the mesh network, the number of upload links is N - 1 for N participants, so the total number of connections grows to N * (N - 1) (e.g. 6 participants already means 30 connections). Usually a mesh network only allows 5-6 stable connections.
Many online sources suggest implementing an SFU. If the SFU decrypts the media data and then re-encrypts it for every peer, then that virtually requires a WebRTC component on the server side.
Are there any lightweight C/C++ based libraries which help in this regard?
Is there any better alternative strategy?
BTW, I tried sharing the same offer with all the peers, each with their own answer, but as expected it didn't work. The peer gets disconnected after receiving a few chunks.
I have referred to the related posts below:
WebRTC - scalable live stream broadcasting / multicasting
Multi-party WebRTC without SFU
Can I re-use an "offer" in WebRTC for mulitple connections?
There are quite a few free and open source projects that implement an SFU:
Jitsi is probably the best known, but it is written in Java, and might therefore be unsuitable in some deployments;
Janus is written in C; it is small, efficient, and well supported, but might not be the easiest to understand;
Ion-SFU and Galène are written in Go, and might be easier to adapt to your needs.

Using ZeroMQ in a cross-platform desktop/mobile app-suite for architecture concerns

I need to make an architecture decision on a cross-platform app-suite. I basically want to try a new way of decoupling modules and implementing network I/O using ZeroMQ, knowing it's a message queue for in-process, inter-process and networking applications. But I'm not sure how it can fit in with my case.
I'd appreciate it if anyone could clarify a few things before I spend the next week reading their intriguing book: http://zguide.zeromq.org/page:all
I've checked these questions but didn't get my answers:
How to use zeroMQ in Desktop application
How to use ZeroMQ in an GTK/QT/Clutter application?
My requirements:
Desktop hosts on Windows and macOS, as separated console backend and GUI frontend; the backend must be written in C++;
Mobile guests on iOS and Android, backend written in C++;
Desktop talks with mobile using TCP;
Old Way
As for the desktop backend (the console app), a few years back, my first step would be writing a multithreaded architecture based on Observer/Command patterns:
Set the main thread for UI and spawn a few threads.
One "scheduler" thread for message handling: a queue to get notifications from other modules and another queue for commands. Each command type introduces its own dependencies. The scheduler pumps messages and issues commands accordingly.
Other "executor" threads for device monitoring, multiplex network I/O between one desktop and multiple mobile devices, all sending messages to scheduler to have real work scheduled.
I would then need to implement thread-safe message queues, and will inevitably have coupling between schedulers and a bunch of Command classes that are essentially just function wrappers of those executors' behaviors. With C++, this would be a lot of boilerplate code.
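To make the boilerplate concrete: even just the thread-safe queue underneath such a scheduler is already this much hand-written code (a sketch; Message is a placeholder type):
#include <condition_variable>
#include <mutex>
#include <queue>

// Placeholder for whatever notification/command type the scheduler pumps.
struct Message { int type; /* payload ... */ };

// The kind of hand-rolled, thread-safe queue that has to be written and tested by hand.
class MessageQueue {
public:
    void push(Message m) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(m));
        }
        cv_.notify_one();
    }

    // Blocks until a message arrives.
    Message pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        Message m = std::move(queue_.front());
        queue_.pop();
        return m;
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<Message> queue_;
};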
New Way to Validate
But it's 2019 so I expect less hand-written code and would try something new. With ZeroMQ, I'd love to see if my expectation holds. I'd love to ...
Remove the need of writing a scheduler and message/command queues from scratch, by just passing ZeroMQ requests between in-process modules across threads, because writing scheduling from scratch is tedious and unproductive.
Simplify network I/O between desktop and mobile devices. For this part I've tried ASIO and it wasn't significantly more convenient than raw socket and select, plus it's C++-only.
Decouple GUI and console app with ZeroMQ-based IPC, so that GUI can be rewritten using different technologies in various languages.
Perceive low-latency for both desktop and mobile users.
Is my expectation reasonable?
If you are new to the ZeroMQ domain, feel free to review this and enjoy a very first look at "ZeroMQ Principles in less than Five Seconds" before diving into further details.
A post referred to above presented the assumption that:
ZeroMQ is based on the assumption that there is a while (1) ... loop that it is inserted into
which is completely wrong and misleads any architecture planning / assessment efforts.
ZeroMQ is a feature-rich signaling/messaging metaplane, intended to provide a lot of services for application-level code, which can enjoy a lightweight re-use of the smart, complex-on-the-low-level, efficient signaling/messaging infrastructure, be it used in an in-process, inter-process or inter-node multi-agent distributed fashion, using for that goal any of the many already available transport-class protocols:
{ inproc:// | ipc:// | tipc:// | vmci:// | tcp:// | pgm:// | epgm:// | udp:// }
This said, let's follow your shopping list:
My requirements:
c++ ZeroMQ: [PASSED] Desktop hosts on Windows and macOS, as separated console backend and GUI frontend; the backend must be written in C++;
c++ ZeroMQ: [PASSED] Mobile guests on iOS and Android, backend written in C++;
tcp ZeroMQ: [PASSED] Desktop talks with mobile using TCP;
I'd love to ...
Remove the need of writing a scheduler and message/command queues from scratch, by just passing ZeroMQ requests between in-process modules across threads, because writing scheduling from scratch is tedious and unproductive.
Simplify network I/O between desktop and mobile devices. For this part I've tried ASIO and it wasn't significantly more convenient than raw socket and select, plus it's C++-only.
Decouple GUI and console app with ZeroMQ-based IPC, so that GUI can be rewritten using different technologies in various languages.
Perceive low-latency for both desktop and mobile users.
Is my expectation reasonable?
Well:
there is obviously no need to write the scheduler + queues from scratch. Queue management is built into ZeroMQ and actually hidden inside the service metaplane. Scheduling things among many actors is, on the other hand, your design decision and has nothing to do with ZeroMQ or any other technology of choice. Given your system-design intentions, you decide the way ("autogenerated magics" are still more wishful thinking than any near-future system-design reality).
[PASSED] QUEUES : built-in ZeroMQ
[NICE2HAVE] SCHEDULER : auto-generated for any generic distributed many-agent-wide ecosystem (yet, hard to expect in any near future)
network I/O (and, in principle, any I/O) is already simplified in the ZeroMQ hierarchy of services
[PASSED] : SIMPLIFIED NETWORK I/O - ZeroMQ already provides all the abstracted transport-class-related services, hidden behind the transparent use of the signaling/messaging metaplane, so the application code gets to "just" { .send() | .poll() | .recv() }
[PASSED] : GUI DECOUPLING - decoupling the GUI from any other part of the ParcPlace-Systems-pioneered MVC architecture works. I have used this since ZeroMQ v2.11 for a (far) remote keyboard over a TCP/IP network, and it is even possible to integrate it into an actor-based GUI; Tkinter-GUI actors, for example, may well serve such a distributed local-Visual / remote-distributed-Controller / remote-distributed-Model. If the mobile-terminal O/S introduces more complex constraints on the local-Visual MVC component, proper adaptations ought to be validated with domain experts on that particular O/S's properties. The ZeroMQ signaling/messaging metaplane has so far not been found to impose any constraints per se.
[PASSED] : LATENCY - ZeroMQ was designed from the very start to deliver ultimately low latency as a must. Given it can feed HFT-trading ecosystems, desktop/mobile systems are orders of magnitude less restrictive in the sense of the end-to-end lump sum of all the transport + O/S-handling latencies accumulated along the way.
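As a concrete illustration of the built-in queueing mentioned above, here is a minimal sketch of two threads talking over an inproc:// PUSH/PULL pair, using the cppzmq header zmq.hpp (the socket roles and the endpoint name are only examples, not a prescribed design):
#include <zmq.hpp>
#include <iostream>
#include <string>
#include <thread>

int main() {
    zmq::context_t ctx(1);                        // one context shared by all threads

    // "Scheduler" side: bind first (required for inproc on older libzmq versions).
    zmq::socket_t pull(ctx, zmq::socket_type::pull);
    pull.bind("inproc://scheduler");

    // "Executor" thread: pushes notifications instead of locking a hand-made queue.
    std::thread executor([&ctx] {
        zmq::socket_t push(ctx, zmq::socket_type::push);
        push.connect("inproc://scheduler");       // the same code works with tcp://... later
        for (int i = 0; i < 3; ++i) {
            std::string note = "device-event-" + std::to_string(i);
            push.send(zmq::buffer(note), zmq::send_flags::none);
        }
    });

    // The queue between the threads lives inside ZeroMQ; the scheduler only recv()s.
    for (int i = 0; i < 3; ++i) {
        zmq::message_t msg;
        (void)pull.recv(msg, zmq::recv_flags::none);
        std::cout << "scheduled: " << msg.to_string() << "\n";
    }

    executor.join();
}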

ZeroC ICE vs 0MQ/ZeroMQ vs Crossroads IO vs Open Source DDS

How does ZeroC ICE compare to 0MQ? I know that 0MQ/Crossroads and DDS are very similar, but I can't seem to figure out where ICE comes in.
I need to quickly implement a system that offloads real-time market-data from C++ to C#, as a first phase of my project. The next phase will be to implement an Event Based architecture with an underlying Pub/Sub design.
I am willing to use TCP, but the system is currently running on a single 24-core server, so an IPC option would be nice. From what I understand, ICE is TCP-only, while DDS and 0MQ have an IPC option.
Currently, I am leaning towards using Protobuf with either ICE or Crossroads IO. I got turned off by the OpenSplice DDS website. I've done lots of research on the various options and was originally considering OpenMPI + boost::mpi, but there does not seem to be an MPI for .NET.
My question is:
How does ICE compare to 0MQ? I can't wrap my head around this and was unable to find anything online that compares the two.
Thanks in advance.
........
More about my project:
Currently using CMake/C++ on Windows, but the plan is to move to CentOS at some point. An additional desired feature is to store the tick data and all the messages in a "NoSQL" database such as HBase/Hadoop or HDF5. Do any of these middleware/messaging/pub-sub libraries have any database integration?
Some thoughts about ZeroC:
Very fast; Able to have multiple endpoints; Able to load-balance across the endpoints; Able to reconnect to a different endpoint in case one of the nodes goes down. This is transparent to the end user; Has a good toolchain (IceGrid, IceStorm, IceBox, etc.); Distributed, high availability, multiple failover, etc.
Apart from that, I have used it for hot-swapping code modules (something similar to Erlang) by having the client create the proxy with multiple endpoints, and later on bringing down each endpoint one by one for a quick upgrade. With the transparent retry to a different endpoint, I could have the system up and running the whole time I did an upgrade. Not sure if this is an advertised feature or an unadvertised side effect :)
Overall, it is very easy to scale out your servers if need be using ZeroC Ice.
I know ZeroMQ provides a fantastic set of tools and messaging patterns, and I would keep using it for my pet projects. However, the problem that I see is that it is very easy to go overboard and lose track of all your distributed components, and keeping track of them is a must in a distributed environment. How will you know where your clients/servers are when you need to upgrade? If one of the components down the chain does not receive a message, how do you identify where the issue is: the publisher, the client, or any one of the bridges (REP/REQ, XREP/XREQ, etc.) in between?
Overall, ZeroC provides a much better toolset and ecosystem for enterprise solutions.
And it is open source :)
Jaybny,
ZMQ:
If you want really good performance and the only goal of Phase 1 of your project is to move data from C++ to C#, then ZMQ is the best option.
Having a pub/sub model for an event-driven architecture is also something that ZMQ can help you with, via its built-in messaging patterns.
ZMQ also supports your IPC requirements in this case. E.g. you can have one instance of your application that consumes all 24 cores by multithreading and communicating via IPC.
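For the Phase 1 "move data from C++ to C#" path, a minimal sketch of the C++ publisher side with cppzmq might look like this; the endpoint, topic prefix and tick format are made up, and the C# side would connect a SUB socket through one of the .NET ZeroMQ bindings (a loopback tcp:// endpoint is used here, since ipc:// is not available on Windows, as noted further down):
#include <zmq.hpp>
#include <chrono>
#include <string>
#include <thread>

int main() {
    zmq::context_t ctx(1);
    zmq::socket_t pub(ctx, zmq::socket_type::pub);
    pub.bind("tcp://127.0.0.1:5556");    // loopback TCP; ipc:// would also work off Windows

    for (int i = 0; ; ++i) {
        // Topic prefix first in the text, so subscribers can filter cheaply on it.
        std::string tick = "TICK MSFT " + std::to_string(100.0 + i % 10);
        pub.send(zmq::buffer(tick), zmq::send_flags::none);
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}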
ZeroC Ice:
Ice is an RPC framework, very much like CORBA.
E.g.:
Socket/ZMQ - you send a message over the wire. It is read at the other end, the message is parsed, some action is taken, etc.
ZeroC Ice - you create a contract between client and server. The contract is nothing but a template of a class. The client then calls a proxy method of that class, and the server implements/actions it and returns the value. Thus int result = mathClass.Add(10,20) is what the client calls. The method, parameters, etc. are marshalled and sent to the server, the server implements the Add method and returns the result, and the client gets 30 as the result. So on the client side the API is nothing but a proxy for a servant running on a remote host.
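To make the contrast concrete, this is roughly what the "send a message over the wire, parse it, act on it" style looks like in raw ZeroMQ for the same Add(10, 20) call; the text wire format is made up for the sketch, and Ice would instead generate all of this proxy/servant plumbing from the contract:
#include <zmq.hpp>
#include <cstdio>
#include <iostream>
#include <string>
#include <thread>

int main() {
    zmq::context_t ctx(1);

    // "Server": receive "ADD 10 20", unmarshal by hand, compute, send the reply back.
    std::thread server([&ctx] {
        zmq::socket_t rep(ctx, zmq::socket_type::rep);
        rep.bind("tcp://127.0.0.1:5555");
        zmq::message_t req;
        (void)rep.recv(req, zmq::recv_flags::none);
        int a = 0, b = 0;
        std::sscanf(req.to_string().c_str(), "ADD %d %d", &a, &b);   // hand-rolled unmarshalling
        std::string reply = std::to_string(a + b);
        rep.send(zmq::buffer(reply), zmq::send_flags::none);
    });

    // "Client": the proxy is just a string you format yourself.
    zmq::socket_t req(ctx, zmq::socket_type::req);
    req.connect("tcp://127.0.0.1:5555");
    std::string call = "ADD 10 20";
    req.send(zmq::buffer(call), zmq::send_flags::none);
    zmq::message_t resp;
    (void)req.recv(resp, zmq::recv_flags::none);
    std::cout << resp.to_string() << "\n";   // "30" -- what mathClass.Add(10, 20) returns with Ice

    server.join();
}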
Conclusion:
ZeroC ICE has some nice enterprisy features which are really good. However, for your project requirements, ZMQ is the right tool.
Hope this helps.
For me, the correct answer was Crossroads I/O. It does everything I need, but I am still unable to do pub/sub when using protobufs... I'm sure ZeroC ICE is great for distributed IPC, but 0MQ/Crossroads gives you the added flexibility of inter-thread communication.
Note: on Windows, 0MQ does not have IPC.
So, all in all, the Crossroads fork of 0MQ is the best, but you will have to roll your own Windows IPC (or use tcp::127..), and your own publisher-side topic filtering for pub/sub.
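One way to get pub/sub working together with protobufs is to send a plain-text topic as its own first frame and the serialized protobuf payload as a second frame, so subscription prefix matching never has to look inside the binary body. A sketch, assuming a hypothetical Tick message generated by protoc:
#include <zmq.hpp>
#include <string>
#include "tick.pb.h"   // hypothetical: message Tick { string symbol = 1; double price = 2; }

// Publisher side: topic frame first, serialized protobuf second.
void publishTick(zmq::socket_t& pub, const Tick& tick) {
    std::string topic = "TICK." + tick.symbol();          // human-readable, filterable prefix
    std::string body;
    tick.SerializeToString(&body);                         // opaque binary payload
    pub.send(zmq::buffer(topic), zmq::send_flags::sndmore);
    pub.send(zmq::buffer(body), zmq::send_flags::none);
}

// Subscriber side: filtering happens on the text frame; only the second frame is parsed.
Tick receiveTick(zmq::socket_t& sub) {
    zmq::message_t topic, body;
    (void)sub.recv(topic, zmq::recv_flags::none);
    (void)sub.recv(body, zmq::recv_flags::none);
    Tick tick;
    tick.ParseFromArray(body.data(), static_cast<int>(body.size()));
    return tick;
}
The subscriber sets its subscription to a prefix such as "TICK." (ZMQ_SUBSCRIBE); since prefix matching is applied against the first frame of a multipart message, the protobuf body stays out of the filtering entirely.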
nanomsg, from the guy who wrote Crossroads and 0MQ (I think).
http://nanomsg.org/

Mac (or c++) connection to binary WCF

I've got a WCF service being hosted using TCP/IP (netTcpBinding):
var baseWcfAddress = getWcfBaseUri();
host = new ServiceHost(wcfSingleton, baseWcfAddress);
var throttlingBehavior = new System.ServiceModel.Description.ServiceThrottlingBehavior();
throttlingBehavior.MaxConcurrentCalls = Int32.MaxValue;
throttlingBehavior.MaxConcurrentInstances = Int32.MaxValue;
throttlingBehavior.MaxConcurrentSessions = Int32.MaxValue;
host.Description.Behaviors.Add(throttlingBehavior);
host.Open();
I'd like to write a Mac client in Objective C or C++. Are there any existing classes that can facilitate the connection to my WCF service? If not, what are my steps & options to making it happen?
Every binding starting with net is considered non-interoperable. Even a pure .NET client without WCF is not able to communicate with the service without enormous effort spent reimplementing the whole binary protocol and encoding. You should probably start with:
.NET Message Framing protocol
.NET Binary Format: XML Data Structure
One option for the Mac is using Mono, which should have support for netTcpBinding.
Your real option for Objective-C / C++ on the Mac is creating an interoperable WCF service exposing the data over HTTP. If you are not the owner of the service, you can create a routing WCF service which acts as a bridge between interoperable HTTP and netTcp.
Edit:
One more thing - if the service uses netTcpBinding with the default configuration, it is secured with Windows security. I expect that this can be another show-stopper on a Mac.
In the context of the comment:
netTcpBinding was found to be one of the quicker options -- certainly much faster than the vanilla BasicHttpBinding/WS binding that was tried. That's the only real need: since netTcpBinding uses binary rather than straight text, it was faster.
Firstly, I have looked at this many, many times - and oddly enough, every time I test it, NetTcpBinding completely fails to be any quicker than the basic xml offering. However, since performance is your goal, here are some options...
I'm a bit biased (since I wrote it), but I strongly recommend "protobuf-net" here; since it is designed along the same idioms as most .NET serializers, it is pretty easy to swap in, but it is faster (CPU) and smaller (bandwidth) in every test I make for this - or tests that other people make. And because the protobuf format is an open specification, you don't have to worry about the "Net" bindings being non-interoperable.
For MS .NET, I have direct WCF hooks that can be used purely from config, making enabling it a breeze. I honestly don't know how well that will work with the Mono equivalent - I haven't tried. It might work, but if not, the other option is to simply throw a byte[] or Stream over the network and worry about (de)serialization manually.
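If you end up on the manual route, the C++ half on the Mac with the official protobuf library is short; in this sketch Quote is a hypothetical message compiled with protoc, and how the bytes travel is left to whatever transport you pick:
#include <string>
#include "quote.pb.h"   // hypothetical: message Quote { string symbol = 1; double bid = 2; double ask = 3; }

// Turn a received byte buffer (socket payload, HTTP body, ...) back into an object.
bool decodeQuote(const void* data, int size, Quote& out) {
    return out.ParseFromArray(data, size);
}

// And the reverse, for anything the client sends back to the service.
std::string encodeQuote(const Quote& quote) {
    std::string bytes;
    quote.SerializeToString(&bytes);
    return bytes;
}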
My preferred layout here is basic-http binding with MTOM enabled, which gives you the simplicity and portability of the simplest xml binding, without the overhead of base-64 for the binary data.

How to create simple http server with boost capable of receiving data editing it and sharing?

So, using any free, open-source, cross-platform library like Boost, how do you create a web service capable of receiving a data stream (for example a stream of MP3 frames) on one URL like http://adress:port/service1/write/ and capable of sharing the latest received data with all consumers on http://adress:port/service1/read/? Of course MP3 is just an example of packed, streamable data - generally it can be anything packed. How do you create such a thing?
Generally, I am honestly trying to understand how to do such a thing with the C++ Network Library, but it is just quite unclear to me.
The boost::asio documentation has four examples of complete HTTP server implementations, each with a slightly different threading architecture.
http://www.boost.org/doc/libs/1_43_0/doc/html/boost_asio/examples.html
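For the write-on-one-URL / read-on-another-URL case from the question, the core idea can be sketched with a deliberately naive, single-threaded, blocking Boost.Asio loop; there is no real HTTP parsing, no chunked uploads and no concurrency here, and only the paths are taken from the question:
#include <boost/asio.hpp>
#include <iostream>
#include <iterator>
#include <string>

using boost::asio::ip::tcp;

int main() {
    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 8080));
    std::string latest;                                   // last blob received on /service1/write/

    for (;;) {
        tcp::socket socket(io);
        acceptor.accept(socket);

        // Read at least up to the end of the headers (naive).
        boost::asio::streambuf request;
        boost::asio::read_until(socket, request, "\r\n\r\n");
        std::istream stream(&request);

        std::string method, target, line;
        stream >> method >> target;
        std::getline(stream, line);                                           // rest of the request line
        while (std::getline(stream, line) && !line.empty() && line != "\r") {} // skip the headers

        std::string response;
        if (method == "POST" && target == "/service1/write/") {
            // Naive: whatever body bytes arrived along with the headers become the shared payload.
            latest.assign(std::istreambuf_iterator<char>(stream),
                          std::istreambuf_iterator<char>());
            response = "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n";
        } else if (method == "GET" && target == "/service1/read/") {
            response = "HTTP/1.1 200 OK\r\nContent-Length: " +
                       std::to_string(latest.size()) + "\r\n\r\n" + latest;
        } else {
            response = "HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n";
        }
        boost::asio::write(socket, boost::asio::buffer(response));
    }
}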
You do not say what platform to use, but if Windows is an alternative, the Windows HTTP API is easy to use and a great performer.
http://msdn.microsoft.com/en-us/library/aa364510(VS.85).aspx