Mac (or C++) client connection to a binary WCF service

I've got a WCF service being hosted using TCP/IP (netTcpBinding):
// Host the singleton instance over net.tcp (netTcpBinding)
var baseWcfAddress = getWcfBaseUri();
host = new ServiceHost(wcfSingleton, baseWcfAddress);

// Lift the default throttling limits
var throttlingBehavior = new System.ServiceModel.Description.ServiceThrottlingBehavior();
throttlingBehavior.MaxConcurrentCalls = Int32.MaxValue;
throttlingBehavior.MaxConcurrentInstances = Int32.MaxValue;
throttlingBehavior.MaxConcurrentSessions = Int32.MaxValue;
host.Description.Behaviors.Add(throttlingBehavior);

host.Open();
I'd like to write a Mac client in Objective-C or C++. Are there any existing classes that can facilitate the connection to my WCF service? If not, what are my steps & options for making it happen?

Every binding whose name starts with net is considered non-interoperable. Even a pure .NET client without WCF cannot communicate with the service without enormous effort spent reimplementing the whole binary protocol and encoding. You should probably start with:
.NET Message Framing protocol
.NET Binary Format: XML Data Structure
One option for the Mac is Mono, which should have support for netTcpBinding.
Your real option for Objective-C / C++ on the Mac is to create an interoperable WCF service that exposes the data over HTTP. If you are not the owner of the service, you can create a WCF routing service to act as a bridge between interoperable HTTP and net.tcp.
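If you go the HTTP bridge route, the Mac side can then be plain C++. Here is a minimal sketch using libcurl to POST a SOAP 1.1 envelope to a BasicHttpBinding endpoint; the service address, operation name, and namespace are hypothetical placeholders:

#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();

    // BasicHttpBinding speaks SOAP 1.1: text/xml plus a SOAPAction header.
    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: text/xml; charset=utf-8");
    headers = curl_slist_append(headers, "SOAPAction: \"http://tempuri.org/IMyService/MyOperation\""); // hypothetical

    const char* envelope =
        "<s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\">"
        "<s:Body><MyOperation xmlns=\"http://tempuri.org/\"/></s:Body>"
        "</s:Envelope>";

    curl_easy_setopt(curl, CURLOPT_URL, "http://myserver:8000/MyService"); // hypothetical bridge address
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, envelope);
    CURLcode rc = curl_easy_perform(curl);  // response body is written to stdout by default

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}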
Edit:
One more thing - if the service uses netTcpBinding with the default configuration, it is secured with Windows security. I expect that this can be another show-stopper on the Mac.

In the context of the comment:
netTcpBinding was found to be one of the quicker options -- certainly much faster than the vanilla BasicHttpBinding/WS binding that was tried. That's the only real need: since netTcpBinding uses binary rather than straight text, it was faster.
Firstly, I have looked at this many, many times - and oddly enough, every time I test it, NetTcpBinding completely fails to be any quicker than the basic XML offering. However, since performance is your goal, here are some options...
I'm a bit biased (since I wrote it), but I strongly recommend "protobuf-net" here; since it is designed along the same idioms as most .NET serializers, it is pretty easy to swap in, but it is faster (CPU) and smaller (bandwidth) in every test I make for this - or tests that other people make. And because the protobuf format is an open specification, you don't have to worry about the "Net" bindings being non-interoperable.
For MS .NET, I have direct WCF hooks that can be used purely from config, making it a breeze to enable. I honestly don't know how well that will work with the Mono equivalent - I haven't tried. It might work, but if not, the other option is to simply throw a byte[] or Stream over the network and worry about (de)serialization manually.
My preferred layout here is basic-http binding with MTOM enabled, which gives you the simplicity and portability of the simplest xml binding, without the overhead of base-64 for the binary data.
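If you end up pulling protobuf payloads over basic-http from the Mac client, decoding them in C++ is straightforward with the official protobuf library. A minimal sketch, assuming a hypothetical Quote message compiled with protoc:

// quote.proto (hypothetical):
//   message Quote { optional string symbol = 1; optional double price = 2; }

#include "quote.pb.h"   // generated by: protoc --cpp_out=. quote.proto
#include <string>

bool handlePayload(const std::string& bytes) {
    Quote q;
    if (!q.ParseFromString(bytes))   // decode the protobuf wire format
        return false;
    // ...use q.symbol() and q.price()...
    return true;
}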

How to use WebRTC in a C++ application?

I'm trying to write a C++ command line program for peer-to-peer file transfer. My idea is to establish a connection with another machine and send file data directly. My target platform is Windows, but interoperability with Linux and macOS would be nice. I want this program to be standalone and not require a web browser.
I did some research and it seems that WebRTC would fit the bill, but I can't find much information on using it with C++.
Is it possible to build a standalone executable that utilizes WebRTC without requiring users to download any dependencies in order to use my program?
As the name suggests, to have the "RTC" you need the "Web" component, in the form of either a browser or a library.
The C++ library is quite huge, and it is not a trivial task to understand it and write against it in a short period. Browsers provide the APIs in the form of JavaScript calls, which are relatively easier to implement.
There might be commercial C++ APIs available on the internet.

Use Go within a Qt C++ project

Is it possible to use a Go API in a Qt C++ project?
I would like to use the following Google API written in Go: https://cloud.google.com/speech-to-text/docs/reference/libraries#client-libraries-install-go
Is it possible to use a Go API in a Qt C++ project?
It could be possible, but it might not be easy, and it would be very brittle to run Go and Qt code in the same process, since Go and Qt have very different threading (goroutine) and memory models.
However, Go has (in its standard library) many powerful packages to ease the development of server programs, in particular of HTTP or JSONRPC servers.
Perhaps you might consider running two different processes communicating via inter-process communication facilities. The details are operating-system specific; I assume you run Linux. Your Qt application could then start the Go program using QProcess and later communicate with it (behaving as a client to your specialized Go "server"-like program).
Then you could use HTTP or JSONRPC to remotely call your Go functions from your Qt application. You need some HTTP client library in Qt (one is already there in Qt Network, and you might also use libcurl) or some JSONRPC client library. Your Go program would be a specialized HTTP or JSONRPC server (and a Google Speech to Text client), and your Qt program would be its only client (and would start it). So your Go program would be a specialized proxy. You could even use pipe(7)s, unix(7) sockets, or fifo(7)s to increase the "privacy" of the communication channel.
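A minimal Qt sketch of that arrangement, assuming a hypothetical Go helper binary speech-proxy listening on 127.0.0.1:8080:

#include <QCoreApplication>
#include <QProcess>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>
#include <QDebug>

int main(int argc, char** argv) {
    QCoreApplication app(argc, argv);

    // Start the Go helper; it is assumed to listen on 127.0.0.1:8080.
    QProcess goServer;
    goServer.start("./speech-proxy", QStringList());
    if (!goServer.waitForStarted())
        return 1;

    // Talk to it over HTTP, as a normal client would.
    QNetworkAccessManager nam;
    QNetworkRequest req(QUrl("http://127.0.0.1:8080/transcribe")); // hypothetical route
    req.setHeader(QNetworkRequest::ContentTypeHeader, "application/octet-stream");
    QNetworkReply* reply = nam.post(req, QByteArray("...audio bytes..."));

    QObject::connect(reply, &QNetworkReply::finished, [reply, &app]() {
        qDebug() << reply->readAll();   // JSON result from the Go proxy
        app.quit();
    });
    return app.exec();
}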
If the Google Speech to Text API is huge (but it probably is not), you might use Go's reflective or introspective abilities to generate some C++ glue code for Qt: go/ast, go/build, go/parser, go/importer, etc.
BTW, it seems that the Google Speech to Text protocol uses JSON over HTTP (it appears to be a Web API) and has a documented REST API, so you might directly code the relevant calls in C++ (of course you need to understand all the details of the protocol: the relevant HTTP requests and JSON formats), without any Go code (or process). If you go that route, I recommend making your Qt (or C++) Google Speech to Text code a separate free-software library (to be able to get feedback and help from outside).

Want to use native C libraries in a web application, what are my options?

I have many legacy C libraries used for numerical analysis and scientific computing (e.g. simulation) that I want to use in a web application I am building (so far I have only been using JavaScript to make a user interface). What options do I have for doing this on the client side and/or the server side? I heard about using Native Client with Chrome, but I dislike that the client has to turn on the Native Client flag for this to work.
On Server Side:
To begin with, CGI (Common Gateway Interface) is the most basic way to use native C libraries in a web application: you delegate to an executable (say, written in C) that generates the server-side web content.
But CGI is very primitive and inefficient: each request can result in the creation of a new process on the server. Here are some more viable alternatives:
Apache Modules let you run third party software within the web server itself.
FastCGI - Single Process handles more than one user request.
SCGI - Simple CGI
Refer: http://en.wikipedia.org/wiki/Common_Gateway_Interface#Alternatives
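For reference, a CGI program is just an executable that reads the request from environment variables/stdin and writes the HTTP response to stdout. A minimal C++ sketch (the legacy-library call is a placeholder):

#include <cstdlib>
#include <iostream>

int main() {
    // The web server passes the query string via an environment variable.
    const char* query = std::getenv("QUERY_STRING"); // may be null

    // Headers first, then a blank line, then the body.
    std::cout << "Content-Type: text/plain\r\n\r\n";
    std::cout << "query = " << (query ? query : "") << "\n";
    // ...call into the legacy numerical library here and print its results...
    return 0;
}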
On Client Side:
Good news & bad news:
You can use PNaCl (Portable Native Client) in Chrome, and it will be turned on by default.
BUT the first public release is expected in late 2013. Look for PNaCl.
You can't do much on the client side - there's no way you can expect the client to have these libraries, and no safe way to download and run them.
The simplest way is to write your server side any way you want, and access them through a web interface. Many languages customarily used for server side scripting can access native C libraries, or you can even write ordinary C applications and run them as scripting agents.
In the "really exotic" category, it is possible to run what starts as C code in the client
if you embed it in a sufficiently protected environment. For example, see the description
of how sqlite (a C database application) was made into a 100% pure java application by
embedding a mips simulator written in java.
http://blog.benad.me/2008/1/22/nestedvm-compile-almost-anything-to-java.html
Looked at Wt yet? It's pretty neat.
You also have the option of coding in CGI (ugly).
Although it is not C, Wt is written in C++. If you can ignore that part: Wt at your service.
For doing it client-side, you can use Emscripten. However, this will most probably require some refactoring of your existing code to fit JavaScript's asynchronous main loop requirement.
Note that Emscripten isn't a proof of concept or something like that. It is very powerful and already used to port complex code to the web. You can take a look at the demos (listed in the above URL) to see what can be done with it.
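A minimal sketch of what exposing one legacy routine through Emscripten can look like; the function name and body here are hypothetical placeholders:

#include <emscripten.h>

// EMSCRIPTEN_KEEPALIVE keeps the function exported so JavaScript can call it
// (e.g. as Module._simulate_step) after compiling with: em++ sim.cpp -O2 -o sim.js
extern "C" EMSCRIPTEN_KEEPALIVE
double simulate_step(double state, double dt) {
    // ...call into the legacy numerical routine here...
    return state + dt;   // placeholder computation
}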
It sounds like you're best off representing your legacy C library methods as a kind of web service on the server side. A raw CGI application is a pretty low-level starting point for this approach, but it is generally right.
There are C/C++ frameworks available for creating webservice servers, and client-side libraries that support webservice access and data representation. For the server side you could use gSOAP, for example.
Another possibility would be to use the webserver of your choice to transmit ordinary files and use a custom webserver (which wouldn't need to support the full HTTP spec) wired up to your C code to communicate with client-side Javascript.
Two minimal webservers you could use as base are libuv-webserver and nweb.

ZeroC ICE vs 0MQ/ZeroMQ vs Crossroads IO vs Open Source DDS

How does ZeroC ICE compare to 0MQ? I know that 0MQ/Crossroads and DDS are very similar, but I can't seem to figure out where ICE comes in.
I need to quickly implement a system that offloads real-time market data from C++ to C#, as the first phase of my project. The next phase will be to implement an event-based architecture with an underlying pub/sub design.
I am willing to use TCP... but the system is currently running on a single 24-core server, so an IPC option would be nice. From what I understand, ICE is only TCP, while DDS and 0MQ have an IPC option.
Currently, I am leaning towards using Protobuf with either ICE or Crossroads IO; I got turned off by the OpenSplice DDS website. I've done lots of research on the various options; I was originally considering OpenMPI + boost::mpi, but there does not seem to be an MPI for .NET.
My question is:
How does ICE compare to 0MQ? I can't wrap my head around this. I was unable to find anything online that compares the two.
thanks in advance.
........
More about my project:
Currently using CMake and C++ on Windows, but the plan is to move to CentOS at some point. An additional desired feature is to store the tick data and all the messages in a "NoSQL" database such as HBase/Hadoop or HDF5. Do any of these middleware/messaging/pub-sub libraries have any database integration?
Some thoughts about ZeroC:
Very fast
Able to have multiple endpoints
Able to load-balance across the endpoints
Able to reconnect to a different endpoint when one of the nodes goes down, transparently to the end user
Good tool chain (IceGrid, IceStorm, IceBox, etc.)
Distributed, high availability, multiple failover, etc.
Apart from that, I have used it for hot-swapping code modules (something similar to Erlang) by having the client create the proxy with multiple endpoints, and later bringing down each endpoint for a quick upgrade one by one. With the transparent retry to a different endpoint, I could keep the system up and running the whole time I did an upgrade. Not sure if this is an advertised feature or an unadvertised side-effect :)
Overall, it is very easy to scale out your servers if need be using ZeroC Ice.
I know ZeroMQ provides a fantastic set of tools and messaging patterns, and I would keep using it for my pet projects. However, the problem that I see is that it is very easy to go overboard and lose track of all your distributed components, and keeping track of them is a must in a distributed environment. How will you know where your clients/servers are when you need to upgrade? If one of the components down the chain does not receive a message, how do you identify where the issue is? The publisher? The client? Or any one of the bridges (REP/REQ, XREP/XREQ, etc.) in between?
Overall, ZeroC provides a much better toolset and ecosystem for enterprise solutions.
And it is open source :)
Jaybny,
ZMQ:
If you want really good performance and the only job in Phase 1 of your project is to move data from C++ to C#, then ZMQ is the best option.
Having a pub/sub model for an event-driven architecture is also something that ZMQ can help you with, via its built-in messaging patterns.
ZMQ also supports your IPC requirements in this case. E.g.: you can have one instance of your application that consumes 24 cores by multithreading and communicating via IPC.
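A minimal sketch of the PUB side over IPC using the 0MQ C API (3.x+); the endpoint name and tick format are hypothetical:

#include <zmq.h>
#include <cassert>
#include <cstring>

int main() {
    void* ctx = zmq_ctx_new();
    void* pub = zmq_socket(ctx, ZMQ_PUB);
    // The ipc:// transport works on Linux; on Windows fall back to loopback TCP.
    int rc = zmq_bind(pub, "ipc:///tmp/ticks");
    assert(rc == 0);

    const char msg[] = "TICK AAPL 182.33";   // the topic is just the message prefix
    zmq_send(pub, msg, strlen(msg), 0);

    // A subscriber would connect a ZMQ_SUB socket and filter with:
    //   zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "TICK", 4);
    zmq_close(pub);
    zmq_ctx_destroy(ctx);
    return 0;
}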
ZeroC Ice:
Ice is an RPC framework, very much like CORBA.
Eg.
Socket/ZMQ - You send a message over the wire. Read it at the other end, parse the message, do some action, etc.
ZeroC Ice - Create a contract between client and server. The contract is nothing but a template of a class. Now the client calls a proxy method of that class, and the server implements/actions it and returns the value. Thus, int result = mathClass.Add(10,20) is what the client calls. The method, parameters, etc. are marshalled and sent to the server; the server implements the Add method and returns the result, and the client gets 30 back. Thus, on the client side, the API is nothing but a proxy for a servant running on a remote host.
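A sketch of what that looks like in code, assuming Ice 3.7's C++11 mapping and a hypothetical Demo::Math Slice interface:

// math.ice (Slice contract, hypothetical):
//   module Demo { interface Math { int add(int a, int b); }; }

#include <Ice/Ice.h>
#include <Math.h>   // generated by slice2cpp from math.ice

int main(int argc, char* argv[]) {
    Ice::CommunicatorHolder ich(argc, argv);    // RAII communicator
    auto base = ich->stringToProxy("math:default -p 10000");  // hypothetical endpoint
    auto math = Ice::checkedCast<Demo::MathPrx>(base);
    int result = math->add(10, 20);   // marshalled remote call; returns 30
    return result == 30 ? 0 : 1;
}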
Conclusion:
ZeroC ICE has some nice enterprisey features which are really good. However, for your project requirements, ZMQ is the right tool.
Hope this helps.
For me, the correct answer was Crossroads I/O. It does everything I need, but I'm still unable to do pub/sub when using protobufs... I'm sure ZeroC ICE is great for distributed IPC, but 0MQ/Crossroads gives you the added flexibility to use inter-thread communication.
Note: on Windows, 0MQ does not have IPC.
So, all in all, the Crossroads fork of 0MQ is the best, but you will have to roll your own Windows IPC (or use TCP over 127.0.0.1) and your own publisher-side topic filtering for pub/sub.
nanomsg, from the guy who wrote Crossroads and 0MQ (I think).
http://nanomsg.org/

How to send output of native executable programs into the world of Web Services?

I have many old native C/C++ .exe and .dll programs running on my company's Windows servers.
Some .exe programs (which I will designate 'E') write their results to the console or to a file, and most of the .dll programs ('D') return results in arrays of structures.
My boss has asked me about the possibility of "also" sending the results generated by 'E' and 'D' to a .NET Web Service platform using WCF, without modifying 'E' and 'D'.
I read a little about Web Services/WCF to find an answer, and I built a first solution scenario in my mind: create C# WCF projects which:
For ‘E’, will read the files generated by the ‘E’ programs and will send the results to clients
For ‘D’, will “interoperate and marshall” with the returned values before sending the results.
I have some questions here: after getting the results from 'E' and 'D', how do I send them to the client? Is it a "must" to serialize the results before sending them to the client program? I suppose the client program should have a routine to deserialize them. If the value to send to the client is a simple string or a simple integer, is it necessary to serialize?
Thanks!!
First of all, you should be aware that there are different kinds of webservices. The most common ones are REST and SOAP. I assume that you want to use SOAP. In that case every message has to be encoded, but that will be handled by your SOAP library/framework. The same is true for the client: it will usually not decode SOAP messages "by hand"; that is handled by the client's SOAP library/framework.
Your thoughts about integrating E and D are right. For D you might also have a look at C++/CLI. It might make integration easier, but that depends on your scenario and your .NET and C++ knowledge.
You should read up on some tutorials about web services (SOAP, for instance) and how to implement them in C#. Once you understand them, it'll be clear how to interoperate with your programs. Your assumptions are correct: reading files for 'E' and "interoperate and marshall" for 'D' are the most straightforward (if not the most efficient) ways of doing it.
If you still have the source for these programs, then you should first refactor them to be less dependent on their "presentation layer". For instance, you could change the exe programs to have one layer that returns data, and another layer that formats and prints to the console.
A web service could then call the "data" layer.
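A minimal sketch of that split, with hypothetical function names:

#include <vector>
#include <cstdio>

// Data layer: pure computation, no I/O. This is what a web-service
// wrapper would call directly and serialize.
std::vector<double> runSimulation(double param) {
    return { param * 1.0, param * 2.0 };   // placeholder computation
}

// Presentation layer: formatting for the old console behavior.
static void printResults(const std::vector<double>& results) {
    for (double v : results) std::printf("%g\n", v);
}

int main() {
    // The legacy .exe keeps working exactly as before...
    printResults(runSimulation(3.14));
    // ...while a WCF (or any other) service can call runSimulation()
    // and serialize the returned vector, bypassing the console entirely.
    return 0;
}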