I have discovered that transparent retries are quite new to the gRPC C++ implementation, and there isn't a lot of documentation, except to say that GRPC_ARG_ENABLE_RETRIES turns the feature on and off.
Other programming languages have options: MaxAttempts, InitialBackoff, MaxBackoff, BackoffMultiplier & RetryableStatusCodes, but I can't find any reference to those for C++.
Do you know how to access these options in C++, or are they inaccessible?
These options can be set via the gRPC service config. For details, see gRFC A6.
Note that in gRPC C++, the configurable retry functionality was deemed stable and enabled by default (i.e., the GRPC_ARG_ENABLE_RETRIES channel arg defaults to true) in v1.40. Transparent retries were implemented in v1.45. Hedging has not yet been implemented, but we do plan to do that at some point in the future.
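To make that concrete, the per-method options from gRFC A6 can be attached to a C++ channel as a service config JSON via a channel argument. A minimal sketch, assuming a hypothetical service named my.package.MyService (the field names below follow the gRFC A6 spec):

#include <grpcpp/grpcpp.h>
#include <memory>
#include <string>

// Build a channel whose calls to my.package.MyService are retried per the
// attached retry policy. The service name and policy values are placeholders.
std::shared_ptr<grpc::Channel> MakeRetryingChannel(const std::string& target) {
  grpc::ChannelArguments args;
  args.SetString(GRPC_ARG_SERVICE_CONFIG, R"({
    "methodConfig": [{
      "name": [{ "service": "my.package.MyService" }],
      "retryPolicy": {
        "maxAttempts": 5,
        "initialBackoff": "0.1s",
        "maxBackoff": "1s",
        "backoffMultiplier": 2,
        "retryableStatusCodes": ["UNAVAILABLE"]
      }
    }]
  })");
  return grpc::CreateCustomChannel(
      target, grpc::InsecureChannelCredentials(), args);
}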
How does ZeroC ICE compare to 0MQ? I know that 0MQ/Crossroads and DDS are very similar, but I can't seem to figure out where ICE comes in.
I need to quickly implement a system that offloads real-time market-data from C++ to C#, as a first phase of my project. The next phase will be to implement an Event Based architecture with an underlying Pub/Sub design.
I am willing to use TCP, but the system is currently running on a single 24-core server, so an IPC option would be nice. From what I understand, ICE is TCP-only, while DDS and 0mq have an IPC option.
Currently, I am leaning towards using Protobuf with either ICE or Crossroads I/O. I got turned off by the OpenSplice DDS website. I've done lots of research on the various options; I was originally considering OpenMPI + boost::mpi, but there does not seem to be an MPI implementation for .NET.
My question is:
How does ICE compare to 0MQ? I can't wrap my head around this. I was unable to find anything online that compares the two.
Thanks in advance.
More about my project:
Currently using CMake and C++ on Windows, but the plan is to move to CentOS at some point. An additional desired feature is to store the tick data and all the messages in a "NoSQL" database such as HBase/Hadoop or HDF5. Do any of these middleware/messaging/pub-sub libraries have any database integration?
Some thoughts about ZeroC:
- Very fast
- Able to have multiple endpoints
- Able to load balance across the endpoints
- Able to reconnect to a different endpoint in case one of the nodes goes down; this is transparent to the end user
- Good tool chain (IceGrid, IceStorm, IceBox, etc.)
- Distributed, high availability, multiple failover, etc.
Apart from that, I have used it for hot-swapping code modules (something similar to Erlang) by having the client create the proxy with multiple endpoints, and later bringing down each endpoint one by one for a quick upgrade. With the transparent retry to a different endpoint, I could have the system up and running the whole time I did an upgrade. Not sure if this is an advertised feature or an unadvertised side-effect :)
Overall, it is very easy to scale out your servers if need be using ZeroC Ice.
I know ZeroMQ provides a fantastic set of tools and messaging patterns, and I would keep using it for my pet projects. However, the problem that I see is that it is very easy to go overboard and lose track of all your distributed components, and keeping track of them is a must in a distributed environment. How will you know where your clients/servers are when you need to upgrade? If one of the components down the chain does not receive a message, how do you identify where the issue is? The publisher? The client? Or any one of the bridges (REP/REQ, XREP/XREQ, etc.) in between?
Overall, ZeroC provides a much better toolset and ecosystem for enterprise solutions.
And it is open source :)
Jaybny,
ZMQ:
If you want really good performance, and the only job in Phase 1 of your project is to move data from C++ to C#, then ZMQ is the best option.
Having a pub/sub model for an event-driven architecture is also something ZMQ can help you with, via its built-in messaging patterns.
ZMQ also covers your IPC requirement in this case: e.g., you can have one instance of your application consume the 24 cores by multithreading, with the parts communicating via IPC. A rough sketch of both points follows.
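Here is a minimal pub/sub-over-IPC sketch using the plain zmq.h C API; the endpoint path and topic string are invented, and on Windows you would substitute a tcp:// loopback endpoint, since 0mq has no IPC transport there (see the note further down):

#include <zmq.h>
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
  void* ctx = zmq_ctx_new();

  // Publisher side (would normally live in the C++ market-data process).
  void* pub = zmq_socket(ctx, ZMQ_PUB);
  zmq_bind(pub, "ipc:///tmp/marketdata");

  // Subscriber side (would normally live in the consuming process;
  // shown in-process here for brevity).
  void* sub = zmq_socket(ctx, ZMQ_SUB);
  zmq_connect(sub, "ipc:///tmp/marketdata");
  zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "TICK", 4);  // topic prefix filter

  // Give the subscription time to propagate (the "slow joiner" symptom),
  // otherwise the first message may be dropped.
  std::this_thread::sleep_for(std::chrono::milliseconds(100));

  zmq_send(pub, "TICK AAPL 123.45", 16, 0);

  char buf[64];
  int n = zmq_recv(sub, buf, sizeof(buf) - 1, 0);
  if (n >= 0) { buf[n] = '\0'; std::printf("%s\n", buf); }

  zmq_close(sub);
  zmq_close(pub);
  zmq_ctx_destroy(ctx);
}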
ZeroC Ice:
Ice is an RPC framework, very much like CORBA.
Eg.
Socket/ZMQ - You send a message over the wire, read it at the other end, parse it, perform some action, etc.
ZeroC Ice - You create a contract between client and server. The contract is nothing but a template of a class. The client calls a proxy method of that class, and the server implements it and returns the value. Thus, int result = mathClass.Add(10,20) is what the client calls. The method, parameters, etc. are marshalled and sent to the server; the server implements the Add method and returns the result, and the client gets 30 back. Thus, on the client side, the API is nothing but a proxy for a servant running on a remote host.
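As a rough sketch of what that looks like with Ice's C++11 mapping (Ice 3.7); the Slice contract, the generated header name, and the endpoint are all invented for illustration:

#include <Ice/Ice.h>
// Header generated from a hypothetical Slice contract:
//   module Demo { interface Math { int add(int a, int b); } }
#include <Math.h>

int main(int argc, char* argv[]) {
  Ice::CommunicatorHolder ich(argc, argv);
  auto base = ich->stringToProxy("Math:default -h somehost -p 10000");
  auto math = Ice::checkedCast<Demo::MathPrx>(base);
  // The call below is marshalled, executed by the servant on the server,
  // and the result (30) is sent back - the proxy makes it look local.
  int result = math->add(10, 20);
  return result == 30 ? 0 : 1;
}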
Conclusion:
ZeroC Ice has some enterprisey features which are really good. However, for your project requirements, ZMQ is the right tool.
Hope this helps.
For me, the correct answer was Crossroads I/O. It does everything I need, though I am still unable to do pub/sub when using protobufs. I'm sure ZeroC ICE is great for distributed IPC, but 0MQ/Crossroads gives you the added flexibility of inter-thread communication.
Note: on Windows, 0mq does not have IPC.
So, all in all, the Crossroads fork of 0mq is the best, but you will have to roll your own Windows IPC (or use TCP over loopback) and your own publisher-side topic filtering for pub/sub.
Also worth a look: nanomsg, from the guy who wrote Crossroads and 0mq (I think).
http://nanomsg.org/
I am penning down the features that a remote logging library might need when built from scratch.
I looked at this: http://www.aggsoft.com/serial-data-logger.htm
I wish to know what differences there can be between a remote logging library and a remote logger software.
A few things that I thought of:
1. The library can be used in C++ programs to log error messages on the fly.
2. The library will require programming knowledge on the end user's part.
3. The software cannot be used "inside" a C++ program, so we won't be able to log the error messages on the fly? Not sure about this one.
I would like to know: besides logging error messages, what are the things for which it makes sense to use a remote logging library? Sharing big files? Anything other than these two?
Secondly, in the current case, which of the two is better, a library or a piece of software, and in what way?
As I mentioned in my comments to your question, I would think that a logging library would provide some sort of an API/SDK, whereas remote software would not. The same would hold true whether it's sending messages via TCP/UDP or a serial port. The difference between the two options would be how much coding you would have to do, i.e., how much you would have to reinvent the wheel.
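For illustration, the API of such a library might boil down to something like this minimal sketch (POSIX sockets; the class and method names are invented):

#include <string>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical remote-logging library API: connect once, then send one
// log line per TCP write. Fire-and-forget; real code would handle errors,
// reconnects, and buffering.
class RemoteLogger {
public:
  bool connect_to(const std::string& host, const std::string& port) {
    addrinfo hints{}, *res = nullptr;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host.c_str(), port.c_str(), &hints, &res) != 0)
      return false;
    fd_ = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    bool ok = fd_ >= 0 && ::connect(fd_, res->ai_addr, res->ai_addrlen) == 0;
    freeaddrinfo(res);
    return ok;
  }
  void log(const std::string& msg) {
    std::string line = msg + "\n";       // one message per line
    ::send(fd_, line.data(), line.size(), 0);
  }
  ~RemoteLogger() { if (fd_ >= 0) ::close(fd_); }
private:
  int fd_ = -1;
};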
IMHO, nearly all debug environments/tools support redirecting console output to the serial port (using print or other APIs). It is usually not a task for the application programmer.
There are other methods for "remote logging":
1) syslog, or syslog-ng's remote logging service
2) save logs locally and fetch them using FTP
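For option 1, the application side can be as simple as the standard syslog(3) API; forwarding to a remote host is then purely a matter of syslogd/syslog-ng configuration. A minimal sketch (the program name and message are placeholders):

#include <syslog.h>

int main() {
  openlog("myapp", LOG_PID, LOG_USER);              // identify this program
  syslog(LOG_ERR, "connection lost: %s", "timeout"); // logged like printf
  closelog();
}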
I've got a WCF service being hosted using TCP/IP (netTcpBinding):
var baseWcfAddress = getWcfBaseUri();
host = new ServiceHost(wcfSingleton, baseWcfAddress);
var throttlingBehavior = new System.ServiceModel.Description.ServiceThrottlingBehavior();
throttlingBehavior.MaxConcurrentCalls = Int32.MaxValue;
throttlingBehavior.MaxConcurrentInstances = Int32.MaxValue;
throttlingBehavior.MaxConcurrentSessions = Int32.MaxValue;
host.Description.Behaviors.Add(throttlingBehavior);
host.Open();
I'd like to write a Mac client in Objective C or C++. Are there any existing classes that can facilitate the connection to my WCF service? If not, what are my steps & options to making it happen?
Every binding whose name starts with net is considered non-interoperable. Even a pure .NET client without WCF is unable to communicate with the service without enormous effort: it would have to reimplement the whole binary protocol and encoding. You should probably start with:
.NET Message Framing protocol
.NET Binary Format: XML Data Structure
One option for Mac is using Mono, which should have support for netTcpBinding.
Your real option for Objective-C / C++ on Mac is creating an interoperable WCF service exposing data over HTTP. If you are not the owner of the service, you can create a routing WCF service which will be a bridge between interoperable HTTP and netTCP.
Edit:
One more thing - if the service uses netTcpBinding with the default configuration, it is secured with Windows security. I expect that this can be another show-stopper on a Mac.
In the context of the comment:
netTcpBinding was found to be one of the quicker options -- certainly much faster than the vanilla BasicHttpBinding/WS binding that was tried. That's the only real reason for choosing it: because netTcpBinding uses a binary encoding rather than straight text, it was faster.
Firstly, I have looked at this many, many times - and oddly enough, every time I test it, NetTcpBinding completely fails to be any quicker than the basic XML offering. However, since performance is your goal, I have options...
I'm a bit biased (since I wrote it), but I strongly recommend "protobuf-net" here; since it is designed along the same idioms as most .NET serializers, it is pretty easy to swap in, but it is faster (CPU) and smaller (bandwidth) in every test I make for this - and in tests that other people make. And because the protobuf format is an open specification, you don't have to worry about the "net" bindings being non-interoperable.
For MS .NET, I have direct WCF hooks that can be used purely from config, making enabling it a breeze. I honestly don't know how well that will work with the Mono equivalent - I haven't tried. It might work, but if not, the other option is to simply throw a byte[] or Stream over the network and worry about (de)serialization manually.
My preferred layout here is basic-http binding with MTOM enabled, which gives you the simplicity and portability of the simplest XML binding without the overhead of base-64 for the binary data.
Googling about asynchronous/non-blocking connectors for MySQL, I basically ended up at this post.
However, it's been two years, and following what's happening with Drizzle is a bit confusing at the moment. libdrizzle was a separate dependency at some point, but they decided to merge it with the rest of the project. Are there other options for asynchronous database access from C++?
I've been looking at OTL, ODB and OpenDBX, but they all seem to be synchronous (they require a separate thread for non-blocking operation).
I had the same desire and came to the conclusion that it's not supported. Even with the MySQL C API you can use the low-level functions to issue queries and wait for a response asynchronously, but you cannot ever get full asynchronous result collection--you always end up blocking from the time the first piece of the result is returned until the last.
I don't have direct experience with it, but I've read that Postgres does support full asynchrony (at least in the C API).
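For reference, this is roughly what "full asynchrony" looks like in libpq (the Postgres C API, usable from C++): you submit the query, then pump the connection from your own event loop, and no call here blocks. A sketch with error handling trimmed:

#include <libpq-fe.h>
#include <cstdio>

// Kick off the query; returns immediately without waiting for results.
bool start_query(PGconn* conn) {
  return PQsendQuery(conn, "SELECT now()") == 1;
}

// Call this when PQsocket(conn) becomes readable in your select()/poll()
// loop. Returns true once the query's results have been fully consumed.
bool pump_results(PGconn* conn) {
  if (!PQconsumeInput(conn)) {           // read whatever bytes have arrived
    std::fprintf(stderr, "%s", PQerrorMessage(conn));
    return true;
  }
  while (!PQisBusy(conn)) {              // results decodable without blocking
    PGresult* res = PQgetResult(conn);
    if (res == nullptr) return true;     // query complete
    // ... process res ...
    PQclear(res);
  }
  return false;                          // more data needed; wait on the socket
}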
I used to use MySAC in my own project. It works well, though it is a little outdated. I'll just quote the description from their website:
MySAC is a library that provides mechanisms for making asynchronous request to MySQL database.
And maybe you will be interested in https://github.com/huxingyi/myc if you use libuv. It's a pure C MySQL connector written by me; you can implement your own network layer or just use the libuv-based uvmyc inside the example folder.
Here's my question.
Right now I have a Linux server application (written using C++ - gcc) that communicates with a Windows C++ client application (Visual Studio 9, Qt 4.5.)
What is the very easiest way to add SSL support to both sides in order to secure the communication, without completely gutting the existing protocol?
It's a VoIP application that uses a combination of UDP and TCP to initially set up the connection and do port-tunneling stuff, and then uses UDP for the streaming data.
I've had lots of problems in the past with creating the security certificates from scratch that were necessary to get this stuff working.
Existing working example code would be ideal.
Thank you!
SSL is very complex, so you're going to want to use a library.
There are several options, such as Keyczar, Botan, cryptlib, etc. Each and every one of those libraries (or the libraries suggested by others, such as Boost.Asio or OpenSSL) will have sample code for this.
Answering your second question (how to integrate a library into existing code without causing too much pain): it's going to depend on your current code. If you already have simple functions that call the Winsock or socket methods to send/receive ints, strings, etc. then you just need to rewrite the guts of those functions. And, of course, change the code that sets up the socket to begin with.
On the other hand, if you're calling the Winsock/socket functions directly then you'll probably want to write functions that have similar semantics but send the data encrypted, and replace your Winsock calls with those functions.
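As a sketch of that approach with OpenSSL, assuming an SSL* session already established over your existing socket with SSL_connect()/SSL_accept(), the replacement functions can keep the same shape as send()/recv() (error handling trimmed; the helper names are invented):

#include <openssl/ssl.h>
#include <string>

// Same semantics as a send-all helper, but over the SSL session.
bool ssl_send_all(SSL* ssl, const std::string& s) {
  int n = SSL_write(ssl, s.data(), static_cast<int>(s.size()));
  return n == static_cast<int>(s.size());
}

// Same shape as recv(), minus the flags argument.
int ssl_recv_some(SSL* ssl, char* buf, int len) {
  return SSL_read(ssl, buf, len);
}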
However, you may want to consider switching to something like Google Protocol Buffers or Apache Thrift (a.k.a. Facebook Thrift). Google's Protocol Buffers documentation says, "Prior to protocol buffers, there was a format for requests and responses that used hand marshalling/unmarshalling of requests and responses, and that supported a number of versions of the protocol. This resulted in some very ugly code. ..."
You're currently in the hand marshalling/unmarshalling phase. It can work, and in fact a project I work on does use this method. But it is a lot nicer to leave that to a library; especially a library that has already given some thought to updating the software in the future.
If you go this route you'll set up your network connections with an SSL library, and then you'll push your Thrift/Protocol Buffer data over those connections. That's it. It does involve extensive refactoring, but you'll end up with less code to maintain. When we introduced Protocol Buffers into the codebase of that project I mentioned, we were able to get rid of about 300 lines of marshalling/demarshalling code.
I recommend using GnuTLS on both the client and the server side, and only for the TCP connection; forget about the UDP data for now. The GnuTLS documentation has example code for writing both clients and servers. Please understand that at least the server side (typically the TCP responder) needs to have a certificate; the client side can work with anonymous identification (although there is even an example without a server certificate, using only DH key exchange - which would allow man-in-the-middle attacks).
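A minimal GnuTLS client sketch over an already-connected TCP socket fd might look like the following; certificate verification is omitted for brevity, and real code must verify the server certificate:

#include <gnutls/gnutls.h>

bool tls_client(int fd) {
  gnutls_session_t session;
  gnutls_certificate_credentials_t cred;

  gnutls_global_init();
  gnutls_certificate_allocate_credentials(&cred);

  gnutls_init(&session, GNUTLS_CLIENT);
  gnutls_set_default_priority(session);
  gnutls_credentials_set(session, GNUTLS_CRD_CERTIFICATE, cred);
  gnutls_transport_set_int(session, fd);   // wrap the existing TCP socket

  if (gnutls_handshake(session) < 0) return false;

  const char msg[] = "hello over TLS";
  gnutls_record_send(session, msg, sizeof(msg) - 1);
  // ... gnutls_record_recv(session, buf, buflen) to read replies ...

  gnutls_bye(session, GNUTLS_SHUT_RDWR);
  gnutls_deinit(session);
  gnutls_certificate_free_credentials(cred);
  return true;
}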
In general, it is likely that you will have to understand the principles of SSL, no matter what library you use. Library alternatives are OpenSSL (both Unix and Windows), and SChannel (only Windows).
Have you tried the SSL support in Boost.Asio or ACE? Both use OpenSSL under-the-hood, and provide similar abstractions for TCP, UDP and SSL. Sample code is available in both the Boost.Asio and ACE distributions.
One thing you may need to keep in mind is that SSL is record-oriented, rather than stream-oriented like TCP. This may affect how you multiplex events, since you must, for example, read a full SSL record before you can call a read operation complete.
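For example, with a reasonably recent Boost.Asio, an SSL client is a thin wrapper around the plain TCP code, and the ssl::stream handles the record framing underneath. A sketch (host, port and payload are placeholders; real code should also enable peer verification):

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>

using boost::asio::ip::tcp;

int main() {
  boost::asio::io_context io;
  boost::asio::ssl::context ctx(boost::asio::ssl::context::tls_client);
  ctx.set_default_verify_paths();

  // The ssl::stream wraps an ordinary TCP socket.
  boost::asio::ssl::stream<tcp::socket> stream(io, ctx);
  tcp::resolver resolver(io);
  boost::asio::connect(stream.lowest_layer(),
                       resolver.resolve("example.com", "443"));
  stream.handshake(boost::asio::ssl::stream_base::client);

  boost::asio::write(stream, boost::asio::buffer("hello\n", 6));
  char reply[128];
  std::size_t n = stream.read_some(boost::asio::buffer(reply));
  return n > 0 ? 0 : 1;
}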
To help handle this with no changes to the application, you may want to look at the stunnel project (http://www.stunnel.org/). I don't think it will handle the UDP side for you, though.
The yaSSL and CyaSSL embedded SSL/TLS libraries have worked well for me in the past. Being targeted at embedded systems, they are optimized for both speed and size. yaSSL is written in C++ and CyaSSL is written in C. In comparison, CyaSSL can be up to 20 times smaller than OpenSSL.
Both support the most current industry standards (up to TLS 1.2), offer some cool features such as stream ciphers, and are dual licensed under the GPLv2 and a commercial license (if you need commercial support).
They have an SSL tutorial which touches on adding CyaSSL into your pre-existing code as well: http://www.yassl.com/yaSSL/Docs-cyassl-manual-11-ssl-tutorial.html
Product Page: http://yassl.com/yaSSL/Products.html
Regards,
Chris