I'm writing an application where a client will connect to a server and subscribe to data updates. The client tells the server what data items it is interested in, and then subscribes using a method with a streaming response. This works well.
However, there are also non-data related notifications that the client should know about. I'm not sure about the best way to handle those. I've thought of:
Adding another method to the existing service. This would be just like the data subscription but be used for event subscription. The client could then subscribe to both types of updates. Not sure what the best practice is for the number of methods in a service, or the mixing of responsibilities in a service.
Exposing a second service from the server with a streaming method for event notifications. This would make the client use multiple connections to get its data - and use another TCP port. The event notifications would be rare (maybe just a few during the lifetime of the connection) so not sure if that is important to consider. Again - not sure about best practices for the number of services exposed from a server.
This one seems unorthodox, but another method might be to pass connection info (IP address and port) from the client to the server during the client's connection sequence. The server could then use that to connect to the client as a way to send event notifications. So the client and server would each have to implement both client and server roles.
Any advice on ways to manage this? It seems like a problem that would already have been solved - but it also appears that the C++ implementation of gRPC lags a bit behind some of the other languages, which offer a few more options.
Oh - and I'm doing this on Windows.
Thanks
I've come up with another alternative that seems to fit the ProtoBuf style better than the others. I've created ProtoBuf message types for each of the data/event/etc notifications that the server should send, and enclosed each of them inside a common 'notification' message that uses the 'oneof' type. This provides a way to have a single streaming method response that can accommodate any type of notification. It looks like this:
message NotificationData
{
    oneof oneof_notification_type
    {
        DataUpdate item_data_update = 1;
        EventUpdate event_data_update = 2;
        WriteResponse write_response_update = 3;
    }
}
service Items
{
    ...
    rpc Subscribe (SubscribeRequest) returns (stream NotificationData) {}
    ...
}
Any comments or concerns about this usage?
Thanks
I'm looking into using the Boost::Beast websocket library to create an asynchronous bidirectional pipe to pass data between a server and a client. I leveraged some code from the async example (I can post some at a later time if necessary, don't have access to it now). I currently have a class which creates several threads running a SocketListener. When a client connects, it creates a Session shared_ptr to do the async read and write functions. The problem is, this session object will only write out when the client has sent me a message. I'm looking for an implementation that allows my server to write on demand to all the clients connected to it and also listen for incoming data from those connections.
Is this possible? Am I using the wrong technique for this? The other way I thought this might be achievable is to have an incoming websocket and an outgoing websocket. Incoming would allow a client to drop configurations for the server, and outgoing would just monitor a message queue and do an async write if a message is available.
Thanks!
Is this possible?
Yes
Am I using the wrong technique for this?
No
The other way I thought this might be achievable is to have an incoming websocket and an outgoing websocket.
That is not necessary, a websocket stream is full-duplex. You can read and write at the same time.
outgoing would just monitor a message queue and do a async write if a message is available.
This is the correct approach, but you can do that in the same Session object that also handles the reads.
Here's an example that reads continuously and can also write full-duplex: https://github.com/vinniefalco/CppCon2018
How can I implement or do kind of "hack" in PUB-SUB pattern to get an ability to publish only to authorized subscribers, disconnect unauthorized subscribers etc?
I googled for this problem, but all the answers are variations on setting a subscribe filter on the subscriber side.
But I want, as I said, to publish my updates from the PUB socket only to those clients that have passed authorization, or that hold some secret key received over REQ-REP.
Thanks for any ideas.
Read Chapter 5 of The Guide, specifically the section called "Pros and Cons of Pub-Sub".
There are many problems with what you're trying to accomplish in the way you're trying to accomplish it (but there are solutions, if you're willing to change your architecture).
Presumably you need the PUB socket to be generally accessible to the world, whether that's the world at large or just a world consisting of some sockets which are authorized and some sockets which are not. If not, you can just control access (via firewall) to the PUB socket itself to only authorized machines/sockets.
When a PUB socket receives a new connection, it doesn't know whether the subscriber is authorized or not. PUB cannot receive actual communication from SUB sockets, so there's no way for the SUB socket to communicate its authorization directly. XPUB/XSUB sockets break this limitation, but it won't help you (see below).
No matter how you communicate a SUB socket's authorization to a PUB socket, I'm not aware of any way for the PUB socket to kill or ignore the SUB socket's connection if it is not authorized. This means that an untrusted SUB socket can subscribe ALL ('') and receive all messages from the PUB socket, and the PUB socket can't do anything about it. If you trust the SUB socket to police itself (you create the connecting socket and control the machines it's deployed on), then you have options to just subscribe to a "control" topic, send an authorization, and have the PUB socket feed back the channels/topics that you are allowed to subscribe to.
So, this pretty much kills it for achieving general security in a PUB/SUB paradigm that is publicly accessible.
Here are your options:
Abandon PUB/SUB - The only way (that I'm aware of) to control exactly which peer you send to, every single time, on the sending side is with a ROUTER socket. If you use ROUTER/DEALER, the DEALER socket can send its authorization, the ROUTER socket stores that with its ID, and when something needs to be sent out, it just finds all connected sockets that are authorized and sends it, sequentially, to each of them. Whether this is feasible or not depends on the number of sockets and the workload (size and number of messages).
Encrypt your messages - You've already said this is your last resort, but it may be the only feasible answer. As I said above, any SUB socket that can access your PUB socket can just subscribe to ALL ('') messages being sent out, with no oversight. You cannot effectively hide your PUB socket address/port, you cannot hide any messages being sent out over that PUB socket, but you can hide the content of those messages with encryption. Proper method of key sharing depends on your situation.
As Jason has shown you an excellent review on why ( do not forget to add a +1 to his remarkable answer, ok? ), let me add my two cents on how:
Q: How?
A: Forget about PUB/SUB archetype and create a case-specific one
Yes. ZeroMQ is rather a very powerful can-do toolbox, than a box-of-candies you are forbidden to taste and choose from to assemble your next super-code.
This way your code gains, and keeps, the power to set both the controls and the measures for otherwise uncontrollable SUB-side code behaviour.
Creating one's own, composite, layered messaging solution is the very power ZeroMQ brings to your designs. There you realise you are the master of distributed system design. Besides the academic examples, no one uses the plain primitive-behaviour-archetypes, but typically composes more robust and reality-proof composite messaging patterns for the production-grade solutions.
There is no simple one-liner to make your system use-case work.
While it need not answer all your details, you may want to read remarks
on managing PUB/SUB connections
on ZeroMQ authorisation measures.
In my website, I'd like to create a public API that would allow clients (unknown people) to interact with my services. A classic REST API would work well in that case.
However, I need to be able to send events to the clients too. These events are not related to client HTTP requests. I saw "webhooks" are a way to deal with this. If I understood well, with webhooks, my service would send HTTP POST requests to a URL specified by the client, with event data inside this request.
I think websocket can be used too as a solution for this full-duplex communication need.
What I want to know, is which method would be the simplest for clients to implement to talk to my services? Simplicity is the key point here.
The hard thing is that my clients can use various technologies (full websites with HTTP servers, iOS/Android apps without server, etc.)
What are implications for clients if I use REST API + webhooks? Websockets? etc?
How to make a choice?
Hope it's clear (but not sure). Thanks :)
I would consider webhooks a simpler solution. And yes, you understood it well, that with webhooks, a developer using your API would register a URL where your backend would POST event data. It's a common pattern that's used in APIs.
A great benefit of a webhooks design is that a client/server connection does not need to stay open. After all, if events occur infrequently (e.g. only a few times per hour, or per day) or keeping a persistent connection open is a challenge, establishing a connection only when it's needed is rather efficient.
The challenge of using webhooks for you, the API provider, is designing an evented backend system that deals with change-of-state detection and a reliable webhook calling mechanism (e.g. dealing with webhook receiver URLs that are unresponsive or throw errors).
The challenge of using webhooks on the developer end is that they need to stand up a reliable web server that listens for the event POST data from your server.
Realtime APIs (i.e. based on Websockets, Bayeux/CometD) are really swell because that live connection means that new connections do not have to be established, which is particularly useful with very chatty sessions. Additionally, there are a lot of projects and companies out there that have taken care of the heavy lifting on the server and client with fully-baked libraries. One of those is Fanout.io which makes pushing messages between the client/server possible with just a few lines of code, utilizing XMPP, Bayeux, and Websockets when possible.
(I am not affiliated with Fanout, but I have used it)
So, to sum it up, webhooks are simple mostly because you are already familiar with the architecture needed to implement them, and the pattern is a well traveled one. If you are leaning toward a persistent connection approach, I would look at tools/platforms like Fanout because it takes care of the heavy lifting (i.e. subscribe/publish, concurrent connection scale, client/server libraries).
Is this even possible?
I know, I can make a one-way asynchronous communication, but I want it to be two-way.
In other words, I'm asking about the request/response pattern, but non-blocking, as described here (the 3rd option)
Related to Asynchronous, acknowledged, point-to-point connection using gSoap - I'd like to make the (n)acks async, too
You need a way to associate requests with replies. In normal RPC, they are associated by the timeline: the reply follows the request before another request can occur.
A common solution is to send a key along with the request; the reply references the same key. If you do this, two-way non-blocking RPC becomes a special case of two one-way non-blocking RPC connections. The key is commonly called something like a request-id or nonce.
I think that is not possible with basic usage.
The only way to make it two-way is via the response, i.e. the results of the call.
But you might want to use a little trick:
1] Create another server (server2) at the client end, and call that server2 from the server. (This may not work over the internet because of NAT / firewalls, etc.)
2] Re-architect your API so that the client calls the server again, based on the server's first response.
You can have a client and a server on both ends. For example, you can have a client and server on system 1 and on system 2 (I'll call the sender the client and the receiver the server). You send an async message from the sys1 client to the sys2 server. On receiving the message from sys1, you send an async response back from the sys2 client to the sys1 server. This is how you can make async two-way communication.
I guess you would need to run the blocking invocation in a separate thread, as described here: https://developer.nokia.com/Community/Wiki/Using_gsoap_for_web_services#Multithreading_for_non-blocking_calls
Hi to all the experts out there :)
This is my first question here.
Problem Description :
I have to write a Market Data Feed Handler. This is going to be a Windows Service, will be using two Sockets.
Socket A : For communication between Subscribing applications and Feed Handler (Feed Handler will be accepting the connection request and the Item Request).
Socket B : For communication between Feed Handler and External Market Data provider, like Reuters/Bloomberg.
In both cases, request/response will use the same port.
Note : The volume of Data coming from the external system is low (External system will only send the information which has been subscribed for, at this point of time).
However later on we may want to scale it, some providers throw all the data, and Feed Handler has to filter out locally, based on the subscription.
My questions :
What threading model should I use?
Which I/O strategy should I use?
Keeping in mind both cases, should I create separate request/response threads?
EDIT 1: After reading a few tutorials on Winsock, I'm planning to use Event Objects for asynchronous behavior.
The point of concern here is that a single thread should listen for incoming client connections (accept them) and also connect to the other server, in turn doing send/recv on two different ports.
Thread A
1) Listening for incoming connections. (Continuous)
2) Receiving Subscribe/Unsubscribe request from connected clients. (Rarely)
3) Connect to the external server (Onetime only).
4) Forward the request coming from client to the external server. (Rarely)
5) Receive data from external server. (Continuous)
6) send this data back to the connected clients. (Continuous)
My question is can a single thread act as both Client and Server, using asynchronous I/O models?
Thanks in advance.
Deepak
The easiest threading model seems to be single-threaded synchronous. If you need to implement a filter for a provider, implement that as a separate socket-in/socket-out process.