I'm setting up a project and one of the main questions is how to implement a simple message queueing system (something along the lines of a messenger chat system). I would like to avoid polling, but there will most likely be a lot of concurrent connections (tens of thousands). These will be HTTP+SSL connections, started from an application, not a browser.
One solution I found would be DNS Load Balancing: distribute these persistent connections across a bunch of nginx webservers.
What do you think? Any other possible solutions?
For load balancing, keeping the application server stateless will open up the field significantly. Once you've got that, you're free to use almost any generic load balancer, from protocol-specific options like HTTP load balancers down to generic TCP-level load balancers.
Keep it stateless, the rest will be trivial in comparison.
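To make the stateless point concrete, here is a minimal sketch (in TypeScript on Node, purely for illustration; the Redis-backed queue, route names, and REDIS_URL variable are assumptions, not anything from the question). Because no instance keeps message state in memory, a generic HTTP or TCP load balancer can route any request to any instance.

// Hypothetical stateless message endpoint: all state lives in a shared Redis list,
// so any app server instance behind the load balancer can handle any request.
import { createServer } from "http";
import { createClient } from "redis";

const redis = createClient({ url: process.env.REDIS_URL }); // shared state store (assumed)

const server = createServer(async (req, res) => {
  if (req.method === "POST" && req.url === "/messages") {
    let body = "";
    for await (const chunk of req) body += chunk;
    await redis.lPush("queue:inbox", body);        // state goes to Redis, not the process
    res.writeHead(202).end();
  } else if (req.method === "GET" && req.url === "/messages") {
    const msg = await redis.rPop("queue:inbox");   // any instance can hand out the next message
    res.end(msg ?? "");
  } else {
    res.writeHead(404).end();
  }
});

redis.connect().then(() => server.listen(8080));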
If you are planning on using web services (XML message passing), you can use gSOAP, which includes a sample web server application that uses thread pools. I've run a server using this and MySQL (for persistent state). I agree with Ryan on reducing/eliminating the statefulness of the application.
DNS load balancing will allow you to distribute queries between multiple IP addresses, which could be multiple servers. Keep in mind that your clients could get different servers from one request to another, so your application can't rely on local state management. Your application will have to store its state in a centralized location such as a database.
Have you considered peer-to-peer? The state of the art in punching through firewalls is actually very effective, especially since you're running your own client software in each instance and you have servers to start the connection.
More work, but significantly less server resources.
Also, write your own server software: make sure it can handle a lot of connections and is extraordinarily lightweight, and you should be able to handle thousands of connections per server before you need load balancing.
-Adam
Related
I have a situation where I have a NodeJs app that runs as an event listener. This NodeJs app listens for events external to my application through a websocket.
I need each of the events coming in to only be processed once by my Nodejs app.
However, it's also crucial to ensure that this NodeJs app can auto-scale up/down when needed and is highly available, so that it doesn't become a bottleneck.
Usually, when it comes to scaling and HA, the first thing that comes to my mind is to run a few instances of it behind a load balancer, or run multiple containers on something like ECS. Doing so would introduce multiple instances of the NodeJs app, which also means each event from the websocket would get processed more than once, by every instance/container that received it.
What would be a good solution and design to tackle such a problem?
I'm not sure I fully understand the situation here, but I think what you are saying is that you have a socket server that emits to other services, and that a single instance, even with dedicated resources, is subject to bottlenecks.
Assuming what I have said is in line with the question, what you probably want to look at (not sure if you're using socket.io or not) is the Redis socket.io adapter. This essentially uses Redis to store the sockets, so you can cluster your socket server without it sending duplicates or missing users.
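For reference, a minimal sketch of wiring up that adapter (assuming socket.io v4 with the @socket.io/redis-adapter and redis packages; the Redis URL and event names are illustrative):

import { createServer } from "http";
import { Server } from "socket.io";
import { createClient } from "redis";
import { createAdapter } from "@socket.io/redis-adapter";

async function main() {
  const pubClient = createClient({ url: "redis://localhost:6379" }); // assumed Redis location
  const subClient = pubClient.duplicate();
  await Promise.all([pubClient.connect(), subClient.connect()]);

  const httpServer = createServer();
  const io = new Server(httpServer);
  io.adapter(createAdapter(pubClient, subClient)); // share socket state across instances via Redis

  io.on("connection", (socket) => {
    // emits are fanned out through Redis, so clients connected to other instances receive them too
    socket.on("chat", (msg) => io.emit("chat", msg));
  });

  httpServer.listen(3000);
}

main();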
To your question about scale, you would definitely want to use containers for this. We actually use DigitalOcean 'apps' as an easy way to deploy our containers without having to manage Kubernetes and Docker images. The only downside right now is no auto-scaling, but scaling out is just a click of a button, and with alerts set up we know when to scale up or down.
With this setup, we have our socket server running against a managed Redis server; when we need more socket servers we just tick the count up and we have more throughput.
If this question seems basic to more IT-oriented folks, then I apologize in advance. I'm not sure it falls under the ServerFault domain, but correct me if I'm wrong...
This question concerns some backend operations of a web application, hosted in a cloud environment (Google). I'm trying to assess options for coordinating our various virtual machines. I'll describe what we currently have, and those "in the know" can maybe suggest a better way (I hope!).
In our application there are a number of different analyses that can be run, each of which has different hardware requirements. They are typically very large, and we do NOT want these to be run on the application server (referred to as app_server below).
To that end, when we start one of these analyses, app_server will start a new VM (call this VM1). For some of these analyses, we only need VM1; it performs the analysis and sends an HTTP POST request back to app_server to let it know the work is complete.
For other analyses, VM1 will in turn launch a number of worker machines (worker-1,...,worker-N), which run very similar tasks in parallel. Once the task on a single worker (e.g. worker-K) is complete, it should communicate back to VM1: "hey, this is worker-K and I am done!". Once all the workers (worker-1,...,worker-N) are complete, VM1 does some merging operations and finally communicates back to app_server.
My question is:
Aside from starting a web server on VM1 which listens for POST requests from the workers (worker-1,..), what are the potential mechanisms for having those workers communicate back to VM1? Are there non-webserver ways to listen for HTTP POST requests and do something with the request?
I should note that all of my VMs are operating within the same region/zone on GCE, so they are able to communicate via internal IPs without any special firewall rules, etc. (e.g. running $ ping <other VM's IP addr> works). I obviously do not want any of these VMs (VM1, worker-1, ..., worker-N) to be exposed to the internet.
Thanks!
Sounds like the right use-case for Cloud Pub/Sub. https://cloud.google.com/pubsub
In your case workers would be publishing events to the queue and VM1 would be subscribing to them.
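As a rough sketch of that split (using the Node.js client @google-cloud/pubsub; the topic and subscription names below are made up for illustration):

import { PubSub } from "@google-cloud/pubsub";

const pubsub = new PubSub();

// On a worker: publish a completion event instead of POSTing to VM1 directly.
async function reportDone(workerId: string): Promise<void> {
  const data = Buffer.from(JSON.stringify({ workerId }));
  await pubsub.topic("analysis-done").publishMessage({ data });
}

// On VM1: subscribe and wait until every worker has reported in.
function waitForWorkers(expected: number, onAllDone: () => void): void {
  let completed = 0;
  const subscription = pubsub.subscription("analysis-done-vm1");
  subscription.on("message", (message) => {
    message.ack();
    if (++completed === expected) onAllDone();
  });
}

No web server needs to run on VM1; the messages travel over Google's internal APIs, so none of the VMs have to be exposed to the internet.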
It's hard to tell from your high-level overview whether it's a match, but take a look at Cloud Composer too: https://cloud.google.com/composer/
I've noticed that most architectures acting as a web service client use a proxy to communicate with the REST server. While it is possible to access a REST service without a proxy server, one example I've read uses a proxy server to communicate with its REST server. Are there any advantages of using a proxy to access a REST service?
Using a proxy is usually not necessary for small local application web services. It depends mostly on your server load (number of clients, frequency of requests) and on the network area from which your services are accessed: back-office server-to-server, front-office LAN, WAN, or the whole internet.
REST web services are mostly online resources, uniquely identified by a URL and generally served in the classic HTTP way. From the client's point of view, it does not know whether the data it gets is static, dynamic, or cached; it simply gets the data as if it were static.
In large-scale applications, as clients, resources, and web service requests increase, you need technical components to handle concerns like balancing users across servers and tracking the usage of your web services as your application evolves. You'll also want to deliver the best performance you can to the clients. This can be achieved efficiently with a proxy solution.
Advantages of NOT using a proxy:
Simplicity
Advantages of using a proxy-based solution:
Rewrite URLs from a single centralized entry point (instead of setting them heterogeneously in each server/app/ws configuration).
Track the usage of your webservices (globally)
Enhance performance capabilities (caching, balancing to dedicated servers)
Managing API versions (globally switching /myAPI from /myAPI-V1 to /myAPI-V2 is easily done, and rolling back is just as painless)
Modifying some API calls on the fly (compatibility between versions, preliminary input data validation, or adding technical information to calls).
Managing web service security globally (IP controls, per-user quotas, etc.).
Hope this answers your question.
Edit (in answer to comment)
The proxy can act as a cache. For frequently requested resources (REST services), it can serve the same response to several users; your service will be called just once, even if there are 100 requests for that resource.
But this depends on how your services are really used, so you need to track requests to know whether caching helps in your case.
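As a toy illustration of that caching behaviour (not production code; the upstream address and TTL are assumptions), a proxy in front of a REST service could serve repeated GETs from memory so the backing service is only called once per cache window:

import { createServer, request as forwardRequest } from "http";

const UPSTREAM = { host: "127.0.0.1", port: 8080 }; // assumed REST service location
const TTL_MS = 5000;                                // assumed cache window
const cache = new Map<string, { body: Buffer; expires: number }>();

createServer((req, res) => {
  const key = req.url ?? "/";
  const hit = cache.get(key);
  if (req.method === "GET" && hit && hit.expires > Date.now()) {
    res.end(hit.body);                              // served from cache; upstream not called
    return;
  }
  const upstream = forwardRequest(
    { ...UPSTREAM, path: key, method: req.method },
    (upRes) => {
      const chunks: Buffer[] = [];
      upRes.on("data", (c) => chunks.push(c));
      upRes.on("end", () => {
        const body = Buffer.concat(chunks);
        if (req.method === "GET") cache.set(key, { body, expires: Date.now() + TTL_MS });
        res.writeHead(upRes.statusCode ?? 200);
        res.end(body);
      });
    }
  );
  req.pipe(upstream);
}).listen(3000);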
How many users do you have?
How many web services?
What kind of data/resources are served?
How fast are your services (individually)?
What is the network performance? (LAN? WAN? Internet? Mobile?)
How many servers and applications serve your users?
Do you encounter any network load problems?
A proxy cannot "accelerate" your existing services, but it can enhance the way you serve the resources to your clients.
Do not use a proxy if you do not know whether you need it. You must know what your actual system architecture is and where its weaknesses and bottlenecks are.
I am looking to implement a service (web/windows, .net) that maintains a list of available services and can provide an endpoint based on the nature or type of request. The requester can then pass the actual work request to the provided endpoint. The actual work requests can contain very large chunks (from 10MB up to and possibly exceeding a GB) of data.
The WCF Routing Service sounds like a perfect fit, but turns out not to be, because it requires the actual work request to pass through it, creating a bottleneck at the routing service (the whole point is to get a system that can scale out). If I had smaller messages, WCF routing would be a no-brainer.
Is there anything out there that fits the bill? Preferably .NET/windows based?
Do you mean because the requests block for work?
You could use a OneWay OperationContract to create async services so as not to block the request pool.
using System.ServiceModel;

[ServiceContract]
public interface IMyContract
{
    // One-way operations return to the caller immediately, so the request pool isn't blocked
    [OperationContract(IsOneWay = true)]
    void DoWork();
}
Update
I think I understand your question better now: you are looking to distribute load across different servers to avoid request bottlenecks under heavy traffic (preferably distributing based on content).
I'd say that WCF Routing is indeed ideal for this. One of the features you can leverage is the failover functionality: you can define multiple backup endpoints, and when one fails, it automatically moves over to the next. There's a good introduction to how this works here.
There's also a good article here that talks about load balancing with WCF using the same principles. It provides two solutions for a round-robin filter implementation that lets you load-balance the service requests (even though at the beginning he says his general answer to whether it supports load balancing is no, for implementation reasons).
If you are worried about all requests routing via the one server and it still becoming a bottleneck, then think of web load balancers: it's the same scenario. Sitting in the middle forwarding packets doesn't require much work, and they have no problem handling huge volumes of traffic. I don't think this will be an issue.
We have a C++ (SOAP-based) web service, deployed using Systinet C++ Server, that has a single port for all the incoming connections from the Java front-end.
However, recently in the production environment, when it was tested with around 150 connections, the service went down, and hence I wonder how to achieve load balancing in a C++ SOAP-based web service.
Is the service accessed as SOAP/HTTP?
Then you create several instances of your service and put some kind of router between your clients and the web service to distribute the requests across the instances. Often people use dedicated hardware routers for that purpose.
Note that this is often not truly load "balancing", in that the router can be pretty dumb, for example just using a simple round-robin algorithm. Such simple approaches can be pretty effective.
I hope that your services are stateless; that simplifies things. If individual clients must maintain affinity to a particular instance, things get a little trickier.
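To illustrate the round-robin idea from above (a toy sketch, not specific to Systinet; the instance addresses are assumptions): because SOAP/HTTP is plain HTTP, the router only needs to forward requests, not understand their payload.

import { createServer, request as forward } from "http";

// Assumed addresses of the service instances behind the router.
const instances = [
  { host: "10.0.0.11", port: 8080 },
  { host: "10.0.0.12", port: 8080 },
];
let next = 0;

createServer((req, res) => {
  const target = instances[next];
  next = (next + 1) % instances.length; // dumb round-robin selection
  const upstream = forward(
    { host: target.host, port: target.port, path: req.url, method: req.method, headers: req.headers },
    (upRes) => {
      res.writeHead(upRes.statusCode ?? 502, upRes.headers);
      upRes.pipe(res);
    }
  );
  req.pipe(upstream);
}).listen(80);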