Generally we point APIM at a single hostname in DNS, and then use DNS round robin to resolve that hostname to multiple hosts.
            /--host1
---[APIM]------host2
            \--host3
Will this work with websockets? To be clear: if we have a single host set up in APIM as a websocket server, and that host actually resolves to 3 different hosts via DNS, will this work with WSO2 APIM?
Yes, that should work, but the load balancing will happen at the connection level, not at the frame level. That means that if you have 3 clients connecting, each of those 3 connections will stick to the host it was resolved to for its lifetime.
Related
I want to develop the parts of my application separately (API, JOBS, WEB), so that it ends up like this:
API: api.myaddress.com
JOBS: jobs.myaddress.com
WEB: myaddress.com
I know how to do that with distinct instances on Amazon and Google Compute Engine; however, I was wondering if I could set up a single instance to do all of that, with each DNS name going to a different port on that machine, like:
api.myaddress.com resides in xxx.xxx.xxx.xxx:8090
jobs.myaddress.com resides in xxx.xxx.xxx.xxx:8080
myaddress.com resides in xxx.xxx.xxx.xxx:80
Also, if that is possible, I don't know where I should configure it (is it in DNS, or a specific setup on my instance in Amazon/Google?).
Why do you want them to go to different ports? It's certainly not necessary. You can use DNS to point all of those domains/subdomains to a single server/IP address, and then through your webserver configuration bind the various subdomain names to each particular website on that server.
In IIS you bind in the IIS Manager tool, and Apache has a similar ability:
http://httpd.apache.org/docs/2.2/vhosts/examples.html
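If the three parts happen to be Node.js apps rather than IIS/Apache sites, the same name-based idea can also be handled inside a single Node process by looking at the Host header. A minimal sketch, assuming the hostnames from the question and using hypothetical placeholder handlers for the three parts:

// Single Node.js process on port 80, dispatching by Host header.
// apiHandler/jobsHandler/webHandler are hypothetical placeholders.
var http = require('http');

function apiHandler(req, res)  { res.end('api response');  }
function jobsHandler(req, res) { res.end('jobs response'); }
function webHandler(req, res)  { res.end('web response');  }

http.createServer(function (req, res) {
  var host = (req.headers.host || '').split(':')[0]; // drop any :port suffix

  if (host === 'api.myaddress.com') {
    apiHandler(req, res);
  } else if (host === 'jobs.myaddress.com') {
    jobsHandler(req, res);
  } else {
    webHandler(req, res); // myaddress.com and anything unmatched
  }
}).listen(80);

This only shows the dispatch concept; Apache/Nginx virtual hosts give you the same thing plus logging, TLS handling and so on for free.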
It sounds like what you are looking for is an HTTP reverse proxy. This would be a web server on your machine that binds to port 80 and, based on the incoming Host: header (or other criteria) it forwards the request to the appropriate Node.js instance, each of which is bound to a (different) port of its own.
There are several alternatives. A couple that immediately come to mind are HAProxy and Nginx.
DNS cannot be used to control which port a request arrives at.
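For illustration, here is roughly what that reverse proxy looks like in plain Node.js, assuming the three apps already listen on local ports (8090, 8080, and a hypothetical 3000 for the web app, since the proxy itself now owns port 80). In practice you would use Nginx or HAProxy as suggested above; this sketch also doesn't handle WebSocket upgrades, which is one more reason to reach for a real proxy:

// Tiny Host-header based reverse proxy in plain Node.js.
var http = require('http');

// Hostname -> local port of the Node process that serves it.
var routes = {
  'api.myaddress.com':  8090,
  'jobs.myaddress.com': 8080,
  'myaddress.com':      3000
};

http.createServer(function (req, res) {
  var host = (req.headers.host || '').split(':')[0];
  var port = routes[host] || routes['myaddress.com'];

  // Forward the request to the chosen backend and stream the response back.
  var upstream = http.request({
    host: '127.0.0.1',
    port: port,
    path: req.url,
    method: req.method,
    headers: req.headers
  }, function (backendRes) {
    res.writeHead(backendRes.statusCode, backendRes.headers);
    backendRes.pipe(res);
  });

  req.pipe(upstream);
}).listen(80);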
Another approach, which is (arguably) unconventional but would nonetheless work, is to set up 3 CloudFront distributions, one for each hostname. Each distribution forwards requests to an "origin server" (your server), and the destination port can be specified for each one. Since CloudFront is primarily intended as a caching service, you would need to return Cache-Control: headers from Node to disable that caching where appropriate... but you could also see some performance improvements on responses that CloudFront can be allowed to cache for you.
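If you try the CloudFront route, the Cache-Control part is just a matter of setting the header per route from Node. A small sketch (the URL prefix and max-age value are only examples, not anything from the question):

var http = require('http');

http.createServer(function (req, res) {
  if (req.url.indexOf('/static/') === 0) {
    // Safe to let CloudFront cache these for an hour.
    res.setHeader('Cache-Control', 'public, max-age=3600');
  } else {
    // Dynamic responses: force CloudFront back to the origin every time.
    res.setHeader('Cache-Control', 'private, no-cache, no-store');
  }
  res.end('ok');
}).listen(8090);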
What you are looking for is a load balancer (ELB in the case of Amazon).
Set up the load balancers to send traffic to the different ports, and at the DNS level set up CNAMEs for your services that point to the 3 load balancers you have.
Does AWS support websockets with SSL?
Can AWS ELB be used for websockets over SSL?
What happens when an EC2 instance (machine) is added to or removed from this ELB? Especially removed: if a machine goes down, are the existing sockets routed to another machine, or are they reset and have to reconnect?
Can ELB become a bottleneck at any point in time?
Any other alternatives? Let me know.
This link might prove partially helpful for you - it would appear that you can do web sockets over SSL, but currently I'm struggling to implement it.
StackOverflow - Websocket with Tomcat 7 on AWS Elastic Beanstalk
Currently AWS ELB doesn't support websocket balancing. There is a trick to do it via SSL, but it has some limitations and depends on your app logic: if the websocket connection is used only for server-client communication, it will work. But if you have more advanced logic where clients must communicate with each other via the server, this solution won't work. For example, one client establishes a connection for a chatroom, then other clients connect to that chatroom and communicate with each other.
Then the only possible way is to use HAProxy: http://blog.haproxy.com/2012/11/07/websockets-load-balancing-with-haproxy/
But that example only shows how to configure HAProxy on top of two servers, so if you do not use an Amazon Auto Scaling Group the solution is good. If you do need an ASG, adding and removing instances in the HAProxy config is another challenge.
I am trying to set up an Elastic Load Balancer to route requests to a cluster of node.js servers running Primus.io with sockjs to manage real-time communications.
I have set up the load balancer to listen with the following configuration:
HTTPS 8084 -> HTTPS 8084 (The port used on my node.js servers)
SSL 443 -> TCP 80
My understanding is that the only way to get websockets to work through ELB is via SSL->TCP, hence the above configuration.
I have correctly enabled the new proxy protocol for ELB as described here:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
When trying to connect to the server from a client, an HTTPS request is sent initially and then, from what I can gather, it should be upgraded to websockets. But the request simply fails when I send it to the load balancer address.
If I send the initial Primus connection request to the ip of a single nodejs server like so:
var primus = new Primus('https://ip.address.of.single.server:8084');
The request is correctly returned and is upgraded to websockets correctly.
When I switch the IP address to that of the balancer, it fails and the initial HTTPS request to the node.js server returns nothing. I assume this means that the websocket connection could not be established, but to be honest I have little experience in this area so could be completely wrong.
Does anyone have any idea what I am doing wrong?
Thanks in advance
Have you clustered your NodeJS instances? For example, if you use SocketIO you should use a clustered session store. I'm actually investigating the same thing at the moment, with SockJS running on top of Vert.x.
The underlying problem is that Amazon ELB won't take previous forwards into account (in contrast to sticky sessions on top of HTTP), which means that connections at the TCP level can be forwarded to any node of the cluster. A single TCP channel on its own would be okay, but frameworks like SocketIO do a little more than that to support sessions (which do not exist in plain WebSockets) and multiple transport layers (HTTP, polling, sockets, and so on).
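If the cluster runs SocketIO (the example used above), the usual way to get a "clustered session store" is a shared adapter such as socket.io-redis, so that rooms and broadcasts work no matter which instance a client landed on. A minimal sketch, assuming a Redis host reachable from every instance (the host name and ports here are illustrative):

// Each Node instance behind the ELB runs this; the Redis adapter lets
// broadcasts and rooms span instances even though ELB may send each
// TCP connection to a different node.
var http = require('http');
var socketio = require('socket.io');
var redisAdapter = require('socket.io-redis');

var server = http.createServer();
var io = socketio(server);

// Shared pub/sub backend so all instances see the same rooms.
io.adapter(redisAdapter({ host: 'redis.internal', port: 6379 }));

io.on('connection', function (socket) {
  socket.on('chat', function (msg) {
    io.emit('chat', msg); // delivered to clients on all instances
  });
});

server.listen(8084);

Note that with the HTTP polling transports you generally still need sticky sessions in front of this; the adapter only takes care of fanning events out across instances.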
I am trying to turn the server/client model into a server/server model, so that my 2 computers running the program can find each other, perhaps by a URL or something else like an IP address. I was wondering if it is possible for 2 servers to connect via URLs, or is IP the only way? Examples would be appreciated, since this is my second day writing C++.
For HTTP, the server only talks to clients. So, I am not sure what you mean by server to server.
URLs are fine to use to access an HTTP server, but the host name will need to be resolved into an IP address before a network connection can actually be established. You should be able to find libraries that will do those details for you, but it is not hard to manually establish a socket connection to an HTTP server.
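The question is about C++ (Boost asio, mentioned below, covers these same steps), but the sequence is the same in any language: pull the hostname out of the URL, resolve it to an IP, open a TCP socket, and speak HTTP over it. Here is the idea sketched in Node.js, with a hypothetical host name:

// Resolve a hostname to an IP, open a raw TCP socket, and send a
// minimal HTTP request by hand: the same steps a URL library hides.
var dns = require('dns');
var net = require('net');

var host = 'example.com'; // hypothetical host name taken from a URL

dns.lookup(host, function (err, address) {
  if (err) throw err;
  console.log(host + ' resolved to ' + address);

  var socket = net.connect(80, address, function () {
    socket.write('GET / HTTP/1.1\r\n' +
                 'Host: ' + host + '\r\n' +
                 'Connection: close\r\n\r\n');
  });

  socket.on('data', function (chunk) {
    process.stdout.write(chunk); // raw HTTP response, status line and headers included
  });
});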
There are configurations where there are multiple servers acting as a single server. These are sometimes referred to as web farms or an HTTP cluster. Typically, there is some sort of load balancer in front of the cluster. Many HTTP load balancers support a server affinity feature to make sure a client is sent to the same server in the cluster for subsequent operations.
In a cluster configuration, servers may need to synchronize shared state, such as file system data or configuration data. This is typically handled by some mechanism that is external to the HTTP server process itself. The HTTP server process may need to cooperate with the synchronization, but this can be as simple as restarting the process.
There is another mode of HTTP server configuration called a reverse proxy configuration. A cluster of HTTP proxy servers sits in front of a single HTTP server. The proxy servers are thought of as cheap and expendable entities that offload work from the HTTP server itself, providing a scalable means to increase HTTP server capacity.
There are many open source HTTP server and proxy projects available as examples of how they are implemented. If you are trying to build your own custom server application, you can have a look at the HTTP examples in Boost asio.
We just made our web system more secure by converting a single web server/database server into a 2 tier system with the webserver in front of the database server. The webserver has 2 NIC's, one for the outside world and one for an internal network. The database server has one NIC for the inside network.
In the old days, I could use Navicat's SSH feature to connect to the single web server/database server. Now the database server is hidden.
Using the command line I can ssh to webserver and then ssh into database server. But I miss my graphical tools. Is there any way to get Navicat to connect to the database server? Is there something I can set up on the webserver that will proxy to the database?
Short answer: You shouldn't connect to the database server through the web server. Yes, there are ways you could set this up, but I wouldn't recommend it if your goal is increased security.
There ought to be a way for you to VPN into the internal network, and then ssh to both hosts from there. The security benefit is largely in reducing the attack surface on your externally accessible machines, so you'd be better off turning off ssh entirely on the external interface and VPN-ing into the internal network instead (which I hope is firewalled to allow only database traffic between the two servers, rather than the web server simply having a NIC with full access to your internal network!). Once you're on the internal network you can have Navicat connect directly to the server, without the need for ssh tunneling. (Obviously you'd need to set the firewall policies on your VPN tunnel correctly to allow this.)
If this setup is not possible, such as if you're using a low-end shared webhost, see these instructions to set up an HTTP Tunneling connection through the webhost. I really would recommend using the VPN solution if you can, but if you can't, HTTP Tunneling is the most secure way to support connecting directly through the web server to the db server.