Why don't WebSockets work in the cloud? - amazon-web-services

I developed our WebSocket project on WildFly. When we test it on localhost or within our local network, everything works fine. But when I deployed it on AWS, WebSockets no longer work. We can access other HTML pages, but when we connect to "ws://ip/project location", Chrome just reports a handshake error. I have experienced the same WebSocket problem on Jelastic hosting too. My questions are:
Why is it happening like this?
Is the WebSocket protocol not stable enough?
Is there any suitable hosting for WebSocket projects in Java?

So far, most load balancers don't forward the WebSocket headers. To make WS work you need a public IP address and no other services in front of your application.
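The handshake depends on hop-by-hop headers that any intermediary must pass through. A minimal Node.js sketch of what the server needs to see on an upgrade request (just for illustration):
var http = require("http");

var server = http.createServer(function (req, res) { res.end(); });

server.on("upgrade", function (req, socket) {
  // if a proxy or balancer strips these, the handshake fails
  console.log(req.headers["connection"]);        // "Upgrade"
  console.log(req.headers["upgrade"]);           // "websocket"
  console.log(req.headers["sec-websocket-key"]); // client nonce
  socket.end(); // a real server would complete the handshake here
});

server.listen(8080);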

I suggest you try deploying to the cloud provider Heroku: their sample app code using Node.js and WebSockets will get you up and running quickly. A locally running WebSocket app that uses a specific port, say 8888, will run fine on Heroku with:
var port = process.env.PORT || 8888;
as Heroku internally will deploy your app with a run-time generated port, visible via PORT.
If you are using Node.js with WebSockets, I suggest using the einaros ws implementation:
var WebSocketServer = require("ws").Server;
which seamlessly handles the notion of the ws port vs. the HTTP port.
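Putting the two together, a minimal ws echo server that picks up Heroku's run-time port (a sketch; the echo behaviour is just for illustration):
// minimal sketch: a ws echo server bound to Heroku's runtime port
var WebSocketServer = require("ws").Server;

var port = process.env.PORT || 8888; // Heroku injects PORT at run time
var wss = new WebSocketServer({ port: port });

wss.on("connection", function (ws) {
  ws.on("message", function (message) {
    ws.send(message); // echo back, just to show the round trip works
  });
});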

Currently ELB doesn't support WebSockets in HTTP mode. To be able to handle WebSockets you need to configure the ELB in TCP mode (the payload of the TCP connection is sent directly to the server, so the ELB doesn't interfere with the HTTP and WS flow). With this setup you won't be able to see the caller's IP.
Without the ELB, WebSockets work perfectly (AWS only sees IP traffic and the OS only TCP); we didn't change anything from a plain old HTTP server in order to use WS (except the WS handling code in the web server).
To know whether you are using an ELB, look at your bill: AWS can provide you with a lot of very interesting services, for a fee.
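To see the caller-IP effect concretely, a minimal Node.js sketch: behind a TCP-mode ELB the logged address is the balancer's, not the client's (unless you enable and parse the PROXY protocol):
// minimal sketch: log the peer address of each request; behind a
// TCP-mode ELB this shows the load balancer's IP, not the real caller
var http = require("http");

http.createServer(function (req, res) {
  console.log("connection from", req.socket.remoteAddress);
  res.end("ok\n");
}).listen(8080);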

Related

I want to deploy a multi-tier web app into AWS but don't understand how to set it up

I was hoping someone might be able to explain how I would set up a multi-tiered web application. There is a database tier, an app tier, a web server tier, and then the client tier. I'm not exactly sure how to separate the app tier and the web server tier, since the app tier will be in a private subnet. I would have the client send requests directly to the app server, but the private subnet is a requirement, and having the app server separated from the web server is a requirement as well.
The only idea I have had is to serve the content from the web server and have the client send all requests to the same web server on another port, like port 3000. If a request arrives on that port, a Node app using Express forwards it to the app tier, since the web server can talk to the app server.
I did set up a small proof of concept doing this. The web server serves the content, and I have another Express app listening on port 3000; the client sends requests on port 3000 and the app simply relays them to the app server.
This is my current setup, with the web servers hosting two servers: one to serve the front-end on port 80 and one to receive requests on port 3000. The server listening on port 3000 forwards all requests to the app server's ALB (it's basically a copy of all the same routes on the app server, but it just forwards the requests instead of performing an action). But is there a way to avoid this extra hop in the middle, and get rid of the additional server listening on 3000 without exposing the internal ALB?
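For reference, the forwarding hop described above can be written in a few lines with the http-proxy-middleware package (a sketch; the internal ALB hostname is a placeholder):
// sketch of the "extra hop": an Express app on port 3000 that relays
// every request to the internal ALB (hostname is a placeholder)
var express = require("express");
var { createProxyMiddleware } = require("http-proxy-middleware");

var app = express();
app.use("/", createProxyMiddleware({
  target: "http://internal-app-alb.example.internal", // hypothetical ALB DNS name
  changeOrigin: true
}));
app.listen(3000);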
To separate your web servers and application servers, you can use a VPC with public and private subnets. In fact, this is such a common scenario that Amazon has already provided us with documentation.
As for a "better way to do this," I assume you mean security. Here are some options:
You can (and should) run host-based firewalls such as iptables on your hosts.
AWS also provides a variety of options.
You can use Security Groups, which are stateful firewalls for your hosts (see the sketch at the end of this answer).
You can also use Network Access Control Lists (ACLs), which are stateless firewalls used to control traffic in and out of subnets.
AWS would also argue that many shops can improve their security posture by using managed services, so that all of the patching and maintenance is handled by AWS. For example, static content could be hosted on Amazon S3, with dynamic content provided by microservices leveraging API Gateway. Finally, from a security perspective, AWS provides services like Trusted Advisor, which can help you find and fix common security misconfigurations.
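As a sketch of the security group approach (using the AWS SDK for JavaScript; both group IDs are placeholders), you can allow the app tier's port only from the web tier's security group rather than from the internet:
// sketch: allow port 3000 on the app tier only from the web tier's
// security group (both group IDs are hypothetical placeholders)
var AWS = require("aws-sdk");
var ec2 = new AWS.EC2({ region: "us-east-1" });

ec2.authorizeSecurityGroupIngress({
  GroupId: "sg-apptier00000000",
  IpPermissions: [{
    IpProtocol: "tcp",
    FromPort: 3000,
    ToPort: 3000,
    UserIdGroupPairs: [{ GroupId: "sg-webtier00000000" }] // only the web tier may connect
  }]
}, function (err) {
  if (err) console.error(err);
  else console.log("ingress rule added");
});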

Exposing a Kestrel server deployed as a WebJob for external interaction

I have deployed, as a WebJob, an application hosting a Kestrel server bound to a specific port. I want to access that port in order to reach the APIs implemented in that application.
If I try to bind to port 443 it fails; on other ports the server starts but can't receive external requests. Is there any way I can expose this port to listen for incoming requests?
Azure Web Apps only support ports 443 and 80, and WebJobs are hosted in Azure App Service.
After a lot of searching and experimenting, I can tell you with certainty that other ports cannot be used.
For more details, you can read the posts below.
Opening ports to Azure Web Job
Is it possible to use an Azure Web Job to listen on a public socket
The posts above describe the port restrictions on WebJobs.
Since you want the WebJob to monitor and process incoming requests, my suggestion is that the WebJob listen on ports 443 and 80 instead of binding to another port. You can use a raw socket.
Monitor all requests, analyze whether the request content contains instructions that need to be executed, and then proceed to the next business operation.
If you already have a completed project, you can also choose a VM or Cloud Services.

How can I enable API requests and MongoDB access only for the app server?

We are working on an app whose front-end has been decoupled from the back-end.
We have two project packages. The first acts as the front-end for the app and interacts with the second, which acts as the back-end, via an API.
Front-end is built with:
React
Redux
Back-end is built with:
ExpressJS
MongoDB
We have deployed the app successfully on an AWS EC2 instance, but I am doubtful about the inbound security measures we have applied for the packages. Both packages are deployed on the same EC2 instance.
The front end of the app can be accessed at https://xxx.xxx.x.xxx:8080. In the security group's inbound rules, I added a custom TCP rule for port 8080 with the source set to Anywhere.
I did the same for port 3000, reserved for the back-end API server, and for port 27017, reserved for MongoDB.
What I actually want to do is only let the front-end package running on port 8080 talk to the API server, and the API server in turn talk to MongoDB.
I do not want everyone to have access to the backend server and MongoDB except for the front-end app server.
Note that I have already secured the API with JWT tokens; this restriction is to add an extra layer of security.
How can I limit access to only the front-end app server?
Thanks in anticipation.
Sorry, are you accessing your MongoDB directly from the front-end? I hope not...
You should configure a firewall on your server (firewalld or iptables) to block MongoDB and all other internal ports from being accessed from the web.
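Beyond the firewall, since both packages run on the same instance, a simple sketch of the idea is to keep MongoDB reachable over loopback only: close port 27017 in the security group, set mongod's bindIp to 127.0.0.1, and have the API server connect locally (the database name here is a placeholder):
// sketch: the API server connects to MongoDB over loopback; with port
// 27017 closed in the security group and mongod bound to 127.0.0.1,
// nothing outside this instance can reach the database
var mongoose = require("mongoose");

mongoose.connect("mongodb://127.0.0.1:27017/myapp"); // "myapp" is a placeholder name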

Using Primus.io (websockets) behind AWS Elastic Load Balancer

I am trying to set up an Elastic Load Balancer to route requests to a cluster of Node.js servers running Primus.io with SockJS to manage real-time communications.
I have set up the load balancer to listen with the following configuration:
HTTPS 8084 -> HTTPS 8084 (The port used on my node.js servers)
SSL 443 -> TCP 80
My understanding is that the only way to get WebSockets to work through an ELB is via SSL->TCP, hence the above configuration.
I have correctly enabled the new proxy protocol for ELB as described here:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
When trying to connect to the server from a client, an HTTPS request is initially sent, and from what I can gather it should then be upgraded to WebSockets. But the request simply fails when I send it to the load balancer address.
If I send the initial Primus connection request to the IP of a single Node.js server, like so:
var primus = new Primus('https://ip.address.of.single.server:8084');
The request returns correctly and is upgraded to WebSockets.
When I switch the IP address to that of the balancer, it fails and the initial HTTPS request to the Node.js server returns nothing. I assume this means the WebSocket transfer could not be established, but to be honest I have little experience in this area, so I could be completely wrong.
Does anyone have any idea what I am doing wrong?
Thanks in advance
Have you clustered your Node.js instances? For example, if you use Socket.IO you should use a clustered session store. I'm actually investigating the same thing at the moment, with SockJS running on top of Vert.x.
The problem behind this is that Amazon ELB won't respect any previous forwards (in contrast to sticky sessions on top of HTTP), which means a TCP-level connection can be forwarded to any of the cluster's nodes. A single TCP channel on its own would be okay, but frameworks like Socket.IO do a little more to support sessions (which don't exist in plain WebSockets) and multiple transport layers (HTTP, polling, sockets, and so on).
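For example, with Socket.IO a Redis-backed adapter lets any node in the cluster pick up any connection (a sketch using the socket.io-redis package; the Redis host and port are assumptions):
// sketch: share connection state across clustered Socket.IO nodes via
// Redis, so it doesn't matter which node the ELB forwards a client to
var io = require("socket.io")(8084);
var redisAdapter = require("socket.io-redis");

io.adapter(redisAdapter({ host: "localhost", port: 6379 })); // assumed Redis location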

c++ http tcp server to server connection

I am trying to turn the server/client model into a server/server model, so as to have my two computers running the program find each other by a URL or something else like an IP address. I was wondering if it is possible for two servers to connect via URLs, or is IP the only way? Examples would be appreciated since this is my second day writing C++.
For HTTP, the server only talks to clients, so I am not sure what you mean by server to server.
URLs are fine for accessing an HTTP server, but the host name will need to be resolved into an IP address before a network connection can actually be established. You should be able to find libraries that will handle those details for you, but it is not hard to manually establish a socket connection to an HTTP server.
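The resolve-then-connect flow looks like this (shown in Node.js for brevity; in C++ the equivalent steps are getaddrinfo() followed by connect(), and example.com is a placeholder host):
// minimal sketch: resolve a host name from a URL into an IP address,
// then open a plain TCP connection to the HTTP port
var dns = require("dns");
var net = require("net");

dns.lookup("example.com", function (err, address) {
  if (err) throw err;
  console.log("resolved to", address);

  var socket = net.connect(80, address, function () {
    // send a bare-bones HTTP request over the raw socket
    socket.end("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
  });
  socket.on("data", function (chunk) {
    process.stdout.write(chunk);
  });
});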
There are configurations where there are multiple servers, acting as a single server. These are sometimes referred to as web farms or a HTTP cluster. Typically, there is some sort of load balancer in front of the cluster. Many HTTP load balancers support a server affinity feature to make sure a client is sent to the same server in the cluster for subsequent operations.
In a cluster configuration, servers may need to synchronize shared state, such as file system data or configuration data. This is typically handled by some mechanism that is external to the HTTP server process itself. The HTTP server process may need to cooperate with the synchronization, but this can be as simple as restarting the process.
There is another mode of HTTP server configuration called a reverse proxy configuration. A cluster of HTTP proxy servers sits in front of a single HTTP server. The proxy servers are treated as cheap, expendable entities that offload work from the HTTP server itself, providing a scalable means of increasing HTTP server capacity.
There are many open-source HTTP server and proxy projects available as examples of how these are implemented. If you are trying to build your own custom server application, you can have a look at the HTTP examples in Boost.Asio.