Does AWS support WebSockets with SSL?
Can AWS ELB be used for WebSockets over SSL?
What happens when an EC2 instance (machine) is added to or removed from this ELB? Especially removed: if a machine goes down, are the existing sockets routed to another machine, or are they reset and forced to reconnect?
Can the ELB become a bottleneck at any point in time?
Are there any other alternatives? Let me know.
This link might prove partially helpful for you - it would appear that you can do WebSockets over SSL, but currently I'm struggling to implement it.
StackOverflow - Websocket with Tomcat 7 on AWS Elastic Beanstalk
Currently AWS ELB doesn't support WebSocket balancing. There is a trick to do it via SSL, but it has limitations and depends on your app logic. If the WebSocket connection is used only for server-client communication, it will work. But if you have more advanced logic where clients must communicate with each other via the server, this solution won't work; for example, one client establishes a connection for a chatroom, and then other clients connect to that chatroom and communicate with each other.
Then the only viable option is HAProxy: http://blog.haproxy.com/2012/11/07/websockets-load-balancing-with-haproxy/
But the linked example only shows how to configure HAProxy with two backend servers. If you do not use an Amazon Auto Scaling Group, that solution is fine. But if you need an ASG, adding and removing instances in the HAProxy config becomes another challenge.
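To illustrate the server-client-only case mentioned above, here is a minimal sketch of a JSR-356 (javax.websocket) endpoint, assuming a container with JSR-356 support (e.g. a recent Tomcat); the /echo path is just an example. An endpoint like this is self-contained per connection, which is why it balances fine behind an SSL/TCP listener, while the chatroom case needs to reach sessions that may live on other instances and therefore breaks down.

import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Each WebSocket connection is pinned to whichever instance the ELB's
// SSL/TCP listener routed it to. Because this endpoint only ever talks
// back to its own client, no cross-instance coordination is needed.
@ServerEndpoint("/echo")
public class EchoEndpoint {

    @OnMessage
    public String onMessage(String message, Session session) {
        // The return value is sent straight back over the same connection.
        return "echo: " + message;
    }
}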
Is it possible to use AWS Application Loadbalancer for RSocket?
An AWS Application Load Balancer can also be used for WebSocket connections, and my project uses RSocket with WebSocket as its transport. This made me wonder if it is possible to use this load balancer for RSocket as well.
On one hand I would think it is possible, as the load balancer only receives a connection and passes it on to the target RSocket server.
On the other hand, if all RSocket frames go through the load balancer, it might not know how to handle these frames, which would make it unusable.
I couldn't find much about RSocket and load balancing online besides this post. But that is client-side load balancing, and I was looking for server-side load balancing.
And this post. But it uses LoadBalanceSocketClient, while I want to find out whether an AWS Application Load Balancer can be used.
Here follows a simple diagram of what I would like to have (if possible):
The RSocket client connects to the load balancer, which passes the connection to an RSocket server (for example server A). Then the client and RSocket server A can communicate.
AWS will see this as a typical WebSocket service. So as long as it lets HTTP/1.1 connections through and lets them upgrade to WebSocket, there shouldn't be a problem. This is very standard, so it shouldn't be an issue. Ideally it won't see individual frames of the traffic, and your app will handle all frames on a single WebSocket connection. It does look like the API Gateway support deals with individual messages, though: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-set-up-websocket-deployment.html. You should ignore the RSocket client load balancing and focus on AWS WebSocket routing.
As an example, with GCP (instead of AWS) the complexity is that this bumps you up from AppEngine Standard to Flexible. The demo site https://demo.rsocket.io/ is deployed to GCP and exposes websockets.
The additional kink is that you possibly want stateful routing if you want client resumption.
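For reference, here is a minimal sketch of an RSocket responder bound to the WebSocket transport, assuming the rsocket-java library (rsocket-core plus rsocket-transport-netty); the port and the echo logic are placeholders. From the ALB's point of view this is just an HTTP/1.1 connection that upgrades to WebSocket, so the server can sit in a normal target group.

import io.rsocket.SocketAcceptor;
import io.rsocket.core.RSocketServer;
import io.rsocket.transport.netty.server.WebsocketServerTransport;
import io.rsocket.util.DefaultPayload;
import reactor.core.publisher.Mono;

public class RSocketWebSocketServer {
    public static void main(String[] args) {
        // Bind an RSocket responder to the WebSocket transport on port 8080.
        // The ALB only sees the HTTP/1.1 upgrade; after that, RSocket frames
        // travel as ordinary WebSocket messages over the same connection.
        RSocketServer.create(SocketAcceptor.forRequestResponse(payload ->
                Mono.just(DefaultPayload.create("echo: " + payload.getDataUtf8()))))
            .bind(WebsocketServerTransport.create("0.0.0.0", 8080))
            .block()
            .onClose()
            .block();
    }
}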
I am developing a WebSocket application and I have doubts about how to make it scalable.
1- Should I use Nginx? And if so, where does Nginx sit? It would be like this:
ELB -> Nginx -> Ec2 instances
or
Nginx -> ELB -> Ec2 instances
2- Is it necessary to use a service like Redis for the communication between servers? Example: I am connected to server1 and my friend is connected to server2, but we are in the same chat room. If I send a message, it needs to reach my friend.
3 - Is it possible to have my ELB receive only HTTPS calls while the conversation with the backend is over HTTP? I ask because I use OpsWorks and it was very hard to adapt the cookbooks to create my environment.
Thank you.
Generally the architecture looks like:
ALB --> nginx1, nginx2 --> ALB --> EC2 WebSocket server1, server2
This allows your web servers and app servers to be load balanced independently of each other.
Not necessarily. Redis is primarily an in-memory data store used for caching, though its pub/sub feature is one common way to relay messages between servers if you do need the cross-server chat scenario you describe (a minimal sketch follows after this answer).
Yes. You can terminate SSL on the ALB, and it is in fact recommended to do it this way in order to offload SSL processing to the load balancer instead of doing it on the instances themselves. See https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html . An additional benefit is that you can use ACM to issue certificates for free and deploy them on the ALB; ACM can handle renewals for you automatically as well.
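On point 2: if you do end up needing the cross-server chat scenario from the question, a minimal Jedis pub/sub sketch could look like the following, assuming a recent Jedis version; the Redis host name and the channel name are placeholders. Each WebSocket server subscribes to the room's channel and forwards anything it receives to its own connected clients, so a publish from any server reaches every client regardless of which server it is attached to.

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class ChatRelay {
    public static void main(String[] args) throws Exception {
        // Each WebSocket server subscribes to the room's channel...
        new Thread(() -> {
            try (Jedis subscriber = new Jedis("redis.internal.example", 6379)) {
                subscriber.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        // ...and would forward the message to its own connected clients here.
                        System.out.println(channel + ": " + message);
                    }
                }, "room:1");
            }
        }).start();

        Thread.sleep(500); // crude wait so the subscription is active first (demo only)

        // Publishing from any server reaches the subscribers on every server.
        try (Jedis publisher = new Jedis("redis.internal.example", 6379)) {
            publisher.publish("room:1", "hello from server1");
        }
    }
}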
I am trying to figure out the best way to deploy a TCP/IP and UDP service on Amazon AWS.
I did some research before asking and could not find anything. I found other protocols like HTTP and MQTT, but nothing for plain TCP or UDP.
I need to refactor a GPS tracking service currently running on Amazon EC2. The GPS devices send position data using the UDP and TCP protocols. Every time a message is received, the server has to respond with an ACKNOWLEDGE message confirming reception to the GPS device.
The problem I am facing right now, and the motivation to refactor, is:
When the traffic increases, the server is not able to keep up with all the messages.
I tried to solve this issue with a load balancer and autoscaling, but UDP is not supported.
I was wondering if there is something like API Gateway that would give me a TCP or UDP endpoint, leave the message on an SQS queue, and process it with a Lambda function.
Thanks in advance!
Your question really doesn't make a lot of sense - you are asking how to run a service without running a server.
If you have reached the limits of a single instance, and you need to grow, look at using the AWS Network Load Balancer with an autoscaled group of EC2 instances. However, this will not support UDP - if you really need that, then you may have to look at 3rd party support in the AWS Marketplace.
Edit: Serverless architectures are designed for HTTP-based applications, where you send a request and get a response. Since your app is TCP-based and uses persistent connections, most existing serverless implementations simply won't support it. You will need to rewrite your app to support HTTP, or use traditional server-based infrastructure that can support persistent connections.
Edit #2: As of Dec. 2018, API Gateway supports WebSockets. This probably doesn't help with the original question, but it opens up other alternatives if you need to run Lambda code behind a long-running connection.
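For the Network Load Balancer plus autoscaled EC2 route suggested above, each instance would run something along these lines for the TCP side: accept a connection, read a position report, and write back the acknowledgement the devices expect. This is only a sketch; port 5000, the line-based framing, and the "ACK" string are assumptions about your device protocol.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpAckServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5000)) {
            while (true) {
                Socket device = server.accept();
                // One thread per device keeps the example simple; a real
                // tracker backend would use a thread pool or NIO.
                new Thread(() -> handle(device)).start();
            }
        }
    }

    private static void handle(Socket device) {
        try (Socket s = device;
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            String report;
            while ((report = in.readLine()) != null) {
                // Process/store the position report here, then confirm reception.
                out.println("ACK");
            }
        } catch (Exception e) {
            // Drop the connection on error; the device will reconnect.
        }
    }
}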
If you want to go more serverless, I think the ECS container service has instances that accept TCP and UDP. Also take a look at running Docker containers with Kubernetes. I am not sure if they support those protocols, but I believe they do.
If not, a few EC2 instances with load balancing may be your best bet.
I am setting up a Tomcat application in EC2. For reliability, I am running two or more instances. If one server goes down, my users should be redirected to the other instance. This suggests that session state should be kept in an external source, or mirrored between the servers.
AWS offers a hosted service, Elasticache, which seems like it would work well. I even found a nice library, memcached-session-manager. However, I soon ran into some issues.
Unless someone can convince me otherwise, I need the session states to be encrypted in transit. Otherwise someone could intercept the network traffic and pretend to be someone else on my site. I don't see any built-in Amazon method to keep traffic off the internet. (Is peering available here?)
The library mentioned earlier does have Redis support with SSL, but it does not support a Redis cluster. Someone put in a pull request for this but it has not been incorporated and this library is a complex build. I may talk myself into living without the cluster, but that puts us back at a single point of failure.
Tomcat is running on EC2 in your VPC, and ElastiCache is in your VPC. Your AWS VPC is an isolated network. Nobody can intercept the traffic between the EC2 and ElastiCache servers unless your VPC network becomes compromised in some way.
If you want to use Redis instead, with SSL connections, then I believe at this time you would need a Tomcat Session Manager implementation that uses Jedis. This one uses Jedis, but you would need to upgrade the version of Jedis it uses in order to use SSL connections.
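For the Redis-over-SSL route, the underlying Jedis call is straightforward once the session manager exposes it. A minimal connectivity check might look like the sketch below, assuming a TLS-enabled Redis endpoint; the host name and port 6380 are placeholders.

import redis.clients.jedis.Jedis;

public class RedisSslCheck {
    public static void main(String[] args) {
        // The third constructor argument enables an SSL/TLS connection to Redis.
        try (Jedis jedis = new Jedis("my-redis.example.internal", 6380, true)) {
            jedis.set("session:ping", "pong");
            System.out.println(jedis.get("session:ping"));
        }
    }
}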
Good Day,
I have been using AWS quite a bit for my cloud based system for a hardware project. Using SimpleDB and the notification service provided is great.
However, I need a backend on AWS that basically listens to incoming requests, processes them, and sends a response back to a particular address. Some kind of UDP service.
I could easily write a C#/C++ app for it, but I am not sure if I can host it on AWS. Does anyone know how this works?
Short answer: yes.
EC2 instances are just like any other virtual machine; obviously you can run a server that listens on UDP. Configuring the network for this is, of course, slightly more complicated, but possible. The one thing making it more complicated is that with UDP you will not be able to enjoy the load balancer service that Amazon offers, as it (currently) only supports TCP-based protocols.
So, if you have one server you wish to put on the internet, the procedure is probably the same as what you'd do with a TCP server: set up a server and an Elastic IP pointing to it, and then have your clients connect to it (by knowing the Elastic IP you've been allocated, or by referring to that IP via DNS resolution). If you have multiple servers you wish to set up, answering on the same address, life is a bit more complicated. With TCP, you could have set up an Amazon load balancer and assigned your Elastic IP to the load balancer. If you want a load balancer for UDP, the Amazon stock load balancer can't do that, but you can still find a software load balancer (there are hundreds of them in Amazon's public image library) to set up.
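As a sketch of the kind of listener you would run on such an instance: receive a datagram, process it, and send the response back to the sender's address. Port 9999 and the message format are placeholders, and you would need to open the chosen UDP port in the instance's security group.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;

public class UdpResponder {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(9999)) {
            byte[] buffer = new byte[1024];
            while (true) {
                DatagramPacket request = new DatagramPacket(buffer, buffer.length);
                socket.receive(request);

                String payload = new String(request.getData(), 0, request.getLength(),
                        StandardCharsets.UTF_8);
                // Process the request, then reply to the address/port it came from.
                byte[] reply = ("processed: " + payload).getBytes(StandardCharsets.UTF_8);
                socket.send(new DatagramPacket(reply, reply.length,
                        request.getAddress(), request.getPort()));
            }
        }
    }
}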
Nginx has an Amazon image that will load balance UDP for $2,500/yr, or you can launch your own EC2 instance and use open-source Nginx.
My specific use case was a UDP logging service; if you can use hostnames, Route 53 could be a scalable managed solution as well.