Using Socket.IO with HTTPS and AWS Elastic Beanstalk

I have a website hosted on AWS Elastic Beanstalk behind an nginx proxy and have been trying to use Socket.IO. However, when the page tries to establish the connection I get a 400 response code and this error: Firefox can’t establish a connection to the server at wss://my-domain.com/socket.io/?EIO=3&transport=websocket&sid=WKtOGhLspneTExRLAAAB.
This is how the node server is created and emits events:
const express = require("express");
const app = express();

// Parse JSON bodies so req.body.Status is available in the route below
app.use(express.json());

// Start the HTTP server and attach Socket.IO to it
const port = 8081;
const server = app.listen(port, () => {
  console.log("Server started on port " + port);
});
const io = require("socket.io")(server);

// Re-broadcast the posted status to every connected client
app.route("/api/listener").post((req, res) => {
  io.emit("response", req.body.Status);
  res.sendStatus(200);
});
This is how the client attempts to connect:
const io = require("socket.io-client");
let socket = io("https://my-domain.com");
socket.on("response",
function() {
console.log("Response received");
});
This is the configuration for my load balancer:
I have tried suggestions from other similar questions, like changing the listener protocol to SSL and the instance protocol to TCP, but that makes the site unreachable. I do have proxy_set_header Connection "upgrade" and proxy_set_header Upgrade $http_upgrade set in my nginx configuration.
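For illustration, a minimal nginx location block with those headers set looks like this (port 8081 matches the server code above; the rest is a generic sketch, not my exact config):

location /socket.io/ {
    proxy_pass http://127.0.0.1:8081;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}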

I found a solution for my specific situation. My previous Elastic Beanstalk environment was set up with a Classic Load Balancer; I migrated the website to a new Elastic Beanstalk environment with an Application Load Balancer, configured roughly as follows.
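In .ebextensions form, the relevant settings look something like this sketch (placeholder values, not my exact configuration):

option_settings:
  aws:elasticbeanstalk:environment:
    LoadBalancerType: application
  aws:elbv2:listener:443:
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:us-east-1:123456789012:certificate/example   # placeholder ARN
  aws:elasticbeanstalk:environment:process:default:
    Port: '80'
    Protocol: HTTP
    StickinessEnabled: 'true'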

Related

Configure AWS ECS Service Connect

Context
I have been learning AWS Fargate and have already deployed two services in the same cluster. The first service runs a Next.js image listening on port 3000. The other service is an nginx server listening on port 80.
Service Connect setup
I have turned on ECS Service Connect for both services. For the Next.js service I set up the Service Connect endpoint as the following image shows:
Nginx setup
For the Nginx config I used the following file:
server {
    listen 80;

    location / {
        if ($request_uri = "/") {
            add_header Content-Type text/html;
            return 200 "<html><body><h1>It works!</h1></body></html>";
        }
        proxy_pass http://nextjs:3000;
        proxy_set_header Nginx-Message "Message from Nginx";
    }
}
Problem
This setup works on my machine with Docker Compose, and both services deployed to ECS just fine. But when I try to access the nginx server using a route that should be proxied to Next.js, it doesn't work: I get HTTP ERROR 426.
To test this, you can open http://learnecs-nginx-1541224866.us-east-1.elb.amazonaws.com/ and see that nginx is working. But when I open http://learnecs-nginx-1541224866.us-east-1.elb.amazonaws.com/any-route, I get the 426 error.
You can see the repository with the code on GitHub
I have read the AWS documentation and watched some videos, but there isn't much material on the internet about how to use ECS Service Connect.

HTTP 400 Error on AWS ElasticBeanstalk with ALB and Socket.IO

I have an Elastic Beanstalk environment with an Application Load Balancer (ALB) set up on a Node.js instance, which serves both REST APIs and Socket.IO connections.
When there is more than one EC2 instance, some of the Socket.IO connections fail with HTTP 400. The issue goes away when there is only one instance.
I have also tried enabling sticky sessions on the ALB, and have also created a Redis adapter which connects to my Redis instance in ElastiCache.
I am unable to figure out what is causing those HTTP 400 errors.
Socket.IO documentation https://socket.io/docs/v2/using-multiple-nodes/
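For context, that Redis adapter wiring for Socket.IO v2 looks roughly like the sketch below (the ElastiCache endpoint and ports are placeholders, not my actual values):

const http = require("http").createServer();
const io = require("socket.io")(http);
const redisAdapter = require("socket.io-redis");

// Every instance publishes and subscribes through the same Redis node,
// so an event emitted on one instance reaches clients connected to the others.
io.adapter(redisAdapter({ host: "my-cluster.abc123.use1.cache.amazonaws.com", port: 6379 }));

http.listen(8080);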
After two days of research, I found the solution: use the "websocket" transport instead of the default "polling" transport. The default long-polling handshake involves several HTTP requests that all have to reach the same instance, which breaks behind a multi-instance load balancer unless session affinity is working; forcing the WebSocket transport avoids that handshake entirely.
Server:
const io = require('socket.io')(http, {
  cors: {
    origin: '*'
  },
  transports: ['websocket']
})
Client:
import io from 'socket.io-client'

const socket = io('wss://your_url/', {
  transports: ['websocket']
})
Eventually I had to use a Network Load Balancer (NLB), which resolved this issue.

AWS Elastic Beanstalk load balancer is redirecting to HTTPS - does my app still need UseHttpsRedirection() and UseHsts()?

First, let me say that this is the first time I have written an ASP.NET Core 3.1 web app and first time learning AWS with Elastic Beanstalk. So if it seems like I'm confused... it's because I am. ;-)
I have two AWS environments - one is Staging and one is Production. The Staging environment has no SSL certificate and no load balancer. It only listens on port 80.
Production has a load balancer set up with my SSL certificate, and is set up to redirect all port 80 traffic to port 443.
Port 80 = Redirect to https://#{host}:443/#{path}?#{query} (Status code: HTTP_301)
Port 443 = Forward to my-target-group: 1 (100%), Group-level stickiness: Off
When I generated the new web app in VS 2019, I opted in on HTTPS/HSTS by checking "Configure for HTTPS". So it has this in Startup.cs:
if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler("/Home/Error");
    // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
    app.UseHsts();
}
app.UseHttpsRedirection();
I am getting this error in my Windows event log in Staging and Production: “Failed to determine the https port for redirect”
I tried the suggestion from Enforce HTTPS in ASP.NET Core:
services.AddHttpsRedirection(options =>
{
    options.HttpsPort = 443;
});
But that messed up the Staging environment because there's nothing listening on port 443.
Since Staging is only using HTTP, and Production is redirecting to HTTPS at the load balancer, should I just remove the UseHsts() and UseHttpsRedirection() altogether from my Startup? Will that pose any security problems - I do want traffic encrypted over the internet but I don't think it's necessary between the load balancer and the EC2 instance, correct?
Or do I need Forwarded headers, as suggested at Configure ASP.NET Core to work with proxy servers and load balancers?
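For reference, the forwarded-headers middleware that article describes is registered roughly like this (a minimal sketch, not code from my actual Startup):

using Microsoft.AspNetCore.HttpOverrides;

// Trust the X-Forwarded-For / X-Forwarded-Proto headers set by the load balancer
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});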
I do want traffic encrypted over the internet but I don't think it's necessary between the load balancer and the EC2 instance, correct?
Correct. That's how it is usually set up: you have SSL termination at your load balancer (LB), and from the LB to your instances it is regular HTTP traffic:
Client ----(https)----> LB ----(http)----> instances
does my app still need UseHttpsRedirection() and UseHsts()?
No, as your app is only receiving HTTP traffic from the LB.
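Concretely, with TLS terminated at the load balancer, the pipeline from the question can drop both calls; a sketch of the relevant part:

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler("/Home/Error");
    // No UseHsts() or UseHttpsRedirection() here: the ALB already redirects
    // port 80 to 443, and the app only ever sees plain HTTP from the LB.
}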

Connecting to GO websocket server running on AWS Beanstalk

I have a WebSocket server built with Go running on AWS Elastic Beanstalk. I'm running a load balancer with an SSL cert. I'm having issues connecting to it from the browser; if I connect to it from another Go program running in my terminal, everything works fine. I've updated my environment to accept TCP instead of HTTP connections on port 80.
When I try to connect from the web app, though, I get this error.
WebSocket connection to 'wss://root.com/users/fcbd7f8d-2ef6-4fe2-b46c-22db9b107214/sockets/client'
failed: Error during WebSocket handshake: Unexpected response code: 400
When I check the AWS logs I find this error.
the client is not using the websocket protocol:
'websocket' token not found in 'Upgrade' header
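That message comes from the server-side upgrade call; a minimal sketch of the kind of handler that produces it, using gorilla/websocket (an assumption based on the error text; the port and handler details are illustrative, not my actual code):

package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	// Allow any origin for this sketch; tighten in production.
	CheckOrigin: func(r *http.Request) bool { return true },
}

func socketHandler(w http.ResponseWriter, r *http.Request) {
	// Upgrade fails with "'websocket' token not found in 'Upgrade' header"
	// when a proxy or load balancer in front strips the Upgrade/Connection headers.
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()
}

func main() {
	http.HandleFunc("/", socketHandler)
	log.Fatal(http.ListenAndServe(":5000", nil))
}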
UPDATE
If I run the web app on my localhost and change the connection string from wss:// to ws://, it works. If I try the same URL in the live web app, I get this error:
Mixed Content: The page at 'https://root.com/captions' was loaded over HTTPS,
but attempted to connect to the insecure WebSocket endpoint
'ws://root.com/users/fcbd7f8d-2ef6-4fe2-b46c-22db9b107214/sockets/client'.
This request has been blocked; this endpoint must be available over WSS.

AWS EC2 instance won't load Kibana

I am installing the ELK stack on an EC2 instance. I think my install was successful, but I can't load Kibana in my web browser. I think there are issues with my network settings, but I am new to AWS and I am not sure.
When I run
curl localhost:5601
I get
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';
var hash = window.location.hash;
if (hash.length) {
window.location = hashRoute + hash;
} else {
window.location = defaultRoute;
}</script>
When I then run this command against my instance's public IP
curl 174.129.93.100:5601
I get the following, even though I can ping the instance successfully:
curl: (7) Failed to connect to 174.129.93.100 port 5601: Connection refused
I've had this problem for like a week and really need help solving it.
Well, the port in the security group is open, since the response is Connection Refused rather than a timeout. Either the service is not running on the designated port, or it is listening on localhost only.
In the Kibana configuration, change the server host from localhost or 127.0.0.1 to the private IP of the EC2 instance and restart the service.
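In kibana.yml that is a one-line change, something like the following (the path and IP are illustrative; use your own instance's private address):

# /opt/kibana/config/kibana.yml (location may vary by install method)
server.host: "10.0.0.12"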
Check this link: https://www.elastic.co/guide/en/kibana/4.5/kibana-server-properties.html