My application sits behind an AWS Elastic Load Balancer (Application Load Balancer) and an nginx server. When trying to connect to Socket.IO from Chrome, I receive a "WebSocket is closed before the connection is established" message and a 400 error. This does not seem to occur when connecting directly through nginx. Additionally, I have found that the error does not occur when using Firefox. Is there a setting I am missing on the load balancer? I have tried sticky sessions and similar options, but that does not seem to resolve the issue.
I ended up figuring this out. After adding the following to my nginx config it seemed to work consistently: proxy_set_header Origin "";
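For context, this is roughly where that directive goes; a minimal sketch of a Socket.IO proxy location block, where the upstream address and path are assumptions and only the Origin line comes from the fix above:

location /socket.io/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    # allow the WebSocket upgrade through the proxy
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    # the fix above: strip the Origin header before proxying
    proxy_set_header Origin "";
}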
I have an Express.js client app and an Express.js server app with almost the same Istio configuration. The client cannot be accessed through its host URL, while the server works fine. Curling the client host URL just hangs indefinitely, and I cannot find any related traffic log in the istio-proxy of the client pod. This is very confusing. What could be the possible reason for this problem?
Running istioctl analyze on the live cluster does not give any helpful information.
We are using QSslServer to accept HTTPS connections in the form of QSslSockets, and it has worked for years on Windows, Mac, Ubuntu, and in Android's Chrome and Firefox.
To our surprise, the website connection does not happen when we use BrowserStack's mobile devices, which are supposedly not emulators. Our URL looks like https://website.in:2000, so it's not on port 443 or 80.
The web page doesn't open in Chrome on Android 9, 10, 11, or 12.
No errors are seen with sslErrors(). Even calling ignoreSslErrors() didn't help.
After adding logging, we found that although the connection is established, QSslSocket::readyRead() is never emitted, whereas it is emitted on our normal devices.
How can we resolve this problem?
The following were causing the issues:
URL with a non-standard port. We are using :2000 to host our website, and SSL authentication after the initial connection does not work on those devices. If we route our website through :443, it starts working (one way to do that is sketched after this list).
Let's Encrypt certificate. We had faced a similar issue on Mac in the past, where the certificate generated via LetsEncrypt.org had to be explicitly accepted in the Mac certificate store. A similar thing appears to be happening here: after the website opens, certain images don't show up, probably for the same reason.
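As a hedged sketch of the port change (not from the original answer): if the Qt server has to keep listening on 2000, incoming traffic on 443 can be redirected to it at the TCP level, so the application still terminates TLS itself.

# redirect external connections arriving on 443 to the existing listener on 2000
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 2000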
So, I am trying to build an API server and test it with Postman. Somehow it doesn't want to start. I know I have a crap computer, but it can handle this much.
So, back to the problem. When I try GET localhost:3000/users?id=hiddenIDsoyoucanthackme in the browser, it says this:
Hmmm… can't reach this page: localhost refused to connect.
Try:
Search the web for localhost
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
Back to the problem: the same thing happens in Postman.
Error: connect ECONNREFUSED 127.0.0.1:3000
Help.
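For what it's worth, ERR_CONNECTION_REFUSED / ECONNREFUSED on 127.0.0.1:3000 means nothing is listening on that port, i.e. the server process never started or exited on startup. Below is a minimal sketch of an Express server that would answer that request; the /users route and its response are assumptions based only on the URL in the question.

// server.js - minimal sketch; run with `node server.js` and keep it running
const express = require('express');
const app = express();

// hypothetical /users route matching the GET request in the question
app.get('/users', (req, res) => {
  res.json({ id: req.query.id });
});

// the request can only succeed while this process is running and listening
app.listen(3000, () => console.log('API listening on http://localhost:3000'));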
I have a Django app which returns a large JSON response from one of its API endpoints.
The problem is that when I request the data, the response is truncated, which crashes the frontend.
I'm using Cloudflare for DNS, SSL, and the other features it provides for caching and improved performance.
I tried curling the API and got the following error from curl:
curl: (92) HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
I tried disabling Cloudflare, but that didn't work. On localhost, however, everything works fine. The tail of the verbose curl output is:
HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
Closing connection 0
TLSv1.2 (OUT), TLS alert, Client hello (1): curl: (92) HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
The JSON should be fetched in its entirety, without being truncated.
I got the same error with an application behind an AWS Application Load Balancer, using this command:
curl "https://console.aws.example/api/xxx" -b "SESSION=$SESSION"
15:14:30 curl: (92) HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR (err 1)
I had to force the use of HTTP/1.1 with the --http1.1 flag.
So the final command is:
curl "https://console.aws.example/api/xxx" -b "SESSION=$SESSION" --http1.1
I had this issue with AWS's Application Load Balancer (ALB). The problem was that I had Apache configured to use HTTP/2, but it was sitting behind an ALB. The ALB supports HTTP/2 by default:
Application Load Balancers provide native support for HTTP/2 with HTTPS listeners. You can send up to 128 requests in parallel using one HTTP/2 connection. The load balancer converts these to individual HTTP/1.1 requests and distributes them across the healthy targets in the target group. Because HTTP/2 uses front-end connections more efficiently, you might notice fewer connections between clients and the load balancer. You can’t use the server-push feature of HTTP/2.
So, curl was using HTTP/2 to connect to the ALB, which then converted it into an HTTP/1.1 request. Apache was adding headers to the response asking the client to upgrade to HTTP/2, which the ALB just passed back to the client, and curl read them as invalid since it was already on an HTTP/2 connection. I solved the problem by disabling HTTP/2 on my Apache instance. Since it will always sit behind an ALB, and the ALB is never going to use HTTP/2 toward the backend, there is no point in having it.
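For reference, a rough sketch of how disabling HTTP/2 can look on a Debian-style Apache install (the commands and paths below are assumptions about the setup, not from the original answer):

# Debian/Ubuntu: disable the HTTP/2 module entirely, then restart Apache
sudo a2dismod http2
sudo systemctl restart apache2

# Alternatively, drop h2 from the Protocols directive in the vhost config:
#   Protocols h2 http/1.1    ->    Protocols http/1.1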
Fix or remove the Content-Length header in your HTTP request.
I was trying to connect to an AWS gateway when this issue occurred. I was able to get the correct response using Postman, but if I copied the same headers over to curl, it gave me this error.
What finally worked for me was removing the Content-Length header, since the length of the request in curl didn't match what it was in Postman.
Since in my case I was only testing the API, this is fine, but I wouldn't suggest removing this header in production. If this is happening to you in a codebase, check that the length is calculated correctly.
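As a sketch of what that looks like with curl (the URL, body, and headers are placeholders): the simplest fix is not to copy Content-Length at all and let curl compute it from the body; curl can also be told to drop its auto-generated header by passing the header name with nothing after the colon.

# let curl compute Content-Length from the body instead of copying a stale value
curl "https://api.example.com/endpoint" \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"key":"value"}'

# or strip the header entirely (a value-less "Header:" removes curl's own copy)
curl "https://api.example.com/endpoint" -X POST -d '{"key":"value"}' -H "Content-Length:"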
With nginx, you can run into this error in curl when two http2 virtual hosts listen on the same server name. Running a check on your nginx config file will print a warning letting you know that something isn't right. Fixing or removing the duplicate listing resolves the problem.
# nginx -t
nginx: [warn] conflicting server name "example.com" on 0.0.0.0:443, ignored
nginx: [warn] conflicting server name "example.com" on [::]:443, ignored
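For illustration, a sketch of the kind of duplicate definition that triggers those warnings; the server names and certificate paths are placeholders:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;
    # ...
}

# a second block with the same server_name on the same port, elsewhere in the
# config, is ignored by nginx and produces the "conflicting server name" warning
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;
    # ...
}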
I have a Node.js app running in Elastic Beanstalk, with an Elastic Load Balancer and SSL set up on it.
I use Socket.IO in the Node.js app. I had trouble accessing it because of the nginx configuration, which I fixed using .ebextensions.
Now, when I access Socket.IO over my HTTP URL, it works fine and uses the WebSocket transport. But when I use HTTPS, it falls back to polling. How can I fix this?
Is there some configuration I have to do to make it work over HTTPS?
Update: After adding {transports: ['websocket'], upgrade: false}, it sends only WebSocket requests, but I still get this error:
WebSocket connection to 'wss://myurl.ca/socket.io/?EIO=3&transport=websocket' failed: WebSocket is closed before the connection is established.
The problem was with the certificates. The HTTPS certificate was set up for www.myurl.ca, while the WebSocket calls went to the bare domain, wss://myurl.ca, without the www.
It was a minor mistake, but changing the certificate to cover myurl.ca solved the issue.
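A quick way to spot this kind of mismatch is to check which names the served certificate actually covers and compare them against the host in the wss:// URL (a diagnostic sketch; the -ext flag needs OpenSSL 1.1.1+):

# print the subject and Subject Alternative Names of the certificate served on 443
echo | openssl s_client -connect myurl.ca:443 -servername myurl.ca 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName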