We have a global buffering middleware rule applied to Traefik v2.9.6 running inside EKS v1.23, as seen here:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: request-limits
spec:
  buffering:
    maxRequestBodyBytes: 10485760
And this is applied via:
additionalArguments:
- --entrypoints.websecure.http.middlewares=traefik-request-limits@kubernetescrd
The 10 MiB limit works, but the expected HTTP 413 response is not returned; instead the response is
"Connection reset by peer (Write failed)"
Is there a way to intercept this response, and generate the expected HTTP response code instead?
The issue had nothing to do with either EKS or Traefik; it was caused by the client making the REST API request into the cluster.
The version of the Java JDK on the client host was causing the connections to be terminated prematurely, before the server-side response, which should have been an HTTP 413.
Testing with curl identified the issue.
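For reference, this is the kind of curl test that surfaces the behavior: a minimal sketch, assuming a hypothetical upload endpoint (the URL is a placeholder, and the `|| true` keeps the sketch runnable without a live cluster). `-w '%{http_code}'` prints only the status code, so a correctly working buffering middleware should print 413:

```shell
# Create a payload just over the 10 MiB limit (11 * 1048576 = 11534336 bytes)
dd if=/dev/zero of=/tmp/payload.bin bs=1048576 count=11 2>/dev/null
# POST it and print only the response status code;
# expect "413" once the middleware responds correctly
# (hypothetical endpoint; || true keeps the sketch runnable offline)
curl -s -o /dev/null -w '%{http_code}\n' \
  -X POST --data-binary @/tmp/payload.bin \
  https://my-cluster.example.com/upload || true
```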
I have two backend web servers, and I need to monitor them using an HTTP check (httpchk) by requesting a URL and looking for a string in the response. If the string is not present, switch the backend to the other server.
Status:
Server1 - Active
Server2 - Backup
Configuration Details:
Health check method: HTTP
HTTP check method: GET
URL used by HTTP check requests: /jsonp/FreeForm&maxrecords=10&format=XML&ff=223
HTTP check version: HTTP/1.0\r\nAccept:\ XS01
The result of the HTTP request is:
{"d":{"__type":"Response","Version":"4.5.23.1160","ResultCode":"XS01","ErrorString":"","Results":[{"__type":"Result",
So I am expecting the string ResultCode":"XS01" in the response from the server. If the string is found, Server1 is up; if not, bring Server2 in from backup.
How can I achieve this in an HAProxy backend health check?
This can be done under Advanced Settings --> Backend Pass thru, using the expect string:
http-check expect string XS01
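In raw HAProxy configuration, the same check might look like the following sketch (server names and addresses are placeholders; the check URL and expect string are taken from the question):

```
backend web_servers
    # Probe the health-check URL with a GET over HTTP/1.0
    option httpchk GET "/jsonp/FreeForm&maxrecords=10&format=XML&ff=223" HTTP/1.0
    # Mark the server up only if the response body contains XS01
    http-check expect string XS01
    # Server2 only receives traffic when Server1 fails its check
    server server1 10.0.0.1:80 check
    server server2 10.0.0.2:80 check backup
```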
Could you please help me understand RequestAuthentication?
When I apply a simple RequestAuthentication and restart the Pod, the Envoy sidecar's ready state is false, and the logs show: warn Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 1 successful, 0 rejected; lds updates: 0 successful, 1 rejected
As soon as I delete the RequestAuthentication and recreate the pod, it works OK.
Istio 1.8.3
apiVersion: 'security.istio.io/v1beta1'
kind: RequestAuthentication
metadata:
  name: jwt-validator
spec:
  selector:
    matchLabels:
      role: api
  jwtRules:
  - issuer: "https://mykeycloak.com/auth/realms/myrealm"
When the proxy is in the LDS stale state, the following log is shown in istiod:
2021-04-10T17:30:53.326877Z warn ads ADS:LDS: ACK ERROR sidecar~10.238.2.69~PODNAME.NS~NS.svc.cluster.local-60 Internal:Error adding/updating listener(s) virtualInbound: Issuer 'MY_JWT_ISSUER_URL' in jwt_authn config has invalid local jwks: Jwks RSA [n] or [e] field is missing or has a parse error
Resolved
The issuer here is not just a string to match in the JWT, but a real URL that must be accessible from istiod, with a valid SSL certificate.
I'm placing this answer for better visibility.
As @Yegor Lopatin mentioned in his edit, the issue was solved by fixing the issuer:
The issuer here is not just a string to match in the JWT, but a real URL that must be accessible from istiod, with a valid SSL certificate.
The issuer must be a valid and accessible link. I thought it was just a string that you compare against when reading the JWT.
e.g.
jwtRules:
- issuer: "testing@secure.istio.io"
  jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.5/security/tools/jwt/samples/jwks.json"
There are tutorials you might refer to when you're setting up JWT with Istio:
https://www.istiobyexample.dev/jwt
https://istio.io/latest/docs/tasks/security/authorization/authz-jwt/
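As a sanity check, the JWKS document the issuer serves must contain the RSA `n` and `e` fields that Envoy complains about in the log above. A minimal sketch using an inline sample document (the key values are placeholders):

```shell
# A well-formed RSA JWKS entry carries "n" (modulus) and "e" (exponent);
# istiod rejects the listener update when either is missing or unparsable.
jwks='{"keys":[{"kty":"RSA","kid":"example-kid","n":"sample-modulus","e":"AQAB"}]}'
if echo "$jwks" | grep -q '"n":' && echo "$jwks" | grep -q '"e":'; then
  echo "JWKS has RSA n/e fields"
fi
```

Against a live issuer, the same check could be run on the output of `curl` against the issuer's JWKS endpoint instead of the inline sample.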
I'm getting a strange 400 error when I try to connect to an Istio Gateway behind an AWS load balancer.
I don't see any activity in the istio-ingressgateway logs (even with debug settings on), but when I run sysdig on the ingressgateway pod, I see weird semi-random text, often with "QUIT !T" in it.
I get this when I try to make an http request via a browser or curl from outside the cluster.
The same Istio configuration works when I try to make the request in minikube or in Azure.
I'm also able to point the same AWS LB at an Nginx ingress controller and it works just fine.
sudo sysdig -s2000 -A -c echo_fds fd.ip=10.1.2.3
Sometimes there is no GET request in the output
------ Read 100B from 10.1.1.3:44404->10.1.2.3:80 (envoy)
QUIT
!T
C
ct>
------ Write 66B to 10.1.1.3:44404->10.1.2.3:80 (envoy)
HTTP/1.1 400 Bad Request
content-length: 0
connection: close
And, sometimes this happens right before the GET request
------ Read 3.39KB from 10.1.1.3:35430->10.1.2.3:80 (envoy)
QUIT
!T
C
atfI>GET /myapp/ HTTP/1.1
I'm wondering if the weird characters are causing the envoy routes not to match, but I have no idea where this could be coming from.
Any advice as to what this might be?
Any general strategies for debugging Istio ingress?
Any help is much appreciated.
So I found the answer to this question. The garbage in the request was a red herring.
A little more info about the setup:
The AWS load balancer was terminating TLS, so all the traffic behind it was going over HTTP on port 31380. I was already passing the X-Forwarded-Proto: https header via the Istio VirtualService settings, so the backend applications could assume the traffic had already had TLS terminated.
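For illustration, setting that header in a VirtualService might look like the following sketch (names, hosts, and the destination are placeholders, not the poster's actual config):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "example.com"
  gateways:
  - myapp-gateway
  http:
  - headers:
      request:
        set:
          # Tell backends the original client connection was HTTPS,
          # since the AWS LB terminated TLS before reaching the mesh
          X-Forwarded-Proto: https
    route:
    - destination:
        host: myapp
```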
The solution:
The issue in my case was that the AWS Target Group had Proxy protocol v2 enabled. Turning this off solved the issue.
I have a Django app which returns a large JSON response when calling an API.
The problem is that when I request the data, the response is truncated, which crashes the frontend.
I'm using Cloudflare for DNS, SSL, and other features they provide for caching and improved performance.
I tried curling the API and got the following error from curl:
curl: (92) HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
I tried disabling Cloudflare, but that didn't work. On my localhost, however, everything works fine.
HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
Closing connection 0
TLSv1.2 (OUT), TLS alert, Client hello (1): curl: (92) HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
The JSON should be fetched in its entirety, without being truncated.
I got the same error with an application behind an AWS Application Load Balancer, using command:
curl "https://console.aws.example/api/xxx" -b "SESSION=$SESSION"
15:14:30 curl: (92) HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR (err 1)
I had to force the use of HTTP/1.1 with the --http1.1 argument.
So the final command is:
curl "https://console.aws.example/api/xxx" -b "SESSION=$SESSION" --http1.1
I had this issue with AWS's Application Load Balancer (ALB). The problem was that I had Apache configured to use http2, but behind an ALB. The ALB supports http2 by default:
Application Load Balancers provide native support for HTTP/2 with HTTPS listeners. You can send up to 128 requests in parallel using one HTTP/2 connection. The load balancer converts these to individual HTTP/1.1 requests and distributes them across the healthy targets in the target group. Because HTTP/2 uses front-end connections more efficiently, you might notice fewer connections between clients and the load balancer. You can’t use the server-push feature of HTTP/2.
So, curl was using HTTP/2 to connect with the ALB, which was then converting it into an HTTP/1 request. Apache was adding headers to the response asking the client to Upgrade to HTTP/2, which the ALB just passed back to the client, and curl read them as invalid since it was already using an HTTP/2 connection. I solved the problem by disabling HTTP/2 on my Apache instance. Since it will always be behind an ALB, and the ALB is never going to make use of HTTP/2, there is no point in having it.
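Concretely, disabling HTTP/2 in Apache 2.4.17+ is a one-line change in the server (or virtual host) configuration; this is a sketch of the relevant directive, not the poster's full config:

```
# Serve only HTTP/1.1 behind the ALB; the ALB downgrades HTTP/2
# to HTTP/1.1 anyway, so advertising h2 here only confuses clients
Protocols http/1.1
```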
Fix or remove the Content-Length header in your HTTP request.
I was trying to connect to an AWS Gateway when this issue occurred. I was able to get the correct response using Postman, but if I copied the same headers over to curl, it would give me this error.
What finally worked for me was removing the Content-Length header, as the length of the request in curl didn't match what it was in Postman.
Since I was only testing the API, this was fine, but I wouldn't suggest removing this header in production. If this occurs in a codebase, check that the length is calculated correctly.
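To illustrate the mismatch, the correct Content-Length is simply the byte count of the actual payload; a hand-copied header that disagrees with it can trigger this error. A small sketch (the JSON body is a made-up example; curl computes this header automatically when you don't override it):

```shell
body='{"name":"test"}'
# The correct Content-Length is the byte count of the payload as sent;
# curl derives it from --data/--data-binary when you don't set it yourself
printf 'Content-Length would be: %d\n' "$(printf '%s' "$body" | wc -c)"
```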
With Nginx, you can experience this error in curl by having two http2 virtual hosts listening on the same server name. Running a check on your nginx config file will throw a warning letting you know that something isn't right. Fixing/removing the duplicate listing fixes this problem.
# nginx -t
nginx: [warn] conflicting server name "example.com" on 0.0.0.0:443, ignored
nginx: [warn] conflicting server name "example.com" on [::]:443, ignored
Our integration partner was using our web service over http on port 8090, and now we are moving to https on port 8443, so they tried to update the WS URL, but they are getting a "handshake error". They are asking whether they can still use http on 8090. If we route any traffic coming in on http 8090 to https 8443 in the web server config, will they still get the handshake error?
When you create a redirect, the server sends an HTTP 302 which the client is obligated to follow, which means that they should still get the error, since the redirected request still has to complete the TLS handshake. Depending on your setup and config, they may be able to send the request anyway, but if that works, then all your traffic is potentially insecure...
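For reference, such a redirect might look like the following nginx sketch (assuming an nginx front end, which the question doesn't specify); note it only moves the client to the HTTPS listener, where the same handshake must still succeed:

```
# Redirect plain-HTTP clients on 8090 to the HTTPS listener on 8443
server {
    listen 8090;
    return 302 https://$host:8443$request_uri;
}
```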