Istio RequestAuthentication blocks Envoy sidecar's Ready status

Could you please help me understand RequestAuthentication?
When I apply a simple RequestAuthentication and restart the Pod, the Envoy sidecar's ready state is false and the logs show the warning: Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 1 successful, 0 rejected; lds updates: 0 successful, 1 rejected
As soon as I delete the RequestAuthentication and recreate the pod, everything works OK.
Istio 1.8.3
apiVersion: 'security.istio.io/v1beta1'
kind: RequestAuthentication
metadata:
  name: jwt-validator
spec:
  selector:
    matchLabels:
      role: api
  jwtRules:
  - issuer: "https://mykeycloak.com/auth/realms/myrealm"
When the proxy is in this stale LDS state, the following log is shown in istiod:
2021-04-10T17:30:53.326877Z warn ads ADS:LDS: ACK ERROR sidecar~10.238.2.69~PODNAME.NS~NS.svc.cluster.local-60 Internal:Error adding/updating listener(s) virtualInbound: Issuer 'MY_JWT_ISSUER_URL' in jwt_authn config has invalid local jwks: Jwks RSA [n] or [e] field is missing or has a parse error
Resolved
The issuer here is not just a string to match against the JWT; it is a real URL that must be reachable from istiod and must serve a valid SSL certificate.

I'm placing this answer for better visibility.
As @Yegor Lopatin mentioned in the edit, the issue was solved by fixing the issuer:
The issuer here is not just a string to match against the JWT; it is a real URL that must be reachable from istiod and must serve a valid SSL certificate.
The issuer must be a valid, accessible URL. I had thought it was just a string that gets compared against the one in the JWT.
e.g.
jwtRules:
- issuer: "testing@secure.istio.io"
  jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.5/security/tools/jwt/samples/jwks.json"
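With the selector from the original resource, a corrected version might look like the sketch below. The jwksUri value is an assumption based on Keycloak's usual per-realm JWKS path, and it must be reachable from istiod over HTTPS with a valid certificate:
apiVersion: 'security.istio.io/v1beta1'
kind: RequestAuthentication
metadata:
  name: jwt-validator
spec:
  selector:
    matchLabels:
      role: api
  jwtRules:
  - issuer: "https://mykeycloak.com/auth/realms/myrealm"
    # Assumed Keycloak JWKS endpoint for the realm; with jwksUri set explicitly,
    # istiod fetches the signing keys from this URL instead of resolving the issuer.
    jwksUri: "https://mykeycloak.com/auth/realms/myrealm/protocol/openid-connect/certs"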
There are tutorials you might refer to when setting up JWT with Istio:
https://www.istiobyexample.dev/jwt
https://istio.io/latest/docs/tasks/security/authorization/authz-jwt/

Related

Traefik Middleware Buffering Response Code

We have a global buffering middleware rule applied to Traefik v2.9.6 running inside EKS v1.23, as seen here:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: request-limits
spec:
  buffering:
    maxRequestBodyBytes: 10485760
And this is applied via:
additionalArguments:
- --entrypoints.websecure.http.middlewares=traefik-request-limits@kubernetescrd
The 10 MiB limit works, but the expected HTTP 413 response is not returned; instead the response is
"Connection reset by peer (Write failed)"
Is there a way to intercept this response, and generate the expected HTTP response code instead?
The issue had nothing to do with either EKS or Traefik; it had to do with the client making the REST API request into the cluster.
The version of the Java JDK on the client host was causing connections to be terminated prematurely, before the server-side response, which should have been an HTTP 413, could be read.
Testing with curl identified the issue.

istio 1.4.8: strange 400 error when used with AWS Load balancer

I'm getting a strange 400 error when I try to connect to an Istio Gateway when behind an AWS load balancer.
I don't see any activity in the istio-ingressgateway logs (even with debug settings on), but when I run sysdig on the ingressgateway pod, I see weird semi-random text, often with "QUIT !T" in it.
I get this when I try to make an http request via a browser or curl from outside the cluster.
The same Istio configuration works when I try to make the request in minikube or in Azure.
I'm also able to use the same AWS lb to point to a Nginx ingress controller and it works just fine.
sudo sysdig -s2000 -A -c echo_fds fd.ip=10.1.2.3
Sometimes there is no GET request in the output
------ Read 100B from 10.1.1.3:44404->10.1.2.3:80 (envoy)
QUIT
!T
C
ct>
------ Write 66B to 10.1.1.3:44404->10.1.2.3:80 (envoy)
HTTP/1.1 400 Bad Request
content-length: 0
connection: close
And, sometimes this happens right before the GET request
------ Read 3.39KB from 10.1.1.3:35430->10.1.2.3:80 (envoy)
QUIT
!T
C
atfI>GET /myapp/ HTTP/1.1
I'm wondering if the weird characters are causing the envoy routes not to match, but I have no idea where this could be coming from.
Any advice as to what this might be?
Any general strategies for debugging Istio ingress?
Any help is much appreciated.
So I found the answer to this question. The garbage in the request was a red herring.
A little more info about the setup:
The AWS load balancer was terminating TLS, so all the traffic behind it was going over plain HTTP on port 31380. I was already passing the X-Forwarded-Proto: https header via the Istio VirtualService settings, so the backend applications could assume the traffic was already TLS-terminated.
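For reference, setting that header via a VirtualService looks roughly like the sketch below; the resource name, gateway, and destination are placeholders rather than the actual config from my cluster:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp              # placeholder
spec:
  hosts:
  - "*"
  gateways:
  - myapp-gateway          # placeholder
  http:
  - headers:
      request:
        set:
          # tell backends the original request was HTTPS, since the AWS LB
          # already terminated TLS and forwards plain HTTP to the gateway
          x-forwarded-proto: https
    route:
    - destination:
        host: myapp        # placeholder service
        port:
          number: 80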
The solution:
The issue in my case was that the AWS target group had "Proxy protocol v2" enabled. Turning this off solved the issue.
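As an aside, if the load balancer had been created by a Kubernetes LoadBalancer Service instead of by hand, proxy protocol is usually toggled with a Service annotation; a sketch (names and ports are placeholders) would be:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway     # placeholder
  annotations:
    # asks AWS to prepend PROXY protocol headers; Envoy treats them as garbage
    # unless it is explicitly configured to expect proxy protocol
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway
  ports:
  - name: http
    port: 80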

CUPS/IPP over HTTPS via CF/Gorouter - TLS handshake error

I want to print PostScript files via CUPS over HTTPS on Cloud Foundry.
It works when I use HTTP but fails for HTTPS, with Gorouter logging:
http: TLS handshake error from ...
My cipher_suites:
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256:TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384:TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA:TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256:TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384:TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
I tried to set router.logging_level to debug (default is info) but it changes nothing...
Is there any chance to get more information?
What is the most detailed log level for gorouter?
I solved my problem.
In my case, mutual TLS was enabled on Gorouter:
By default, Gorouter requests but does not require client certificates in TLS handshakes.
https://docs.cloudfoundry.org/adminguide/securing-traffic.html#gorouter_mutual_auth
Checking if mTLS is enabled
1. Windows SCHANNEL event logging
Add a registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL
EventLogging REG_DWORD = 3
https://blogs.technet.microsoft.com/kevinjustin/2017/11/08/schannel-event-logging/
Now you should see event log entries showing that the server asks for a client certificate but none can be found.
2. curl
Note the line showing that no client certificate is found:
curl -I -v -H "Connection: close" https://your-app.cloud
About to connect() to your-app.cloud port 443 (#0)
Connected to your-app.cloud port 443 (#0)
Initializing NSS with certpath: sql:/etc/pki/nssdb
CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none
NSS: client certificate not found (nickname not specified)
SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
3. openssl
Note the lines showing the server requesting a client certificate:
openssl s_client -connect your-app.cloud:443 -state
CONNECTED(00000003)
SSL_connect:before/connect initialization
SSL_connect:SSLv2/v3 write client hello A
SSL_connect:SSLv3 read server hello A
...
verify return:1
SSL_connect:SSLv3 read server certificate A
SSL_connect:SSLv3 read server key exchange A
SSL_connect:SSLv3 read server certificate request A
SSL_connect:SSLv3 read server done A
SSL_connect:SSLv3 write client certificate A
SSL_connect:SSLv3 write client key exchange A
SSL_connect:SSLv3 write change cipher spec A
SSL_connect:SSLv3 write finished A
SSL_connect:SSLv3 flush data
SSL_connect:SSLv3 read server session ticket A
SSL_connect:SSLv3 read finished A
Disable Gorouter mTLS
Change the Gorouter properties in the CF deployment manifest:
- name: router
  jobs:
  - name: gorouter
    release: routing
    properties:
      router:
        forwarded_client_cert: always_forward
        client_cert_validation: none
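If you deploy with cf-deployment and ops files, the same change can be expressed as an ops file; the instance group and job names below are assumptions based on the standard cf-deployment layout:
# hypothetical ops file, applied with bosh deploy -o <this-file>
- type: replace
  path: /instance_groups/name=router/jobs/name=gorouter/properties/router/forwarded_client_cert?
  value: always_forward
- type: replace
  path: /instance_groups/name=router/jobs/name=gorouter/properties/router/client_cert_validation?
  value: none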
Now you can repeat the checks above to confirm that client certificates are no longer requested.
Note that these settings did not work with routing release 0.164.0, but with 0.178.0 they work as expected.

WSO2 API Manager Proxy Configuration

I have configured the Open Weather API in API Manager (version 1.6.0). Steps:
Add API
Name: weather
Context: /weather
Version: v1
Tier Availability: All
Transports: HTTP & HTTPS
Endpoint Type: HTTP endpoint
Production Endpoint: http://api.openweathermap.org/data/2.5/weather
==> At this point, if I click "Test", I get an "Invalid" error.
I still go ahead and save and publish the API.
In Store, subscribe to the API and try to run in REST Client:
http://localhost:8280/weather/v1
Authorization: xxxx
Error response is seen after a while:
<am:fault xmlns:am="http://wso2.org/apimanager">
  <am:code>101503</am:code>
  <am:type>Status report</am:type>
  <am:message>Runtime Error</am:message>
  <am:description>Error connecting to the back end</am:description>
</am:fault>
Error seen on the console:
[2014-05-22 14:11:39,067] WARN - ConnectCallback Connection refused or failed for : api.openweathermap.org/162.243.44.32:80
[2014-05-22 14:11:39,093] INFO - LogMediator STATUS = Executing default 'fault' sequence, ERROR_CODE = 101503, ERROR_MESSAGE = Error connecting to the back end
I am running the AM behind a proxy. I assume AM needs to be told to go through the proxy when connecting to external URLs.
I have tried the option below:
When starting the server, use the command:
wso2server.bat -Dhttp.proxyHost= -Dhttp.proxyPort=8085 start
With this, I am unable to log in to the Publisher or Store. When I click the Login button, nothing happens.
How do I configure the proxy server in AM so that AM uses it to connect to external URLs?
You can set the proxy host (http.proxyHost) and port number in the axis2.xml file:
$WSO2APU_HOME/repository/conf/axis2/axis2.xml
Note: you must set http.proxyHost=your.internet.proxy.com; do not leave it empty.

Applying encryption on email.yml / configuration.yml at Bitnami Redmine?

I'm a newbie with Redmine (my version is 1.4.4).
Is there any way to encrypt my email password in the configuration.yml file that sets up email delivery?
FYI, the current setting looks like this:
email_delivery:
  delivery_method: :smtp
  smtp_settings:
    tls: true
    address: smtp.gmail.com
    port: 587
    domain: smtp.gmail.com
    authentication: :plain
    user_name: "myEmailAddress@gmail.com"
    password: "myEmailPassword" # I don't want to reveal my pw here :(
    enable_starttls_auto: true
Thx in advance.
Yes, you can, but the only way I know of is to install and use Postfix as the relay and set up stunnel to handle the encrypted password exchange.
I had my system set up that way for a while, but now I use Postfix without stunnel because the Amazon SES service now supports STARTTLS.
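With that approach, configuration.yml only needs to point at the local relay, so no password has to be stored in it; a minimal sketch, with assumed values, might look like:
email_delivery:
  delivery_method: :smtp
  smtp_settings:
    address: localhost   # local Postfix relay; it holds the smarthost credentials
    port: 25
    domain: example.com  # assumed domain
    # no user_name/password here; authentication happens between Postfix and the smarthost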
I had help from this site in my original setup:
http://quietmint.com/linux/postfix-relaying-mail-through-an-smtps-smarthost-on-port-465/
For my latest setup, I got help from here:
http://blog.duoconsulting.com/2012/01/30/using-amazons-ses-with-postfix-as-a-smarthost-forwarder/