I'm trying to understand how mTLS is implemented in Istio and came up with the below scenario.
In my setup I have a namespace foo with two pods as below:
NAME READY STATUS RESTARTS AGE
httpbin-75b47445c9-gscrn 2/2 Running 0 1d
sleep-6777b55c98-tlqb6 2/2 Running 0 1d
My requirement is to retrieve the public certificate of httpbin from sleep (just for testing purposes).
So I get an interactive shell inside sleep and execute the below command.
curl --insecure -v http://httpbin.foo:8000/ip 2>&1 | awk 'BEGIN { cert=0 } /^\* SSL connection/ { cert=1 } /^\*/ { if (cert) print }'
But I'm not getting any output from it.
However, if I replace http://httpbin.foo:8000/ip with http://google.com I can get the certificate details correctly.
Can you please explain what is happening here?
Note that when you run curl http://httpbin.foo:8000 from an Istio-injected pod, the following happens:
Your HTTP request arrives at the Istio sidecar proxy of your pod (sleep)
The sidecar proxy encapsulates your request in an mTLS connection to httpbin.foo:8000
The sidecar proxy of httpbin.foo performs TLS termination (decapsulates the original HTTP request) and forwards it to the httpbin.foo service
The httpbin.foo service receives the original, plain HTTP request
The httpbin.foo service sends a plain HTTP response
The sidecar proxy of httpbin.foo encapsulates the response and returns the response to your sidecar proxy
Your sidecar proxy returns the response back to your curl
Note that your curl will get the plain HTTP response of the httpbin.foo service, which is unaware of Istio mTLS. Istio mTLS acts like a tunnel for the communication between your curl and httpbin.foo; the certificate of that tunnel is never returned to your curl.
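If the goal is just to confirm that the sidecars really do use mTLS inside that tunnel, one common check (a sketch, assuming the standard httpbin sample, whose /headers endpoint echoes the request headers it receives) is to look for the X-Forwarded-Client-Cert header that the receiving sidecar adds:
kubectl exec -it $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}) -n foo -c sleep -- curl -s http://httpbin.foo:8000/headers
With mTLS enabled, the JSON response should contain an "X-Forwarded-Client-Cert" entry; with plain HTTP between the sidecars it is absent.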
To get the certificate of the sidecar proxy of httpbin.foo, you need to send a request to the httpbin.foo service directly (it will arrive at the sidecar proxy of httpbin.foo), bypassing the Istio sidecar proxy of your source pod (sleep).
For that you can deploy the sleep pod to some namespace without Istio injection, and then use openssl to retrieve the certificate:
kubectl create namespace without-istio
kubectl apply -f samples/sleep/sleep.yaml -n without-istio
kubectl exec -it $(kubectl get pod -l app=sleep -n without-istio -o jsonpath={.items..metadata.name}) -n without-istio -c sleep -- openssl s_client -connect httpbin.foo:8000
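If you want the certificate fields printed in a readable form, you can pipe the s_client output through openssl x509 (a sketch; the handshake itself may fail because plain openssl does not present an Istio client certificate, but the server certificate is still sent and can be inspected):
kubectl exec -it $(kubectl get pod -l app=sleep -n without-istio -o jsonpath={.items..metadata.name}) -n without-istio -c sleep -- sh -c 'openssl s_client -showcerts -connect httpbin.foo:8000 </dev/null 2>/dev/null | openssl x509 -noout -text'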
Related
I was asked by the DevOps team of my company to test whether I had access to the instance where the tool is hosted, to the bastion host, to the data, and to the tool deployment process (API and frontend) before modifying it. So I wanted to test whether I could access the API locally, following these instructions from the doc:
Access API locally (on API example)
$ ssh -L<local-port>:<stage-or-prod-instance-ip>:<host-port> ubuntu@<stage-or-prod-bastion-ip>
where:
<local-port> - local port
<stage-or-prod-instance-ip> - IP of the EC2 instance where the API (or other service) is deployed. Check the IP in Elastic Container Service using the AWS console.
<host-port> - host port (dynamically assigned - check in the AWS ECS console)
<stage-or-prod-bastion-ip> - stage or prod bastion IP
e.g.:
$ ssh -L3000:ha.ha.he.he:32980 ubuntu@hi.hi.ho.ho
get a JWT token from utils/get_token.sh (the token is automatically exported as the TOKEN env var)
send a request to the API (or other service) endpoint using curl:
$ curl -XPOST -H "Authorization: Bearer $TOKEN" localhost:5000/api/data/studies-metadata
I set up the tunnel to the API (or other service) instance using the same address as the one given as an example in the doc, and this might be the whole problem: maybe this address is not the one where the API is deployed, but I don't know how I can find the other addresses in the Elastic Container Registry. So I did:
ssh -L3000:ha.ha.he.he:32980 ubuntu@hi.hi.ho.ho
With the real ha.ha.he.he and hi.hi.ho.ho from the doc (I put this mockup text just because I am not sure whether it is sensitive information).
Then I got the token with:
export TOKEN=$(aws secretsmanager get-secret-value \
--secret-id dev/api/token \
--query SecretString \
--output text | jq ".TOKEN" -r)
echo $TOKEN
And lastly I tried port 5000 on localhost:
ubuntu@ip-10-0-0-238:~$ curl -XPOST -H "Authorization: Bearer $TOKEN" localhost:5000/api/data/studies-metadata
curl: (7) Failed to connect to localhost port 5000: Connection refused
So I guessed it wasn't the right port? I tried to look at what was listening on this machine:
ubuntu@ip-10-0-0-238:~$ sudo netstat -anp | grep -i tcp | grep -i listen
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 251314/systemd-reso
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 658/sshd: /usr/sbin
tcp6 0 0 :::22 :::* LISTEN 658/sshd: /usr/sbin
Fine, but none of these looks like the /api/data/studies-metadata I was looking for ...
So I am thinking I am not on the right machine, and I thought about looking for <stage-or-prod-instance-ip>, the IP of the EC2 instance where the API (or other service) is deployed, by checking the IP in Elastic Container Service using the AWS console.
I did this query within the instance I connected to via ssh:
ubuntu@ip-10-0-0-238:~$ aws ec2 describe-instances --query "Reservations[].Instances[].PrivateIpAddress" --output text
10.0.0.238 10.0.1.238
So I guess I connected to the wrong instance? How can I connect to the other one?
I am a true beginner in AWS.
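From what I understand, something like this might let me trace the API task back to its EC2 instance and host port (all <...> values are placeholders for names I would have to look up, and I am not sure this is the right approach):
aws ecs list-clusters
aws ecs list-tasks --cluster <cluster-name> --service-name <api-service-name>
aws ecs describe-tasks --cluster <cluster-name> --tasks <task-arn>
aws ecs describe-container-instances --cluster <cluster-name> --container-instances <container-instance-arn> --query "containerInstances[].ec2InstanceId"
aws ec2 describe-instances --instance-ids <ec2-instance-id> --query "Reservations[].Instances[].PrivateIpAddress" --output text
The networkBindings in the describe-tasks output should show the dynamically assigned <host-port>, and the last command should give the private IP to use as <stage-or-prod-instance-ip>.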
I am trying to register a Zabbix agent with the Zabbix server but I am facing this error in the Zabbix server UI: [Received empty response from Zabbix Agent at [XX.XXX.XX.XX]. Assuming that agent dropped connection because of access permissions.]
I have an ELB on top of the Zabbix server and I am using the ELB DNS name in the Zabbix agent conf file. Registration seems to happen, but the agent is not shown as active under [Availability].
**conf file**
Server=<elb endpoint>
ServerActive=<elb endpoint>
Any lead would be appreciated.
This error comes from the agent's allowed hosts check. It often happens when you are running the Zabbix server in Docker and the agent on the host system at the same IP.
Check the log file first:
$ cat /var/log/zabbix/zabbix_agentd.log | grep connection
Now compare the incoming connection (connection from "SOME_IP") with the allowed hosts ("SOME_IPs").
Example output:
failed to accept an incoming connection: connection from "172.17.0.2" rejected, allowed hosts: "127.0.0.1"
This is your problem: the connection from 172.17.0.2 (Docker) is not allowed in your zabbix_agentd.conf. You will have to edit /etc/zabbix/zabbix_agentd.conf like this:
Server=::ffff:127.0.0.1,172.17.0.2
ServerActive=::ffff:127.0.0.1,172.17.0.2
The last step is to restart the agent:
$ sudo systemctl restart zabbix-agent
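To verify the fix from the server side, a quick check (a sketch; it assumes the zabbix-get utility is installed and that the agent listens on the default port 10050):
$ zabbix_get -s 127.0.0.1 -p 10050 -k agent.ping
It should print 1 once the agent accepts the connection; a permission error here means the Server= list still does not match the source IP.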
Why does using kubectl with impersonation --as= result in "The connection to the server localhost:8080 was refused" on a host with only the default service account configured?
I have downloaded kubectl to a host with only the default service account configured. If I try to impersonate any user, e.g. system:anonymous, the following error message is returned: "The connection to the server localhost:8080 was refused".
I can resolve the issue by starting a local proxy using kubectl proxy --port=8080, however, I would like to avoid this.
Why does kubectl try to connect to localhost:8080, when using impersonation (--as=)?
kube@ctf1-k8s-deploy1-545977f47-g9dpl:~$ kubectl config view
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
kube@ctf1-k8s-deploy1-545977f47-g9dpl:~$ ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt namespace token
kube@ctf1-k8s-deploy1-545977f47-g9dpl:~$ kubectl auth can-i --list --as=system:anonymous
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The clusters section of your kubeconfig needs to contain the host and port of the Kubernetes API server, e.g.:
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://API_SERVER_HOST:PORT
Edit:
When --as is added as a parameter to kubectl auth can-i, kubectl no longer uses the in-cluster configuration, which is why it refers to localhost:8080 instead of the correct API server address.
At the moment kubectl has an issue where any client configuration override flag (e.g. --as, --request-timeout, ...) disables the automatic fallback to the in-cluster configuration.
See Kubernetes Github issue
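As a hedged workaround sketch (assuming you want to keep using the mounted service account rather than running kubectl proxy): build an explicit kubeconfig from the mounted token and CA so that override flags like --as still reach the real API server.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
kubectl config set-cluster in-cluster --server=https://kubernetes.default.svc --certificate-authority=$SA_DIR/ca.crt --embed-certs=true
kubectl config set-credentials sa-user --token="$(cat $SA_DIR/token)"
kubectl config set-context in-cluster --cluster=in-cluster --user=sa-user
kubectl config use-context in-cluster
kubectl auth can-i --list --as=system:anonymous
The names in-cluster and sa-user are arbitrary; once the kubeconfig explicitly names the API server, the localhost:8080 fallback no longer applies.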
I'm trying to install the Kubernetes dashboard on an AWS Linux image, but I'm getting JSON output in the browser. I have run the dashboard commands and provided the token, but it did not work.
Kubernetes 1.14+
1) Open a terminal on your workstation (standard SSH tunnel to port 8002):
$ ssh -i "aws.pem" -L 8002:localhost:8002 ec2-user@ec2-50-50-50-50.eu-west-1.compute.amazonaws.com
2) When you are connected, type:
$ kubectl proxy -p 8002
3) Open the following link with a web browser to access the dashboard endpoint: http://localhost:8002/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Try this:
$ kubectl proxy
Open the following link with a web browser to access the dashboard endpoint:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
More info
I had a similar issue with reaching the dashboard following your linked tutorial.
One way of approaching your issue is to change the type of the service to LoadBalancer:
Exposes the service externally using a cloud provider’s load balancer.
NodePort and ClusterIP services, to which the external load balancer
will route, are automatically created.
For that use:
kubectl get services --all-namespaces
kubectl edit service kubernetes-dashboard -n kube-system -o yaml
and change the type to LoadBalancer.
Wait till the ELB gets spawned (takes a couple of minutes) and then run
kubectl get services --all-namespaces again; you will see the address of your dashboard service and you will be able to reach it under the “External Address”.
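If you prefer a one-liner over interactive editing, a patch along these lines should also switch the type (a sketch, using the same service name and namespace as above):
kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"LoadBalancer"}}'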
As for the tutorial you have posted, it is from 2016, and it turns out something went wrong with the /ui part of the address URL; you can read more about it in this GitHub issue. There is a claim that you should use /ui after authentication, but that does not work either.
For the default settings of ClusterIP you will be able to reach the dashboard on this address:
‘YOURHOSTNAME’/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Another option is to delete the old dashboard:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Install the official one:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Run kubectl proxy and reach it on localhost using:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview
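If the dashboard loads but rejects your token, one hedged sketch for generating an admin login token (the dashboard-admin name is just an example; this assumes an older cluster where service account token secrets are created automatically):
kubectl -n kube-system create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin-token | awk '{print $1}')
Copy the token field from the output into the dashboard login screen.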
We have set up OpenShift Origin on AWS using this handy guide. Our eventual
hope is to have some pods running REST or similar services that we can access
for development purposes. Thus, we don't need DNS or anything like that at this
point, just a public IP with open ports that points to one of our running pods.
Our first proof of concept is trying to get a jenkins (or even just httpd!) pod
that's running inside OpenShift to be exposed via an allocated Elastic IP.
I'm not a network engineer by any stretch, but I was able to successfully get
an Elastic IP connected to one of my OpenShift "worker" instances, which I
tested by sshing to the public IP allocated to the Elastic IP. At this point
we're struggling to figure out how to make a pod visible at that allocated Elastic IP,
however. We've tried a kubernetes LoadBalancer service, a kubernetes Ingress,
and configuring an AWS Network Load Balancer, all without being able to
successfully connect to 18.2XX.YYY.ZZZ:8080 (my public IP).
The most promising attempt was using oc port-forward, which seemed to get at least part
of the way there, but frustratingly hangs without returning:
$ oc port-forward --loglevel=7 jenkins-2-c1hq2 8080 -n my-project
I0222 19:20:47.708145 73184 loader.go:354] Config loaded from file /home/username/.kube/config
I0222 19:20:47.708979 73184 round_trippers.go:383] GET https://ec2-18-2AA-BBB-CCC.us-east-2.compute.amazonaws.com:8443/api/v1/namespaces/my-project/pods/jenkins-2-c1hq2
....
I0222 19:20:47.758306 73184 round_trippers.go:390] Request Headers:
I0222 19:20:47.758311 73184 round_trippers.go:393] X-Stream-Protocol-Version: portforward.k8s.io
I0222 19:20:47.758316 73184 round_trippers.go:393] User-Agent: oc/v1.6.1+5115d708d7 (linux/amd64) kubernetes/fff65cf
I0222 19:20:47.758321 73184 round_trippers.go:393] Authorization: Bearer Pqg7xP_sawaeqB2ub17MyuWyFnwdFZC5Ny1f122iKh8
I0222 19:20:47.800941 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
I0222 19:20:47.800963 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
( oc port-forward hangs at this point and never returns)
We've found a lot of information about how to get this working under GKE, but
nothing that's really helpful for getting this working for OpenShift Origin on
AWS. Any ideas?
Update:
So we realized that sysdig.com's blog post on deploying OpenShift Origin on AWS was missing some key AWS setup information, so based on OpenShift Origin's Configuring AWS page, we set the following env variables and re-ran the ansible playbook:
$ export AWS_ACCESS_KEY_ID='AKIASTUFF'
$ export AWS_SECRET_ACCESS_KEY='STUFF'
$ export ec2_vpc_subnet='my_vpc_subnet'
$ ansible-playbook -c paramiko -i hosts openshift-ansible/playbooks/byo/config.yml --key-file ~/.ssh/my-aws-stack
I think this gets us closer, but creating a load-balancer service now gives us an always-pending IP:
$ oc get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-lb 172.30.XX.YYY <pending> 8080:31338/TCP 12h
The section on AWS Applying Configuration Changes seems to imply I need to use AWS Instance IDs rather than hostnames to identify my nodes, but I tried this and OpenShift Origin fails to start if I use that method. Still at a loss.
It may not satisfy the "Elastic IP" part, but how about using the AWS cloud provider's ELB to expose the pod's IP/port via a Service with the LoadBalancer type?
Make sure to configure the AWS cloud provider for the cluster (see References below).
Create a svc for the pod(s) with type LoadBalancer.
For instance, to expose a Dashboard via an AWS ELB:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: LoadBalancer    # <-----
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Then the svc will be exposed as an ELB and the pod can be accessed via the ELB public DNS name, e.g. a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com.
$ kubectl (oc) get svc kubernetes-dashboard -n kube-system -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes-dashboard LoadBalancer 10.100.96.203 a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com 443:31636/TCP 16m k8s-app=kubernetes-dashboard
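As a quick sanity check from your workstation (a sketch; -k is needed because the dashboard serves a self-signed certificate, and the ELB DNS name can take a few minutes to start resolving):
curl -k https://a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com/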
References
K8S AWS Cloud Provider Notes
Reference Architecture OpenShift Container Platform on Amazon Web Services
DEPLOYING OPENSHIFT CONTAINER PLATFORM 3.5 ON AMAZON WEB SERVICES
Configuring for AWS
Check this guide out: https://github.com/dwmkerr/terraform-aws-openshift
It's got some significant advantages vs. the one you're referring to in your post. Additionally, it has a clear terraform spec that you can modify and reset to using an Elastic IP (haven't tried it myself but it should work).
Another way to "lock" your access to the installation is to re-code the assignment of the Public URL to the master instance in the terraform script, e.g., to a domain that you own (the default script sets it to an external IP-based value with "xip.io" added - works great for testing), then set up a basic ALB that forwards https 443 and 8443 to the master instance that the install creates (you can do it manually after the install is completed, also need a second dummy Subnet; dummy-up the healthcheck as well) and link the ALB to your domain via Route53. You can even use free Route53 wildcard certs with this approach.