What TLS version and ciphers does kubectl use? - kubectl

When kubectl connects to a cluster, I need to be able to see what TLS version it's using and what ciphers as well. I tried running with -v=999, but the debug output doesn't show that. Is there a way to see that, please?

One way to check which TLS version is in use is to connect to the master node, look under /etc/kubernetes/manifests, and run grep -i tls-min-version, as described here.
Another thing you can try is checking the cluster logs dump to see if you find what you need in there:
kubectl cluster-info dump
For reference, you might want to check the Kubernetes documentation.
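If you also want to see what is actually negotiated on the wire, a rough client-side check is to probe the API server endpoint from your kubeconfig with openssl. Note that this shows what the server accepts for openssl's offer, which may differ from what kubectl's Go TLS stack ends up negotiating; <apiserver-host>:<port> below is a placeholder for your own endpoint:
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
openssl s_client -connect <apiserver-host>:<port> < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'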

Related

Migrate to updated APIs

I'm getting a warning from GKE to migrate APIs, though I'm not using the API in question, /apis/extensions/v1beta1/ingresses.
I ran the command kubectl get deployment [mydeployment] -o yaml and did not find the API in question.
It seems an IngressList is what calls the old API. To check, you can use the following command, which will give you the entire ingress info.
kubectl get --raw /apis/extensions/v1beta1/ingresses | jq
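If you want to cross-check which API groups actually serve your Ingress objects, something like the following should work (this assumes a cluster version where networking.k8s.io/v1 is already available):
kubectl api-resources | grep -i ingress
kubectl get ingresses.v1.networking.k8s.io --all-namespaces
If the second command returns your Ingresses, the objects themselves are fine and only the deprecated endpoint is still being reported by the usage check.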
I have the same issue, but I have upgraded the node version from 1.21 to 1.22.

How to access UI in Airflow 1.10?

To start with, I am trying to upgrade from version 1.9 to 1.10, so my setup contains two VMs running different versions of Airflow with different port forwarding.
I can access the UI on the VM running 1.9 but am not able to access the UI on the one running 1.10.
To debug, I want to confirm whether the Airflow webserver is running. If I execute
sudo systemctl start airflow-webserver
it throws no error, but when I look at netstat I do not see any process listening on port 8080 (the default).
Also, I have not created any user, as I do not need RBAC authentication. Can that be a problem?
As requested by @kaxil, below is the output of ps aux | grep airflow.
Can someone provide suggestions on how to fix this problem? If you need any further resources, I can provide them; I am not sure what is relevant here.
Output of journalctl -u airflow-webserver.service -b
The error message shows that there is an issue with your airflow.cfg file, i.e. there might be a character in your airflow.cfg that is causing the issue. Recheck your config file; if you don't find an issue, post your config file in your question and we will try to figure it out.
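If it helps, a quick way to surface that kind of config parsing error is to start the webserver in the foreground so the traceback prints straight to the terminal. Roughly, for Airflow 1.10 (the virtualenv path is just a placeholder for your own install):
source /path/to/your/venv/bin/activate
airflow webserver -p 8080
Once it stays up, you can confirm something is listening with ss -tlnp | grep 8080.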

GCP: kubectl exec/logs fails to container on using UBUNTU as OS

I created a 2-node cluster with Ubuntu as the node OS.
After deploying a container, kubectl exec or kubectl logs fails with the following error:
Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user <username>
Please tell me how to make it work.
Nodes are part of default pool only.
Steps to reproduce:
gcloud container clusters create "gke-test-cluster" --image-type=UBUNTU --machine-type=n1-standard-2 --zone us-east1-c --num-nodes 2 --cluster-version=1.8
kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/shell-demo.yaml
kubectl get pod shell-demo
kubectl exec -it shell-demo -- /bin/bash
Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-0c"?
kubectl logs shell-demo
Error from server: Get https://10.142.0.5:10250/containerLogs/default/shell-demo/nginx: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-0c"?
I am using my laptop for all CLI commands.
This issue has already been raised at:
https://issuetracker.google.com/issues/77986235
https://serverfault.com/questions/907468/gcp-kubectl-exec-logs-fails-to-container-on-using-ubuntu-as-os/907882?noredirect=1#comment1177112_907882
I tried to reproduce your issue with your exact commands and it worked just fine for me. This has to be caused by something else (like the firewall, as suggested in the issue tracker).
Actually, check to confirm you have these three firewall rules:
gke-gke-test-cluster-07424324-all ...
gke-gke-test-cluster-07424324-ssh ...
gke-gke-test-cluster-07424324-vms ...
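A quick way to check is to list the rules by cluster name prefix (the cluster name matches your reproduction steps; the hash suffix will differ in your project):
gcloud compute firewall-rules list --filter="name~gke-gke-test-cluster"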
Regarding Cloud Shell versus your laptop: there is not much difference if you are correctly authenticated with the Cloud SDK, so saying "This issue is also reproducible from GCP Cloud Shell" doesn't really add anything.
If you do have the firewall rules and don't have much set up in the project, I would recommend creating a new project and starting over there.
It turned out to be an issue with the size of the project metadata. We cleaned it up and it worked.
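For anyone hitting the same thing, a rough way to inspect and trim the project-wide metadata (the key name below is just an example, not from the original report):
gcloud compute project-info describe --format="yaml(commonInstanceMetadata)"
gcloud compute project-info remove-metadata --keys=<stale-key>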

debugging distcc: no job seems to run on slave

First, my ultimate goal is to cross-compile OpenCV for ARM, so I have tried two approaches, but with no success so far.
This question is about using distcc for compiling: the target runs the make command but takes advantage of a beefy server to speed things up.
Basically, the target doesn't seem to be sending jobs to the slave server.
I installed distcc on both machines (apt-get install distcc)
As I understand it, the daemon only needs to run on the slave.
I set up hosts in /etc/distcc/hosts: in that file I have the IPs of both the target (192.168.10.45) and the slave (192.168.10.34).
I run the daemon with
distccd --daemon --allow 192.168.10.45
to allow the target.
With ps aux | grep distcc I can see 32 instances of distccd running.
If I use
netstat -pant | grep distcc
I see the daemon listening
Now, if I tail the log file at /var/log/distccd.log, there is nothing there and nothing happening.
When I run a job on the target with
make -j33 CC=distcc
it seems to run fine, but I see nothing happening on the slave.
ufw is disabled, and the two machines can ping and talk to each other via SSH.
What am I missing here?
You must define the list of compilation hosts (through the /etc/distcc/hosts file or through the DISTCC_HOSTS environment variable) on the master (target) machine. Check the host list by running distcc --show-hosts on the master.
Specify distcc as a compiler for C++ as well:
make -j33 CC=distcc CXX=distcc
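Roughly, on the target that could look like this (the /32 job limit is only a guess based on the 32 distccd instances you see on the slave; keep localhost if you also want local compilation as a fallback):
echo "192.168.10.34/32 localhost" | sudo tee /etc/distcc/hosts
distcc --show-hosts
make -j33 CC=distcc CXX=distcc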
Did you run:
sudo update-distcc-symlinks
The official installation documentation currently omits this step. I had the same symptoms and had some trouble finding the log, but eventually saw that I had to specify logging in an environment variable:
DISTCCD_OPTS="${DISTCCD_OPTS} --log-file /dev/shm/distccd.log"
Which said:
(dcc_warn_masquerade_whitelist) CRITICAL! /usr/local/lib/distcc not found. You must see up masquerade (see distcc(1)) to list whitelisted compilers or pass --enable-tcp-insecure. To set up masquerade automatically run update-distcc-symlinks.
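If you start the daemon by hand as in the question, an equivalent sketch is to run the symlink step and pass the log file directly to distccd (reusing the --allow address and log path from above):
sudo update-distcc-symlinks
distccd --daemon --allow 192.168.10.45 --log-file /dev/shm/distccd.log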

Cannot connect to an Instance using ssh after Broken pipe

I'm totally new to SaltStack and AWS, so this is probably a dumb question. I created an AMI (using Packer) with SaltStack (masterless) as a provisioner. I was able to connect via SSH and configure the minion, and I was able to run salt-call state.highstate successfully.
Later, I lost the connection to my instance
([root@<ip> ec2-user]# Write failed: Broken pipe), and after that I wasn't able to connect again.
What's been tried:
Rebooting the instance, which didn't work.
Checking the permissions on the .ssh files; they seem fine.
Creating a new instance with the same key.pem; I was able to connect to that new instance.
I'm not sure if I'm missing a configuration in SaltStack. Is there a possibility that the keys on my instance changed after running salt-call state.highstate?
What am I doing wrong?
There's nothing inherent in running highstate that would have terminated the SSH connection and prevented you from reconnecting. I would suspect it's something in your SLS files, which are applied when you run highstate, that is breaking SSH.
Things that might have been done by your Salt states:
your SSH keys were removed/mangled
opensshd config was changed
openssh-server was uninstalled
EDIT: Having seen the output from Salt in the pastebin linked in comments, it's probably the AuthorizedKeysFile option being commented out:
-AuthorizedKeysFile .ssh/authorized_keys
+#AuthorizedKeysFile .ssh/authorized_keys
I recommend using file.replace to patch in specific changes you need, as opposed to replacing the whole /etc/ssh/sshd_config with a new version.
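A minimal sketch of that approach for a masterless setup (the state file name, state ID, and regex are examples based on the diff above, not the poster's actual states):
cat > /srv/salt/sshd_config_fix.sls <<'EOF'
uncomment_authorized_keys_file:
  file.replace:
    - name: /etc/ssh/sshd_config
    - pattern: '^#AuthorizedKeysFile\s+\.ssh/authorized_keys'
    - repl: 'AuthorizedKeysFile .ssh/authorized_keys'
EOF
sudo salt-call --local state.apply sshd_config_fix
sudo systemctl restart sshd
Depending on the distro, the SSH service may be named ssh instead of sshd.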