kubelogin convert-kubeconfig needs to be invoked after every az aks get-credentials - kubectl

I'm having the problem described here: https://aptakube.com/blog/how-to-use-azure-kubelogin
If I follow the article and run kubelogin convert-kubeconfig -l azurecli, it does indeed work again.
The problem is that every time I refresh my credentials using az aks get-credentials I need to repeat kubelogin convert-kubeconfig -l azurecli, otherwise it won't work.
Is there a way to make this change permanent?
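For anyone scripting around this in the meantime, a minimal sketch that chains the two commands so the conversion is never forgotten (the function name and the cluster arguments in the usage line are placeholders, not anything from the question):
# Hedged sketch: wrap az aks get-credentials so kubelogin runs right after it.
aks-creds() {
  az aks get-credentials "$@" && kubelogin convert-kubeconfig -l azurecli
}
# Usage (placeholder names): aks-creds --resource-group my-rg --name my-cluster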

Related

Why am I getting inconsistent results when attempting to update my instance group using `gcloud`?

I have an instance group in GCP, and I am working on automating the deployment process. The instances in this group are based on a tagged GCR image. When a new image is pushed to the container registry, we have been manually triggering an upgrade by navigating to the instance group from console.cloud.google.com, clicking "restart/replace vms", and setting these options:
Operation: replace
Maximum surge: 3
Maximum unavailable: 0
Here is my gcloud command for doing the same thing (link to Google's documentation about this command):
gcloud beta compute instance-groups managed rolling-action start-update my-instance-group \
--version=template=my-template-with-image \
--replacement-method=substitute \
--max-surge=3 \
--max-unavailable=0 \
--region=us-central1
Manually, the process always works. But the gcloud command is flaky. It always appears to succeed from the command line, but the instance groups are not always restarted. I have even tried adding these two flags, and the restart attempt was still unreliable:
--minimal-action=replace \
--most-disruptive-allowed-action=replace \
There is quite a lot of output from the gcloud command (which I can provide, if necessary), but here are the only parts of the output that differ between a successful and unsuccessful attempt:
Good:
currentActions:
  creating: 1
status:
  isStable: false
versionTarget:
  isReached: false
Bad:
currentActions:
  creating: 0
status:
  isStable: true
versionTarget:
  isReached: true
That is pretty much the extent of my knowledge at this point. I am not sure how to move forward in automating the build process, and I have been unable to find answers from the documentation so far.
I hope I was not too verbose, and thank you in advance to anyone who spends time on this :)
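For what it's worth, here is a rough sketch of how a script could at least detect the "Bad" case, using the group and region from the command above. It assumes that when nothing is replaced the group reports isStable: true immediately after the update request, as in the output shown:
set -euo pipefail
GROUP=my-instance-group
REGION=us-central1

gcloud beta compute instance-groups managed rolling-action start-update "$GROUP" \
  --version=template=my-template-with-image \
  --replacement-method=substitute \
  --max-surge=3 --max-unavailable=0 \
  --region="$REGION"

# If the group says it is already stable, nothing was scheduled for replacement
# (the "Bad" output above), so fail loudly instead of pretending the deploy worked.
STABLE=$(gcloud compute instance-groups managed describe "$GROUP" \
  --region="$REGION" --format="value(status.isStable)")
if [ "$STABLE" = "True" ]; then
  echo "Instance group reported stable immediately; no instances were replaced" >&2
  exit 1
fi

# Otherwise block until the rolling replacement has finished.
gcloud compute instance-groups managed wait-until --stable "$GROUP" --region="$REGION"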

Adding user to group chrome-remote-desktop - Failed to access group. Is the user a member?

I created an instance with Debian 9 and was following the instructions on Google's site here. I have done this before successfully. All was going fine, but now when I do this part:
DISPLAY= /opt/google/chrome-remote-desktop/start-host \
--code="4/xxxxxxxxxxxxxxxxxxxxxxxx" \
--redirect-url="https://remotedesktop.google.com/_/oauthredirect" \
--name=
I get the error
Adding user newuser_gmail_com to group chrome-remote-desktop
ERROR:Failed to access chrome-remote-desktop group. Is the user a member?
Can anyone help me out here? I notice that when I did this previously, the username created was not newuser_gmail_com, but rather simply newuser. Any suggestions you have would be much appreciated. Many thanks!
I found the answer, but this raises a possible bug for the Google Cloud team. The bug occurs if I add enable-oslogin = TRUE as metadata. This causes chrome-remote-desktop to fail.
When a user is added to a group (chrome-remote-desktop in this case), the change is not reflected in existing sessions until the user logs out and back in. To work around this limitation, Chrome Remote Desktop attempts to use sg to access the new group from the existing session. It looks like this isn't working for some reason on this system (apparently OS Login related?), so starting the host fails.
It should be sufficient to log out and back in. Once logged back in, verify that the output of groups contains chrome-remote-desktop, then try running the headless setup flow again. (Make sure you generate a new command, as the --code argument is one-time-use only.)
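As a rough illustration of that check (assuming a standard Debian setup, nothing OS Login specific):
# Groups active in the current session vs. what the group database says:
id -nG                                 # should list chrome-remote-desktop after logging back in
getent group chrome-remote-desktop    # shows which accounts are in the group at all

# If the account is missing from the group entirely, add it, then log out and back in:
sudo usermod -aG chrome-remote-desktop "$USER"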

What is the reason for error "Resource in project is the subject of a conflict" while trying to recreate a cloudsql instance?

I am trying to create a cloudsql instance with the following command:
gcloud beta sql instances create sql-instance-1 \
--tier=db-f1-micro --region=asia-south1 \
--network=default --storage-type=HDD --storage-size=10GB \
--authorized-networks=XX.XXX.XX.XX/XX
I don't need the instance sql-instance-1 running all the time, so I keep a SQL dump and recreate the instance from it whenever I need the database. When I run this command it fails with the following error:
ERROR: (gcloud.beta.sql.instances.create) Resource in project [my-project-id] is the subject of a conflict: The instance or operation is not in an appropriate state to handle the request.
From what I understand, gcloud is complaining that the instance name was used before, even though that instance has already been deleted. When I change the name to a new, unused name the command works fine. The problem with this is that I need to give a new name every time I re-create the instance from the dump.
My questions are:
Is this expected behavior, i.e. must the name of a Cloud SQL instance be unique within a project and not one that has been used before?
I also found that the --network option is not recognized by gcloud; it seems to work only with gcloud beta, as explained here. When is this expected to become GA?
This is indeed expected behaviour. From the documentation:
You cannot reuse an instance name for up to a week after you have deleted an instance.
Regarding the --network flag and its schedule for GA, there is no ETA for its release outside of beta. However, its release will be listed in the Google Cloud SDK Release Notes, which you can get updates from by subscribing to the google-cloud-sdk-announce group.
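Given that restriction, one possible workaround (just a sketch, reusing the flags from the question) is to generate a fresh name each time the instance is recreated from the dump:
# Name reuse is blocked for about a week, so derive a unique name per recreation.
NAME="sql-instance-$(date +%Y%m%d%H%M)"
gcloud beta sql instances create "$NAME" \
  --tier=db-f1-micro --region=asia-south1 \
  --network=default --storage-type=HDD --storage-size=10GB \
  --authorized-networks=XX.XXX.XX.XX/XX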

Change CNAME to Active AWS Region

I have an application running in AWS, in both us-east-1 and us-west-2.
When I have to perform maintenance, or there is some other issue, I want to switch DNS based on an nslookup.
I currently have two Jenkins jobs set up: east-to-west and west-to-east which requires me to manually verify the DNS record and pick the appropriate job. I now want to have a master job that will perform the nslookup then kick off the appropriate job.
I'm stuck trying to use the Jenkins conditional. If I do an "nslookup myapp | grep west" then I can trigger the west-to-east job. I'm not finding a way to do an 'else' if the condition is false.
Another option I'd consider is changing parameters as shown in the logic below and then doing a post-build step. My job names are flip-us-east-1-to-us-west-2 and flip-us-west-2-to-us-east-1:
a=us-east-1
b=us-west-2
if nslookup myapp | grep -q east; then   # if true, this will run flip-us-west-2-to-us-east-1
  a=us-west-2
  b=us-east-1
fi
JOB="flip-${a}-to-${b}"   # then trigger $JOB from the post-build step
Pipeline is an excellent idea but it's not installed :(
I ended up using a choice parameter which requires me to do the nslookup. I then do a string match conditional. Not ideal but works and boss is happy ;)
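For completeness, a sketch of what that can look like as a shell build step (assuming a choice parameter named REGION exposed as an environment variable; the parameter name is made up):
# REGION comes from the Jenkins choice parameter (us-east-1 or us-west-2).
if [ "$REGION" = "us-east-1" ]; then
  JOB=flip-us-east-1-to-us-west-2
else
  JOB=flip-us-west-2-to-us-east-1
fi
echo "Triggering downstream job: $JOB"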

How do I get AWS EC2 to not reset my sshd_config file?

I want to allow password logins to my EC2 instances. I know which line it is that controls this in /etc/ssh/sshd_config and what it should be set to. Specifically:
PasswordAuthentication yes
However, even when I've set this on a master image that I keep, whenever I restore it to a new instance the value on that line gets reset to 'no', which means that every time I launch a new instance I have to manually change this file yet again. This leaves my instance setup one step away from being fully automated.
What do I need to do to my master image so that every instance I create from it leaves my sshd_config file the way I like?
This is a Fedora 16 image fully configured with proprietary and other software.
If you used an old AMI as the basis for your images, that option used to be changed by the kickstart file, but as far as I know that option was removed some time ago.
These days the AMI is most likely configured by cloud-init, and if that is the case you should find and change the ssh_pwauth option in /etc/cloud/cloud.cfg:
Edit file /etc/cloud/cloud.cfg (needs root permission, e.g. sudo)
Look for the ssh_pwauth key
Change its value from 0 to true. Not 1, but true!
ssh_pwauth: true
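A rough sketch of baking that into the master image (assuming the image really is managed by cloud-init as described above):
# Rewrite the ssh_pwauth line in place on the master image, then verify it.
sudo sed -i 's/^ssh_pwauth:.*/ssh_pwauth: true/' /etc/cloud/cloud.cfg
grep '^ssh_pwauth' /etc/cloud/cloud.cfg   # should print: ssh_pwauth: true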