I recently received an email from Google:
Hello Google Kubernetes Engine Customer,
We’re writing to remind you that we have discouraged Basic authentication in
Google Kubernetes Engine (GKE). This authentication strategy has been
disabled by default since version 1.12 because it does not align with
Google's security best practices, and will no longer be supported in GKE
starting from v1.19.
You're receiving this message because you're currently using a
static password to authenticate to one or more of your GKE clusters.
How can I avoid using a static password, and where is it kept? I don't remember setting this up.
I've referenced https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster. Am I right to understand that I didn't do anything in particular to fall out of compliance, other than using the GCP automation prior to 1.12, and that I now need to take some action to stay within current standards?
I want to make sure I understand the history and scope of this change, and ideally have a simplified video I can follow verbatim so I don't end up with downtime I can't recover from. Or just a set of commands, if that is the standard way to keep my current workflow and keep authenticating with the user that already had access prior to 1.12 when I deploy my app.
Disabling basic authentication should not result in any downtime for your cluster.
The preferred method for authenticating with the API server is OAuth, and it should already be enabled for your cluster. You can check that it is working by running the following commands:
gcloud auth login
gcloud container clusters get-credentials $CLUSTERNAME --zone $ZONE
Running any kubectl command, e.g. kubectl cluster-info
Assuming all goes well there (and I can't think of any reason it will not), you'd then run
gcloud container clusters update $CLUSTERNAME --no-enable-basic-auth
in order to disable basic auth.
EDIT: If you need to (re)enable basic auth, you can run the following command:
gcloud container clusters update $CLUSTERNAME --username=$USER --password=$PASS
where $USER and $PASS are the username and password you were previously using (or a new user/password if you choose).
Of course, if you have any automations using the Cloud SDK which use basic auth, you'd need to update those as well.
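For example, a CI pipeline that previously relied on the static password in its kubeconfig can be switched to OAuth-based credentials via the Cloud SDK. A minimal sketch, assuming a hypothetical service account key file ci-key.json and the same $CLUSTERNAME/$ZONE placeholders as above:
# Authenticate the automation as a service account rather than with basic auth
gcloud auth activate-service-account --key-file=ci-key.json
# Rewrite the kubeconfig entry with OAuth-based credentials
gcloud container clusters get-credentials $CLUSTERNAME --zone $ZONE
# Confirm the new credentials work before disabling basic auth
kubectl cluster-info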
I am running Cloud Foundry on a Kubernetes cluster on the Digital Ocean platform. I am able to deploy apps successfully via cf push APP_NAME without a database. Now I would like to run my Django app with a PostgreSQL database. When I run cf marketplace from the terminal, it does not show me the list of offerings/services available in the marketplace.
cf marketplace
Output
Getting services from marketplace in org abc-cforg / space abc-cfspace as admin...
OK
No service offerings found
Output from cf version
cf version 6.53.0+8e2b70a4a.2020-10-01
I have tried with cf version 7 as well but no luck.
I am quoting from this doc -
No problem. The Cloud Foundry marketplace is a collection of services that can be
provisioned on demand. Your marketplace may differ depending on the Cloud Foundry
distribution you are using.
What should I be doing now to get the list of service offerings in the marketplace? I googled for quite some time but could not find a fix.
I have a Pivotal account as well, but that has already been deprecated, as per this link.
By default, there will not be any services in the marketplace. As a platform operator, you'll need to add the services that you want to expose to your CloudFoundry users.
If you look at a public CloudFoundry offering, you can see that this is done for you, and when you run cf m you'll get the list of services that the public provider and their operations team set up for you.
When you run your own CF, that's on you to set up.
There are a couple of things you can do:
The easy option is to use user-provided services. These are not set up through the marketplace, so you simply ignore that command altogether.
You would instead procure your service from somewhere else. You mentioned using Digital Ocean, so you could procure one of their managed databases. Once you have your database credentials, you would run cf cups my-service -p "username, password, host" (these are free-form field names, enter whatever makes sense for your service) and, when prompted, enter the info. This creates a user-provided service, which can be bound to your apps and works just like a service you'd acquire through the marketplace.
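As a rough sketch of that flow end to end (my-service and my-app are hypothetical names for the service instance and your Django app; adjust the fields to whatever your database needs):
cf cups my-service -p "username, password, host"
cf bind-service my-app my-service
cf restage my-app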
The more involved option requires deploying more infrastructure to run a service broker. The service broker talks to Cloud Controller and provides a catalog of available services. Those services are what Cloud Controller displays when you run cf m. (Registering a broker once it's deployed is sketched after the links below.)
There are some community-provided brokers and commercial ones as well. I think a lot of these brokers also assume a Bosh deployment and not Kubernetes, so be careful to read the instructions and see if that's a requirement.
A quick scan through and here are a few that seem like they should work:
https://github.com/cloudfoundry-community/cf-containers-broker
https://github.com/cloudfoundry-community/s3-broker
https://github.com/cloudfoundry-community/rds-broker
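If you go the broker route, then once the broker itself is deployed and reachable you still need to register it with Cloud Controller and expose its plans. A sketch with hypothetical values (the broker name, credentials, URL, and the my-postgres offering are all placeholders):
cf create-service-broker my-broker BROKER_USER BROKER_PASS https://my-broker.example.com
cf enable-service-access my-postgres
cf marketplace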
I managed to get multicluster Istio working following the documentation.
However, this requires the kubeconfig of each cluster to be set up on the others. I am looking for an alternative to doing that. Based on a presentation from solo.io and Admiral, it seems it might be possible to set up ServiceEntries to accomplish this manually. The Istio docs are scarce in this area. Does anyone have pointers on how to make this work?
There are some advantages to setting up the discovery manually or through our CD processes...
if one cluster gets compromised, the creds to the other clusters don't leak
it allows us to limit which services are discovered
I posted the question on twitter as well and hope to get some feedback from the Istio contributors.
As per Admiral docs:
Admiral acts as a controller watching k8s clusters that have a credential stored as a secret object in the namespace Admiral is running in. Admiral delivers Istio configuration to each cluster to enable services to communicate.
No matter how you manage control-plane configuration (manually or with a controller), you have to store and provision credentials somehow. In this case it's done with secrets.
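For illustration only, this is roughly what provisioning such a credential secret looks like with plain Istio's istioctl (Admiral has its own, similar mechanism); cluster1 and cluster2 are placeholder kubeconfig contexts. The generated secret is itself a scoped kubeconfig, which is the point: the credential still has to be stored somewhere.
istioctl x create-remote-secret --context=cluster2 --name=cluster2 | kubectl apply -f - --context=cluster1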
You can store your secrets securely in git with sealed-secrets.
You can read more here.
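A minimal sketch of that workflow, assuming the sealed-secrets controller is already installed in the cluster and your secret manifest is in a hypothetical file cluster-creds.yaml:
# Encrypt the plain Secret into a SealedSecret that is safe to commit to git
kubeseal --format yaml < cluster-creds.yaml > cluster-creds-sealed.yaml
# Apply the sealed version; the controller decrypts it into a normal Secret in-cluster
kubectl apply -f cluster-creds-sealed.yaml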
This happens while trying to create a VPC-native GKE cluster. Per the documentation here the command to do this is
gcloud container clusters create [CLUSTER_NAME] --enable-ip-alias
However, this command gives the below error.
ERROR: (gcloud.container.clusters.create) Only alpha clusters (--enable_kubernetes_alpha) can use --enable-ip-alias
The command does work when the option --enable_kubernetes_alpha is added, but it gives another message.
This will create a cluster with all Kubernetes Alpha features enabled.
- This cluster will not be covered by the Container Engine SLA and
should not be used for production workloads.
- You will not be able to upgrade the master or nodes.
- The cluster will be deleted after 30 days.
Edit: The test was done in zone asia-south1-c
My questions are:
Is VPC-Native cluster production ready?
If yes, what is the correct way to create a production ready cluster?
If VPC-Native cluster is not production ready, what is the way to connect privately from a GKE cluster to another GCP service (like Cloud SQL)?
Your command seems correct. It seems like something is going wrong during the creation of the cluster in your project. Are you using any flags other than the ones in the command you posted?
When I set my Google Cloud Shell to region europe-west1, the cluster deploys error-free and uses 1.11.6-gke.2 (default).
You could try to manually create the cluster using the GUI instead of the gcloud command. While creating the cluster, check the “Enable VPC-native (using alias IP)” feature. Try using the newest non-alpha version of GKE if any are showing up for you.
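For reference, the non-alpha create command would look roughly like this (the cluster name and zone are placeholders; the version is just the default mentioned above):
gcloud container clusters create my-cluster --zone europe-west1-b --cluster-version 1.11.6-gke.2 --enable-ip-alias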
The public documentation you posted on GKE IP aliasing, as well as the GKE projects.locations.clusters API, shows this to be in GA. All signs point to this being production-ready. For whatever it's worth, the feature was announced last May on the Google Cloud blog.
What you can try is to update your version of Google Cloud SDK. This will bring everything up to the latest release and remove alpha messages for features that are in GA right now.
$ gcloud components update
I was happily deploying to Kubernetes Engine for a while, but while working on an integrated cloud container builder pipeline, I started getting into trouble.
I don't know what changed. I can not deploy to kubernetes anymore, even in ways I did before without cloud builder.
The pod rollout process gives an error indicating that it is unable to pull from the registry. This seems weird, because the images exist (I can pull them using the CLI) and I granted all possibly related permissions to my user and the Cloud Builder service account.
I get the error ImagePullBackOff and see this in the pod events:
Failed to pull image
"gcr.io/my-project/backend:f4711979-eaab-4de1-afd8-d2e37eaeb988":
rpc error: code = Unknown desc = unauthorized: authentication required
What's going on? Who needs authorization, and for what?
In my case, my cluster didn't have the Storage read permission, which is necessary for GKE to pull an image from GCR.
My cluster didn't have proper permissions because I created the cluster through terraform and didn't include the node_config.oauth_scopes block. When creating a cluster through the console, the Storage read permission is added by default.
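If you want to verify this on an existing cluster, the scope to look for is https://www.googleapis.com/auth/devstorage.read_only (the "Storage read" permission). A quick check, with the cluster name and zone as placeholders:
gcloud container clusters describe my-cluster --zone my-zone --format="flattened(nodeConfig.oauthScopes)"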
The credentials in my project somehow got messed up. I solved the problem by re-initializing a few APIs including Kubernetes Engine, Deployment Manager and Container Builder.
The first time I tried this I didn't succeed, because to disable something you first have to disable all the APIs that depend on it. If you do this via the GCloud web UI, you'll likely see a list of services, not all of which can be disabled in the UI.
I learned that using the gcloud CLI you can list all APIs of your project and disable everything properly.
Things worked after that.
The reason I knew things were messed up is that I had a copy of the same setup as a production environment, and these problems did not exist there. The development environment had gone through a lot of iterations and messing around with credentials, so somewhere along the way things got corrupted.
These are some examples of useful commands:
gcloud projects get-iam-policy $PROJECT_ID
gcloud services disable container.googleapis.com --verbosity=debug
gcloud services enable container.googleapis.com
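To see what is currently enabled before disabling things in dependency order, you can also run:
gcloud services list --enabled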
More info here, including how to restore service account credentials.
I am trying to create some firewall rules in Google Compute Engine. Everything goes well, but some time later they just disappear.
I tried to add rules on the default network, and also on a custom-created one; in both cases the result is the same.
I tried both through the web UI and through the gcloud tool.
If you believe that someone or something is reverting your Firewall changes, you can take multiple approaches to verify that.
inspect Cloud Console Activity logs
same using CLI: gcloud beta logging read "resource.type=gce_firewall_rule"
check GCE Operations section in Cloud Console
check GCE API requests in Cloud Console Logging, using this advanced filter:
resource.type="gce_firewall_rule"
jsonPayload.event_subtype:"compute.firewalls"
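Put together as a single CLI call, that check could look like this (a sketch; adjust the limit and format as needed):
gcloud beta logging read 'resource.type="gce_firewall_rule" AND jsonPayload.event_subtype:"compute.firewalls"' --limit=20 --format=json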