After spending ages trying to follow https://cloud.google.com/architecture/accessing-private-gke-clusters-with-cloud-build-private-pools and get my private worker pool to access my private GKE cluster, I managed to get it working.
However, I'm now trying to use Cloud Deploy to deploy workloads to my private GKE cluster. Since Cloud Deploy doesn't use my Cloud Build private worker pool, it can't leverage the connectivity between the worker pool and the GKE cluster. Is there a way to make Cloud Deploy access a private GKE cluster? I haven't found anything online about that.
Thanks!
This is possible! To do so, you need to configure an execution environment that specifies the private worker pool you would like to use. You can find more details here:
https://cloud.google.com/deploy/docs/execution-environment#changing_from_the_default_pool_to_a_private_pool
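For illustration, a Cloud Deploy target that routes its render and deploy operations through a private worker pool looks roughly like the sketch below. All of the names (project, region, cluster, and pool) are placeholders you would replace with your own:

```yaml
# Sketch of a clouddeploy.yaml Target using a private worker pool.
# PROJECT_ID, REGION, CLUSTER, and POOL are placeholders.
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: private-gke-target
description: private GKE cluster reached via a private worker pool
gke:
  cluster: projects/PROJECT_ID/locations/REGION/clusters/CLUSTER
executionConfigs:
- usages:
  - RENDER
  - DEPLOY
  privatePool:
    workerPool: projects/PROJECT_ID/locations/REGION/workerPools/POOL
```

Because the pool already has connectivity to the private control plane, the render and deploy steps inherit that access.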
Related
So I was able to connect to a GKE cluster from a java project and run a job using this:
https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/JobExample.java
All I needed was to configure my machine's local kubectl to point to the GKE cluster.
Now I want to ask if it is possible to trigger a job inside a GKE cluster from a Google Cloud Function, which means using the same library https://github.com/fabric8io/kubernetes-client
but from a serverless environment. I have tried to run it but obviously kubectl is not installed in the machine where the cloud function runs. I have seen something like this working using a lambda function from AWS that uses https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/ecs/AmazonECS.html
to run jobs in an ECS cluster. We're basically trying to migrate from that to GCP, so we're open to any suggestion regarding triggering the jobs in the cluster from some kind of code hosted in GCP in case the cloud function can't do it.
Yes, you can trigger your GKE job from a serverless environment, but you have to be aware of some edge cases.
With Cloud Functions, you don't manage the runtime environment. Therefore, you can't control what is installed on the container, and in particular you can't install kubectl on it.
You have 2 solutions:
Kubernetes exposes an API. From your Cloud Function you can simply call that API; kubectl is just an API call wrapper, nothing more! Of course, it requires more effort, but if you want to stay on Cloud Functions you don't have any other choice.
You can switch to Cloud Run. With Cloud Run, you can define your own container and therefore install kubectl in it, in addition to your web server (you have to wrap your function in a web server with Cloud Run, but it's pretty easy ;) )
Whichever solution you choose, you also have to be aware of how the GKE control plane is exposed. If it is publicly exposed (generally not recommended), there is no issue. But if you have a private GKE cluster, the control plane is only accessible from the internal network. To solve that with a serverless product, you have to create a Serverless VPC Access connector to bridge the Google-managed serverless VPC with the VPC of your GKE control plane.
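As a sketch of the first option: since kubectl is only an API wrapper, the function can create a Job by calling the Kubernetes batch/v1 API directly over HTTPS. Everything below (the control plane address, token, connector name, network, and region) is a placeholder, not a value from the question:

```shell
# Create a Job in the default namespace by POSTing to the batch/v1 API.
# This is what `kubectl create -f job.yaml` does under the hood.
# CONTROL_PLANE_IP, TOKEN, and ca.crt must come from your own cluster config.
curl --cacert ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -X POST "https://CONTROL_PLANE_IP/apis/batch/v1/namespaces/default/jobs" \
  -d '{
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "demo-job"},
    "spec": {
      "template": {
        "spec": {
          "containers": [{"name": "demo", "image": "busybox",
                          "command": ["echo", "hello"]}],
          "restartPolicy": "Never"
        }
      }
    }
  }'

# For a private control plane, bridge the serverless environment into your VPC
# first (connector name, network, region, and IP range are placeholders):
gcloud compute networks vpc-access connectors create my-connector \
  --network my-vpc --region us-central1 --range 10.8.0.0/28
```

The same POST can of course be issued from any HTTP client library inside the function instead of curl.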
How would you use an existing Compute Engine VM instance for a Google Cloud Build pipeline?
I know there's been a similar question in the past, however, the suggested answer is not really what I want - creating and then destroying a Compute Engine with every build.
In the settings, Cloud Build allows you to enable service account permissions for Compute Engine (Compute Instance Admin (v1)), but I've found no information on how to use that permission to run the build process on one of your predefined VM instances.
Or maybe I misunderstand the answer in the linked thread above and
COMMAND=sudo supervisorctl restart
actually restarts the existing VM supervisorctl? Any help would be appreciated.
You can't run a Cloud Build build on a GCE instance. The most customizable option you have is to run the build on a private pool, but even then the pool is fully managed: you never have access to the underlying VM.
Another option would be to start a powerful GCE instance with Cloud Build via the GCE API, run your operations there and then stop the GCE instance.
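That second option can be sketched as the following sequence, run from a Cloud Build step with the Compute Instance Admin role mentioned in the question. The instance name, zone, and remote command are placeholders (the supervisorctl call echoes the snippet from the linked thread):

```shell
# Start a pre-existing GCE instance, run the heavy work over SSH,
# then stop the instance again to avoid paying for idle time.
# my-builder and us-central1-a are placeholders.
gcloud compute instances start my-builder --zone us-central1-a
gcloud compute ssh my-builder --zone us-central1-a \
  --command 'sudo supervisorctl restart all'
gcloud compute instances stop my-builder --zone us-central1-a
```

Note this only drives the VM from the build; the build steps themselves still execute on Cloud Build's managed workers.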
I am new to google cloud platform and kubernetes in gcp. I am writing a java client code to connect to gcp and retrieve kubernetes secrets for some automation. Can someone please advise a good sample or documentation to start with?
Thanks in advance!
D'y'mean this?
Also, if it's Kubernetes on GCP, it's best to refer to it as GKE (Google Kubernetes Engine), unless you mean that you're running your own Kubernetes installation inside a GCE (Google Compute Engine) instance, in which case you treat it like a Kubernetes cluster on any other machine across the internet.
Lastly, I am going to flag this question, as @DazWilkin has explained that it is preferable to make some effort before asking.
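For what it's worth, the operation itself is small. A hedged sketch with placeholder names (cluster, zone, secret, and key), using gcloud to fetch credentials and kubectl to read the secret; a Java client such as fabric8 performs the same API calls programmatically:

```shell
# Placeholders: my-cluster, us-central1-a, my-secret, and the "password" key.
# Fetch cluster credentials, then read and decode one key of a secret.
gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 --decode
```

Secret values are base64-encoded in the API response, which is why the decode step is needed whether you use kubectl or a client library.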
How does Kubernetes know which external cloud provider it is running on?
Is there any specific service running in the master which finds out whether the Kubernetes cluster is running in AWS or Google Cloud?
Even if it is able to find out whether it is AWS or Google, where does it get the credentials to create the external AWS/Google load balancers? Do we have to configure the credentials somewhere so that it picks them up and creates the external load balancer?
When installing Kubernetes, you must specify the --cloud-provider=aws flag on a variety of components.
kube-controller-manager - this is the component which interacts with the cloud API when cloud-specific requests are made. It runs control loops which ensure that any cloud provider request is completed. So when you request a Service of type LoadBalancer, the controller-manager is the component that checks and ensures this was provisioned.
kube-apiserver - this simply ensures the cloud APIs are exposed, for example for persistent volumes.
kubelet - ensures that workloads are provisioned correctly on nodes. This is especially the case for things like persistent storage (EBS volumes).
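To make the LoadBalancer point concrete, a manifest like the one below is the kind of request that triggers the controller-manager's cloud loop; the name, selector, and ports are arbitrary examples:

```yaml
# When --cloud-provider is set, the controller-manager reacts to this
# Service by provisioning an external load balancer (e.g. an ELB on AWS).
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
```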
Do we have to configure the credentials somewhere so that it picks it from there and creates the external load balancer?
All the above components should be able to query the required cloud provider APIs. Generally this is done using IAM roles which ensure the actual node itself has the permissions. If you take a look at the kops documentation, you'll see examples of the IAM roles assigned to masters and workers to give those nodes permissions to query and make API calls.
It should be noted that this model is changing shortly, to move all cloud provider logic into a dedicated cloud-controller-manager which will have to be pre-configured when installing the cluster.
I have an instance of NSQ running within a zone in google compute engine as part of a larger application.
As part of an automated testing effort, I'd like the ability to push events to this queue in our test environments. I would rather not expose this instance to the internet, and instead, create a google cloud function that acts as a facade. The cloud function can be installed at the project level, which is great since I don't want production to have this capability.
It seems that cloud functions are created at the region level and do not have access to zone local IP addresses. As a result, I can't figure out a way to post events to NSQ without exposing it to the public internet.
Is it possible to have a google cloud function communicate down to an instance running on gce without exposing that instance to the public internet?
Investigating the matter, I've found that this is not possible yet. Internal connectivity from Google Cloud Functions to Google Compute Engine was requested some months ago [1].
In Google's public issue tracker, the request has been acknowledged and the Google engineering team is working on it [2]. There is no ETA for the functionality, though.
Sources:
[1] Google Groups question
[2] Public Google issue/bug tracker
Here is my workaround since there is no way to do this via cloud functions.
I am using the gcloud CLI to SSH into the GCE instance and issue curl commands to the NSQ instance from there. It is not great, but it gets the job done.
https://cloud.google.com/sdk/gcloud/
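Concretely, the workaround looks something like this. The instance name, zone, topic, and payload are placeholders; since curl runs on the instance itself, NSQ's HTTP port never has to leave the internal network:

```shell
# Publish a test event to NSQ's HTTP endpoint (default port 4151)
# by running curl on the instance over SSH. All names are placeholders.
gcloud compute ssh nsq-instance --zone us-central1-a \
  --command 'curl -s -d "{\"event\":\"test\"}" "http://127.0.0.1:4151/pub?topic=events"'
```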