I have two clusters on GKE, each with a 6-node setup. I just deployed a Vitess Helm chart on both clusters. A Vitess operator is also deployed, and etcd is deployed with clusterWide: true.
My question is: how do I connect these two separate Vitess deployments so that they work as one, as in this demo: https://www.youtube.com/watch?v=-Hz6LFJu1cY
I want to make my application available on the AWS Marketplace. My application is composed of an EKS cluster plus multiple Helm charts used to deploy my microservices (using Argo CD at the moment).
I am not sure which delivery method to use. Should I use a CloudFormation stack to deploy the EKS infrastructure and the Helm charts?
Should I use a Lambda function to call my application deployment service once the initial CloudFormation stack is deployed (EKS cluster created)?
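One common pattern for the Lambda idea is a Lambda-backed CloudFormation custom resource: the template creates the EKS cluster first, the custom resource depends on it, and the function then kicks off the deployment service. A minimal sketch in Python; the property names and the idea of notifying a deployment service are hypothetical placeholders, though the event shape (RequestType, ResourceProperties, ResponseURL) is the standard custom-resource contract:

```python
def handler(event, context):
    """Entry point for a CloudFormation custom resource Lambda.

    CloudFormation invokes this once the resources the custom
    resource depends on (here, the EKS cluster) have been created.
    """
    response = {"Status": "SUCCESS", "Data": {}}
    if event.get("RequestType") == "Create":
        props = event.get("ResourceProperties", {})
        # Hypothetical: tell your deployment service (e.g. an Argo CD
        # webhook) which cluster to bootstrap.
        response["Data"]["ClusterName"] = props.get("ClusterName")
    # A real handler must also POST this response to
    # event["ResponseURL"], or the stack will hang waiting for it.
    return response
```

With this shape, the CloudFormation template owns the ordering: the custom resource's DependsOn points at the EKS cluster, so the Helm/Argo CD bootstrap only fires once the cluster exists.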
Is there an API that we can use to upscale/downscale the number of pods in AWS EKS?
I went through the documentation on horizontal pod autoscaling, but that doesn't fulfil my requirement: I want to build an API that scales the pods, and that approach focuses more on kubectl commands.
I was able to achieve this using the client-java-api offered by Kubernetes.
The listNamespacedDeployment method can be used to get the deployments and the pods belonging to them.
The replaceNamespacedDeployment method can be used to replace the specified deployment, upscaling or downscaling the number of pods.
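Those Java client methods wrap the Kubernetes REST API, so the same scaling can be done from any language by PATCHing the deployment's scale subresource. A minimal sketch in Python using only the standard library; the API-server address, namespace, and deployment name are placeholders:

```python
import json

def scale_url(api_server, namespace, deployment):
    """REST path of the scale subresource for a Deployment."""
    return (f"{api_server}/apis/apps/v1/namespaces/{namespace}"
            f"/deployments/{deployment}/scale")

def scale_body(replicas):
    """Merge-patch body that sets the desired replica count."""
    return json.dumps({"spec": {"replicas": replicas}})

# Sending this as an HTTP PATCH with header
# Content-Type: application/merge-patch+json (plus a bearer token
# for authentication) upscales or downscales the pods.
print(scale_url("https://my-cluster-endpoint", "default", "web"))
# → https://my-cluster-endpoint/apis/apps/v1/namespaces/default/deployments/web/scale
print(scale_body(5))
```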
I have an Elastic Kubernetes Service (EKS) cluster running in AWS with many services and pods. I want to use AppDynamics to monitor these services and pods. I am new to AppDynamics, so I don't know much about it, and I am confused in a few areas:
What performance metrics (CPU usage, number of instances, ...) should I use to monitor the cluster?
How can I monitor the cluster, and how do I set up AWS with AppDynamics to monitor everything?
The Cluster Agent is used for monitoring AWS EKS; additionally, the Cluster Agent Operator can be used to set up extra infrastructure/network monitoring.
Compatibility: https://docs.appdynamics.com/21.9/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/cluster-agent-requirements-and-supported-environments
Install (Cluster Agent): https://docs.appdynamics.com/21.9/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/install-the-cluster-agent
(You will need to grab / build an image and then install using Kubernetes CLI or the Cluster Agent Helm Chart)
Install (Infra Agent / Network Visibility - requires the Cluster Agent): https://docs.appdynamics.com/21.9/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/install-the-cluster-agent/install-infrastructure-visibility-with-the-cluster-agent-operator
Metrics: https://docs.appdynamics.com/21.9/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/use-the-cluster-agent/monitor-cluster-health
Which metrics to actively monitor is a bit subjective, but there are plenty of guides around to help, e.g.:
https://www.kubermatic.com/blog/the-complete-guide-to-kubernetes-metrics/
https://sematext.com/blog/kubernetes-metrics/
Does AWS Elastic Kubernetes Service have the same concept as Google Kubernetes Engine apps via a marketplace? I'm looking to deploy RabbitMQ and previously accomplished this on GKE.
Otherwise, it looks like there's a Helm chart, or I can do this manually via the container image on Docker Hub.
No, you need to deploy it on your own.
AWS does have a marketplace where you can use various AMIs or deploy a certified Bitnami RabbitMQ image. You can see that for yourself here: https://aws.amazon.com/marketplace
The downside is that this isn't available as a managed offering for AWS EKS, so you will have to install and maintain it yourself. That could look like using the stable/rabbitmq-ha chart with anti-affinity across AZs, quorum queues, and EBS-backed persistence.
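The anti-affinity across AZs mentioned above corresponds to a pod anti-affinity rule in the broker pods' template. A sketch of the raw Kubernetes form (the app label value is illustrative; the chart exposes equivalent values):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: rabbitmq-ha
        # Spread brokers across availability zones, not just nodes.
        topologyKey: topology.kubernetes.io/zone
```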
Learn more about helm here:
https://helm.sh/docs/intro/using_helm/
Learn more about the rabbitmq helm chart here: https://hub.helm.sh/charts/stable/rabbitmq-ha
This happens while trying to create a VPC-native GKE cluster. Per the documentation here, the command to do this is:
gcloud container clusters create [CLUSTER_NAME] --enable-ip-alias
However, this command gives the error below.
ERROR: (gcloud.container.clusters.create) Only alpha clusters (--enable_kubernetes_alpha) can use --enable-ip-alias
The command does work when the option --enable_kubernetes_alpha is added, but it then prints another message.
This will create a cluster with all Kubernetes Alpha features enabled.
- This cluster will not be covered by the Container Engine SLA and
should not be used for production workloads.
- You will not be able to upgrade the master or nodes.
- The cluster will be deleted after 30 days.
Edit: The test was done in zone asia-south1-c
My questions are:
Is VPC-Native cluster production ready?
If yes, what is the correct way to create a production ready cluster?
If VPC-Native cluster is not production ready, what is the way to connect privately from a GKE cluster to another GCP service (like Cloud SQL)?
Your command seems correct. It looks like something is going wrong during the creation of the cluster in your project. Are you using any flags other than the ones in the command you posted?
When I set my Google Cloud Shell to region europe-west1, the cluster deploys error-free, using 1.11.6-gke.2 (the default).
You could try to create the cluster manually using the GUI instead of the gcloud command. While creating the cluster, check the “Enable VPC-native (using alias IP)” option. Try using the newest non-alpha version of GKE if one shows up for you.
The public documentation you posted on GKE IP aliasing, together with the GKE projects.locations.clusters API, shows this to be in GA. All signs point to this being production-ready. For whatever it's worth, the feature was announced last May on the Google Cloud blog.
What you can try is updating your version of the Google Cloud SDK. This will bring everything up to the latest release and remove alpha messages for features that are now in GA.
$ gcloud components update