I am using Helm to install and configure the AWS Load Balancer (ALB) ingress controller for EKS Fargate. Everything works as expected, but I want to increase the default CPU and memory resources; the default values are 0.25 vCPU and 0.5 GB.
I am using this command:
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=CLUSTERNAME \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set vpcId=vpc-XXXXXXXX \
--set replicaCount=1
How can I set the resources config using --set resources=...?
https://artifacthub.io/packages/helm/aws/aws-load-balancer-controller
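For reference (my own addition, not from the original post): the chart appears to expose a resources block in its values.yaml, so the usual nested --set syntax should work. The values below are only examples, and on Fargate the pod size is derived from the resource requests (rounded up to the nearest Fargate configuration), so check the chart's values.yaml and your Fargate profile before relying on them.
helm upgrade aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --reuse-values \
  --set resources.requests.cpu=500m \
  --set resources.requests.memory=1Gi \
  --set resources.limits.cpu=1 \
  --set resources.limits.memory=1Gi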
Related
Hello and thank you in advance!
I have the following issue:
I'm trying to install Prometheus on AWS EKS using Helm, but I want to be able to configure the AWS ELB to be private and reachable only from inside my VPC (by default it is created as a public LoadBalancer with an FQDN).
When I execute the following:
helm install stable/prometheus --name prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2" \
--set server.service.type=LoadBalancer \
--set server.service.annotations{0}="service.beta.kubernetes.io/aws-load-balancer-internal":"0.0.0.0/0"
It creates a standard LoadBalancer service with no annotations included:
$ kubectl describe service/prometheus-server -n=prometheus
Name: prometheus-server
Namespace: prometheus
Labels: app=prometheus
chart=prometheus-11.7.0
component=server
heritage=Tiller
release=prometheus
Annotations: <none>
Selector: app=prometheus,component=server,release=prometheus
Type: LoadBalancer
IP: 10.100.255.81
I was playing around with quotes and other possible syntax variations but no luck. Please advise on the proper annotation usage.
It's kind of tricky, but you can do it like this:
helm install stable/prometheus --name prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2" \
--set server.service.type=LoadBalancer \
--set server.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-internal"="0.0.0.0/0"
You can see the format and limitations of --set in the Helm docs. For example,
--set nodeSelector."kubernetes\.io/role"=master
becomes:
nodeSelector:
  kubernetes.io/role: master
✌️
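As an aside (my own addition, not from the original answer), you can avoid the dot-escaping entirely by putting the annotation in a values file and passing it with -f; a minimal sketch, assuming a hypothetical file named internal-elb-values.yaml:
# internal-elb-values.yaml
server:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
Then:
helm install stable/prometheus --name prometheus \
  --namespace prometheus \
  --set alertmanager.persistentVolume.storageClass="gp2" \
  --set server.persistentVolume.storageClass="gp2" \
  -f internal-elb-values.yaml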
As you know, installing Istio creates a Kubernetes LoadBalancer with a public IP and uses that public IP as the External IP of the istio-ingressgateway LoadBalancer service. As the IP is not static, I created a static public IP in Azure in the same resource group as AKS; I found the resource group name as below:
$ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip
I downloaded the installation file with the following command:
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.4.2 sh -
I tried to re-install Istio with the following command:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set grafana.enabled=true --set prometheus.enabled=true --set tracing.enabled=true --set kiali.enabled=true --set gateways.istio-ingressgateway.loadBalancerIP= my-static-public-ip | kubectl apply -f -
However it didn't work; I still got a dynamic IP. So I tried to set my static public IP in the files istio-demo.yaml and istio-demo-auth.yaml by adding the load balancer IP under istio-ingressgateway:
spec:
  type: LoadBalancer
  loadBalancerIP: my-staticPublicIP
And also in the file values-istio-gateways.yaml:
loadBalancerIP: "mystaticPublicIP"
externalIPs: ["mystaticPublicIP"]
I then re-installed Istio using the Helm command mentioned above. This time it added mystaticPublicIP as one of the External IPs of the istio-ingressgateway LoadBalancer service, so now it has both the dynamic IP and mystaticPublicIP.
That doesn't seem like the right way to do it.
I went through the relevant questions on this site and also googled, but none of them helped.
I'm wondering if anyone knows how to make this work?
I can successfully assign the static public IP to the Istio gateway service with the following command:
helm template install/kubernetes/helm --name istio --namespace istio-system --set gateways.istio-ingressgateway.loadBalancerIP=my-static-public-ip | kubectl apply -f -
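As a follow-up check (my own addition), you can confirm which external IP the gateway ended up with:
kubectl -n istio-system get svc istio-ingressgateway
The EXTERNAL-IP column should show the static public IP once Azure finishes provisioning the load balancer.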
I have an existing GKE cluster with the Istio addon installed, e.g.:
gcloud beta container clusters create istio-demo \
--addons=Istio --istio-config=auth=MTLS_PERMISSIVE \
--cluster-version=[cluster-version] \
--machine-type=n1-standard-2 \
--num-nodes=4
I am following this guide to install cert-manager in order to automatically provision TLS certificates from Let's Encrypt. According to the guide, Istio needs SDS enabled, which can be done at installation time:
helm install istio.io/istio \
--name istio \
--namespace istio-system \
--set gateways.istio-ingressgateway.sds.enabled=true
As I already have Istio installed via GKE, how can I enable SDS on the existing cluster? Alternatively, is it possible to use the gcloud CLI to enable SDS at the point of cluster creation?
Managed Istio, by design, will revert any custom configuration and disable SDS again. So, IMHO, it is not a useful scenario. You can enable SDS manually following this guide, but keep in mind that the configuration will remain active for only 2-3 minutes before it is reverted.
Currently, GKE doesn't support enabling SDS when creating a cluster from scratch. For GKE managed Istio, Google is looking to add the ability to enable SDS on GKE clusters, but there is no ETA yet for that release.
However, if you use non-managed (open source) Istio, the SDS feature is on the Istio roadmap; I think it should be available in version 1.2, but that is not guaranteed.
Even though the default ingress gateway created by Istio on GKE currently doesn't support SDS, you can add your own extra ingress gateway manually.
You could take the manifest of the default istio-ingressgateway deployment and service in your istio-system namespace, modify it to add the SDS container, change the name, and apply it to your cluster, but that is a little tedious. There is a simpler way:
First download the open-source helm chart of istio (choose a version that works with your Istio on GKE version, in my case my Istio on GKE is 1.1.3 and I downloaded open-source istio 1.1.17 and it works):
curl -O https://storage.googleapis.com/istio-release/releases/1.1.17/charts/istio-1.1.17.tgz
# extract under current working directory
tar xzvf istio-1.1.17.tgz
Then render the helm template for only the ingressgateway component:
helm template istio/ --name istio \
--namespace istio-system \
-x charts/gateways/templates/deployment.yaml \
-x charts/gateways/templates/service.yaml \
--set gateways.istio-egressgateway.enabled=false \
--set gateways.istio-ingressgateway.sds.enabled=true > istio-ingressgateway.yaml
Then manually modify the rendered istio-ingressgateway.yaml file with the following changes:
Change the metadata.name for both the deployment and the service to something else, like istio-ingressgateway-sds
Change the metadata.labels.istio for both the deployment and the service to something else, like ingressgateway-sds
Change the spec.template.metadata.labels for the deployment similarly, to ingressgateway-sds
Change the spec.selector.istio for the service to the same value, ingressgateway-sds
Then apply the yaml file to your GKE cluster:
kubectl apply -f istio-ingressgateway.yaml
Voila! You now have your own Istio ingressgateway with SDS, and you can get its load balancer IP with:
kubectl -n istio-system get svc istio-ingressgateway-sds
To have your Gateway use the correct SDS-enabled ingressgateway, you need to set spec.selector.istio to match the label you set. Below is an example of a Gateway resource using a Kubernetes secret as the TLS cert:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-test
spec:
  selector:
    istio: ingressgateway-sds
  servers:
  - hosts:
    - '*.example.com'
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - '*.example.com'
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: example-com-cert
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
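For completeness (my own addition, not covered in the original answer): the credentialName above refers to a secret in the istio-system namespace that the gateway's SDS agent reads. For Istio 1.1-era SDS the docs use a generic secret with cert and key entries (newer releases also accept standard TLS secrets), so check the docs for your version. A sketch, assuming hypothetical example.com.crt and example.com.key files:
kubectl create -n istio-system secret generic example-com-cert \
  --from-file=key=example.com.key \
  --from-file=cert=example.com.crt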
Per Carlos' answer, I decided not to use the Istio on GKE addon as there is very limited customization available when using Istio as a managed service.
I created a standard GKE cluster...
gcloud beta container clusters create istio-demo \
--cluster-version=[cluster-version] \
--machine-type=n1-standard-2 \
--num-nodes=4
And then manually installed Istio...
Create the namespace:
kubectl create namespace istio-system
Install the Istio CRDs:
helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
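As a quick sanity check (my own addition, not part of the original steps), verify that the CRDs were registered; the expected count varies by Istio release, so compare with the install docs for your version:
kubectl get crds | grep 'istio.io' | wc -l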
Install Istio using the default configuration profile with my necessary customizations:
helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
--set gateways.enabled=true \
--set gateways.istio-ingressgateway.enabled=true \
--set gateways.istio-ingressgateway.sds.enabled=true \
--set gateways.istio-ingressgateway.externalTrafficPolicy="Local" \
--set global.proxy.accessLogFile="/dev/stdout" \
--set global.proxy.accessLogEncoding="TEXT" \
--set grafana.enabled=true \
--set kiali.enabled=true \
--set prometheus.enabled=true \
--set tracing.enabled=true \
| kubectl apply -f -
Enable Istio sidecar injection on the default namespace:
kubectl label namespace default istio-injection=enabled
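To confirm the label took effect (my own addition):
kubectl get namespace -L istio-injection
The default namespace should show istio-injection=enabled in the ISTIO-INJECTION column.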
I have written a small script which creates a VPC, a firewall rule, and an instance, and I pass parameters to the script. But instead of taking the parameter I pass for the firewall name, it puts the instance name (plus a digit) in the firewall name field.
INSTANCE_NAME=$1
ZONE=$2
MACHINE_TYPE=$3
IMAGE_FAMILY=$4
IMAGE_PROJECT=$5
BOOT_DISK_SIZE=$6
BOOT_DISK_TYPE=$7
NETWORK_NAME=$8
FIREWALL_RULE=$9
FIREWALL_NAME=$10
TAGS=$11
gcloud compute networks create $NETWORK_NAME --subnet-mode=auto
gcloud compute firewall-rules create $FIREWALL_NAME --network=$NETWORK_NAME --allow=$FIREWALL_RULE --source-tags=$TAGS
gcloud compute instances create $INSTANCE_NAME \
--zone=$ZONE \
--machine-type=$MACHINE_TYPE \
--image-family=$IMAGE_FAMILY \
--image-project=$IMAGE_PROJECT \
--boot-disk-size=$BOOT_DISK_SIZE \
--boot-disk-type=$BOOT_DISK_TYPE \
--network-interface network=$NETWORK_NAME,no-address \
--tags=$TAGS
Command: bash network.sh myvm us-west1-a f1-micro ubuntu-1810 ubuntu-os-cloud 10 pd-ssd mynetwork tcp:80 myrule mytag
output :
Created .
NAME SUBNET_MODE BGP_ROUTING_MODE IPV4_RANGE GATEWAY_IPV4
mynetwork AUTO REGIONAL
Instances on this network will not be reachable until firewall rules
are created. As an example, you can allow all internal traffic between
instances as well as SSH, RDP, and ICMP by running:
$ gcloud compute firewall-rules create <FIREWALL_NAME> --network mynetwork --allow tcp,udp,icmp --source-ranges <IP_RANGE>
$ gcloud compute firewall-rules create <FIREWALL_NAME> --network mynetwork --allow tcp:22,tcp:3389,icmp
Creating firewall...⠛Created
Creating firewall...done.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
myvm0 mynetwork INGRESS 1000 tcp:80 False
Created.
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
myvm us-west1-a f1-micro 10.138.0.2 RUNNING
Please check the name of the firewall that was created (below 'Creating firewall...done.'). It's not what I provided in the command; it looks like the INSTANCE_NAME variable instead.
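A likely explanation, added here since the thread doesn't include one: in bash, positional parameters beyond $9 must be wrapped in braces. $10 is parsed as ${1} followed by a literal 0, which is why the firewall ends up named myvm0, and $11 is parsed as ${1}1. A sketch of the corrected assignments:
# positional parameters above $9 need braces, otherwise $10 expands to ${1} plus a literal 0
FIREWALL_NAME=${10}
TAGS=${11}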
I have been following this guide to deploy Pega 7.4 on Google Cloud Compute Engine. Everything went smoothly; however, the Load Balancer health check keeps reporting the service as unhealthy.
When visiting the external IP a 502 is returned, and while troubleshooting, GCP told us to "Make sure that your backend is healthy and supports HTTP/2 protocol". Well, the guide uses this command:
gcloud compute backend-services create pega-app \
--health-checks=pega-health \
--port-name=pega-web \
--session-affinity=GENERATED_COOKIE \
--protocol=HTTP --global
The protocol is HTTP but is this the same as HTTP/2?
What else could be wrong besides checking that the firewall setup allows the health checker and load balancer to pass through (below)?
gcloud compute firewall-rules create pega-internal \
--description="Pega node to node communication requirements" \
--action=ALLOW \
--rules=tcp:9300-9399,tcp:5701-5800 \
--source-tags=pega-app \
--target-tags=pega-app
gcloud compute firewall-rules create pega-web-external \
--description="Pega external web ports" \
--action=ALLOW \
--rules=tcp:8080,tcp:8443 \
--source-ranges=130.211.0.0/22,35.191.0.0/16 \
--target-tags=pega-app
Edit:
So the instance group has a named port on 8080:
gcloud compute instance-groups managed set-named-ports pega-app \
--named-ports=pega-web:8080 \
--region=${REGION}
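As a side note (my own addition; assuming a reasonably recent gcloud), you can confirm the named port is actually set on the regional instance group:
gcloud compute instance-groups get-named-ports pega-app --region=${REGION}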
And the health check config:
gcloud compute health-checks create http pega-health \
--request-path=/prweb/PRRestService/monitor/pingservice/ping \
--port=8080
I have checked the VM instance logs on the pega-app instances and I am getting a 404 when trying to hit the ping service.
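For reference (my own addition, using the request path from the health check above), the 404 can be reproduced directly from one of the VMs, which would point at the application rather than the load balancer:
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/prweb/PRRestService/monitor/pingservice/ping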
My problem was that I was supposed to configure a static IP address (and point a DNS record at it) like this: gcloud compute addresses create pega-app --global. I skipped this step, so ephemeral IP addresses were generated each time the instances had to boot up.
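For completeness (my own addition), reserving the global address and then looking up its value would look like this, reusing the pega-app name from above:
gcloud compute addresses create pega-app --global
gcloud compute addresses describe pega-app --global --format='get(address)'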