I am trying to configure request routing with Istio and ingress-nginx, but I'm not able to route the requests properly. Basically, I have two deployments, each as a different subset, and a weighted VirtualService.
The Kiali dashboard shows the requests being routed from the ingress controller to PassthroughCluster, even though I can see the correct route mapping with the istioctl proxy-config routes command.
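For reference, this is roughly how I am inspecting the routes on the ingress controller's sidecar (the pod and namespace names here are placeholders, not my real ones):

istioctl proxy-config routes <ingress-nginx-pod> -n <ingress-nginx-namespace>
istioctl proxy-config routes <ingress-nginx-pod> -n <ingress-nginx-namespace> -o json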
Here is the complete configuration:
apiVersion: v1
kind: ServiceAccount
metadata:
name: dummy-app
namespace: my-namespace
---
apiVersion: v1
kind: Service
metadata:
name: dummy-app
namespace: my-namespace
labels:
app: dummy-app
service: dummy-app
spec:
ports:
- port: 8080
targetPort: 8080
name: http-web
selector:
app: dummy-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dummy-app-1
namespace: my-namespace
spec:
replicas: 1
selector:
matchLabels:
app: dummy-app
version: v1
template:
metadata:
annotations:
sidecar.istio.io/inject: "true"
labels:
app: dummy-app
version: v1
spec:
serviceAccountName: dummy-app
containers:
- image: my-img
imagePullPolicy: IfNotPresent
name: dummy-app
env:
- name: X_HTTP_ENV
value: dummy-app-1
ports:
- containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dummy-app-2
namespace: my-namespace
spec:
replicas: 1
selector:
matchLabels:
app: dummy-app
version: v2
template:
metadata:
annotations:
sidecar.istio.io/inject: "true"
labels:
app: dummy-app
version: v2
spec:
serviceAccountName: dummy-app
containers:
- image: my-img
imagePullPolicy: IfNotPresent
name: dummy-app
env:
- name: X_HTTP_ENV
value: dummy-app-2
ports:
- containerPort: 8080
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: dummy-app
namespace: my-namespace
spec:
host: dummy-app
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: dummy-app
namespace: my-namespace
spec:
hosts:
- dummy-app.my-namespace.svc.cluster.local
http:
- match:
- uri:
prefix: "/my-route"
route:
- destination:
host: dummy-app.my-namespace.svc.cluster.local
subset: v1
weight: 0
- destination:
host: dummy-app.my-namespace.svc.cluster.local
subset: v2
weight: 100
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "my-ingress-class"
nginx.ingress.kubernetes.io/service-upstream: "true"
nginx.ingress.kubernetes.io/upstream-vhost: dummy-app.my-namespace.svc.cluster.local
name: dummy-ingress
namespace: my-namespace
spec:
rules:
- host: myapp.com
http:
paths:
- backend:
service:
name: dummy-app
port:
number: 8080
path: /my-route(.*)
pathType: ImplementationSpecific
The weird thing is that I have other applications working in the same namespace and using the same ingress controller, so I don't think the problem is with the ingress-nginx setup.
Istio version:
client version: 1.11.4
control plane version: 1.11.4
data plane version: 1.11.4 (13 proxies), 1.12-dev (15 proxies)
Any ideas on what the configuration problem is, or how I can better debug this kind of issue in Istio?
The main issue seems to be with the ingress-nginx resource. Based on the above Ingress definition, you are bypassing the Istio ingress gateway (where all the proxying rules, such as the Gateway, VirtualService and DestinationRule, are applied) and pushing traffic from the ingress directly to the application Service. For the Istio routing rules to take effect, you should let traffic pass through istio-ingressgateway (a Service in the istio-system namespace). So the following change should be made to your Ingress resource:
rules:
- host: myapp.com
http:
paths:
- backend:
service:
name: istio-ingressgateway.istio-system
port:
number: 80
path: /my-route(.*)
pathType: ImplementationSpecific
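Also note that once traffic reaches istio-ingressgateway, it is only routed if a Gateway resource accepts the incoming host and the VirtualService is bound to that gateway through its gateways field. A minimal sketch of what that could look like (the gateway name is illustrative, and the hosts entry has to match whatever Host header actually reaches the gateway, e.g. myapp.com or the value of your upstream-vhost annotation):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: dummy-app-gateway
  namespace: my-namespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

and in the VirtualService spec, in addition to the matching host:

  gateways:
  - dummy-app-gateway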
Due to security requirements, I need to install Istio offline on an AWS EKS cluster. I could not find any clear documentation or recommendations for offline installation, so I used istioctl manifest generate > generated-manifest.yml to save the installation manifests, replaced the 3 images (proxy, pilot, prometheus) with images hosted in my AWS ECR registry, and applied the file with kubectl. I should mention that istioctl manifest apply times out for some reason. I also created the istio-system namespace manually before the installation.
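For reference, instead of hand-editing the generated manifest, the registry can also be overridden at generation time with an IstioOperator overlay; a minimal sketch, assuming the images are pushed to my ECR registry under their usual names (the overlay file name is made up):

# ecr-overlay.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  hub: 777777777777.dkr.ecr.eu-west-2.amazonaws.com
  tag: 1.6.8

istioctl manifest generate -f ecr-overlay.yaml > generated-manifest.yml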
I have used these 2 annotations for an internal Network Load Balancer:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
The installation creates the CRDs, the pods and the services. Here are my questions, if someone can point me in the right direction:
Where can I find some proper documentation for offline installation?
There is no istio-egressgateway with the default profile. Is the egress gateway not needed if I need to expose an application?
The NLB is created with 4 listeners (80, 443, 15443, 15021), but I can connect only to 15021; the others refuse the connection. Accordingly, only the 15021 target group is healthy; the other 3 are not.
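For what it's worth, this is how I am testing the listeners, using the NLB hostname from the service output below:

curl -v http://aaaa-bbb.elb.eu-west-2.amazonaws.com:15021/healthz/ready   # succeeds
curl -v http://aaaa-bbb.elb.eu-west-2.amazonaws.com:80/                   # connection refused (443 and 15443 behave the same)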
Adding some of the relevant output and manifests below:
kubectl describe service/istio-ingressgateway -n istio-system
------
Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
istio=ingressgateway
release=istio
Annotations: service.beta.kubernetes.io/aws-load-balancer-internal: true
service.beta.kubernetes.io/aws-load-balancer-type: nlb
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.20.212.217
IPs: 172.20.212.217
LoadBalancer Ingress: aaaa-bbb.elb.eu-west-2.amazonaws.com
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 32094/TCP
Endpoints: 10.0.35.41:15021
Port: http2 80/TCP
TargetPort: 80/TCP
NodePort: http2 30890/TCP
Endpoints: 10.0.35.41:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32039/TCP
Endpoints: 10.0.35.41:443
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 30690/TCP
Endpoints: 10.0.35.41:15443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
sh-4.2$ kubectl describe service/istiod -n istio-system
Name: istiod
Namespace: istio-system
Labels: app=istiod
istio=pilot
istio.io/rev=default
release=istio
Annotations: <none>
Selector: app=istiod,istio=pilot
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.20.25.159
IPs: 172.20.25.159
Port: grpc-xds 15010/TCP
TargetPort: 15010/TCP
Endpoints: 10.0.43.252:15010
Port: https-dns 15012/TCP
TargetPort: 15012/TCP
Endpoints: 10.0.43.252:15012
Port: https-webhook 443/TCP
TargetPort: 15017/TCP
Endpoints: 10.0.43.252:15017
Port: http-monitoring 15014/TCP
TargetPort: 15014/TCP
Endpoints: 10.0.43.252:15014
Port: dns-tls 853/TCP
TargetPort: 15053/TCP
Endpoints: 10.0.43.252:15053
Session Affinity: None
Events: <none>
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: adapters.config.istio.io
labels:
app: mixer
package: adapter
istio: mixer-adapter
chart: istio
heritage: Tiller
release: istio
annotations:
"helm.sh/resource-policy": keep
spec:
group: config.istio.io
names:
kind: adapter
plural: adapters
singular: adapter
categories:
- istio-io
- policy-istio-io
scope: Namespaced
subresources:
status: {}
versions:
- name: v1alpha2
served: true
storage: true
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: templates.config.istio.io
labels:
app: mixer
package: template
istio: mixer-template
chart: istio
heritage: Tiller
release: istio
annotations:
"helm.sh/resource-policy": keep
spec:
group: config.istio.io
names:
kind: template
plural: templates
singular: template
categories:
- istio-io
- policy-istio-io
scope: Namespaced
subresources:
status: {}
versions:
- name: v1alpha2
served: true
storage: true
---
# Cni component is disabled.
# EgressGateways istio-egressgateway component is disabled.
# Resources for IngressGateways component
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
name: istio-ingressgateway
namespace: istio-system
spec:
maxReplicas: 5
metrics:
- resource:
name: cpu
targetAverageUtilization: 80
type: Resource
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: istio-ingressgateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
name: istio-ingressgateway
namespace: istio-system
spec:
selector:
matchLabels:
app: istio-ingressgateway
istio: ingressgateway
strategy:
rollingUpdate:
maxSurge: 100%
maxUnavailable: 25%
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
labels:
app: istio-ingressgateway
chart: gateways
heritage: Tiller
istio: ingressgateway
release: istio
service.istio.io/canonical-name: istio-ingressgateway
service.istio.io/canonical-revision: latest
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
weight: 2
- preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
weight: 2
- preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
weight: 2
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
containers:
- args:
- proxy
- router
- --domain
- $(POD_NAMESPACE).svc.cluster.local
- --proxyLogLevel=warning
- --proxyComponentLogLevel=misc:error
- --log_output_level=default:info
- --serviceCluster
- istio-ingressgateway
- --trust-domain=cluster.local
env:
- name: JWT_POLICY
value: third-party-jwt
- name: PILOT_CERT_PROVIDER
value: istiod
- name: CA_ADDR
value: istiod.istio-system.svc:15012
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.hostIP
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: CANONICAL_SERVICE
valueFrom:
fieldRef:
fieldPath: metadata.labels['service.istio.io/canonical-name']
- name: CANONICAL_REVISION
valueFrom:
fieldRef:
fieldPath: metadata.labels['service.istio.io/canonical-revision']
- name: ISTIO_META_WORKLOAD_NAME
value: istio-ingressgateway
- name: ISTIO_META_OWNER
value: kubernetes://apis/apps/v1/namespaces/istio-system/deployments/istio-ingressgateway
- name: ISTIO_META_MESH_ID
value: cluster.local
- name: ISTIO_META_ROUTER_MODE
value: sni-dnat
- name: ISTIO_META_CLUSTER_ID
value: Kubernetes
image: 777777777777.dkr.ecr.eu-west-2.amazonaws.com/istio-proxyv2:1.6.8
name: istio-proxy
ports:
- containerPort: 15021
- containerPort: 8080
- containerPort: 8443
- containerPort: 15443
- containerPort: 15011
- containerPort: 15012
- containerPort: 8060
- containerPort: 853
- containerPort: 15090
name: http-envoy-prom
protocol: TCP
readinessProbe:
failureThreshold: 30
httpGet:
path: /healthz/ready
port: 15021
scheme: HTTP
initialDelaySeconds: 1
periodSeconds: 2
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: 2000m
memory: 1024Mi
requests:
cpu: 100m
memory: 128Mi
volumeMounts:
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /etc/istio/config
name: config-volume
- mountPath: /var/run/secrets/istio
name: istiod-ca-cert
- mountPath: /var/run/secrets/tokens
name: istio-token
readOnly: true
- mountPath: /var/run/ingress_gateway
name: ingressgatewaysdsudspath
- mountPath: /etc/istio/pod
name: podinfo
- mountPath: /etc/istio/ingressgateway-certs
name: ingressgateway-certs
readOnly: true
- mountPath: /etc/istio/ingressgateway-ca-certs
name: ingressgateway-ca-certs
readOnly: true
serviceAccountName: istio-ingressgateway-service-account
volumes:
- configMap:
name: istio-ca-root-cert
name: istiod-ca-cert
- downwardAPI:
items:
- fieldRef:
fieldPath: metadata.labels
path: labels
- fieldRef:
fieldPath: metadata.annotations
path: annotations
name: podinfo
- emptyDir: {}
name: istio-envoy
- emptyDir: {}
name: ingressgatewaysdsudspath
- name: istio-token
projected:
sources:
- serviceAccountToken:
audience: istio-ca
expirationSeconds: 43200
path: istio-token
- configMap:
name: istio
optional: true
name: config-volume
- name: ingressgateway-certs
secret:
optional: true
secretName: istio-ingressgateway-certs
- name: ingressgateway-ca-certs
secret:
optional: true
secretName: istio-ingressgateway-ca-certs
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: istio-ingressgateway
namespace: istio-system
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
spec:
minAvailable: 1
selector:
matchLabels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: istio-ingressgateway-sds
namespace: istio-system
labels:
release: istio
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: istio-ingressgateway-sds
namespace: istio-system
labels:
release: istio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: istio-ingressgateway-sds
subjects:
- kind: ServiceAccount
name: istio-ingressgateway-service-account
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
name: istio-ingressgateway
namespace: istio-system
spec:
ports:
- name: status-port
port: 15021
targetPort: 15021
- name: http2
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 443
- name: tls
port: 15443
targetPort: 15443
selector:
app: istio-ingressgateway
istio: ingressgateway
type: LoadBalancer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-ingressgateway-service-account
namespace: istio-system
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
---
# IstiodRemote component is disabled.
# Resources for Pilot component
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: istiod
namespace: istio-system
labels:
app: istiod
release: istio
istio.io/rev: default
spec:
maxReplicas: 5
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: istiod
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 80
---
I am seeing "Zonal network endpoint group unhealthy" after configuring an Ingress with a managed certificate in GCP via:
# kubernetes/backstage.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: backstage
namespace: backstage
spec:
replicas: 1
selector:
matchLabels:
app: backstage
template:
metadata:
labels:
app: backstage
spec:
containers:
- name: backstage
image: australia-southeast1-docker.pkg.dev/acme-dev-tooling/acme-docker/backstage:prd-v.0.35
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 7007
envFrom:
- secretRef:
name: postgres-secrets
- secretRef:
name: backstage-secrets
---
apiVersion: v1
kind: Service
metadata:
name: backstage
namespace: backstage
annotations:
cloud.google.com/backend-config: '{"default": "backstage-ingress-backendconfig"}'
spec:
selector:
app: backstage
ports:
- name: http
protocol: TCP
port: 80
type: NodePort
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: backstage-ingress-backendconfig
spec:
healthCheck:
checkIntervalSec: 15
type: HTTP
requestPath: /
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: tools-managed-cert-backstage
namespace: backstage
spec:
domains:
- tools.backstage.acme-uat.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: backstage-ingress
namespace: backstage
annotations:
kubernetes.io/ingress.global-static-ip-name: "tools-backstage-external-ip"
networking.gke.io/managed-certificates: tools-managed-cert-backstage
kubernetes.io/ingress.class: "gce"
spec:
defaultBackend:
service:
name: backstage
port:
number: 80
---
apiVersion: v1
kind: Namespace
metadata:
name: backstage
GCP provisions an L7 HTTPS load balancer, but it cannot reach the GKE cluster because the zonal network endpoint groups are unhealthy.
The ingress reads:
All Backends are in an UNHEALTHY state.
Is there something I am missing? Does the GKE ingress configure the firewall? I've looked at the rules; there are rules for 130.211.0.0/22 and 35.191.0.0/16, which are the health check source ranges.
logs/compute.googleapis.com%2Fhealthchecks yields no probe results, despite logging being enabled.
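For completeness, these are the commands I have been using to inspect the NEG and backend health directly (the backend service name is a placeholder taken from the console):

gcloud compute network-endpoint-groups list
gcloud compute backend-services list
gcloud compute backend-services get-health <backend-service-name> --global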
Any help would be much appreciated.
UPDATE: the above is fixed per the comments; the following still isn't working.
apiVersion: v1
kind: Service
metadata:
name: argocd-server
namespace: argocd
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: pulumi
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
argocd.argoproj.io/instance: argocd
helm.sh/chart: argo-cd-3.29.5
annotations:
cloud.google.com/backend-config: '{"default": "argocd-ingress-backendconfig"}'
cloud.google.com/neg: '{"ingress": true}'
cloud.google.com/neg-status: >-
{"network_endpoint_groups":{"80":"k8s1-20a3d3ad-argocd-argocd-server-80-c2ec22fa"},"zones":["australia-southeast1-a"]}
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/backend-config":"{\"default\":
\"argocd-ingress-backendconfig\"}","cloud.google.com/neg":"{\"ingress\":
true}"},"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"pulumi","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","argocd.argoproj.io/instance":"argocd","helm.sh/chart":"argo-cd-3.29.5"},"name":"argocd-server","namespace":"argocd"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":"http"}],"selector":{"app.kubernetes.io/instance":"argocd","app.kubernetes.io/name":"argocd-server"},"type":"ClusterIP"}}
status:
loadBalancer: {}
spec:
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080
- name: https
protocol: TCP
port: 443
targetPort: http
selector:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-server
clusterIP: 10.184.10.20
clusterIPs:
- 10.184.10.20
type: ClusterIP
sessionAffinity: None
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: argocd-server
namespace: argocd
uid: fee5f91c-b431-4b8c-ab10-64daa02ec729
resourceVersion: '108355'
generation: 3
creationTimestamp: '2022-01-20T00:06:05Z'
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: pulumi
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.2.2
helm.sh/chart: argo-cd-3.30.1
annotations:
deployment.kubernetes.io/revision: '3'
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"pulumi","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","app.kubernetes.io/version":"v2.2.2","helm.sh/chart":"argo-cd-3.30.1"},"name":"argocd-server","namespace":"argocd"},"spec":{"replicas":1,"revisionHistoryLimit":5,"selector":{"matchLabels":{"app.kubernetes.io/instance":"argocd","app.kubernetes.io/name":"argocd-server"}},"template":{"metadata":{"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","app.kubernetes.io/version":"v2.2.2","helm.sh/chart":"argo-cd-3.30.1"}},"spec":{"containers":[{"command":["argocd-server","--staticassets","/shared/app","--repo-server","argocd-repo-server:8081","--dex-server","http://argocd-dex-server:5556","--logformat","text","--loglevel","info","--redis","argocd-redis:6379"],"image":"quay.io/argoproj/argocd:v2.2.2","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":8080},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"server","ports":[{"containerPort":8080,"name":"server","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":8080},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{},"volumeMounts":[{"mountPath":"/app/config/ssh","name":"ssh-known-hosts"},{"mountPath":"/app/config/server/tls","name":"argocd-repo-server-tls"},{"mountPath":"/home/argocd","name":"plugins-home"},{"mountPath":"/tmp","name":"tmp-dir"}]}],"serviceAccountName":"argocd-server","volumes":[{"emptyDir":{},"name":"static-files"},{"emptyDir":{},"name":"tmp-dir"},{"configMap":{"name":"argocd-ssh-known-hosts-cm"},"name":"ssh-known-hosts"},{"name":"argocd-repo-server-tls","secret":{"items":[{"key":"tls.crt","path":"tls.crt"},{"key":"tls.key","path":"tls.key"},{"key":"ca.crt","path":"ca.crt"}],"optional":true,"secretName":"argocd-repo-server-tls"}},{"emptyDir":{},"name":"plugins-home"}]}}}}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-server
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.2.2
helm.sh/chart: argo-cd-3.30.1
annotations:
kubectl.kubernetes.io/restartedAt: '2022-01-20T15:44:27+11:00'
spec:
volumes:
- name: static-files
emptyDir: {}
- name: tmp-dir
emptyDir: {}
- name: ssh-known-hosts
configMap:
name: argocd-ssh-known-hosts-cm
defaultMode: 420
- name: argocd-repo-server-tls
secret:
secretName: argocd-repo-server-tls
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
- key: ca.crt
path: ca.crt
defaultMode: 420
optional: true
- name: plugins-home
emptyDir: {}
containers:
- name: server
image: quay.io/argoproj/argocd:v2.2.2
command:
- argocd-server
- '--staticassets'
- /shared/app
- '--repo-server'
- argocd-repo-server:8081
- '--dex-server'
- http://argocd-dex-server:5556
- '--logformat'
- text
- '--loglevel'
- info
- '--redis'
- argocd-redis:6379
ports:
- name: server
containerPort: 8080
protocol: TCP
resources: {}
volumeMounts:
- name: ssh-known-hosts
mountPath: /app/config/ssh
- name: argocd-repo-server-tls
mountPath: /app/config/server/tls
- name: plugins-home
mountPath: /home/argocd
- name: tmp-dir
mountPath: /tmp
livenessProbe:
httpGet:
path: /
port: 8080
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /
port: 8080
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: argocd-server
serviceAccount: argocd-server
securityContext: {}
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 5
progressDeadlineSeconds: 600
Cheers
# Here is a workaround for Google Cloud with ArgoCD v2.5.2
# cloudflare-key.yaml
---
apiVersion: v1
kind: Secret
metadata:
name: cloudflare-key
namespace: cert-manager
type: Opaque
stringData:
key: xxxxxxxxxxxxxxxx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
email: zia@mydomain.com
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-staging
solvers:
- selector: {}
dns01:
cloudflare:
email: zia@mydomain.com
apiKeySecretRef:
name: cloudflare-key
key: key
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/component: server
annotations:
cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"8080":{}}}'
beta.cloud.google.com/backend-config: '{"default": "argocd-backend-config"}'
name: argocd-server
spec:
ports:
- name: http8080
protocol: TCP
port: 8080
targetPort: 8080
- name: http
protocol: TCP
port: 80
targetPort: 8080
- name: https
protocol: TCP
port: 443
targetPort: 8080
selector:
app.kubernetes.io/name: argocd-server
---
# backendconfig.yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: argocd-backend-config
namespace: argocd
spec:
healthCheck:
checkIntervalSec: 30
timeoutSec: 10
healthyThreshold: 1
unhealthyThreshold: 5
type: HTTP
requestPath: /healthz
port: 8080
---
# FrontendConfig.yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
name: argocd-frontend-config
namespace: argocd
spec:
redirectToHttps:
enabled: true
---
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-server-ingress
namespace: argocd
annotations:
kubernetes.io/ingress.class: gce
cert-manager.io/cluster-issuer: letsencrypt-staging
kubernetes.io/tls-acme: "true"
kubernetes.io/ingress.global-static-ip-name: "argocd-dev"
networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
rules:
- host: argocd-dev.mydomain.com
http:
paths:
- backend:
service:
name: argocd-server
port:
name: http
path: "/"
pathType: Prefix
tls:
- hosts:
- argocd-dev.mydomain.com
secretName: argocd-secret #don't change, this is provided by ArgoCD
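A note on an assumption behind this workaround: the HTTP health check against /healthz on 8080 only passes if argocd-server is serving plain HTTP on that port. If it is still serving HTTPS there, one common way to switch it (not shown above) is the argocd-cmd-params-cm ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # assumption: argocd-server may run without its own TLS because the GCE LB terminates TLS
  server.insecure: "true"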
I'm trying to create a deployment on AWS EKS with my application and Metricbeat as a sidecar, so I have the following YAML:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-modules
namespace: testframework
labels:
k8s-app: metricbeat
data:
kubernetes.yml: |-
- module: kubernetes
metricsets:
- node
- system
- pod
- container
- volume
period: 10s
host: ${NODE_NAME}
hosts: [ "https://${NODE_IP}:10250" ]
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.verification_mode: "none"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-config
namespace: testframework
labels:
k8s-app: metricbeat
data:
metricbeat.yml: |-
processors:
- add_cloud_metadata:
- add_tags:
tags: ["EKSCORP_DEV"]
target: "cluster_test"
metricbeat.config.modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: false
output.elasticsearch:
index: "metricbeat-k8s-%{[agent.version]}-%{+yyyy.MM.dd}"
setup.template.name: "metricbeat-k8s"
setup.template.pattern: "metricbeat-k8s-*"
setup.ilm.enabled: false
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: testframework-initializr-deploy
namespace: testframework
spec:
replicas: 1
selector:
matchLabels:
app: testframework-initializr
template:
metadata:
labels:
app: testframework-initializr
annotations:
co.elastic.logs/enabled: 'true'
co.elastic.logs/json.keys_under_root: 'true'
co.elastic.logs/json.add_error_key: 'true'
co.elastic.logs/json.message_key: 'message'
spec:
containers:
- name: testframework-initializr
image: XXXXX.dkr.ecr.us-east-1.amazonaws.com/testframework-initializr
ports:
- containerPort: 8080
livenessProbe:
httpGet:
path: /health/liveness
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
timeoutSeconds: 60
failureThreshold: 5
readinessProbe:
httpGet:
port: 8080
path: /health
initialDelaySeconds: 300
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 3
- name: metricbeat-sidecar
image: docker.elastic.co/beats/metricbeat:7.12.0
args: [
"-c", "/etc/metricbeat.yml",
"-e",
"-system.hostfs=/hostfs"
]
env:
- name: ELASTIC_CLOUD_ID
value: xxxx
- name: ELASTIC_CLOUD_AUTH
value: xxxx
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
securityContext:
runAsUser: 0
volumeMounts:
- name: config
mountPath: /etc/metricbeat.yml
readOnly: true
subPath: metricbeat.yml
- name: modules
mountPath: /usr/share/metricbeat/modules.d
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0640
name: metricbeat-config
- name: modules
configMap:
defaultMode: 0640
name: metricbeat-modules
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prom-admin
rules:
- apiGroups: [""]
resources: ["pods", "nodes"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prom-rbac
subjects:
- kind: ServiceAccount
name: default
namespace: testframework
roleRef:
kind: ClusterRole
name: prom-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: testframework-initializr-service
namespace: testframework
spec:
type: NodePort
ports:
- port: 80
targetPort: 8080
selector:
app: testframework-initializr
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: testframework-initializr-ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: ip
spec:
rules:
- host: dev-initializr.test.net
http:
paths:
- backend:
serviceName: testframework-initializr-service
servicePort: 80
Well, after the pod starts up in AWS EKS, I get the following error in the Metricbeat container:
INFO module/wrapper.go:259 Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: error making http request: Get "https://IP_FROM_FARGATE_HERE:10250/stats/summary": dial tcp IP_FROM_FARGATE_HERE:10250: connect: connection refused
I tried to use NODE_NAME instead of NODE_IP, but then I got "no such host". Any idea how I can fix this?
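For clarity, the NODE_NAME variant I tried was essentially this one line in the kubernetes.yml module config, with everything else unchanged:

hosts: [ "https://${NODE_NAME}:10250" ]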
I have set up a Kubernetes cluster on AWS EC2 instances with 1 master node and 2 worker nodes using kops.
I deployed 2 services, and they are reachable in the browser through their LoadBalancer service type.
Now I have installed the NGINX ingress controller, but I am not able to hit my service through the load balancer IP; it returns a 504 GATEWAY_TIME_OUT error. I googled it with no success. Where am I going wrong? Here is my sample code... [AWS free account]
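For reference, this is roughly how I am calling the service through the NGINX load balancer (the hostname and app name are placeholders):

curl -v http://<nginx-lb-hostname>/<app-name>
# -> 504 GATEWAY_TIME_OUT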
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: $APP_NAME
name: $APP_NAME
namespace: $NAMESPACE
spec:
replicas: 1
selector:
matchLabels:
app: $APP_NAME
template:
metadata:
labels:
app: $APP_NAME
spec:
imagePullSecrets:
- name: $IMG_PULL_SECRET
containers:
- image: $IMAGE_REG/$APP_NAME:$IMAGE_TAG
name: $APP_NAME
imagePullPolicy: Always
ports:
- containerPort: ${CONTAINER_PORT}
protocol: TCP
env:
- name: spring.cloud.config.uri
value: 'http://config-server-service'
service.yml
apiVersion: v1
kind: Service
metadata:
labels:
app: $APP_NAME
name: $APP_NAME
namespace: $NAMESPACE
spec:
type: $SERVICE_TYPE
#type: $SERVICE_TYPE
ports:
- port: 80
targetPort: ${CONTAINER_PORT}
protocol: TCP
selector:
app: $APP_NAME
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ${APP_NAME}
namespace: $NAMESPACE
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-name: JSESSIONID
nginx.ingress.kubernetes.io/ssl-passthrough: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
kubernetes.io/ingress.class: "nginx"
kubernetes.io/ingress.allow-http: "true"
# kubernetes.io/ingress.global-static-ip-name: "my-gateway"
spec:
rules:
- http:
paths:
- path: /${APP_NAME}
backend:
serviceName: ${APP_NAME}
servicePort: 80