I have a GKE cluster on which I have deployed OAuth proxy to protect some URLs:
resource "helm_release" "oauth_proxy" {
name = "oauth-proxy"
namespace = "kube-system"
repository = "https://oauth2-proxy.github.io/manifests"
chart = "oauth2-proxy"
version = "6.2.0"
values = [<<EOF
config:
clientID: 'myClientID'
clientSecret: 'myClientSecret'
cookieSecret: 'myCookieSecret'
configFile: |-
email_domains = [ "mywebsite.com" ]
upstreams = [ "file:///dev/null" ]
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: lets-encrypt-production
path: /oauth2
hosts:
- url1
tls:
- secretName: myapp-tls
hosts:
- url1
EOF
]
}
So my concern is about the clientID, clientSecret and cookieSecret. I have to deploy this to other projects (which don't have the same IDs) and I don't want the credentials to stay in clear text in the code like this.
I've found that there's a way to do this with the google_iap_client Terraform resource, but the documentation only shows the brand config. Is this a good way?
Can it be done through a secret?
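One way to keep the credentials out of the committed code, sketched below under the assumption that they are supplied at apply time (the variable names are hypothetical): declare them as sensitive Terraform variables and inject them through the Helm provider's set_sensitive blocks instead of hard-coding them in the values. They will still end up in the Terraform state, so the state itself must be protected; the oauth2-proxy chart also documents an option to reference an existing Kubernetes Secret if you want to keep them out of Helm values entirely (check the chart's values for the exact key).

variable "oauth_client_id" {
  type      = string
  sensitive = true
}

variable "oauth_client_secret" {
  type      = string
  sensitive = true
}

variable "oauth_cookie_secret" {
  type      = string
  sensitive = true
}

resource "helm_release" "oauth_proxy" {
  name       = "oauth-proxy"
  namespace  = "kube-system"
  repository = "https://oauth2-proxy.github.io/manifests"
  chart      = "oauth2-proxy"
  version    = "6.2.0"

  # Non-secret values stay inline (ingress section omitted here for brevity).
  values = [<<EOF
config:
  configFile: |-
    email_domains = [ "mywebsite.com" ]
    upstreams = [ "file:///dev/null" ]
EOF
  ]

  # Secrets are injected per project without appearing in the committed code.
  set_sensitive {
    name  = "config.clientID"
    value = var.oauth_client_id
  }
  set_sensitive {
    name  = "config.clientSecret"
    value = var.oauth_client_secret
  }
  set_sensitive {
    name  = "config.cookieSecret"
    value = var.oauth_cookie_secret
  }
}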
I am trying to retrieve the hostname of the Application Load Balancer that I configured as an ingress.
The scenario currently is: I am deploying a Helm chart using Terraform and have configured an ALB as the ingress. The ALB and the Helm chart were deployed normally and are working; however, I need to retrieve the hostname of this ALB to create a Route53 record pointing to it. When I try to retrieve this information, it returns null values.
According to Terraform's own documentation, the correct way is as follows:
data "kubernetes_ingress" "example" {
metadata {
name = "terraform-example"
}
}
resource "aws_route53_record" "example" {
zone_id = data.aws_route53_zone.k8.zone_id
name = "example"
type = "CNAME"
ttl = "300"
records = [data.kubernetes_ingress.example.status.0.load_balancer.0.ingress.0.hostname]
}
I did exactly as in the documentation (even the provider version is the latest); here is an excerpt of my code:
# Helm release resource
resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  version          = "4.9.7"
  create_namespace = true
  values = [
    templatefile("${path.module}/settings/helm/argocd/values.yaml", {
      certificate_arn = module.acm_certificate.arn
    })
  ]
}

# Kubernetes Ingress data source to retrieve the ingress hostname from the Helm deployment (ALB hostname)
data "kubernetes_ingress" "argocd" {
  metadata {
    name      = "argocd-server"
    namespace = helm_release.argocd.namespace
  }
  depends_on = [
    helm_release.argocd
  ]
}

# Route53 record creation
resource "aws_route53_record" "argocd" {
  name    = "argocd"
  type    = "CNAME"
  ttl     = 600
  zone_id = aws_route53_zone.r53_zone.id
  records = [data.kubernetes_ingress.argocd.status.0.load_balancer.0.ingress.0.hostname]
}
When I run terraform apply I get the following error:
╷
│ Error: Attempt to index null value
│
│ on route53.tf line 67, in resource "aws_route53_record" "argocd":
│ 67: records = [data.kubernetes_ingress.argocd.status.0.load_balancer.0.ingress.0.hostname]
│ ├────────────────
│ │ data.kubernetes_ingress.argocd.status is null
│
│ This value is null, so it does not have any indices.
My ingress configuration (deployed by Helm Release):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
  uid: 646f6ea0-7991-4a13-91d0-da236164ac3e
  resourceVersion: '4491'
  generation: 1
  creationTimestamp: '2022-08-08T13:29:16Z'
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: argocd
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: argocd-server
    app.kubernetes.io/part-of: argocd
    helm.sh/chart: argo-cd-4.9.7
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/certificate-arn: >-
      arn:aws:acm:us-east-1:124416843011:certificate/7b79fa2c-d446-423d-b893-c8ff3d92a5e1
    alb.ingress.kubernetes.io/group.name: altb-devops-eks-support-alb
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/load-balancer-name: altb-devops-eks-support-alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/tags: >-
      Name=altb-devops-eks-support-alb,Stage=Support,CostCenter=Infrastructure,Project=Shared
      Infrastructure,Team=DevOps
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
    meta.helm.sh/release-name: argocd
    meta.helm.sh/release-namespace: argocd
  finalizers:
    - group.ingress.k8s.aws/altb-devops-eks-support-alb
  managedFields:
    - manager: controller
      operation: Update
      apiVersion: networking.k8s.io/v1
      time: '2022-08-08T13:29:16Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"group.ingress.k8s.aws/altb-devops-eks-support-alb": {}
    - manager: terraform-provider-helm_v2.6.0_x5
      operation: Update
      apiVersion: networking.k8s.io/v1
      time: '2022-08-08T13:29:16Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:alb.ingress.kubernetes.io/backend-protocol: {}
            f:alb.ingress.kubernetes.io/certificate-arn: {}
            f:alb.ingress.kubernetes.io/group.name: {}
            f:alb.ingress.kubernetes.io/listen-ports: {}
            f:alb.ingress.kubernetes.io/load-balancer-name: {}
            f:alb.ingress.kubernetes.io/scheme: {}
            f:alb.ingress.kubernetes.io/tags: {}
            f:alb.ingress.kubernetes.io/target-type: {}
            f:kubernetes.io/ingress.class: {}
            f:meta.helm.sh/release-name: {}
            f:meta.helm.sh/release-namespace: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/component: {}
            f:app.kubernetes.io/instance: {}
            f:app.kubernetes.io/managed-by: {}
            f:app.kubernetes.io/name: {}
            f:app.kubernetes.io/part-of: {}
            f:helm.sh/chart: {}
        f:spec:
          f:rules: {}
    - manager: controller
      operation: Update
      apiVersion: networking.k8s.io/v1
      time: '2022-08-08T13:29:20Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:loadBalancer:
            f:ingress: {}
      subresource: status
  selfLink: /apis/networking.k8s.io/v1/namespaces/argocd/ingresses/argocd-server
status:
  loadBalancer:
    ingress:
      - hostname: >-
          internal-altb-devops-eks122-support-alb-1845221539.us-east-1.elb.amazonaws.com
spec:
  rules:
    - host: argocd.altb.co
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
The Terraform data source for this Ingress is kubernetes_ingress_v1; the older kubernetes_ingress data source targets the deprecated extensions/v1beta1 API, which is why its status attribute comes back null for an Ingress created as networking.k8s.io/v1.
https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/ingress_v1
data "kubernetes_ingress_v1" "argocd" {
metadata {
name = "argocd-server"
namespace = helm_release.argocd.namespace
}
depends_on = [
helm_release.argocd
]
}
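The Route53 record from the question can then reference the v1 data source with the same attribute path:

resource "aws_route53_record" "argocd" {
  name    = "argocd"
  type    = "CNAME"
  ttl     = 600
  zone_id = aws_route53_zone.r53_zone.id
  records = [data.kubernetes_ingress_v1.argocd.status.0.load_balancer.0.ingress.0.hostname]
}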
This should work.
I am writing a Terraform configuration for GCP to run a stateless application on GKE. These are the steps I'm trying to get into Terraform:
Create a service account
Grant roles to the service account
Create the cluster
Configure the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mllp-adapter-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mllp-adapter
  template:
    metadata:
      labels:
        app: mllp-adapter
    spec:
      containers:
        - name: mllp-adapter
          imagePullPolicy: Always
          image: gcr.io/cloud-healthcare-containers/mllp-adapter
          ports:
            - containerPort: 2575
              protocol: TCP
              name: "port"
          command:
            - "/usr/mllp_adapter/mllp_adapter"
            - "--port=2575"
            - "--hl7_v2_project_id=PROJECT_ID"
            - "--hl7_v2_location_id=LOCATION"
            - "--hl7_v2_dataset_id=DATASET_ID"
            - "--hl7_v2_store_id=HL7V2_STORE_ID"
            - "--api_addr_prefix=https://healthcare.googleapis.com:443/v1"
            - "--logtostderr"
            - "--receiver_ip=0.0.0.0"
Add an internal load balancer to make it accessible outside of the cluster:
apiVersion: v1
kind: Service
metadata:
  name: mllp-adapter-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  ports:
    - name: port
      port: 2575
      targetPort: 2575
      protocol: TCP
  selector:
    app: mllp-adapter
I've found this example for creating an autopilot public cluster; however, I don't know where to specify the YAML file from step 4.
I've also found this other blueprint that deploys a service to the created cluster using the Kubernetes provider, which I hope solves step 5.
I'm new to Terraform and GCP architecture in general. I got all of this working by following documentation, but now I'm trying to deploy it to a dev environment for testing purposes; that environment is outside of my sandbox and is supposed to be provisioned with Terraform, and I think I'm getting close.
Can someone enlighten me on the next step, or on how to add those YAML configurations to the .tf examples I've found?
Am I doing this right? :(
You can use this script and extend it further to deploy the YAML files: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/simple_autopilot_public
The above TF script creates the GKE Autopilot cluster; for the YAML deployments you can use the Kubernetes provider and apply the files with it.
https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/deployment
Full example: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/simple_autopilot_public
main.tf
locals {
  cluster_type           = "simple-autopilot-public"
  network_name           = "simple-autopilot-public-network"
  subnet_name            = "simple-autopilot-public-subnet"
  master_auth_subnetwork = "simple-autopilot-public-master-subnet"
  pods_range_name        = "ip-range-pods-simple-autopilot-public"
  svc_range_name         = "ip-range-svc-simple-autopilot-public"
  subnet_names           = [for subnet_self_link in module.gcp-network.subnets_self_links : split("/", subnet_self_link)[length(split("/", subnet_self_link)) - 1]]
}

data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}

module "gke" {
  source                          = "../../modules/beta-autopilot-public-cluster/"
  project_id                      = var.project_id
  name                            = "${local.cluster_type}-cluster"
  regional                        = true
  region                          = var.region
  network                         = module.gcp-network.network_name
  subnetwork                      = local.subnet_names[index(module.gcp-network.subnets_names, local.subnet_name)]
  ip_range_pods                   = local.pods_range_name
  ip_range_services               = local.svc_range_name
  release_channel                 = "REGULAR"
  enable_vertical_pod_autoscaling = true
}
Another good example, which uses the YAML files as templates and applies them with Terraform: https://github.com/epiphone/gke-terraform-example/tree/master/terraform/dev
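To give an idea of what step 4 could look like with the Kubernetes provider, here is a rough sketch of the mllp-adapter Deployment translated from the YAML above (the PROJECT_ID, LOCATION, DATASET_ID and HL7V2_STORE_ID placeholders are kept as-is from the question; the Service from step 5 can be translated the same way with a kubernetes_service resource):

# Sketch: the step-4 Deployment expressed as a Terraform resource instead of raw YAML.
resource "kubernetes_deployment" "mllp_adapter" {
  metadata {
    name = "mllp-adapter-deployment"
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "mllp-adapter"
      }
    }

    template {
      metadata {
        labels = {
          app = "mllp-adapter"
        }
      }

      spec {
        container {
          name              = "mllp-adapter"
          image             = "gcr.io/cloud-healthcare-containers/mllp-adapter"
          image_pull_policy = "Always"

          port {
            container_port = 2575
            protocol       = "TCP"
            name           = "port"
          }

          # Placeholders below are the same ones used in the YAML manifest.
          command = [
            "/usr/mllp_adapter/mllp_adapter",
            "--port=2575",
            "--hl7_v2_project_id=PROJECT_ID",
            "--hl7_v2_location_id=LOCATION",
            "--hl7_v2_dataset_id=DATASET_ID",
            "--hl7_v2_store_id=HL7V2_STORE_ID",
            "--api_addr_prefix=https://healthcare.googleapis.com:443/v1",
            "--logtostderr",
            "--receiver_ip=0.0.0.0",
          ]
        }
      }
    }
  }
}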
A few months ago I integrated DataDog into my Kubernetes cluster by using a DaemonSet configuration. Since then I've been getting congestion alerts with the following message:
Please tune the hot-shots settings
https://github.com/brightcove/hot-shots#errors
From attempting to follow the docs with my limited orchestration/DevOps knowledge, what I could gather is that I need to add the following to my DaemonSet config:
spec:
  ...
  securityContext:
    sysctls:
      - name: net.unix.max_dgram_qlen
        value: "1024"
      - name: net.core.wmem_max
        value: "4194304"
I attempted to add that configuration piece directly to one of the auto-deployed DataDog pods just to try it out (instead of adding it to the DaemonSet and risking bringing all agents down), but the edit hangs indefinitely and the configuration is never saved.
That hot-shots documentation also mentions that the above sysctl configuration requires unsafe sysctls to be enabled in the nodes that contain the pods:
kubelet --allowed-unsafe-sysctls \
'net.unix.max_dgram_qlen, net.core.wmem_max'
The cluster I am working with is fully deployed with EKS using the AWS console dashboard (I have little knowledge of how it is configured). The above seems to be intended for manually deployed and managed clusters.
Why is the configuration I am attempting to apply to a single DataDog agent pod not saving/applying? Is it because the pod is managed by a DaemonSet, because the node doesn't have the required unsafe sysctls allowed, or something else?
If I do need to enable the suggested unsafe sysctls on all nodes of my cluster, how do I go about it given that the cluster is fully deployed and managed by Amazon EKS?
We managed to achieve this using a custom launch template with our managed node group and then passing in a custom bootstrap script. This does mean, however, that you need to supply the AMI ID yourself and you lose the console alerts when it is outdated. In Terraform this would look like:
resource "aws_eks_node_group" "group" {
...
launch_template {
id = aws_launch_template.nodes.id
version = aws_launch_template.nodes.latest_version
}
...
}
data "template_file" "bootstrap" {
template = file("${path.module}/files/bootstrap.tpl")
vars = {
cluster_name = aws_eks_cluster.cluster.name
cluster_auth_base64 = aws_eks_cluster.cluster.certificate_authority.0.data
endpoint = aws_eks_cluster.cluster.endpoint
}
}
data "aws_ami" "eks_node" {
owners = ["602401143452"]
most_recent = true
filter {
name = "name"
values = ["amazon-eks-node-1.21-v20211008"]
}
}
resource "aws_launch_template" "nodes" {
...
image_id = data.aws_ami.eks_node.id
user_data = base64encode(data.template_file.bootstrap.rendered)
...
}
Then the bootstrap.tpl file looks like this:
#!/bin/bash
set -o xtrace
systemctl stop kubelet
/etc/eks/bootstrap.sh '${cluster_name}' \
--b64-cluster-ca '${cluster_auth_base64}' \
--apiserver-endpoint '${endpoint}' \
--kubelet-extra-args '"--allowed-unsafe-sysctls=net.unix.max_dgram_qlen"'
The next step is to set up the PodSecurityPolicy, ClusterRole and RoleBinding in your cluster so you can use the securityContext as you described above and then pods in that namespace will be able to run without a SysctlForbidden message.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: sysctl
spec:
  allowPrivilegeEscalation: false
  allowedUnsafeSysctls:
    - net.unix.max_dgram_qlen
  defaultAllowPrivilegeEscalation: false
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: allow-sysctl
rules:
  - apiGroups:
      - policy
    resourceNames:
      - sysctl
    resources:
      - podsecuritypolicies
    verbs:
      - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-sysctl
  namespace: app-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: allow-sysctl
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts:app-namespace
If you are using the DataDog Helm chart, you can set the following values to update the securityContext of the agent, but you will have to update the chart's PSP manually to set allowedUnsafeSysctls:
datadog:
  securityContext:
    sysctls:
      - name: net.unix.max_dgram_qlen
        value: "512"
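If that Helm chart is itself managed from Terraform, a sketch of passing the same values through a helm_release might look like the following (the repository and chart names are the upstream DataDog defaults; the release name and the "512" value are just illustrative):

resource "helm_release" "datadog" {
  name       = "datadog"
  repository = "https://helm.datadoghq.com"
  chart      = "datadog"

  # Same sysctl values as above, supplied as inline chart values.
  values = [<<EOF
datadog:
  securityContext:
    sysctls:
      - name: net.unix.max_dgram_qlen
        value: "512"
EOF
  ]
}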
I have a requirement to add a Kubernetes Service with an ExternalName pointing to an NLB (in a different AWS account).
I am using Terraform to implement this.
I am not sure how to use the NLB info in the external name section.
Can someone please help?
resource "kubernetes_service" "test_svc" {
metadata {
name = "test"
namespace = var.environment
labels = {
app = "test"
}
}
spec {
type = "ExternalName"
**external_name =**
}
}
Usage of externalName is as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
Try putting the NLB's CNAME (its DNS name) as the external name.
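In the Terraform snippet from the question, that would look something like this (the hostname below is a made-up placeholder; in practice you would pass the real NLB DNS name in as a variable or wire it from a data source in the other account):

resource "kubernetes_service" "test_svc" {
  metadata {
    name      = "test"
    namespace = var.environment
    labels = {
      app = "test"
    }
  }

  spec {
    type = "ExternalName"
    # Hypothetical placeholder: replace with the actual NLB DNS name.
    external_name = "my-nlb-1234567890.elb.us-east-1.amazonaws.com"
  }
}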
My Vault is deployed in an EKS cluster with an S3 backend, but whenever I try to access the UI in the browser it gives a 404.
This is my values.yaml file:
server:
  affinity: null
  dataStorage:
    enabled: false
  dev:
    enabled: false
  standalone:
    enabled: false
  ha:
    enabled: true
    replicas: 1
    config: |
      ui = true
      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "s3" {
        access_key = "key"
        secret_key = "key"
        bucket = "name"
        region = "region"
      }
ui:
  enabled: true
  serviceType: "LoadBalancer"
An HTTP 404 status code means that the specified resource, such as a vault, upload ID, or job ID, does not exist.
Make sure that you have properly handled Vault - vault-404.
Take a look: 404-http-code-amazon.
Don't you have extraSecretEnvironmentVars? Please fill the proper fields of the file with the proper values.
Add the following block of code under the server's standalone section:
service:
  enabled: true
Your values.json file should be in YAML format. What is more important is that it's not even valid YAML, as it has a block of JSON plonked in the middle of it.
Look at the proper example here: vault-eks.
It is solved. I just had to set the external port for the service:
ui:
  enabled: true
  serviceType: "LoadBalancer"
  externalPort: 8200