I'm using a slightly customized Terraform configuration to generate my Kubernetes cluster on AWS. The configuration includes an EFS instance attached to the cluster nodes and master. In order for Kubernetes to use this EFS instance for volumes, my Kubernetes YAML needs the id and endpoint/domain of the EFS instance generated by Terraform.
Currently, my Terraform outputs the EFS id and DNS name, and I need to manually edit my Kubernetes YAML with these values after terraform apply and before I kubectl apply the YAML.
How can I automate passing these Terraform output values to Kubernetes?
I don't know what you mean by a YAML to set up a Kubernetes cluster in AWS. But then, I've always set up my AWS clusters using kops. Additionally, I don't understand why you would want to mount an EFS to the master and/or nodes instead of to the containers.
But in direct answer to your question: you could write a script to output your Terraform outputs to a Helm values file and use that to generate the k8s config.
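A minimal sketch of such a glue script, under some assumptions: the Terraform output names (efs_id, efs_dns), the placeholder markers, and the PersistentVolume layout are all mine, and the values are hard-coded stand-ins where a real pipeline would call terraform output:

```shell
#!/bin/sh
# Sketch: render Terraform outputs into a Kubernetes manifest template.
# In a real pipeline the values would come from Terraform, e.g.:
#   EFS_ID=$(terraform output -raw efs_id)
#   EFS_DNS=$(terraform output -raw efs_dns)
EFS_ID="fs-12345678"                               # stand-in value
EFS_DNS="fs-12345678.efs.us-east-1.amazonaws.com"  # stand-in value

# Template with placeholder markers for the Terraform-generated values.
cat > efs-volume.tpl.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-__EFS_ID__
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: __EFS_DNS__
    path: /
EOF

# Substitute the markers and print the rendered manifest
# (pipe to 'kubectl apply -f -' in a real pipeline).
sed -e "s/__EFS_ID__/${EFS_ID}/" -e "s/__EFS_DNS__/${EFS_DNS}/" efs-volume.tpl.yaml
```

The same pattern works for a Helm values file: render values.yaml from the outputs and pass it to helm with -f.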
I stumbled upon this question when searching for a way to get Terraform outputs into environment variables specified in Kubernetes, and I expect more people will. I also suspect that that was really your question as well, or at least that it can be a way to solve your problem. So:
You can use the Kubernetes Terraform provider to connect to your cluster and then use the kubernetes_config_map resource to create ConfigMaps.
provider "kubernetes" {}

resource "kubernetes_config_map" "efs_configmap" {
  metadata {
    name = "efs_config" // this will be the name of your configmap
  }

  data = {
    efs_id  = aws_efs_mount_target.efs_mt[0].id
    efs_dns = aws_efs_mount_target.efs_mt[0].dns_name
  }
}
If you have secret parameters, use the kubernetes_secret resource:
resource "kubernetes_secret" "some_secrets" {
  metadata {
    name = "some_secrets"
  }

  data = {
    s3_iam_access_secret = aws_iam_access_key.someresourcename.secret
    rds_password         = aws_db_instance.someresourcename.password
  }
}
You can then consume these in your k8s yaml when setting your environment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: some-app-deployment
spec:
  selector:
    matchLabels:
      app: some
  template:
    metadata:
      labels:
        app: some
    spec:
      containers:
      - name: some-app-container
        image: some-app-image
        env:
        - name: EFS_ID
          valueFrom:
            configMapKeyRef:
              name: efs_config
              key: efs_id
        - name: RDS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: some_secrets
              key: rds_password
Related
I'm creating an EKS cluster using eksctl. While developing the YAML configurations for the underlying resources, I learned that spot instances are also supported with an AWS EKS cluster (here). However, while going through the documentation/schema, I didn't find anything to limit the bid price for spot instances, so by default it will bid at the on-demand price, which is not ideal. Am I missing something, or is it just not possible at the moment?
Sample YAML config for spot (cluster-config-spot.yaml):
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: spot-cluster
  region: us-east-2
  version: "1.23"
managedNodeGroups:
- name: spot-managed-node-group-1
  instanceTypes: ["c7g.xlarge", "c6g.xlarge"]
  minSize: 1
  maxSize: 10
  spot: true
AWS EKS cluster creation command:
eksctl create cluster -f cluster-config-spot.yaml
maxPrice can be set for a self-managed node group this way, but it is not supported for managed node groups. You can upvote the feature request here.
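For comparison, a self-managed node group sketch with a bid cap; the price and instance types are made-up values, and the field names follow the eksctl nodeGroups schema as I understand it:

```yaml
# Self-managed node group (nodeGroups, not managedNodeGroups):
# instancesDistribution.maxPrice caps the spot bid.
nodeGroups:
- name: spot-self-managed-node-group-1
  minSize: 1
  maxSize: 10
  instancesDistribution:
    instanceTypes: ["c7g.xlarge", "c6g.xlarge"]
    maxPrice: 0.08  # bid cap in USD/hour (made-up value)
    onDemandBaseCapacity: 0
    onDemandPercentageAboveBaseCapacity: 0
    spotAllocationStrategy: capacity-optimized
```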
I'm following the official AWS EKS tutorial on setting up a distributed GPU cluster for Tensorflow model training and am hitting a bit of a snag.
After creating a new cluster using eksctl and verifying that the corresponding ~/.kube/config file exists on my gateway node, the tutorial instructs that I download ksonnet on the gateway node and use it to initialize a new application:
$ ks init <app-name>
When I try running this, however, I receive the following error:
INFO Using context "arn:aws:eks:us-west-2:131397771409:cluster/<cluster name>" from kubeconfig file "/home/ubuntu/.kube/config"
INFO Creating environment "default" with namespace "default", pointing to "version:v1.18.9" cluster at address <cluster address>
ERROR No Major.Minor.Patch elements found
I've done some searching around on GitHub/SO, but have not been able to find a resolution to this issue. I suspect the true answer is to move away from ksonnet, as it is no longer maintained (and hasn't been for the last 2 years, it appears), but for the time being I'd just like to be able to complete the tutorial :)
Any insight is appreciated!
Contents of my ~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <certificate>
    server: <server>
  name: arn:aws:eks:us-west-2:131397771409:cluster/<name>
contexts:
- context:
    cluster: arn:aws:eks:us-west-2:131397771409:cluster/<name>
    user: arn:aws:eks:us-west-2:131397771409:cluster/<name>
  name: arn:aws:eks:us-west-2:131397771409:cluster/<name>
current-context: arn:aws:eks:us-west-2:131397771409:cluster/<name>
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-west-2:131397771409:cluster/<name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - <name>
      command: aws
On the init, you can override the API spec version (that worked for me on that particular step, although I ran into other issues later on):
ks init ${APP_NAME} --api-spec=version:v1.7.0
Reference
In the end, I made it work with ks init ${APP_NAME} (without --api-spec) on GCP, using ksonnet v0.13.1 with old Kubeflow (v0.2.0-rc.1) and GKE cluster (1.14.10) versions.
BTW, I was in "Kubeflow: End to End" qwiklab from this course.
We have Istio set up and running in our clusters, with automatic injection enabled by default and turned on in a handful of namespaces. Now we want automatic injection for some pods in some other namespaces, but we have run into a problem: it is seemingly impossible to auto-inject a specific pod unless injection is enabled for the whole namespace. We use Argo workflows to create pods automatically, so we specify sidecar.istio.io/inject: "true" inside the Argo workflows so that the resulting pods carry this annotation in their metadata:
...
metadata:
  annotations:
    sidecar.istio.io/inject: "true"
...
Unfortunately, Istio still does not inject a sidecar unless the namespace has the istio-injection label explicitly set to enabled, which adds sidecars to all pods running there.
We cannot use manual injection either, since the pods are created automatically by the Argo service, and we want the sidecars injected only into specific pods based on the workflow definition.
So are there any possible ways to overcome this issue? Thanks!
Full Argo workflow:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: presto-sql-pipeline-
  annotations: {pipelines.kubeflow.org/kfp_sdk_version: 0.5.1, pipelines.kubeflow.org/pipeline_compilation_time: '2020-05-16T16:07:29.173967',
    pipelines.kubeflow.org/pipeline_spec: '{"description": "Simple demo of Presto
      SQL operator PrestoSQLOp", "name": "Presto SQL Pipeline"}'}
  labels: {pipelines.kubeflow.org/kfp_sdk_version: 0.5.1}
spec:
  entrypoint: presto-sql-pipeline
  templates:
  - name: presto-demo
    container:
      args:
      - --source-name
      - '{{workflow.namespace}}.{{workflow.name}}.presto-demo'
      - --query-sql
      - "SELECT 1;"
      image: gcr.io/our-data-warehouse/presto-cli:latest
      volumeMounts:
      - {mountPath: /mnt/secrets, name: presto-local-vol}
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels: {pipelines.kubeflow.org/pipeline-sdk-type: kfp}
    volumes:
    - name: presto-local-vol
      secret: {secretName: presto-local}
  - name: presto-sql-pipeline
    dag:
      tasks:
      - {name: presto-demo, template: presto-demo}
  arguments:
    parameters: []
  serviceAccountName: argo
I had a similar requirement: Istio should inject the proxy only when a pod asks for it and skip auto-injection for all other pods.
The solution isn't mentioned in the official Istio docs, but it is possible.
As given in this user-defined custom matrix, Istio follows this behaviour when the following conditions are met:
The namespace has the label istio-injection=enabled.
The Istio global proxy auto-inject policy is disabled (the Helm chart value global.proxy.autoInject, as given here).
The pod that needs the proxy has the annotation sidecar.istio.io/inject: "true".
All other pods will not get the Istio proxy.
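Sketched as configuration; the namespace name is made up, and the first fragment assumes a Helm-based Istio install where it would go in a values override file:

```yaml
# 1) Helm values override (e.g. values.yaml): disable the global
#    auto-inject policy so only annotated pods get the proxy.
global:
  proxy:
    autoInject: disabled
---
# 2) The namespace itself still needs the injection label.
apiVersion: v1
kind: Namespace
metadata:
  name: argo-workflows   # made-up namespace name
  labels:
    istio-injection: enabled
```

With this in place, only pods carrying sidecar.istio.io/inject: "true" (as in the Argo workflow above) should receive the sidecar.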
I'm trying to create a Kubernetes deployment with an associated ServiceAccount, which is linked to an AWS IAM role. This yaml produces the desired result and the associated deployment (included at the bottom) spins up correctly:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-account
  namespace: example
  annotations:
    eks.amazonaws.com/role-arn: ROLE_ARN
However, I would like to instead use the Terraform Kubernetes provider to create the ServiceAccount:
resource "kubernetes_service_account" "this" {
  metadata {
    name      = "service-account2"
    namespace = "example"
    annotations = {
      "eks.amazonaws.com/role-arn" = "ROLE_ARN"
    }
  }
}
Unfortunately, when I create the ServiceAccount this way, the ReplicaSet for my deployment fails with the error:
Error creating: Internal error occurred: Internal error occurred: jsonpatch add operation does not apply: doc is missing path: "/spec/volumes/0"
I have confirmed that it does not matter whether the Deployment is created via Terraform or kubectl; it will not work with the Terraform-created service-account2, but works fine with the kubectl-created service-account. Switching a deployment back and forth between service-account and service-account2 correspondingly makes it work or not work as you might expect.
I have also determined that the eks.amazonaws.com/role-arn annotation is involved; ServiceAccounts that do not try to link back to an IAM role work regardless of whether they were created via Terraform or kubectl.
Using kubectl to describe the Deployment, ReplicaSet, ServiceAccount, and associated Secret, I don't see any obvious differences, though I will admit I'm not entirely sure what I might be looking for.
Here is a simple deployment yaml that exhibits the problem:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example
  namespace: example
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: example
    spec:
      serviceAccountName: service-account # or "service-account2"
      containers:
      - name: nginx
        image: nginx:1.7.8
Adding automountServiceAccountToken: true to the pod spec in your deployment should fix this error. It is usually enabled by default on service accounts, but Terraform defaults it to off. See this issue on the mutating webhook that adds the required environment variables to your pods: https://github.com/aws/amazon-eks-pod-identity-webhook/issues/17
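For example, the pod template from the deployment in the question with the field added (a sketch; the field sits at the pod-spec level, alongside serviceAccountName):

```yaml
spec:
  template:
    metadata:
      labels:
        app: example
    spec:
      serviceAccountName: service-account2
      automountServiceAccountToken: true  # explicitly re-enable the token mount
      containers:
      - name: nginx
        image: nginx:1.7.8
```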
I had the same problem, and I solved it by specifying automount_service_account_token = true in the Terraform kubernetes_service_account resource.
Try creating the following service account:
resource "kubernetes_service_account" "this" {
  metadata {
    name      = "service-account2"
    namespace = "example"
    annotations = {
      "eks.amazonaws.com/role-arn" = "ROLE_ARN"
    }
  }

  automount_service_account_token = true
}
I have a deploymgr template that creates a bunch of network assets and VMs. It runs fine with no errors reported, but no VPC peerings are ever created. Peering works fine if I create it via the console or on the CLI via gcloud.
Peering fails (with no error msg):
# Create the required routes to talk to prod project
- name: mytest-network
  type: compute.v1.network
  properties:
    name: mytest
    autoCreateSubnetworks: false
    peerings:
    - name: mytest-to-prod
      network: projects/my-prod-project/global/networks/default
      autoCreateRoutes: true
Peering Works:
$ gcloud compute networks peerings create mytest-to-prod --project=myproject --network=default --peer-network=projects/my-prod-project/global/networks/default --auto-create-routes
The peering cannot be done at network creation time, as per the API reference.
First the network needs to be created; once it has been created successfully, the addPeering method needs to be called.
This explains why your YAML definition created the network but not the peering, and why it worked after running the gcloud command, which calls the addPeering method.
It is possible to create the networks and do the peering in one YAML file by using Deployment Manager actions:
resources:
- name: mytest-network1
  type: compute.v1.network
  properties:
    name: mytest1
    autoCreateSubnetworks: false
- name: mytest-network2
  type: compute.v1.network
  properties:
    name: mytest2
    autoCreateSubnetworks: false
- name: addPeering2-1
  action: gcp-types/compute-v1:compute.networks.addPeering
  metadata:
    runtimePolicy:
    - CREATE
    dependsOn:
    - mytest-network1
    - mytest-network2
  properties:
    network: mytest-network2
    name: vpc-2-1
    autoCreateRoutes: true
    peerNetwork: $(ref.mytest-network1.selfLink)
- name: addPeering1-2
  action: gcp-types/compute-v1:compute.networks.addPeering
  metadata:
    runtimePolicy:
    - CREATE
    dependsOn:
    - mytest-network1
    - mytest-network2
  properties:
    network: mytest-network1
    name: vpc-1-2
    autoCreateRoutes: true
    peerNetwork: $(ref.mytest-network2.selfLink)
You can copy-paste the YAML above and create the deployment, and the peering should be done. The actions use the dependsOn option to make sure the networks are created first; when the deployment is deleted, the peerings are removed by calling the removePeering method and then the networks are deleted.
Note: Deployment Manager actions are not documented yet, but there are several examples in the GoogleCloudPlatform/deploymentmanager-samples repository, such as this and this.
Since gcloud works as expected, update your YAML file to use peerings[].network when specifying the list of peered network resources.