I have created an application deployment on AWS EKS that is deployed using Helm. For proper operation of my app, I need to set environment variables, which are secrets stored in AWS Secrets Manager. Referencing a tutorial, I set up my values.yaml file something like this:
secretsData:
  secretName: aws-secrets
  providerName: aws
  objectName: CodeBuild
Now I have created a SecretProviderClass as AWS recommends: secret-provider.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secret-provider-class
spec:
  provider: {{ .Values.secretsData.providerName }}
  parameters:
    objects: |
      - objectName: "{{ .Values.secretsData.objectName }}"
        objectType: "secretsmanager"
        jmesPath:
          - path: SP1_DB_HOST
            objectAlias: SP1_DB_HOST
          - path: SP1_DB_USER
            objectAlias: SP1_DB_USER
          - path: SP1_DB_PASSWORD
            objectAlias: SP1_DB_PASSWORD
          - path: SP1_DB_PATH
            objectAlias: SP1_DB_PATH
  secretObjects:
    - secretName: {{ .Values.secretsData.secretName }}
      type: Opaque
      data:
        - objectName: SP1_DB_HOST
          key: SP1_DB_HOST
        - objectName: SP1_DB_USER
          key: SP1_DB_USER
        - objectName: SP1_DB_PASSWORD
          key: SP1_DB_PASSWORD
        - objectName: SP1_DB_PATH
          key: SP1_DB_PATH
I mount this secret object in my deployment.yaml; the relevant section of the file looks like this:
volumeMounts:
  - name: secrets-store-volume
    mountPath: "/mnt/secrets"
    readOnly: true
env:
  - name: SP1_DB_HOST
    valueFrom:
      secretKeyRef:
        name: {{ .Values.secretsData.secretName }}
        key: SP1_DB_HOST
  - name: SP1_DB_PORT
    valueFrom:
      secretKeyRef:
        name: {{ .Values.secretsData.secretName }}
        key: SP1_DB_PORT
Further down in the same deployment file, I define secrets-store-volume as:
volumes:
  - name: secrets-store-volume
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: aws-secret-provider-class
All drivers are installed in the cluster and permissions are set accordingly.
With helm install mydeployment helm-folder/ --dry-run I can see that all the files and values are populated as expected. Then with helm install mydeployment helm-folder/ I install the deployment into my cluster, but with kubectl get all I can see the pod is stuck at Pending with the warning Error: 'aws-secrets' not found, and it eventually times out. In the AWS CloudTrail log, I can see that the cluster made a request to access the secret and there was no error fetching it. How can I solve this, or how can I debug it further? Thank you for your time and effort.
Error: 'aws-secrets' not found suggests that the CSI driver isn't creating the Kubernetes Secret that you're referencing values from.
Since the YAML files look correct, I would say it's probably the CSI driver's "Sync as Kubernetes secret" configuration: syncSecret.enabled (which is false by default).
So make sure that secrets-store-csi-driver runs with this flag set to true, for example:
helm upgrade --install csi-secrets-store \
  --namespace kube-system secrets-store-csi-driver/secrets-store-csi-driver \
  --set grpcSupportedProviders="aws" --set syncSecret.enabled="true"
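If the pod is still stuck after enabling sync, a few commands may help narrow it down (a debugging sketch; the pod name is a placeholder, and the synced Secret is only created once a pod has successfully mounted the CSI volume):
kubectl describe pod <your-pod-name>          # shows the FailedMount events reported by the CSI driver
kubectl get secretproviderclasspodstatuses    # per-pod status objects created by the driver
kubectl get secret aws-secrets                # should exist once the volume mount succeeds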
Related
I am using the AWS secrets store CSI provider to sync secrets from AWS Secrets Manager into Kubernetes/EKS.
The SecretProviderClass is:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: test-provider
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: mysecret
        objectType: secretsmanager
        jmesPath:
          - path: APP_ENV
            objectAlias: APP_ENV
          - path: APP_DEBUG
            objectAlias: APP_DEBUG
And the Pod mounting these secrets is:
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  restartPolicy: Never
  serviceAccountName: my-account
  terminationGracePeriodSeconds: 2
  containers:
    - name: dotfile-test-container
      image: registry.k8s.io/busybox
      volumeMounts:
        - name: secret-volume
          readOnly: true
          mountPath: "/mnt/secret-volume"
  volumes:
    - name: secret-volume
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: test-provider
The secret exists in the Secret Provider:
{
  "APP_ENV": "staging",
  "APP_DEBUG": false
}
(this is an example, I am aware I do not need to store these particular variables as secrets)
But when I create the resources, the Pod fails to run with:
Warning  FailedMount  96s (x10 over 5m47s)  kubelet  MountVolume.SetUp failed for volume "secret-volume" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod pace/secret-dotfiles-pod, err: rpc error: code = Unknown desc = Failed to fetch secret from all regions: mysecret
It turns out the error message is very misleading. The problem in my case was the type of the APP_DEBUG value: changing it from a boolean to a string fixed the problem, and now the pod starts correctly.
{
  "APP_ENV": "staging",
  "APP_DEBUG": "false"
}
Seems like a bug in the provider to me.
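If anyone wants to check and fix the stored value from the CLI, something like this should work (a sketch; it assumes the secret is called mysecret and lives in your default region):
aws secretsmanager get-secret-value --secret-id mysecret --query SecretString --output text
aws secretsmanager put-secret-value --secret-id mysecret --secret-string '{"APP_ENV":"staging","APP_DEBUG":"false"}'
The provider's jmesPath extraction appears to require the extracted fields to be strings, which would explain why the boolean tripped it up.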
I'm writing CloudFormation code to build an ECS cluster, and I need to fetch some values from AWS Parameter Store. I can't find any example code for this. It looks like 'ValueFrom' isn't supported in CloudFormation's Environment section.
Can anyone confirm?
This is what I'm trying to use:
ContainerDefinitions:
  - Name: !Ref ServiceName
    Image: !Ref Image
    PortMappings:
      - ContainerPort: !Ref ContainerPort
    Environment:
      - Name: DB_HOST
        Value: arn:aws:ssm:us-east-2:111111111111:parameter/dev/rds/DB_HOST
      - Name: DB_PASSWORD
        Value: arn:aws:ssm:us-east-2:111111111111:parameter/dev/rds/DB_PASSWORD
      - Name: DB_PORT
        Value: 5432
In the above case, the CloudFormation template executes without error, but it treats DB_HOST and DB_PASSWORD as plain text rather than reading them from Parameter Store (check the highlighted screenshot).
So it only works for DB_PORT, and doesn't work for DB_HOST and DB_PASSWORD until I manually change 'Value' (highlighted in the screenshot) to 'ValueFrom' as in the picture below.
Basically, I'd like to use the 'ValueFrom' option through CloudFormation.
I also tried:
Environment:
  - Name: DB_HOST
    ValueFrom: arn:aws:ssm:us-east-2:111111111111:parameter/dev/rds/DB_HOST
But that's not supported by CloudFormation and gives an error.
You shouldn't be using Environment for that. Instead, there is a dedicated section called Secrets.
Using this section you can pass your secrets to the containers. For example:
Secrets:
  - Name: DB_HOST
    ValueFrom: arn:aws:ssm:us-east-2:111111111111:parameter/dev/rds/DB_HOST
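A fuller sketch of how that might fit into the container definition (the ARNs are the ones from the question; note, as an assumption about your setup, that the task execution role also needs permission such as ssm:GetParameters to read those parameters):
ContainerDefinitions:
  - Name: !Ref ServiceName
    Image: !Ref Image
    Environment:               # plain values stay under Environment
      - Name: DB_PORT
        Value: 5432
    Secrets:                   # Parameter Store / Secrets Manager references go here
      - Name: DB_HOST
        ValueFrom: arn:aws:ssm:us-east-2:111111111111:parameter/dev/rds/DB_HOST
      - Name: DB_PASSWORD
        ValueFrom: arn:aws:ssm:us-east-2:111111111111:parameter/dev/rds/DB_PASSWORD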
After you manually change your environment variables to ValueFrom, you can check the JSON of the task definition: they show up there as secrets, hence you should use Secrets instead of Environment in the ContainerDefinitions section of your CloudFormation template.
Check out the screenshot I attached.
I have a deployment comprising a managed instance group and two instance templates (A and B). The deployment was initially created with the instance group referencing instance template A.
I tried updating the sourceImage in instance template B using deployment manager (gcloud beta deployment-manager deployments update my-deployment --template ...), but got the following error:
ERROR: (gcloud.beta.deployment-manager.deployments.update) Error in
Operation [operation-1538798895713-57787898f4ae9-8b478716-0bb72a09]:
errors:
- code: NO_METHOD_TO_UPDATE_FIELD
message: No method found to update field 'properties' on
resource 'fwp-app-preprod-instance-template-a' of type
'compute.v1.instanceTemplate'. The resource may need to be
recreated with the new field.
I should make it clear that the only change I made from the original deployment is the instance template's sourceImage.
Is it possible to perform an update of an instance template via deployment manager so that it references an updated sourceImage?
The error states clearly that the resource (instance template) may need to be recreated, and I'm happy for deployment manager to do that. But I have no idea how to instruct/force deployment manager to take that action.
I don't doubt it can be done outside of deployment manager, but I want to avoid configuration drift.
My app.jinja.schema:
imports:
  - path: instance-group.jinja
  - path: instance-template.jinja
My app.jinja:
resources:
  - name: instance-template-a
    type: instance-template.jinja
    properties:
      name: {{ env["deployment"] }}-instance-template-a
      machineType: g1-small
      sourceImage: "projects/my-project/global/images/my-image"
      diskSizeGb: '30'
  - name: instance-template-b
    type: instance-template.jinja
    properties:
      name: {{ env["deployment"] }}-instance-template-b
      machineType: g1-small
      sourceImage: "projects/my-project/global/images/my-image"
      diskSizeGb: '30'
  - name: fwp-instance-group
    type: instance-group.jinja
My instance-group.jinja:
resources:
  - name: 'instance-group-{{ env["deployment"] }}'
    type: compute.v1.regionInstanceGroupManager
    properties:
      baseInstanceName: ig-instance-{{ env["deployment"] }}
      instanceTemplate: '$(ref.{{ env["deployment"] }}-instance-template-a.selfLink)'
      targetSize: 1
      region: australia-southeast1
  - name: 'autoscaler-{{ env["deployment"] }}'
    type: compute.v1.regionAutoscalers
    properties:
      autoscalingPolicy:
        coolDownPeriodSec: 60
        loadBalancingUtilization:
          utilizationTarget: 0.9
        maxNumReplicas: 10
        minNumReplicas: 2
      target: $(ref.instance-group-{{ env["deployment"] }}.selfLink)
      region: australia-southeast1
And my instance-template.jinja:
resources:
  - name: {{ properties["name"] }}
    type: compute.v1.instanceTemplate
    properties:
      name: {{ properties["name"] }}
      description: ''
      properties:
        machineType: {{ properties["machineType"] }}
        tags:
          items:
            - no-ip
            - web-server
            - http-server
            - https-server
        disks:
          - type: 'PERSISTENT'
            boot: true
            mode: 'READ_WRITE'
            autoDelete: true
            deviceName: instance-device
            initializeParams:
              sourceImage: {{ properties["sourceImage"] }}
              diskType: 'pd-standard'
              diskSizeGb: {{ properties["diskSizeGb"] }}
        canIpForward: false
        networkInterfaces:
          - network: projects/my-project/global/networks/vpc-fwp-nonprod
            subnetwork: projects/my-project/regions/australia-southeast1/subnetworks/subnet-private-fwp-nonprod
            aliasIpRanges: []
        labels: { environment: {{ env["deployment"] }}, tenancy: "fwp-nonprod" }
        scheduling:
          preemptible: false
          onHostMaintenance: MIGRATE
          automaticRestart: true
          nodeAffinities: []
        serviceAccounts:
          - email: some-service-account#developer.gserviceaccount.com
            scopes:
              - https://www.googleapis.com/auth/cloud-platform
To recap the comments:
The DM config includes an instance template for the managed instance group, and the change of sourceImage is attempting to change the image used in that template.
Unfortunately, instance templates are immutable once created:
"So it is not possible to update an existing instance template or change an instance template after it has been created."
This explains the error message returned. The proper way to change the image used by a managed instance group is to create a new template and perform a rolling update on the group using the new instance template.
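Outside Deployment Manager, the rolling update itself might look something like this (a sketch; the group and template names are derived from the config in the question and may differ in your project):
gcloud compute instance-groups managed rolling-action start-update \
    instance-group-my-deployment \
    --version template=my-deployment-instance-template-b \
    --region australia-southeast1
If you want to stay entirely inside Deployment Manager, the common workaround is to give the replacement template a new name property (for example a version suffix you bump whenever sourceImage changes), so the update creates a fresh, immutable template that the instance group reference can then point to.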
On Kubernetes 1.6.1 (OpenShift 3.6 CP) I'm trying to get the subdomain of my cluster using $(OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN), but it's not being dereferenced at runtime. I'm not sure what I'm doing wrong; the docs show this is how environment variables should be referenced.
https://v1-6.docs.kubernetes.io/docs/api-reference/v1.6/#container-v1-core
- apiVersion: v1
  kind: DeploymentConfig
  spec:
    template:
      metadata:
        labels:
          deploymentconfig: ${APP_NAME}
        name: ${APP_NAME}
      spec:
        containers:
          - name: myapp
            env:
              - name: CLOUD_CLUSTER_SUBDOMAIN
                value: $(OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN)
You'll need to set that value as an environment variable. This is the usage:
oc set env <object-selection> KEY_1=VAL_1
For example, if your pod is named foo and your subdomain is foo.bar, you would use this command:
oc set env dc/foo OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN=foo.bar
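Note that $(VAR) substitution only resolves variables already defined for the container itself, so another option is to declare the variable earlier in the same env list (a sketch; the subdomain value is a placeholder):
env:
  - name: OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN
    value: apps.example.com
  - name: CLOUD_CLUSTER_SUBDOMAIN
    value: $(OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN)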
With Test Kitchen, in the yaml configs... where is the best place to store globally used attributes that apply to multiple platforms and multiple suites?
To use my .kitchen.yml as an example:
---
provisioner:
  name: chef_solo
platforms:
  - name: centos-6.5
    driver:
      name: vagrant
  - name: amazon
    driver:
      name: ec2
      image_id: ami-ed8e9284
      flavor_id: t2.medium
      aws_ssh_key_id: <snip>
      ssh_key: <snip>
      availability_zone: us-east-1a
      subnet_id: subnet-<snip>
      require_chef_omnibus: true
      iam_profile_name: <snip>
      ebs_delete_on_termination: true
      security_group_ids: sg-<snip>
# area in question (does not work here)
attributes:
  teamcity:
    server: 'build.example.com'
    port: 80
    username: 'example'
    password: 'example'
# end area in question
suites:
  - name: resin4
    run_list:
      - recipe[example_server::resin4]
      - recipe[example_server::deploy_all_artifacts]
  - name: deploy
    run_list:
      - recipe[example_server::deploy_all_artifacts]
  - name: default
    run_list:
      - recipe[example_server::elasticsearch]
      - recipe[example_server::resin4]
      - recipe[example_server::deploy_all_artifacts]
I know there are other kitchen files, such as ~/kitchen/config.yml and .kitchen.local.yml, but I've been unable to find a way to get attributes to apply to all platforms and suites. Is copying and pasting attributes into each platform the best way?
Is there a reason to specify these attributes in Kitchen's YAML and not in recipe[example_server::deploy_all_artifacts]? If necessary, you could set overrides in Kitchen.
Also, this post might be helpful: Access Attributes Across Recipes
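If you do keep them in Kitchen, a sketch of what a per-suite override could look like (attribute names taken from the question; suite-level attributes are merged only into that suite):
suites:
  - name: deploy
    run_list:
      - recipe[example_server::deploy_all_artifacts]
    attributes:
      teamcity:
        server: 'build.example.com'
        port: 80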