Inject values into a 3rd-party Helm YAML template of kind Pod

I am using the 3rd-party Bitnami repository for MySQL.
I know that values from values.yaml can be injected easily if the chart is listed in the dependencies section, i.e., if I add a dependency in Chart.yaml:
dependencies:
  - name: mysql
    version: 8.8.23
    repository: "https://charts.bitnami.com/bitnami"
and in values.yaml:
mysql:
  auth:
    rootPassword: "12345"
    database: my_database
  primary:
    service:
      type: LoadBalancer
The root password is injected into bitnami/mysql under the correct parameter, auth.rootPassword.
But what if I have my own pod.yaml in the templates folder:
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: mysql-pod
      image: bitnami/mysql
How can I inject the password and other parameters into this file, the same way I did with the values.yaml file? I need to pass auth.rootPassword, etc...
Also, is there a way to refer to exactly the same pod that is created by the dependency, rather than creating another instance?

The chart contains a lot of things – a StatefulSet, a matching Service, a PodDisruptionBudget, a ConfigMap, and so on. You can't force all of that into a single Pod, and in general you can't refer to things in other Helm charts without including them as dependencies, as you show originally.
Bitnami also happens to publish a separate bitnami/mysql Docker image that you could list as the image: in a StatefulSet's pod spec, but you would have to reconstruct all of the other machinery in that chart yourself.
Also, is there a way to refer to exactly the same pod that is created by the dependency, rather than creating another instance?
There's a typical convention in Helm that most objects are named RELEASE-CHART-SUFFIX, squashing together RELEASE and CHART if they're the same. You don't usually care about the StatefulSet's generated Pod so much as the Service that reaches it, which is generated by a typical Helm Service YAML file. If you're not setting various overrides, and aren't using replicated mode, then in this dependency context, you can combine .Release.Name and the dependency chart name mysql and no suffix to get
- name: MYSQL_HOST
  value: {{ .Release.Name }}-mysql
I'm not aware of a more general way to get the Service's name. (I'm not aware of any charts at all that use Helm's exports mechanism and since it only republishes values it couldn't contain computed values in any case.)
For other details like the credentials, you can either refer to the generated Secret, or just use the .Values.mysql... directly.
- name: MYSQL_USER
  value: {{ .Values.mysql.auth.username }}
- name: MYSQL_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ .Release.Name }}-mysql
      key: mysql-password
I don't think there's a good way to figure out these names without either reading the chart's source yourself, observing what the chart actually installs, or reading the chart's documentation (the Bitnami charts are fairly well-documented but others may not be).
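Putting those pieces together, a minimal sketch of a templates/pod.yaml in your own chart might look like the following (the pod and container names here are arbitrary, and the Secret name and mysql-password key follow the Bitnami naming conventions described above, so verify them against the installed chart):
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Release.Name }}-mysql-client
spec:
  containers:
    - name: mysql-client
      image: bitnami/mysql
      env:
        - name: MYSQL_HOST
          value: {{ .Release.Name }}-mysql
        - name: MYSQL_USER
          value: {{ .Values.mysql.auth.username }}
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: {{ .Release.Name }}-mysql
              key: mysql-password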

Related

ArgoCD ApplicationSet - How to preserve application and resources even when ApplicationSet is Deleted or Corrupted

I have an ApplicationSet which creates a few resources in Kubernetes. It is working fine. But when I delete this ApplicationSet, the relevant Application also gets deleted from Argo, along with its resources. (I know this is the expected behavior of the ApplicationSet controller.) I want to prevent this from happening.
Scenario: sometimes, when the ApplicationSet is corrupted, it will destroy the Application associated with it. The same happens when the ApplicationSet is deleted.
I was reading this document about setting .syncPolicy.preserveResourcesOnDeletion to true in the ApplicationSet, but it doesn't work as expected. This is my current sync policy:
syncPolicy:
  automated:
    selfHeal: true
  syncOptions:
    - Validate=true
    - CreateNamespace=true
    - preserveResourcesOnDeletion=true
Question: How can I keep my Application safe, even when the ApplicationSet is deleted/corrupted?
There are two different places you can set a sync policy:
- on the Application resource (i.e. in spec.template in the ApplicationSet)
- on the ApplicationSet resource
You're looking for the second one. Set this in your ApplicationSet resource:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
spec:
  syncPolicy:
    preserveResourcesOnDeletion: true
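For orientation, here is a hedged sketch of where the two policies sit in a single manifest (the generator, names, and repo URL are placeholders, not taken from the question):
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-appset
spec:
  generators:
    - list:
        elements:
          - env: dev
  # ApplicationSet-level policy: keeps the generated Applications and their
  # resources when the ApplicationSet itself is deleted
  syncPolicy:
    preserveResourcesOnDeletion: true
  template:
    metadata:
      name: 'my-app-{{env}}'
    spec:
      project: default
      source:
        repoURL: https://example.com/deploy-repo.git
        targetRevision: HEAD
        path: manifests/{{env}}
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{env}}'
      # Application-level policy: controls how each generated Application syncs
      syncPolicy:
        automated:
          selfHeal: true
        syncOptions:
          - Validate=true
          - CreateNamespace=true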
I just noticed that the syncPolicy for that purpose should be written as
syncPolicy:
  preserveResourcesOnDeletion: true
  automated:
    selfHeal: true
  syncOptions:
    - Validate=true
    - CreateNamespace=true
When you install Argo CD, you can set the values like this:
applicationSet:
  args:
    policy: create-only
With this, the ApplicationSet controller will only create Applications once; even if something happens to the ApplicationSet, its Applications won't be affected.
By default, the Argo CD deletion action is cascaded, which means that all the resources created by the AppSet will be deleted.
What you need is to set the cascade option to false when you attempt to delete the Application/ApplicationSet, something similar to the below:
kubectl delete ApplicationSet (NAME) --cascade=false
for more information take a look at the docs here https://argocd-applicationset.readthedocs.io/en/stable/Application-Deletion/

How can I use conditional configuration in serverless.yml for lambda?

I need to configure a lambda via serverless.yml to use different provisioned concurrency for different environments. Below is my lambda configuration:
myLambda:
  handler: src/lambdas
  name: myLambda
  provisionedConcurrency: ${self:custom.pc}
  ...
custom:
  pc: ${env:PC}
The value PC is loaded from an environment variable. It works for values greater than 0, but I can't set a value of 0 in one environment. What I want to do is disable provisioned concurrency in the dev environment.
I have read through this doc https://forum.serverless.com/t/conditional-serverless-yml-based-on-stage/1763/3 but it doesn't seem to help in my case.
How can I set provisionedConcurrency conditionally based on the environment?
Method 1: Stage-based variables via default values
This is a fairly simple trick using a cascading variable: the first value is the one you want, and the second is a default, or fallback, value.
# serverless.yml
provider:
  stage: "dev"
custom:
  provisionedConcurrency:
    live: 100
    staging: 50
    other: 10
myLambda:
  handler: src/lambdas
  name: myLambda
  provisionedConcurrency: ${self:custom.provisionedConcurrency.${self:provider.stage}, self:custom.provisionedConcurrency.other}
The above, with stage set to dev, will fall back to the "other" value of 10, but if you set the stage via serverless deploy --stage live then it will use the live value of 100.
See here for more details: https://www.serverless.com/framework/docs/providers/aws/guide/variables#syntax
Method 2: Asynchronous value via JavaScript
You can use a JS include and put your conditional logic there. It's called "asynchronous value support". This allows you to put logic in a JavaScript file which you include, and it can return different values depending on various things (like what AWS account you're on, or whether certain variables are set). In short, it allows you to do this...
provisionedConcurrency: ${file(./detect_env.js):get_provisioned_concurrency}
This works if you create a JavaScript file in the same folder called detect_env.js, with contents similar to...
module.exports.get_provisioned_concurrency = () => {
  // Detect which env you are deploying to; checking a STAGE environment
  // variable here is just one illustrative option
  if (process.env.STAGE === 'live') {
    return Promise.resolve('100');
  } else {
    // Otherwise fall back to 10
    return Promise.resolve('10');
  }
};
For more info see: https://www.serverless.com/framework/docs/providers/aws/guide/variables#with-a-new-variables-resolver
I felt I had to reply here even though this was asked months ago, because none of the existing answers really address the problem, and I didn't want the author or anyone who lands here to be led astray.
For really sticky problems, I find it's useful to drop down to the CloudFormation template instead and use the CloudFormation intrinsic functions.
For this case, if you know all the environments, you could use Fn::FindInMap:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-findinmap.html
Or, if it's JUST production that needs 0, you could use the conditional Fn::If with a boolean Condition in the CloudFormation template: if the environment equals production, use 0, else use the templated value from SLS.
Potential SLS:
resources:
  Conditions:
    UseZero: !Equals ["production", "${self:provider.stage}"]
  Resources:
    myLambda:
      ProvisionedConcurrency: !If [UseZero, 0, "${self:custom.pc}"]
You can explicitly remove the ProvisionedConcurrency property as well if you want:
resources:
  Conditions:
    UseZero: !Equals ["production", "${self:provider.stage}"]
  Resources:
    myLambda:
      ProvisionedConcurrency: !If [UseZero, !Ref "AWS::NoValue", "${self:custom.pc}"]
Edit: You can still use SLS to deploy; it simply compiles into a CloudFormation JSON template, which you can explicitly modify with the SLS resources field.
The Serverless Framework provides a really useful dashboard tool with a feature called Parameters. Essentially, it lets you connect your service to the dashboard, set different values for different stages, and then use those values in your serverless.yml with syntax like ${param:VARIABLE_NAME_HERE}; the value gets replaced at deploy time with the right one for whatever stage you are currently deploying. Super handy. There are also a bunch of other features in the dashboard, such as monitoring and troubleshooting.
You can find out more about Parameters at the official documentation here: https://www.serverless.com/framework/docs/guides/parameters/
And how to get started with the dashboard here: https://www.serverless.com/framework/docs/guides/dashboard#enabling-the-dashboard-on-existing-serverless-framework-services
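For illustration, a hedged sketch of the serverless.yml side, assuming you created a dashboard Parameter named provisionedConcurrency for each stage (that name is just an example):
myLambda:
  handler: src/lambdas
  name: myLambda
  # resolved at deploy time from the dashboard Parameter for the current stage
  provisionedConcurrency: ${param:provisionedConcurrency}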
Just use a variable with a null value for dev environments during deploy/package, and SLS will skip this property:
provisionedConcurrency: ${self:custom.variables.provisionedConcurrency}
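A minimal sketch of one way to wire that up, assuming every stage you deploy is listed in the map (the stage names and numbers are placeholders):
custom:
  variables:
    provisionedConcurrency: ${self:custom.pcByStage.${self:provider.stage}}
  pcByStage:
    live: 100
    staging: 50
    dev: null # resolves to null, so the property is dropped for dev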

How to remove a service account key on GCP using Ansible Playbook?

I am using an Ansible playbook to run certain modules that create service accounts and their respective keys. The code used to generate them is as found in the Ansible documentation:
- name: create a service account key
  gcp_iam_service_account_key:
    service_account: "{{ serviceaccount }}"
    private_key_type: TYPE_GOOGLE_CREDENTIALS_FILE
    path: "~/test_account.json"
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
Now I am trying to remove the service account key, so I changed the state value from present to absent, but that doesn't seem to do much. Am I missing something, or is there anything else I could try?
I'm not sure if it is possible, since I couldn't find it in the module's Ansible documentation, but in the instance-deletion examples I see that a delete tag is used along with the absent state; it could be a way to do it for the SA, e.g.
state: absent
tags:
  - delete
Another way that could be useful is to make the request directly against the REST API, e.g.
DELETE https://iam.googleapis.com/v1/projects/[PROJECT-ID]/serviceAccounts/[SA-NAME]#[PROJECT-ID].iam.gserviceaccount.com/keys/[KEY-ID]
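(The request needs an OAuth 2.0 access token in an Authorization: Bearer header; one can be obtained with gcloud auth print-access-token.)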
I can confirm that it works when changing state from present to absent in version 1.0.2 of the google.cloud collection.
I believe that you expect the file in path: "~/test_account.json" to be deleted but in fact the key is deleted on the service account in GCP. You will have to delete the file yourself after the task has completed successfully.
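For completeness, a hedged sketch of the deletion side, reusing the same parameters from the question with only state changed, plus a follow-up task to remove the local file, since the module only deletes the key in GCP:
- name: delete the service account key in GCP
  google.cloud.gcp_iam_service_account_key:
    service_account: "{{ serviceaccount }}"
    private_key_type: TYPE_GOOGLE_CREDENTIALS_FILE
    path: "~/test_account.json"
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: absent

- name: remove the local copy of the key file
  ansible.builtin.file:
    path: "~/test_account.json"
    state: absent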

Does Deployment Manager have Cloud Functions support (and support for having multiple cloud functions)?

I'm looking at this repo and am very confused about what's happening here: https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/cloud_functions
In other Deployment Manager examples I see the "type" is set to the type of resource being deployed but in this example I see this:
resources:
  - name: function
    type: cloud_function.py # why not "type: cloudfunctions"?
    properties:
      # All the files that start with this prefix will be packed in the Cloud Function
      codeLocation: function/
      codeBucket: mybucket
      codeBucketObject: function.zip
      location: us-central1
      timeout: 60s
      runtime: nodejs8
      availableMemoryMb: 256
      entryPoint: handler
"type" is pointing to a python script (cloud_function.py) instead of a resource type. The script is over 100 lines long and does a whole bunch of stuff.
This looks like a hack, like it's just scripting the GCP APIs. The reason I'd ever want to use something like Deployment Manager is to avoid a mess of deployment scripts, but this looks like more spaghetti.
Does Deployment Manager not support Cloud Functions and this is a hacky workaround, or is this how it's supposed to work? The docs for this example are sparse, so I don't know what's happening.
Also, I want to deploy multiple functions in a single Deployment Manager stack - will I have to edit the cloud_function.py script, or can I just define multiple resources and have them all point to the same script?
Edit
I'm also confused about what these two imports are for at the top of the cloud_function.yaml:
imports:
  # The function code will be defined for the files in function/
  - path: function/index.js
  - path: function/package.json
Why is it importing the actual code of the function it's deploying?
Deployment Manager simply interacts with the different Google APIs. This documentation gives you a list of resource types supported by Deployment Manager. I would recommend running the command gcloud deployment-manager types list | grep function, and you will find that the cloudfunctions.v1beta2.function resource type is also supported by DM.
The template is using a gcp-type (that is in beta). The cloud_function.py file is a template. If you use a template, you can reuse it for multiple resources; you can see this example. For a better understanding (easier to read/follow), you can check this example of Cloud Functions through a gcp-type.
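On the multiple-functions part of the question: since cloud_function.py is just a template, a hedged sketch of reusing it for two resources could look like the following (the names, buckets, and paths are placeholders, and each function's source files would also need to be listed under imports):
resources:
  - name: function-one
    type: cloud_function.py
    properties:
      codeLocation: function-one/
      codeBucket: mybucket
      codeBucketObject: function-one.zip
      location: us-central1
      timeout: 60s
      runtime: nodejs8
      availableMemoryMb: 256
      entryPoint: handler
  - name: function-two
    type: cloud_function.py
    properties:
      codeLocation: function-two/
      codeBucket: mybucket
      codeBucketObject: function-two.zip
      location: us-central1
      timeout: 60s
      runtime: nodejs8
      availableMemoryMb: 256
      entryPoint: handler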
I want to add to the answer by Aarti S that gcloud deployment-manager types list | grep function didn't work for me; instead, I found how to list all resource types, including resources that are in alpha:
gcloud beta deployment-manager types list --project gcp-types
Or just gcloud beta deployment-manager types list | grep function helps.

Is there a way to set project metadata in GCP deployment manager

We are using GCP Deployment Manager for our infrastructure releases. We need a shared place that is accessible from all groups (e.g. project metadata). I think it would be great if we could have it as part of our infrastructure as code, so we could connect it with all the groups.
I think that for now there is no such resource in GCP Deployment Manager, but I also would not like to have a separate script that does this update outside the pattern.
Can someone help with this? What is the best way to store common metadata in the cloud, and if the cloud does not provide the right tool, how can we solve this issue in a clean/nice way?
Setting project-wide metadata is done using the compute.v1.projects API, which is not supported as a DM resource type. You can view a list of the supported resources for DM here.
You may want to suggest support for this resource through a Feature Request.
Here is a YAML config file and its template for you:
The project.yaml configuration:
# Set project metadata
imports:
  - path: project.jinja
resources:
  - name: project
    type: project.jinja
    properties:
      key: 'abcd'
      value: 1234
And the project.jinja template:
{#
  Template: Set Project Metadata
#}
resources:
  - name: data
    action: gcp-types/compute-v1:compute.projects.setCommonInstanceMetadata
    metadata:
      runtimePolicy:
        - UPDATE_ON_CHANGE
    properties:
      items:
        - key: {{ properties["key"] }}
          value: {{ properties["value"] }}
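The config can then be deployed like any other Deployment Manager configuration, e.g. gcloud deployment-manager deployments create project-metadata --config project.yaml (the deployment name project-metadata is just an example).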