ArgoCD ApplicationSet - How to preserve application and resources even when ApplicationSet is Deleted or Corrupted - argocd

I have an ApplicationSet which creates a few resources in Kubernetes. It is working fine. But when I delete this ApplicationSet, the relevant Application also gets deleted from Argo, along with its resources. (I know this is expected behaviour of the ApplicationSet controller.) But I want to prevent this from happening.
Scenario: sometimes, when the ApplicationSet is corrupted, it destroys the Application associated with it. The same happens when the ApplicationSet is deleted.
I was reading the documentation on setting .syncPolicy.preserveResourcesOnDeletion to true in the ApplicationSet, but it doesn't work as expected. This is my current sync policy:
syncPolicy:
  automated:
    selfHeal: true
  syncOptions:
    - Validate=true
    - CreateNamespace=true
    - preserveResourcesOnDeletion=true
Question: How can I keep my Application safe, even when the ApplicationSet is deleted/corrupted?

There are two different places you can set a sync policy:
on the Application resource (i.e. in spec.template in the ApplicationSet)
on the ApplicationSet resource
You're looking for the second one. Set this in your ApplicationSet resource:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
spec:
  syncPolicy:
    preserveResourcesOnDeletion: true
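For context, here is a hedged sketch of how the two locations relate inside a single ApplicationSet manifest (the generator, repo URL and names below are placeholders, not taken from the question):
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-appset                      # placeholder name
spec:
  # ApplicationSet-level sync policy: governs what happens when the
  # ApplicationSet itself is deleted or a generated Application is pruned.
  syncPolicy:
    preserveResourcesOnDeletion: true
  generators:
    - list:
        elements:
          - cluster: in-cluster        # placeholder generator
  template:
    metadata:
      name: 'my-app-{{cluster}}'
    spec:
      project: default
      source:
        repoURL: https://example.com/my-repo.git   # placeholder
        path: .
        targetRevision: HEAD
      destination:
        server: https://kubernetes.default.svc
        namespace: default
      # Application-level sync policy: automated/syncOptions belong here.
      syncPolicy:
        automated:
          selfHeal: true
        syncOptions:
          - Validate=true
          - CreateNamespace=true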

I just noticed that the syncPolicy for that purpose should be written as
syncPolicy:
  preserveResourcesOnDeletion: true
  automated:
    selfHeal: true
  syncOptions:
    - Validate=true
    - CreateNamespace=true

When you install Argo CD you can set the values like this:
applicationSet:
  args:
    policy: create-only
With the create-only policy, the ApplicationSet controller only creates Applications and never updates or deletes them, so even if something happens to the ApplicationSet, its Applications won't be affected.
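Under the hood that Helm value is passed to the ApplicationSet controller as its --policy argument; roughly something like this (a sketch, not the chart's exact rendered manifest):
containers:
  - name: argocd-applicationset-controller
    args:
      - --policy
      - create-only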

By default, the Argo CD deletion action is cascading, which means that all the resources created by the ApplicationSet will be deleted.
What you need is to set the cascade option to false when you are deleting the Application/ApplicationSet, something similar to the below:
kubectl delete ApplicationSet (NAME) --cascade=false
for more information take a look at the docs here https://argocd-applicationset.readthedocs.io/en/stable/Application-Deletion/


inject value in 3rd party helm yaml template of Pod kind

I am using the 3rd-party Bitnami repository for MySQL.
I know that values.yaml can be injected easily if that chart is in the dependencies section.
I.e., if I add a dependency in Chart.yaml:
dependencies:
  - name: mysql
    version: 8.8.23
    repository: "https://charts.bitnami.com/bitnami"
and in values.yaml:
mysql:
  auth:
    rootPassword: "12345"
    database: my_database
  primary:
    service:
      type: LoadBalancer
The root password is injected into bitnami/mysql via the proper parameter auth.rootPassword.
But in case I have my own pod.yaml in the templates folder:
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: mysql-pod
      image: bitnami/mysql
How can I inject the password and other parameters into this file, the same way I did with the values.yaml file?
I need to pass auth.rootPassword, etc...
Also, is there a way to refer to exactly the same pod that is created by the dependency, rather than creating another instance?
The chart contains a lot of things – a StatefulSet, a matching Service, a PodDisruptionBudget, a ConfigMap, and so on. You can't force that all into a single Pod, and in general you can't refer to things in Helm charts without including them as dependencies as you show originally.
Bitnami also happens to publish a separate bitnami/mysql Docker image that you could list as the image: in a StatefulSet's pod spec, but you would have to reconstruct all of the other machinery in that chart yourself.
Also, is there a way to refer to exactly the same pod that is created by the dependency, rather than creating another instance?
There's a typical convention in Helm that most objects are named RELEASE-CHART-SUFFIX, squashing together RELEASE and CHART if they're the same. You don't usually care about the StatefulSet's generated Pod so much as the Service that reaches it, which is generated by a typical Helm Service YAML file. If you're not setting various overrides, and aren't using replicated mode, then in this dependency context, you can combine .Release.Name and the dependency chart name mysql and no suffix to get
- name: MYSQL_HOST
  value: {{ .Release.Name }}-mysql
I'm not aware of a more general way to get the Service's name. (I'm not aware of any charts at all that use Helm's exports mechanism and since it only republishes values it couldn't contain computed values in any case.)
For other details like the credentials, you can either refer to the generated Secret, or just use the .Values.mysql... directly.
- name: MYSQL_USER
  value: {{ .Values.mysql.auth.username }}
- name: MYSQL_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ .Release.Name }}-mysql
      key: mysql-password
I don't think there's a good way to figure out these names without either reading the chart's source yourself, observing what the chart actually installs, or reading the chart's documentation (the Bitnami charts are fairly well-documented but others may not be).
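Putting the pieces together, a minimal sketch of what your own templates/pod.yaml could look like, assuming the dependency and values.yaml shown in the question (the pod name, image, and Secret key name are illustrative and should be checked against what the chart actually installs):
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Release.Name }}-mysql-client   # illustrative name
spec:
  containers:
    - name: app
      image: my-app:latest                  # placeholder image
      env:
        # Service created by the bitnami/mysql dependency (naming convention above)
        - name: MYSQL_HOST
          value: {{ .Release.Name }}-mysql
        # Read plain settings directly from the dependency's values
        - name: MYSQL_DATABASE
          value: {{ .Values.mysql.auth.database }}
        # Or pull credentials from the Secret the chart generates
        # (key name assumed from the Bitnami chart; verify against the installed Secret)
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: {{ .Release.Name }}-mysql
              key: mysql-root-password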

Migrate a CDK managed CloudFront distribution from the CloudFrontWebDistribution to the Distribution API

I have an existing CDK setup in which a CloudFront distribution is configured using the deprecated CloudFrontWebDistribution API. Now I need to configure an OriginRequestPolicy, so after some Googling I switched to the Distribution API (https://docs.aws.amazon.com/cdk/api/latest/docs/aws-cloudfront-readme.html) and reused the same "id" -
Distribution distribution = Distribution.Builder.create(this, "CFDistribution")
When I synth the stack I already see in the yaml that the ID - e.g. CloudFrontCFDistribution12345689 - is a different one than the one before.
When trying to deploy, it fails, since the HTTP origin CNAMEs are already associated with the existing distribution: "Invalid request provided: One or more of the CNAMEs you provided are already associated with a different resource. (Service: CloudFront, Status Code: 409, Request ID: 123457657, Extended Request ID: null)"
Is there a way to either add the OriginRequestPolicy (I just want to transfer an additional header) to the CloudFrontWebDistribution or a way to use the new Distribution API while maintaining the existing distribution instead of creating a new one?
(The same operation takes around 3 clicks in the AWS Console).
You could use the following trick to assign the logical ID yourself instead of relying on the autogenerated logical ID. The other option is to execute it in two steps, first update it without the additional CNAME and then do a second update with the additional CNAME.
const cfDistro = new Distribution(this, 'distro', {...});
// defaultChild is the underlying CfnDistribution; cast it to access overrideLogicalId
(cfDistro.node.defaultChild as CfnDistribution).overrideLogicalId('CloudfrontDistribution');
This will result in the following stack:
CloudfrontDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    ...
Small edit to explain why this happens:
Since you're switching to a new construct, you're also getting a new logical ID. To ensure a rollback is possible, CloudFormation first creates all new resources and the updated resources that need to be recreated. Only when all the creating and updating is done does it clean up by removing the old resources. This is also why a two-step approach works when changing the logical IDs of resources, or why you can force a normal update by keeping the same logical ID.
Thanks a lot @stijndepestel - simply assigning the existing logical ID worked on the first try.
Here's the Java variant of the code in the answer
import software.amazon.awscdk.services.cloudfront.CfnDistribution;
...
((CfnDistribution) distribution.getNode().getDefaultChild()).overrideLogicalId("CloudfrontDistribution");

How can I use conditional configuration in serverless.yml for lambda?

I need to configure a lambda via serverless.yml to use different provision concurrency for different environments. Below is my lambda configuration:
myLambda:
  handler: src/lambdas
  name: myLambda
  provisionedConcurrency: ${self:custom.pc}
...
custom:
  pc: ${env:PC}
The value PC is loaded from an environment variable. It works for values greater than 0, but I can't set a value of 0 in one environment. What I want to do is disable provisioned concurrency in the dev environment.
I have read through this doc https://forum.serverless.com/t/conditional-serverless-yml-based-on-stage/1763/3 but it doesn't seem to help in my case.
How can I set provisionedConcurrency conditionally based on environment?
Method 1: Stage-based variables via default values
This is a fairly simple trick using cascading variables: the first value is the one you want, and the second is a default, or fallback, value.
# serverless.yml
provider:
  stage: "dev"

custom:
  provisionedConcurrency:
    live: 100
    staging: 50
    other: 10

myLambda:
  handler: src/lambdas
  name: myLambda
  provisionedConcurrency: ${self:custom.provisionedConcurrency.${self:provider.stage}, self:custom.provisionedConcurrency.other}
The above, with stage set to dev, will default to the "other" value of 10, but if you set the stage via serverless deploy --stage live then it will use the live value of 100.
See here for more details: https://www.serverless.com/framework/docs/providers/aws/guide/variables#syntax
Method 2: Asynchronous value via JavaScript
You can use a JS include and put your conditional logic there. It's called "asynchronous value support". This allows you to put logic in a JavaScript file which you include, and it can return different values depending on various things (like what AWS account you're on, or whether certain variables are set). Basically, it allows you to do this...
provisionedConcurrency: ${file(./detect_env.js):get_provisioned_concurrency}
Which works if you create a javascript file in this folder called detect_env.js, and it has the contents similar to...
module.exports.get_provisioned_concurrency = () => {
  if ("put logic to detect which env you are deploying to, eg for live") {
    return Promise.resolve('100');
  } else {
    // Otherwise fallback to 10
    return Promise.resolve('10');
  }
}
For more info see: https://www.serverless.com/framework/docs/providers/aws/guide/variables#with-a-new-variables-resolver
I felt I had to reply here even though this was asked months ago because none of the answers were even remotely close to the right answer and I really felt sorry for the author or anyone who lands here.
For really sticky problems, I find it's useful to go to the Cloudformation script instead and use the Cloudformation Intrinsic Functions.
For this case, if you know all the environments you could use Fn::FindInMap
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-findinmap.html
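For example, a rough sketch using a stage-keyed mapping (the mapping name and stage keys are illustrative, and the resource shape mirrors the loose style of the snippet further below):
resources:
  Mappings:
    ConcurrencyByStage:
      dev:
        pc: "0"
      production:
        pc: "100"
  Resources:
    myLambda:
      ProvisionedConcurrency: !FindInMap [ConcurrencyByStage, ${self:provider.stage}, pc]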
Or if it's JUST production which needs 0 then you could use the conditional Fn::If and a boolean Condition test in the Cloudformation template to test if environment equals production, use 0, else use the templated value from SLS.
Potential SLS:
resources:
  Conditions:
    UseZero: !Equals ["production", ${self:provider.stage}]
  Resources:
    myLambda:
      ProvisionedConcurrency: !If [UseZero, 0, ${self:custom.pc}]
You can explicitly remove the ProvisionedConcurrency property as well if you want:
resources:
  Conditions:
    UseZero: !Equals ["production", ${self:provider.stage}]
  Resources:
    myLambda:
      ProvisionedConcurrency: !If [UseZero, !Ref "AWS::NoValue", ${self:custom.pc}]
Edit: You can still use SLS to deploy; it simply compiles into a Cloudformation JSON template which you can explicitly modify with the SLS resources field.
The Serverless Framework provides a really useful dashboard tool with a feature called Parameters. Essentially, it lets you connect your service to the dashboard, set different values for different stages, and then use those values in your serverless.yml with syntax like ${param:VARIABLE_NAME_HERE}; they get replaced at deploy time with the right value for whatever stage you are currently deploying. Super handy. There are also a bunch of other features in the dashboard, such as monitoring and troubleshooting.
You can find out more about Parameters at the official documentation here: https://www.serverless.com/framework/docs/guides/parameters/
And how to get started with the dashboard here: https://www.serverless.com/framework/docs/guides/dashboard#enabling-the-dashboard-on-existing-serverless-framework-services
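As a quick sketch of what that looks like in serverless.yml (the parameter name is illustrative):
functions:
  myLambda:
    handler: src/lambdas
    # Resolved per stage from the Dashboard's Parameters at deploy time
    provisionedConcurrency: ${param:PROVISIONED_CONCURRENCY}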
Just use a variable with a null value for dev environments; on deploy/package, SLS will skip this property:
provisionedConcurrency: ${self:custom.variables.provisionedConcurrency}

How to remove a service account key on GCP using Ansible Playbook?

I am using an Ansible playbook to run certain modules that create service accounts and their respective keys. The code used to generate this is as found in the Ansible documentation:
- name: create a service account key
  gcp_iam_service_account_key:
    service_account: "{{ serviceaccount }}"
    private_key_type: TYPE_GOOGLE_CREDENTIALS_FILE
    path: "~/test_account.json"
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
Now I am trying to remove the service account key, so I changed the state value from present to absent, but that doesn't seem to do much. Am I missing something, or is there anything else I could try?
I'm not sure if this is possible, since I couldn't find the module in the Ansible documentation, but in the deletion examples for instances I see that after the absent state they use a tag for the deletion; it could be a way to do it for the SA, e.g.
state: absent
tags:
  - delete
Other way that could be useful is to directly do the request to the REST API, e.g.
DELETE https://iam.googleapis.com/v1/projects/[PROJECT-ID]/serviceAccounts/[SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com/keys/[KEY-ID]
I can confirm that it works when changing state from present to absent in version 1.0.2 of the google.cloud collection.
I believe that you expect the file in path: "~/test_account.json" to be deleted but in fact the key is deleted on the service account in GCP. You will have to delete the file yourself after the task has completed successfully.
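For reference, a sketch of the deletion task, mirroring the question's task with state flipped to absent (parameters as in the question; the fully qualified module name assumes the google.cloud collection mentioned above):
- name: delete a service account key
  google.cloud.gcp_iam_service_account_key:
    service_account: "{{ serviceaccount }}"
    private_key_type: TYPE_GOOGLE_CREDENTIALS_FILE
    path: "~/test_account.json"
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: absent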

GCP project creation via deploymentmanager

So I'm trying to create a project with Google Cloud Deployment Manager.
I've structured the setup roughly as below:
# Structure
Org -> Folder1 -> Seed-Project (location where I am running Deployment Manager from)

Organization:
  IAM:
    -> {Seed-Project-Number}@cloudservices.gserviceaccount.com:
      - Compute Network Admin
      - Compute Shared VPC Admin
      - Organisation Viewer
      - Project Creator
# DeploymentManager Resource:
type: cloudresourcemanager.v1.project
name: MyNewProject
parent:
  id: '{folder1-id}'
  type: folder
projectId: MyNewProject
The desired result is that MyNewProject should be created under Folder1.
However, it appears as if the deployment manager service account does not have sufficient permissions:
$ CLOUDSDK_CORE_PROJECT=Seed-Project gcloud deployment-manager deployments \
    create MyNewDeployment \
    --config config.yaml \
    --verbosity=debug
Error message:
- code: RESOURCE_ERROR
  location: /deployments/MyNewDeployment/resources/MyNewProject
  message: '{"ResourceType":"cloudresourcemanager.v1.project",
    "ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"message":"The
    caller does not have permission","status":"PERMISSION_DENIED","statusMessage":"Forbidden","requestPath":"https://cloudresourcemanager.googleapis.com/v1/projects/MyNewProject","httpMethod":"GET"}}'
I've done some digging, and it appears to be calling the resourcemanager.projects.get method; the 'Compute Shared VPC Admin (roles/compute.xpnAdmin)' role should provide this permission, as documented here: https://cloud.google.com/iam/docs/understanding-roles
Except that doesn't seem to be the case. What's going on?
Edit
I'd like to add some additional information gathered from debugging efforts:
These are the API requests from the deployment manager (from the seed project).
You can see that the caller is an anonymous service account; this isn't what I'd expect to see. (I'd expect to see {Seed-Project-Number}@cloudservices.gserviceaccount.com as the calling account here.)
Edit-2
config.yaml
imports:
  - path: composite_types/project/project.py
    name: project.py

resources:
  - name: MyNewProject
    type: project.py
    properties:
      parent:
        type: folder
        id: "{folder1-id}"
      billingAccountId: billingAccounts/REDACTED
      activateApis:
        - compute.googleapis.com
        - deploymentmanager.googleapis.com
        - pubsub.googleapis.com
      serviceAccounts: []
composite_types/project/* is an exact copy of the templates found here:
https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/community/cloud-foundation/templates/project
The key thing is that this is a GET operation, not an attempt to create the project. This is to verify global uniqueness of the project-id requested, and if not unique, PERMISSION_DENIED is thrown.
Lousy error message, lots of wasted developer hours!
Probably late, but just to share that I ran into a similar issue today. I double-checked every permission mentioned in the README for the service account under which the Deployment Manager job runs ({Seed-Project-Number}@cloudservices.gserviceaccount.com in the question), and it turned out that the Billing Account User role was not assigned, contrary to what I thought earlier. Granting that and running it again worked.