Ignore resources using kustomize - argocd

I'm deploying Argo CD onto our cluster but want to ignore the dex server resources (ServiceAccount, Deployment, Roles).
Is there a way to ignore these resources based on their argocd-dex-server label (all of the resources carry this label)?
Is $patch: delete the way to do it, or is there a better approach?
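For illustration, here is a minimal sketch of what this could look like in a kustomization.yaml, assuming a kustomize version whose patches field accepts a target labelSelector; the manifest URL, label key, and resource name are taken from the stock Argo CD manifests and should be treated as placeholders:

```yaml
# kustomization.yaml (sketch only; label and names are placeholders)
resources:
  - https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

patches:
  # Drop every resource carrying the dex-server label from the rendered output.
  # Depending on the kustomize version, one delete patch per kind (Deployment,
  # ServiceAccount, Role, ...) may be needed instead of a single label-based one.
  - target:
      labelSelector: "app.kubernetes.io/name=argocd-dex-server"
    patch: |-
      $patch: delete
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: argocd-dex-server
```

If the label-based targeting turns out not to be supported in your kustomize version, listing one $patch: delete patch per dex resource (by kind and name) achieves the same end result.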

Related

How to handle resources that must be created by Pulumi but afterwards are managed elsewhere?

For instance, to implement a Blue/Green deployment pipeline with CodeDeploy on AWS ECS, I first use Pulumi to provision Load Balancer, Listeners and Target Groups (among other resources). But after I supply the Listeners to create the DeploymentConfig on CodeDeploy, the Blue/Green deployment controller will manipulate the Listeners on each deployment.
At this point, if I run pulumi preview again, it shows that the Listeners have diverged (due to a new deployment) and Pulumi will attempt to "fix" that, conflicting with the deployment controller.
The only solution I see so far is to use pulumi state delete to make Pulumi stop tracking those resources, but it doesn't seem like the right way to go.
What is the proper way to handle resources that are first created by Pulumi but afterwards managed elsewhere?

Multicluster istio without exposing kubeconfig between clusters

I managed to get multicluster Istio working following the documentation.
However, this requires the kubeconfig of each cluster to be set up on the others. I am looking for an alternative to doing that. Based on a presentation from solo.io and Admiral, it seems that it might be possible to set up ServiceEntries to accomplish this manually. The Istio docs are scarce in this area. Does anyone have pointers on how to make this work?
There are some advantages to setting up the discovery manually or through our CD processes...
if one cluster gets compromised, the creds to the other clusters don't leak
allows us to limit which services are discovered
I posted the question on twitter as well and hope to get some feedback from the Istio contributors.
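To make the manual ServiceEntry idea concrete, here is a rough sketch, assuming the gateway-based multicluster model where a service in a remote cluster is reached through that cluster's ingress gateway; the host, gateway address, and service name are placeholders, not values from the question:

```yaml
# Sketch only: register a remote cluster's service without sharing kubeconfigs.
# Host name, gateway address, and ports below are placeholders.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: payments-remote
spec:
  hosts:
    - payments.remote.example.global            # logical name callers will use
  location: MESH_INTERNAL
  resolution: DNS
  ports:
    - number: 80
      name: http
      protocol: HTTP
  endpoints:
    - address: ingress.remote-cluster.example.com   # remote cluster's ingress gateway
      ports:
        http: 15443                                 # mTLS auto-passthrough port on the gateway
```

Such entries would be generated by your CD process or a controller (as Admiral does), rather than by giving each cluster credentials to the others.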
As per Admiral docs:
Admiral acts as a controller watching k8s clusters that have a credential stored as a secret object in the namespace Admiral is running in. Admiral delivers Istio configuration to each cluster to enable services to communicate.
No matter how you manage the control-plane configuration (manually or with a controller), you have to store and provision credentials somehow; in this case, that is done with secrets.
You can store your secrets securely in git with sealed-secrets.

How do I tag the ECS Cluster that's automatically created by an AWS Batch Compute Environment?

I am trying to deploy a Batch Compute Environment in a heavily restricted AWS environment. For billing purposes, all resources created need to be tagged (e.g. billTo: billId), and if I try to create a resource without this tag I am blocked by an explicit deny. When the Batch Compute Environment tries to create an ECS Cluster, I get the following error because it does not pass tags to it.
User: arn:aws:sts::<accountId>:assumed-role/<roleName> is not authorized to perform ecs:CreateCluster on resource: * with an explicit deny
There are two places to specify tags when creating a Batch Compute Environment (tag the compute environment and tag the EC2 resources used by the compute environment). I tried adding the billTo tag in both places but still hit the same error.
Does anyone know if it is possible to get Batch to tag the ECS Cluster it tries to create when making a new Batch Compute Environment?
Note: I also tried figuring out how to pass an existing ECS Cluster, but this is not possible (How to Set an existing ECS cluster to a compute environment in AWS Batch)
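For reference, a sketch of the two tag locations mentioned above as they would appear in a CloudFormation template; the role, subnet, and logical names are placeholders, and note that neither tag set reaches the ECS cluster that Batch creates behind the scenes, which is exactly the problem described:

```yaml
# Sketch only: the two documented places to tag a Batch compute environment.
BatchComputeEnvironment:
  Type: AWS::Batch::ComputeEnvironment
  Properties:
    Type: MANAGED
    ServiceRole: !GetAtt BatchServiceRole.Arn        # placeholder role
    Tags:                                            # 1) tags on the compute environment itself
      billTo: billId
    ComputeResources:
      Type: EC2
      MinvCpus: 0
      MaxvCpus: 16
      InstanceTypes:
        - optimal
      Subnets:
        - subnet-00000000                            # placeholder subnet
      InstanceRole: !GetAtt EcsInstanceProfile.Arn   # placeholder instance profile
      Tags:                                          # 2) tags applied to the EC2 resources
        billTo: billId
```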
As of 2021-07-15, AWS does not provide a way to tag the ECS Cluster automatically made by the Batch Compute Environment. The solution is to
Get in contact with the system administrators
Have them lift the SCP (service control policy) causing the explicit deny for a short window
Create the Batch Compute Environment and add the tag to the ECS Cluster
Have the system administrators put the SCP back in place
Hope you don't have to redeploy and bug the system administrators again
Hopefully AWS will fix this issue and allow the ECS Cluster to be tagged by the Batch Compute Environment.

Attach Capacity Provider to an ECS Cluster created in different CloudFormation stacks

My team's current task is to develop an ECS Cluster to be able to migrate from Elastic Beanstalk.
Since we have our entire infrastructure in CloudFormation, we had to stitch this implementation into our current templates.
The idea is to have the cluster created separately with the underlying infrastructure that will be shared between the services that will eventually be deployed to the cluster.
There is a different template for the empty ECS Cluster and another for the Services (and all resources specific to Services).
The capacity providers are created with (and attached to) the services. From what I could find, there is no way in CloudFormation to make a capacity provider appear as part of the cluster after the cluster is created. The capacity provider appears on the Services but not on the cluster, and the Service ends up with no resources to provision its tasks.
The workaround I have right now is to define a Lambda-backed custom resource that calls the putClusterCapacityProviders action and passes an empty array to the defaultCapacityProviderStrategy property.
This strategy is working but feels a little too hacky. Am I missing something? Is there another way to have the capacity providers appear on the cluster after the cluster is created?
CloudFormation has been updated to support attaching Capacity Providers to existing Clusters via the AWS::ECS::ClusterCapacityProviderAssociations resource.
See https://aws.amazon.com/blogs/containers/managing-compute-for-amazon-ecs-clusters-with-capacity-providers/ for details.
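A hedged sketch of how that could look from the services stack, assuming the cluster stack exports its cluster name; the export name and the capacity provider's logical ID below are placeholders:

```yaml
# Sketch only: associate a capacity provider (defined in this stack) with the
# shared cluster created in another stack. "SharedEcsClusterName" is an assumed
# export from the cluster stack; "ServiceCapacityProvider" is an assumed
# AWS::ECS::CapacityProvider resource elsewhere in this template.
ClusterCapacityProviderAssociations:
  Type: AWS::ECS::ClusterCapacityProviderAssociations
  Properties:
    Cluster: !ImportValue SharedEcsClusterName
    CapacityProviders:
      - !Ref ServiceCapacityProvider
    DefaultCapacityProviderStrategy:
      - CapacityProvider: !Ref ServiceCapacityProvider
        Weight: 1
```

This removes the need for the Lambda-backed custom resource described above.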

How to expand Kubernetes node instance profiles created by kops?

I'm running a Kubernetes cluster on AWS, managed with kops. Now I want to run external-dns, which requires additional permissions in the nodes' instance role. My question is: what is the best way to make these changes?
I could edit the role manually in AWS, but I want to automate my setup. I could also edit the role through the API (using the CLI, CloudFormation, Terraform, etc.), but then I have a two-phase setup, which seems fragmented and inelegant. Ideally I'd want to tell kops about my additional needs and have it manage them alongside the ones it manages itself. Is there any way to do this?
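One way to express this in kops itself is a sketch like the following, assuming the cluster spec's additionalPolicies field; the Route53 actions listed are only an example of what external-dns typically needs, not a vetted policy:

```yaml
# Excerpt of a kops cluster spec (e.g. via "kops edit cluster"); sketch only.
# kops applies this as an extra policy on the node instance role it already manages.
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "route53:ChangeResourceRecordSets",
            "route53:ListHostedZones",
            "route53:ListResourceRecordSets"
          ],
          "Resource": ["*"]
        }
      ]
```

This keeps the extra permissions in the same place as the rest of the kops-managed configuration instead of a separate IAM step.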