How to remove a service account key on GCP using Ansible Playbook? - google-cloud-platform

I am using an Ansible playbook to run modules that create service accounts and their respective keys. The code used to generate the key is taken from the Ansible documentation:
- name: create a service account key
  gcp_iam_service_account_key:
    service_account: "{{ serviceaccount }}"
    private_key_type: TYPE_GOOGLE_CREDENTIALS_FILE
    path: "~/test_account.json"
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
Now I am trying to remove the service account key, so I changed the state value from present to absent, but that doesn't seem to do much. Am I missing something, or is there anything else I could try?

I'm not sure if it is possible, since I couldn't find the module in the Ansible documentation, but in the deletion examples for instances I see that after the absent state they use a tag for the deletion; it could be a way to do it for the SA key, e.g.
    state: absent
  tags:
    - delete
Another way that could be useful is to call the REST API directly, e.g.
DELETE https://iam.googleapis.com/v1/projects/[PROJECT-ID]/serviceAccounts/[SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com/keys/[KEY-ID]
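Equivalently, a minimal sketch with the gcloud CLI (the bracketed names are the same placeholders as above):

# List the keys first to find the KEY-ID, then delete it
gcloud iam service-accounts keys list --iam-account=[SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com
gcloud iam service-accounts keys delete [KEY-ID] --iam-account=[SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com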

I can confirm that it works when changing state from present to absent in version 1.0.2 of the google.cloud collection.
I believe you expect the file at path: "~/test_account.json" to be deleted, but in fact it is the key on the service account in GCP that gets deleted. You will have to delete the local file yourself after the task has completed successfully.
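A minimal sketch of that two-step cleanup, assuming the same variables as in the question (the second task just removes the local copy of the key):

- name: delete the service account key in GCP
  google.cloud.gcp_iam_service_account_key:
    service_account: "{{ serviceaccount }}"
    private_key_type: TYPE_GOOGLE_CREDENTIALS_FILE
    path: "~/test_account.json"
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: absent

- name: remove the local key file as well
  ansible.builtin.file:
    path: "~/test_account.json"
    state: absent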

ArgoCD ApplicationSet - How to preserve application and resources even when ApplicationSet is Deleted or Corrupted

I have an ApplicationSet which creates a few resources in Kubernetes. It is working fine, but when I delete this ApplicationSet, the relevant Application also gets deleted from Argo, along with its resources. (I know this is expected of the ApplicationSet controller.) I want to prevent this from happening.
Scenario: sometimes, when the ApplicationSet is corrupted, it destroys the Application associated with it; the same happens when the ApplicationSet is deleted.
I was reading this document on setting .syncPolicy.preserveResourcesOnDeletion to true in the ApplicationSet, but it doesn't work as expected. This is my current sync policy:
syncPolicy:
  automated:
    selfHeal: true
  syncOptions:
    - Validate=true
    - CreateNamespace=true
    - preserveResourcesOnDeletion=true
Question: How can I keep my Application safe, even when the ApplicationSet is deleted/corrupted?
There are two different places you can set a sync policy:
- on the Application resource (i.e. in spec.template in the ApplicationSet)
- on the ApplicationSet resource
You're looking for the second one. Set this in your ApplicationSet resource:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
spec:
  syncPolicy:
    preserveResourcesOnDeletion: true
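To make the distinction concrete, here is a minimal combined sketch (the metadata name, generator, and repo URL are hypothetical placeholders): the top-level spec.syncPolicy belongs to the ApplicationSet, while the syncPolicy inside spec.template governs each generated Application:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-appset                          # hypothetical name
spec:
  syncPolicy:
    # ApplicationSet-level: keep the Applications (and their resources)
    # if the ApplicationSet itself is deleted
    preserveResourcesOnDeletion: true
  generators:
    - list:
        elements:
          - cluster: in-cluster            # hypothetical generator input
  template:
    metadata:
      name: 'my-app-{{cluster}}'
    spec:
      project: default
      source:
        repoURL: https://example.com/manifests.git   # placeholder
        path: .
        targetRevision: HEAD
      destination:
        server: https://kubernetes.default.svc
        namespace: default
      syncPolicy:
        # Application-level: controls how each generated Application syncs
        automated:
          selfHeal: true
        syncOptions:
          - CreateNamespace=true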
I just noticed that the syncPolicy for that purpose should be written as
syncPolicy:
  preserveResourcesOnDeletion: true
  automated:
    selfHeal: true
  syncOptions:
    - Validate=true
    - CreateNamespace=true
When you install Argo CD you can set the values like this:
applicationSet:
  args:
    policy: create-only
Therefore the ApplicationSet controller will only create Applications once; even if something happens to the ApplicationSet, its Applications won't be affected by that.
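Assuming those values are for the community argo-cd Helm chart (a hedged sketch; the release name and namespace are placeholders):

# values.yaml contains the applicationSet.args.policy override shown above
helm upgrade --install argocd argo/argo-cd -n argocd --create-namespace -f values.yaml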
By default, ArgoCD's deletion action is cascaded, which means that all the resources created by the ApplicationSet will be deleted too.
What you need is to set the cascade option to false when you attempt to delete the Application/ApplicationSet, something similar to the below:
kubectl delete ApplicationSet (NAME) --cascade=false
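(On newer kubectl versions the equivalent spelling is --cascade=orphan; --cascade=false is the deprecated boolean form.)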
For more information, take a look at the docs here: https://argocd-applicationset.readthedocs.io/en/stable/Application-Deletion/

Concatenate AWS Secrets in aws-cdk for ECS container

How do you go about making a Postgres URI connection string from a Credentials.fromGeneratedSecret() call without writing the secrets out using toString()?
I think I read somewhere about making a Lambda that does that, but man, that seems kind of overkill-ish.
const dbCreds = Credentials.fromGeneratedSecret("postgres")
const username = dbCreds.username
const password = dbCreds.password
const uri = `postgresql://${username}:${password}@somerdurl/mydb?schema=public`
Pretty sure I can't do the above. However, my Hasura and API ECS containers need connection strings like the above, so I figure this is probably a solved thing?
If you want to import a secret that already exists in Secrets Manager, you can just look the secret up by name or ARN. Take a look at the documentation on how to get a value from AWS Secrets Manager.
Once you have your secret in the code, it is easy to pass it on as an environment variable to your application. With CDK it is even possible to pass secrets from Secrets Manager or AWS Systems Manager Parameter Store directly onto the CDK construct. One such example would be (as pointed out in the documentation):
taskDefinition.addContainer('container', {
  image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"),
  memoryLimitMiB: 1024,
  environment: { // clear text, not for sensitive data
    STAGE: 'prod',
  },
  environmentFiles: [ // list of environment files hosted either on local disk or S3
    ecs.EnvironmentFile.fromAsset('./demo-env-file.env'),
    ecs.EnvironmentFile.fromBucket(s3Bucket, 'assets/demo-env-file.env'),
  ],
  secrets: { // retrieved from AWS Secrets Manager or AWS Systems Manager Parameter Store at container start-up
    SECRET: ecs.Secret.fromSecretsManager(secret),
    DB_PASSWORD: ecs.Secret.fromSecretsManager(dbSecret, 'password'), // reference a specific JSON field (requires platform version 1.4.0 or later for Fargate tasks)
    PARAMETER: ecs.Secret.fromSsmParameter(parameter),
  }
});
Overall, in this case, you would not have to do any parsing or printing of the actual secret within the CDK. You can handle all of that processing within your application using properly set environment variables.
However, from your question alone it is not clear what exactly you are trying to do. Still, the provided resources should point you in the right direction.
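For the connection-string use case specifically, here is a minimal sketch of the application side (not CDK). The env var names DB_USER, DB_PASSWORD, and DB_HOST are hypothetical; they would be whatever keys you chose in the secrets/environment maps above:

// Runs inside the container at start-up, after ECS has injected the secrets.
const { DB_USER, DB_PASSWORD, DB_HOST } = process.env;
if (!DB_USER || !DB_PASSWORD || !DB_HOST) {
  throw new Error("missing database environment variables");
}
// The full URI is assembled only at runtime, so it never appears in the
// synthesized CloudFormation template or in the console.
const uri = `postgresql://${DB_USER}:${encodeURIComponent(DB_PASSWORD)}@${DB_HOST}/mydb?schema=public`;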

How to retrieve Secret Manager data in buildspec.yaml

I'm working on creating a CodeBuild project which is integrated with SonarQube, so I pass values and Sonar credentials directly in my buildspec.yaml.
Instead of hardcoding them directly, I tried to retrieve them from Secrets Manager with the command below, as mentioned in the link below, but it is not getting the correct values; it throws an error.
Command: '{{resolve:secretsmanager:MyRDSSecret:SecretString:username}}'
Link: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html#dynamic-references-secretsmanager
Error: [ERROR] SonarQube server [{{resolve:secretsmanager:arn:aws:secretsmanager:us-east-1:********:secret:**********:SecretString:SonarURL}}] can not be reached
How I used it: echo '{{resolve:secretsmanager:arn:aws:secretsmanager:us-east-1:***:secret:**************:SecretString:*******}}'
Note: all the * inside my commands are the secret name and secret URL.
CodeBuild just launched this today: https://aws.amazon.com/about-aws/whats-new/2019/11/aws-codebuild-adds-support-for-aws-secrets-manager/
If you wish to retrieve secrets in your buildspec file, I would recommend using Systems Manager Parameter Store, which is natively integrated with CodeBuild. Systems Manager is a service in itself; search for it from the AWS Console homepage, then Parameter Store is in the bottom left of the Systems Manager console page.
Let's assume you want to include an Access Key and Secret Key in the buildspec.yml file:
- Create an AccessKey/SecretKey pair for an IAM user
- Save the above keys in the SSM Parameter Store as secure strings (e.g. '/CodeBuild/AWS_ACCESS_KEY_ID' and '/CodeBuild/AWS_SECRET_ACCESS_KEY')
- Export the two values into your build environment using the following buildspec directives:
version: 0.2

env:
  parameter-store:
    AWS_ACCESS_KEY_ID_PARAM: /CodeBuild/AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY_PARAM: /CodeBuild/AWS_SECRET_ACCESS_KEY

phases:
  build:
    commands:
      - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID_PARAM
      - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY_PARAM
      # Your Ansible commands below
      - ansible-playbook -i hosts ec2-key.yml
[1] Build Specification Reference for CodeBuild - Build Spec Syntax - https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax
The dynamic reference syntax you are trying to use only works with the CloudFormation (CFN) service. In some cases, CFN restricts where these dynamic references to secrets will expand; specifically, they do not expand in places where the secrets might be visible in the console, such as in EC2 metadata.
If you are trying to set up CodeBuild via CFN, this may be what you are seeing. However, as shariqmaws mentioned, you can use Parameter Store and either store your secret there or use Parameter Store as a pass-through to Secrets Manager (in case you want Secrets Manager to rotate your secrets, or for other reasons), as in the buildspec below; a pass-through sketch follows it.
version: 0.2

env:
  parameter-store:
    AWS_ACCESS_KEY_ID: /terraform-cicd/AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: /terraform-cicd/AWS_SECRET_ACCESS_KEY
    AWS_CODECOMMIT_SSH_ID: /terraform-cicd/AWS_CODECOMMIT_SSH_ID
  secrets-manager:
    AWS_CODECOMMIT_SSH_PRIVATE: /terraform-cicd/AWS_CODECOMMIT_SSH_PRIVATE
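For the pass-through variant mentioned above, SSM exposes Secrets Manager secrets under the reserved /aws/reference/secretsmanager/ prefix, so a buildspec can read one without creating a separate parameter. A hedged sketch (sonar-credentials is a hypothetical secret name, and this assumes CodeBuild's parameter-store integration accepts the reference path):

version: 0.2

env:
  parameter-store:
    # /aws/reference/secretsmanager/<name> resolves to the Secrets Manager secret of that name
    SONAR_TOKEN: /aws/reference/secretsmanager/sonar-credentials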

GCP project creation via deploymentmanager

So I'm trying to create a project with Google Cloud Deployment Manager.
I've structured the setup roughly as below:
# Structure
Org -> Folder1 -> Seed-Project (location where I am running Deployment Manager from)

Organization IAM:
  {Seed-Project-Number}@cloudservices.gserviceaccount.com:
    - Compute Network Admin
    - Compute Shared VPC Admin
    - Organization Viewer
    - Project Creator

# Deployment Manager resource:
type: cloudresourcemanager.v1.project
name: MyNewProject
parent:
  id: '{folder1-id}'
  type: folder
projectId: MyNewProject
The desired result is that MyNewProject should be created under Folder1.
However, it appears as if the Deployment Manager service account does not have sufficient permissions:
$ CLOUDSDK_CORE_PROJECT=Seed-Project gcloud deployment-manager deployments \
    create MyNewDeployment \
    --config config.yaml \
    --verbosity=debug
Error message:
- code: RESOURCE_ERROR
  location: /deployments/MyNewDeployment/resources/MyNewProject
  message: '{"ResourceType":"cloudresourcemanager.v1.project",
    "ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"message":"The
    caller does not have permission","status":"PERMISSION_DENIED","statusMessage":"Forbidden","requestPath":"https://cloudresourcemanager.googleapis.com/v1/projects/MyNewProject","httpMethod":"GET"}}'
I've done some digging, and it appears to be calling the resourcemanager.projects.get method. The Compute Shared VPC Admin (roles/compute.xpnAdmin) role should provide this permission, as documented here: https://cloud.google.com/iam/docs/understanding-roles
Except that doesn't seem to be the case. What's going on?
Edit
I'd like to add some additional information gathered from debugging efforts.
These are the API requests from the Deployment Manager (from the seed project).
You can see that the caller is an anonymous service account; this isn't what I'd expect to see. (I'd expect to see {Seed-Project-Number}@cloudservices.gserviceaccount.com as the calling account here.)
Edit-2
config.yaml
imports:
  - path: composite_types/project/project.py
    name: project.py

resources:
  - name: MyNewProject
    type: project.py
    properties:
      parent:
        type: folder
        id: "{folder1-id}"
      billingAccountId: billingAccounts/REDACTED
      activateApis:
        - compute.googleapis.com
        - deploymentmanager.googleapis.com
        - pubsub.googleapis.com
      serviceAccounts: []
composite_types/project/* is an exact copy of the templates found here:
https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/community/cloud-foundation/templates/project
The key thing is that this is a GET operation, not an attempt to create the project. It is done to verify the global uniqueness of the requested project ID; if the ID is not unique, PERMISSION_DENIED is thrown.
Lousy error message, lots of wasted developer hours!
Probably late, but just to share that I ran into a similar issue today. I double-checked every permission mentioned in the README for the service account under which the Deployment Manager job runs ({Seed-Project-Number}@cloudservices.gserviceaccount.com in the question), and it turned out that the Billing Account User role was not assigned, contrary to what I thought earlier. Granting it and running the deployment again worked.
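For reference, a hedged sketch of granting that role on the billing account (BILLING_ACCOUNT_ID and SEED_PROJECT_NUMBER are placeholders):

# Grant Billing Account User to the Deployment Manager service account
gcloud beta billing accounts add-iam-policy-binding BILLING_ACCOUNT_ID \
    --member="serviceAccount:SEED_PROJECT_NUMBER@cloudservices.gserviceaccount.com" \
    --role="roles/billing.user"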

Is there a way to set project metadata in GCP deployment manager

We are using GCP Deployment Manager for our infrastructure releases. We need a shared place that is accessible from all groups (e.g. project metadata). I think it would be great if we could have it as part of our infrastructure as code, so we could connect it to all the groups.
I think that for now there is no such resource in GCP Deployment Manager, but I also would not like to have some separate script that does this update outside of the pattern.
Can someone help with this? What is the best way to store common metadata in the cloud, and if the cloud does not provide the right tool, how can we solve this issue in a clean/nice way?
Setting project-wide metadata is done using the compute.v1.projects API, which is not supported by Deployment Manager. You can view a list of the supported resources for DM here.
You may want to suggest support for this resource through a Feature Request.
Here is a YAML config file and its template for you:
The project.yaml configuration:
# Set project metadata
imports:
  - path: project.jinja

resources:
  - name: project
    type: project.jinja
    properties:
      key: 'abcd'
      value: 1234
And the project.jinja template:
{#
Template: Set Project Metadata
#}
resources:
- name: data
  action: gcp-types/compute-v1:compute.projects.setCommonInstanceMetadata
  metadata:
    runtimePolicy:
      - UPDATE_ON_CHANGE
  properties:
    items:
      - key: {{ properties["key"] }}
        value: {{ properties["value"] }}
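With both files in the same directory, the deployment can then be created the usual way (the deployment name set-project-metadata is a placeholder):

gcloud deployment-manager deployments create set-project-metadata --config project.yaml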