Is there a way to set project metadata in GCP deployment manager

We are using GCP Deployment Manager for our infrastructure releases. We need a shared place that is accessible from all groups (e.g. project metadata). It would be great if we could have it as part of our infrastructure as code, so we could connect it with all the groups.
I think that for now there is no such resource in GCP Deployment Manager, but I also would not like to have a separate script that does this update outside the pattern.
Can someone help with this? What is the best way to store common metadata in the cloud, and if the cloud does not provide the right tool, how can we solve this issue in a clean way?

Setting project-wide metadata is done using the compute.v1.projects API, which is not supported by Deployment Manager. You can view the list of supported resource types for Deployment Manager here.
You may want to suggest support for this resource through a Feature Request.

Here is a YAML config file and its template for you:
The project.yaml configuration:
# Set project metadata
imports:
- path: project.jinja

resources:
- name: project
  type: project.jinja
  properties:
    key: 'abcd'
    value: 1234
And the project.jinja template:
{#
  Template: Set Project Metadata
#}
resources:
- name: data
  action: gcp-types/compute-v1:compute.projects.setCommonInstanceMetadata
  metadata:
    runtimePolicy:
    - UPDATE_ON_CHANGE
  properties:
    items:
    - key: {{ properties["key"] }}
      value: {{ properties["value"] }}
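If you need more than one key/value pair, a minimal variation of the same template could accept a list and loop over it. This is an untested sketch, and the items list property in the config is my own naming, not something from the original answer.
In project.yaml:
  properties:
    items:
    - key: 'abcd'
      value: 1234
    - key: 'team'
      value: 'platform'
And in project.jinja:
  properties:
    items:
    {% for item in properties["items"] %}
    - key: {{ item["key"] }}
      value: {{ item["value"] }}
    {% endfor %}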


inject value in 3rd party helm yaml template of Pod kind

I am using the 3rd-party Bitnami repository for MySQL.
I know that values.yaml can be injected easily if that chart is in the dependencies section.
i.e., if I add the dependency in Chart.yaml:
dependencies:
- name: mysql
  version: 8.8.23
  repository: "https://charts.bitnami.com/bitnami"
and in values.yaml:
mysql:
  auth:
    rootPassword: "12345"
    database: my_database
  primary:
    service:
      type: LoadBalancer
The root password is injected into bitnami/mysql at the proper parameter, auth.rootPassword.
But in case I have my own pod.yaml in the templates folder:
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
spec:
  containers:
  - name: mysql-pod
    image: bitnami/mysql
How can I inject the password and other parameters into this file, the same way I did with the values.yaml file? I need to pass auth.rootPassword, etc.
Also, is there a way to refer to exactly the same pod that is created by the dependency, rather than creating another instance?
The chart contains a lot of things: a StatefulSet, a matching Service, a PodDisruptionBudget, a ConfigMap, and so on. You can't force all of that into a single Pod, and in general you can't refer to things in Helm charts without including them as dependencies, as you show originally.
Bitnami also happens to publish a separate bitnami/mysql Docker image that you could list as the image: in a StatefulSet's pod spec, but you would have to reconstruct all of the other machinery in that chart yourself.
Also, is there a way to refer to exactly the same pod that is created by the dependency, rather than creating another instance?
There's a typical convention in Helm that most objects are named RELEASE-CHART-SUFFIX, collapsing RELEASE and CHART into one if they're the same. You usually don't care about the StatefulSet's generated Pod so much as the Service that reaches it, which is generated by a typical Helm Service YAML file. If you're not setting various overrides, and aren't using replicated mode, then in this dependency context you can combine .Release.Name, the dependency chart name mysql, and no suffix to get
- name: MYSQL_HOST
  value: {{ .Release.Name }}-mysql
I'm not aware of a more general way to get the Service's name. (I'm not aware of any charts at all that use Helm's exports mechanism and since it only republishes values it couldn't contain computed values in any case.)
For other details like the credentials, you can either refer to the generated Secret, or just use the .Values.mysql... directly.
- name: MYSQL_USER
  value: {{ .Values.mysql.auth.username }}
- name: MYSQL_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ .Release.Name }}-mysql
      key: mysql-password
I don't think there's a good way to figure out these names without either reading the chart's source yourself, observing what the chart actually installs, or reading the chart's documentation (the Bitnami charts are fairly well-documented but others may not be).
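Putting those pieces together, here is a minimal sketch of what your own templates/pod.yaml could look like. The application image is a placeholder, and the {{ .Release.Name }}-mysql Secret name and mysql-password key follow the Bitnami chart's usual conventions, so verify them against the chart version you actually install:
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Release.Name }}-app
spec:
  containers:
  - name: app
    # placeholder image; replace with your application image
    image: my-app-image
    env:
    # host name follows the RELEASE-CHART naming convention described above
    - name: MYSQL_HOST
      value: {{ .Release.Name }}-mysql
    - name: MYSQL_USER
      value: {{ .Values.mysql.auth.username }}
    # password read from the Secret the dependency chart generates
    - name: MYSQL_PASSWORD
      valueFrom:
        secretKeyRef:
          name: {{ .Release.Name }}-mysql
          key: mysql-password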

AWS Lambda Rest API: A sibling ({id}) of this resource already has a variable path part -- only one is allowed Unable to create resource at path

I'm particularly new to Lambda and to AWS in general. I'm trying to set up a simple REST API service with Lambda. I've used CloudFormation and CodePipeline to deploy a simple Express app.
I'm trying to figure out why, during the deployment phase, during ExecuteChangeSet, I get this error:
Errors found during import: Unable to create resource at path '/stations/{stationId}/allowedUsers': A sibling ({id}) of this resource already has a variable path part -- only one is allowed Unable to create resource at path '/stations/{stationId}/allowedUsers/{userId}': A sibling ({id}) of this resource already has a variable path part -- only one is allowed
This is what I have inside the template.yml
Events:
  AllowedUsers:
    Type: Api
    Properties:
      Path: /stations/{stationId}/allowedUsers
      Method: get
  AddAllowedUsers:
    Type: Api
    Properties:
      Path: /stations/{stationId}/allowedUsers
      Method: post
  DeleteAllowedUsers:
    Type: Api
    Properties:
      Path: /stations/{stationId}/allowedUsers/{userId}
      Method: delete
  GetAllowedUser:
    Type: Api
    Properties:
      Path: /stations/{stationId}/allowedUsers/{userId}
      Method: get
I searched a bit for this error but I'm not sure how to solve it.
For me, the issue was different from what is described in the GitHub issue Bryan mentioned.
I was using two different parameter names for the same path position. Finishing the refactoring and using a single name fixed the issue.
Example of the conflicting configuration:
DeleteAllowedUsers:
  Type: Api
  Properties:
    Path: /stations/{stationId}/allowedUsers/{id}
    Method: delete
GetAllowedUser:
  Type: Api
  Properties:
    Path: /stations/{stationId}/allowedUsers/{userId}
    Method: get
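And a sketch of the fixed version, using one consistent name ({userId} here) for both sibling paths:
DeleteAllowedUsers:
  Type: Api
  Properties:
    Path: /stations/{stationId}/allowedUsers/{userId}
    Method: delete
GetAllowedUser:
  Type: Api
  Properties:
    Path: /stations/{stationId}/allowedUsers/{userId}
    Method: get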
Here is a workaround for this problem. It was posted on GitHub by pettyalex.
Link: https://github.com/serverless/serverless/issues/3785
You might encounter this issue when updating a variable path part while using Serverless (and serverless.yaml) to provision API Gateway; the workaround is:
1. Comment out the endpoint function and deploy, so the resource is removed completely (see the sketch below).
2. Uncomment it and deploy again.
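A hypothetical sketch of step 1 in serverless.yaml (the function name and handler are made up for illustration):
functions:
  getAllowedUser:
    handler: handler.getAllowedUser   # hypothetical handler
    # Step 1: temporarily comment out the conflicting HTTP event, then deploy
    # events:
    #   - http:
    #       path: stations/{stationId}/allowedUsers/{userId}
    #       method: get
Once that deploy succeeds, restore the events block and deploy once more.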

How to remove a service account key on GCP using Ansible Playbook?

I am using an Ansible playbook to run certain modules that create service accounts and their respective keys. The code used to generate this is as found in the Ansible documentation:
- name: create a service account key
  gcp_iam_service_account_key:
    service_account: "{{ serviceaccount }}"
    private_key_type: TYPE_GOOGLE_CREDENTIALS_FILE
    path: "~/test_account.json"
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
Now I am trying to remove the service account key, so I changed the state value from present to absent, but that doesn't seem to do much. Am I missing something, or is there anything else I could try?
I'm not sure if this is possible, since I couldn't find it in the module's Ansible documentation, but in the instance-deletion examples I see that after the absent state they use a tag for the deletion; it could be a way to do it for the SA, e.g.
    state: absent
  tags:
  - delete
Another option that could be useful is to make the request directly against the REST API, e.g.
DELETE https://iam.googleapis.com/v1/projects/[PROJECT-ID]/serviceAccounts/[SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com/keys/[KEY-ID]
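If you go the REST route from Ansible, a rough, untested sketch could look like the following (the project_id, sa_email and key_id variables are placeholders, and it assumes gcloud is installed and authenticated on the host running the tasks):
- name: get an access token for the API call
  ansible.builtin.command: gcloud auth print-access-token
  register: gcp_token
  changed_when: false

- name: delete the service account key via the IAM REST API
  ansible.builtin.uri:
    url: "https://iam.googleapis.com/v1/projects/{{ project_id }}/serviceAccounts/{{ sa_email }}/keys/{{ key_id }}"
    method: DELETE
    headers:
      Authorization: "Bearer {{ gcp_token.stdout }}"
    status_code: 200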
I can confirm that it works when changing state from present to absent in version 1.0.2 of the google.cloud collection.
I believe you expect the file at path: "~/test_account.json" to be deleted, but in fact the key is deleted on the service account in GCP. You will have to delete the file yourself after the task has completed successfully.
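A minimal sketch of that combination, reusing the exact task from the question with state flipped to absent and adding a local cleanup step (untested):
- name: delete the service account key in GCP
  gcp_iam_service_account_key:
    service_account: "{{ serviceaccount }}"
    private_key_type: TYPE_GOOGLE_CREDENTIALS_FILE
    path: "~/test_account.json"
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: absent

- name: remove the local key file as well
  ansible.builtin.file:
    path: "~/test_account.json"
    state: absent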

GCP project creation via deploymentmanager

So I'm trying to create a project with Google Cloud Deployment Manager.
I've structured the setup roughly as below:
# Structure
Org -> Folder1 -> Seed-Project (location where I am running Deployment Manager from)

Organization:
  IAM:
    {Seed-Project-Number}@cloudservices.gserviceaccount.com:
    - Compute Network Admin
    - Compute Shared VPC Admin
    - Organization Viewer
    - Project Creator
# DeploymentManager resource:
type: cloudresourcemanager.v1.project
name: MyNewProject
parent:
  id: '{folder1-id}'
  type: folder
projectId: MyNewProject
The desired result is that MyNewProject should be created under Folder1.
However, it appears as if the Deployment Manager service account does not have sufficient permissions:
$ CLOUDSDK_CORE_PROJECT=Seed-Project gcloud deployment-manager deployments \
create MyNewDeployment \
--config config.yaml \
--verbosity=debug
Error message:
- code: RESOURCE_ERROR
location: /deployments/MyNewDeployment/resources/MyNewProject
message: '{"ResourceType":"cloudresourcemanager.v1.project",
"ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"message":"The
caller does not have permission","status":"PERMISSION_DENIED","statusMessage":"Forbidden","requestPath":"https://cloudresourcemanager.googleapis.com/v1/projects/MyNewProject","httpMethod":"GET"}}'
I've done some digging, and it appears to be calling the resourcemanager.projects.get method. The Compute Shared VPC Admin (roles/compute.xpnAdmin) role should provide this permission, as documented here: https://cloud.google.com/iam/docs/understanding-roles
Except that doesn't seem to be the case. What's going on?
Edit
I'd like to add some additional information gathered from debugging efforts:
These are the API requests from the Deployment Manager (from the seed project).
You can see that the caller is an anonymous service account; this isn't what I'd expect to see. (I'd expect to see {Seed-Project-Number}@cloudservices.gserviceaccount.com as the calling account here.)
Edit-2
config.yaml
imports:
- path: composite_types/project/project.py
  name: project.py

resources:
- name: MyNewProject
  type: project.py
  properties:
    parent:
      type: folder
      id: "{folder1-id}"
    billingAccountId: billingAccounts/REDACTED
    activateApis:
    - compute.googleapis.com
    - deploymentmanager.googleapis.com
    - pubsub.googleapis.com
    serviceAccounts: []
composite_types/project/* is an exact copy of the templates found here:
https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/community/cloud-foundation/templates/project
The key thing is that this is a GET operation, not an attempt to create the project. It is done to verify the global uniqueness of the requested project ID, and if the ID is not unique, PERMISSION_DENIED is thrown.
Lousy error message, lots of wasted developer hours!
Probably late, but just to share that I ran into a similar issue today. I double-checked every permission mentioned in the README for the service account under which the Deployment Manager job runs ({Seed-Project-Number}@cloudservices.gserviceaccount.com in the question), and it turned out that the Billing Account User role was not assigned, contrary to what I thought earlier. Granting that and running it again worked.

Use private ips in google dataflow job being created via google provided template

I'm trying to set up a Dataflow job via Deployment Manager using the Google-provided template Cloud_PubSub_to_Avro.
To do this I had to register dataflow as a type provider, like this:
resources:
- name: 'register-dataflow'
  action: 'gcp-types/deploymentmanager-v2beta:deploymentmanager.typeProviders.insert'
  properties:
    name: 'dataflow'
    descriptorUrl: 'https://dataflow.googleapis.com/$discovery/rest?version=v1b3'
    options:
      inputMappings:
      - fieldName: Authorization
        location: HEADER
        value: >
          $.concat("Bearer ", $.googleOauth2AccessToken())
Then I created my job template, which looks something like:
resources:
- name: "my-topic-to-avro"
  type: 'my-project-id/dataflow:dataflow.projects.locations.templates.launch'
  properties:
    projectId: my-project-id
    gcsPath: "gs://dataflow-templates/latest/Cloud_PubSub_to_Avro"
    jobName: "my-topic-to-avro"
    location: "europe-west1"
    parameters:
      inputTopic: "projects/my-project-id/topics/my-topic"
      outputDirectory: "gs://my-bucket/avro/my-topic/"
      avroTempDirectory: "gs://my-bucket/avro/tmp/my-topic/"
Now I'm trying to understand how I can tell my job not to use public IPs. From this it looks like I need to set --usePublicIps=false, but I can't figure out where to place this parameter, or if this is even possible.
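My current guess (untested) is that it would have to go into the launch request's environment block, since the Dataflow RuntimeEnvironment reference lists an ipConfiguration field with a WORKER_IP_PRIVATE value; something like:
  properties:
    # projectId, gcsPath, jobName, location, parameters as above
    environment:
      ipConfiguration: WORKER_IP_PRIVATE
But I haven't verified that the type provider accepts this.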
A possible workaround I found here would be to remove the access-config, but again I haven't been able to figure out how to do this, if it is possible at all.
Is what I'm trying to do possible through provided templates or will I have to use the dataflow API?
Any help appreciated.