Is it possible to get a service account key that is deployed via Google Deployment Manager (iam.v1.serviceAccounts.key resource) as a result of request to DM?
I have seen an option to expose it in outputs (https://cloud.google.com/deployment-manager/docs/configuration/expose-information-outputs) , but can't see any possibility to get the key as a response of Deployment Manager insert/update API methods.
To fetch the key, you can set up an output or a reference to privateKeyData in the same configuration that creates the key. If there is no reference or output to that field, Deployment Manager will ignore it.
Example config looks like:
outputs:
- name: key
  value: $(ref.iam-key.privateKeyData)
resources:
- name: iam-account
  type: iam.v1.serviceAccount
  properties:
    accountId: iam-account
    displayName: iam-account-display
- name: iam-key
  type: iam.v1.serviceAccounts.key
  properties:
    parent: $(ref.iam-account.name)
Running the above YAML file with
gcloud deployment-manager deployments create [DeploymentName] --config key.yaml
creates a service account with an associated key. You can look up the outputs in the manifest associated with the deployment, or in the Cloud Console under Deployment -> Deployment properties -> Layout.
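The privateKeyData value surfaced in the layout is the base64-encoded JSON key file. A minimal sketch of decoding it in Python (the sample value below is a constructed placeholder, not a real key):

```python
import base64
import json

# privateKeyData, as returned in the deployment outputs, is the
# base64-encoded JSON key file. This sample value is a placeholder
# built here for illustration.
private_key_data = base64.b64encode(
    json.dumps({"type": "service_account", "project_id": "my-project"}).encode()
).decode()

# Decode it back into the usual key-file JSON.
key = json.loads(base64.b64decode(private_key_data).decode())
print(key["type"])  # -> service_account
```

Writing the decoded JSON to a file gives you the same key file you would otherwise download from the console.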
I am creating an Electron app that connects to AWS services. Before services can be accessed, the users need to authenticate using AWS Cognito. In order for users to authenticate, I need to hardcode in the client app the app region, user pool id, identity pool id, and the app client id. Hard coding this is a terrible idea because these values will change from client to client.
In my app the users NEVER interact directly with the database, otherwise I would have them query the database for this data. Users connect to an Elastic Beanstalk endpoint and my EC2 instances are the only ones allowed to communicate with the database. This improves security.
What is the best way to avoid hard coding this kind of data?
Generally, config should be stored in the environment (see https://12factor.net/).
What this means differs between environments, and I know nothing about Electron, but your configuration values will be known at build time, so when you are building your clients you could generate an environment.js file whose values can be referenced from your app.
Example using CloudFormation and CodePipeline
So, perhaps you are using CloudFormation to provision your Cognito infrastructure. In this case, you can export variables that can be referenced by other CloudFormation templates.
The exported app client ID, user pool ID, identity pool ID, etc. can then be injected into a CloudFormation template that defines a CodePipeline instance you might use to build your Electron app, of which the following could be a fragment:
...
BuildElectronProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Name: electron-build
    Artifacts:
      Type: CODEPIPELINE
    Environment:
      ComputeType: BUILD_GENERAL1_SMALL
      EnvironmentVariables:
        - Name: AWS_REGION
          Value: !Ref 'AWS::Region'
        - Name: USER_POOL_ID
          Value: !ImportValue 'user-pool-id'
        - Name: SERVER_URL
          Value: !Join
            - ''
            - - !If [ IsProd, 'https://', 'http://' ]
              - !FindInMap [ Environments, !Ref Environment, ServerUrl ]
...
Then, when you build your app, you can use the environment variables in CodeBuild to create the environment.js file that is included as part of your distributable electron build.
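For instance, a build step could read those variables and emit the file. A minimal sketch (the variable names mirror the CloudFormation fragment above; the fallback values and output path are assumptions so the script also runs outside CodeBuild):

```python
import json
import os

# Values CodeBuild exposes as environment variables (the names match
# the CloudFormation fragment above); the fallbacks are placeholders
# so the script also runs outside a CodeBuild environment.
config = {
    "region": os.environ.get("AWS_REGION", "us-east-1"),
    "userPoolId": os.environ.get("USER_POOL_ID", "local-placeholder"),
    "serverUrl": os.environ.get("SERVER_URL", "http://localhost:8080"),
}

# Emit a module the Electron app can require() at runtime.
with open("environment.js", "w") as f:
    f.write("module.exports = " + json.dumps(config, indent=2) + ";\n")
```

This keeps the client code itself free of hardcoded values; only the generated artifact differs between builds.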
I’m trying to create a deployment manager template for bigquery data transfer to initiate a scheduled query. I’ve created a type provider for transfer configs and when I call the type provider for a scheduled query, I get the following error:
"P4 service account needs iam.serviceAccounts.getAccessToken permission."
However, I’ve already given it the ‘Service Account Token Creator’ role with "gcloud projects add-iam-policy-binding ..". How else would I be able to solve this?
Type Provider:
- name: custom-type-provider
  type: deploymentmanager.v2beta.typeProvider
  properties:
    descriptorUrl: "https://bigquerydatatransfer.googleapis.com/$discovery/rest?version=v1"
    options:
      inputMappings:
      - fieldName: Authorization
        location: HEADER
        value: >
          $.concat("Bearer ", $.googleOauth2AccessToken())
Calling the type provider:
- name: test
  type: project_id:custom-type-provider:projects.transferConfigs
  properties:
    parent: project/project_id
    ..
    ..
I think you've hit a limitation of scheduled queries: you have to use user accounts instead of service accounts in order to run the queries.
There is a feature request to allow service accounts to act on a user's behalf for this particular action.
I am trying to create a bucket using Deployment manager but when I want to create the deployment, I get the following error:
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1525606425901-56b87ed1537c9-70ca4aca-72406eee]: errors:
- code: RESOURCE_ERROR
  location: /deployments/posts/resources/posts
  message: '{"ResourceType":"storage.v1.bucket","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"errors":[{"domain":"global","message":"myprojectid@cloudservices.gserviceaccount.com does not have storage.buckets.get access to posts.","reason":"forbidden"}],"message":"myprojectid@cloudservices.gserviceaccount.com does not have storage.buckets.get access to posts.","statusMessage":"Forbidden","requestPath":"https://www.googleapis.com/storage/v1/b/posts","httpMethod":"GET","suggestion":"Consider granting permissions to myprojectid@cloudservices.gserviceaccount.com"}}'
If I understand it correctly, Deployment Manager uses a service account (as described in the message) to actually create all my resources. I've checked IAM and made sure that this service account (myprojectid@cloudservices.gserviceaccount.com) does have access as "Editor", and I even added "Storage Admin" (which includes storage.buckets.get) to be extra sure. However, I still get the same error message.
Am I assigning the permissions to the wrong IAM user / what am I doing wrong?
command used:
gcloud deployment-manager deployments create posts --config posts.yml
my deployment template:
bucket.jinja
resources:
- name: {{ properties['name'] }}
  type: storage.v1.bucket
  properties:
    name: {{ properties['name'] }}
    location: europe-west1
    lifecycle:
      rule:
      - action:
          type: Delete
        condition:
          age: 30
          isLive: true
    labels:
      datatype: {{ properties['datatype'] }}
    storageClass: REGIONAL
posts.yml
imports:
- path: bucket.jinja
resources:
- name: posts
  type: bucket.jinja
  properties:
    name: posts
    datatype: posts
I tested your code successfully, and I believe the issue is that you were trying to create/update a bucket owned by a different user in a different project, over which your service account has no power.
Therefore, please try to redeploy after changing the name to one that is likely unique, and see if this solves the issue. Note that you have to change the name of the bucket, since bucket names must be unique across all projects of all users. This can be a problem in some scenarios: either you choose a fairly long name, or you run the risk that the name is already taken.
This may seem an excessive requirement, but it makes it possible to host static websites and to refer to files with a standard URL:
https://storage.googleapis.com/bucketname/folder/filename
From the error trace I believe this is the issue: you are trying to create a bucket whose name is already taken by a bucket you do not own.
Notice that if you remove the permissions from the service account, you do not receive the message telling you that the service account has no power over the bucket:
xxx@cloudservices.gserviceaccount.com does not have storage.buckets.get access to posts.
but instead a message pointing out that the service account has no power over the project:
Service account xxx@cloudservices.gserviceaccount.com is not authorized to take actions for project xxx. Please add xxx@cloudservices.gserviceaccount.com as an editor under project xxx using Google Developers Console
Notice that if you try to create a bucket you already own, there is no issue.
$ gcloud deployment-manager deployments create posts22 --config posts.yml
The fingerprint of the deployment is xxx==
Waiting for create [operation-xxx-xxx-xxx-xxx]...done.
Create operation operation-xxx-xxx-xxx-xxx completed successfully.
NAME TYPE STATE ERRORS INTENT
nomebuckettest4536 storage.v1.bucket COMPLETED []
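Since bucket names are global, a common pattern is to namespace them with the project ID, which is itself globally unique. A hypothetical sketch (the project ID and base name here are placeholders):

```python
# Bucket names are globally unique, so a generic name like "posts" is
# very likely already taken. Prefixing with the project ID (itself
# globally unique) makes collisions unlikely. Values are placeholders.
project_id = "my-project-123456"
base_name = "posts"

bucket_name = f"{project_id}-{base_name}"

# Bucket names must be 3-63 characters and lowercase.
assert 3 <= len(bucket_name) <= 63
assert bucket_name == bucket_name.lower()
print(bucket_name)  # -> my-project-123456-posts
```

The derived name can then be passed as the `name` property in posts.yml instead of the bare `posts`.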
I'm trying to create a GKE regional cluster (a beta feature) with GCP Deployment Manager, but I get an error. Is there any way to use GKE beta features (including regional clusters) with Deployment Manager?
ERROR: (gcloud.beta.deployment-manager.deployments.create) Error in Operation [operation-1525054837836-56b077fdf48e0-a571296c-604523fb]: errors:
- code: RESOURCE_ERROR
  location: /deployments/test-cluster1/resources/source-cluster
  message: '{"ResourceType":"container.v1.cluster","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"v1 API cannot be used to access GKE regional clusters. See https://cloud.google.com/kubernetes-engine/docs/reference/api-organization#beta for more information.","status":"INVALID_ARGUMENT","statusMessage":"Bad Request","requestPath":"https://container.googleapis.com/v1/projects/project_id/zones/us-central1/clusters","httpMethod":"POST"}}'
The error message links to the GCP documentation:
https://cloud.google.com/kubernetes-engine/docs/reference/api-organization#beta
I configured things as described there, but the error still appears.
My Deployment Manager YAML file looks like this:
resources:
- name: source-cluster
  type: container.v1.cluster
  properties:
    zone: us-central1
    cluster:
      name: source
      initialNodeCount: 3
Yet, a zonal cluster works completely, so I think the problem is related to the use of the container v1beta1 API in the deployment-manager commands.
resources:
- name: source-cluster
  type: container.v1.cluster
  properties:
    zone: us-central1-b
    cluster:
      name: source
      initialNodeCount: 3
Thanks.
The error message you are receiving appears to be related to the fact that you are attempting to use a beta feature while specifying the Deployment Manager resource as using API v1 (i.e. container.v1.cluster). This means there's an inconsistency between the beta resource you are trying to create and the specified resource type.
I've had a look into this and discovered that the ability to add regional clusters via Deployment Manager is a very recent addition to Google Cloud Platform as detailed in this public feature request which has only recently been implemented.
It seems you would need to specify the API type as 'gcp-types/container-v1beta1:projects.locations.clusters' for this to work, and rather than using the 'zone' or 'region' key in the YAML, you would instead use a parent property that includes locations.
So your YAML would look something like this (replace PROJECT_ID with your own).
resources:
- type: gcp-types/container-v1beta1:projects.locations.clusters
  name: source-cluster
  properties:
    parent: projects/PROJECT_ID/locations/us-central1
    cluster:
      name: source
      initialNodeCount: 3
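The parent string follows the pattern projects/PROJECT_ID/locations/LOCATION, where the location may be a region (for a regional cluster) or a zone (for a zonal one). A tiny helper to build it, with illustrative names:

```python
def cluster_parent(project_id: str, location: str) -> str:
    """Build the `parent` property used by the v1beta1
    projects.locations.clusters resource. The location can be a
    region (regional cluster) or a zone (zonal cluster)."""
    return f"projects/{project_id}/locations/{location}"

# A region gives a regional cluster; a zone like "us-central1-b"
# would give a zonal one. Project ID here is a placeholder.
print(cluster_parent("my-project", "us-central1"))
# -> projects/my-project/locations/us-central1
```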
I'm trying to create a Google Cloud Deployment Manager configuration to deploy and manage a Google Cloud Container cluster. So far, creating a configuration to create a cluster works, however updating fails. If I change a setting, the execution of the script fails with an error message I can't decipher:
code: RESOURCE_ERROR
location: /deployments/my-first-cluster/resources/my-first-test-cluster-setup
message: '{"ResourceType":"container.v1.cluster","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid JSON payload received. Unknown name \"cluster\": Cannot find field.","status":"INVALID_ARGUMENT","details":[{"@type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"description":"Invalid JSON payload received. Unknown name \"cluster\": Cannot find field."}]}],"statusMessage":"Bad Request","requestPath":"https://container.googleapis.com/v1/projects/*****/zones/europe-west1-b/clusters/my-first-cluster"}}'
The relevant configuration:
resources:
- name: my-first-test-cluster-setup
  type: container.v1.cluster
  properties:
    zone: europe-west1-b
    cluster:
      name: my-first-cluster
      description: My first cluster setup
      nodePools:
      - name: my-cluster-node-pool
        config:
          machineType: n1-standard-1
        initialNodeCount: 1
        autoscaling:
          enabled: true
          minNodeCount: 3
          maxNodeCount: 5
        management:
          autoUpgrade: true
          autoRepair: true
It looks like this is a bug in Deployment Manager, which means it is not able to update GKE clusters. The bug is reported here. It has the same strange 'unknown name "cluster"' message that you see.
There is no suggestion on the ticket about workarounds or resolution.
We have seen this same problem when updating a different cluster property.