Use GKE beta features with GCP Deployment Manager - google-cloud-platform

I'm trying to create a GKE regional cluster (a beta feature) with GCP Deployment Manager, but I get the error below. Is there any way to use GKE beta features (including regional clusters) with Deployment Manager?
ERROR: (gcloud.beta.deployment-manager.deployments.create) Error in
Operation [operation-1525054837836-56b077fdf48e0-a571296c-604523fb]:
errors:
- code: RESOURCE_ERROR
location: /deployments/test-cluster1/resources/source-cluster
message: '{"ResourceType":"container.v1.cluster","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"v1 API cannot be used to access GKE regional clusters. See https://cloud.google.com/kubernetes-engine/docs/reference/api-organization#beta for more information.","status":"INVALID_ARGUMENT","statusMessage":"Bad Request","requestPath":"https://container.googleapis.com/v1/projects/project_id/zones/us-central1/clusters","httpMethod":"POST"}}'
The error message links to the GCP documentation:
https://cloud.google.com/kubernetes-engine/docs/reference/api-organization#beta
I configured things as described there, but the error still appears.
My Deployment Manager YAML file looks like this:
resources:
- name: source-cluster
  type: container.v1.cluster
  properties:
    zone: us-central1
    cluster:
      name: source
      initialNodeCount: 3
Yet a zonal cluster works fine, so I think the problem is related to getting Deployment Manager to use the container v1beta API.
resources:
- name: source-cluster
  type: container.v1.cluster
  properties:
    zone: us-central1-b
    cluster:
      name: source
      initialNodeCount: 3
Thanks.

The error message you are receiving appears to be related to the fact that you are attempting to use a beta feature while specifying the Deployment Manager resource with the v1 API (i.e. container.v1.cluster). This means there's an inconsistency between the beta resource you are trying to create and the resource type you specified.
I've had a look into this and discovered that the ability to add regional clusters via Deployment Manager is a very recent addition to Google Cloud Platform, as detailed in this public feature request, which has only recently been implemented.
It seems you would need to specify the API type as 'gcp-types/container-v1beta1:projects.locations.clusters' for this to work. Rather than using the 'zone' or 'region' key in the YAML, you would instead use a 'parent' property that includes the location.
So your YAML would look something like this (replace PROJECT_ID with your project ID):
resources:
- type: gcp-types/container-v1beta1:projects.locations.clusters
  name: source-cluster
  properties:
    parent: projects/PROJECT_ID/locations/us-central1
    cluster:
      name: source
      initialNodeCount: 3
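A note on locations: the parent path accepts a zone as well as a region, so the same v1beta1 type can also create zonal clusters that use beta features. A minimal sketch, assuming PROJECT_ID and the zone are placeholders you substitute:

```yaml
resources:
- type: gcp-types/container-v1beta1:projects.locations.clusters
  name: source-cluster
  properties:
    # locations/ takes either a region (us-central1) or a zone (us-central1-b)
    parent: projects/PROJECT_ID/locations/us-central1-b
    cluster:
      name: source
      initialNodeCount: 3
```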

Related

Enable Audit Logs using Cloud Deployment Manager GCP

I am trying to enable Audit Logs, i.e. Data Access logs, using Cloud Deployment Manager in GCP, but I am getting an error. Below is the script I have created in YAML.
resources:
- name: get-iam-policy
  action: gcp-types/cloudresourcemanager-v1:cloudresourcemanager.projects.getIamPolicy
  properties:
    resource: <project_id>
  metadata:
    runtimePolicy:
    - 'UPDATE_ALWAYS'
- name: patch-iam-policy
  action: gcp-types/cloudresourcemanager-v1:cloudresourcemanager.projects.setIamPolicy
  properties:
    resource: <project_id>
    policy:
      etag: $(ref.get-iam-policy.etag)
      auditConfigs:
      - auditLogConfigs:
        - logType: ADMIN_READ
        service: allServices
The above code is stored in the file deploy.yaml, and the following command is used to create the deployment:
gcloud deployment-manager deployments create test --config deploy.yaml
I am getting the error below:
(gcloud.deployment-manager.deployments.create) Error in Operation [operation-1621334235876-5c2984b310ba1-9e1e602f-d982565d]: errors:
- code: RESOURCE_ERROR
location: /deployments/test/resources/patch-iam-policy
message: '{"ResourceType":"gcp-types/cloudresourcemanager-v1:cloudresourcemanager.projects.setIamPolicy","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid
JSON payload received. Unknown name \"auditConfigs\": Cannot find field.","status":"INVALID_ARGUMENT","details":[{"#type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"description":"Invalid
JSON payload received. Unknown name \"auditConfigs\": Cannot find field."}]}],"statusMessage":"Bad
Request","requestPath":"https://cloudresourcemanager.googleapis.com/v1/projects/******:setIamPolicy","httpMethod":"POST"}}'
I did some testing with this example and looked up the relevant documentation.
Your code seems correct, and YAML syntax validators agree.
When I try to deploy your code, I get exactly the same error message as you.
This looks like it may be a bug, so I would recommend raising an issue on Google IssueTracker.

Unable to set high availability for Cloud SQL

I am trying to create a Cloud SQL instance using Deployment Manager.
Most of my configuration works, apart from settings.availabilityType.
jinja file -- That works
resources:
- name: dev-01
  type: gcp-types/sqladmin-v1beta4:instances
  properties:
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    region: europe-west1
    databaseVersion: POSTGRES_9_6
    settings:
      tier: db-custom-1-3840
      storageAutoResize: true
      dataDiskSizeGb: 10
      dataDiskType: PD_SSD
      replicationType: SYNCHRONOUS
      failoverReplica:
        available: true
      backupConfiguration:
        enabled: true
      locationPreference:
        zone: europe-west1-b
      activationPolicy: ALWAYS
jinja file -- That doesn't work
resources:
- name: dev-01
  type: gcp-types/sqladmin-v1beta4:instances
  properties:
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    region: europe-west1
    databaseVersion: POSTGRES_9_6
    settings:
      tier: db-custom-1-3840
      storageAutoResize: true
      dataDiskSizeGb: 10
      dataDiskType: PD_SSD
      replicationType: SYNCHRONOUS
      failoverReplica:
        available: true
      backupConfiguration:
        enabled: true
      locationPreference:
        zone: europe-west1-b
      activationPolicy: ALWAYS
      availabilityType: REGIONAL
I am getting an "Invalid API call" error.
EDIT #1
From the GUI I can add HA with one click, and without any existing failover instances.
That is because you are trying to deploy an HA configuration with a locationPreference. The gcloud command to create an HA Cloud SQL instance expects only the region and failover-related details. See here.
Follow this repo and you will find some good samples.
Specifically, this part of the code gives you the template to follow.
Another user had a similar issue in this thread; it's worth a look, especially Jordi Miralles' answer.
For additional information, you should take a look into GCP docs.
Overview of the high availability configuration
Enabling and disabling high availability on an instance
Cloud SQL instance resource
Regarding the edit note: the regional availability configuration (the one for PostgreSQL) does not require a failover instance, since it's based on regional persistent disks. More info in the docs.
Failover instances were only for MySQL instances, and that setup is now considered legacy (the docs imply it's going to be deprecated in 2020) in favor of the same HA system PostgreSQL uses: regional persistent disks.
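Putting the notes above together, a minimal sketch of a PostgreSQL HA instance that sets availabilityType and omits the failoverReplica block entirely (values carried over from the question; treat them as placeholders):

```yaml
resources:
- name: dev-01
  type: gcp-types/sqladmin-v1beta4:instances
  properties:
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    region: europe-west1
    databaseVersion: POSTGRES_9_6
    settings:
      tier: db-custom-1-3840
      # REGIONAL HA for PostgreSQL uses regional persistent disks,
      # so no failoverReplica section is needed
      availabilityType: REGIONAL
      dataDiskSizeGb: 10
      dataDiskType: PD_SSD
      backupConfiguration:
        enabled: true
```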

GCP Deployment Manger not creating network peerings

I have a Deployment Manager template that creates a bunch of network assets and VMs. It runs fine with no errors reported; however, no VPC peerings are ever created. It works fine if I create a peering via the console or via gcloud on the CLI.
Peering fails (with no error msg):
# Create the required routes to talk to prod project
- name: mytest-network
  type: compute.v1.network
  properties:
    name: mytest
    autoCreateSubnetworks: false
    peerings:
    - name: mytest-to-prod
      network: projects/my-prod-project/global/networks/default
      autoCreateRoutes: true
Peering Works:
$ gcloud compute networks peerings create mytest-to-prod --project=myproject --network=default --peer-network=projects/my-prod-project/global/networks/default --auto-create-routes
The peering cannot be done at network creation time, as per the API reference.
First the network needs to be created; once it has been created successfully, the addPeering method needs to be called.
This explains why your YAML definition created the network but not the peering, and why it worked after running the gcloud command, which calls the addPeering method.
It is possible to create the networks and do the peering in one YAML file by using Deployment Manager actions:
resources:
- name: mytest-network1
  type: compute.v1.network
  properties:
    name: mytest1
    autoCreateSubnetworks: false
- name: mytest-network2
  type: compute.v1.network
  properties:
    name: mytest2
    autoCreateSubnetworks: false
- name: addPeering2-1
  action: gcp-types/compute-v1:compute.networks.addPeering
  metadata:
    runtimePolicy:
    - CREATE
    dependsOn:
    - mytest-network1
    - mytest-network2
  properties:
    network: mytest-network2
    name: vpc-2-1
    autoCreateRoutes: true
    peerNetwork: $(ref.mytest-network1.selfLink)
- name: addPeering1-2
  action: gcp-types/compute-v1:compute.networks.addPeering
  metadata:
    runtimePolicy:
    - CREATE
    dependsOn:
    - mytest-network1
    - mytest-network2
  properties:
    network: mytest-network1
    name: vpc-1-2
    autoCreateRoutes: true
    peerNetwork: $(ref.mytest-network2.selfLink)
You can copy-paste the YAML above and create the deployment, and the peering should be done. The actions use the dependsOn option to make sure the networks are created first; when the deployment is deleted, the peerings are removed by calling the removePeering method and then the networks are deleted.
Note: Deployment Manager actions are as yet undocumented, but there are several examples in the GoogleCloudPlatform/deploymentmanager-samples repository, such as this and this.
Since it works as expected from gcloud, update your YAML file to use "peerings[].network" when specifying the list of peered network resources.

openshift origin - using dynamic ebs volumes

I am running OpenShift Origin 3.6 (kube v1.6.1+5115d708d7) in AWS. The Ansible inventory contains the cloud provider configuration, and I can see the config files on the master nodes.
# From inventory
# AWS
openshift_cloudprovider_kind=aws
openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
I have also provisioned a StorageClass:
# oc get storageclass
NAME             TYPE
fast (default)   kubernetes.io/aws-ebs
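For context, a StorageClass like the one listed above would typically have been created from a manifest along these lines (a hedged sketch; the gp2 volume type is an assumption, and on kube 1.6 the default-class marker is still the beta annotation):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
  annotations:
    # the beta annotation is what kube 1.6 checks for the default class
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2  # assumed volume type
```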
However, when I try to create a PVC:
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
  name: "testclaim"
  namespace: testns
spec:
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: "3Gi"
  storageClassName: fast
It just goes into an infinite loop trying to get the PVC created. Events show me this error:
(combined from similar events): Failed to provision volume with StorageClass "fast": UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: $(encoded-message) status code: 403, request id: d0742e84-a2e1-4bfd-b642-c6f1a61ddc1b
Unfortunately I cannot decode the encoded message using the AWS CLI, as it gives an error:
aws sts decode-authorization-message --encoded-message $(encoded-message)
Error: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
I haven't tried manual PV + PVC creation, as I am looking for dynamic provisioning. Any guidance as to what I might be doing wrong?
So far I have been able to deploy pods, services etc and they seem to be working fine.
That error appears to be an AWS IAM error:
UnauthorizedOperation: You are not authorized to perform this operation. Check your IAM policies, and ensure that you are using the correct access keys. For more information, see Controlling Access. If the returned message is encoded, you can decode it using the DecodeAuthorizationMessage action. For more information, see DecodeAuthorizationMessage in the AWS Security Token Service API Reference.
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html#CommonErrors
You'll need to create the appropriate IAM Policies: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ExamplePolicies_EC2.html#iam-example-manage-volumes
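As a hedged sketch (not the exact policy from the linked docs), an IAM policy granting the volume-management actions that dynamic EBS provisioning typically needs might look like the following; scope Resource more tightly for production use:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateVolume",
        "ec2:DeleteVolume",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DescribeVolumes",
        "ec2:DescribeInstances",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}
```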

Google cloud deployment manager update Container cluster

I'm trying to create a Google Cloud Deployment Manager configuration to deploy and manage a Google Cloud Container cluster. So far, creating the cluster works; however, updating fails. If I change a setting, the deployment fails with an error message I can't decipher:
code: RESOURCE_ERROR
location: /deployments/my-first-cluster/resources/my-first-test-cluster-setup
message:
'{"ResourceType":"container.v1.cluster","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid
JSON payload received. Unknown name \"cluster\": Cannot find field.","status":"INVALID_ARGUMENT","details":[{"#type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"description":"Invalid
JSON payload received. Unknown name \"cluster\": Cannot find field."}]}],"statusMessage":"Bad
Request","requestPath":"https://container.googleapis.com/v1/projects/*****/zones/europe-west1-b/clusters/my-first-cluster"}}'
The relevant configuration:
resources:
- name: my-first-test-cluster-setup
  type: container.v1.cluster
  properties:
    zone: europe-west1-b
    cluster:
      name: my-first-cluster
      description: My first cluster setup
      nodePools:
      - name: my-cluster-node-pool
        config:
          machineType: n1-standard-1
        initialNodeCount: 1
        autoscaling:
          enabled: true
          minNodeCount: 3
          maxNodeCount: 5
        management:
          autoUpgrade: true
          autoRepair: true
It looks like this is a bug in Deployment Manager, which means it is not able to update GKE clusters. The bug is reported here, and it shows the same strange 'unknown name "cluster"' message that you see.
There is no suggestion on the ticket about a workaround or resolution.
We have seen this same problem when updating a different cluster property.