Google cloud deployment manager update Container cluster - google-cloud-platform

I'm trying to create a Google Cloud Deployment Manager configuration to deploy and manage a Google Cloud Container cluster. So far, creating a configuration to create a cluster works, however updating fails. If I change a setting, the execution of the script fails with an error message I can't decipher:
code: RESOURCE_ERROR
location: /deployments/my-first-cluster/resources/my-first-test-cluster-setup
message:
'{"ResourceType":"container.v1.cluster","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid
JSON payload received. Unknown name \"cluster\": Cannot find field.","status":"INVALID_ARGUMENT","details":[{"#type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"description":"Invalid
JSON payload received. Unknown name \"cluster\": Cannot find field."}]}],"statusMessage":"Bad
Request","requestPath":"https://container.googleapis.com/v1/projects/*****/zones/europe-west1-b/clusters/my-first-cluster"}}'
The relevant configuration:
resources:
- name: my-first-test-cluster-setup
  type: container.v1.cluster
  properties:
    zone: europe-west1-b
    cluster:
      name: my-first-cluster
      description: My first cluster setup
      nodePools:
      - name: my-cluster-node-pool
        config:
          machineType: n1-standard-1
        initialNodeCount: 1
        autoscaling:
          enabled: true
          minNodeCount: 3
          maxNodeCount: 5
        management:
          autoUpgrade: true
          autoRepair: true

It looks like this is a bug in Deployment Manager, which means it is not able to update GKE clusters. The bug is reported here; it shows the same strange 'Unknown name "cluster"' message that you see.
There is no suggestion on the ticket about workarounds or a resolution.
We have seen this same problem when updating a different cluster property.
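Until the bug is fixed, one possible workaround (a sketch, not an official resolution) is to apply the changed settings directly with gcloud instead of through Deployment Manager, leaving the deployment itself untouched. The cluster, zone, and node pool names below are taken from the configuration in the question:

```shell
# Hypothetical workaround: change autoscaling bounds directly with gcloud
# rather than updating the Deployment Manager deployment.
gcloud container clusters update my-first-cluster \
  --zone europe-west1-b \
  --node-pool my-cluster-node-pool \
  --enable-autoscaling --min-nodes 3 --max-nodes 5
```

Note that Deployment Manager will not know about changes made this way, so the deployment configuration can drift from the live cluster.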

Related

AWS MWAA: Some of the provided configurations do not have the expected format: scheduler, e.g: core.log_format

I am deploying Managed Airflow (MWAA) in AWS and getting the error below from CloudFormation. I am giving log_format exactly as mentioned in the Airflow documentation, but CFN still raises the error.
Resource handler returned message: "Invalid request provided: Some of
the provided configurations do not have the expected format:
scheduler, e.g: core.log_format. (Service: Mwaa, Status Code: 400,
HandlerErrorCode: InvalidRequest)
Following are the Airflow configuration options I am giving in the CloudFormation template.
AirflowConfigurationOptions:
  core:
    parallelism: 64
    default_task_retries: 3
    default_timezone: Australia/Melbourne
    dag_concurrency: 16
    maximum_active_runs_per_dag: 16
    load_examples: False
    load_default_connections: False
    log_format: "[%%(asctime)s] {{%%(filename)s:%%(lineno)d}} %%(levelname)s - %%(message)s"
  webserver:
    default_ui_timezone: Australia/Melbourne
  scheduler:
    catchup_by_default: False
    allow_trigger_in_future: True
  operators:
    default_owner: vulcan
  smart_sensor:
    use_smart_sensor: True
    shards: 8
  logging:
    remote_logging: True
    remote_log_conn_id: s3
    remote_base_log_folder:
    - Fn::ImportValue: xxxxx
The correct way to specify the Airflow configuration options in CFN is flat dot notation, not nested sections:
env:
  Type: AWS::MWAA::Environment
  Properties:
    AirflowConfigurationOptions:
      core.parallelism: 64
      core.default_task_retries: 3
      core.default_timezone: Australia/Melbourne
And the remote logging connection should first be created in AWS Secrets Manager; it can then be referenced by the remote logging Airflow config in CFN.
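As a sketch of that last point (the resource name and secret contents are assumptions; the airflow/connections/ prefix follows the convention used by the Secrets Manager backend in the MWAA documentation):

```yaml
# Hypothetical secret holding the Airflow connection used for remote logging.
# The final path segment must match the configured remote_log_conn_id ("s3" here).
S3LogsConnectionSecret:
  Type: AWS::SecretsManager::Secret
  Properties:
    Name: airflow/connections/s3
    SecretString: '{"conn_type": "aws"}'
```

This assumes the environment's secrets backend is configured to read connections from Secrets Manager, as described in the MWAA documentation; a connection URI string can be stored instead of JSON.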

Enable Audit Logs using Cloud Deployment Manager GCP

I am trying to enable Audit Logs, i.e. Data Access logs, using Cloud Deployment Manager in GCP, but I am getting an error. Below is the script I have created in YAML.
resources:
- name: get-iam-policy
  action: gcp-types/cloudresourcemanager-v1:cloudresourcemanager.projects.getIamPolicy
  properties:
    resource: <project_id>
  metadata:
    runtimePolicy:
    - 'UPDATE_ALWAYS'
- name: patch-iam-policy
  action: gcp-types/cloudresourcemanager-v1:cloudresourcemanager.projects.setIamPolicy
  properties:
    resource: <project_id>
    policy:
      etag: $(ref.get-iam-policy.etag)
      auditConfigs:
      - auditLogConfigs:
        - logType: ADMIN_READ
        service: allServices
The above code is stored in the file deploy.yaml, and the deployment is created with the command below:
gcloud deployment-manager deployments create test --config deploy.yaml
I am getting the error below:
(gcloud.deployment-manager.deployments.create) Error in Operation [operation-1621334235876-5c2984b310ba1-9e1e602f-d982565d]: errors:
- code: RESOURCE_ERROR
location: /deployments/test/resources/patch-iam-policy
message: '{"ResourceType":"gcp-types/cloudresourcemanager-v1:cloudresourcemanager.projects.setIamPolicy","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid
JSON payload received. Unknown name \"auditConfigs\": Cannot find field.","status":"INVALID_ARGUMENT","details":[{"#type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"description":"Invalid
JSON payload received. Unknown name \"auditConfigs\": Cannot find field."}]}],"statusMessage":"Bad
Request","requestPath":"https://cloudresourcemanager.googleapis.com/v1/projects/******:setIamPolicy","httpMethod":"POST"}}'
I experimented with this example and looked up the relevant documentation.
Your code seems correct, and YAML syntax validators agree.
When I try to deploy your code, I get exactly the same error message as you.
This looks like it may be a bug, so I would recommend raising an issue on Google IssueTracker.
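Until that is resolved, a possible workaround (a sketch, outside Deployment Manager) is to patch the audit configuration with gcloud directly; PROJECT_ID is a placeholder:

```shell
# Export the current IAM policy, including any existing audit configuration.
gcloud projects get-iam-policy PROJECT_ID --format yaml > policy.yaml

# Append the desired auditConfigs block to policy.yaml, e.g.:
#   auditConfigs:
#   - service: allServices
#     auditLogConfigs:
#     - logType: ADMIN_READ

# Write the modified policy back to the project.
gcloud projects set-iam-policy PROJECT_ID policy.yaml
```

This performs the same getIamPolicy/setIamPolicy round trip the Deployment Manager actions attempt, but through the CLI.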

Unable to set high availability for Cloud SQL

I am trying to create Cloud SQL using Deployment Manager.
Most of my configuration works apart from settings.availabilityType
jinja file -- That works
resources:
- name: dev-01
  type: gcp-types/sqladmin-v1beta4:instances
  properties:
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    region: europe-west1
    databaseVersion: POSTGRES_9_6
    settings:
      tier: db-custom-1-3840
      storageAutoResize: true
      dataDiskSizeGb: 10
      dataDiskType: PD_SSD
      replicationType: SYNCHRONOUS
      failoverReplica:
        available: true
      backupConfiguration:
        enabled: true
      locationPreference:
        zone: europe-west1-b
      activationPolicy: ALWAYS
jinja file -- That doesn't work
resources:
- name: dev-01
  type: gcp-types/sqladmin-v1beta4:instances
  properties:
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    region: europe-west1
    databaseVersion: POSTGRES_9_6
    settings:
      tier: db-custom-1-3840
      storageAutoResize: true
      dataDiskSizeGb: 10
      dataDiskType: PD_SSD
      replicationType: SYNCHRONOUS
      failoverReplica:
        available: true
      backupConfiguration:
        enabled: true
      locationPreference:
        zone: europe-west1-b
      activationPolicy: ALWAYS
      availabilityType: REGIONAL
I am getting error... Invalid API call...
EDIT#1
From the GUI I can add the HA with one click and without any existing failover instances.
That is because you are trying to deploy an HA configuration together with a locationPreference. The gcloud command that creates an HA Cloud SQL instance expects only the region and failover-related details. See here.
Follow this repo and you will find some good samples.
Specifically, this part of the code gives you the template to follow.
Another user had a similar issue to yours in this thread.
It is worth a look, especially Jordi Miralles' answer.
For additional information, take a look at the GCP docs:
Overview of the high availability configuration
Enabling and disabling high availability on an instance
Cloud SQL instance resource
Regarding the edit note: the regional availability configuration (the one for PostgreSQL) does not require a failover instance, since it is based on regional persistent disks. More info in the docs.
Failover instances were only for MySQL instances, and that approach is now considered legacy (the docs imply it will be deprecated in 2020) in favor of the same HA mechanism as PostgreSQL: regional persistent disks.
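Putting that together, a minimal sketch of the settings block (same instance definition as in the question, with locationPreference and failoverReplica dropped and only the HA key added; whether other keys are needed depends on your instance):

```yaml
# Hypothetical minimal change: request regional HA and let Cloud SQL
# place the instance; do not pin a zone with locationPreference.
settings:
  tier: db-custom-1-3840
  availabilityType: REGIONAL
  backupConfiguration:
    enabled: true
```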

Use GKE beta features with GCP Deployment Manager

I'm trying to create a GKE regional cluster (a beta feature) with GCP Deployment Manager.
But I get an error. Is there any way to use GKE beta features (including regional clusters) with Deployment Manager?
ERROR: (gcloud.beta.deployment-manager.deployments.create) Error in
Operation [operation-1525054837836-56b077fdf48e0-a571296c-604523fb]:
errors:
- code: RESOURCE_ERROR
location: /deployments/test-cluster1/resources/source-cluster
message: '{"ResourceType":"container.v1.cluster","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"v1 API cannot be used to access GKE regional clusters. See https://cloud.google.com/kubernetes-engine/docs/reference/api-organization#beta for more information.","status":"INVALID_ARGUMENT","statusMessage":"Bad Request","requestPath":"https://container.googleapis.com/v1/projects/project_id/zones/us-central1/clusters","httpMethod":"POST"}}'
The error message links to a GCP help page:
https://cloud.google.com/kubernetes-engine/docs/reference/api-organization#beta
I configured things as described there, but the error still appears.
My Deployment Manager YAML file looks like this:
resources:
- name: source-cluster
  type: container.v1.cluster
  properties:
    zone: us-central1
    cluster:
      name: source
      initialNodeCount: 3
Yet, a zonal cluster works completely, so I think it's related to the use of the container v1beta1 API in deployment-manager commands.
resources:
- name: source-cluster
  type: container.v1.cluster
  properties:
    zone: us-central1-b
    cluster:
      name: source
      initialNodeCount: 3
Thanks.
The error message you are receiving appears to be related to the fact that you are attempting to use a beta feature while specifying the Deployment Manager resource as using API v1 (i.e. container.v1.cluster). This means there is an inconsistency between the beta resource you are trying to create and the resource type you specified.
I've had a look into this and discovered that the ability to add regional clusters via Deployment Manager is a very recent addition to Google Cloud Platform as detailed in this public feature request which has only recently been implemented.
It seems you would need to specify the type as 'gcp-types/container-v1beta1:projects.locations.clusters' for this to work, and rather than using a 'zone' or 'region' key in the YAML, you would instead use a 'parent' property that includes the location.
So your YAML would look something like this (replace PROJECT_ID with your own).
resources:
- type: gcp-types/container-v1beta1:projects.locations.clusters
  name: source-cluster
  properties:
    parent: projects/PROJECT_ID/locations/us-central1
    cluster:
      name: source
      initialNodeCount: 3
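The deployment would then be created with the beta Deployment Manager command track, matching the command visible in the original error output (the config file name here is an assumption):

```shell
# Hypothetical invocation; cluster.yaml is a placeholder for your config file.
gcloud beta deployment-manager deployments create test-cluster1 --config cluster.yaml
```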

openshift origin - using dynamic ebs volumes

I am running OpenShift Origin 3.6 (kube v1.6.1+5115d708d7) in AWS. The Ansible inventory contains the cloud provider configuration, and I can see the config files on the master nodes.
# From inventory
# AWS
openshift_cloudprovider_kind=aws
openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
I have also provisioned a storageclass
# oc get storageclass
NAME             TYPE
fast (default)   kubernetes.io/aws-ebs
However, when I try to create a PVC:
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
  name: "testclaim"
  namespace: testns
spec:
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: "3Gi"
  storageClassName: fast
It just goes into an infinite loop trying to get the PVC created. Events show me this error:
(combined from similar events): Failed to provision volume with StorageClass "fast": UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: $(encoded-message) status code: 403, request id: d0742e84-a2e1-4bfd-b642-c6f1a61ddc1b
Unfortunately, I cannot decode the encoded message using the AWS CLI, as it gives an error.
aws sts decode-authorization-message -–encoded-message $(encoded-message)
Error: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
I haven't tried pv+pvc creation, as I am looking for dynamic provisioning. Any guidance as to what I might be doing wrong?
So far I have been able to deploy pods, services etc and they seem to be working fine.
That error appears to be an AWS IAM error:
UnauthorizedOperation
You are not authorized to perform this operation. Check your IAM
policies, and ensure that you are using the correct access keys. For
more information, see Controlling Access. If the returned message is
encoded, you can decode it using the DecodeAuthorizationMessage
action. For more information, see DecodeAuthorizationMessage in the
AWS Security Token Service API Reference.
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html#CommonErrors
You'll need to create the appropriate IAM Policies: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ExamplePolicies_EC2.html#iam-example-manage-volumes
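As a sketch of such a policy (the exact action list is an assumption; the linked AWS example policy covers volume management), the IAM role or user backing the configured access keys would need something like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateVolume",
        "ec2:DeleteVolume",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DescribeVolumes",
        "ec2:DescribeInstances",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}
```

Incidentally, the decode command in the question fails because it was typed with a non-ASCII dash (`-–encoded-message`); with a plain ASCII `--encoded-message` the message should decode and reveal exactly which action was denied.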