Unable to set high availability for Cloud SQL - google-cloud-platform

I am trying to create a Cloud SQL instance using Deployment Manager.
Most of my configuration works, apart from settings.availabilityType.
jinja file -- this works:
resources:
- name: dev-01
  type: gcp-types/sqladmin-v1beta4:instances
  properties:
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    region: europe-west1
    databaseVersion: POSTGRES_9_6
    settings:
      tier: db-custom-1-3840
      storageAutoResize: true
      dataDiskSizeGb: 10
      dataDiskType: PD_SSD
      replicationType: SYNCHRONOUS
      failoverReplica:
        available: true
      backupConfiguration:
        enabled: true
      locationPreference:
        zone: europe-west1-b
      activationPolicy: ALWAYS
jinja file -- this doesn't work:
resources:
- name: dev-01
  type: gcp-types/sqladmin-v1beta4:instances
  properties:
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    region: europe-west1
    databaseVersion: POSTGRES_9_6
    settings:
      tier: db-custom-1-3840
      storageAutoResize: true
      dataDiskSizeGb: 10
      dataDiskType: PD_SSD
      replicationType: SYNCHRONOUS
      failoverReplica:
        available: true
      backupConfiguration:
        enabled: true
      locationPreference:
        zone: europe-west1-b
      activationPolicy: ALWAYS
      availabilityType: REGIONAL
I am getting an error... Invalid API call...
EDIT#1
From the GUI I can enable HA with one click, without any existing failover instance.

That is because you are trying to deploy an HA configuration together with a locationPreference. The gcloud command to create an HA Cloud SQL instance expects only the region and failover-related details. See here
Follow this repo and you will find some good samples.
Specifically, this part of the code gives you the template to follow.
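A minimal sketch of such a template (not the exact code from the repo; instance name, tier and region are reused from the question): availabilityType goes under settings, and the zonal locationPreference is dropped.

```yaml
# Sketch only: regional HA for PostgreSQL via availabilityType,
# with no locationPreference and no failover replica.
resources:
- name: dev-01
  type: gcp-types/sqladmin-v1beta4:instances
  properties:
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    region: europe-west1
    databaseVersion: POSTGRES_9_6
    settings:
      tier: db-custom-1-3840
      availabilityType: REGIONAL
      backupConfiguration:
        enabled: true
```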

Another user had a similar issue to yours in this thread.
It is worth a look, especially Jordi Miralles' answer.
For additional information, take a look at the GCP docs:
Overview of the high availability configuration
Enabling and disabling high availability on an instance
Cloud SQL instance resource

Regarding the edit note: the regional availability configuration (the one used by PostgreSQL) does not require a failover instance, since it is based on regional persistent disks. More info in the docs.
Failover instances were only for MySQL instances, and that setup is now considered legacy (the docs imply it is going to be deprecated in 2020) in favor of the same HA mechanism PostgreSQL uses: regional persistent disks.
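As a side note, outside Deployment Manager the same switch is a one-line patch with gcloud (a sketch, assuming an existing instance named dev-01):

```shell
# Switch an existing Cloud SQL instance to regional HA.
gcloud sql instances patch dev-01 --availability-type REGIONAL
```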

Related

Include OS type (Linux/Windows) in Cloud Custodian's EC2 findings for AWS Security Hub

We have a Cloud Custodian policy for AWS EC2 that posts its findings to AWS Security Hub.
Is there a way to include the EC2 OS type (Linux/Windows) in the details that are sent to Security Hub by Cloud Custodian?
We're pushing Security Hub findings to Sumo Logic and need to query these findings by OS.
Here's our policy:
policies:
- name: ec2-report-compliant-base-linux
  resource: ec2
  mode:
    type: periodic
    schedule: rate(1 hour)
  filters:
  - PlatformDetails: Linux/UNIX
  - type: value
    key: ImageId
    op: in
    value:
    - ami-0123456789
    - ami-1234567890
    - ami-2345678901
  actions:
  - type: post-finding
    confidence: 100
    severity: 0
    severity_normalized: 0
    compliance_status: PASSED
    title: Compliant AMI
    types:
    - "Software and Configuration Checks/AWS Security Best Practices/Compliant Linux AMI"
Although it's technically possible to query by the "type" in this example to get Linux instances...
%Type = Software and Configuration Checks/AWS Security Best Practices/Compliant Linux AMI
...there are other similar use cases we have, where we need to query by OS type directly in Sumo Logic.
So, is there a way to include OS type in the findings posted by Cloud Custodian to Security Hub?

How to Create a Network and SubNetwork using Google deployment-manager (GCP)

I just started studying GCP Deployment Manager, and I'm creating a file to create one network and one subnetwork. I tested using two separate files (one for each) and it worked fine. Now, when I combine the creation of the network and the subnetwork, there are two problems:
During creation, when the network finishes creating and the subnetwork step starts, it looks like the network info is not yet available and I get a "resource not found" error. But if I run an update again, the subnet is created.
During deletion, Deployment Manager tries to delete the network before the subnetwork and I get the message "resource is in use, you can't delete".
So, I would like some help here with best practices about this. Many thanks.
My config:
main.yml
imports:
- path: network.jinja
- path: subnetwork.jinja

resources:
- name: network
  type: network.jinja
- name: subnetwork
  type: subnetwork.jinja
network.jinja
resources:
- type: gcp-types/compute-v1:networks
  name: network-{{ env["deployment"] }}
  properties:
    routingConfig:
      routingMode: REGIONAL
    autoCreateSubnetworks: false
subnetwork.jinja
resources:
- type: gcp-types/compute-v1:subnetworks
  name: subnetwork-{{ env["deployment"] }}
  properties:
    region: us-central1
    network: https://www.googleapis.com/compute/v1/projects/XXXXXXXX/global/networks/network-{{ env["deployment"] }}
    ipCidrRange: 10.10.10.0/24
    privateIpGoogleAccess: false
It is likely your issue resulted because Deployment Manager didn't recognize that there were dependencies between your resources. Here is a working YAML that I used:
resources:
- type: gcp-types/compute-v1:networks
  name: network-mynet
  properties:
    routingConfig:
      routingMode: REGIONAL
    autoCreateSubnetworks: false
- type: gcp-types/compute-v1:subnetworks
  name: subnetwork-mynet
  properties:
    region: us-central1
    network: $(ref.network-mynet.selfLink)
    ipCidrRange: 10.10.10.0/24
    privateIpGoogleAccess: false
I believe the major difference is that in this example, the network element within the subnetwork definition uses a Deployment Manager reference. By leveraging this technique, the solution becomes more declarative, and the relationships between resources can be deduced.
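If you prefer to keep the full network URL instead of a reference, Deployment Manager also lets you declare the dependency explicitly with the metadata.dependsOn field. A sketch (resource names reused from the answer above, my-project is a placeholder project ID):

```yaml
# Sketch: explicit ordering via metadata.dependsOn instead of $(ref...).
resources:
- name: network-mynet
  type: gcp-types/compute-v1:networks
  properties:
    routingConfig:
      routingMode: REGIONAL
    autoCreateSubnetworks: false
- name: subnetwork-mynet
  type: gcp-types/compute-v1:subnetworks
  metadata:
    dependsOn:       # created after, deleted before, network-mynet
    - network-mynet
  properties:
    region: us-central1
    network: https://www.googleapis.com/compute/v1/projects/my-project/global/networks/network-mynet
    ipCidrRange: 10.10.10.0/24
    privateIpGoogleAccess: false
```

References are generally preferable because the dependency is deduced automatically, but dependsOn is useful when no output of the first resource appears in the second.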

GKE preemptible VMs using deployment manager

I'm planning to adopt Deployment Manager for all GCP service creation. But based on the documentation, I cannot see any option to make the nodes created in the cluster preemptible.
I am hoping there is a way that is just not written in the document, as in my experience there are sometimes options that are not documented.
Below is the jinja template for it:
resources:
- name: practice-gke-clusters
  type: container.v1.cluster
  properties:
    zone: asia-east2-a
    cluster:
      name: practice-gke-clusters
      network: $(ref.practice-gke-network.selfLink)
      subnetwork: $(ref.practice-gke-network-subnet-1.selfLink)
      initialNodeCount: 1
      loggingService: logging.googleapis.com
      monitoringService: monitoring.googleapis.com
      nodeConfig:
        oauthScopes:
        - https://www.googleapis.com/auth/compute
        - https://www.googleapis.com/auth/devstorage.read_only
        - https://www.googleapis.com/auth/logging.write
        - https://www.googleapis.com/auth/monitoring
Preemptible VMs are in the Beta stage on Google Kubernetes Engine (GKE). As per the documentation, you need to set the preemptible value to true in the node configuration of your deployment script, such as this.
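Applied to the template in the question, the relevant fragment would look something like this (a sketch; the preemptible flag sits inside nodeConfig in the GKE API):

```yaml
# Sketch: only the nodeConfig part of the cluster changes.
    cluster:
      name: practice-gke-clusters
      initialNodeCount: 1
      nodeConfig:
        preemptible: true   # nodes are created as preemptible VMs
        oauthScopes:
        - https://www.googleapis.com/auth/compute
        - https://www.googleapis.com/auth/devstorage.read_only
        - https://www.googleapis.com/auth/logging.write
        - https://www.googleapis.com/auth/monitoring
```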

Use GKE beta features with GCP Deployment Manager

I'm trying to create a GKE REGIONAL cluster (a beta feature) with GCP Deployment Manager.
But I got an error. Is there any way to use GKE beta features (including regional clusters) with Deployment Manager?
ERROR: (gcloud.beta.deployment-manager.deployments.create) Error in
Operation [operation-1525054837836-56b077fdf48e0-a571296c-604523fb]:
errors:
- code: RESOURCE_ERROR
location: /deployments/test-cluster1/resources/source-cluster
message: '{"ResourceType":"container.v1.cluster","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"v1 API cannot be used to access GKE regional clusters. See https://cloud.google.com/kubernetes-engine/docs/reference/api-organization#beta for more information.","status":"INVALID_ARGUMENT","statusMessage":"Bad Request","requestPath":"https://container.googleapis.com/v1/projects/project_id/zones/us-central1/clusters","httpMethod":"POST"}}'
The error message links to GCP help:
https://cloud.google.com/kubernetes-engine/docs/reference/api-organization#beta
I configured it as described there, but the error still appears.
My Deployment Manager yaml file looks like:
resources:
- name: source-cluster
  type: container.v1.cluster
  properties:
    zone: us-central1
    cluster:
      name: source
      initialNodeCount: 3
Yet, a zonal cluster works completely fine, so I think it's related to the use of the container v1beta1 API in deployment-manager commands.
resources:
- name: source-cluster
  type: container.v1.cluster
  properties:
    zone: us-central1-b
    cluster:
      name: source
      initialNodeCount: 3
Thanks.
The error message you are receiving appears to be related to the fact that you are attempting to use a beta feature while specifying the Deployment Manager resource as using API v1 (i.e. container.v1.cluster). This means there's an inconsistency between the beta resource you are trying to create and the resource type you specified.
I've had a look into this and discovered that the ability to add regional clusters via Deployment Manager is a very recent addition to Google Cloud Platform as detailed in this public feature request which has only recently been implemented.
It seems you would need to specify the type as 'gcp-types/container-v1beta1:projects.locations.clusters' for this to work and, rather than using a 'zone' or 'region' key in the YAML, use a 'parent' property that includes the location.
So your YAML would look something like this (replace PROJECT_ID with your own).
resources:
- type: gcp-types/container-v1beta1:projects.locations.clusters
  name: source-cluster
  properties:
    parent: projects/PROJECT_ID/locations/us-central1
    cluster:
      name: source
      initialNodeCount: 3

Google cloud deployment manager update Container cluster

I'm trying to create a Google Cloud Deployment Manager configuration to deploy and manage a Google Cloud Container cluster. So far, creating a configuration to create a cluster works, however updating fails. If I change a setting, the execution of the script fails with an error message I can't decipher:
code: RESOURCE_ERROR
location: /deployments/my-first-cluster/resources/my-first-test-cluster-setup
message: '{"ResourceType":"container.v1.cluster","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid JSON payload received. Unknown name \"cluster\": Cannot find field.","status":"INVALID_ARGUMENT","details":[{"#type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"description":"Invalid JSON payload received. Unknown name \"cluster\": Cannot find field."}]}],"statusMessage":"Bad Request","requestPath":"https://container.googleapis.com/v1/projects/*****/zones/europe-west1-b/clusters/my-first-cluster"}}'
The relevant configuration:
resources:
- name: my-first-test-cluster-setup
  type: container.v1.cluster
  properties:
    zone: europe-west1-b
    cluster:
      name: my-first-cluster
      description: My first cluster setup
      nodePools:
      - name: my-cluster-node-pool
        config:
          machineType: n1-standard-1
        initialNodeCount: 1
        autoscaling:
          enabled: true
          minNodeCount: 3
          maxNodeCount: 5
        management:
          autoUpgrade: true
          autoRepair: true
It looks like this is a bug in Deployment Manager, which means it is not able to update GKE clusters. The bug is reported here. It shows the same strange 'Unknown name "cluster"' message that you see.
There is no suggestion on the ticket about workarounds or resolution.
We have seen this same problem when updating a different cluster property.