Deployment Manager cannot update instance templates - NO_METHOD_TO_UPDATE_FIELD - google-cloud-platform

I have a deployment comprising a managed instance group and two instance templates (A and B). The deployment was initially created with the instance group referencing instance template A.
I tried updating the sourceImage in instance template B using deployment manager (gcloud beta deployment-manager deployments update my-deployment --template ...), but got the following error:
ERROR: (gcloud.beta.deployment-manager.deployments.update) Error in
Operation [operation-1538798895713-57787898f4ae9-8b478716-0bb72a09]:
errors:
- code: NO_METHOD_TO_UPDATE_FIELD
message: No method found to update field 'properties' on
resource 'fwp-app-preprod-instance-template-a' of type
'compute.v1.instanceTemplate'. The resource may need to be
recreated with the new field.
I should make it clear that the only change I made from the original deployment is the instance template's sourceImage.
Is it possible to perform an update of an instance template via deployment manager so that it references an updated sourceImage?
The error states clearly that the resource (instance template) may need to be recreated, and I'm happy for deployment manager to do that. But I have no idea how to instruct/force deployment manager to take that action.
I don't doubt it can be done outside of deployment manager, but I want to avoid configuration drift.
My app.jinja.schema:
imports:
- path: instance-group.jinja
- path: instance-template.jinja
My app.jinja:
resources:
- name: instance-template-a
  type: instance-template.jinja
  properties:
    name: {{ env["deployment"] }}-instance-template-a
    machineType: g1-small
    sourceImage: "projects/my-project/global/images/my-image"
    diskSizeGb: '30'
- name: instance-template-b
  type: instance-template.jinja
  properties:
    name: {{ env["deployment"] }}-instance-template-b
    machineType: g1-small
    sourceImage: "projects/my-project/global/images/my-image"
    diskSizeGb: '30'
- name: fwp-instance-group
  type: instance-group.jinja
My instance-group.jinja:
resources:
- name: 'instance-group-{{ env["deployment"] }}'
  type: compute.v1.regionInstanceGroupManager
  properties:
    baseInstanceName: ig-instance-{{ env["deployment"] }}
    instanceTemplate: '$(ref.{{ env["deployment"] }}-instance-template-a.selfLink)'
    targetSize: 1
    region: australia-southeast1
- name: 'autoscaler-{{ env["deployment"] }}'
  type: compute.v1.regionAutoscalers
  properties:
    autoscalingPolicy:
      coolDownPeriodSec: 60
      loadBalancingUtilization:
        utilizationTarget: 0.9
      maxNumReplicas: 10
      minNumReplicas: 2
    target: $(ref.instance-group-{{ env["deployment"] }}.selfLink)
    region: australia-southeast1
And my instance-template.jinja:
resources:
- name: {{ properties["name"] }}
  type: compute.v1.instanceTemplate
  properties:
    name: {{ properties["name"] }}
    description: ''
    properties:
      machineType: {{ properties["machineType"] }}
      tags:
        items:
        - no-ip
        - web-server
        - http-server
        - https-server
      disks:
      - type: 'PERSISTENT'
        boot: true
        mode: 'READ_WRITE'
        autoDelete: true
        deviceName: instance-device
        initializeParams:
          sourceImage: {{ properties["sourceImage"] }}
          diskType: 'pd-standard'
          diskSizeGb: {{ properties["diskSizeGb"] }}
      canIpForward: false
      networkInterfaces:
      - network: projects/my-project/global/networks/vpc-fwp-nonprod
        subnetwork: projects/my-project/regions/australia-southeast1/subnetworks/subnet-private-fwp-nonprod
        aliasIpRanges: []
      labels: { environment: {{ env["deployment"] }}, tenancy: "fwp-nonprod" }
      scheduling:
        preemptible: false
        onHostMaintenance: MIGRATE
        automaticRestart: true
        nodeAffinities: []
      serviceAccounts:
      - email: some-service-account#developer.gserviceaccount.com
        scopes:
        - https://www.googleapis.com/auth/cloud-platform

To recap the comments:
The DM config includes an instance template for the managed instance group, and the change of sourceImage is attempting to change the image used in that template.
Unfortunately, instance templates are immutable once created:
"So it is not possible to update an existing instance template or change an instance template after it has been created."
This explains the error message returned. The proper way to change the image used by a managed instance group is to create a new instance template and perform a rolling update on the group using the new template.
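Within Deployment Manager itself, the practical workaround (a sketch, not an official recipe, based on the config shown above) is to give the replacement template a new name, for example with a version suffix, so the deployment update creates a fresh instanceTemplate resource instead of trying to mutate the existing one, and then point the instance group manager's instanceTemplate reference at the new template:

- name: instance-template-b-v2
  type: instance-template.jinja
  properties:
    # A new name forces creation of a new (immutable) template carrying the new image.
    name: {{ env["deployment"] }}-instance-template-b-v2
    machineType: g1-small
    sourceImage: "projects/my-project/global/images/my-updated-image"
    diskSizeGb: '30'

If you prefer to trigger the rollout outside the deployment update, the rolling update can also be started with gcloud (assuming the regional MIG defined above, with my-deployment standing in for the actual deployment name):

gcloud compute instance-groups managed rolling-action start-update instance-group-my-deployment \
    --version template=my-deployment-instance-template-b-v2 \
    --region australia-southeast1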

Related

Mounting AWS Secrets Manager on Kubernetes/Helm chart

I have created an apps cluster deployment on AWS EKS that is deployed using Helm. For my app to work properly, I need to set environment variables that are secrets stored in AWS Secrets Manager. Following a tutorial, I set up my values in the values.yaml file like this:
secretsData:
  secretName: aws-secrets
  providerName: aws
  objectName: CodeBuild
Now I have created a secrets provider class as AWS recommends: secret-provider.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secret-provider-class
spec:
  provider: {{ .Values.secretsData.providerName }}
  parameters:
    objects: |
      - objectName: "{{ .Values.secretsData.objectName }}"
        objectType: "secretsmanager"
        jmesPath:
          - path: SP1_DB_HOST
            objectAlias: SP1_DB_HOST
          - path: SP1_DB_USER
            objectAlias: SP1_DB_USER
          - path: SP1_DB_PASSWORD
            objectAlias: SP1_DB_PASSWORD
          - path: SP1_DB_PATH
            objectAlias: SP1_DB_PATH
  secretObjects:
    - secretName: {{ .Values.secretsData.secretName }}
      type: Opaque
      data:
        - objectName: SP1_DB_HOST
          key: SP1_DB_HOST
        - objectName: SP1_DB_USER
          key: SP1_DB_USER
        - objectName: SP1_DB_PASSWORD
          key: SP1_DB_PASSWORD
        - objectName: SP1_DB_PATH
          key: SP1_DB_PATH
I mount this secret object in my deployment.yaml; the relevant section of the file looks like this:
volumeMounts:
  - name: secrets-store-volume
    mountPath: "/mnt/secrets"
    readOnly: true
env:
  - name: SP1_DB_HOST
    valueFrom:
      secretKeyRef:
        name: {{ .Values.secretsData.secretName }}
        key: SP1_DB_HOST
  - name: SP1_DB_PORT
    valueFrom:
      secretKeyRef:
        name: {{ .Values.secretsData.secretName }}
        key: SP1_DB_PORT
Further down in the same deployment file, I define secrets-store-volume as:
volumes:
  - name: secrets-store-volume
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: aws-secret-provider-class
All drivers are installed in the cluster and permissions are set accordingly.
With helm install mydeployment helm-folder/ --dry-run I can see that all the files and values are populated as expected. Then with helm install mydeployment helm-folder/ I install the deployment into my cluster, but with kubectl get all I can see the pod is stuck at Pending with the warning Error: 'aws-secrets' not found, and it eventually times out. In the AWS CloudTrail log, I can see that the cluster made a request to access the secret and there was no error fetching it. How can I solve this, or at least debug it further? Thank you for your time and efforts.
Error: 'aws-secrets' not found - it looks like the CSI driver isn't creating the Kubernetes secret that you're using to reference the values.
Since the YAML files look correct, I would say it's probably the CSI driver's "Sync as Kubernetes secret" configuration - syncSecret.enabled (which is false by default).
So make sure that secrets-store-csi-driver runs with this flag set to true, for example:
helm upgrade --install csi-secrets-store \
  --namespace kube-system secrets-store-csi-driver/secrets-store-csi-driver \
  --set grpcSupportedProviders="aws" --set syncSecret.enabled="true"
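If the flag is already set and the pod still reports the secret as missing, one quick check (a debugging sketch; replace the namespace/pod placeholders with your own) relies on the fact that the driver only creates the synced Secret after a pod mounting the CSI volume has actually been scheduled, so inspect the pod events and then look for the Secret:

# The synced Secret only appears once a pod mounting the CSI volume has started.
kubectl describe pod <your-pod> -n <your-namespace>
kubectl get secret aws-secrets -n <your-namespace> -o yaml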

Cloud SQL creation with Deployment Manager - "Precondition check failed." error

I'm using the gcp-types/sqladmin-v1beta4:instances Resource Type to create a Cloud SQL instance using the Deployment Manager and I'm getting the error below:
{
  "ResourceType": "gcp-types/sqladmin-v1beta4:instances",
  "ResourceErrorCode": "400",
  "ResourceErrorMessage": {
    "code": 400,
    "message": "Precondition check failed.",
    "status": "FAILED_PRECONDITION",
    "statusMessage": "Bad Request",
    "requestPath": "https://www.googleapis.com/sql/v1beta4/projects/[PROJECT_NAME]/instances",
    "httpMethod": "POST"
  }
}
Here's the configuration inside the JINJA file:
{% set deployment_name = env['deployment'] %}
{% set INSTANCE_NAME = deployment_name + '-instance' %}

resources:
- name: {{ INSTANCE_NAME }}
  type: gcp-types/sqladmin-v1beta4:instances
  properties:
    region: us-central1
    rootPassword: root
    settings:
      tier: db-n1-standard-1
      backupConfiguration:
        binaryLogEnabled: true
        enabled: true
- name: demand_ml_db
  type: gcp-types/sqladmin-v1beta4:databases
  properties:
    name: demand_ml_db
    instance: $(ref.{{ INSTANCE_NAME }}.name)
    charset: utf8
The FAILED_PRECONDITION error, while not very descriptive, tends to be thrown when you're attempting to deploy over a previously deleted Cloud SQL instance; the instance you selected for deletion is not cleaned up instantly, and its name cannot be reused right away. There's an Issue Tracker thread regarding this here.
I was able to verify this on my end as well. The deployment using the JINJA file you've specified worked fine at first, but when I deleted it and re-deployed, I received the same error.
The simplest approach is to use a different deployment (or instance) name.
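One way to do that directly in the template (a sketch; the '-v2' suffix is purely illustrative) is to version the instance name so a re-deployment never collides with a recently deleted instance:

{% set deployment_name = env['deployment'] %}
{# Bump this suffix after deleting a deployment, since Cloud SQL holds onto the old name for a while. #}
{% set INSTANCE_NAME = deployment_name + '-instance-v2' %}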

Connecting function to empty cloud storage bucket

I am trying to use the following info from Google deployment manager examples on GitHub.
empty_bucket_in_function.yaml
empty_bucket_cf.yaml
imports:
- path: empty_bucket_cf.jinja

resources:
- name: my-function
  type: empty_bucket_cf.jinja
  properties:
    project: <PROJECT_NAME>
    region: europe-west1
    entryPoint: handler
    runtime: nodejs8
    bucket: lskflsjfsj
empty_bucket_cf.jinja
{% set BUCKET = properties['bucket'] + '-bucket' %}

resources:
#- type: cloudfunctions.v1.function
- type: gcp-types/cloudfunctions-v1:projects.locations.functions
  name: my-function
  properties:
    parent: projects/{{ properties['project'] }}/locations/{{ properties['region'] }}
    location: {{ properties['region'] }}
    function: my-{{ properties['bucket'] }}
    sourceArchiveUrl: gs://$(ref.{{ BUCKET }}.name)/my-function
    entryPoint: {{ properties['entryPoint'] }}
    runtime: {{ properties['runtime'] }}
    eventTrigger:
      resource: $(ref.my-topic.name)
      eventType: providers/cloud.pubsub/eventTypes/topic.publish
#- type: pubsub.v1.topic
- type: gcp-types/pubsub-v1:projects.topics
  name: my-topic
  properties:
    topic: {{ properties['bucket'] }}-topic
#- type: storage.v1.bucket
- type: gcp-types/storage-v1:buckets
  name: {{ BUCKET }}
  properties:
    predefinedAcl: projectPrivate
    projection: full
    location: US
    storageClass: STANDARD
While deploying with Deployment Manager I am getting the following error:
testsetup has resource warnings
my-function: {"ResourceType":"gcp-types/cloudfunctions-v1:projects.locations.functions","ResourceErrorCode":"400","ResourceErrorMessage":"Failed to retrieve function source code"}
Deployment properties
Any idea why? Isn't this a bug in the Google Cloud Platform GitHub repository? Isn't the purpose of the empty_bucket config to create Cloud Functions with an empty bucket?
Note: Sometimes it executes successfully as well.
I don't know what Google had in mind when they published this example, but it can't work. If your bucket is empty, the function has no code. However, when you deploy a function, the code is compiled/parsed, the entry point is checked (exists, correct signature, ...), and the result is deployed to the environment.
Here there is no entry point and no code to compile/parse, thus no deployment. This is expected behavior, but the example is misleading. You can open an issue on the repo.
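A sketch of the usual fix (the archive and object names below are illustrative, derived from the bucket name in the config above): upload a real source archive to the bucket first, and make sure sourceArchiveUrl points at an object that actually exists before the function is created.

# Zip a minimal Node.js source (index.js exporting 'handler', plus package.json) and upload it
# as the object the template's sourceArchiveUrl expects (gs://lskflsjfsj-bucket/my-function).
zip my-function.zip index.js package.json
gsutil cp my-function.zip gs://lskflsjfsj-bucket/my-function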

How to create mysql database with user and password in google-cloud-platform using deployment manager?

I need to add database,root or user,password in the following:
- name: deployed-database-instance
  type: sqladmin.v1beta4.instance
  properties:
    backendType: SECOND_GEN
    databaseVersion: MYSQL_5_7
    settings:
      tier: db-f1-micro
I believe this example from this GitHub repo would be a good place to start testing. From my test I was able to create an instance, a database, and a user. See my modified version of that example below; I have mainly just removed the failover replica and changed the delete-user block to insert instead of delete:
{% set deployment_name = env['deployment'] %}
{% set instance_name = deployment_name + '-instance' %}
{% set database_name = deployment_name + '-db' %}

resources:
- name: {{ instance_name }}
  type: gcp-types/sqladmin-v1beta4:instances
  properties:
    region: {{ properties['region'] }}
    settings:
      tier: {{ properties['tier'] }}
      backupConfiguration:
        binaryLogEnabled: true
        enabled: true
- name: {{ database_name }}
  type: gcp-types/sqladmin-v1beta4:databases
  properties:
    name: {{ database_name }}
    instance: $(ref.{{ instance_name }}.name)
    charset: utf8
- name: insert-user-root
  action: gcp-types/sqladmin-v1beta4:sql.users.insert
  metadata:
    runtimePolicy:
    - CREATE
    dependsOn:
    - {{ database_name }}
  properties:
    project: {{ env['project'] }}
    instance: $(ref.{{ env['deployment'] }}-instance.name)
    name: testuser
    host: "%"
    password: testpass
So what I did was:
1) Cloned the repo;
2) Went to the directory .\examples\v2\sqladmin\jinja;
3) Modified the sqladmin.jinja file as above;
4) Opened the gcloud command prompt and went to the directory from step 2;
5) Deployed using 'gcloud deployment-manager deployments create my-database --config sqladmin.yaml'
All you would need to do is play with the names of the resources.
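To confirm the user was actually created, you could list the instance's users with gcloud (a sketch; 'my-database-instance' follows the deployment name from step 5 plus the '-instance' suffix set in the template):

gcloud sql users list --instance=my-database-instance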
I generated this from Python, but I think in jinja it would be:
properties:
  region: {{ properties['region'] }}
  rootPassword: '12345'
  settings:
    tier: {{ properties['tier'] }}
    backupConfiguration:
      binaryLogEnabled: true
      enabled: true
I just found this out today, sorry for the late reply.

CLOUD DEPLOYMENT MANAGER: Internal Load Balancer create issue

I am trying to create an internal load balancer via Deployment Manager using the following code:
- name: {{ env["name"] }}-port389-healthcheck
type: compute.v1.healthChecks
properties:
type: tcp
tcpHealthCheck: {
port: 389
}
- name: {{ env["name"] }}-port389-backend-service
type: compute.beta.backendService
properties:
healthChecks:
- $(ref.{{ env["name"] }}-port389-healthcheck.selfLink)
backends:
- group: $(ref.{{ env['name'] }}-master-instance-groups-managed.instanceGroup)
- group: $(ref.{{ env['name'] }}-slave-instance-groups-managed.instanceGroup)
protocol: TCP
region: {{ properties['region'] }}
loadBalancingScheme: INTERNAL
- name: {{ env["name"] }}-port389-forwarding-rule
type: compute.beta.forwardingRule
properties:
loadBalancingScheme: INTERNAL
ports:
- 389
network: default
region: {{ properties["region"] }}
backendService: $(ref.{{ env["name"] }}-port389-backend-service.selfLink)
It errors when run, with the following output:
Waiting for create operation-1478651694403-540d36cfdcdb9-cba25532-08697daf...failed.
ERROR: (gcloud.beta.deployment-manager.deployments.create) Error in Operation operation-1478651694403-540d36cfdcdb9-cba25532-08697daf:
errors:
- code: RESOURCE_ERROR
location: /deployments/forgerock/resources/forgerock-frontend-port389-backend-service-us-central1
message: 'Unexpected response from resource of type compute.beta.backendService:
400 {"code":400,"errors":[{"domain":"global","message":"Invalid value for field
''resource.loadBalancingScheme'': ''INTERNAL''. Load balancing scheme must be
external for a global backend service.","reason":"invalid"}],"message":"Invalid
value for field ''resource.loadBalancingScheme'': ''INTERNAL''. Load balancing
scheme must be external for a global backend service.","statusMessage":"Bad Request","requestPath":"https://www.googleapis.com/compute/beta/projects/carbide-tenure-557/global/backendServices"}'
It would appear to be creating using the https://www.googleapis.com/compute/beta/projects/carbide-tenure-557/global/backendServices instead of https://www.googleapis.com/compute/beta/projects/carbide-tenure-557/backendServices
I know this is beta functionality, but trying to develop this solution using GDM instead of a mixture of gcloud commands and GDM
For the backend service type, instead of type: compute.beta.backendService use:
type: compute.v1.regionBackendService
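A sketch of the backend service from the question with that type swapped in (all other properties kept as in the original config; the v1 regional type accepts loadBalancingScheme: INTERNAL together with a region):

- name: {{ env["name"] }}-port389-backend-service
  type: compute.v1.regionBackendService
  properties:
    region: {{ properties['region'] }}
    loadBalancingScheme: INTERNAL
    protocol: TCP
    healthChecks:
    - $(ref.{{ env["name"] }}-port389-healthcheck.selfLink)
    backends:
    - group: $(ref.{{ env['name'] }}-master-instance-groups-managed.instanceGroup)
    - group: $(ref.{{ env['name'] }}-slave-instance-groups-managed.instanceGroup)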