Cannot connect to Google Cloud SQL from Kubernetes Engine - google-cloud-platform

I have one Google Cloud SQL Second Generation instance and one Google Kubernetes Engine cluster. The problem is that I cannot connect to Cloud SQL using its private IP. I have enabled Private IP in the Cloud SQL dashboard and assigned it to my VPC network. However, the container still can't connect.
Is it maybe related to peering routes? Do I need to create one?
P.S. I followed this guide: https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine
Result of gcloud container clusters describe:
$ gcloud container clusters describe sirodoht-32-fec8e2914780bf2c
addonsConfig:
kubernetesDashboard:
disabled: true
networkPolicyConfig:
disabled: true
clusterIpv4Cidr: 10.40.0.0/14
createTime: '2019-05-12T17:07:17+00:00'
currentMasterVersion: 1.12.7-gke.10
currentNodeCount: 6
currentNodeVersion: 1.11.8-gke.6
defaultMaxPodsConstraint:
maxPodsPerNode: '110'
endpoint: <retracted ip>
initialClusterVersion: 1.11.8-gke.6
initialNodeCount: 1
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/sirodoht-32/zones/europe-west3-a/instanceGroupManagers/gke-sirodoht-32-fep8e29-pool-94e97802-grp
ipAllocationPolicy:
clusterIpv4Cidr: 10.40.0.0/14
clusterIpv4CidrBlock: 10.40.0.0/14
clusterSecondaryRangeName: gke-sirodoht-32-fec8e2914780bf2c-pods-4439d109
servicesIpv4Cidr: 10.170.0.0/20
servicesIpv4CidrBlock: 10.170.0.0/20
servicesSecondaryRangeName: gke-sirodoht-32-fec8e2914780bf2c-services-4439d109
useIpAliases: true
labelFingerprint: a9dc16a7
legacyAbac: {}
location: europe-west3-a
locations:
- europe-west3-a
loggingService: logging.googleapis.com
maintenancePolicy:
window:
dailyMaintenanceWindow:
duration: PT4H0M0S
startTime: 00:00
masterAuth:
clientCertificate: <retracted>
clientKey: <retracted>
clusterCaCertificate: <retracted>
monitoringService: monitoring.googleapis.com
name: sirodoht-32-fec8e2914780bf2c
network: compute-network-aaa8ff1ec6b52012
networkConfig:
network: projects/sirodoht-32/global/networks/compute-network-aaa8ff1ec6b52012
subnetwork: projects/sirodoht-32/regions/europe-west3/subnetworks/subnet-bb2c9eb79b29a825
nodeConfig:
diskSizeGb: 100
diskType: pd-standard
imageType: COS
machineType: n1-standard-4
oauthScopes:
- https://www.googleapis.com/auth/monitoring
- https://www.googleapis.com/auth/devstorage.read_only
- https://www.googleapis.com/auth/logging.write
- https://www.googleapis.com/auth/service.management.readonly
- https://www.googleapis.com/auth/servicecontrol
- https://www.googleapis.com/auth/trace.append
serviceAccount: default
nodePools:
- config:
diskSizeGb: 100
diskType: pd-standard
imageType: COS
machineType: n1-standard-4
oauthScopes:
- https://www.googleapis.com/auth/monitoring
- https://www.googleapis.com/auth/devstorage.read_only
- https://www.googleapis.com/auth/logging.write
- https://www.googleapis.com/auth/service.management.readonly
- https://www.googleapis.com/auth/servicecontrol
- https://www.googleapis.com/auth/trace.append
serviceAccount: default
initialNodeCount: 6
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/sirodoht-32/zones/europe-west3-a/instanceGroupManagers/gke-sirodoht-32-fep8e29-pool-94e97802-grp
management: {}
maxPodsConstraint:
maxPodsPerNode: '110'
name: pool
podIpv4CidrSize: 24
selfLink: https://container.googleapis.com/v1/projects/sirodoht-32/zones/europe-west3-a/clusters/sirodoht-32-fec8e2914780bf2c/nodePools/pool
status: RUNNING
version: 1.11.8-gke.6
selfLink: https://container.googleapis.com/v1/projects/sirodoht-32/zones/europe-west3-a/clusters/sirodoht-32-fec8e2914780bf2c
servicesIpv4Cidr: 10.170.0.0/20
status: RUNNING
subnetwork: subnet-bb2c9eb79b29a825
zone: europe-west3-a
* - There is an upgrade available for your cluster(s).
To upgrade nodes to the latest available version, run
$ gcloud container clusters upgrade sirodoht-32-fec8e2914780bf2c

Private IPs are only accessible from other resources in the same Virtual Private Cloud (VPC) network. Follow these instructions to set up a GKE cluster on the same VPC as your Cloud SQL instance.
For more information on the environment requirements for using private IP with Cloud SQL, please see this page.
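As a rough sketch, assuming the Cloud SQL private IP was allocated in a VPC network called my-vpc (the cluster, network, and subnet names here are placeholders, not values from the question), creating a VPC-native GKE cluster on that same network looks roughly like this:

# Create the GKE cluster on the same VPC network and subnet as the Cloud SQL private IP.
# "my-cluster", "my-vpc" and "my-subnet" are placeholders.
gcloud container clusters create my-cluster \
    --zone europe-west3-a \
    --network my-vpc \
    --subnetwork my-subnet \
    --enable-ip-alias

Pods in that cluster can then reach the Cloud SQL instance's private IP directly (for MySQL, on port 3306).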

Related

How to connect from a public GKE pod to a GCP Cloud SQL using a private connection

I have a Java application running in a Docker container. I am deploying all this to my GKE cluster. I'd like to have it connect to a Cloud SQL instance via private IP. However, I have been struggling for two days now to get it working. I followed this guide:
https://cloud.google.com/sql/docs/mysql/configure-private-services-access#gcloud_1
I managed to create a private service connection and also gave my Cloud SQL instance the allocated address range. As far as I understood, this should be sufficient for the pod to be able to connect to the Cloud SQL instance.
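Concretely, the private service connection described above corresponds to something like the following commands from that guide (a sketch reconstructed from the gcloud output shown further down, not copied from the original post):

# Reserve an address range for Google-managed services
# (matches the google-managed-services-default 10.77.0.0/16 range listed below).
gcloud compute addresses create google-managed-services-default \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=16 \
    --network=default

# Peer the VPC with the Service Networking API using that reserved range.
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-default \
    --network=default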
However, it just does not work. I pass the private IP of the Cloud SQL instance as the host for the Java application's JDBC (database) connection.
2022-02-14 22:03:31.299 WARN 1 --- [ main] o.h.e.j.e.i.JdbcEnvironmentInitiator : HHH000342: Could not obtain connection to query metadata
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
Here are some details to the problem.
The address allocation
➜ google-cloud-sdk gcloud compute addresses list
NAME ADDRESS/RANGE TYPE PURPOSE NETWORK REGION SUBNET STATUS
google-managed-services-default 10.77.0.0/16 INTERNAL VPC_PEERING default RESERVED
The vpc peering connection
➜ google-cloud-sdk gcloud services vpc-peerings list --network=default
---
network: projects/1071923183712/global/networks/default
peering: servicenetworking-googleapis-com
reservedPeeringRanges:
- google-managed-services-default
service: services/servicenetworking.googleapis.com
Here is my Cloud SQL info. Please note that the private IP address is 10.77.0.5 and therefore falls within the 10.77.0.0/16 range allocated above, so I guess this part is working.
➜ google-cloud-sdk gcloud sql instances describe alpha-3
backendType: SECOND_GEN
connectionName: barbarus:europe-west4:alpha-3
createTime: '2022-02-14T19:28:02.465Z'
databaseInstalledVersion: MYSQL_5_7_36
databaseVersion: MYSQL_5_7
etag: 758de240b161b946689e5732d8e71d396c772c0e03904c46af3b61f59b1038a0
gceZone: europe-west4-a
instanceType: CLOUD_SQL_INSTANCE
ipAddresses:
- ipAddress: 34.90.174.243
type: PRIMARY
- ipAddress: 10.77.0.5
type: PRIVATE
kind: sql#instance
name: alpha-3
project: barbarus
region: europe-west4
selfLink: https://sqladmin.googleapis.com/sql/v1beta4/projects/barbarus/instances/alpha-3
serverCaCert:
cert: |-
-----BEGIN CERTIFICATE-----
//...
-----END CERTIFICATE-----
certSerialNumber: '0'
commonName: C=US,O=Google\, Inc,CN=Google Cloud SQL Server CA,dnQualifier=d495898b-f6c7-4e2f-9c59-c02ccf2c1395
createTime: '2022-02-14T19:29:35.325Z'
expirationTime: '2032-02-12T19:30:35.325Z'
instance: alpha-3
kind: sql#sslCert
sha1Fingerprint: 3ee799b139bf335ef39554b07a5027c9319087cb
serviceAccountEmailAddress: p1071923183712-d99fsz@gcp-sa-cloud-sql.iam.gserviceaccount.com
settings:
activationPolicy: ALWAYS
availabilityType: ZONAL
backupConfiguration:
backupRetentionSettings:
retainedBackups: 7
retentionUnit: COUNT
binaryLogEnabled: true
enabled: true
kind: sql#backupConfiguration
location: us
startTime: 12:00
transactionLogRetentionDays: 7
dataDiskSizeGb: '10'
dataDiskType: PD_HDD
ipConfiguration:
allocatedIpRange: google-managed-services-default
ipv4Enabled: true
privateNetwork: projects/barbarus/global/networks/default
requireSsl: false
kind: sql#settings
locationPreference:
kind: sql#locationPreference
zone: europe-west4-a
pricingPlan: PER_USE
replicationType: SYNCHRONOUS
settingsVersion: '1'
storageAutoResize: true
storageAutoResizeLimit: '0'
tier: db-f1-micro
state: RUNNABLE
The problem I see is with the pod's IP address. It is 10.0.5.3, which is not in the 10.77.0.0/16 range, and therefore the pod can't see the Cloud SQL instance.
Here is the pod's info:
Name: game-server-5b9dd47cbd-vt2gw
Namespace: default
Priority: 0
Node: gke-barbarus-node-pool-1a5ea7d5-bg3m/10.164.15.216
Start Time: Tue, 15 Feb 2022 00:33:56 +0100
Labels: app=game-server
app.kubernetes.io/managed-by=gcp-cloud-build-deploy
pod-template-hash=5b9dd47cbd
Annotations: <none>
Status: Running
IP: 10.0.5.3
IPs:
IP: 10.0.5.3
Controlled By: ReplicaSet/game-server-5b9dd47cbd
Containers:
game-server:
Container ID: containerd://57d9540b1e5f5cb3fcc4517fa42377282943d292ba810c83cd7eb50bd4f1e3dd
Image: eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141
Image ID: eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141
Port: <none>
Host Port: <none>
State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 15 Feb 2022 00:36:48 +0100
Finished: Tue, 15 Feb 2022 00:38:01 +0100
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 15 Feb 2022 00:35:23 +0100
Finished: Tue, 15 Feb 2022 00:36:35 +0100
Ready: False
Restart Count: 2
Environment:
SQL_CONNECTION: <set to the key 'SQL_CONNECTION' of config map 'game-server'> Optional: false
SQL_USER: <set to the key 'SQL_USER' of config map 'game-server'> Optional: false
SQL_DATABASE: <set to the key 'SQL_DATABASE' of config map 'game-server'> Optional: false
SQL_PASSWORD: <set to the key 'SQL_PASSWORD' of config map 'game-server'> Optional: false
LOG_LEVEL: <set to the key 'LOG_LEVEL' of config map 'game-server'> Optional: false
WORLD_ID: <set to the key 'WORLD_ID' of config map 'game-server'> Optional: false
WORLD_SIZE: <set to the key 'WORLD_SIZE' of config map 'game-server'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sknlk (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-sknlk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m7s default-scheduler Successfully assigned default/game-server-5b9dd47cbd-vt2gw to gke-barbarus-node-pool-1a5ea7d5-bg3m
Normal Pulling 4m6s kubelet Pulling image "eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141"
Normal Pulled 3m55s kubelet Successfully pulled image "eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141" in 11.09487284s
Normal Created 75s (x3 over 3m54s) kubelet Created container game-server
Normal Started 75s (x3 over 3m54s) kubelet Started container game-server
Normal Pulled 75s (x2 over 2m41s) kubelet Container image "eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141" already present on machine
Warning BackOff 1s (x2 over 87s) kubelet Back-off restarting failed container
Finally, this is what gcloud container clusters describe gives me:
➜ google-cloud-sdk gcloud container clusters describe --region=europe-west4 barbarus
addonsConfig:
gcePersistentDiskCsiDriverConfig:
enabled: true
kubernetesDashboard:
disabled: true
networkPolicyConfig:
disabled: true
autopilot: {}
autoscaling:
autoscalingProfile: BALANCED
binaryAuthorization: {}
clusterIpv4Cidr: 10.0.0.0/14
createTime: '2022-02-14T19:34:03+00:00'
currentMasterVersion: 1.21.6-gke.1500
currentNodeCount: 3
currentNodeVersion: 1.21.6-gke.1500
databaseEncryption:
state: DECRYPTED
endpoint: 34.141.141.150
id: 39e7249b48c24d23a8b70b0c11cd18901565336b397147dab4778dc75dfc34e2
initialClusterVersion: 1.21.6-gke.1500
initialNodeCount: 1
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-a/instanceGroupManagers/gke-barbarus-node-pool-e291e3d6-grp
- https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-b/instanceGroupManagers/gke-barbarus-node-pool-5aa35c39-grp
- https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-c/instanceGroupManagers/gke-barbarus-node-pool-380645b7-grp
ipAllocationPolicy:
useRoutes: true
labelFingerprint: a9dc16a7
legacyAbac: {}
location: europe-west4
locations:
- europe-west4-a
- europe-west4-b
- europe-west4-c
loggingConfig:
componentConfig:
enableComponents:
- SYSTEM_COMPONENTS
- WORKLOADS
loggingService: logging.googleapis.com/kubernetes
maintenancePolicy:
resourceVersion: e3b0c442
masterAuth:
clusterCaCertificate: // ...
masterAuthorizedNetworksConfig: {}
monitoringConfig:
componentConfig:
enableComponents:
- SYSTEM_COMPONENTS
monitoringService: monitoring.googleapis.com/kubernetes
name: barbarus
network: default
networkConfig:
defaultSnatStatus: {}
network: projects/barbarus/global/networks/default
serviceExternalIpsConfig: {}
subnetwork: projects/barbarus/regions/europe-west4/subnetworks/default
nodeConfig:
diskSizeGb: 100
diskType: pd-standard
imageType: COS_CONTAINERD
machineType: e2-medium
metadata:
disable-legacy-endpoints: 'true'
oauthScopes:
- https://www.googleapis.com/auth/cloud-platform
preemptible: true
serviceAccount: default@barbarus.iam.gserviceaccount.com
shieldedInstanceConfig:
enableIntegrityMonitoring: true
nodeIpv4CidrSize: 24
nodePoolDefaults:
nodeConfigDefaults: {}
nodePools:
- config:
diskSizeGb: 100
diskType: pd-standard
imageType: COS_CONTAINERD
machineType: e2-medium
metadata:
disable-legacy-endpoints: 'true'
oauthScopes:
- https://www.googleapis.com/auth/cloud-platform
preemptible: true
serviceAccount: default@barbarus.iam.gserviceaccount.com
shieldedInstanceConfig:
enableIntegrityMonitoring: true
initialNodeCount: 1
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-a/instanceGroupManagers/gke-barbarus-node-pool-e291e3d6-grp
- https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-b/instanceGroupManagers/gke-barbarus-node-pool-5aa35c39-grp
- https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-c/instanceGroupManagers/gke-barbarus-node-pool-380645b7-grp
locations:
- europe-west4-a
- europe-west4-b
- europe-west4-c
management:
autoRepair: true
autoUpgrade: true
name: node-pool
podIpv4CidrSize: 24
selfLink: https://container.googleapis.com/v1/projects/barbarus/locations/europe-west4/clusters/barbarus/nodePools/node-pool
status: RUNNING
upgradeSettings:
maxSurge: 1
version: 1.21.6-gke.1500
notificationConfig:
pubsub: {}
releaseChannel:
channel: REGULAR
selfLink: https://container.googleapis.com/v1/projects/barbarus/locations/europe-west4/clusters/barbarus
servicesIpv4Cidr: 10.3.240.0/20
shieldedNodes:
enabled: true
status: RUNNING
subnetwork: default
zone: europe-west4
I have no idea how I can give the pod a reference to the address allocation I made for the private service connection.
What I tried was to spin up a GKE cluster with a cluster default pod address range of 10.77.0.0/16, which sounded logical since I want the pods to appear in the same address range as the Cloud SQL instance. However, GCP gives me an error when I try to do that:
(1) insufficient regional quota to satisfy request: resource "CPUS": request requires '9.0' and is short '1.0'. project has a quota of '8.0' with '8.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=hait-barbarus (2) insufficient regional quota to satisfy request: resource "IN_USE_ADDRESSES": request requires '9.0' and is short '5.0'. project has a quota of '4.0' with '4.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=hait-barbarus.
So I am not able to give the pods the proper address range for the private service connection; how can they ever discover the Cloud SQL instance?
EDIT #1: The GKE cluster's service account has the SQL Client role.

Hostname not resolving for CE in same network

I'm doing a deployment of 4 Compute Engine (CE) instances in 2 different zones (the bastion in europe-west1-c and the other ones in europe-west2-c). I can SSH from cassandra-node-1 to cassandra-node-2 using just the hostname:
pedro_gordo_gmail_com@cassandra-node-1:~$ ssh cassandra-node-2
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-1049-gcp x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
0 packages can be updated.
0 updates are security updates.
New release '18.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Sun Dec 1 13:48:17 2019 from 10.154.0.14
groups: cannot find name for group ID 926993188
But I can't do the same from the bastion CE:
pedro_gordo_gmail_com@bastion:~$ ssh cassandra-node-1
ssh: Could not resolve hostname cassandra-node-1: Name or service not known
But I can ssh using the internal/external IP:
pedro_gordo_gmail_com@bastion:~$ ssh 10.154.0.14
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-1049-gcp x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
* Overheard at KubeCon: "microk8s.status just blew my mind".
https://microk8s.io/docs/commands#microk8s.status
0 packages can be updated.
0 updates are security updates.
New release '18.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Sun Dec 1 13:48:10 2019 from 173.194.92.32
groups: cannot find name for group ID 926993188
According to this GCP documentation, if I choose a custom name for my CE instances, then I need to edit the DNS. On the other hand, if I don't provide a name: in my deployment-manager config, I get the following error when I try to deploy:
gcloud deployment-manager deployments create cluster --config create-vms.yaml
ERROR: (gcloud.deployment-manager.deployments.create) ResponseError: code=412, message=Missing resource name in resource "type: compute.v1.instance
This is my deployment-manager configuration. How can I change it so that I can SSH from the bastion to cassandra-node-1/2/3 using just the hostname?
# Copyright 2016 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Import all templates used in deployment
# Declare all resources. In this case, one highly available service
# as defined in the ha-service.py template.
resources:
- type: compute.v1.instance
name: bastion
properties:
zone: europe-west1-c
machineType: https://www.googleapis.com/compute/v1/projects/affable-seat-213016/zones/europe-west1-c/machineTypes/n1-standard-1
disks:
- deviceName: boot
boot: true
autoDelete: true
initializeParams:
sourceImage: https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1604-xenial-v20190514
networkInterfaces:
- accessConfigs:
- name: External NAT
type: ONE_TO_ONE_NAT
metadata:
items:
- key: startup-script
value: |
#!/bin/bash
sudo apt-add-repository -y ppa:ansible/ansible
sudo apt-get update
sudo apt-get install -y ansible
- type: compute.v1.instance
name: cassandra-node-1
properties:
zone: europe-west2-c
machineType: https://www.googleapis.com/compute/v1/projects/affable-seat-213016/zones/europe-west2-c/machineTypes/n1-standard-1
disks:
- deviceName: boot
boot: true
autoDelete: true
initializeParams:
sourceImage: https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1604-xenial-v20190514
- deviceName: data
boot: false
autoDelete: true
initializeParams:
diskSizeGb: 1
diskType: zones/europe-west2-c/diskTypes/pd-ssd
networkInterfaces:
- accessConfigs:
- name: External NAT
type: ONE_TO_ONE_NAT
- type: compute.v1.instance
name: cassandra-node-2
properties:
zone: europe-west2-c
machineType: projects/affable-seat-213016/zones/europe-west2-c/machineTypes/n1-standard-1
disks:
- deviceName: boot
boot: true
autoDelete: true
initializeParams:
sourceImage: https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1604-xenial-v20190514
- deviceName: data
boot: false
autoDelete: true
initializeParams:
diskSizeGb: 1
diskType: zones/europe-west2-c/diskTypes/pd-ssd
networkInterfaces:
- accessConfigs:
- name: External NAT
type: ONE_TO_ONE_NAT
- type: compute.v1.instance
name: cassandra-node-3
properties:
zone: europe-west2-c
machineType: https://www.googleapis.com/compute/v1/projects/affable-seat-213016/zones/europe-west2-c/machineTypes/n1-standard-1
disks:
- deviceName: boot
boot: true
autoDelete: true
initializeParams:
sourceImage: https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1604-xenial-v20190514
- deviceName: data
boot: false
autoDelete: true
initializeParams:
diskSizeGb: 1
diskType: zones/europe-west2-c/diskTypes/pd-ssd
networkInterfaces:
- accessConfigs:
- name: External NAT
type: ONE_TO_ONE_NAT
You have two solutions:
1. Use Google Cloud DNS and set up a private zone to resolve hostnames for your VPC.
2. Use the Compute Engine internal DNS name.
However, for method #2, I do not remember whether internal names resolve across zones when Compute Engine internal DNS is used for name resolution. Method #1 will always work, provided that DNS is set up correctly.
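For illustration, here is a minimal sketch of both methods, assuming the default network and the project ID that appears in the machineType URLs above (affable-seat-213016); the zone-qualified name is the zonal internal DNS format, and global internal DNS would omit the zone:

# Method #2: address the node by its Compute Engine internal DNS name.
ssh cassandra-node-1.europe-west2-c.c.affable-seat-213016.internal

# Method #1: create a Cloud DNS private zone visible to the VPC,
# then add A records for each node (record creation omitted here).
gcloud dns managed-zones create cassandra-internal \
    --dns-name="cassandra.internal." \
    --visibility=private \
    --networks=default \
    --description="Private zone for the Cassandra nodes"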

How to add load balancer to aws ecs service with ansible

I want to add a load balancer to an ECS service module with Ansible. Therefore, I am using the following code:
- name: create ECS service on VPC network
ecs_service:
state: present
name: console-test-service
cluster: new_cluster
desired_count: 0
network_configuration:
subnets:
- subnet-abcd1234
security_groups:
- sg-aaaa1111
- my_security_group
Now, I want to add a load balancer with the load_balancers parameter. However, it requires a list of load balancers. How can I add a list with the names of the load balancers that I want to define?
For example:
load_balancers:
- name_of_my_load_balancer
returns the following error:
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid type for parameter loadBalancers[0], value: name_of_my_load_balancer, type: , valid types:
It needs a dictionary that includes the target group ARN, the container name, and the container port:
- name: create ECS service on VPC network
ecs_service:
state: present
name: console-test-service
cluster: new_cluster
desired_count: 0
load_balancers:
- targetGroupArn: arn:aws:elasticloadbalancing:eu-west-1:453157221:targetgroup/tg/16331647320e8a42
containerName: laravel
containerPort: 80
network_configuration:
subnets:
- subnet-abcd1234
security_groups:
- sg-aaaa1111
- my_security_group
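If the target group ARN is not at hand, it can be looked up with the AWS CLI (a sketch; the target group name tg is taken from the ARN in the example above):

# Look up the ARN of an existing target group by name.
aws elbv2 describe-target-groups \
    --names tg \
    --query 'TargetGroups[0].TargetGroupArn' \
    --output text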

Deploy node-pool in different subnetwork in same yaml file

I am creating a YAML config to deploy a GKE cluster with multiple node pools. I would like to be able to create a new cluster and put each node pool in a different subnetwork. Can this be done?
I have tried putting the subnetwork in different parts of the properties under the second node pool, but it errors out. Below is the error:
message: '{"ResourceType":"gcp-types/container-v1:projects.locations.clusters.nodePools","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid
JSON payload received. Unknown name \"subnetwork\": Cannot find field.","status":"INVALID_ARGUMENT","details":[{"#type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"description":"Invalid
JSON payload received. Unknown name \"subnetwork\": Cannot find field."}]}],"statusMessage":"Bad
Here is the current code for both node pools. The first node pool is created, but the second one errors out.
resources:
- name: myclus
type: gcp-types/container-v1:projects.locations.clusters
properties:
parent: projects/[PROJECT_ID]/locations/[ZONE/REGION]
cluster:
name: my-clus
zone: us-east4
subnetwork: dev-web ### leave this field blank if using the default network
initialClusterVersion: "1.13"
nodePools:
- name: my-clus-pool1
initialNodeCount: 1
config:
machineType: n1-standard-1
imageType: cos
oauthScopes:
- https://www.googleapis.com/auth/cloud-platform
preemptible: true
- name: my-clus
type: gcp-types/container-v1:projects.locations.clusters.nodePools
properties:
parent: projects/[PROJECT_ID]/locations/[ZONE/REGION]/clusters/$(ref.myclus.name)
subnetwork: dev-web ### leave this field blank if using the default
nodePool:
name: my-clus-pool2
initialNodeCount: 1
version: "1.13"
config:
machineType: n1-standard-1
imageType: cos
oauthScopes:
- https://www.googleapis.com/auth/cloud-platform
preemptible: true
The expected outcome I would like is to have 2 node pools in 2 different subnetworks.
I found out that this is actually not a limitation of Deployment Manager but a limitation of GKE.
We can't assign a different subnet to different node pools; the network and subnets are defined at the cluster level. There is no "subnetwork" field in the node pool API.
Here is a link you can refer to for more information.
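Concretely, that means the second resource in the config above can only be deployed once the unsupported field is removed; the node pool then simply inherits the subnetwork set on the cluster. A minimal sketch based on the question's own snippet:

resources:
- name: my-clus-pool2
  type: gcp-types/container-v1:projects.locations.clusters.nodePools
  properties:
    parent: projects/[PROJECT_ID]/locations/[ZONE/REGION]/clusters/$(ref.myclus.name)
    # No "subnetwork" property here: node pools inherit the cluster-level subnetwork.
    nodePool:
      name: my-clus-pool2
      initialNodeCount: 1
      version: "1.13"
      config:
        machineType: n1-standard-1
        imageType: cos
        oauthScopes:
        - https://www.googleapis.com/auth/cloud-platform
        preemptible: true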

K8S Unable to mount AWS EBS as a persistent volume for pod

Question
Please suggest the cause of the error where an AWS EBS volume cannot be mounted in a pod.
journalctl -b -f -u kubelet
1480 kubelet.go:1625] Unable to mount volumes for pod "nginx_default(ddc938ee-edda-11e7-ae06-06bb783bb15c)": timeout expired waiting for volumes to attach/mount for pod "default"/"nginx". list of unattached/unmounted volumes=[ebs]; skipping pod
1480 pod_workers.go:186] Error syncing pod ddc938ee-edda-11e7-ae06-06bb783bb15c ("nginx_default(ddc938ee-edda-11e7-ae06-06bb783bb15c)"), skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"nginx". list of unattached/unmounted volumes=[ebs]
1480 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "pv-ebs" (UniqueName: "kubernetes.io/aws-ebs/vol-0d275986ce24f4304") pod "nginx" (UID: "ddc938ee-edda-11e7-ae06-06bb783bb15c")
1480 nestedpendingoperations.go:263] Operation for "\"kubernetes.io/aws-ebs/vol-0d275986ce24f4304\"" failed. No retries permitted until 2017-12-31 03:34:03.644604131 +0000 UTC m=+6842.543441523 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pv-ebs\" (UniqueName: \"kubernetes.io/aws-ebs/vol-0d275986ce24f4304\") pod \"nginx\" (UID: \"ddc938ee-edda-11e7-ae06-06bb783bb15c\") "
Steps
Deployed K8s 1.9 using kubeadm in AWS (region us-west-1, AZ us-west-1b); without the EBS volume mount, pods work.
Configured an IAM role as per Kubernetes - Cloud Providers and kubelets failing to start when using 'aws' as cloud provider.
Assigned the IAM role to the EC2 instances as per Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console.
Deployed the PV/PVC/Pod as in the manifest below.
The status from kubectl:
kubectl get
NAME READY STATUS RESTARTS AGE IP NODE
nginx 0/1 ContainerCreating 0 29m <none> ip-172-31-1-43.us-west-1.compute.internal
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/pv-ebs 5Gi RWO Recycle Bound default/pvc-ebs 33m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/pvc-ebs Bound pv-ebs 5Gi RWO 33m
kubectl describe pod nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 27m default-scheduler Successfully assigned nginx to ip-172-31-1-43.us-west-1.compute.internal
Normal SuccessfulMountVolume 27m kubelet, ip-172-31-1-43.us-west-1.compute.internal MountVolume.SetUp succeeded for volume "default-token-dt698"
Warning FailedMount 6s (x12 over 25m) kubelet, ip-172-31-1-43.us-west-1.compute.internal Unable to mount volumes for pod "nginx_default(ddc938ee-edda-11e7-ae06-06bb783bb15c)": timeout expired waiting for volumes to attach/mount for pod "default"/"nginx".
Manifest
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: pv-ebs
labels:
type: amazonEBS
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
volumeID: vol-0d275986ce24f4304
fsType: ext4
persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc-ebs
labels:
type: amazonEBS
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
kind: Pod
apiVersion: v1
metadata:
name: nginx
spec:
containers:
- name: myfrontend
image: nginx
volumeMounts:
- mountPath: "/var/www/html"
name: ebs
volumes:
- name: ebs
persistentVolumeClaim:
claimName: pvc-ebs
IAM Policy
Environment
$ kubectl version -o json
{
"clientVersion": {
"major": "1",
"minor": "9",
"gitVersion": "v1.9.0",
"gitCommit": "925c127ec6b946659ad0fd596fa959be43f0cc05",
"gitTreeState": "clean",
"buildDate": "2017-12-15T21:07:38Z",
"goVersion": "go1.9.2",
"compiler": "gc",
"platform": "linux/amd64"
},
"serverVersion": {
"major": "1",
"minor": "9",
"gitVersion": "v1.9.0",
"gitCommit": "925c127ec6b946659ad0fd596fa959be43f0cc05",
"gitTreeState": "clean",
"buildDate": "2017-12-15T20:55:30Z",
"goVersion": "go1.9.2",
"compiler": "gc",
"platform": "linux/amd64"
}
}
$ cat /etc/centos-release
CentOS Linux release 7.4.1708 (Core)
EC2
EBS
Solution
Found the documentation that shows how to configure the AWS cloud provider:
K8S AWS Cloud Provider Notes
Steps
Tag the EC2 instances and security groups with KubernetesCluster=${kubernetes cluster name}. If the cluster was created with kubeadm, the name is kubernetes, as in Ability to configure user and cluster name in AdminKubeConfigFile.
Run kubeadm init --config kubeadm.yaml.
kubeadm.yaml (Ansible template)
kind: MasterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha1
api:
advertiseAddress: {{ K8S_ADVERTISE_ADDRESS }}
networking:
podSubnet: {{ K8S_SERVICE_ADDRESSES }}
cloudProvider: {{ K8S_CLOUD_PROVIDER }}
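As a supplement to the steps above, here is a rough sketch of the tagging and of the node-side kubelet flag; the instance ID is a placeholder, and the systemd drop-in path is an assumption for a kubeadm 1.9 install whose 10-kubeadm.conf honors KUBELET_EXTRA_ARGS:

# Tag the cluster's EC2 instances (and security groups) so the AWS cloud
# provider can discover them; i-0123456789abcdef0 is a placeholder.
aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags Key=KubernetesCluster,Value=kubernetes

# On each node, the kubelet itself also needs --cloud-provider=aws.
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/20-cloud-provider.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws"
EOF
sudo systemctl daemon-reload && sudo systemctl restart kubelet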
Result
$ journalctl -b -f CONTAINER_ID=$(docker ps | grep k8s_kube-controller-manager | awk '{ print $1 }')
Jan 02 04:48:28 ip-172-31-4-117.us-west-1.compute.internal dockerd-current[8063]: I0102 04:48:28.752141
1 reconciler.go:287] attacherDetacher.AttachVolume started for volume "kuard-pv" (UniqueName: "kubernetes.io/aws-ebs/vol-0d275986ce24f4304") from node "ip-172-3
Jan 02 04:48:39 ip-172-31-4-117.us-west-1.compute.internal dockerd-current[8063]: I0102 04:48:39.309178
1 operation_generator.go:308] AttachVolume.Attach succeeded for volume "kuard-pv" (UniqueName: "kubernetes.io/aws-ebs/vol-0d275986ce24f4304") from node "ip-172-
$ kubectl describe pod kuard
...
Volumes:
kuard-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: kuard-pvc
ReadOnly: false
$ kubectl describe pv kuard-pv
Name: kuard-pv
Labels: failure-domain.beta.kubernetes.io/region=us-west-1
failure-domain.beta.kubernetes.io/zone=us-west-1b
type=amazonEBS
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"amazonEBS"},"name":"kuard-pv","namespace":""},"spec":{"acce...
pv.kubernetes.io/bound-by-controller=yes
StorageClass:
Status: Bound
Claim: default/kuard-pvc
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 5Gi
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-0d275986ce24f4304
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
$ kubectl version -o json
{
"clientVersion": {
"major": "1",
"minor": "9",
"gitVersion": "v1.9.0",
"gitCommit": "925c127ec6b946659ad0fd596fa959be43f0cc05",
"gitTreeState": "clean",
"buildDate": "2017-12-15T21:07:38Z",
"goVersion": "go1.9.2",
"compiler": "gc",
"platform": "linux/amd64"
},
"serverVersion": {
"major": "1",
"minor": "9",
"gitVersion": "v1.9.0",
"gitCommit": "925c127ec6b946659ad0fd596fa959be43f0cc05",
"gitTreeState": "clean",
"buildDate": "2017-12-15T20:55:30Z",
"goVersion": "go1.9.2",
"compiler": "gc",
"platform": "linux/amd64"
}
}