Hostname not resolving for CE in same network - google-cloud-platform

I'm deploying 4 Compute Engine (CE) instances across 2 different zones (the bastion in europe-west1-c and the others in europe-west2-c). I can ssh from cassandra-node-1 to cassandra-node-2 using just the hostname:
pedro_gordo_gmail_com@cassandra-node-1:~$ ssh cassandra-node-2
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-1049-gcp x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
0 packages can be updated.
0 updates are security updates.
New release '18.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Sun Dec 1 13:48:17 2019 from 10.154.0.14
groups: cannot find name for group ID 926993188
But I can't do the same from the bastion CE:
pedro_gordo_gmail_com@bastion:~$ ssh cassandra-node-1
ssh: Could not resolve hostname cassandra-node-1: Name or service not known
But I can ssh using the internal/external IP:
pedro_gordo_gmail_com@bastion:~$ ssh 10.154.0.14
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-1049-gcp x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
* Overheard at KubeCon: "microk8s.status just blew my mind".
https://microk8s.io/docs/commands#microk8s.status
0 packages can be updated.
0 updates are security updates.
New release '18.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Sun Dec 1 13:48:10 2019 from 173.194.92.32
groups: cannot find name for group ID 926993188
According to this GCP documentation, if I choose a custom name for my CE instances, then I need to edit the DNS configuration. On the other hand, if I don't provide a name: field in my deployment-manager config, I get the following error when I try to deploy:
gcloud deployment-manager deployments create cluster --config create-vms.yaml
ERROR: (gcloud.deployment-manager.deployments.create) ResponseError: code=412, message=Missing resource name in resource "type: compute.v1.instance
This is my deployment-manager configuration. How can I change it so that I can ssh from the bastion to cassandra-node-1/2/3 using just the hostname?
# Copyright 2016 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Import all templates used in deployment
# Declare all resources. In this case, one highly available service
# as defined in the ha-service.py template.
resources:
- type: compute.v1.instance
  name: bastion
  properties:
    zone: europe-west1-c
    machineType: https://www.googleapis.com/compute/v1/projects/affable-seat-213016/zones/europe-west1-c/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1604-xenial-v20190514
    networkInterfaces:
    - accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
    metadata:
      items:
      - key: startup-script
        value: |
          #!/bin/bash
          sudo apt-add-repository -y ppa:ansible/ansible
          sudo apt-get update
          sudo apt-get install -y ansible
- type: compute.v1.instance
  name: cassandra-node-1
  properties:
    zone: europe-west2-c
    machineType: https://www.googleapis.com/compute/v1/projects/affable-seat-213016/zones/europe-west2-c/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1604-xenial-v20190514
    - deviceName: data
      boot: false
      autoDelete: true
      initializeParams:
        diskSizeGb: 1
        diskType: zones/europe-west2-c/diskTypes/pd-ssd
    networkInterfaces:
    - accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
- type: compute.v1.instance
  name: cassandra-node-2
  properties:
    zone: europe-west2-c
    machineType: projects/affable-seat-213016/zones/europe-west2-c/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1604-xenial-v20190514
    - deviceName: data
      boot: false
      autoDelete: true
      initializeParams:
        diskSizeGb: 1
        diskType: zones/europe-west2-c/diskTypes/pd-ssd
    networkInterfaces:
    - accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
- type: compute.v1.instance
  name: cassandra-node-3
  properties:
    zone: europe-west2-c
    machineType: https://www.googleapis.com/compute/v1/projects/affable-seat-213016/zones/europe-west2-c/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1604-xenial-v20190514
    - deviceName: data
      boot: false
      autoDelete: true
      initializeParams:
        diskSizeGb: 1
        diskType: zones/europe-west2-c/diskTypes/pd-ssd
    networkInterfaces:
    - accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT

You have two solutions:
1. Use Google Cloud DNS and set up a private zone to resolve hostnames within your VPC.
2. Use the Compute Engine internal DNS name.
However, for method #2, I do not remember whether short internal names resolve across zones, since name resolution there is handled by the Compute Engine internal DNS. Method #1 will always work, provided that DNS is set up correctly.
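For illustration, here is a minimal sketch of both methods, assuming the default VPC network and the project ID affable-seat-213016 from your config; the zone name cassandra-zone and the DNS suffix cassandra.internal. are placeholders:
# Method #2: the fully qualified internal DNS name (zonal format)
# resolves within the VPC, even across zones:
ssh cassandra-node-1.europe-west2-c.c.affable-seat-213016.internal
# Method #1: a Cloud DNS private zone attached to the VPC:
gcloud dns managed-zones create cassandra-zone \
    --dns-name=cassandra.internal. \
    --visibility=private \
    --networks=default \
    --description="Private zone for the Cassandra cluster"
gcloud dns record-sets transaction start --zone=cassandra-zone
gcloud dns record-sets transaction add 10.154.0.14 \
    --name=cassandra-node-1.cassandra.internal. \
    --ttl=300 --type=A --zone=cassandra-zone
gcloud dns record-sets transaction execute --zone=cassandra-zone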

Related

How to connect from a public GKE pod to a GCP Cloud SQL using a private connection

I have a Java application running in a docker container. I am deploying all this to my GKE cluster. I'd like to have it connect to a CloudSQL instance via private IP, but I have been struggling for two days to get it working. I followed this guide:
https://cloud.google.com/sql/docs/mysql/configure-private-services-access#gcloud_1
I managed to create a private service connection and also gave my CloudSQL instance the address range. As far as I understand, this should be sufficient for the pod to be able to connect to the CloudSQL instance.
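(For reference, the private services access setup in that guide amounts to something like the following sketch; the range name matches the gcloud output shown below:)
gcloud compute addresses create google-managed-services-default \
    --global --purpose=VPC_PEERING --prefix-length=16 --network=default
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-default \
    --network=default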
However, it just does not work. I pass the private IP from CloudSQL as the host for the Java application's JDBC (database) connection.
2022-02-14 22:03:31.299 WARN 1 --- [ main] o.h.e.j.e.i.JdbcEnvironmentInitiator : HHH000342: Could not obtain connection to query metadata

com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
Here are some details about the problem.
The address allocation
➜ google-cloud-sdk gcloud compute addresses list
NAME                             ADDRESS/RANGE  TYPE      PURPOSE      NETWORK  REGION  SUBNET  STATUS
google-managed-services-default  10.77.0.0/16   INTERNAL  VPC_PEERING  default                  RESERVED
The vpc peering connection
➜ google-cloud-sdk gcloud services vpc-peerings list --network=default
---
network: projects/1071923183712/global/networks/default
peering: servicenetworking-googleapis-com
reservedPeeringRanges:
- google-managed-services-default
service: services/servicenetworking.googleapis.com
Here is my CloudSQL info. Please note that the PRIVATE IP address is 10.77.0.5, which matches the 10.77.0.0/16 address range from above. I guess this part is working.
➜ google-cloud-sdk gcloud sql instances describe alpha-3
backendType: SECOND_GEN
connectionName: barbarus:europe-west4:alpha-3
createTime: '2022-02-14T19:28:02.465Z'
databaseInstalledVersion: MYSQL_5_7_36
databaseVersion: MYSQL_5_7
etag: 758de240b161b946689e5732d8e71d396c772c0e03904c46af3b61f59b1038a0
gceZone: europe-west4-a
instanceType: CLOUD_SQL_INSTANCE
ipAddresses:
- ipAddress: 34.90.174.243
  type: PRIMARY
- ipAddress: 10.77.0.5
  type: PRIVATE
kind: sql#instance
name: alpha-3
project: barbarus
region: europe-west4
selfLink: https://sqladmin.googleapis.com/sql/v1beta4/projects/barbarus/instances/alpha-3
serverCaCert:
  cert: |-
    -----BEGIN CERTIFICATE-----
    //...
    -----END CERTIFICATE-----
  certSerialNumber: '0'
  commonName: C=US,O=Google\, Inc,CN=Google Cloud SQL Server CA,dnQualifier=d495898b-f6c7-4e2f-9c59-c02ccf2c1395
  createTime: '2022-02-14T19:29:35.325Z'
  expirationTime: '2032-02-12T19:30:35.325Z'
  instance: alpha-3
  kind: sql#sslCert
  sha1Fingerprint: 3ee799b139bf335ef39554b07a5027c9319087cb
serviceAccountEmailAddress: p1071923183712-d99fsz@gcp-sa-cloud-sql.iam.gserviceaccount.com
settings:
  activationPolicy: ALWAYS
  availabilityType: ZONAL
  backupConfiguration:
    backupRetentionSettings:
      retainedBackups: 7
      retentionUnit: COUNT
    binaryLogEnabled: true
    enabled: true
    kind: sql#backupConfiguration
    location: us
    startTime: 12:00
    transactionLogRetentionDays: 7
  dataDiskSizeGb: '10'
  dataDiskType: PD_HDD
  ipConfiguration:
    allocatedIpRange: google-managed-services-default
    ipv4Enabled: true
    privateNetwork: projects/barbarus/global/networks/default
    requireSsl: false
  kind: sql#settings
  locationPreference:
    kind: sql#locationPreference
    zone: europe-west4-a
  pricingPlan: PER_USE
  replicationType: SYNCHRONOUS
  settingsVersion: '1'
  storageAutoResize: true
  storageAutoResizeLimit: '0'
  tier: db-f1-micro
state: RUNNABLE
The problem I see is with the pod's IP address: it is 10.0.5.3, which is not in the 10.77.0.0/16 range, and therefore the pod can't see the CloudSQL instance.
Here is the pod's info:
Name:         game-server-5b9dd47cbd-vt2gw
Namespace:    default
Priority:     0
Node:         gke-barbarus-node-pool-1a5ea7d5-bg3m/10.164.15.216
Start Time:   Tue, 15 Feb 2022 00:33:56 +0100
Labels:       app=game-server
              app.kubernetes.io/managed-by=gcp-cloud-build-deploy
              pod-template-hash=5b9dd47cbd
Annotations:  <none>
Status:       Running
IP:           10.0.5.3
IPs:
  IP:           10.0.5.3
Controlled By:  ReplicaSet/game-server-5b9dd47cbd
Containers:
  game-server:
    Container ID:  containerd://57d9540b1e5f5cb3fcc4517fa42377282943d292ba810c83cd7eb50bd4f1e3dd
    Image:         eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141
    Image ID:      eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141
    Port:          <none>
    Host Port:     <none>
    State:         Terminated
      Reason:      Error
      Exit Code:   1
      Started:     Tue, 15 Feb 2022 00:36:48 +0100
      Finished:    Tue, 15 Feb 2022 00:38:01 +0100
    Last State:    Terminated
      Reason:      Error
      Exit Code:   1
      Started:     Tue, 15 Feb 2022 00:35:23 +0100
      Finished:    Tue, 15 Feb 2022 00:36:35 +0100
    Ready:          False
    Restart Count:  2
    Environment:
      SQL_CONNECTION:  <set to the key 'SQL_CONNECTION' of config map 'game-server'>  Optional: false
      SQL_USER:        <set to the key 'SQL_USER' of config map 'game-server'>        Optional: false
      SQL_DATABASE:    <set to the key 'SQL_DATABASE' of config map 'game-server'>    Optional: false
      SQL_PASSWORD:    <set to the key 'SQL_PASSWORD' of config map 'game-server'>    Optional: false
      LOG_LEVEL:       <set to the key 'LOG_LEVEL' of config map 'game-server'>       Optional: false
      WORLD_ID:        <set to the key 'WORLD_ID' of config map 'game-server'>        Optional: false
      WORLD_SIZE:      <set to the key 'WORLD_SIZE' of config map 'game-server'>      Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sknlk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-sknlk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m7s                 default-scheduler  Successfully assigned default/game-server-5b9dd47cbd-vt2gw to gke-barbarus-node-pool-1a5ea7d5-bg3m
  Normal   Pulling    4m6s                 kubelet            Pulling image "eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141"
  Normal   Pulled     3m55s                kubelet            Successfully pulled image "eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141" in 11.09487284s
  Normal   Created    75s (x3 over 3m54s)  kubelet            Created container game-server
  Normal   Started    75s (x3 over 3m54s)  kubelet            Started container game-server
  Normal   Pulled     75s (x2 over 2m41s)  kubelet            Container image "eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141" already present on machine
  Warning  BackOff    1s (x2 over 87s)     kubelet            Back-off restarting failed container
Finally, this is what gcloud container clusters describe gives me:
➜ google-cloud-sdk gcloud container clusters describe --region=europe-west4 barbarus
addonsConfig:
  gcePersistentDiskCsiDriverConfig:
    enabled: true
  kubernetesDashboard:
    disabled: true
  networkPolicyConfig:
    disabled: true
autopilot: {}
autoscaling:
  autoscalingProfile: BALANCED
binaryAuthorization: {}
clusterIpv4Cidr: 10.0.0.0/14
createTime: '2022-02-14T19:34:03+00:00'
currentMasterVersion: 1.21.6-gke.1500
currentNodeCount: 3
currentNodeVersion: 1.21.6-gke.1500
databaseEncryption:
  state: DECRYPTED
endpoint: 34.141.141.150
id: 39e7249b48c24d23a8b70b0c11cd18901565336b397147dab4778dc75dfc34e2
initialClusterVersion: 1.21.6-gke.1500
initialNodeCount: 1
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-a/instanceGroupManagers/gke-barbarus-node-pool-e291e3d6-grp
- https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-b/instanceGroupManagers/gke-barbarus-node-pool-5aa35c39-grp
- https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-c/instanceGroupManagers/gke-barbarus-node-pool-380645b7-grp
ipAllocationPolicy:
  useRoutes: true
labelFingerprint: a9dc16a7
legacyAbac: {}
location: europe-west4
locations:
- europe-west4-a
- europe-west4-b
- europe-west4-c
loggingConfig:
  componentConfig:
    enableComponents:
    - SYSTEM_COMPONENTS
    - WORKLOADS
loggingService: logging.googleapis.com/kubernetes
maintenancePolicy:
  resourceVersion: e3b0c442
masterAuth:
  clusterCaCertificate: // ...
masterAuthorizedNetworksConfig: {}
monitoringConfig:
  componentConfig:
    enableComponents:
    - SYSTEM_COMPONENTS
monitoringService: monitoring.googleapis.com/kubernetes
name: barbarus
network: default
networkConfig:
  defaultSnatStatus: {}
  network: projects/barbarus/global/networks/default
  serviceExternalIpsConfig: {}
  subnetwork: projects/barbarus/regions/europe-west4/subnetworks/default
nodeConfig:
  diskSizeGb: 100
  diskType: pd-standard
  imageType: COS_CONTAINERD
  machineType: e2-medium
  metadata:
    disable-legacy-endpoints: 'true'
  oauthScopes:
  - https://www.googleapis.com/auth/cloud-platform
  preemptible: true
  serviceAccount: default@barbarus.iam.gserviceaccount.com
  shieldedInstanceConfig:
    enableIntegrityMonitoring: true
nodeIpv4CidrSize: 24
nodePoolDefaults:
  nodeConfigDefaults: {}
nodePools:
- config:
    diskSizeGb: 100
    diskType: pd-standard
    imageType: COS_CONTAINERD
    machineType: e2-medium
    metadata:
      disable-legacy-endpoints: 'true'
    oauthScopes:
    - https://www.googleapis.com/auth/cloud-platform
    preemptible: true
    serviceAccount: default@barbarus.iam.gserviceaccount.com
    shieldedInstanceConfig:
      enableIntegrityMonitoring: true
  initialNodeCount: 1
  instanceGroupUrls:
  - https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-a/instanceGroupManagers/gke-barbarus-node-pool-e291e3d6-grp
  - https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-b/instanceGroupManagers/gke-barbarus-node-pool-5aa35c39-grp
  - https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-c/instanceGroupManagers/gke-barbarus-node-pool-380645b7-grp
  locations:
  - europe-west4-a
  - europe-west4-b
  - europe-west4-c
  management:
    autoRepair: true
    autoUpgrade: true
  name: node-pool
  podIpv4CidrSize: 24
  selfLink: https://container.googleapis.com/v1/projects/barbarus/locations/europe-west4/clusters/barbarus/nodePools/node-pool
  status: RUNNING
  upgradeSettings:
    maxSurge: 1
  version: 1.21.6-gke.1500
notificationConfig:
  pubsub: {}
releaseChannel:
  channel: REGULAR
selfLink: https://container.googleapis.com/v1/projects/barbarus/locations/europe-west4/clusters/barbarus
servicesIpv4Cidr: 10.3.240.0/20
shieldedNodes:
  enabled: true
status: RUNNING
subnetwork: default
zone: europe-west4
I have no idea how I can give the pod a reference to the address allocation I made for the private service connection.
What I tried was to spin up a GKE cluster with a cluster default pod address range of 10.77.0.0/16, which sounded logical since I want the pods to appear in the same address range as the CloudSQL instance. However, GCP gives me an error when I try to do that:
(1) insufficient regional quota to satisfy request: resource "CPUS": request requires '9.0' and is short '1.0'. project has a quota of '8.0' with '8.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=hait-barbarus (2) insufficient regional quota to satisfy request: resource "IN_USE_ADDRESSES": request requires '9.0' and is short '5.0'. project has a quota of '4.0' with '4.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=hait-barbarus.
So I am not able to give the pods the proper address range for the private service connection. How can they ever discover the CloudSQL instance?
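(For reference, a sketch of the kind of create command that produces the quota error above, assuming gcloud's --cluster-ipv4-cidr flag and the machine type from the cluster described earlier:)
gcloud container clusters create barbarus \
    --region=europe-west4 \
    --num-nodes=1 \
    --machine-type=e2-medium \
    --cluster-ipv4-cidr=10.77.0.0/16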
EDIT #1: The GKE cluster's service account has the SQL Client role.

Deploy node-pool in different subnetwork in same yaml file

I am creating a YAML config to deploy a GKE cluster with multiple node pools. I'd like to be able to create a new cluster and put each node pool in a different subnetwork. Can this be done?
I have tried putting the subnetwork in different parts of the properties under the second node pool, but it errors out. Below is the error:
message: '{"ResourceType":"gcp-types/container-v1:projects.locations.clusters.nodePools","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid
JSON payload received. Unknown name \"subnetwork\": Cannot find field.","status":"INVALID_ARGUMENT","details":[{"#type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"description":"Invalid
JSON payload received. Unknown name \"subnetwork\": Cannot find field."}]}],"statusMessage":"Bad
Here is the current code for both node pools. The first one is created, but the second one errors out.
resources:
- name: myclus
  type: gcp-types/container-v1:projects.locations.clusters
  properties:
    parent: projects/[PROJECT_ID]/locations/[ZONE/REGION]
    cluster:
      name: my-clus
      zone: us-east4
      subnetwork: dev-web ### leave this field blank if using the default network
      initialClusterVersion: "1.13"
      nodePools:
      - name: my-clus-pool1
        initialNodeCount: 1
        config:
          machineType: n1-standard-1
          imageType: cos
          oauthScopes:
          - https://www.googleapis.com/auth/cloud-platform
          preemptible: true
- name: my-clus
  type: gcp-types/container-v1:projects.locations.clusters.nodePools
  properties:
    parent: projects/[PROJECT_ID]/locations/[ZONE/REGION]/clusters/$(ref.myclus.name)
    subnetwork: dev-web ### leave this field blank if using the default
    nodePool:
      name: my-clus-pool2
      initialNodeCount: 1
      version: "1.13"
      config:
        machineType: n1-standard-1
        imageType: cos
        oauthScopes:
        - https://www.googleapis.com/auth/cloud-platform
        preemptible: true
The expected outcome is to have 2 node pools in 2 different subnetworks.
I found out that this is actually not a limitation of Deployment Manager but a limitation of GKE.
We can't assign a different subnet to different node pools; the network and subnets are defined at the cluster level. There is no "subnetwork" field in the node pool API.
Here is a link you can refer to for more information.
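To illustrate, a sketch under the same schema as the config above: the subnetwork can only be set once, inside the cluster, where it applies to every node pool:
resources:
- name: myclus
  type: gcp-types/container-v1:projects.locations.clusters
  properties:
    parent: projects/[PROJECT_ID]/locations/[ZONE/REGION]
    cluster:
      name: my-clus
      subnetwork: dev-web   # cluster-level only; inherited by all node pools
      nodePools:
      - name: my-clus-pool1
        initialNodeCount: 1
      - name: my-clus-pool2
        initialNodeCount: 1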

Cannot connect to Google Cloud SQL from Kubernetes Engine

I have one Google Cloud SQL Second Generation instance and one Google Kubernetes Engine cluster. The problem is that I cannot connect to the Cloud SQL instance using its private IP. I have enabled private IP in the Cloud SQL dashboard and assigned it to my VPC network. However, the container still can't connect.
Is it maybe related to peering routes? Do I need to create one?
PS. I followed this guide https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine
Result of gcloud container clusters describe:
$ gcloud container clusters describe sirodoht-32-fec8e2914780bf2c
addonsConfig:
  kubernetesDashboard:
    disabled: true
  networkPolicyConfig:
    disabled: true
clusterIpv4Cidr: 10.40.0.0/14
createTime: '2019-05-12T17:07:17+00:00'
currentMasterVersion: 1.12.7-gke.10
currentNodeCount: 6
currentNodeVersion: 1.11.8-gke.6
defaultMaxPodsConstraint:
  maxPodsPerNode: '110'
endpoint: <retracted ip>
initialClusterVersion: 1.11.8-gke.6
initialNodeCount: 1
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/sirodoht-32/zones/europe-west3-a/instanceGroupManagers/gke-sirodoht-32-fep8e29-pool-94e97802-grp
ipAllocationPolicy:
  clusterIpv4Cidr: 10.40.0.0/14
  clusterIpv4CidrBlock: 10.40.0.0/14
  clusterSecondaryRangeName: gke-sirodoht-32-fec8e2914780bf2c-pods-4439d109
  servicesIpv4Cidr: 10.170.0.0/20
  servicesIpv4CidrBlock: 10.170.0.0/20
  servicesSecondaryRangeName: gke-sirodoht-32-fec8e2914780bf2c-services-4439d109
  useIpAliases: true
labelFingerprint: a9dc16a7
legacyAbac: {}
location: europe-west3-a
locations:
- europe-west3-a
loggingService: logging.googleapis.com
maintenancePolicy:
  window:
    dailyMaintenanceWindow:
      duration: PT4H0M0S
      startTime: 00:00
masterAuth:
  clientCertificate: <retracted>
  clientKey: <retracted>
  clusterCaCertificate: <retracted>
monitoringService: monitoring.googleapis.com
name: sirodoht-32-fec8e2914780bf2c
network: compute-network-aaa8ff1ec6b52012
networkConfig:
  network: projects/sirodoht-32/global/networks/compute-network-aaa8ff1ec6b52012
  subnetwork: projects/sirodoht-32/regions/europe-west3/subnetworks/subnet-bb2c9eb79b29a825
nodeConfig:
  diskSizeGb: 100
  diskType: pd-standard
  imageType: COS
  machineType: n1-standard-4
  oauthScopes:
  - https://www.googleapis.com/auth/monitoring
  - https://www.googleapis.com/auth/devstorage.read_only
  - https://www.googleapis.com/auth/logging.write
  - https://www.googleapis.com/auth/service.management.readonly
  - https://www.googleapis.com/auth/servicecontrol
  - https://www.googleapis.com/auth/trace.append
  serviceAccount: default
nodePools:
- config:
    diskSizeGb: 100
    diskType: pd-standard
    imageType: COS
    machineType: n1-standard-4
    oauthScopes:
    - https://www.googleapis.com/auth/monitoring
    - https://www.googleapis.com/auth/devstorage.read_only
    - https://www.googleapis.com/auth/logging.write
    - https://www.googleapis.com/auth/service.management.readonly
    - https://www.googleapis.com/auth/servicecontrol
    - https://www.googleapis.com/auth/trace.append
    serviceAccount: default
  initialNodeCount: 6
  instanceGroupUrls:
  - https://www.googleapis.com/compute/v1/projects/sirodoht-32/zones/europe-west3-a/instanceGroupManagers/gke-sirodoht-32-fep8e29-pool-94e97802-grp
  management: {}
  maxPodsConstraint:
    maxPodsPerNode: '110'
  name: pool
  podIpv4CidrSize: 24
  selfLink: https://container.googleapis.com/v1/projects/sirodoht-32/zones/europe-west3-a/clusters/sirodoht-32-fec8e2914780bf2c/nodePools/pool
  status: RUNNING
  version: 1.11.8-gke.6
selfLink: https://container.googleapis.com/v1/projects/sirodoht-32/zones/europe-west3-a/clusters/sirodoht-32-fec8e2914780bf2c
servicesIpv4Cidr: 10.170.0.0/20
status: RUNNING
subnetwork: subnet-bb2c9eb79b29a825
zone: europe-west3-a
* - There is an upgrade available for your cluster(s).
To upgrade nodes to the latest available version, run
$ gcloud container clusters upgrade sirodoht-32-fec8e2914780bf2c
Private IPs are only accessible from other resources on the same Virtual Private Cloud (VPC). Follow these instructions to set up a GKE cluster on the same VPC as your Cloud SQL instance.
For more information on the environment requirements for using private IP with Cloud SQL, please see this page.
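As a sketch, assuming the network and subnetwork names from the describe output above (the cluster name my-new-cluster is a placeholder), creating a cluster on the same VPC would look something like:
gcloud container clusters create my-new-cluster \
    --zone=europe-west3-a \
    --network=compute-network-aaa8ff1ec6b52012 \
    --subnetwork=subnet-bb2c9eb79b29a825 \
    --enable-ip-alias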

Kitchen-EC2 SSH prompting password for an instance inside VPC

I am trying to spin up an EC2 instance inside a VPC on a private subnet. Every time I run kitchen test, I am able to spin up the instance with the right security groups and in the right subnet range. When test-kitchen tries to SSH onto the instance, it asks for a password. However, when I manually ssh onto the machine (ssh <private_ip> -i <path_to_ssh_key> -l ubuntu), I succeed without being prompted for a password.
The following is my .kitchen.yml file:
---
driver:
  name: ec2
  aws_ssh_key_id: id-spanning
  security_group_ids: ['sg-9....5']
  region: us-east-1
  availability_zone: us-east-1a
  require_chef_omnibus: true
  subnet_id: subnet-5...0
  associate_public_ip: false
  instance_type: m3.medium
  interface: private
transport:
  ssh_key: ~/.ssh/id-spanning.pem
  connection_timeout: 10
  connection_retries: 5
  username: ubuntu
provisioner:
  name: chef_solo
platforms:
- name: Ubuntu-14.04
  driver:
    image_id: ami-8821cae0
suites:
- name: default
  run_list:
  attributes:
I have the AWS credentials in place in environment variables. The following is my output:
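(For reference, kitchen-ec2 picks credentials up from the standard AWS environment variables; placeholder values shown:)
export AWS_ACCESS_KEY_ID=AKIA................
export AWS_SECRET_ACCESS_KEY=........................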
kitchen test
-----> Starting Kitchen (v1.4.0)
-----> Cleaning up any prior instances of <default-Ubuntu-1404>
-----> Destroying <default-Ubuntu-1404>...
EC2 instance <i-16f468c6> destroyed.
Finished destroying <default-Ubuntu-1404> (0m1.90s).
-----> Testing <default-Ubuntu-1404>
-----> Creating <default-Ubuntu-1404>...
Creating <>...
If you are not using an account that qualifies under the AWS
free-tier, you may be charged to run these suites. The charge
should be minimal, but neither Test Kitchen nor its maintainers
are responsible for your incurred costs.
Instance <i-8fad345f> requested.
EC2 instance <i-8fad345f> created.
Waited 0/300s for instance <i-8fad345f> to become ready.
Waited 5/300s for instance <i-8fad345f> to become ready.
Waited 10/300s for instance <i-8fad345f> to become ready.
Waited 15/300s for instance <i-8fad345f> to become ready.
Waited 20/300s for instance <i-8fad345f> to become ready.
Waited 25/300s for instance <i-8fad345f> to become ready.
EC2 instance <i-8fad345f> ready.
Password:
I tried several times and haven't had any luck bypassing the password prompt to allow test-kitchen to ssh onto the instance. The following is my kitchen diagnose output:
---
timestamp: 2015-05-26 15:34:29 UTC
kitchen_version: 1.4.0
instances:
  default-Ubuntu-1404:
    platform:
      os_type: unix
      shell_type: bourne
    state_file:
      hostname: ''
      server_id: i-1.....6
    driver:
      associate_public_ip: false
      availability_zone: us-east-1a
      aws_access_key_id:
      aws_secret_access_key:
      aws_session_token:
      aws_ssh_key_id: id-spanning
      block_device_mappings:
      ebs_optimized: false
      flavor_id:
      iam_profile_name:
      image_id: ami-8821cae0
      instance_type: m3.medium
      interface: private
      kitchen_root: "/Users/jonnas2/Desktop/apache101"
      log_level: :info
      name: ec2
      price:
      private_ip_address:
      region: us-east-1
      retryable_sleep: 5
      retryable_tries: 60
      security_group_ids:
      - sg-9....5
      shared_credentials_profile:
      subnet_id: subnet-5....0
      tags:
        created-by: test-kitchen
      test_base_path: "/Users/jonnas2/Desktop/apache101/test/integration"
      user_data:
      username:
    provisioner:
      attributes: {}
      chef_metadata_url:
      chef_omnibus_install_options:
      chef_omnibus_root: "/opt/chef"
      chef_omnibus_url: https://www.chef.io/chef/install.sh
      chef_solo_path: "/opt/chef/bin/chef-solo"
      clients_path:
      cookbook_files_glob: README.*,metadata.{json,rb},attributes/**/*,definitions/**/*,files/**/*,libraries/**/*,providers/**/*,recipes/**/*,resources/**/*,templates/**/*
      data_bags_path:
      data_path:
      encrypted_data_bag_secret_key_path:
      environments_path:
      http_proxy:
      https_proxy:
      kitchen_root: "/Users/jonnas2/Desktop/apache101"
      log_file:
      log_level: :info
      name: chef_solo
      nodes_path:
      require_chef_omnibus: true
      roles_path:
      root_path: "/tmp/kitchen"
      run_list: []
      solo_rb: {}
      sudo: true
      sudo_command: sudo -E
      test_base_path: "/Users/jonnas2/Desktop/apache101/test/integration"
    transport:
      compression: zlib
      compression_level: 6
      connection_retries: 5
      connection_retry_sleep: 1
      connection_timeout: 10
      keepalive: true
      keepalive_interval: 60
      kitchen_root: "/Users/jonnas2/Desktop/apache101"
      log_level: :info
      max_wait_until_ready: 600
      name: ssh
      port: 22
      ssh_key: "/Users/jonnas2/.ssh/id-spanning.pem"
      test_base_path: "/Users/jonnas2/Desktop/apache101/test/integration"
      username: ubuntu
    verifier:
      busser_bin: "/tmp/verifier/bin/busser"
      http_proxy:
      https_proxy:
      kitchen_root: "/Users/jonnas2/Desktop/apache101"
      log_level: :info
      name: busser
      root_path: "/tmp/verifier"
      ruby_bindir: "/opt/chef/embedded/bin"
      sudo: true
      sudo_command: sudo -E
      suite_name: default
      test_base_path: "/Users/jonnas2/Desktop/apache101/test/integration"
      version: busser
Versions used: test-kitchen 1.4.0, kitchen-ec2 0.9.0.
Any help would be greatly appreciated. Thanks.
This issue was resolved by test-kitchen 1.4.1. A fix was merged into core test-kitchen (https://github.com/test-kitchen/test-kitchen/pull/704) that disables password auth if an ssh_key is configured.
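For reference, a minimal sketch of picking up the fix, assuming a RubyGems-managed installation:
gem install test-kitchen -v 1.4.1
kitchen version   # should report 1.4.1 or later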

Test Kitchen, store credentials

With Test Kitchen, in the yaml configs... where is the best place to store globally used attributes that apply to multiple platforms and multiple suites?
To use my .kitchen.yml as an example:
---
provisioner:
name: chef_solo
platforms:
- name: centos-6.5
driver:
name: vagrant
- name: amazon
driver:
name: ec2
image_id: ami-ed8e9284
flavor_id: t2.medium
aws_ssh_key_id: <snip>
ssh_key: <snip>
availability_zone: us-east-1a
subnet_id: subnet-<snip>
require_chef_omnibus: true
iam_profile_name: <snip>
ebs_delete_on_termination: true
security_group_ids: sg-<snip>
# area in question (does not work here)
attributes:
teamcity:
server: 'build.example.com'
port: 80
username: 'example'
password: 'example'
# end area in question
suites:
- name: resin4
run_list:
- recipe[example_server::resin4]
- recipe[example_server::deploy_all_artifacts]
- name: deploy
run_list:
- recipe[example_server::deploy_all_artifacts]
- name: default
run_list:
- recipe[example_server::elasticsearch]
- recipe[example_server::resin4]
- recipe[example_server::deploy_all_artifacts]
I know there are other kitchen files, such as ~/.kitchen/config.yml and .kitchen.local.yml, but I've been unable to find a way to get attributes to apply to all platforms and suites. Is copying and pasting attributes into each platform the best way?
Is there a reason to specify these attributes in kitchen's YAML and not in recipe[example_server::deploy_all_artifacts]? If necessary, you could set overrides in kitchen.
Also, this post might be helpful: Access Attributes Across Recipes
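If they do need to live in kitchen's YAML rather than a recipe, one option (a sketch relying on test-kitchen's standard config merging) is to put the attributes under the top-level provisioner block, which applies to every platform and suite:
---
provisioner:
  name: chef_solo
  attributes:
    teamcity:
      server: 'build.example.com'
      port: 80
      username: 'example'
      password: 'example'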