OpenStack Heat: get ResourceGroup IP addresses and set them as params

I create one master node and two minion instances from a Heat template.
I want to write the IP addresses of all minion_group instances into an ip.txt file on the master instance.
The minion group IP addresses come from {get_attr: [minion_group, first_address]},
but the returned value is not a string, so it can't be used with str_replace.
Does anyone have ideas?
Here is the resources section of my template:
resources:
  master:
    type: OS::Nova::Server
    depends_on: minion_group
    properties:
      flavor: {get_param: master_flavor}
      image: {get_param: image}
      key_name: {get_param: key}
      networks:
        - port: {get_param: network}
      user_data_format: SOFTWARE_CONFIG
  minion_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: 2
      resource_def:
        type: OS::Nova::Server
        properties:
          name:
            list_join:
              - '-'
              - [{ get_param: 'OS::stack_name' }, 'minion', '%index%']
          flavor: {get_param: minion_flavor}
          image: {get_param: image}
          key_name: {get_param: key}
          networks:
            - network: {get_param: network}
  get_ip:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/bash
            echo "$minion_group_ip" > /opt/ip.txt
          params:
            $minion_group_ip: {get_attr: [minion_group, first_address]}
  deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      signal_transport: HEAT_SIGNAL
      config: {get_resource: get_ip}
      server: {get_resource: master}
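For reference: on a ResourceGroup, get_attr: [minion_group, first_address] returns a list with one address per group member, which is why str_replace rejects it. A minimal sketch of one common workaround, joining the list into a single comma-separated string with list_join (untested, resource names as in the template above):
  get_ip:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/bash
            echo "$minion_group_ips" > /opt/ip.txt
          params:
            # list_join turns the list of member addresses into one
            # comma-separated string, which str_replace accepts.
            $minion_group_ips: {list_join: [',', {get_attr: [minion_group, first_address]}]}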

Related

Deploying Docker Compose to AWS ECS

I've been trying to deploy my Docker Compose file to AWS ECS using the Docker Compose CLI.
Steps:
docker context create aws myproject
docker context use myproject
docker compose up
My result is:
FrontendService TaskFailedToStart: CannotPullContainerError: inspect image has been retried 1 time(s): failed to resolve ref "docker.pkg.github.com/org/frontend/frontend:latest#sha256:360461ada5ee15cea9d058280eab4a2558c68cbbdc04fe14eaae789129ddb091": docker.pkg.github.com/org...
My Docker Compose File:
version: '3.3'
services:
  frontend:
    image: 'docker.pkg.github.com/org/frontend/frontend:latest'
    restart: always
    x-aws-pull_credentials: "arn secret"
    env_file:
      - ../frontend/.env.production
    ports:
      - 3000:3000
Output of docker compose convert:
AWSTemplateFormatVersion: 2010-09-09
Resources:
  CloudMap:
    Properties:
      Description: Service Map for Docker Compose project docker
      Name: docker.local
      Vpc: vpc-c7823fba
    Type: AWS::ServiceDiscovery::PrivateDnsNamespace
  Cluster:
    Properties:
      ClusterName: docker
      Tags:
        - Key: com.docker.compose.project
          Value: docker
    Type: AWS::ECS::Cluster
  Default3000Ingress:
    Properties:
      CidrIp: 0.0.0.0/0
      Description: frontend:3000/tcp on default network
      FromPort: 3000
      GroupId:
        Ref: DefaultNetwork
      IpProtocol: TCP
      ToPort: 3000
    Type: AWS::EC2::SecurityGroupIngress
  DefaultNetwork:
    Properties:
      GroupDescription: docker Security Group for default network
      Tags:
        - Key: com.docker.compose.project
          Value: docker
        - Key: com.docker.compose.network
          Value: docker_default
      VpcId: vpc-c7823fba
    Type: AWS::EC2::SecurityGroup
  DefaultNetworkIngress:
    Properties:
      Description: Allow communication within network default
      GroupId:
        Ref: DefaultNetwork
      IpProtocol: "-1"
      SourceSecurityGroupId:
        Ref: DefaultNetwork
    Type: AWS::EC2::SecurityGroupIngress
  FrontendService:
    DependsOn:
      - FrontendTCP3000Listener
    Properties:
      Cluster:
        Fn::GetAtt:
          - Cluster
          - Arn
      DeploymentConfiguration:
        MaximumPercent: 200
        MinimumHealthyPercent: 100
      DeploymentController:
        Type: ECS
      DesiredCount: 1
      LaunchType: FARGATE
      LoadBalancers:
        - ContainerName: frontend
          ContainerPort: 3000
          TargetGroupArn:
            Ref: FrontendTCP3000TargetGroup
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED
          SecurityGroups:
            - Ref: DefaultNetwork
          Subnets:
            - subnet-a7d289a9
            - subnet-a56b0484
            - subnet-22df6b13
            - subnet-4c96f72a
            - subnet-3bd2b064
            - subnet-f486b7b9
      PlatformVersion: 1.4.0
      PropagateTags: SERVICE
      SchedulingStrategy: REPLICA
      ServiceRegistries:
        - RegistryArn:
            Fn::GetAtt:
              - FrontendServiceDiscoveryEntry
              - Arn
      Tags:
        - Key: com.docker.compose.project
          Value: docker
        - Key: com.docker.compose.service
          Value: frontend
      TaskDefinition:
        Ref: FrontendTaskDefinition
    Type: AWS::ECS::Service
  FrontendServiceDiscoveryEntry:
    Properties:
      Description: '"frontend" service discovery entry in Cloud Map'
      DnsConfig:
        DnsRecords:
          - TTL: 60
            Type: A
        RoutingPolicy: MULTIVALUE
      HealthCheckCustomConfig:
        FailureThreshold: 1
      Name: frontend
      NamespaceId:
        Ref: CloudMap
    Type: AWS::ServiceDiscovery::Service
  FrontendTCP3000Listener:
    Properties:
      DefaultActions:
        - ForwardConfig:
            TargetGroups:
              - TargetGroupArn:
                  Ref: FrontendTCP3000TargetGroup
          Type: forward
      LoadBalancerArn:
        Ref: LoadBalancer
      Port: 3000
      Protocol: TCP
    Type: AWS::ElasticLoadBalancingV2::Listener
  FrontendTCP3000TargetGroup:
    Properties:
      Port: 3000
      Protocol: TCP
      Tags:
        - Key: com.docker.compose.project
          Value: docker
      TargetType: ip
      VpcId: vpc-c7823fba
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
  FrontendTaskDefinition:
    Properties:
      ContainerDefinitions:
        - Command:
            - us-east-1.compute.internal
            - docker.local
          Essential: false
          Image: docker/ecs-searchdomain-sidecar:1.0
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group:
                Ref: LogGroup
              awslogs-region:
                Ref: AWS::Region
              awslogs-stream-prefix: docker
          Name: Frontend_ResolvConf_InitContainer
        - DependsOn:
            - Condition: SUCCESS
              ContainerName: Frontend_ResolvConf_InitContainer
          Environment:
            - Name: NEXT_PUBLIC_BACKEND_URL
              Value: '"http://localhost:4000"'
            - Name: NEXT_PUBLIC_SEGMENT_TOKEN
              Value: '"8GVxkwMLOYo5w4XrDwEGtATBaGlUrruH"'
          Essential: true
          Image: docker.pkg.github.com/org/frontend/frontend:latest#sha256:360461ada5ee15cea9d058280eab4a2558c68cbbdc04fe14eaae789129ddb091
          LinuxParameters: {}
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group:
                Ref: LogGroup
              awslogs-region:
                Ref: AWS::Region
              awslogs-stream-prefix: docker
          Name: frontend
          PortMappings:
            - ContainerPort: 3000
              HostPort: 3000
              Protocol: tcp
          RepositoryCredentials:
            CredentialsParameter: arn:aws
      Cpu: "256"
      ExecutionRoleArn:
        Ref: FrontendTaskExecutionRole
      Family: docker-frontend
      Memory: "512"
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
    Type: AWS::ECS::TaskDefinition
  FrontendTaskExecutionRole:
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action:
              - sts:AssumeRole
            Condition: {}
            Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
        Version: 2012-10-17
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
      Policies:
        - PolicyDocument:
            Statement:
              - Action:
                  - secretsmanager:GetSecretValue
                  - ssm:GetParameters
                  - kms:Decrypt
                Condition: {}
                Effect: Allow
                Principal: {}
                Resource:
                  - arn:aws:secretsmanager
          PolicyName: frontendGrantAccessToSecrets
      Tags:
        - Key: com.docker.compose.project
          Value: docker
        - Key: com.docker.compose.service
          Value: frontend
    Type: AWS::IAM::Role
  LoadBalancer:
    Properties:
      LoadBalancerAttributes:
        - Key: load_balancing.cross_zone.enabled
          Value: "true"
      Scheme: internet-facing
      Subnets:
        - subnet-a7d289a9
        - subnet-a56b0484
        - subnet-22df6b13
        - subnet-4c96f72a
        - subnet-3bd2b064
        - subnet-f486b7b9
      Tags:
        - Key: com.docker.compose.project
          Value: docker
      Type: network
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  LogGroup:
    Properties:
      LogGroupName: /docker-compose/docker
    Type: AWS::Logs::LogGroup
How would I get this working?
Extra questions:
Is it possible to deploy to EC2 instead of Fargate?
Is it possible to specify the AMI and instance type used for the instances?
Notes:
I've changed my org name and ARN secret to the values above just to keep them hidden. In my actual file, they are valid.
Links:
https://docs.docker.com/cloud/ecs-integration/
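One thing worth checking against the ECS integration docs linked above: x-aws-pull_credentials must be the ARN of a Secrets Manager secret whose value is a JSON document of the form {"username": "...", "password": "..."} (for GitHub's registry, the password being a personal access token with read:packages). A hedged sketch; the region, account ID, and secret name below are placeholders:
services:
  frontend:
    image: docker.pkg.github.com/org/frontend/frontend:latest
    # Placeholder ARN; the referenced secret must contain the JSON
    # {"username": "<github-user>", "password": "<personal-access-token>"}.
    x-aws-pull_credentials: "arn:aws:secretsmanager:us-east-1:123456789012:secret:github-pull-creds"
If ECS still cannot resolve the digest-pinned reference, it may also be worth verifying that the latest tag actually exists in the registry, since convert pins the image to the digest it resolves at conversion time.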

Deploy CloudFormation Template resulting in error

I want to test the deploy of an ECS stack using AWS CLI from a GitLab pipeline.
My test project's core is a variation of the Docker Compose Flask app.
The file app.py:
import time
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello World!'
with its requirements.txt:
flask
The Dockerfile is:
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
and the Docker Compose file is:
version: "3.9"
services:
  web:
    image: registry.gitlab.com/<MYNAME>/<MYPROJECT>
    x-aws-pull_credentials: "<CREDENTIALS>"
    ports:
      - "5000:5000"
I generate a CloudFormationTemplate.yml file using an ECS Docker context (myecscontext, not the default one) and the command
docker compose convert > CloudFormationTemplate.yml
When I try to deploy on AWS from my local workstation (Win10):
aws cloudformation deploy --template-file CloudFormationTemplate.yml --stack-name test-stack
I get the error
unacceptable character #x0000: special characters are not allowed
in "<unicode string>", position 3
What's wrong? Thanks.
===========
ADDED
Here is the CloudFormationTemplate.yml:
AWSTemplateFormatVersion: 2010-09-09
Resources:
  CloudMap:
    Properties:
      Description: Service Map for Docker Compose project cloudformation
      Name: cloudformation.local
      Vpc: vpc-XXXXXXXX
    Type: AWS::ServiceDiscovery::PrivateDnsNamespace
  Cluster:
    Properties:
      ClusterName: cloudformation
      Tags:
        - Key: com.docker.compose.project
          Value: cloudformation
    Type: AWS::ECS::Cluster
  Default5000Ingress:
    Properties:
      CidrIp: 0.0.0.0/0
      Description: web:5000/tcp on default network
      FromPort: 5000
      GroupId:
        Ref: DefaultNetwork
      IpProtocol: TCP
      ToPort: 5000
    Type: AWS::EC2::SecurityGroupIngress
  DefaultNetwork:
    Properties:
      GroupDescription: cloudformation Security Group for default network
      Tags:
        - Key: com.docker.compose.project
          Value: cloudformation
        - Key: com.docker.compose.network
          Value: default
      VpcId: vpc-XXXXXXXX
    Type: AWS::EC2::SecurityGroup
  DefaultNetworkIngress:
    Properties:
      Description: Allow communication within network default
      GroupId:
        Ref: DefaultNetwork
      IpProtocol: "-1"
      SourceSecurityGroupId:
        Ref: DefaultNetwork
    Type: AWS::EC2::SecurityGroupIngress
  LoadBalancer:
    Properties:
      LoadBalancerAttributes:
        - Key: load_balancing.cross_zone.enabled
          Value: "true"
      Scheme: internet-facing
      Subnets:
        - subnet-XXXXXXXX
        - subnet-XXXXXXXX
        - subnet-XXXXXXXX
      Tags:
        - Key: com.docker.compose.project
          Value: cloudformation
      Type: network
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  LogGroup:
    Properties:
      LogGroupName: /docker-compose/cloudformation
    Type: AWS::Logs::LogGroup
  WebService:
    DependsOn:
      - WebTCP5000Listener
    Properties:
      Cluster:
        Fn::GetAtt:
          - Cluster
          - Arn
      DeploymentConfiguration:
        MaximumPercent: 200
        MinimumHealthyPercent: 100
      DeploymentController:
        Type: ECS
      DesiredCount: 1
      LaunchType: FARGATE
      LoadBalancers:
        - ContainerName: web
          ContainerPort: 5000
          TargetGroupArn:
            Ref: WebTCP5000TargetGroup
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED
          SecurityGroups:
            - Ref: DefaultNetwork
          Subnets:
            - subnet-XXXXXXXX
            - subnet-XXXXXXXX
            - subnet-XXXXXXXX
      PlatformVersion: 1.4.0
      PropagateTags: SERVICE
      SchedulingStrategy: REPLICA
      ServiceRegistries:
        - RegistryArn:
            Fn::GetAtt:
              - WebServiceDiscoveryEntry
              - Arn
      Tags:
        - Key: com.docker.compose.project
          Value: cloudformation
        - Key: com.docker.compose.service
          Value: web
      TaskDefinition:
        Ref: WebTaskDefinition
    Type: AWS::ECS::Service
  WebServiceDiscoveryEntry:
    Properties:
      Description: '"web" service discovery entry in Cloud Map'
      DnsConfig:
        DnsRecords:
          - TTL: 60
            Type: A
        RoutingPolicy: MULTIVALUE
      HealthCheckCustomConfig:
        FailureThreshold: 1
      Name: web
      NamespaceId:
        Ref: CloudMap
    Type: AWS::ServiceDiscovery::Service
  WebTCP5000Listener:
    Properties:
      DefaultActions:
        - ForwardConfig:
            TargetGroups:
              - TargetGroupArn:
                  Ref: WebTCP5000TargetGroup
          Type: forward
      LoadBalancerArn:
        Ref: LoadBalancer
      Port: 5000
      Protocol: TCP
    Type: AWS::ElasticLoadBalancingV2::Listener
  WebTCP5000TargetGroup:
    Properties:
      Port: 5000
      Protocol: TCP
      Tags:
        - Key: com.docker.compose.project
          Value: cloudformation
      TargetType: ip
      VpcId: vpc-XXXXXXXX
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
  WebTaskDefinition:
    Properties:
      ContainerDefinitions:
        - Command:
            - XXXXXXXX.compute.internal
            - cloudformation.local
          Essential: false
          Image: docker/ecs-searchdomain-sidecar:1.0
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group:
                Ref: LogGroup
              awslogs-region:
                Ref: AWS::Region
              awslogs-stream-prefix: cloudformation
          Name: Web_ResolvConf_InitContainer
        - DependsOn:
            - Condition: SUCCESS
              ContainerName: Web_ResolvConf_InitContainer
          Essential: true
          Image: registry.gitlab.com/MYUSER/cloudformation
          LinuxParameters: {}
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group:
                Ref: LogGroup
              awslogs-region:
                Ref: AWS::Region
              awslogs-stream-prefix: cloudformation
          Name: web
          PortMappings:
            - ContainerPort: 5000
              HostPort: 5000
              Protocol: tcp
          RepositoryCredentials:
            CredentialsParameter: arn:aws:secretsmanager:XXXXXXXXXXXXXXXXXXXXXXXX
      Cpu: "256"
      ExecutionRoleArn:
        Ref: WebTaskExecutionRole
      Family: cloudformation-web
      Memory: "512"
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
    Type: AWS::ECS::TaskDefinition
  WebTaskExecutionRole:
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action:
              - sts:AssumeRole
            Condition: {}
            Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
        Version: 2012-10-17
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
      Policies:
        - PolicyDocument:
            Statement:
              - Action:
                  - secretsmanager:GetSecretValue
                  - ssm:GetParameters
                  - kms:Decrypt
                Condition: {}
                Effect: Allow
                Principal: {}
                Resource:
                  - arn:aws:secretsmanager:XXXXXXXXXXXXXXXXXXXXXXXX
          PolicyName: webGrantAccessToSecrets
      Tags:
        - Key: com.docker.compose.project
          Value: cloudformation
        - Key: com.docker.compose.service
          Value: web
    Type: AWS::IAM::Role
docker compose convert does not create a valid CloudFormation (CFN) template in the default context. Before you attempt to generate it, you have to create an ECS context:
docker context create ecs myecscontext
Then switch from the default context to myecscontext:
docker context use myecscontext
Use docker context ls to confirm that you are in the correct context (i.e., myecscontext). Then you can use your convert command
docker compose convert
to generate the actual CFN template.
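Separately, if the error appears even with the correct context: #x0000 is a NUL byte, which usually means the file was written as UTF-16, and on a Win10 workstation PowerShell's > redirection writes UTF-16 by default. This is an assumption about the asker's shell, but piping through Out-File with an explicit encoding avoids it:
docker compose convert | Out-File -Encoding ascii CloudFormationTemplate.yml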

Google deployment manager fails when the MIG (managed instance group) and the load balancer are in the same config file

I'm working on IaC (Google Deployment Manager, Python and YAML) and tinkering with Google's external load balancer. As part of the PoC I created:
An instance template based on Ubuntu, with a startup script installing apache2 and an index.html file.
An instance group based on the above template.
An external HTTP load balancer in front of the instance group.
A network and a firewall rule to make debugging easier.
I heavily borrowed from Cloud Foundation Toolkit, and I'm showing only the config part of the code below. I used Python for templating.
imports:
  - path: ../network/vpc/vpc_network.py
    name: vpc_network.py
  - path: ../network/firewall/firewall_rule.py
    name: firewall_rule.py
  - path: ../compute/instance_template/instance_template.py
    name: instance_template.py
  - path: ../compute/health_checks/health_check.py
    name: health_check.py
  - path: ../compute/instance_group/mananged/instance_group.py
    name: instance_group.py
  - path: ../network_services/load_balancing/external_loadBalancers/external_load_balancer.py
    name: external_load_balancer.py
resources:
  - name: demo-firewall-rules-1
    type: firewall_rule.py
    properties:
      rules:
        - name: "allow-ssh-for-all"
          description: "tcp firewall enable from all"
          network: $(ref.net-10-69-16.network)
          priority: 1000
          action: "allow"
          direction: "INGRESS"
          sourceRanges: ['0.0.0.0/0']
          ipProtocol: "tcp"
          ipPorts: ["22"]
        - name: "test-health-check-for-ig"
          description: "enable health check to work on the project"
          network: $(ref.net-10-69-16.network)
          priority: 1000
          action: "allow"
          direction: "INGRESS"
          sourceRanges: ['130.211.0.0/22', '35.191.0.0/16']
          ipProtocol: "tcp"
          ipPorts: ["80", "443"]
        - name: "allow-http-https-from-anywhere"
          description: "allow http and https from anywhere"
          network: $(ref.net-10-69-16.network)
          priority: 1000
          action: "allow"
          direction: "INGRESS"
          sourceRanges: ['0.0.0.0/0']
          ipProtocol: "tcp"
          ipPorts: ["80", "443"]
  - name: net-10-69-16
    type: vpc_network.py
    properties:
      subnetworks:
        - region: australia-southeast1
          cidr: 10.69.10.0/24
        - region: australia-southeast1
          cidr: 10.69.20.0/24
        - region: australia-southeast1
          cidr: 10.69.30.0/24
  - name: mig-regional-1
    type: instance_group.py
    properties:
      region: australia-southeast1
      instanceTemplate: $(ref.it-demo-1.selfLink)
      targetSize: 2
      autoHealingPolicies:
        - healthCheck: $(ref.demo-http-healthcheck-MIG-1.selfLink)
          initialDelaySec: 400
  - name: it-demo-1
    type: instance_template.py
    properties:
      machineType: n1-standard-1
      tags:
        items:
          - http
      disks:
        - deviceName: boot-disk-v1
          initializeParams:
            sourceImage: projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-v20201211
            diskType: pd-ssd
      networkInterfaces:
        - network: $(ref.net-10-69-16.network)
          subnetwork: $(ref.net-10-69-16.subnetworks[0])
          accessConfigs:
            - type: ONE_TO_ONE_NAT
      metadata:
        items:
          - key: startup-script
            value: |
              sudo apt-get install -y apache2
              sudo apt-get install -y php7.0
              sudo service apache2 restart
              sudo echo "Ho Ho Ho from $HOSTNAME" > /var/www/html/index.html
  - name: demo-http-healthcheck-MIG-1
    type: health_check.py
    properties:
      type: HTTP
      checkIntervalSec: 5
      timeoutSec: 5
      unhealthyThreshold: 2
      healthyThreshold: 2
      httpHealthCheck:
        port: 80
        requestPath: /
  - name: http-elb-1
    type: external_load_balancer.py
    properties:
      portRange: 80
      backendServices:
        - resourceName: backend-service-for-http-1
          sessionAffinity: NONE
          affinityCookieTtlSec: 1000
          portName: http
          healthCheck: $(ref.demo-http-healthcheck-MIG-1.selfLink)
          backends:
            - group: $(ref.mig-regional-1.selfLink)
              balancingMode: UTILIZATION
              maxUtilization: 0.8
      urlMap:
        defaultService: backend-service-for-http-1
The problem is with the load balancer. When it is present in the same file, the deployment fails, saying the managed instance group mig-regional-1 does not exist. But when I move the load balancer out of this file and deploy it separately (after a bit of delay), it all goes through fine.
The most probable explanation is that the instance group is not ready when the load balancer tries to reference it, which is exactly the situation ref.*.selfLink is supposed to handle.
This is not exactly a blocker, but it is nice to have the entire config in one file.
So, my questions:
Have any of you faced this before or am I missing something here?
Do you have any solution for this?
Your template needs to make its dependencies explicit.
You can have dependencies between your resources, such as when you need certain parts of your environment to exist before you can deploy other parts of the environment.
You can specify dependencies using the dependsOn option in your templates.
It seems your external LB needs a dependsOn attribute referencing the MIG, as sketched below.
Refer here to get more information about explicit dependencies.
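A minimal sketch of what that could look like in the config above (in Deployment Manager, dependsOn goes under a resource's metadata; everything else stays as in the original):
  - name: http-elb-1
    type: external_load_balancer.py
    metadata:
      dependsOn:
        # Block creation of the load balancer until the managed
        # instance group has actually been created.
        - mig-regional-1
    properties:
      # ... same properties as in the original config ...
      portRange: 80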

Configure a Firewall and a Startup Script with Deployment Manager

I'm carrying out the GCP lab "Configure a Firewall and a Startup Script with Deployment Manager". I changed the qwicklabs.jinja to this code:
resources:
- name: default-allow-http
  type: compute.v1.firewall
  properties:
    targetTags: ["http"]
    sourceRanges: ["0.0.0.0/0"]
    allowed:
    - IPProtocol: TCP
      ports: ["80"]
- type: compute.v1.instance
  name: vm-test
  properties:
    zone: {{ properties["zone"] }}
    machineType: https://www.googleapis.com/compute/v1/projects/{{ env["project"] }}/zones/{{ properties["zone"] }}/machineTypes/f1-micro
    # For examples on how to use startup scripts on an instance, see:
    # https://cloud.google.com/compute/docs/startupscript
    tags:
      items: ["http"]
    metadata:
      items:
      - key: startup-script
        value: "apt-get update \n apt-get install -y apache2"
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        diskName: disk-{{ env["deployment"] }}
        sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: https://www.googleapis.com/compute/v1/projects/{{ env["project"] }}/global/networks/default
      # Access Config required to give the instance a public IP address
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
The VM and disk are created successfully, but I can't complete the last task ("Check that Deployment Manager includes startup script and firewall resource") because I have problems creating the firewall rule, and this appears:
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1598852175371-5ae25c7f61bda-1c55c951-22ca1242]: errors:
- code: RESOURCE_ERROR
  location: /deployments/deployment-templates/resources/http-firewall-rule
  message: '{"ResourceType":"compute.v1.firewall","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Request contains an invalid argument.","status":"INVALID_ARGUMENT","statusMessage":"Bad Request","requestPath":"https://compute.googleapis.com/compute/v1/projects/qwiklabs-gcp-01-888e7df2843f/global/firewalls","httpMethod":"POST"}}'
Could someone help me, please? I have to finish this lab!
For some reason your file was giving me an "invalid format" error, so I created a new Deployment Manager config file; I took the VM template from here, added your external IP configuration and also your firewall rule part (without any changes).
My YAML file looks like this (I didn't use any variables, though).
resources:
- name: vm-created-by-deployment-manager
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/n1-standard-1
    tags:
      items: ["http"]
    metadata:
      items:
      - key: startup-script
        value: "apt-get update \n apt-get install -y apache2"
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
- name: default-allow-http3
  type: compute.v1.firewall
  properties:
    targetTags: ["http"]
    sourceRanges: ["0.0.0.0/0"]
    allowed:
    - IPProtocol: TCP
      ports: ["80"]
When I ran the file everything worked as intended:
wbogacz@cloudshell:~/fire (wojtek)$ gcloud deployment-manager deployments create test1 --config dm1.yaml
The fingerprint of the deployment is b'n63E-AtErTCKtWOvktfUsA=='
Waiting for create [operation-1599036146720-5ae5-----99-2a45880e-addbce89]...done.
Create operation operation-1599036146720-5ae-----99-2a45880e-addbce89 completed successfully.
NAME                              TYPE                 STATE      ERRORS  INTENT
default-allow-http3               compute.v1.firewall  COMPLETED  []
vm-created-by-deployment-manager  compute.v1.instance  COMPLETED  []
At the end I logged in via SSH to the VM and verified that the startup script was executed - and again success.
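A quick way to reproduce that last check from outside the VM (the address below is a placeholder): since default-allow-http3 opens port 80 to 0.0.0.0/0 and the startup script installs apache2, the default Apache page should also be reachable externally:
curl -s http://EXTERNAL_IP_OF_VM/ | head -n 5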

How to make a regional cluster in GKE w/ deployment-manager?

"zone" is a required field when I try to create the cluster, but the documentation says it is "deprecated", which is misleading. And every time I include "zone", that is the value that takes effect; say I put "asia-east2-a", then the cluster becomes zonal, with the master node in asia-east2-a.
Below is my Jinja template:
resources:
- name: practice-gke-clusters
  type: container.v1.cluster
  properties:
    zone: asia-east2-a
    cluster:
      name: practice-gke-clusters
      location: asia-east2
      network: $(ref.practice-gke-network.selfLink)
      subnetwork: $(ref.practice-gke-network-subnet-1.selfLink)
      nodePools:
      - name: default-pool
        config:
          machineType: n1-standard-1
          diskSizeGb: 10
          diskType: pd-ssd
          preemptible: True
          oauthScopes:
          - https://www.googleapis.com/auth/compute
          - https://www.googleapis.com/auth/devstorage.read_only
          - https://www.googleapis.com/auth/logging.write
          - https://www.googleapis.com/auth/monitoring
        initialNodeCount: 1
        autoscaling:
          enabled: True
          minNodeCount: 1
          maxNodeCount: 100
        management:
          autoUpgrade: False
          autoRepair: True
      loggingService: logging.googleapis.com
      monitoringService: monitoring.googleapis.com
Currently the v1 API does not support the creation of regional clusters. However, you can use the v1beta1 API, which supports this feature, with the following resource type:
type: gcp-types/container-v1beta1:projects.locations.clusters
Rather than using the 'zone' or 'region' key in the YAML, you would instead use a parent property that includes locations.
So your YAML would look something like this (replace PROJECT_ID and REGION with your own).
resources:
- type: gcp-types/container-v1beta1:projects.locations.clusters # previously container.v1.cluster
  name: source-cluster
  properties:
    parent: projects/PROJECT_ID/locations/REGION
    cluster:
      name: source
      initialNodeCount: 3
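The deployment itself can then be created the usual way (the deployment and file names here are placeholders):
gcloud deployment-manager deployments create regional-cluster-demo --config cluster.yaml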