We want to create Heat templates with servers and volumes attached to those servers, but we also want to be able to quickly destroy all the servers without destroying the volumes.
So we decided it would be best to make two Heat templates instead of one:
- one for volumes
- one for servers and volume attachments
We would like something like this:
stack-for-volume.yml
description: project
heat_template_version: '2015-10-15'
resources:
  volume-choca-01:
    type: OS::Cinder::Volume
    properties:
      name: volume-choca-01
      size: 15
stack-for-servers-and-attachments.yml
description: project
heat_template_version: '2015-10-15'
resources:
  vm-choca-01:
    type: OS::Nova::Server
    properties:
      flavor: CO.2
      image: Centos 7
      key_name: choca
      name: vm-choca-01
      networks:
        - {network: net-ext}
      security_groups: [default]
  volume-attachment-01:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: vm-choca-01 }
      volume_id: { get_resource: volume-choca-01 }
Of course, since the resources are not all in the same file,
volume_id: { get_resource: volume-choca-01 } can't work.
We tried to get the volume_id with the solution posted here: Openstack Heat - separate templates
by adding this at the end of stack-for-volume.yml:
outputs:
  volume-choca-01-id:
    description: something
    value: { get_attr: [volume-choca-01] }
But the output didn't give us anything looking like the volume id.
We're stuck right now.
Any idea?
OpenStack Heat:
When a stack is created with the resources defined in its template (or nested templates), all of those resources are deleted when the user deletes the stack.
So, as per your requirement, you can try the following:
Step 1: Create the volume using its own Heat template.
Step 2: Get the volume UUID from the dashboard (Horizon) and assign it to volume_id in the OS::Cinder::VolumeAttachment resource, like this:
  volume-attachment-01:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: vm-choca-01 }
      volume_id: { get_param: volume-choca-01_UUID }
And define the volume-choca-01_UUID param in the parameters section:
parameters:
  volume-choca-01_UUID:
    type: string
    default: <UUID of volume from dashboard>
With the above process the server is created and the volume is attached to it. When you delete the stack, the volume is detached instead of being deleted along with the server.
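If you want to avoid copying the UUID from Horizon by hand, the outputs approach from the question should also work once the output returns the resource ID: in an output, get_resource resolves to the volume's physical ID (its UUID). A minimal sketch, reusing the names from the question (the CLI commands are only illustrative):
# at the end of stack-for-volume.yml
outputs:
  volume-choca-01-id:
    description: UUID of the Cinder volume
    value: { get_resource: volume-choca-01 }
The value can then be read with openstack stack output show <volumes-stack-name> volume-choca-01-id and passed to the second stack with --parameter volume-choca-01_UUID=<uuid>.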
The task is simple: whenever an EC2 instance is launched with a certain tag key:value, I want it to install a specific piece of software. Whenever an EC2 instance is launched with a different tag key:value, I want it to install a different piece of software.
I understand that I can create two different associations in State Manager that use Run Command (AWS-RunRemoteScript) to install software based on the tags, but the goal is to have one composite document that can do this.
Any help / guidance would be appreciated!
You can achieve that using SSM Automation documents - https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-branchdocs.html
However, you will probably need to do something like this:
In State Manager, use AWS-RunDocument;
this document should execute an SSM Automation document (your composite document).
Your composite document should look something like the following.
I didn't validate this template, and I assume it won't work without a few days of debugging:
schemaVersion: '0.3'
parameters:
  InstanceId:
    type: String
mainSteps:
  - name: DescribeEc2
    action: 'aws:executeScript'
    inputs:
      Runtime: python3.7
      Handler: script_handler
      Script: |
        import boto3

        def script_handler(events, context):
            ec2_instance = boto3.client('ec2').describe_instances(
                InstanceIds=[events["instance_id"]],
            )["Reservations"][0]["Instances"][0]
            # Treat this as an example: here you should parse the instance tags
            # and decide which software should be installed on the provided instance.
            return {
                "to_be_installed": "result"
            }
      InputPayload:
        instance_id: '{{ InstanceId }}'
    outputs:
      - Name: result
        Selector: "$.Payload.to_be_installed"
        Type: String
  - name: WhatToInstall
    action: aws:branch
    inputs:
      Choices:
        - NextStep: InstallSoft1
          Variable: "{{DescribeEc2.result}}"
          StringEquals: soft_1
        - NextStep: InstallSoft2
          Variable: "{{DescribeEc2.result}}"
          StringEquals: soft_2
  - name: InstallSoft1
    action: aws:runCommand
    inputs:
      DocumentName: AWS-RunShellScript
      InstanceIds:
        - '{{ InstanceId }}'
      Parameters:
        commands:
          ...
  - name: InstallSoft2
    action: aws:runCommand
    inputs:
      DocumentName: AWS-RunShellScript
      InstanceIds:
        - '{{ InstanceId }}'
      Parameters:
        commands:
          ...
To be honest, you will run into a lot of trouble with such a solution (IAM- and SSM-specific issues), so I would recommend using EventBridge -> a Lambda function (which decides which document/automation should be run) -> SSM run document (executed directly from the Lambda function).
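For the EventBridge part of that chain, here is a minimal CloudFormation sketch of a rule that fires when an instance enters the running state; InstallDispatcher is a hypothetical AWS::Lambda::Function (defined elsewhere in the same template) that would inspect the tags and call SSM:
Resources:
  InstanceLaunchedRule:
    Type: AWS::Events::Rule
    Properties:
      # Fire whenever an EC2 instance enters the "running" state
      EventPattern:
        source:
          - aws.ec2
        detail-type:
          - EC2 Instance State-change Notification
        detail:
          state:
            - running
      Targets:
        - Arn: !GetAtt InstallDispatcher.Arn   # hypothetical tag-inspecting Lambda
          Id: install-dispatcher
  InstallDispatcherPermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Ref InstallDispatcher
      Action: lambda:InvokeFunction
      Principal: events.amazonaws.com
      SourceArn: !GetAtt InstanceLaunchedRule.Arn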
I am experimenting with Deployment Manager, and each time I try to deploy an SQL instance with a DB on it and two users, some of the tasks fail. Most of the time it is the users:
conf.yaml:
resources:
- name: mycloudsql
  type: gcp-types/sqladmin-v1beta4:instances
  properties:
    name: mycloudsql-01
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    databaseVersion: MYSQL_5_7
    region: europe-west6
    settings:
      tier: db-f1-micro
      locationPreference:
        zone: europe-west6-a
      activationPolicy: ALWAYS
      dataDiskSizeGb: 10
- name: mydjangodb
  type: gcp-types/sqladmin-v1beta4:databases
  properties:
    name: django-db-01
    instance: $(ref.mycloudsql.name)
    charset: utf8
- name: sqlroot
  type: gcp-types/sqladmin-v1beta4:users
  properties:
    name: root
    host: "%"
    instance: $(ref.mycloudsql.name)
    password: root
- name: sqluser
  type: gcp-types/sqladmin-v1beta4:users
  properties:
    name: user
    instance: $(ref.mycloudsql.name)
    password: user
Error:
PS C:\Users\user\Desktop\Python\GCP> gcloud --project=sound-catalyst-263911 deployment-manager deployments create dm-sql-test-11 --config conf.yaml
The fingerprint of the deployment is TZ_wYom9Q64Hno6X0bpv9g==
Waiting for create [operation-1589869946223-5a5fa71623bc9-1912fcb9-bc59aafc]...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1589869946223-5a5fa71623bc9-1912fcb9-bc59aafc]: errors:
- code: RESOURCE_ERROR
location: /deployments/dm-sql-test-11/resources/sqluser
message: '{"ResourceType":"gcp-types/sqladmin-v1beta4:users","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Precondition
check failed.","status":"FAILED_PRECONDITION","statusMessage":"Bad Request","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/sound-catalyst-263911/instances/mycloudsql-01/users","httpMethod":"POST"}}'
- code: RESOURCE_ERROR
location: /deployments/dm-sql-test-11/resources/sqlroot
message: '{"ResourceType":"gcp-types/sqladmin-v1beta4:users","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Precondition
check failed.","status":"FAILED_PRECONDITION","statusMessage":"Bad Request","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/sound-catalyst-263911/instances/mycloudsql-01/users","httpMethod":"POST"}}'
It doesn't say what that failing precondition is, or am I missing something?
It seems the installation of the database is not complete by the time Deployment Manager starts to create the users, even though the reference notation is used in the YAML code to take care of dependencies. That is why you receive the "FAILED_PRECONDITION" error.
As a workaround you can split the deployment into two parts:
Create a CloudSQL instance and a database;
Create users.
This does not look elegant, but it works.
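For the second part, a minimal sketch of a separate users-only config, reusing the names from the question; since the instance already exists at this point, it is referenced by its name rather than by $(ref...):
# users.yaml - deploy this after the instance/database deployment has finished
resources:
- name: sqlroot
  type: gcp-types/sqladmin-v1beta4:users
  properties:
    name: root
    host: "%"
    instance: mycloudsql-01
    password: root
- name: sqluser
  type: gcp-types/sqladmin-v1beta4:users
  properties:
    name: user
    instance: mycloudsql-01
    password: user
It can then be deployed with something like gcloud deployment-manager deployments create dm-sql-users --config users.yaml.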
Alternatively, you can consider using Terraform. Fortunately, the Cloud Shell instance comes with Terraform pre-installed. There is sample Terraform code for Cloud SQL out there, for example this one:
CloudSQL deployment with Terraform
I am creating a YAML config to deploy a GKE cluster with multiple node pools. I would like to be able to create a new cluster and put each node pool in a different subnetwork. Can this be done?
I have tried putting the subnetwork in different parts of the properties under the second node pool, but it errors out. Below is the error:
message: '{"ResourceType":"gcp-types/container-v1:projects.locations.clusters.nodePools","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid
JSON payload received. Unknown name \"subnetwork\": Cannot find field.","status":"INVALID_ARGUMENT","details":[{"#type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"description":"Invalid
JSON payload received. Unknown name \"subnetwork\": Cannot find field."}]}],"statusMessage":"Bad
Here is the current code for both node pools. The first node pool is created, but the second one errors out.
resources:
- name: myclus
  type: gcp-types/container-v1:projects.locations.clusters
  properties:
    parent: projects/[PROJECT_ID]/locations/[ZONE/REGION]
    cluster:
      name: my-clus
      zone: us-east4
      subnetwork: dev-web ### leave this field blank if using the default network
      initialClusterVersion: "1.13"
      nodePools:
      - name: my-clus-pool1
        initialNodeCount: 1
        config:
          machineType: n1-standard-1
          imageType: cos
          oauthScopes:
          - https://www.googleapis.com/auth/cloud-platform
          preemptible: true
- name: my-clus
  type: gcp-types/container-v1:projects.locations.clusters.nodePools
  properties:
    parent: projects/[PROJECT_ID]/locations/[ZONE/REGION]/clusters/$(ref.myclus.name)
    subnetwork: dev-web ### leave this field blank if using the default
    nodePool:
      name: my-clus-pool2
      initialNodeCount: 1
      version: "1.13"
      config:
        machineType: n1-standard-1
        imageType: cos
        oauthScopes:
        - https://www.googleapis.com/auth/cloud-platform
        preemptible: true
I would like the expected outcome to be two node pools in two different subnetworks.
I found out that this is actually not a limitation of Deployment Manager but a limitation of GKE.
We can't assign a different subnet to different node pools; the network and subnets are defined at the cluster level. There is no "subnetwork" field in the node pool API.
Here is a link you can refer to for more information.
I am trying to create a CF template that will ask users whether the RDS instance and security group exist and, if they select Yes, create the stack. If not, warn the user to create the RDS instance and security group before creating the EC2 stack.
Parameters:
  IsRDSCreated:
    Description: Ensure that the RDS Instance is already created
    Default: No
    Type: String
    AllowedValues:
      - Yes
      - No
  IsRDSSGCreated:
    Description: Ensure that the RDS Security Group exists
    Default: No
    Type: String
    AllowedValues:
      - Yes
      - No
Conditions:
  ShouldCreateEC2Resource: !And
    - !Equals [!Ref IsRDSCreated, Yes]
    - !Equals [!Ref IsRDSSGCreated, Yes]
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Condition: ShouldCreateEC2Resource
    .....
    .....
    .....
    .....
At the moment, when I try to create the stack with both parameters set to No, I get: Template validation error: Template format error: Unresolved resource dependencies [EC2Instance] in the Resources block of the template.
How can I notify users with some kind of error/message when they select No, telling them to make sure both the RDS instance and the RDS SG are present before creating this stack?
Please suggest if there are any other ways or methods of accomplishing this.
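One way to surface such a message (a sketch, not validated) is a Rules section, which can fail stack creation with a description before any resources are created; the rule name and wording below are illustrative:
Rules:
  RDSPrerequisitesExist:
    Assertions:
      - Assert: !And
          - !Equals [!Ref IsRDSCreated, "Yes"]
          - !Equals [!Ref IsRDSSGCreated, "Yes"]
        AssertDescription: Create the RDS instance and RDS security group before launching this EC2 stack.
With this in place, a user who selects No for either parameter gets the AssertDescription back as the failure message.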
I'm trying to create an unmanaged instance group with several VMs in it via a Deployment Manager configuration (YAML file).
I can easily find docs about addInstances via the Google API, but couldn't find docs about how to do this in a YAML file:
instances
instanceGroups
What properties should be included in instances/instanceGroup resource to make it work?
The YAML below will create a compute engine instance, create an unmanaged instance group, and add the instance to the group.
resources:
- name: instance-1
  type: compute.v1.instance
  properties:
    zone: australia-southeast1-a
    machineType: zones/australia-southeast1-a/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      type: PERSISTENT
      diskType: zones/australia-southeast1-a/diskTypes/pd-ssd
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/debian-9-stretch-v20180716
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
- name: ig-1
  type: compute.v1.instanceGroup
  properties:
    zone: australia-southeast1-a
    network: global/networks/default
- name: ig-1-members
  action: gcp-types/compute-v1:compute.instanceGroups.addInstances
  properties:
    project: YOUR_PROJECT_ID
    zone: australia-southeast1-a
    instanceGroup: ig-1
    instances: [ instance: $(ref.instance-1.selfLink) ]
There is no possibility right now to do it with gcloud deployment manager.
This was tested, and it seemed that while Google Deployment Manager was able to complete without issue with the following snippet:
{
  "instances": [
    {
      "instance": string
    }
  ]
}
it did not add the instances specified, but created the IGM.
However, Terraform seems to be able to do it: https://www.terraform.io/docs/providers/google/r/compute_instance_group.html
I think mcourtney's answer is correct.
I just had this scenario and I used a Python template with a YAML config to add instances to an unmanaged instance group.
Here is the snippet of the resource definition in my Python template:
{
    'name': name + '-ig-members',
    'action': 'gcp-types/compute-v1:compute.instanceGroups.addInstances',
    'properties': {
        'project': '<YOUR PROJECT ID>',
        'zone': context.properties['zone'],  # defined in the config yaml
        'instanceGroup': '<YOUR instance group name (not URL)>',
        'instances': [
            {
                'instance': 'projects/<PROJECT ID>/zones/<YOUR ZONE>/instances/<INSTANCE NAME>'
            }
        ]
    }
}
The reference API is documented here:
https://cloud.google.com/compute/docs/reference/rest/beta/instanceGroups/addInstances
This is just an example. You can abstract all the hard-coded things into either the YAML configuration or variables at the top of the Python template.
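For completeness, a minimal sketch of the YAML config that would drive such a Python template; the filename instance_group.py and the zone value are illustrative, not taken from the answer above:
imports:
- path: instance_group.py  # the Python template containing the snippet above
resources:
- name: unmanaged-ig-members
  type: instance_group.py
  properties:
    zone: australia-southeast1-a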