How to deploy an OpenStack Heat template that includes a script - openstack-heat

Heat, the orchestration engine for OpenStack, can deploy compute resources and configure software using templates written in the HOT (Heat Orchestration Template) format. There are a number of examples on GitHub here:
https://github.com/openstack/heat-templates/tree/master/hot
Heat templates are written in YAML, and we can deploy a template with this syntax:
heat stack-create my_first_stack -f heat_1a.yaml
You can also upload the template file directly to the OpenStack dashboard.
However (and here is my question), many of the templates also include shell scripts or PowerShell scripts which are run after deployment - how do we upload these scripts to OpenStack for inclusion in the stack?
For example, here is the directory listing for a Microsoft SQL Server template:
C:\heat-templates\hot\Windows\MSSQLServer>ls
MSSQL.ps1 MSSQL.psm1 MSSQL.yaml Tests heat-powershell-utils.psm1
The Heat client only takes the YAML file as an argument, so what do we do with the scripts?
thanks,
Rob.

Refer to the Heat template guide:
http://docs.openstack.org/developer/heat/template_guide/software_deployment.html
Essentially, resources defined in YAML template files can use the "get_file" function, which reads the contents of the named file. So when you invoke the Heat client with your MSSQL.yaml, the client parses the template and, wherever it sees "get_file" with a file name as an argument, reads that file and sends its contents along with the template.
Example using "get_file" from the above link:
...
  the_server:
    type: OS::Nova::Server
    properties:
      # flavor, image etc
      user_data:
        str_replace:
          template: {get_file: the_server_boot.sh}
          params:
            $FOO: {get_param: foo}
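Applied to the question's example: assuming MSSQL.yaml references MSSQL.ps1 and the .psm1 modules with get_file (which is the pattern used throughout the heat-templates repository), there is nothing extra to upload. Just invoke the client from the directory that contains both the template and the scripts so the relative paths resolve, for example:

# Sketch: run from the directory holding MSSQL.yaml and its .ps1/.psm1 files;
# the client reads each get_file target and attaches its contents to the
# stack-create request (stack name is arbitrary)
cd heat-templates/hot/Windows/MSSQLServer
heat stack-create mssql_stack -f MSSQL.yaml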

Sometimes we need to create a script based on the parameters provided in the Heat template and execute it once the stack is created. For this kind of requirement, we can use the pattern below. It both creates and executes the scripts while the VM is booting, during the cloud-init phase.
services-cloud-init:
  type: OS::Heat::CloudConfig
  properties:
    cloud_config:
      timezone: {get_param: time_zone}
      write_files:
        - path: /tmp/change_password.sh
          owner: root:root
          permissions: '0777'
          content: |
            #!/bin/bash
            echo -e "pwd\npwd" | passwd cloud-user
        - path: /tmp/change_timezone.sh
          owner: root:root
          permissions: '0777'
          content: |
            #!/bin/bash
            ln -sf /usr/share/zoneinfo/timezone /etc/localtime
      runcmd:
        - echo "Executing change_timezone"
        - /tmp/change_timezone.sh
        - echo "Executing change_password"
        - /tmp/change_password.sh
        - reboot
      bootcmd:
        - echo "Boot Completed"

Related

VM Manager - OS Policy Assignment for a Windows VM in GCP

I am trying to create a couple of OS policy assignments to configure - run some scripts with PowerShell - and install some security agents on a Windows VM (Windows Server 2022), using the VM Manager. I am following the official Google documentation to set up the OS policies. The VM Manager is already enabled; nevertheless, I have difficulties creating the appropriate .yaml file which is required for the policy assignment, since I haven't found any detailed examples.
Related topics I have found:
Google documentation offers a very simple example of installing an .msi file - Example OS policies.
An example of a fixed policy assignment in the Terraform registry - google_os_config_os_policy_assignment - from which I managed to better comprehend the required structure for the .yaml file, even though it is in .json format.
A few examples provided in the GCP GitHub repository (OSPolicyAssignments).
OS Policy resources in JSON representation - REST Resource, from where you can navigate to sample cases based on the selected resource.
But it is still not very clear how to create the desired .yaml file (i.e. copy some files, run a PowerShell script to perform an installation or an authentication). According to the Google documentation, pkg, repository, exec, and file are the supported resource types.
Are there any more detailed examples I could use to understand what is needed? Have you already tried something similar?
Update: Adding an additional source.
You need to follow these steps:
Ensure that the OS Config agent is installed on your VM by running the command below in PowerShell:
Get-Service google_osconfig_agent
You should see output like this:
Status   Name                Display Name
------   ----                ------------
Running  google_osconfig...  Google OSConfig Agent
If the agent is not installed, refer to this tutorial.
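On GCP Windows images the agent can usually be installed with GooGet; this is a sketch of the documented manual-install step, but follow the linked tutorial for your specific image:

# Run from an elevated command prompt on the Windows VM
googet -noconfirm install google-osconfig-agent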
Set the metadata values to enable the OS Config agent with this Cloud Shell command:
gcloud compute instances add-metadata $YOUR_VM_NAME \
--metadata=enable-osconfig=TRUE
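If you want to enable the agent for every VM in the project rather than a single instance, project-wide metadata works too (a sketch):

# Apply the enable-osconfig key at project level instead of per instance
gcloud compute project-info add-metadata \
    --metadata=enable-osconfig=TRUE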
Generate an OS policy and an OS policy assignment yaml file. As an example, I am generating an OS policy that installs an MSI file retrieved from a GCS bucket, and an OS policy assignment to apply it to all Windows VMs:
# An OS policy assignment to install a Windows MSI downloaded from a Google Cloud Storage bucket
# on all VMs running Windows Server OS.
osPolicies:
- id: install-msi-policy
  mode: ENFORCEMENT
  resourceGroups:
  - resources:
    - id: install-msi
      pkg:
        desiredState: INSTALLED
        msi:
          source:
            gcs:
              bucket: <your_bucket_name>
              object: chrome.msi
              generation: 1656698823636455
instanceFilter:
  inventories:
  - osShortName: windows
rollout:
  disruptionBudget:
    fixed: 10
  minWaitDuration: 300s
Note: Every file has its own generation number, you can get it with the command gsutil stat gs://<your_bucket_name>/<your_file_name>.
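For example, to read the generation number of the MSI referenced above (a sketch; the bucket and object names are the placeholders from the YAML):

# gsutil stat prints a "Generation:" line among the object's metadata
gsutil stat gs://<your_bucket_name>/chrome.msi | grep Generation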
Apply the policies created in the previous step using this Cloud Shell command:
gcloud compute os-config os-policy-assignments create $POLICY_NAME --location=$YOUR_ZONE --file=/<your-file-path>/<your_file_name.yaml> --async
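To confirm the assignment was created and watch its rollout, you can list and describe it afterwards (a sketch reusing the same variables):

# List assignments in the zone, then inspect the rollout state of the new one
gcloud compute os-config os-policy-assignments list --location=$YOUR_ZONE
gcloud compute os-config os-policy-assignments describe $POLICY_NAME --location=$YOUR_ZONE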
Refer to the Examples of OS policy assignments for more scenarios, and check out this example of a PowerShell script.
Below you can find the .yaml file that worked in my case. It copies a file and executes a PowerShell command in order to configure and deploy a sample agent (TrendMicro) - again, this is specifically for a Windows VM.
.yaml file:
id: trendmicro-windows-policy
mode: ENFORCEMENT
resourceGroups:
- resources:
  - id: copy-exe-file
    file:
      path: C:/Program Files/TrendMicro_Windows.ps1
      state: CONTENTS_MATCH
      permissions: '755'
      file:
        gcs:
          bucket: [your_bucket_name]
          generation: [your_generation_number]
          object: Windows/TrendMicro/TrendMicro_Windows.ps1
  - id: validate-running
    exec:
      validate:
        interpreter: POWERSHELL
        script: |
          $service = Get-Service -Name 'ds_agent'
          if ($service.Status -eq 'Running') {exit 100} else {exit 101}
      enforce:
        interpreter: POWERSHELL
        script: |
          Start-Process PowerShell -ArgumentList '-ExecutionPolicy Unrestricted','-File "C:\Program Files\TrendMicro_Windows.ps1"' -Verb RunAs
To elaborate a bit more, this .yaml file:
copy-exe-file: It copies the necessary installation script from GCS to a specified location on the VM. The generation number can easily be found under "VERSION HISTORY" when you select the object in GCS.
validate-running: This resource contains two different steps. In the validate step it checks whether the specific agent is up and running on the VM. If not, it proceeds with the enforce step, where it executes the "TrendMicro_Windows.ps1" file with PowerShell. This .ps1 file downloads, configures and installs the agent. Note 1: the command is executed as Administrator, and the full path of the file is specified. Note 2: instead of Start-Process PowerShell, Start-Process pwsh can also be used; it was vital in one of my cases.
Essentially, a PowerShell command can be run directly in the enforce step; nonetheless, I found it much easier to put it in a .ps1 file first and then just run that file. There are some restrictions on the .yaml file anyway.
PS: Passing osconfig-log-level: debug as a metadata key-value pair - either directly on a VM or applied to all of them (Compute Engine > Settings > Metadata > EDIT > ADD ITEM) - provides some additional information and may help you deal with errors.
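For a single VM this can also be set from the command line; a sketch, with the instance name and zone as placeholders:

# Turn on debug logging for the OS Config agent on one instance
gcloud compute instances add-metadata $YOUR_VM_NAME \
    --zone=$YOUR_ZONE \
    --metadata=osconfig-log-level=debug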

Set cloud build default substitution variables through a shell script

I have a shell script that I use to create my resources on Google Cloud Platform.
It looks something like this:
REGION=us-east1
# Create buckets
FILES_SOURCE=${DEVSHELL_PROJECT_ID}-source-$(date +%s)
gsutil mb -c regional -l ${REGION} gs://${FILES_SOURCE}
FUNCTIONS_BUCKET=${DEVSHELL_PROJECT_ID}-functions-$(date +%s)
gsutil mb -c regional -l ${REGION} gs://${FUNCTIONS_BUCKET}
I also have Cloud Build enabled for my project, with a trigger defined in it. Some of the values for my substitution variables should be equal to FILES_SOURCE and FUNCTIONS_BUCKET from the script above. If Cloud Build is enabled prior to the execution of my shell script, is it possible to somehow assign those values (and their keys) from the shell script?
I can see that we have the gcloud builds interface, but it doesn't seem to have such options.
You must be referring to user-defined substitution variables, because default substitutions are automatically defined for you by Cloud Build. With regard to the gcloud builds interface, you can use the --substitutions flag to specify your user-defined variables, but looking at your example, it seems those values aren't fixed.
Unfortunately, you won't be able to specify user-defined substitution variables whose values come from a shell script. However, there's a workaround: your shell script variables can persist across all the build steps if you save the values to a file and then read them back as required.
You haven't specified how you intend to use the variables, but here's an example:
build.sh
REGION=us-east1
DEVSHELL_PROJECT_ID=sample-proj
FUNCTIONS_BUCKET=${DEVSHELL_PROJECT_ID}-functions-$(date +%s)
FILES_SOURCE=${DEVSHELL_PROJECT_ID}-source-$(date +%s)
# Store variables in a file
echo $FUNCTIONS_BUCKET > /workspace/functions-bucket &&
echo $FILES_SOURCE > /workspace/files-source
echo "Saved values."
cloudbuild.yaml
steps:
- id: "Read script and store values"
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['./build.sh']

- id: "Read Values"
  name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args:
  - -c
  - |
    # Read from "/workspace"
    echo "First we saved " $(cat /workspace/functions-bucket) &&
    echo "Then we saved " $(cat /workspace/files-source)
Note: We used /workspace because Cloud Build uses it as a working directory by default.
Reference: https://medium.com/google-cloud/how-to-pass-data-between-cloud-build-steps-de5c9ebc4cdd
You can't override the substitution variables during the Cloud Build process, so you have two solutions:
Either you work with "Linux" variables, in which case Donnald's answer is the right solution (you have to read the value from the file in each step and then use it),
Or you can call a Cloud Build from within a Cloud Build, like this:
Create the Cloud Build file for your core build, with substitution variables:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - -c
  - |
    echo $_FUNCTIONS_BUCKET
    echo $_FILES_SOURCE
    ...
substitutions:
  _FUNCTIONS_BUCKET:
  _FILES_SOURCE:
Then create the file for the initialization, cloudbuild-init.yaml:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - -c
  - |
    REGION=us-east1
    DEVSHELL_PROJECT_ID=sample-proj
    FUNCTIONS_BUCKET=$${DEVSHELL_PROJECT_ID}-functions-$$(date +%s)
    FILES_SOURCE=$${DEVSHELL_PROJECT_ID}-source-$$(date +%s)
    gcloud builds submit --async --substitutions=_FUNCTIONS_BUCKET=$${FUNCTIONS_BUCKET},_FILES_SOURCE=$${FILES_SOURCE}
Note the --async flag: it avoids waiting for the underlying Cloud Build to finish before the init build completes, otherwise you pay for the build time twice. On the other hand, the trigger won't tell you whether your job worked or not.
It's a matter of tradeoffs here.
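If you do need to know how the child build ended, you can poll it afterwards from outside the trigger; a sketch, assuming you captured the build ID that gcloud builds submit --async reports:

# Check the status (SUCCESS, FAILURE, ...) of the asynchronously submitted build
gcloud builds describe "$BUILD_ID" --format="value(status)"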

How to extract actual timestamp in Cloud Build CI/CD pipeline yaml script or Cloud Build Triggers page

I have a cloud_build.yaml script for my CI/CD pipeline on GCP using Cloud Build. On the command line I can pass a substitution variable which includes the actual timestamp: "notebook-instance-$(date +%Y-%m-%d-%H-%M)-v05". This works fine.
When I add a GitHub trigger on the Cloud Build web page, however, I can't find a way to get the timestamp the same way I was doing it on the CLI with $(date +%Y-%m-%d-%H-%M)-v05.
Any idea how to do that on the Cloud Build Triggers page?
I also tried to do it inside the cloud_build.yaml script, but without success so far.
- name: 'gcr.io/cloud-builders/gcloud'
  id: Deploy the AI Platform Notebook instance
  args:
  - 'deployment-manager'
  - 'deployments'
  - 'create'
  - '$(date -u +%Y-%m-%d-%H-%M)-${_NAME_INSTANCE}'
Any idea how to extract the actual timestamp and build a variable from it in the Cloud Build .yaml script?
A third option would be to extract the timestamp in my .jinja deployment script, but I hit the same issue there: I can't find a way to extract the actual timestamp to build my variable name.
One solution is to do the following:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: sh
  args:
  - '-c'
  - |
    gcloud \
      deployment-manager \
      deployments \
      create \
      xxxx
The issue is that you cannot use the value in another step later. Another option is to write the variable to a file in the workspace; this can be accessed later in the build (see this Stack Overflow answer).
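A minimal sketch of that second option, with each snippet running in its own build step and the deployment name built from ${_NAME_INSTANCE} as in the question (the template file name is hypothetical):

# Step 1: compute the timestamped name once and persist it in the shared workspace
echo "$(date -u +%Y-%m-%d-%H-%M)-${_NAME_INSTANCE}" > /workspace/deployment_name

# Step 2 (a later build step): read the name back and use it
gcloud deployment-manager deployments create "$(cat /workspace/deployment_name)" \
    --template my_notebook.jinja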

Is there any possibility to use CodeDeploy environment variables in the files section of the AppSpec file

I have a website which is hosted on AWS EC2 servers.
We have two servers: one for the production environment and another for the development and staging environments.
The development and staging environments live in different folders; for example, development is stored in /var/www/development, while staging is stored in /var/www/staging.
I'd like to use AWS CodeDeploy to upload files directly from Bitbucket. I have an AppSpec file which copies the source code to the /var/www/html folder and installs all dependencies and configuration. But I want my AppSpec file to copy the source code to /var/www/development or to /var/www/staging, depending on which deployment group was selected.
Is there any way to do this, or maybe a better approach for my situation?
The appspec.yml is a bit inflexible, so use the following to deploy code into different folders on the same instance.
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/my-temp-dir
permissions:
  - object: /var/www/my-temp-dir
    owner: ec2-user
    group: ec2-user
hooks:
  BeforeInstall:
    - location: ci/integrations-deploy-pre.sh
      runas: root
  AfterInstall:
    - location: ci/integrations-deploy-post.sh
      runas: root
Inside my integrations-deploy-post.sh file, I then use the CodeDeploy environment variables to move the files into the place I need them to be:
#!/bin/bash
if [ "$DEPLOYMENT_GROUP_NAME" == "Staging" ]
then
    cp -R /var/www/my-temp-dir /var/www/my-staging-dir
    chown -R ec2-user:ec2-user /var/www/my-staging-dir
    # Insert other commands that need to run...
fi

if [ "$DEPLOYMENT_GROUP_NAME" == "UAT" ]
then
    cp -R /var/www/my-temp-dir /var/www/my-uat-dir
    chown -R ec2-user:ec2-user /var/www/my-uat-dir
    # Insert other commands that need to run...
fi
NOTE: in my integrations-deploy-post.sh you'll also need the commands you want to run on production; they are removed here for simplicity.
The recommended way to change AppSpec or custom script behavior is to use the environment variables provided by the CodeDeploy agent. You have access to the deployment group name and the application name.
if [ "$DEPLOYMENT_GROUP_NAME" == "Staging" ]; then
# Copy to /var/www/staging
elif [ "$DEPLOYMENT_GROUP_NAME" == "Development" ]; then
# Copy to /var/www/development
elif [ "$DEPLOYMENT_GROUP_NAME" == "Production" ]; then
# Copy to /var/www/html
else
# Fail the deployment
fi
I had the same problem, but I used source control to solve it.
My workflow uses GitLab CI > AWS CodePipeline (S3 source and CodeDeploy).
So in my development branch, my AppSpec file looks like this:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/my-project-dev
hooks:
  AfterInstall:
    - location: scripts/after_install.sh
      timeout: 400
      runas: root
And in my staging branch:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/my-project-staging
hooks:
  AfterInstall:
    - location: scripts/after_install.sh
      timeout: 400
      runas: root
My GitLab CI just uses a shell executor on my EC2 instance; it basically compresses my project folder and uploads it to S3.
.gitlab-ci.yml
stages:
  - deploy

setup dependencies:
  stage: .pre
  script:
    - echo "Setup Dependencies"
    - pip install awscli

deploy to s3:
  stage: deploy
  script:
    - tar -cvzf /tmp/artifact_$CI_COMMIT_REF_NAME.tar ./*
    - echo "Copy artifact to S3"
    - aws s3 cp /tmp/artifact_$CI_COMMIT_REF_NAME.tar s3://project-artifacts/

clean up:
  stage: .post
  script:
    - echo "Removing generated artifact"
    - rm /tmp/artifact_$CI_COMMIT_REF_NAME.tar
Note that $CI_COMMIT_REF_NAME is used to differentiate the artifact file being generated: in the development branch it would be artifact_development.tar, in the staging branch artifact_staging.tar.
Then I have two pipelines listening for the two respective artifacts, which deploy to two different CodeDeploy applications.
Not sure if this is the best way; suggestions for a better approach are certainly welcome.

AWS Elastic Beanstalk - EB Extensions Not Working

I've done this before a long time ago, but now it's not working... :)
I am trying to use .ebextensions in an Elastic Beanstalk application. I created a vanilla Elastic Beanstalk environment with no configuration beyond the defaults. I gave it an application version that had a directory structure like the following:
.ebextensions/
    40testextension.config
app.js
other files
The important part is that I have a folder called .ebextensions at the root of my deployable artifact, which is where I believe it should be located.
The 40testextension.config file inside that folder has the following contents:
files:
  "/home/ec2-user/myfile" :
    mode: "000755"
    owner: root
    group: root
    content: |
      # This is my file
      # with content
I uploaded that version when creating the environment, and the environment was created successfully. But when I look for that file, it is not present. Furthermore, when I do a recursive grep for the ebextension file name in the logs at /var/log, I only get one result:
./eb-activity.log: inflating: /tmp/deployment/application/.ebextensions/40testextension.config
Having looked at the logs, it seems that the file is present when the artifact gets pulled down to the host, but the ebextension never gives any indication of running.
What am I missing here? I've done this in the distant past and things have worked very nicely, but this time I can't seem to get the thing to be executed by the Beanstalk deploy lifecycle.
Try running it with -x (print commands and their arguments as they are executed) to debug, and try changing the mode to 000777:
files:
  "/home/ec2-user/myfile" :
    mode: "000777"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      set -xe
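If the file still does not appear, checking the config-processing logs on the instance usually shows whether the .ebextensions directory was picked up at all; a sketch, using the usual Amazon Linux log locations:

# eb-activity.log records the deployment steps; the cfn-init logs show whether
# the files/commands keys from the .config were actually processed
sudo grep -ri "40testextension" /var/log/eb-activity.log /var/log/cfn-init.log
sudo tail -n 50 /var/log/cfn-init-cmd.log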