I am new to Heat templates and I am trying to run bash scripts from the Heat template. The instance comes up in the active state, but the shell script is never executed. Any suggestions would be highly appreciated.
Hope this is helpful for you.
parameters:
  DBRootPassword:
    type: string
    label: Database Password
    description: Root password for MySQL
    hidden: true

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      # general properties ...
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            echo "Hello world"
            echo "Setting MySQL root password"
            mysqladmin -u root password $db_rootpassword
            # do more things ...
          params:
            $db_rootpassword: { get_param: DBRootPassword }
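One more thing worth checking if the script still never runs (an assumption on my part, since it depends on your image): OS::Nova::Server defaults user_data_format to HEAT_CFNTOOLS, and on a plain cloud-init image a raw shell script generally only executes when the format is set to RAW. A sketch:

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      # HEAT_CFNTOOLS is the default; RAW hands the script to cloud-init unchanged
      user_data_format: RAW
      user_data:
        str_replace:
          # ... template and params as above ...

On the booted instance, /var/log/cloud-init-output.log shows whether cloud-init received and ran the script.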
I have written a CloudFormation template in YAML, and everything runs smoothly, but now, instead of manually going into PowerShell to add a local group member and install some Windows features, I want to add the PowerShell commands to the user data portion of the AWS::EC2::Instance properties.
Here's the template in short:
Resources:
  Instance:
    Properties:
      UserData:
        Fn::Base64: |
          <powershell>
          add-localgroupmember (my group member)
          install-windowsfeature (my windows feature)
          </powershell>
Weirdly enough, the local group member gets added automatically but the Windows feature doesn't get installed. Is there a certain format for the commands when they span multiple lines?
Here is the log error:
2022-11-25 19:48:58 Info: Try parsing user data in yaml format
2022-11-25 19:48:58 Info: Parsing failed, fall back to XML format
2022-11-25 19:48:58 Info: Converting user data to yaml format
I have tried formatting the PowerShell script differently; nothing changed. I also tried adding the script one command at a time, and so far the only command that works is adding the local group member, not installing the Windows feature.
I took this same template and added an Out-File call just to check where the PowerShell script stops. Sometimes these files are created and sometimes they are not. Same with the line adding the local group member: only sometimes is it added. I am not sure what is going on here.
The below YAML template adds a user as well as installs a feature. Let me know if it is helpful.
Resources:
  WebServer1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0be29bafdaad782db
      InstanceType: t2.micro
      KeyName: NVirg-KP
      SecurityGroups:
        - EC2-AllTraffic-SG
      UserData:
        Fn::Base64: |
          <powershell>
          # username and password
          $username = "ec2-user1"
          $description = "ec2-user1"
          $password = ConvertTo-SecureString "Login#1234" -AsPlainText -Force
          # creating the user
          New-LocalUser -Name $username -Password $password -FullName $username -Description $description
          Add-LocalGroupMember -Group Administrators -Member $username
          # installing a feature
          Import-Module ServerManager
          Install-WindowsFeature -Name Web-Server
          </powershell>
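If the install still fails intermittently, reading the user-data execution logs on the instance usually shows where the script stopped. A quick sketch (the paths assume EC2Launch v2, or EC2Launch v1 on older Windows AMIs):

# EC2Launch v2 agent log
Get-Content 'C:\ProgramData\Amazon\EC2Launch\log\agent.log' -Tail 50
# EC2Launch v1 user-data log on older AMIs
Get-Content 'C:\ProgramData\Amazon\EC2-Windows\Launch\Log\UserdataExecution.log' -Tail 50

Install-WindowsFeature also returns a result object (Success, RestartNeeded, ExitCode), so redirecting its output to a file shows whether the install actually ran and whether a reboot was pending.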
I've installed Argo on a managed Kubernetes service following the guidelines here.
When I launch the following example task I get an error (if you have Argo installed you should be able to copy-paste the code below):
# create a.yml
cat >> a.yml <<EOL
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-  # Name of this Workflow
spec:
  entrypoint: whalesay        # Defines "whalesay" as the "main" template
  templates:
    - name: whalesay          # Defining the "whalesay" template
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["hello world"] # This template runs "cowsay" in the "whalesay" image with arguments "hello world"
EOL
# submit a.yml
argo --insecure-skip-tls-verify --insecure-skip-verify -n argo submit a.yml
# monitor
$ argo list
# NAME STATUS AGE DURATION PRIORITY
# hello-world-hxrcp Succeeded 4m 10s 0
argo watch --insecure-skip-tls-verify --insecure-skip-verify -v -n argo hello-world-hxrcp
# DEBU[2021-06-09T19:37:22.125Z] CLI version version="{v3.0.7 2021-05-25T18:57:09Z e79e7ccda747fa4487bf889142c744457c26e9f7 v3.0.7 clean go1.16.3 gc linux/amd64}"
# DEBU[2021-06-09T19:37:22.125Z] Client options opts="(argoServerOpts=(url=127.0.0.1:2746,path=,secure=true,insecureSkipVerify=true,http=true),instanceID=)"
# DEBU[2021-06-09T19:37:22.125Z] curl -H 'Accept: text/event-stream' -H 'Authorization: ******' 'https://127.0.0.1:2746/api/v1/workflow-events/argo?listOptions.fieldSelector=metadata.name%3Dhello-world-hxrcp&listOptions.resourceVersion=0'
# FATA[2021-06-09T19:37:22.536Z] Get "https://127.0.0.1:2746/api/v1/workflow-events/argo?listOptions.fieldSelector=metadata.name%3Dhello-world-hxrcp&listOptions.resourceVersion=0": x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs
Why am I seeing this error?
The install process was this:
kubectl create namespace argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/stable/manifests/install.yaml
CLI (taken from the latest version here):
# Download the binary
curl -sLO https://github.com/argoproj/argo/releases/download/v3.0.7/argo-linux-amd64.gz
# Unzip
gunzip argo-linux-amd64.gz
# Make binary executable
chmod +x argo-linux-amd64
# Move binary to path
sudo mv ./argo-linux-amd64 /usr/local/bin/argo
# Test installation
argo version
# link with server
# recommended on user panel in interface
cat >> ~/.bashrc <<EOL
export ARGO_SERVER='127.0.0.1:2746'
export ARGO_HTTP1=true
export ARGO_SECURE=true
export ARGO_BASE_HREF=
export ARGO_TOKEN=''
export ARGO_NAMESPACE=argo
export ARGO_INSECURE_SKIP_VERIFY=true
EOL
# check it works:
argo list
Heyo, I ran into this issue when setting up with the Argo Helm chart on kind. The problem is that you have to disable TLS verification for the executor (the thing that executes the workflow) using the ARGO_KUBELET_INSECURE env var. Here are the docs: https://argoproj.github.io/argo-workflows/environment-variables/#executor
Sorry I don't have the exact code change you need for your setup, but I'm sure you can figure that out now that you know what the problem is ;).
Here's what my helm values.yaml file looks like in case that helps anyone else:
server:
  serviceType: LoadBalancer
  extraArgs:
    - --auth-mode=server
controller:
  containerRuntimeExecutor: k8sapi
executor:
  env:
    - name: ARGO_KUBELET_INSECURE
      value: "true"  # quoted: Kubernetes env var values must be strings
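In case it saves someone a lookup, a minimal sketch of applying these values with the community chart (the repo URL and chart name here are the standard argo-helm ones; adjust if yours differ):

# add the Argo Helm repo and install the workflows chart with the values above
helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade --install argo-workflows argo/argo-workflows -n argo -f values.yaml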
All afternoon I have been trying to get my head around concatenating a parameter in an ADO template. The parameter is a source path, and in the template the next folder level needs to be added. I would like to achieve this with a "simple" concatenation.
The simplified template takes the parameter and uses it to form the inputPath for a PowerShell script, like this:
parameters:
  sourcePath: ''

steps:
  - task: PowerShell@2
    inputs:
      filePath: 'PSRepo/Scripts/MyPsScript.ps1'
      arguments: '-inputPath ''$(sourcePath)/NextFolder'''
I have tried various ways to achieve this concatenation:
'$(sourcePath)/NextFolder' (see above)
'$(variables.sourcePath)/NextFolder' (I know sourcePath is not a variable, but I tried it because a parameter used in a task condition apparently only works when referenced through variables)
'${{ parameters.sourcePath }}/NextFolder'
And some other variations, all to no avail.
I also tried to introduce a variables section in the template, but that is not possible.
I have searched the internet for examples and documentation, but found no direct answers; other issues seemed to hint at a solution but were not working.
I will surely be very pleased if someone could help me out.
Thanx in advance.
We can add a variables section in our temp.yaml file and pass the sourcePath into the variable; then we can use it. Here is my demo script:
Main.yaml
resources:
  repositories:
    - repository: templates
      type: git
      name: Tech-Talk/template

trigger: none

variables:
  - name: Test
    value: TestGroup

pool:
  # vmImage: windows-latest
  vmImage: ubuntu-20.04

extends:
  template: temp.yaml@templates
  parameters:
    agent_pool_name: ''
    db_resource_path: $(System.DefaultWorkingDirectory)
    # variable_group: ${{ variables.Test }}
temp.yaml
parameters:
  - name: db_resource_path
    default: ""
  # - name: 'variable_group'
  #   type: string
  #   default: 'default_variable_group'
  - name: agent_pool_name
    default: ""

stages:
  - stage:
    jobs:
      - job: READ
        displayName: Reading Parameters
        variables:
          - name: sourcePath
            value: ${{ parameters.db_resource_path }}
          # - group: ${{ parameters.variable_group }}
        steps:
          - script: |
              echo sourcePath: ${{ variables.sourcePath }}
          - powershell: echo "$(sourcePath)"
Here I just use the default working directory as the test path. You can use other variables as well.
Thanx, Yujun. In the meantime I did get it working. Apparently there must have been some typo that blocked the script from executing correctly, as the solution looks like one of the options mentioned above.
parameters:
  sourcePath: ''

steps:
  - task: PowerShell@2
    inputs:
      filePath: 'PSRepo/Scripts/MyPsScript.ps1'
      arguments: '-inputPath ''$(sourcePath)/NextFolder'''
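For completeness: since sourcePath is declared here as a template parameter rather than a pipeline variable, the compile-time expression syntax (the third option listed above) is the form I would expect to resolve it within the template itself. A sketch of that variant:

parameters:
  sourcePath: ''

steps:
  - task: PowerShell@2
    inputs:
      filePath: 'PSRepo/Scripts/MyPsScript.ps1'
      # ${{ }} expressions are expanded when the template is compiled,
      # so the concatenation happens before the task ever runs
      arguments: '-inputPath ''${{ parameters.sourcePath }}/NextFolder'''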
I need to copy the SSH public key from a local file, then use it in a uri task in my playbook.
Keep in mind, I cannot use the "authorized_key" module, as this is a system where I must use the API to configure public keys for users.
The code below keeps failing; I am 100% sure it's because of the filter I am using. I am including the commented-out section that does work for the body.
Trying to use a lookup with a regex_search, I used [^\s]\s[^\s], which works in Python. Also, the key is in a different directory on my local host (../../ssh/ssh_key/key.pub).
Any ideas?
- name: copy public key to gitea
  hosts: localhost
  tasks:
    - name: include user to add as variable
      include_vars:
        file: users.yaml
        name: users

    - name: Gather users key contents and create variable
      # shell: "cat ../keys/ssh_keys/zz123z.pub | awk '{print $1 FS $2}'"
      shell: "cat ../keys/ssh_keys/{{ item.username }}.pub | awk '{print $1 FS $2}'"
      register: key
      with_items:
        - "{{ users.user }}"

    - name: Add user's key to gitea
      uri:
        url: https://10.10.10.10/api/v1/admin/users/{{ item.username }}/keys
        headers:
          Authorization: "token {{ users.GiteaApiToken }}"
        validate_certs: no
        return_content: yes
        status_code: 201
        method: POST
        body: "{\"key\": \"{{ key.stdout }}\", \"read_only\": true, \"title\": \"{{ item.username }} shared VM\"}"
        body_format: json
      with_items:
        - "{{ users.user }}"
This is the error I receive when running with -vvv:
TASK [Add user's key to gitea] *************************************************
task path: /home/dave/projects/Infrastructure/ansible/AddTempUsers/addusers.yaml:275
Wednesday 04 March 2020 18:14:29 -0500 (0:00:00.537) 0:00:01.991 *******
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/home/dave/projects/Infrastructure/ansible/AddTempUsers/addusers.yaml': line 275, column 13, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Add user's key to gitea\n ^ here\n"
}
I FIGURED IT OUT!
Because the shell task runs in a loop, the registered key variable is a dict containing a results list rather than a single result, which is why key.stdout was undefined.
I used shell with an awk command to gather the keys. (Note: I am including one awk for RSA keys and one for id_ed25519, which we use. The RSA line is commented out, but others can uncomment it if they wish to use it.)
I used loop_control to iterate through the results.
Code below:
- name: copy public key to gitea
  hosts: localhost
  tasks:
    - name: include user to add as variable
      include_vars:
        file: users.yaml
        name: users

    - name: Gather users key contents and create variable
      # For RSA keys:
      # shell: "cat ../keys/ssh_keys/{{ item.username }}.pub | awk '/-END PUBLIC KEY-/ { p = 0 }; p; /-BEGIN PUBLIC KEY-/ { p = 1 }'"
      # For id_ed25519 keys:
      shell: "cat ../keys/ssh_keys/{{ item.username }}.pub | awk '{print $1 FS $2}'"
      register: key
      with_items:
        - "{{ users.user }}"

    - name: Add user's key to gitea
      uri:
        url: https://10.10.10.10/api/v1/admin/users/{{ item.username }}/keys
        headers:
          Authorization: "token {{ users.GiteaApiToken }}"
        validate_certs: no
        return_content: yes
        status_code: 201
        method: POST
        body: "{\"key\": \"{{ key.results[ndx].stdout }}\", \"read_only\": true, \"title\": \"{{ item.username }} shared VM\"}"
        body_format: json
      with_items:
        - "{{ users.user }}"
      loop_control:
        index_var: ndx
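As an aside, the shell/awk step can likely be dropped altogether: the uri module accepts a dict body when body_format is json, and a file lookup plus Jinja2 slicing does the same trimming inline. A sketch, assuming the same relative key path (file lookups resolve relative to the playbook directory):

- name: Add user's key to gitea
  uri:
    url: https://10.10.10.10/api/v1/admin/users/{{ item.username }}/keys
    headers:
      Authorization: "token {{ users.GiteaApiToken }}"
    validate_certs: no
    return_content: yes
    status_code: 201
    method: POST
    body:
      # keep only the key type and key material, dropping the trailing comment
      key: "{{ lookup('file', '../keys/ssh_keys/' ~ item.username ~ '.pub').split()[:2] | join(' ') }}"
      read_only: true
      title: "{{ item.username }} shared VM"
    body_format: json
  with_items:
    - "{{ users.user }}"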
I want to load the test data for my unit tests into the test DB via DataFixtures.
The documentation says that if I set the environment variable, the test DB should be used:
$ php bin/console doctrine:fixtures:load --env=test
Careful, database will be purged. Do you want to continue y/N ?y
> purging database
> loading App\DataFixtures\PropertyFixtures
> loading App\DataFixtures\UserFixtures
> loading App\DataFixtures\UserPropertyFixtures
However, when I check, the data ends up in my default database.
Where do I configure my test DB with Symfony 4?
And where do I configure it so that DataFixtures knows where to write?
For my functional tests I configured the DB setting in phpunit.xml.
What we did to solve it was the following. I don't know if that is a good way, so alternative solutions are still appreciated.
In config/packages there is the file doctrine.yaml.
I copied the file into the folder config/packages/test and edited it in the following way:
parameters:
    # Adds a fallback DATABASE_URL if the env var is not set.
    # This allows you to run cache:warmup even if your
    # environment variables are not available yet.
    # You should not need to change this value.
    env(DATABASE_URL): ''

doctrine:
    dbal:
        # configure these for your database server
        driver: 'pdo_mysql'
        server_version: '5.7'
        charset: utf8
        default_table_options:
            charset: utf8
            collate: utf8_unicode_ci
        url: 'mysql://%db_user_test%:%db_password_test%@%db_host_test%:%db_port_test%/%db_name_test%'
    orm:
        auto_generate_proxy_classes: '%kernel.debug%'
        naming_strategy: doctrine.orm.naming_strategy.underscore
        auto_mapping: true
        mappings:
            App:
                is_bundle: false
                type: annotation
                dir: '%kernel.project_dir%/src/Entity'
                prefix: 'App\Entity'
So what I changed was the line with url: mysql://....
In my parameters I had added the parameters db_user_test etc. and used them to connect to the test DB.
It works, but I am still not sure if this is how it should be done.
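For what it's worth, newer Symfony 4 setups (symfony/dotenv 4.2+, which loads per-environment dotenv files) allow the same separation without copying doctrine.yaml, assuming the default doctrine.yaml still reads url: '%env(resolve:DATABASE_URL)%'. A sketch with placeholder credentials:

# .env.test -- loaded automatically when APP_ENV=test
DATABASE_URL=mysql://db_user_test:db_password_test@127.0.0.1:3306/db_name_test

With that in place, php bin/console doctrine:fixtures:load --env=test writes to the test database directly.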