How to create an unnamed window with tmuxinator?

I want to create 2 windows with tmuxinator:
windows:
  - editor: vim
  - : # I want this window unnamed, but tmuxinator says this is not valid
Thanks.

Found this workaround:
https://github.com/tmuxinator/tmuxinator/issues/292
name: default
root: ~/
windows:
  - main:
      - tmux set-window-option -t1 automatic-rename on
      - clear
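Applied back to the original two-window goal, a minimal sketch might look like this (untested; the placeholder name scratch is arbitrary and is discarded once tmux's automatic-rename takes over, and the -t1 target is dropped so the option applies to the window the commands run in):
name: default
root: ~/
windows:
  - editor: vim
  - scratch:            # placeholder name, replaced by automatic-rename
      - tmux set-window-option automatic-rename on
      - clear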

Related

Azure DevOps pipeline template - how to concatenate a parameter

All afternoon I have been trying to get my head around concatenating a parameter in an ADO template. The parameter is a source path, and the template needs to append another folder level to it. I would like to achieve this with a "simple" concatenation.
The simplified template takes the parameter and uses it to form the inputPath for a PowerShell script, like this:
parameters:
  sourcePath: ''

steps:
- task: PowerShell@2
  inputs:
    filePath: 'PSRepo/Scripts/MyPsScript.ps1'
    arguments: '-inputPath ''$(sourcePath)/NextFolder'''
I have tried various ways to achieve this concatenation:
'$(sourcePath)/NextFolder' (see above)
'$(variables.sourcePath)/NextFolder' (I know sourcePath is not a variable, but I tried it because when using a parameter in a task condition, it apparently only works when referenced through variables)
'${{ parameters.sourcePath }}/NextFolder'
And some other variations, all to no avail.
I also tried to introduce a variables section in the template, but that is not possible.
I have searched the internet for examples/documentation, but no direct answers and other issues seemed to hint to some solution, but were not working.
I will surely be very pleased if someone could help me out.
Thanx in advance.
We can add the variables in our temp yaml file and pass the sourcePath into a variable; then we can use it. Here is my demo script:
Main.yaml
resources:
  repositories:
  - repository: templates
    type: git
    name: Tech-Talk/template

trigger: none

variables:
- name: Test
  value: TestGroup

pool:
  # vmImage: windows-latest
  vmImage: ubuntu-20.04

extends:
  template: temp.yaml@templates
  parameters:
    agent_pool_name: ''
    db_resource_path: $(System.DefaultWorkingDirectory)
    # variable_group: ${{ variables.Test }}
temp.yaml
parameters:
- name: db_resource_path
  default: ""
# - name: 'variable_group'
#   type: string
#   default: 'default_variable_group'
- name: agent_pool_name
  default: ""

stages:
- stage:
  jobs:
  - job: READ
    displayName: Reading Parameters
    variables:
    - name: sourcePath
      value: ${{ parameters.db_resource_path }}
    # - group: ${{ parameters.variable_group }}
    steps:
    - script: |
        echo sourcePath: ${{ variables.sourcePath }}
    - powershell: echo "$(sourcePath)"
Here I just use the default working directory as the test path; you can use other variables as well.
Thanx, Yujun. In the meantime I did get it working. Apparently there must have been some typo blocking the script from executing correctly, as the solution looks like one of the options mentioned above.
parameters:
  sourcePath: ''

steps:
- task: PowerShell@2
  inputs:
    filePath: 'PSRepo/Scripts/MyPsScript.ps1'
    arguments: '-inputPath ''$(sourcePath)/NextFolder'''
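For what it's worth, the concatenation can also happen entirely at compile time with a template expression, since parameters are resolved when the template expands; this matches the third variation tried above, so presumably only the typo kept it from working. A minimal sketch (same hypothetical paths as above):
parameters:
  sourcePath: ''

steps:
- task: PowerShell@2
  inputs:
    filePath: 'PSRepo/Scripts/MyPsScript.ps1'
    # ${{ }} is resolved when the template expands, so the extra folder
    # is appended before the task ever runs.
    arguments: '-inputPath ''${{ parameters.sourcePath }}/NextFolder'''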

Adding paths to config file in most efficent way via Ansible

I wrote a task that is responsible for changing the supervisor config file. The case is that on some servers we have more than one app running workers, so sometimes more than one path needs to be added to the include section of supervisor.conf.
Currently I have this task in /roles/supervisor/tasks/main.yml:
- name: Add apps paths in include section
  lineinfile:
    dest: /etc/supervisor/supervisord.conf
    regexp: '^files ='
    line: 'files = /etc/supervisor/conf.d/*.conf /home/app/{{ app_name }}/releases/app/shared/supervisor/*.conf /home/dev/{{ app_name2 }}/releases/dev/shared/supervisor/*.conf'
  when: ansible_hostname == 'ser-db-10'
  notify: restart supervisor
  tags: multi_workers
... and added this in /roles/supervisor/defaults/main.yml:
app_name: bla
app_name2: blabla
It works, but I don't like that two application paths are hardcoded in line, and maybe I should also add a variable in place of ser-db-10.
I am wondering how to rebuild this task to make it more independent.
What I mean is, if there are 4 apps, add 4 paths, if there are 2 apps, add 2 paths.
What is the most efficient way to do this?
As an example of how to put together the parameter line, the play below
- hosts: test_01
  vars:
    app_name1: A
    app_name2: B
    my_conf:
      test_01:
        lines:
          - '/etc/*.conf'
          - '/etc/{{ app_name1 }}/*.conf'
          - '/etc/{{ app_name2 }}/*.conf'
  tasks:
    - debug:
        msg: "files = {{ my_conf[inventory_hostname].lines|join(' ') }}"
gives
"msg": "files = /etc/*.conf /etc/A/*.conf /etc/B/*.conf"
With an appropriate dictionary my_conf, the task below should do the job
- name: Add apps paths in include section
  lineinfile:
    dest: /etc/supervisor/supervisord.conf
    regexp: '^files ='
    line: "files = {{ my_conf[inventory_hostname].lines|join(' ') }}"
  notify: restart supervisor
  tags: multi_workers
(not tested)
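Putting both pieces together, a self-contained sketch (also untested; the host name, paths, and the restart supervisor handler are illustrative and would normally live in group_vars/host_vars and the role's handlers):
- hosts: all
  vars:
    # Per-host path lists; add or remove entries as apps come and go.
    my_conf:
      ser-db-10:
        lines:
          - '/etc/supervisor/conf.d/*.conf'
          - '/home/app/bla/releases/app/shared/supervisor/*.conf'
          - '/home/dev/blabla/releases/dev/shared/supervisor/*.conf'
  tasks:
    - name: Add apps paths in include section
      lineinfile:
        dest: /etc/supervisor/supervisord.conf
        regexp: '^files ='
        line: "files = {{ my_conf[inventory_hostname].lines|join(' ') }}"
      when: inventory_hostname in my_conf
      notify: restart supervisor
      tags: multi_workers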

I want to print the names of files present in a directory as a list using ansible

---
- hosts: localhost
  user: root
  tasks:
    - command: "ls /root/Tmp/Deployment/script_files/Hotfix"
      register: dir_out
    - debug: msg="The hotfix ids are: {{ dir_out.stdout_lines }}"
The output I got was not formatted the way I want. I want it as:
The hotfix ids are: ["1001","1002"]
How do I do this?
I needed to change {{ dir_out.stdout_lines }} to {{ dir_out.stdout_lines|join(',') }}
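A hedged alternative that avoids shelling out to ls is the find module; a sketch under the same path assumption:
---
- hosts: localhost
  tasks:
    - find:
        paths: /root/Tmp/Deployment/script_files/Hotfix
      register: dir_out
    # basename strips the directory prefix from each returned path.
    - debug:
        msg: "The hotfix ids are: [{{ dir_out.files | map(attribute='path') | map('basename') | join(',') }}]"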

ec2-import-instance makes an instance with no Public IP

This is related to my previous question. Basically, to summarize: I
1) Set up a vagrant ubuntu 14.04 box locally
2) Packaged the vagrant instance into a package.box following these instructions
3) Converted the package.box into a .vmdk file using this function
4) Ran the following CLI command:
ec2-import-instance tmpdir/box-disk1.vmdk -f VMDK -t t2.micro -a x86_64 -b <S3 Bucket> -o $AWS_ACCESS_KEY -w $AWS_SECRET_KEY -p Linux
Since I suspected the problem was with something called cloud-init, which I have read about but never used and don't really understand, I tried the above twice: once with the original /etc/cloud/cloud.cfg file and again with the /etc/cloud/cloud.cfg file I found here.
Basically, what I'm eventually seeing in the AWS Console is a running instance that does not have a Public IP address. I attached an Elastic IP to the instance, but I can't ssh into that IP address for some reason; it says port 22: Connection refused.
I'm at a loss because these instances are launching in the Default VPC which has a security group attached to it that allows all ports and all protocols from any IP.
By the way: I'm pretty new to all of AWS and don't really know my way fully around the console, so any direct guidance would be much appreciated.
Original /etc/cloud/cloud.cfg file:
# The top level settings are used as module
# and system configuration.
# A set of users which may be applied and/or used by various modules
# when a 'default' entry is found it will reference the 'default_user'
# from the distro configuration specified below
users:
- default
# If this is set, 'root' will not be able to ssh in and they
# will get a message to login instead as the above $user (ubuntu)
disable_root: true
# This will cause the set+update hostname module to not operate (if true)
preserve_hostname: false
# Example datasource config
# datasource:
# Ec2:
# metadata_urls: [ 'blah.com' ]
# timeout: 5 # (defaults to 50 seconds)
# max_wait: 10 # (defaults to 120 seconds)
# The modules that run in the 'init' stage
cloud_init_modules:
- migrator
- seed_random
- bootcmd
- write-files
- growpart
- resizefs
- set_hostname
- update_hostname
- update_etc_hosts
- ca-certs
- rsyslog
- users-groups
- ssh
# The modules that run in the 'config' stage
cloud_config_modules:
# Emit the cloud config ready event
# this can be used by upstart jobs for 'start on cloud-config'.
- emit_upstart
- disk_setup
- mounts
- ssh-import-id
- locale
- set-passwords
- grub-dpkg
- apt-pipelining
- apt-configure
- package-update-upgrade-install
- landscape
- timezone
- puppet
- chef
- salt-minion
- mcollective
- disable-ec2-metadata
- runcmd
- byobu
# The modules that run in the 'final' stage
cloud_final_modules:
- rightscale_userdata
- scripts-vendor
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- scripts-user
- ssh-authkey-fingerprints
- keys-to-console
- phone-home
- final-message
- power-state-change
# System and/or distro specific settings
# (not accessible to handlers/transforms)
system_info:
  # This will affect which distro class gets used
  distro: ubuntu
  # Default user name + that default user's groups (if added/used)
  default_user:
    name: ubuntu
    lock_passwd: True
    gecos: Ubuntu
    groups: [adm, audio, cdrom, dialout, dip, floppy, netdev, plugdev, sudo, video]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
  # Other config here will be given to the distro class and/or path classes
  paths:
    cloud_dir: /var/lib/cloud/
    templates_dir: /etc/cloud/templates/
    upstart_dir: /etc/init/
  package_mirrors:
    - arches: [i386, amd64]
      failsafe:
        primary: http://archive.ubuntu.com/ubuntu
        security: http://security.ubuntu.com/ubuntu
      search:
        primary:
          - http://%(ec2_region)s.ec2.archive.ubuntu.com/ubuntu/
          - http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/
          - http://%(region)s.clouds.archive.ubuntu.com/ubuntu/
        security: []
    - arches: [armhf, armel, default]
      failsafe:
        primary: http://ports.ubuntu.com/ubuntu-ports
        security: http://ports.ubuntu.com/ubuntu-ports
  ssh_svcname: ssh
Second try /etc/cloud/cloud.cfg file:
users:
- default
disable_root: 1
ssh_pwauth: 0
locale_configfile: /etc/sysconfig/i18n
mount_default_fields: [~, ~, 'auto', 'defaults,nofail', '0', '2']
resize_rootfs_tmp: /dev
ssh_deletekeys: 0
ssh_genkeytypes: ~
syslog_fix_perms: ~
cloud_init_modules:
- bootcmd
- write-files
- resizefs
- set_hostname
- update_hostname
- update_etc_hosts
- rsyslog
- users-groups
- ssh
cloud_config_modules:
- mounts
- locale
- set-passwords
- timezone
- runcmd
cloud_final_modules:
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- scripts-user
- ssh-authkey-fingerprints
- keys-to-console
- final-message
system_info:
  distro: rhel
  default_user:
    name: ec2-user
  paths:
    cloud_dir: /var/lib/cloud
    templates_dir: /etc/cloud/templates
  ssh_svcname: sshd
This is happening because when you transferred the instance to AWS from your local machine, there was no PEM key associated with that instance, which is why you were not able to SSH in.
After you took an image of your instance and launched it again with an associated key pair, you were able to SSH into the instance.
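For anyone who hits the same thing: assuming you can still edit the box before export, one workaround is to bake a known public key into the default user through cloud-config, so SSH works even without an EC2 key pair attached at import time (the key below is a placeholder):
# Fragment for /etc/cloud/cloud.cfg (or user-data); placeholder key.
users:
  - default

# cloud-init appends these to the default user's authorized_keys on boot.
ssh_authorized_keys:
  - ssh-rsa AAAAB3Nza...placeholder... imported-box-key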

How can I make multiple replacements on the same file using one saltstack state?

Here's my target file:
# Sonatype Nexus
# ==============
# This is the most basic configuration of Nexus.
# Jetty section
application-port=8081
application-host=0.0.0.0
nexus-webapp=${bundleBasedir}/nexus
nexus-webapp-context-path=/nexus
# Nexus section
nexus-work=/opt/nexuswork
runtime=${bundleBasedir}/nexus/WEB-INF
I know there's an easy way to do this with regex or a simple sed script, something like:
sed -i -e 's|${bundleBasedir}/nexus/WEB-INF|/myfirstdir001/nexus/WEB-INF|' -e 's|${bundleBasedir}/nexus|/my/second/path/002/nexus|' nexus.properties
However, I would, ideally, prefer the saltstack way.
I would like it to look something like this:
# Sonatype Nexus
# ==============
# This is the most basic configuration of Nexus.
# Jetty section
application-port=8081
application-host=0.0.0.0
nexus-webapp=/my/second/path/002/nexus # changed
nexus-webapp-context-path=/nexus
# Nexus section
nexus-work=/opt/nexuswork
runtime=/myfirstdir001/nexus/WEB-INF # changed
I haven't yet made sense of the saltstack documentation on this.
Saltstack's documentation for salt.states.file.replace seems fairly straightforward:
http://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#salt.states.file.replace
Here's what I tried:
/opt/nexus-2.8.0/conf/nexus.properties:
  file:                                        # state
    - replace
    - pattern: '\$\{bundleBasedir\}'           # without escapes: '${bundleBasedir}/nexus'
    - repl: '/my/second/path/002/nexus'
    # - name: /opt/nexus-2.8.0/conf/nexus.properties
    # - count=0
    # - append_if_not_found=False
    # - prepend_if_not_found=False
    # - not_found_content=None
    # - backup='.bak'
    # - show_changes=True
    - pattern: '\$\{bundleBasedir\}\/WEB-INF'  # without escapes: ${bundleBasedir}/WEB-INF
    - repl: '/myfirstdir001/'
I could maybe try multiple state IDs, but that seems inelegant.
If there's anything else I'm fuffing up, please advise!
I'd sure love to find a solution to this.
Also, if there's any demand for people improving the salt documentation, I think my team could be convinced to pitch in some.
Here's the closest thing I've found to someone else asking this question:
http://comments.gmane.org/gmane.comp.sysutils.salt.user/15138
For such a small file I would probably go with a template as ahus1 suggested.
If the file were bigger, or we only wanted to ensure that those two lines are correct rather than controlling the whole file, I think multiple state IDs (as mentioned by the OP) are a good way to go. Something like:
/opt/nexus-2.8.0/conf/nexus.properties-jetty:
  file:
    - replace
    - name: /opt/nexus-2.8.0/conf/nexus.properties
    - pattern: '\$\{bundleBasedir\}'           # without escapes: '${bundleBasedir}/nexus'
    - repl: '/my/second/path/002/nexus'

/opt/nexus-2.8.0/conf/nexus.properties-nexus:
  file:
    - replace
    - name: /opt/nexus-2.8.0/conf/nexus.properties
    - pattern: '\$\{bundleBasedir\}\/WEB-INF'  # without escapes: ${bundleBasedir}/WEB-INF
    - repl: '/myfirstdir001/'
I have a similar setup in my configuration, but I use salt.states.file.line to replace some lines with my values. In addition, I use salt.states.file.managed with a template and replace: False to initialize the file if it's missing; once the file exists, only the line states make changes.
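A rough sketch of that combination (untested; the state IDs and values are illustrative, and file.line requires a reasonably recent Salt release):
nexus_properties_seed:
  file.managed:
    - name: /opt/nexus-2.8.0/conf/nexus.properties
    - source: salt://nexus/nexus.properties
    - replace: False        # create the file if missing, never overwrite it

nexus_webapp_path:
  file.line:
    - name: /opt/nexus-2.8.0/conf/nexus.properties
    - match: '^nexus-webapp='
    - content: 'nexus-webapp=/my/second/path/002/nexus'
    - mode: replace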
The salt way to do this, as I understand it: place a template file for nexus.properties inside salt and use file.managed as shown in the docs: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html
You will end up with something like:
/opt/nexus-2.8.0/conf/nexus.properties:
  file.managed:
    - source: salt://nexus/nexus.properties.jinja
    - template: jinja
    - defaults:
        bundleBasedir: "..."
You'll then use Jinja templating in your file:
# Jetty section
application-port=8081
application-host=0.0.0.0
nexus-webapp={{ bundleBasedir }}/nexus
nexus-webapp-context-path=/nexus
See here for Jinja templating: http://docs.saltstack.com/en/latest/ref/renderers/all/salt.renderers.jinja.html
I hope it helps.