Automate GCP persistent disk initialization - google-cloud-platform

Are there any scripts that automate formatting and mounting a persistent disk attached to a Google Cloud VM instance, rather than performing the formatting and mounting steps manually?
The persistent disk is created with Terraform, which also creates a VM and attaches the disk to it with the attached_disk resource.
I am hoping to run a simple script on the VM instance start that would:
check if the attached disk is formatted, and format if needed with ext4
check if the disk is mounted, and mount if not
do nothing otherwise

Have you considered using a startup script on the instance (I presume you can also add a startup-script with Terraform)? You could use an if statement to discover whether the disk is formatted and, if not, run the formatting/mounting commands from the documentation you linked (I realise you have said you do not want to follow the manual steps, but they can be integrated into the startup script to achieve the desired result).
Running the following outputs an empty string if the disk is not formatted:
sudo blkid /dev/sdb
You could therefore use this in a startup script to discover whether the disk is formatted, and perform the formatting/mounting if it is not. For example, you could use something like the following. (Note: the format step is destructive if the check misfires, so this should not be used if your use case could involve existing disks that may already have been formatted.)
#!/bin/bash
# Exit if the disk already has a filesystem (blkid succeeds); otherwise format and mount it.
if sudo blkid /dev/sdb; then
  exit
else
  sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
  sudo mkdir -p /mnt/disks/newdisk
  sudo mount -o discard,defaults /dev/sdb /mnt/disks/newdisk
fi
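If you end up not wiring this through Terraform's startup-script metadata, a minimal sketch of attaching the script with the gcloud CLI instead (the instance name, zone, and file name below are placeholders):
# Attach the script above as the instance's startup script (assumed names: my-vm, us-central1-a, format-and-mount.sh)
gcloud compute instances add-metadata my-vm \
  --zone us-central1-a \
  --metadata-from-file startup-script=format-and-mount.sh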

The marked answer did not work for me as the sudo blkid /dev/sdb part always returned a value (hence, true) and the script would exit.
I updated the script to check for the entry in fstab and added safety options to the script.
#!/bin/bash
set -uxo pipefail

MNT_DIR=/mnt/disks/persistent_storage
DISK_NAME=my-disk

# Check if an entry for the mount point already exists in fstab
if grep -q "$MNT_DIR" /etc/fstab; then
  # Entry exists, nothing to do
  exit
else
  set -e # The grep above returns non-zero for no matches & we don't want to exit then.
  # Find the persistent disk's device; GCP exposes it under /dev/disk/by-id, prefixed with `google-`
  DEVICE_NAME="/dev/$(basename "$(readlink /dev/disk/by-id/google-${DISK_NAME})")"
  sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard "$DEVICE_NAME"
  sudo mkdir -p "$MNT_DIR"
  sudo mount -o discard,defaults "$DEVICE_NAME" "$MNT_DIR"
  # Add fstab entry
  echo UUID=$(sudo blkid -s UUID -o value "$DEVICE_NAME") $MNT_DIR ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
fi
Here's the gist if you want to download it - https://gist.github.com/raj-saxena/3dcaa5c0ba0be88ed91ef3fb50d3ce85
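If blkid's exit code turns out to be unreliable on your image, an alternative is to ask lsblk for the filesystem type, which prints nothing when the disk has never been formatted. A minimal sketch, assuming the same google-my-disk device name as above:
# Resolve the by-id symlink to the real device (e.g. /dev/sdb)
DEVICE_NAME="/dev/$(basename "$(readlink /dev/disk/by-id/google-my-disk)")"
# lsblk prints the filesystem type (e.g. ext4), or an empty string if none exists yet
if [[ -z "$(lsblk -no FSTYPE "$DEVICE_NAME")" ]]; then
  echo "No filesystem on $DEVICE_NAME, formatting"
  sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard "$DEVICE_NAME"
fi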

Formatting, mounting, and adding an entry to /etc/fstab are needed almost every time. Here is a solution I came up with that might help others; it can certainly be improved. I added echo commands to explain what each block does.
Regarding the disk name, you can set device_name in your Terraform code when you attach your disks to the instance(s), as mentioned here: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_attached_disk
device_name - (Optional) Specifies a unique device name of your choice that is reflected into the /dev/disk/by-id/google-* tree of a Linux operating system running within the instance. This name can be used to reference the device for mounting, resizing, and so on, from within the instance.
#!/bin/bash
DISKS_PATH=/dev/disk/by-id
DISKS=(disk1 disk2)

check_disks () {
  for disk in "${DISKS[@]}"; do
    MOUNT_DIR="/$disk"
    echo "$MOUNT_DIR"
    # blkid succeeds if the disk already has a filesystem
    if sudo blkid $DISKS_PATH/google-${disk}; then
      echo "$disk is already formatted, nothing to do"
      echo "checking if $disk is present in fstab"
      UUID=$(sudo blkid -s UUID -o value $DISKS_PATH/google-${disk})
      if grep -q "UUID=${UUID} $MOUNT_DIR" /etc/fstab; then
        echo "$disk already present in fstab, continuing with checking mount"
      else
        echo "$disk not present in fstab, so adding it"
        echo UUID="$UUID" $MOUNT_DIR ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
      fi
      echo "Now checking if $disk is already mounted"
      if grep -qs "$MOUNT_DIR" /proc/mounts; then
        echo "$disk is already mounted, so doing nothing with mount"
      else
        echo "$disk is not mounted, so mounting it"
        sudo mkdir -p $MOUNT_DIR
        sudo mount -o discard,defaults $DISKS_PATH/google-${disk} $MOUNT_DIR
      fi
    else
      echo "Formatting ${disk}"
      sudo mkfs.ext4 $DISKS_PATH/google-${disk}
      echo "Creating directory for ${disk} on $MOUNT_DIR"
      sudo mkdir -p $MOUNT_DIR
      echo "adding $disk in fstab"
      UUID=$(sudo blkid -s UUID -o value $DISKS_PATH/google-${disk})
      echo UUID="$UUID" $MOUNT_DIR ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
      echo "Mounting $disk"
      sudo mount -o discard,defaults $DISKS_PATH/google-${disk} $MOUNT_DIR
    fi
  done
}
check_disks
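If you want to confirm which names Terraform's device_name produced before running the script, listing the by-id tree on the instance works (this assumes the standard GCE guest environment, which creates the google-* symlinks):
ls -l /dev/disk/by-id/ | grep google-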

Related

How to attach an AWS EFS volume to an EC2 spot instance?

I have successfully attached an EFS volume to an EC2 instance, but I need to know how to attach the EFS volume to an EC2 spot instance.
If you start creating a non-spot instance and tick the checkboxes for attaching an EFS file system, an auto-generated script is filled into the user-data field.
Copy this script and cancel the non-spot instance creation.
Start creating a spot instance and just paste the script into the user-data field of the spot-instance-request form.
I tried it with Ubuntu Linux 20.04 and it works perfectly!
The script looks like this, for example (if you use another OS/distribution/version it may differ):
#cloud-config
package_update: true
package_upgrade: true
runcmd:
- yum install -y amazon-efs-utils
- apt-get -y install amazon-efs-utils
- yum install -y nfs-utils
- apt-get -y install nfs-common
- file_system_id_1=my-filesystem-id
- efs_mount_point_1=/mnt/efs/fs1
- mkdir -p "${efs_mount_point_1}"
- test -f "/sbin/mount.efs" && printf "\n${file_system_id_1}:/ ${efs_mount_point_1} efs tls,_netdev\n" >> /etc/fstab || printf "\n${file_system_id_1}.efs.eu-central-1.amazonaws.com:/ ${efs_mount_point_1} nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0\n" >> /etc/fstab
- test -f "/sbin/mount.efs" && grep -ozP 'client-info]\nsource' '/etc/amazon/efs/efs-utils.conf'; if [[ $? == 1 ]]; then printf "\n[client-info]\nsource=liw\n" >> /etc/amazon/efs/efs-utils.conf; fi;
- retryCnt=15; waitTime=30; while true; do mount -a -t efs,nfs4 defaults; if [ $? = 0 ] || [ $retryCnt -lt 1 ]; then echo File system mounted successfully; break; fi; echo File system not available, retrying to mount.; ((retryCnt--)); sleep $waitTime; done;
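A quick way to verify the result after the instance comes up, using the mount point from the script above:
df -hT /mnt/efs/fs1          # should list an nfs4 (or efs) filesystem once the mount succeeds
grep /mnt/efs/fs1 /etc/fstab # confirms the entry the runcmd block appended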

SSH execute remote command infacmd.sh is failing

sshpass -p "xxx" ssh -t -t abc@usllpz107.net.com 'cd /opt/tools/informatica/ids/Informatica/10.2.0/isp/bin;infacmd.sh oie importObjects -dn Domain_IDS_Dev -un abc -pd "xxx" -rs MRS_IDS_DEV -sdn LDAP_NP -fp /opt/tools/informatica/ids/Informatica/10.2.0/tomcat/bin/source/mapping_import.xml -cp /opt/tools/informatica/ids/Informatica/10.2.0/tomcat/bin/source/import_control_file.xml'| tee -a logfile.log
I am running the above command from a container in Buildspec and have also tested it on an EC2 instance; the command fails with the error: sh: infacmd.sh: command not found
But when I run only sshpass -p "xxx" ssh -t -t abc@usllpz107.net.com and then execute the other command manually on the EC2 instance, the command works.
Make sure the file exists at the path.
Make sure you have access to the file.
Make sure the file is executable or change the command to
; /bin/bash infacmd.sh ...
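The error usually just means the current directory is not on PATH in the non-interactive shell, so calling the script via an explicit path or interpreter also works. A sketch reusing the command from the question (credentials and paths are the question's own placeholders, and the trailing arguments are elided as above):
sshpass -p "xxx" ssh -t -t abc@usllpz107.net.com \
  'cd /opt/tools/informatica/ids/Informatica/10.2.0/isp/bin && /bin/bash ./infacmd.sh oie importObjects ...' \
  | tee -a logfile.log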

AWS apt install error = Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)

Using Amazon Linux AMI (2017.03.1) on a p2.xlarge instance, and attempting to sudo apt install {somepackage}, I get the following error:
Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
I have already tried
sudo rm /var/lib/apt/lists/lock
and
sudo rm /var/cache/apt/archives/lock
Solution:
sudo rm /var/lib/dpkg/lock
sudo dpkg --configure -a
sudo apt install {somepackage}
First look for that process by
sudo lsof /var/lib/dpkg/lock
Then make sure the process is not running by
ps cax | grep PID #PID is the Process id e.g 1111
If the PID is shown (it's running), kill it (otherwise skip ahead to removing the lock file) by
sudo kill -9 PID
Make sure the process is killed by
sudo ps cax | grep PID
Then remove the locked file by
sudo rm /var/lib/dpkg/lock
sudo rm /var/lib/dpkg/lock-frontend #optional
Finally make dpkg fix itself by
sudo dpkg --configure -a
Depending on the image you are using, it may be installing some dependencies on first run.
Try ps aux | grep -i apt. If it returns something like:
root 2531 0.0 0.0 4624 772 ? Ss 23:34 0:00 /bin/sh /usr/lib/apt/apt.systemd.daily install
root 2547 0.0 0.0 4624 1716 ? S 23:34 0:00 /bin/sh /usr/lib/apt/apt.systemd.daily lock_is_held install
or similar, you may want to wait until all the necessary updates have been applied.
Note that instances can be configured with a "UserData" script that runs on provisioning... That can cause things to be installed and the lock to be busy for a while.
I had a similar problem and, after some investigation, found the following in the "UserData" section of my instance template:
#cloud-config
package_update: true
package_upgrade: true
runcmd:
- yum install -y amazon-efs-utils
- apt-get -y install amazon-efs-utils
- yum install -y nfs-utils
- apt-get -y install nfs-common
- file_system_id_1=fs-74aa550f
- efs_mount_point_1=/mnt/efs/fs1
- mkdir -p "${efs_mount_point_1}"
- test -f "/sbin/mount.efs" && printf "\n${file_system_id_1}:/ ${efs_mount_point_1} efs tls,_netdev\n" >> /etc/fstab || printf "\n${file_system_id_1}.efs.us-east-2.amazonaws.com:/ ${efs_mount_point_1} nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0\n" >> /etc/fstab
- test -f "/sbin/mount.efs" && grep -ozP 'client-info]\nsource' '/etc/amazon/efs/efs-utils.conf'; if [[ $? == 1 ]]; then printf "\n[client-info]\nsource=liw\n" >> /etc/amazon/efs/efs-utils.conf; fi;
- retryCnt=15; waitTime=30; while true; do mount -a -t efs,nfs4 defaults; if [ $? = 0 ] || [ $retryCnt -lt 1 ]; then echo File system mounted successfully; break; fi; echo File system not available, retrying to mount.; ((retryCnt--)); sleep $waitTime; done;
This is what was causing the lock for about 80-90 seconds on startup.
I removed this script from the template and I can now immediately use apt-get.
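If you would rather wait for the provisioning scripts to finish than delete lock files, a minimal sketch of the "wait it out" approach (assuming fuser is available, as in the psmisc package):
#!/bin/bash
# Block until nothing holds the dpkg/apt lock files, then run the install.
while sudo fuser /var/lib/dpkg/lock /var/lib/dpkg/lock-frontend /var/lib/apt/lists/lock >/dev/null 2>&1; do
  echo "apt/dpkg is busy, waiting..."
  sleep 5
done
sudo apt-get install -y somepackage   # somepackage is a placeholder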

Install ffmpeg on elastic beanstalk using ebextensions config

I'm attempting to install an up-to-date version of ffmpeg on an Elastic Beanstalk instance on Amazon's servers. I've created my config file and added these container_commands:
container_commands:
  01-ffmpeg:
    command: wget -O/usr/local/bin/ffmpeg http://ffmpeg.gusari.org/static/64bit/ffmpeg.static.64bit.2014-03-05.tar.gz
    leader_only: false
  02-ffmpeg:
    command: tar -xzf /usr/local/bin/ffmpeg
    leader_only: false
  03-ffmpeg:
    command: ln -s /usr/local/bin/ffmpeg /usr/bin/ffmpeg
    leader_only: false
Command 01 and 03 seems to work perfectly but 02 doesn't seem to work so ffmpeg doesn't unzip. Any ideas what the issue might be?
Thanks,
Helen
A kind person at Amazon helped me out and sent me this config file that works, hopefully some other people will find this useful:
# .ebextensions/packages.config
packages:
  yum:
    ImageMagick: []
    ImageMagick-devel: []
commands:
  01-wget:
    command: "wget -O /tmp/ffmpeg.tar.xz https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz"
  02-mkdir:
    command: "if [ ! -d /opt/ffmpeg ] ; then mkdir -p /opt/ffmpeg; fi"
  03-tar:
    command: "tar xvf /tmp/ffmpeg.tar.xz -C /opt/ffmpeg"
  04-ln:
    command: "if [[ ! -f /usr/bin/ffmpeg ]] ; then ln -sf /opt/ffmpeg/ffmpeg-4.2.2-amd64-static/ffmpeg /usr/bin/ffmpeg; fi"
  05-ln:
    command: "if [[ ! -f /usr/bin/ffprobe ]] ; then ln -sf /opt/ffmpeg/ffmpeg-4.2.2-amd64-static/ffprobe /usr/bin/ffprobe; fi"
  06-pecl:
    command: "if [ `pecl list | grep imagick` ] ; then pecl install -f imagick; fi"
Edit:
The above code works for me today 2020-01-03, in Elastic Beanstalk environment Python 3.6 running on 64bit Amazon Linux/2.9.17.
https://johnvansickle.com/ffmpeg/ is linked from the official ffmpeg site.
(The former static build from Gusari does not seem available anymore.)
Warning:
The above will always download the latest release when you deploy. You're also depending on johnvansickle's site being online (to deploy), and his URL not changing. Two alternative approaches would be:
Download the .tar.xz file to your own CDN, and let your deployment download from your own site. (That way, if John's site has a moment of downtime while you're deploying, you're unaffected. And you won't be surprised by the ffmpeg version changing without you realising.)
Specify a version number like https://johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.2.2-amd64-static.tar.xz.
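For that second approach, a rough sketch of pinning and verifying a specific release (the 4.2.2 URL is the example from above; I believe the site also publishes .md5 checksum files, but check what is actually available before relying on it):
cd /tmp
wget https://johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.2.2-amd64-static.tar.xz
wget https://johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.2.2-amd64-static.tar.xz.md5
md5sum -c ffmpeg-4.2.2-amd64-static.tar.xz.md5   # verify the download before unpacking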
You can use a static ffmpeg build (such as johnvansickle's, linked below) together with the sources syntax to automagically download and extract the binaries from the static-build tar into /usr/local/bin. Here's an extremely simple example that has worked for me:
sources:
  /usr/local/bin: https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-64bit-static.tar.xz
The version is not specified in the first command ("01-wget ..."), but it is specified when linking the files. Since this was published, the release has changed from "ffmpeg-3.3.1-64bit-static" to "ffmpeg-3.3.3-64bit-static", so there are two ways to fix this problem:
specify the version for wget, or
strip the containing directory when unpacking:
  03-tar:
    command: "tar xvf /tmp/ffmpeg.tar.xz -C /opt/ffmpeg --strip 1"
Here is the full script:
packages:
  yum:
    ImageMagick: []
    ImageMagick-devel: []
commands:
  01-wget:
    command: "wget -O /tmp/ffmpeg.tar.xz https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-64bit-static.tar.xz"
  02-mkdir:
    command: "if [ ! -d /opt/ffmpeg ] ; then mkdir -p /opt/ffmpeg; fi"
  03-tar:
    command: "tar xvf /tmp/ffmpeg.tar.xz -C /opt/ffmpeg --strip 1"
  04-ln:
    command: "if [[ ! -f /usr/bin/ffmpeg ]] ; then ln -s /opt/ffmpeg/ffmpeg /usr/bin/ffmpeg; fi"
  05-ln:
    command: "if [[ ! -f /usr/bin/ffprobe ]] ; then ln -s /opt/ffmpeg/ffprobe /usr/bin/ffprobe; fi"
  06-pecl:
    command: "if [ `pecl list | grep imagick` ] ; then pecl install -f imagick; fi"
add the following to your .ebextensions/packages.config
packages:
  yum:
    ImageMagick: []
sources:
  /usr/local/bin: http://ffmpeg.org/releases/ffmpeg-4.1.tar.gz
Check cloud-init logs for messages. On a Linux instance, that would be:
grep "03-ffmpeg" /var/log/eb-cfn-init.log
Also, you can log to another file to make errors easier to find:
command: ln -s /usr/local/bin/ffmpeg /usr/bin/ffmpeg >> /var/log/my-init.log
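Note that >> on its own only captures stdout; errors usually go to stderr, which is what you're typically hunting for. A minimal tweak of the line above that also redirects stderr:
command: ln -s /usr/local/bin/ffmpeg /usr/bin/ffmpeg >> /var/log/my-init.log 2>&1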
Untested, but shouldn't it be
tar xzf /usr/local/bin/ffmpeg
without a minus?
In case there are others out there who prefer compiling from source, here are the steps I took to do so (this worked for me with a Java application built with Maven).
Inside your project root directory, create a ".ebextensions" folder containing a file with a name of your choice; it must have the ".config" extension.
Specify the following as the contents of the .ebextensions/yourfilename.config file:
packages:
  yum:
    autoconf: []
    automake: []
    cmake: []
    freetype-devel: []
    gcc: []
    gcc-c++: []
    git: []
    libtool: []
    make: []
    nasm: []
    pkgconfig: []
    zlib-devel: []
    ImageMagick: []
    ImageMagick-devel: []
commands:
  01-mkdir:
    command: |
      if [ ! -d /opt/bin ] ; then mkdir -p /opt/bin; fi
      if [ ! -d /opt/ffmpeg ] ; then mkdir -p /opt/ffmpeg; fi
      if [ ! -d /opt/ffmpeg/ffmpeg-5.1-build ] ; then mkdir -p /opt/ffmpeg/ffmpeg-5.1-build; fi
  02-wget:
    command: |
      if [ ! -d /opt/ffmpeg/ffmpeg-5.1 ] ; then
        if [ ! -f /tmp/ffmpeg-5.1.tar.gz ] ; then wget -O /tmp/ffmpeg-5.1.tar.gz https://ffmpeg.org/releases/ffmpeg-5.1.tar.gz; fi
        tar xvf /tmp/ffmpeg-5.1.tar.gz -C /opt/ffmpeg
      fi
  03-configure:
    cwd: /opt/ffmpeg/ffmpeg-5.1
    command: |
      if [[ ! -f /opt/bin/ffmpeg ]] ; then
        PKG_CONFIG_PATH="/opt/ffmpeg/ffmpeg-5.1-build/lib/pkgconfig" \
        ./configure \
          --prefix="/opt/ffmpeg/ffmpeg-5.1-build" \
          --pkg-config-flags="--static" \
          --bindir="/opt/ffmpeg/ffmpeg-5.1-build/bin" \
          --enable-gpl \
          --enable-libx264
      fi
  04-make:
    cwd: /opt/ffmpeg/ffmpeg-5.1
    command: |
      if [[ ! -f /opt/bin/ffmpeg ]] ; then
        make && make install
      fi
  05-link:
    command: if [[ ! -f /usr/bin/ffmpeg ]] ; then ln -sf /opt/ffmpeg/ffmpeg-5.1-build/bin/ffmpeg /usr/bin/ffmpeg; fi
Here is a summary of the steps that will be executed when your update is deployed to Elastic Beanstalk:
Packages will be configured on your instance, e.g. if your instance doesn't have cmake, it will be installed according to the "packages" section of the .ebextensions/yourfilename.config file.
Once all dependencies are resolved, the commands section will execute (I've discovered that the commands don't depend on one another, e.g. if step #1 fails, step #2 will still be executed).
Command 01: Creates directories where your files will be unarchived to (amongst other things)
Command 02: Downloads the tar from the release website, places it into your instance's temp folder then unarchives it into your instance's /opt/ffmpeg/ folder. It's worth mentioning that the archive contains a folder "ffmpeg-5.1". Thus, when it's unarchived, you'll have a directory "/opt/ffmpeg/ffmpeg-5.1". This entire step won't execute if the folder "/opt/ffmpeg/ffmpeg-5.1" already exists i.e. this step was most likely executed during a previous deploy/update to your instance
Command 03 & Command 04: Configures ffmpeg. You can checkout the ffmpeg documentation for different configurations as per your requirement
Command 05: Creates a symlink from the installation directory into /usr/bin. This is required because the /opt directory is not managed by your ec2-user (it requires root access), and therefore when you run ffmpeg from /opt or its subdirectories your Java application may throw Access Denied or permission-related errors.
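A quick sanity check after deployment (run on the instance, e.g. via eb ssh): if ffmpeg prints its version and build configuration banner, the compile and the symlink both worked.
/usr/bin/ffmpeg -version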

How to launch more than one instance using knife ec2

How do I launch more than one instance using knife ec2? Also, is there any need for a delay between launching the instances?
While launching multiple instances using knife ec2, can we attach different roles to different instances?
Honestly, when it comes to knife ec2 or any of the cloud providers, I use a wrapper bash+tmux script around it.
#!/bin/bash
tmux new-session -s build -n build -d "echo 'start'"
tmux new-window -t build -n backend
tmux send-keys -t build:backend "knife ec2 server create --server-name backend -N backend -E playpen -f 5 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base], recipe[ops::mysql_db_setup], ' -d ubuntu10.04-v4 --private-network" Enter
tmux new-window -t build -n web01
tmux send-keys -t build:web01 "knife ec2 server create --server-name web01 -N web01 -E playpen -f 5 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],role[web]' -d ubuntu10.04-v4 --private-network" Enter
tmux new-window -t build -n web02
tmux send-keys -t build:web02 "knife ec2 server create --server-name web02 -N web02 -E playpen -f 5 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],role[web]' -d ubuntu10.04-v4 --private-network" Enter
tmux new-window -t build -n background01
tmux send-keys -t build:background01 "knife ec2 server create --server-name background01 -N background01 -E playpen -f 2 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],role[background]' -d ubuntu10.04-v4 --private-network" Enter
tmux attach-session -t build
tmux select-window -t build
Or at least something to that effect.
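If you don't need the tmux windows, a rough sketch of the same idea as a plain loop, backgrounding each knife run and waiting for all of them (the names, environment, image ID, and flags are just examples copied from the script above):
#!/bin/bash
# Launch the web nodes in parallel; giving an instance a different role just means passing a different -r run list.
for name in web01 web02; do
  knife ec2 server create --server-name "$name" -N "$name" -E playpen -f 5 \
    -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],role[web]' \
    -d ubuntu10.04-v4 --private-network &
done
wait   # block until every backgrounded knife run has returned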