How to launch more than one instance using knife ec2 - amazon-web-services

How do I launch more than one instance using knife ec2, and is there any need for a delay between launching the instances?
Also, while launching multiple instances with knife ec2, can we attach different roles to different instances?

Honestly, when it comes to knife ec2 or any of the cloud providers, I use a wrapper bash+tmux script around it.
#!/bin/bash
tmux new-session -s build -n build -d "echo 'start'"
tmux new-window -t build -n backend
tmux send-keys -t build:backend "knife ec2 server create --server-name backend -N backend -E playpen -f 5 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],recipe[ops::mysql_db_setup]' -d ubuntu10.04-v4 --private-network" Enter
tmux new-window -t build -n web01
tmux send-keys -t build:web01 "knife ec2 server create --server-name web01 -N web01 -E playpen -f 5 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],role[web]' -d ubuntu10.04-v4 --private-network" Enter
tmux new-window -t build -n web02
tmux send-keys -t build:web02 "knife ec2 server create --server-name web02 -N web02 -E playpen -f 5 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],role[web]' -d ubuntu10.04-v4 --private-network" Enter
tmux new-window -t build -n background01
tmux send-keys -t build:background01 "knife ec2 server create --server-name background01 -N background01 -E playpen -f 2 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],role[background]' -d ubuntu10.04-v4 --private-network" Enter
tmux attach-session -t build
tmux select-window -t build
Or at least something to that effect.
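If you'd rather not depend on tmux, a rough sketch of the same idea using plain background jobs could look like the following. The node names and run lists are just examples, and the image/flavor flags from the script above are omitted for brevity; each instance gets its own run list, which answers the "different roles on different instances" part, and no artificial delay is needed because each bootstrap runs independently.
#!/bin/bash
# Launch several knife ec2 creates in parallel, each with its own run list.
declare -A RUN_LISTS=(
  [web01]='role[base],role[web]'
  [web02]='role[base],role[web]'
  [background01]='role[base],role[background]'
)
for node in "${!RUN_LISTS[@]}"; do
  knife ec2 server create --server-name "$node" -N "$node" -E playpen \
    -r "${RUN_LISTS[$node]}" -d ubuntu10.04-v4 --private-network &
done
wait  # block until every create/bootstrap has finished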

Related

Where do I put `.aws/credentials` for Docker awslogs log-driver (and avoid NoCredentialProviders)?

The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook w/ Linux VM and Google CloudShell):
The .aws folder must be in the filesystem root (/.aws), not in the root user's $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
sudo cat /proc/$DOCKERD_PID/environ
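Since /proc/<pid>/environ is NUL-separated, the raw output is hard to read; splitting it onto lines makes it obvious whether HOME is set at all (a small convenience step, not strictly required):
sudo cat /proc/$DOCKERD_PID/environ | tr '\0' '\n'
If no HOME= entry appears, that is consistent with the daemon expanding ~ to /, which is why /.aws/credentials is the path that actually gets read.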
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root but using /root/.aws/credentials does not work!
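For completeness, the systemd workaround mentioned in the question (setting the credentials as environment variables on the daemon, which does not help on Google CloudShell) would look roughly like this; the key values below are placeholders:
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/aws-credentials.conf <<'EOF'
[Service]
Environment="AWS_ACCESS_KEY_ID=AKIA..."
Environment="AWS_SECRET_ACCESS_KEY=..."
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker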

Where can I find logs of my Compute Engine startup script?

I have a startup script that I believe is failing, but where can I find the logs for it? It doesn't seem to appear in Stackdriver. My startup script looks like this:
#!/bin/bash
pwd
whoami
sysctl -w vm.max_map_count=262144
sysctl -w fs.file-max=65536
ulimit -n 65536
ulimit -u 4096
docker run -d --name sonarqube \
-p 80:9000 \
-e sonar.jdbc.username=xxx \
-e sonar.jdbc.password=xxx \
-e sonar.jdbc.url=xxx \
sonarqube:latest
When a Compute Engine instance starts up, you will find the logs for the startup script in the serial console log. You can read about the serial log here:
https://cloud.google.com/compute/docs/instances/interacting-with-serial-console
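If the gcloud CLI is available, the serial console output can be pulled without opening the Cloud Console; the instance name and zone below are placeholders:
gcloud compute instances get-serial-port-output my-instance --zone us-central1-a
On many recent GCE images the startup-script output is also forwarded to the system journal on the instance itself, so sudo journalctl -u google-startup-scripts.service is worth checking as well.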

SSH execute remote command infacmd.sh is failing

sshpass -p "xxx" ssh -t -t abc#usllpz107.net.com 'cd /opt/tools/informatica/ids/Informatica/10.2.0/isp/bin;infacmd.sh oie importObjects -dn Domain_IDS_Dev -un abc -pd "xxx" -rs MRS_IDS_DEV -sdn LDAP_NP -fp /opt/tools/informatica/ids/Informatica/10.2.0/tomcat/bin/source/mapping_import.xml -cp /opt/tools/informatica/ids/Informatica/10.2.0/tomcat/bin/source/import_control_file.xml'| tee -a logfile.log
I am running the above command from a container in a buildspec, and I have also tested it on an EC2 instance; the command fails with the error: sh: infacmd.sh: command not found
But when I run only sshpass -p "xxx" ssh -t -t abc@usllpz107.net.com and then execute the other command manually on the EC2 instance, the command works.
Make sure the file exists at the path.
Make sure you have access to the file.
Make sure the file is executable, or change the command to invoke it through a shell explicitly:
; /bin/bash infacmd.sh ...
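The usual cause is that a non-interactive SSH command does not source the login profile, so the remote PATH is minimal and the current directory is not on it; the cd succeeds, but a bare infacmd.sh is not found. Invoking the script by an explicit path avoids that, for example (remaining arguments as in the original command):
sshpass -p "xxx" ssh -t -t abc@usllpz107.net.com 'cd /opt/tools/informatica/ids/Informatica/10.2.0/isp/bin; ./infacmd.sh oie importObjects ...' | tee -a logfile.log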

Automate GCP persistent disk initialization

Are there any scripts that automate persistent disks formatting and attaching to the Google Cloud VM instance, instead of doing formatting & mounting steps?
The persistent disk is created with Terraform, which also creates a VM and attaches the disk to it with the attached_disk command.
I am hoping to run a simple script on the VM instance start that would:
check if the attached disk is formatted, and format if needed with ext4
check if the disk is mounted, and mount if not
do nothing otherwise
Have you considered using a startup script on the instance? (I presume you can also add a startup script with Terraform.) You could use an if statement to discover whether the disk is formatted and, if not, run the formatting/mounting commands from the documentation you linked. I realise you have said you do not want to follow the manual steps, but they can be integrated into the startup script to achieve the desired result.
Running the following outputs an empty string if the disk is not formatted:
sudo blkid /dev/sdb
You could therefore use this in a startup script to discover whether the disk is formatted, and perform the formatting/mounting if it is not. For example, you could use something like this (note: if the disk is formatted but not mounted this could be dangerous, and it should not be used if your use case could involve existing disks that have already been formatted):
#!/bin/bash
if sudo blkid /dev/sdb; then
  exit
else
  sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
  sudo mkdir -p /mnt/disks/newdisk
  sudo mount -o discard,defaults /dev/sdb /mnt/disks/newdisk
fi
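If you want to attach this as a startup script outside of Terraform, the gcloud equivalent would be something like the following (instance name and file name are placeholders):
gcloud compute instances add-metadata my-instance --metadata-from-file startup-script=format-disk.sh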
The marked answer did not work for me as the sudo blkid /dev/sdb part always returned a value (hence, true) and the script would exit.
I updated the script to check for the entry in fstab and added safety options to the script.
#!/bin/bash
set -uxo pipefail

MNT_DIR=/mnt/disks/persistent_storage
DISK_NAME=my-disk

# Check if an entry already exists in fstab
grep -q "$MNT_DIR" /etc/fstab
if [[ $? -eq 0 ]]; then # Entry exists
  exit
else
  set -e # The grep above returns non-zero for no matches & we don't want to exit then.
  # Find the persistent disk's device, prefixed by `google-`
  DEVICE_NAME="/dev/$(basename $(readlink /dev/disk/by-id/google-${DISK_NAME}))"
  sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard $DEVICE_NAME
  sudo mkdir -p $MNT_DIR
  sudo mount -o discard,defaults $DEVICE_NAME $MNT_DIR
  # Add fstab entry
  echo UUID=$(sudo blkid -s UUID -o value $DEVICE_NAME) $MNT_DIR ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
fi
Here's the gist if you want to download it - https://gist.github.com/raj-saxena/3dcaa5c0ba0be88ed91ef3fb50d3ce85
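After the next boot you can confirm the script did its job with a couple of quick checks (directory name as defined above; findmnt ships with util-linux on most images):
findmnt /mnt/disks/persistent_storage
grep persistent_storage /etc/fstab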
Formatting, mounting, and adding an entry in /etc/fstab is necessary almost all the time. Here is a solution I came up with that might help others; it can certainly be improved. I added echo commands to explain what each block does.
Regarding the disk name, you can set device_name in your Terraform code when you attach your disks to the instance(s), as mentioned here: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_attached_disk
device_name - (Optional) Specifies a unique device name of your choice that is reflected into the /dev/disk/by-id/google- tree of a Linux operating system running within the instance. This name can be used to reference the device for mounting, resizing, and so on, from within the instance.
#!/bin/bash
DISKS_PATH=/dev/disk/by-id
DISKS=(disk1 disk2)

check_disks () {
  for disk in "${DISKS[@]}"; do
    MOUNT_DIR="/$disk"
    echo "$MOUNT_DIR"
    if sudo blkid $DISKS_PATH/google-${disk}; then
      echo "$disk is already formatted, nothing to do"
      echo "checking if $disk is present in fstab"
      UUID=$(sudo blkid -s UUID -o value $DISKS_PATH/google-${disk})
      grep -q "UUID=${UUID} $MOUNT_DIR" /etc/fstab
      if [[ $? -eq 0 ]]; then
        echo "$disk already present in fstab, continuing with checking mount"
        echo "Now checking if $disk is already mounted"
        grep -qs "$MOUNT_DIR" /proc/mounts
        if [[ $? -eq 0 ]]; then
          echo "$disk is already mounted, so doing nothing with mount"
        else
          echo "$disk is not mounted, so mounting it"
          sudo mkdir -p $MOUNT_DIR
          sudo mount -o discard,defaults $DISKS_PATH/google-${disk} $MOUNT_DIR
        fi
      else
        echo "$disk not present in fstab, so adding it"
        echo UUID="$UUID" $MOUNT_DIR ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
        echo "Now checking if $disk is already mounted"
        grep -qs "$MOUNT_DIR" /proc/mounts
        if [[ $? -eq 0 ]]; then
          echo "$disk is already mounted, so doing nothing with mount"
        else
          echo "$disk is not mounted, so mounting it"
          sudo mkdir -p $MOUNT_DIR
          sudo mount -o discard,defaults $DISKS_PATH/google-${disk} $MOUNT_DIR
        fi
      fi
    else
      echo "Formatting ${disk}"
      sudo mkfs.ext4 $DISKS_PATH/google-${disk}
      echo "Creating directory for ${disk} on $MOUNT_DIR"
      sudo mkdir -p $MOUNT_DIR
      echo "adding $disk in fstab"
      UUID=$(sudo blkid -s UUID -o value $DISKS_PATH/google-${disk})
      echo UUID="$UUID" $MOUNT_DIR ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
      echo "Mounting $disk"
      sudo mount -o discard,defaults $DISKS_PATH/google-${disk} $MOUNT_DIR
    fi
  done
}

check_disks

Not able to start 2 tasks using Dockerfile CMD

I have a question about a Dockerfile with the CMD command. I am trying to set up a server that needs to run 2 commands in the Docker container at startup. I am able to run either one service or the other just fine on its own, but if I try to script it to run 2 services at the same time, it fails. I have tried all sorts of variations of nohup, &, and Linux job backgrounding, but I haven't been able to solve it.
Here is my project where I am trying to achieve this:
https://djangofan.github.io/mountebank-with-ui-node/
#entryPoint.sh
#!/bin/bash
nohup /bin/bash -c "http-server -p 80 /ui" &
nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection" &
jobs -l
Displays this output but the ports are not listening:
djangofan#MACPRO ~/workspace/mountebank-container (master)*$ ./run-container.sh
f5c50afd848e46df93989fcc471b4c0c163d2f5ad845a889013d59d951170878
f5c50afd848e46df93989fcc471b4c0c163d2f5ad845a889013d59d951170878 djangofan/mountebank-example "/bin/bash -c /scripts/entryPoint.sh" Less than a second ago Up Less than a second 0.0.0.0:2525->2525/tcp, 0.0.0.0:4546->4546/tcp, 0.0.0.0:5555->5555/tcp, 2424/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:2424->80/tcp nervous_lalande
[1]- 5 Running nohup /bin/bash -c "http-server -p 80 /ui" &
[2]+ 6 Running nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection" &
And here is my Dockerfile:
FROM node:8-alpine
ENV MOUNTEBANK_VERSION=1.14.0
RUN apk add --no-cache bash gawk sed grep bc coreutils
RUN npm install -g http-server
RUN npm install -g mountebank@${MOUNTEBANK_VERSION} --production
EXPOSE 2525 2424 4546 5555 9000
ADD imposters /mb/
ADD ui /ui/
ADD *.sh /scripts/
# these work when run one or the other
#CMD ["http-server", "-p", "80", "/ui"]
#CMD ["mb", "--port", "2525", "--configfile", "/mb/imposters.ejs", "--allowInjection"]
# this doesn't work yet
CMD ["/bin/bash", "-c", "/scripts/entryPoint.sh"]
One process inside the Docker container has to run in the foreground, because the container only keeps running while its main process is running.
The /scripts/entryPoint.sh should be:
#!/bin/bash
nohup /bin/bash -c "http-server -p 80 /ui" &
nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection"
Everything else is fine in your Dockerfile.
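A close variant, if you prefer to keep both commands symmetrical, is to background both and have the script block on them; the container stays up because the script itself remains the main process. This is just a sketch of the same idea, not the only way to do it:
#!/bin/bash
# Start both services in the background, then wait on them so the script
# (and therefore the container) keeps running.
http-server -p 80 /ui &
mb --port 2525 --configfile /mb/imposters.ejs --allowInjection &
wait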