Where can I find logs of my Compute Engine startup script? - google-cloud-platform

I have a startup script I believe is failing, but where can I find the logs for it? It doesn't seem to appear in Stackdriver. My startup script looks like this:
#!/bin/bash
pwd
whoami
sysctl -w vm.max_map_count=262144
sysctl -w fs.file-max=65536
ulimit -n 65536
ulimit -u 4096
docker run -d --name sonarqube \
-p 80:9000 \
-e sonar.jdbc.username=xxx \
-e sonar.jdbc.password=xxx \
-e sonar.jdbc.url=xxx \
sonarqube:latest

When a Compute Engine instance starts up, you will find the output of its startup script in the serial console log. You can read about the serial console here:
https://cloud.google.com/compute/docs/instances/interacting-with-serial-console
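If you prefer the command line, you can also pull the serial log with gcloud; a minimal example, where the instance name and zone are placeholders:

gcloud compute instances get-serial-port-output INSTANCE_NAME --zone=ZONE --port=1

On recent Debian/Ubuntu images the startup script is run by the guest environment as a systemd unit, so its output should also be visible on the instance itself with something like:

sudo journalctl -u google-startup-scripts.service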

Related

Where do I put `.aws/credentials` for Docker awslogs log-driver (and avoid NoCredentialProviders)?

The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
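For reference, the systemd workaround I mention above is roughly a drop-in file for the Docker unit plus a daemon restart (credential values masked):

# /etc/systemd/system/docker.service.d/aws-credentials.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=xxx"
Environment="AWS_SECRET_ACCESS_KEY=xxx"

sudo systemctl daemon-reload
sudo systemctl restart docker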
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook w/ Linux VM and Google CloudShell):
The .aws folder must be in the root of the filesystem (/), not in the root user's $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
sudo cat /proc/$DOCKERD_PID/environ
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root but using /root/.aws/credentials does not work!
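If you want to confirm this on your own system, you can check where the daemon process resolves paths from; a rough check, reusing the PID lookup from above:

# Does the daemon even have HOME set? (often it does not)
sudo cat /proc/$DOCKERD_PID/environ | tr '\0' '\n' | grep -i '^HOME'
# Its working directory is typically /
sudo ls -l /proc/$DOCKERD_PID/cwd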

Can minio be run as nonroot user in a docker container?

Can someone let me know if MinIO can be run as a non-root user?
I found some articles saying it can run only as root and not as a non-root user.
Please advise if anyone has an idea of how this can be achieved, if possible.
From the MinIO docs (Run MinIO Docker as a regular user), you can provide the --user argument to the docker run command.
An example for Linux/macOS, from the doc:
mkdir -p ${HOME}/data
docker run -p 9000:9000 \
--user $(id -u):$(id -g) \
--name minio1 \
-e "MINIO_ROOT_USER=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_ROOT_PASSWORD=wJalrXUtnFEMIK7MDENGbPxRfiCYEXAMPLEKEY" \
-v ${HOME}/data:/data \
minio/minio server /data
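To verify the server is not running as root afterwards, something like this should show the process owned by your UID rather than root (container name as above):

docker top minio1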

SSH execute remote command infacmd.sh is failing

sshpass -p "xxx" ssh -t -t abc@usllpz107.net.com 'cd /opt/tools/informatica/ids/Informatica/10.2.0/isp/bin;infacmd.sh oie importObjects -dn Domain_IDS_Dev -un abc -pd "xxx" -rs MRS_IDS_DEV -sdn LDAP_NP -fp /opt/tools/informatica/ids/Informatica/10.2.0/tomcat/bin/source/mapping_import.xml -cp /opt/tools/informatica/ids/Informatica/10.2.0/tomcat/bin/source/import_control_file.xml'| tee -a logfile.log
I am running the above command from a container in buildspec, and have also tested it on an EC2 instance; the command fails with the error: sh: infacmd.sh: command not found
But when I run just sshpass -p "xxx" ssh -t -t abc@usllpz107.net.com and then execute the other command manually on the EC2 instance, the command works.
Make sure the file exists at that path.
Make sure you have access to the file.
Make sure the file is executable, and note that the remote shell will not find infacmd.sh on its PATH just because you cd into its directory, so invoke it as ./infacmd.sh, or change the command to
; /bin/bash infacmd.sh ...
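A sketch of the adjusted command, keeping the host and paths from the question (remaining options abbreviated):

sshpass -p "xxx" ssh -t -t abc@usllpz107.net.com \
  'cd /opt/tools/informatica/ids/Informatica/10.2.0/isp/bin && ./infacmd.sh oie importObjects ...' | tee -a logfile.log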

Simple docker container fails to run with no indication

This is the contents of my dockerfile:
FROM debian:jessie
RUN mkdir -p /var/www/html && \
mkdir -p /var/log && \
mkdir -p /var/lib/mysql && \
mkdir -p /etc/apache2/sites-enabled && \
chmod 0777 /var/lib/mysql
VOLUME ["/var/www/html", "/var/log", "/var/lib/mysql", "/etc/apache2/sites-enabled"]
Run with:
docker run --name data \
-v ~/test/www/:/var/www/html \
-v ~/test/logs/:/var/log \
-v ~/test/vhosts/:/etc/apache2/sites-enabled \
-v ~/test/mysql/:/var/lib/mysql \
deano87/dockerfiles:data
And it fails to start up. There is nothing printed out anywhere. I have a far more complicated docker image built and running. I followed the same process to build and run etc. I don't see why this one simply fails for no apparent reason?
Sounds like a 'data volume container'; see Creating and mounting a data volume container. You only create such a container, you do not actually run it, because there is nothing to run. It is just a file system.
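A minimal sketch of that pattern, reusing the image and mounts from the question: create the container once with docker create, then attach its volumes to the containers that actually run something via --volumes-from:

docker create --name data \
  -v ~/test/www/:/var/www/html \
  -v ~/test/logs/:/var/log \
  -v ~/test/vhosts/:/etc/apache2/sites-enabled \
  -v ~/test/mysql/:/var/lib/mysql \
  deano87/dockerfiles:data

docker run --rm --volumes-from data debian:jessie ls /var/www/html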

How to launch more than one instance using knife ec2

How do I launch more than one instance using knife ec2? Also, is there any need for a delay between launching the instances?
While launching multiple instances using knife ec2, can we attach different roles to different instances?
Honestly, when it comes to knife ec2 or any of the cloud providers, I use a wrapper bash+tmux script around it.
#!/bin/bash
tmux new-session -s build -n build -d "echo 'start'"
tmux new-window -t build -n backend
tmux send-keys -t build:backend "knife ec2 server create --server-name backend -N backend -E playpen -f 5 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base], recipe[ops::mysql_db_setup], ' -d ubuntu10.04-v4 --private-network" Enter
tmux new-window -t build -n web01
tmux send-keys -t build:web01 "knife ec2 server create --server-name web01 -N web01 -E playpen -f 5 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],role[web]' -d ubuntu10.04-v4 --private-network" Enter
tmux new-window -t build -n web02
tmux send-keys -t build:web02 "knife ec2 server create --server-name web02 -N web02 -E playpen -f 5 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],role[web]' -d ubuntu10.04-v4 --private-network" Enter
tmux new-window -t build -n background01
tmux send-keys -t build:background01 "knife ec2 server create --server-name background01 -N background01 -E playpen -f 2 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],role[background]' -d ubuntu10.04-v4 --private-network" Enter
tmux attach-session -t build
tmux select-window -t build
Or at least something to that effect.