Installing a Monitoring Agent on GCE - google-cloud-platform

I'm trying to install the Monitoring Agent on my f1-micro instance running Debian 9 and a dockerised application. I'm following the https://cloud.google.com/monitoring/agent/install-agent#linux-install tutorial. When I execute sudo bash install-monitoring-agent.sh, I get the message "Unidentifiable or unsupported platform".
Am I doing anything wrong?

Inside the bash script you will see that it looks at the OS release in /etc/os-release; see the fallback branch below:
# Fallback for systems lacking /etc/os-release.
if [[ -f /etc/debian_version ]]; then
  echo 'Installing agent for Debian.'
  install_for_debian
elif [[ -f /etc/redhat-release ]]; then
  echo 'Installing agent for Red Hat.'
  install_for_redhat
else
  echo >&2 'Unidentifiable or unsupported platform.'
  echo >&2 "See ${MONITORING_AGENT_SUPPORTED_URL} for a list of supported platforms."
  exit 1
fi
Check /etc/os-release on your instance.
Normally Debian is supported; I installed an f1-micro machine with Debian and it worked perfectly. The output should look like this:
~$ sudo cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
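If you still get the error, a quick sanity check before re-running the installer is to confirm the file exists and reports Debian (a minimal sketch; it only inspects the ID field shown above):
if grep -q '^ID=debian' /etc/os-release 2>/dev/null; then
  echo '/etc/os-release looks fine; the installer should detect Debian.'
else
  echo 'Missing or unexpected /etc/os-release:'
  cat /etc/os-release
fi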


Jupyterlab health check failing for AI Notebook

The JupyterLab health check is intermittently failing for my AI Notebook. I noticed that Jupyter service still works even when the health check fails, since my queries still get executed successfully, and I'm able to open my notebook by clicking "Open Jupyterlab".
How can I debug why this health check is failing?
This is a new feature in Notebooks: there is now a health agent which runs inside the notebook instance, and this specific check verifies the Jupyter service status:
The check is:
if [ "$(systemctl show --property ActiveState jupyter | awk -F "=" '{print $2}')" == "active" ]; then echo 1; else echo -1; fi
You can use gcloud notebooks instances get-health to get more details.
What DLVM version/image are you using, and which machine type?
If you are using a container-based image, the check is:
docker exec -i payload-container ps aux | grep /opt/conda/bin/jupyter-lab | grep -v grep >/dev/null 2>&1; test $? -eq 0 && echo 1 || echo -1
And make sure report-container-health and report-system-health are both set to True in the instance metadata.
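To dig further yourself, you can reproduce the service check from an SSH session on the instance and then pull the full health report from the API (a minimal sketch; MY_INSTANCE and the location are placeholders):
# On the instance: mimic the health agent's Jupyter service check
if [ "$(systemctl show --property ActiveState jupyter | awk -F "=" '{print $2}')" == "active" ]; then echo healthy; else echo unhealthy; fi
# From Cloud Shell or your workstation: full health report
gcloud notebooks instances get-health MY_INSTANCE --location=us-central1-a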

Running a Qt GUI in a docker container

So, I have a C++ GUI based on Qt5 which I want to run from inside a Docker container.
When I try to start it with
docker run --rm -it my_image
this results in the error output
qt.qpa.xcb: could not connect to display localhost:10.0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.
So I searched for how to do this. I found GUI Qt Application in a docker container, and based on that called it with
QT_GRAPHICSSYSTEM="native" docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix my_image
which resulted in the same error.
Then I found Can you run GUI applications in a Docker container?.
The accepted answer in there seems to be specific to certain applications such as Firefox?
Scrolling further down, I found a solution that tells me to set X11UseLocalhost no in sshd_config and then call it like
docker run -v $HOME:/hosthome:ro -e XAUTHORITY=/hosthome/.Xauthority -e DISPLAY=$(echo $DISPLAY | sed "s/^.*:/$(hostname -i):/") my_image
this produces a slight variation of the error above:
qt.qpa.xcb: could not connect to display 127.0.1.1:13.0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.
Following another answer, I added ENV DISPLAY :0 to my Dockerfile and called it with
xhost +
XSOCK=/tmp/.X11-unix/X0
docker run -v $XSOCK:$XSOCK my_image
This time, the first line of my error was qt.qpa.xcb: could not connect to display :0.
Then I tried another answer, added
RUN export uid=0 gid=0 && \
mkdir -p /home/developer && \
echo "developer:x:${uid}:${gid}:Developer,,,:/home/developer:/bin/bash" >> /etc/passwd && \
echo "developer:x:${uid}:" >> /etc/group && \
mkdir /etc/sudoers.d/ && \
echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer && \
chmod 0440 /etc/sudoers.d/developer && \
chown ${uid}:${gid} -R /home/developer
to my Dockerfile and called docker run -ti --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix my_image, again same error.
I also tried several of the ways described in http://wiki.ros.org/docker/Tutorials/GUI, same error.
Am I doing something wrong? Note that I'm working on a remote machine via SSH, with X11 forwarding turned on of course (and the application works just fine outside of Docker). Also note that what I'm writing is a client-server application, and the server part, which needs no GUI elements but shares most of the source code, works just fine from its container.
I hope for a solution that doesn't require me to change the system as the reason I use Docker in the first place is for users of my application to get it running without much hassle.
You have multiple errors that are covering each other. First of all, make sure you have the correct libraries installed. If your Docker image is Debian-based, this usually looks like:
RUN apt-get update && \
apt-get install -y libqt5gui5 && \
rm -rf /var/lib/apt/lists/*
ENV QT_DEBUG_PLUGINS=1
Note the environment variable QT_DEBUG_PLUGINS. This will make the output much more helpful and cite any missing libraries. In the now very verbose output, look for something like this:
Cannot load library /usr/local/lib/python3.9/site-packages/PySide2/Qt/plugins/platforms/libqxcb.so: (libxcb-icccm.so.4: cannot open shared object file: No such file or directory)
The part in parentheses (here libxcb-icccm.so.4) is the missing library file; you can find the package it belongs to with your distribution's package manager (e.g. dpkg -S libxcb-icccm.so.4 on Debian).
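For example, the lookup and install on a Debian-based image might look like this (the package name below is simply what that particular library resolves to; yours will differ):
dpkg -S libxcb-icccm.so.4
# libxcb-icccm4:amd64: /usr/lib/x86_64-linux-gnu/libxcb-icccm.so.4
apt-get install -y libxcb-icccm4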
Next, start docker like this (can be one line, separated for clarity):
docker run \
-e "DISPLAY=$DISPLAY" \
-v "$HOME/.Xauthority:/root/.Xauthority:ro" \
--network host \
YOUR_APP_HERE
Make sure to replace /root with the guest user's HOME directory.
Advanced graphics (e.g. games, 3D modelling etc.) may also require mounting of /dev with -v /dev:/dev. Note that this will give the guest root access to your hard disks though, so you may want to copy/recreate the devices in a more fine-grained way.
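For instance, if only GPU-accelerated rendering is needed, passing just the DRI render device instead of all of /dev may be enough (a sketch, not part of the original answer):
docker run \
-e "DISPLAY=$DISPLAY" \
-v "$HOME/.Xauthority:/root/.Xauthority:ro" \
--network host \
--device /dev/dri:/dev/dri \
YOUR_APP_HERE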
On the host system, allow X connections from Docker: xhost +local:root
Then start your container:
docker run -it \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
--name my_app \
--rm \
<your_image>
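Once you're done, you can revoke that access again on the host:
xhost -local:root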

Elastic Beanstalk Extensions: When does a command complete?

I have an AWS Elastic Beanstalk setup with some .ebextensions files with some container_commands in them. One of those commands is a script. The script completes, but the next command doesn't run.
$ pstree -p | grep cfn-
|-cfn-hup(2833)-+-command-process(10161)---command-process(10162)-+-cfn-init(10317)---bash(10428)
$ ps 10317
PID TTY STAT TIME COMMAND
10317 ? S 0:00 /usr/bin/python2.7 /opt/aws/bin/cfn-init -s arn:aws:cloudformation:us-east-1:278460835609:stack/awseb-e-4qwsypzv7u-stack/f8ab55f0-393c-11e9-8907-0ae8cc519968 -r AWSEBAutoScalingGroup --region us-east-1 --configsets Infra-EmbeddedPostBuild
$ ps 10428
PID TTY STAT TIME COMMAND
10428 ? Z 0:00 [bash] <defunct>
As you can see, my script is a defunct zombie, but cfn-init isn't making a wait(2) syscall for it.
When I run the script from the command line, it terminates properly.
I have to assume cfn-init is getting SIGCHLD. Why isn't it wait(2)ing and moving on?
Also, is there a better way to investigate this? I've been looking at running processes and reading the completely unhelpful /var/log/eb-* logs.
FWIW, the script is very simple:
#!/usr/bin/env bash
# Create a fifo, start the lock-holder in the background,
# then block until it writes something to the fifo.
mkfifo ~ec2-user/fifo
nohup ~ec2-user/holdlock.sh &
read < ~ec2-user/fifo
And the thing it nohups is pretty simple:
#!/usr/bin/env bash
(echo 'select pg_advisory_lock(43110);'; sleep 10m) |
PGPASSWORD=$RDS_PASSWORD psql -h $RDS_HOSTNAME -d $RDS_DB_NAME -U $RDS_USERNAME |
tee ~ec2-user/nhlog > ~ec2-user/fifo
A workaround for this is to move the series of commands into a single shell script and invoke that as a single command. This still doesn't explain what ebextensions actually does, but it lets me move forward.
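For anyone wanting the shape of that workaround, a minimal sketch of the .ebextensions config (file and script names here are placeholders, not from my actual setup):
# .ebextensions/01_deploy.config
files:
  "/tmp/deploy-steps.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      set -e
      # ...each former container_command goes here, in order...
container_commands:
  01_run_everything:
    command: "/tmp/deploy-steps.sh"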

Starting Mesos slave in Docker on Amazon Linux results in cgroup error

I'm taking a docker-compose setup that's working on my Mac, and trying to get it running on a couple of AWS EC2 instances. The Mesos master Docker container came up fine, as did Zookeeper/Marathon, but the Mesos slave is having trouble:
$ sudo docker run --name mesos-slave1 -p 5051:5051 \
-e "MESOS_LOG_DIR=/var/log" -e "MESOS_MASTER=zk://10.x.x.x:2181/mesos" \
-e "MESOS_HOSTNAME=172.17.42.1" -e "MESOS_PORT:5051" \
-e "MESOS_ISOLATOR=cgroups/cpu,cgroups/mem" -e "MESOS_CONTAINERIZERS=docker,mesos" \
-e "MESOS_EXECUTOR_REGISTRATION_TIMEOUT:5mins" \
redjack/mesos-slave:0.21.0
I0708 19:26:09.559125 1 logging.cpp:172] INFO level logging started!
I0708 19:26:09.569294 1 main.cpp:142] Build: 2014-11-22 05:29:57 by root
I0708 19:26:09.569327 1 main.cpp:144] Version: 0.21.0
I0708 19:26:09.569340 1 main.cpp:147] Git tag: 0.21.0
I0708 19:26:09.569350 1 main.cpp:151] Git SHA: ab8fa655d34e8e15a4290422df38a18db1c09b5b
Failed to create a containerizer: Could not create DockerContainerizer: Failed to find a mounted cgroups hierarchy for the 'cpu' subsystem; you probably need to mount cgroups manually!
I've done various searches, and have tried things like mounting /sys and similar approaches, with no luck. I've even tried running docker in privileged mode just as a sanity check. My docker-compose.yml specifies:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /usr/local/bin/docker:/usr/local/bin/docker
- /sys/fs/cgroup:/sys/fs/cgroup
And I started there, but mounting those on the EC2 instance doesn't work either. Since it works on my Mac, there's clearly some difference between OS X and Amazon Linux that's connected with the issue, but I haven't been able to determine a work-around. In case it's handy for OS identification purposes, on the EC2 instance it says:
$ uname -a
Linux ip-10-x-x-x 3.14.35-28.38.amzn1.x86_64 #1 SMP Wed Mar 11 22:50:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
I may end up just installing Mesos directly on the EC2 instance, but it would be very convenient to avoid that by using the Docker container, of course.
If anyone's stumbled across this and found a solution, please share! I'll post back if I figure something out.
I was just about to post this question when I found one more thing to try, and it looks like that did it. I figured I'd go ahead and post anyway after putting in the work, in the hope that the info saves someone else the time I spent figuring this out. The key seems to be mounting /cgroup as a volume, which is presumably a difference between OS X and Amazon Linux (a CentOS variant). The final docker incantation that seems to be working:
$ sudo docker run --privileged=true --name mesos-slave1 -p 5051:5051 \
-e "MESOS_LOG_DIR=/var/log" -e "MESOS_MASTER=zk://10.x.x.x:2181/mesos" \
-e "MESOS_HOSTNAME=172.17.42.1" -e "MESOS_PORT:5051" \
-e "MESOS_ISOLATOR=cgroups/cpu,cgroups/mem" \
-e "MESOS_CONTAINERIZERS=docker,mesos" \
-e "MESOS_EXECUTOR_REGISTRATION_TIMEOUT:5mins" \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/local/bin/docker:/usr/local/bin/docker -v /sys:/sys \
-v /cgroup:/cgroup redjack/mesos-slave:0.21.0
I'm still experimenting so I can't say for sure whether the privileged mode is required, and whether some of the other volumes are really needed (the docker stuff). But if this saves someone else some time, that's great.
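If someone hits the same error on a different host, a quick way to see which cgroup hierarchies the host actually exposes (and therefore which paths need to be bind-mounted into the container) is:
# On the host: list mounted cgroup hierarchies and likely mount points
mount -t cgroup
ls -d /cgroup /sys/fs/cgroup 2>/dev/null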

How to launch or run an apk installed via adb on Google Glass

I wanted to share my experience of how to run an Android app on Google Glass after installing it. In case an app does not have voice triggers defined and cannot be launched with 'OK Glass...', how can you run it?
I got this answer from posts in Stackoverflow including this post: How to start an application using android ADB tools?
After connecting your Glass to your computer, you have two options (these have been tested on a Mac):
1) How to launch/run an apk installed via adb if we know the activity/package name:
adb shell am start -n com.package.name/com.package.name.ActivityName
adb shell am start -n com.package.name/.ActivityName
** If you need to find out the list of installed packages on your glass:
adb shell 'pm list packages -f'
2) How to launch/run an apk if we don't know the activity/package name?
a) create a file named "adb-run.sh" with these 3 lines:
pkg=$(aapt dump badging $1|awk -F" " '/package/ {print $2}'|awk -F"'" '/name=/ {print $2}')
act=$(aapt dump badging $1|awk -F" " '/launchable-activity/ {print $2}'|awk -F"'" '/name=/ {print $2}')
adb shell am start -n $pkg/$act
b) "chmod +x adb-run.sh" to make it executable
c) adb-run.sh myapp.apk
Note: This requires that you have aapt in your path. You can find it under the new build tools folder in the SDK:
echo 'export PATH=$PATH:/Users/USERNAME/LOCATIONofSDK/platform-tools/:/Users/USERNAME/LOCATIONofSDK/build-tools/android-4.3/' >> ~/.bash_profile
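As a side note (not from the posts I referenced): if you already know the package name but not the activity, you can also let the monkey tool fire the launcher intent for you:
adb shell monkey -p com.package.name -c android.intent.category.LAUNCHER 1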
You might consider using ChromeADB ... a great extension that does not need all the command line work.
You can use launchy.
Just install this app on Glass and then go to the settings... Launchy will start and let you run the apps you've installed.
https://github.com/kaze0/launchy