Bosh Lite cf-deployment VirtualBox with MITM Proxy Certificate - cloud-foundry

I'm trying to deploy CF locally through a VirtualBox bosh-lite VM, but I'm running into the corporate proxy injecting a self-signed certificate before traffic reaches the internet.
I've SSH'd into the box and added the CA to the trusted certs at the OS level, but I'm still getting "untrusted certificates in chain" errors.
Is there somewhere I can put the corporate CA within the configuration so all of the items will download/install successfully?

BOSH deploys things like CF, Zookeeper, Kubernetes, etc. to "clouds" by creating "machines", installing the appropriate software in those "machines", and running it there. On a "typical" cloud like Amazon Web Services or VMware vSphere, a "machine" is an ordinary virtual machine.
BOSH can also treat various container runtimes like Docker, Kubernetes, or Garden as "clouds", and in the BOSH-Lite case it is targeting Garden as the cloud. So in the BOSH-Lite case, the "machines" are actually Linux containers running inside the VirtualBox VM. That's why installing your certs at the OS level of the VM does not affect things running as containers within the VM.
BOSH does have a native way of injecting trusted certs into each machine it manages, using the trusted_certs property. Assuming you followed these docs to install BOSH-Lite, you can update the create-env command from this:
bosh create-env ~/workspace/bosh-deployment/bosh.yml \
--state ./state.json \
-o ~/workspace/bosh-deployment/virtualbox/cpi.yml \
-o ~/workspace/bosh-deployment/virtualbox/outbound-network.yml \
-o ~/workspace/bosh-deployment/bosh-lite.yml \
-o ~/workspace/bosh-deployment/bosh-lite-runc.yml \
-o ~/workspace/bosh-deployment/uaa.yml \
-o ~/workspace/bosh-deployment/credhub.yml \
-o ~/workspace/bosh-deployment/jumpbox-user.yml \
--vars-store ./creds.yml \
-v director_name=bosh-lite \
-v internal_ip=192.168.50.6 \
-v internal_gw=192.168.50.1 \
-v internal_cidr=192.168.50.0/24 \
-v outbound_network_name=NatNetwork
to this:
bosh create-env ~/workspace/bosh-deployment/bosh.yml \
--state ./state.json \
-o ~/workspace/bosh-deployment/virtualbox/cpi.yml \
-o ~/workspace/bosh-deployment/virtualbox/outbound-network.yml \
-o ~/workspace/bosh-deployment/bosh-lite.yml \
-o ~/workspace/bosh-deployment/bosh-lite-runc.yml \
-o ~/workspace/bosh-deployment/uaa.yml \
-o ~/workspace/bosh-deployment/credhub.yml \
-o ~/workspace/bosh-deployment/jumpbox-user.yml \
-o ~/workspace/bosh-deployment/openstack/trusted-certs.yml \
--vars-store ./creds.yml \
-v director_name=bosh-lite \
-v internal_ip=192.168.50.6 \
-v internal_gw=192.168.50.1 \
-v internal_cidr=192.168.50.0/24 \
-v outbound_network_name=NatNetwork \
--var-file=openstack_ca_cert=</PATH/TO/YOUR/CERT>
This adds two lines:
-o ~/workspace/bosh-deployment/openstack/trusted-certs.yml
--var-file=openstack_ca_cert=</PATH/TO/YOUR/CERT>
Even though the names say openstack, there's nothing OpenStack-specific about those files. The first line (with -o) modifies the base manifest for BOSH to include a section setting the director.trusted_certs property, but it doesn't actually set the value; it parameterizes it as a variable called openstack_ca_cert. The second line (with --var-file) then sets that value to the contents of the given file.
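If you want to sanity-check what those two flags do before running create-env, you can render the manifest locally with bosh interpolate and look for the property (this check is my own suggestion, not part of the linked docs):
# Apply just the trusted-certs ops file to the base manifest and confirm that
# director.trusted_certs now carries your corporate CA
bosh int ~/workspace/bosh-deployment/bosh.yml \
  -o ~/workspace/bosh-deployment/openstack/trusted-certs.yml \
  --var-file=openstack_ca_cert=</PATH/TO/YOUR/CERT> \
  | grep -A 3 trusted_certs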
After you run that create-env command, it will update BOSH-Lite, but it won't update the things already deployed by BOSH, e.g. CF. You'll need to re-run the deploy commands for CF to make sure it picks up those trusted certs, as sketched below.
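For example, if you deployed CF from cf-deployment using the standard BOSH-Lite instructions, the re-deploy would look roughly like this (the environment alias, deployment name, and paths are assumptions based on those docs, not details from the question):
bosh -e vbox -d cf deploy ~/workspace/cf-deployment/cf-deployment.yml \
  -o ~/workspace/cf-deployment/operations/bosh-lite.yml \
  -v system_domain=bosh-lite.com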

Related

AWS Glue 3.0 container not working for Jupyter notebook local development

I am working on Glue in AWS and trying to test and debug in local dev. I followed the instructions here https://aws.amazon.com/blogs/big-data/developing-aws-glue-etl-jobs-locally-using-a-container/ to develop a Glue job locally. In that post they use the Glue 1.0 image for testing, and it works as it should. However, when I load the Glue 3.0 image and try to develop with it, following the same guidance steps, I can't open the Jupyter notebook on :8888 like the post says, even though every step seems correct.
Here is my command to start a Jupyter notebook on the Glue 3.0 container:
docker run -itd -p 8888:8888 -p 4040:4040 -v ~/.aws:/root/.aws:ro --name glue3_jupyter amazon/aws-glue-libs:glue_libs_3.0.0_image_01 /home/jupyter/jupyter_start.sh
Nothing shows up on http://localhost:8888.
I still have no idea why. I understand the differences between Glue versions; I just want to develop and test on the latest one. Has anybody run into the same issue?
Thanks.
It seems that the Glue 3.0 image has some issues with SSL. A workaround for working locally is to disable SSL (you also have to change the script paths, since the documentation is not updated).
$ docker run -it -p 8888:8888 -p 4040:4040 -e DISABLE_SSL="true" \
-e AWS_ACCESS_KEY_ID=$(aws --profile default configure get aws_access_key_id) \
-e AWS_SECRET_ACCESS_KEY=$(aws --profile default configure get aws_secret_access_key) \
-e AWS_DEFAULT_REGION=$(aws --profile default configure get region) \
--name glue_jupyter amazon/aws-glue-libs:glue_libs_3.0.0_image_01 \
/home/glue_user/jupyter/jupyter_start.sh
After a few seconds you should have a working Jupyter notebook instance running at http://127.0.0.1:8888.
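If nothing comes up, a quick sanity check is to look at the container output and probe the port (the container name glue_jupyter is the one used in the command above):
# Follow the container logs and wait for the Jupyter startup banner
docker logs -f glue_jupyter
# In another shell, this should return an HTTP response once Jupyter is listening
curl -sI http://127.0.0.1:8888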

gdb exits immediately `Process finished with exit code 1` or lldb `'A packet returned error 8'` on docker

This took me full days to find, so I am posting this for future reference.
I am developing C++ on a Docker image, using CLion.
My code is compiled in debug mode and runs fine in run mode, but when I try to debug, the process exits immediately with the very informative
Process finished with exit code 1
When switching the debugger from GDB to LLDB, trying to debug still exits, but yields a popup in CLion:
'A packet returned error 8'
The same code debugs fine on a local computer.
The docker run command is
RUN_CMD="docker run --group-add ${DOCKER_GROUP_ID} \
--env HOME=${HOME} \
--env="DISPLAY" \
--entrypoint /bin/bash \
--interactive \
--net "host" \
--rm \
--tty \
--user=${USER_ID}:${GROUP_ID} \
--volume ${HOME}:${HOME} \
--volume /mnt:/mnt \
$(cat ${HOME}/personal-uv-docker-flags) \
-v "${HOME}/.Xauthority:${HOME}/.Xauthority:rw" \
--volume /var/run/docker.sock:/var/run/docker.sock \
--workdir ${HOME} \
${IMAGE} $(${DIR}/impl/known-tools.py cmd-line ${TOOL})"
How to debug C++ on docker?
Eventually, I found this comment, which led me to this blog post, where I learned that C++ debugging is disallowed in Docker by default.
The arguments --cap-add=SYS_PTRACE and --security-opt seccomp=unconfined are required for C++ memory profiling and debugging in Docker.
I added
--cap-add=SYS_PTRACE --security-opt seccomp=unconfined
to the docker run command, and the debugger was able to connect.
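For reference, a minimal sketch of where those flags go in a docker run invocation (the image name and shell here are placeholders, not part of the original command):
docker run --rm -it \
  --cap-add=SYS_PTRACE \
  --security-opt seccomp=unconfined \
  my_cpp_dev_image /bin/bash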

What does `gcloud compute instances create` do? - POST https://compute.googleapis.com…

Some things are very easy to do with the gcloud CLI, like:
$ export network='default' instance='example-instance' firewall='ssh-http-icmp-fw'
$ gcloud compute networks create "$network"
$ gcloud compute firewall-rules create "$firewall" --network "$network" \
--allow 'tcp:22,tcp:80,icmp'
$ gcloud compute instances create "$instance" --network "$network" \
--tags 'http-server' \
--metadata \
startup-script='#! /bin/bash
# Installs apache and a custom homepage
apt update
apt -y install apache2
cat <<EOF > /var/www/html/index.html
<html><body><h1>Hello World</h1>
<p>This page was created from a start up script.</p>
</body></html>
EOF'
$ # sleep 15s
$ curl $(gcloud compute instances list --filter='name=('"$instance"')' \
--format='value(EXTERNAL_IP)')
(to be exhaustive in commands, tear down with)
$ gcloud compute networks delete -q "$network"
$ gcloud compute firewall-rules delete -q "$firewall"
$ gcloud compute instances delete -q "$instance"
…but it's not clear what the equivalent commands are from the REST API side. Especially considering the HUGE number of options, e.g., at https://cloud.google.com/compute/docs/reference/rest/v1/instances/insert
So I was thinking of just stealing whatever gcloud does internally when I write my custom REST API client for Google Cloud's Compute Engine.
Running rg, I found a bunch of lines like this one:
https://github.com/googleapis/google-auth-library-python/blob/b1a12d2/google/auth/transport/requests.py#L182
Specifically these 5 in lib/third_party:
google/auth/transport/{_aiohttp_requests.py,requests.py,_http_client.py,urllib3.py}
google_auth_httplib2/__init__.py
Below each of them I added _LOGGER.debug("With body: %s", body). But there seems to be some fancy batching going on because I almost never get that With body line 😞
Now messing with Wireshark to see what I can find, but I'm confident this is a bad rabbit hole to fall down. Ditto for https://console.cloud.google.com/home/activity.
How can I find out what body is being set by gcloud?
Add the command line option --log-http to see the REST API parameters.
There is no simple answer as the CLI changes over time. New features are added, removed, etc.
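For example, reusing the variables from the question, appending --log-http to the instance-creation command makes gcloud print each request's method, URI, headers, and JSON body as it is sent:
# The debug output includes the POST to https://compute.googleapis.com/compute/v1/...
# together with the request body that gcloud constructs
gcloud compute instances create "$instance" --network "$network" \
  --tags 'http-server' --log-http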

Running a Qt GUI in a docker container

So, I have a C++ GUI based on Qt5 which I want to run from inside a Docker container.
When I try to start it with
docker run --rm -it my_image
this results in the error output
qt.qpa.xcb: could not connect to display localhost:10.0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.
So I searched for how to do this. I found GUI Qt Application in a docker container, and based on that called it with
QT_GRAPHICSSYSTEM="native" docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix my_image
which resulted in the same error.
Then I found Can you run GUI applications in a Docker container?.
The accepted answer in there seems to be specific to certain applications such as Firefox?
Scrolling further down, I found a solution that tells me to set X11UseLocalhost no in sshd_config and then call it like
docker run -v $HOME:/hosthome:ro -e XAUTHORITY=/hosthome/.Xauthority -e DISPLAY=$(echo $DISPLAY | sed "s/^.*:/$(hostname -i):/") my_image
This produces a slight variation of the error above:
qt.qpa.xcb: could not connect to display 127.0.1.1:13.0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.
Following another answer, I added ENV DISPLAY :0 to my Dockerfile and called it with
xhost +
XSOCK=/tmp/.X11-unix/X0
docker run -v $XSOCK:$XSOCK my_image
This time, the first line of my error was qt.qpa.xcb: could not connect to display :0.
Then I tried another answer, added
RUN export uid=0 gid=0 && \
mkdir -p /home/developer && \
echo "developer:x:${uid}:${gid}:Developer,,,:/home/developer:/bin/bash" >> /etc/passwd && \
echo "developer:x:${uid}:" >> /etc/group && \
mkdir /etc/sudoers.d/ && \
echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer && \
chmod 0440 /etc/sudoers.d/developer && \
chown ${uid}:${gid} -R /home/developer
to my Dockerfile and called docker run -ti --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix my_image, again same error.
I also tried several of the ways described in http://wiki.ros.org/docker/Tutorials/GUI, same error.
Am I doing something wrong? Note that I'm working on a remote machine via SSH, with X11 forwarding turned on of course (and the application works just fine outside of Docker). Also note that what I'm writing is a client-server application, and the server part, which needs no GUI elements but shares most of the source code, works just fine from its container.
I'm hoping for a solution that doesn't require me to change the system, since the reason I use Docker in the first place is so that users of my application can get it running without much hassle.
You have multiple errors that are covering each other. First of all, make sure you have the correct libraries installed. If your Docker image is Debian-based, the relevant part of the Dockerfile usually looks like this:
RUN apt-get update && \
apt-get install -y libqt5gui5 && \
rm -rf /var/lib/apt/lists/*
ENV QT_DEBUG_PLUGINS=1
Note the environment variable QT_DEBUG_PLUGINS. This will make the output much more helpful and cite any missing libraries. In the now very verbose output, look for something like this:
Cannot load library /usr/local/lib/python3.9/site-packages/PySide2/Qt/plugins/platforms/libqxcb.so: (libxcb-icccm.so.4: cannot open shared object file: No such file or directory)
The part in parentheses (libxcb-icccm.so.4 in this example) is the missing library file; you can find the package that provides it with your distribution's package manager (e.g. dpkg -S libxcb-icccm.so.4 on Debian).
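As an illustration, if the missing file really were libxcb-icccm.so.4 on a Debian-based image, the fix would be installing the package that ships it (the package name below comes from Debian's package index, so double-check it for your distribution):
# Assumed fix for the example error above: libxcb-icccm.so.4 is shipped by libxcb-icccm4
apt-get update && apt-get install -y libxcb-icccm4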
Next, start docker like this (can be one line, separated for clarity):
docker run \
-e "DISPLAY=$DISPLAY" \
-v "$HOME/.Xauthority:/root/.Xauthority:ro" \
--network host \
YOUR_APP_HERE
Make sure to replace /root with the guest user's HOME directory.
Advanced graphics (e.g. games, 3D modelling etc.) may also require mounting of /dev with -v /dev:/dev. Note that this will give the guest root access to your hard disks though, so you may want to copy/recreate the devices in a more fine-grained way.
On the host system, allow X connections from Docker: xhost +local:root
Then start your container:
docker run -it \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
--name my_app \
--rm \
<your_image>
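Once you're done, you may want to revoke that X access again (not part of the original answer, just cleanup):
xhost -local:root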

Starting Mesos slave in Docker on Amazon Linux results in cgroup error

I'm taking a docker-compose setup that's working on my Mac, and trying to get it running on a couple of AWS EC2 instances. The Mesos master Docker container came up fine, as did Zookeeper/Marathon, but the Mesos slave is having trouble:
$ sudo docker run --name mesos-slave1 -p 5051:5051 \
-e "MESOS_LOG_DIR=/var/log" -e "MESOS_MASTER=zk://10.x.x.x:2181/mesos" \
-e "MESOS_HOSTNAME=172.17.42.1" -e "MESOS_PORT:5051" \
-e "MESOS_ISOLATOR=cgroups/cpu,cgroups/mem" -e "MESOS_CONTAINERIZERS=docker,mesos" \
-e "MESOS_EXECUTOR_REGISTRATION_TIMEOUT:5mins" \
redjack/mesos-slave:0.21.0
I0708 19:26:09.559125 1 logging.cpp:172] INFO level logging started!
I0708 19:26:09.569294 1 main.cpp:142] Build: 2014-11-22 05:29:57 by root
I0708 19:26:09.569327 1 main.cpp:144] Version: 0.21.0
I0708 19:26:09.569340 1 main.cpp:147] Git tag: 0.21.0
I0708 19:26:09.569350 1 main.cpp:151] Git SHA: ab8fa655d34e8e15a4290422df38a18db1c09b5b
Failed to create a containerizer: Could not create DockerContainerizer: Failed to find a mounted cgroups hierarchy for the 'cpu' subsystem; you probably need to mount cgroups manually!
I've done various searches, and have tried things like mounting /sys and similar approaches, with no luck. I've even tried running docker in privileged mode just as a sanity check. My docker-compose.yml specifies:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /usr/local/bin/docker:/usr/local/bin/docker
- /sys/fs/cgroup:/sys/fs/cgroup
And I started there, but mounting those on the EC2 instance doesn't work either. Since it works on my Mac, there's clearly some difference between OS X and Amazon Linux that's connected with the issue, but I haven't been able to determine a work-around. In case it's handy for OS identification purposes, on the EC2 instance it says:
$ uname -a
Linux ip-10-x-x-x 3.14.35-28.38.amzn1.x86_64 #1 SMP Wed Mar 11 22:50:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
I may end up just installing Mesos directly on the EC2 instance, but it would be very convenient to avoid that by using the Docker container, of course.
If anyone's stumbled across this and found a solution, please share! I'll post back if I figure something out.
I was just about to post this question when I found one more thing to try, and it looks like it did the trick, so I figured I'd go ahead and post anyway after putting the work in; hopefully the info will save someone else the time I spent figuring this out. It looks like the key is to mount /cgroup as a volume, which is presumably a difference between OS X and Amazon Linux (a CentOS variant). The final docker incantation that seems to be working:
$ sudo docker run --privileged=true --name mesos-slave1 -p 5051:5051 \
-e "MESOS_LOG_DIR=/var/log" -e "MESOS_MASTER=zk://10.x.x.x:2181/mesos" \
-e "MESOS_HOSTNAME=172.17.42.1" -e "MESOS_PORT:5051" \
-e "MESOS_ISOLATOR=cgroups/cpu,cgroups/mem" \
-e "MESOS_CONTAINERIZERS=docker,mesos" \
-e "MESOS_EXECUTOR_REGISTRATION_TIMEOUT:5mins" \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/local/bin/docker:/usr/local/bin/docker -v /sys:/sys \
-v /cgroup:/cgroup redjack/mesos-slave:0.21.0
I'm still experimenting so I can't say for sure whether the privileged mode is required, and whether some of the other volumes are really needed (the docker stuff). But if this saves someone else some time, that's great.
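To see which cgroup layout a given host actually uses before picking the volume flags, a quick check (my addition, not from the original experiments) is:
# List the mounted cgroup hierarchies; per the above, Amazon Linux of that era
# mounts them under /cgroup, while many other distros use /sys/fs/cgroup
mount -t cgroup
ls -d /cgroup /sys/fs/cgroup 2>/dev/null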