AWS CloudWatch Agent in Docker

I am trying to package the AWS CloudWatch agent into a Docker container. The docker build fails with the following error:
Failed to connect to bus: No such file or directory
unknown init system
Here is the relevant snippet from the Dockerfile:
FROM ubuntu:16.04
RUN \
    apt-get -y update && \
    apt-get -y install wget && \
    apt-get -y install unzip
RUN \
    wget https://s3.amazonaws.com/amazoncloudwatch-agent/linux/amd64/latest/AmazonCloudWatchAgent.zip && \
    unzip AmazonCloudWatchAgent.zip && \
    ./install.sh
What is missing or wrong here?

I notice the documentation has different ways of installing, and I wonder whether both are correct. I found a different method for installing on Ubuntu in the EC2 guide (this one installs the older CloudWatch Logs agent rather than the unified agent):
RUN \
    curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O && \
    python ./awslogs-agent-setup.py --region us-east-1

I noticed that AWS has published an official Docker image for the CloudWatch agent on Docker Hub, and they are updating it frequently. I am late, but it may help someone:
https://hub.docker.com/r/amazon/cloudwatch-agent
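The "Failed to connect to bus" / "unknown init system" error from install.sh comes from the installer trying to register the agent with an init system (systemd), which is not running during a docker build; the official image sidesteps that entirely. A minimal sketch of running it (the region, the credential environment variables, and the /etc/cwagentconfig mount path are assumptions to verify against the image's documentation):

docker run -d \
    -e AWS_REGION=us-east-1 \
    -e AWS_ACCESS_KEY_ID=... \
    -e AWS_SECRET_ACCESS_KEY=... \
    -v $(pwd)/cwagent-config.json:/etc/cwagentconfig/cwagentconfig.json \
    amazon/cloudwatch-agent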


Run docker commands in AWS Lambda Function

Goal
I'm curious to know if it's possible to run docker commands within AWS Lambda function invocations. Specifically, I'm running docker compose up -d to run one-off ECS tasks (see this aws article for more info). I know it's easily possible with AWS CodeBuild, but for my use case, where the workload duration is usually below 10 seconds, it would be more cost-effective to use Lambda.
AFAIK Docker DooD (Docker-outside-of-Docker) is not an option, since Lambda function hosts cannot be configured to mount the host's Docker socket into the Lambda function's container.
Attempts
I've tried the Docker DinD (Docker-in-Docker) approach below, with no luck:
Lambda custom container image:
ARG FUNCTION_DIR="/function"

FROM python:buster as build-image
ARG FUNCTION_DIR

# Install aws-lambda-cpp build dependencies
RUN apt-get update && \
    apt-get install -y \
        g++ \
        make \
        cmake \
        unzip \
        libcurl4-openssl-dev

RUN mkdir -p ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY ./* ${FUNCTION_DIR}/
RUN pip install --target ${FUNCTION_DIR} -r requirements.txt

FROM python:buster
ARG FUNCTION_DIR
WORKDIR ${FUNCTION_DIR}
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}

# Bundle the runtime interface emulator for local testing
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
RUN chmod 755 /usr/bin/aws-lambda-rie ./entrypoint.sh ./runner_install_docker.sh
RUN sh ./runner_install_docker.sh
ENTRYPOINT [ "./entrypoint.sh" ]
CMD [ "lambda_function.lambda_handler" ]
Contents of runner_install_docker.sh (the script that installs Docker):
#!/bin/bash
apt-get -y update
apt-get install -y \
    software-properties-common build-essential \
    apt-transport-https ca-certificates gnupg lsb-release curl sudo
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo chmod u+x /usr/bin/*
sudo chmod u+x /usr/local/bin/*
sudo apt-get clean
sudo rm -rf /var/lib/apt/lists/*
sudo rm -rf /tmp/*
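The image can be exercised locally through the bundled runtime interface emulator before deploying (the image tag here is a placeholder, and this assumes entrypoint.sh starts the runtime via aws-lambda-rie when run outside Lambda):

docker build -t lambda-dind-test .
docker run --rm -p 9000:8080 lambda-dind-test
# In another shell, invoke the handler once:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'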
When I run docker compose or other docker commands, I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Docker isn't available inside the AWS Lambda runtime. Even if you built it into the custom container image, the Lambda function would need to run as a privileged Docker container for Docker-in-Docker to work, which is not something AWS Lambda supports.
Specifically I'm running docker compose up -d to run one-off ECS tasks
Instead of trying to do this with the docker-compose ECS functionality, you need to look at invoking an ECS RunTask command via one of the AWS SDKs.
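A minimal sketch of that approach using the AWS CLI, which calls the same RunTask API as the SDKs (cluster name, task definition, and subnet are placeholders):

aws ecs run-task \
    --cluster my-cluster \
    --launch-type FARGATE \
    --task-definition my-task:1 \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}'

From the Lambda function itself you would make the equivalent SDK call (e.g. boto3's ecs client and its run_task method) instead of shelling out to docker compose.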

Connecting to an S3 bucket from a Docker container

I am building a Docker container which transforms some data in a specific folder. I would like to put those files in an S3 bucket, within a specific folder. I have been going through the AWS CLI documentation but am not sure how to approach this.
I have installed it without errors by using:
# AWS CLI installation
RUN apk add --no-cache \
        python3 \
        py3-pip \
    && pip3 install --upgrade pip \
    && pip3 install \
        awscli \
    && rm -rf /var/cache/apk/*
RUN aws --version
I have read about adding a YAML configuration to point to the bucket but am not sure how that process works. Can someone with a similar project point me to documentation or explain how to approach it? I am very much a layman in Docker.
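In effect, what I am after is a single call along these lines once credentials are available in the container (bucket name, prefix, and local path are placeholders):

# Copy the transformed files to a specific folder (prefix) in the bucket
aws s3 sync /app/output s3://my-bucket/transformed-data/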

'pip' is not recognized as an internal or external command, operable program or batch file on docker web application

I am working on a web project using Django and Docker. The tutorial references how to set up an email service. I registered with AWS and followed a guide on how to link it to Docker. The first step is to run "pip install --upgrade boto3", which fails with the error in the title. How do I install boto3 through Docker?
You can use the docker-boto3 Docker image instead of installing and maintaining an image yourself:
docker run --rm -t \
    -v $HOME/.aws:/home/worker/.aws:ro \
    -v $(pwd)/example:/work \
    shinofara/docker-boto3 python example.py
Or you can create your own Docker image:
FROM alpine:latest
# python3 on Alpine does not bundle pip, so install py3-pip alongside it
RUN apk add --update --no-cache python3 py3-pip \
    && pip3 install --upgrade pip \
    && pip3 install -U boto3 requests PyYAML pg8000 \
    && ln -sv /usr/bin/python3 /usr/bin/python
ENTRYPOINT [ "python3" ]
Boto3 Dockerfile
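A minimal usage sketch for the image above (image tag and script name are placeholders; the AWS credentials are mounted read-only):

docker build -t my-boto3 .
docker run --rm \
    -v $HOME/.aws:/root/.aws:ro \
    -v $(pwd):/work -w /work \
    my-boto3 my_script.py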

How do you set up a Python 3.8 kernel on GCP's AI Platform JupyterLab instances?

My goal is to be able to start a Jupyter notebook in JupyterLab with Python 3.8.
Update Python version to 3.8 in GCP AI Platform Jupyter Notebooks
AI Platform Notebooks environments are provided by container images that you select when creating the instance. On this page you can see the available container image types.
To specify the container image to run on the notebook, you can either choose one of the images from the list provided by Google Cloud mentioned above or, if none of them comes with Python 3.8, create a derivative container based on one of the standard AI Platform images and edit its Dockerfile to add the Python 3.8 installation commands.
To test this out, I made a small modification to a provided container image to incorporate a Python 3.8 kernel in JupyterLab. To do so, I created a Dockerfile that does the following:
Creates a layer from the latest tf-gpu Docker image
Installs Python 3.8 and dependencies
Activates a Python 3.8 environment
Installs the Python 3.8 kernel to Jupyter Notebooks
Once the image has been built and pushed to Google Container Registry, you will be able to create an AI Platform Jupyter Notebook with the new kernel.
The code is the following:
FROM gcr.io/deeplearning-platform-release/tf-gpu:latest
RUN apt-get update -y \
    && apt-get upgrade -y \
    && apt-get install -y apt-transport-https \
    && apt-get install -y build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev wget libbz2-dev \
    && wget https://www.python.org/ftp/python/3.8.0/Python-3.8.0.tgz
RUN tar xzf Python-3.8.0.tgz \
    && echo Getting inside folder \
    && cd Python-3.8.0 \
    && ./configure --enable-optimizations \
    && make -j 8 \
    && make altinstall \
    && apt-get install -y python3-venv \
    && echo Creating environment... \
    && python3.8 -m venv testenv \
    && echo Activating environment... \
    && . testenv/bin/activate \
    && echo Installing jupyter... \
    && pip install jupyter \
    && pip install ipython \
    && apt-get update -y \
    && apt-get upgrade -y \
    && ipython kernel install --name "Python3.8" --user
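A minimal sketch of the build-and-push step mentioned above (project ID and image name are placeholders):

docker build -t gcr.io/my-project/tf-gpu-py38:latest .
docker push gcr.io/my-project/tf-gpu-py38:latest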
In case you need it, you can also specify a custom image that will allow you to customize the environment for your specific needs. Take into account that the product is in Beta and might change or have limited support.

Google Cloud SDK for ARM architecture

I would like to run the Google Cloud SDK on an ARM machine.
$ uname -a
Linux myhost 3.14.79-at10 #2 SMP PREEMPT Mon Mar 6 15:38:30 JST 2017 armv7l GNU/Linux
On this page, I can only find downloads for the x86 architecture.
Can I run the Google Cloud SDK on ARM?
Yes - I was able to install it using the apt-get instructions on an ARM64 (aarch64) Pinebook Pro. If you don't have Ubuntu/Debian, you could use a Docker container. I did it from Manjaro-ARM using an Ubuntu container.
I would think those instructions would work for a Raspberry Pi running Raspbian.
Although the link above, being maintained by Google, may be the best place to obtain these instructions, I will copy in the current minimal set of commands below, just in case the instructions get moved at some point:
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates gnupg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update && sudo apt-get install google-cloud-sdk
gcloud init
You could optionally install any of the following additional packages:
google-cloud-sdk-app-engine-python
google-cloud-sdk-app-engine-python-extras
google-cloud-sdk-app-engine-java
google-cloud-sdk-app-engine-go
google-cloud-sdk-bigtable-emulator
google-cloud-sdk-cbt
google-cloud-sdk-cloud-build-local
google-cloud-sdk-datalab
google-cloud-sdk-datastore-emulator
google-cloud-sdk-firestore-emulator
google-cloud-sdk-pubsub-emulator
kubectl
The answer is no. The SDK is closed source, so it is not very likely that you can hack it to work on ARM, though I won't stop you from trying, since it mostly consists of Python scripts.
On the other hand, gsutil, the part of the SDK which handles Cloud Storage operations, is open source and on PyPI. You can install it using pip as normal.
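A minimal sketch, assuming Python and pip are already present (the bucket name is a placeholder):

pip3 install gsutil
gsutil config    # one-time credential setup
gsutil ls gs://my-bucket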
We organize our local environments around Docker. Unfortunately, there is no official ARM Docker image for the Google Cloud SDK. To get around that, we cloned the official Google Cloud SDK Dockerfile and, after some trial and error, were able to remove the unavailable SDK modules so we could build locally and produce an ARM Docker image. The unavailable modules were not an issue for us, as we don't use them, so we just commented them out (see the LOCAL HACK section below). Here is the current hacked Dockerfile we use:
# This is a temporary workaround Dockerfile to allow us to run the Google SDK on Apple Silicon
# For the original #see https://raw.githubusercontent.com/GoogleCloudPlatform/cloud-sdk-docker/master/Dockerfile
FROM docker:19.03.11 as static-docker-source
FROM debian:buster
ARG CLOUD_SDK_VERSION=365.0.1
ENV CLOUD_SDK_VERSION=$CLOUD_SDK_VERSION
ENV PATH "$PATH:/opt/google-cloud-sdk/bin/"
COPY --from=static-docker-source /usr/local/bin/docker /usr/local/bin/docker
RUN groupadd -r -g 1000 cloudsdk && \
    useradd -r -u 1000 -m -s /bin/bash -g cloudsdk cloudsdk
RUN apt-get -qqy update && apt-get install -qqy \
        curl \
        python3-dev \
        python3-crcmod \
        python-crcmod \
        apt-transport-https \
        lsb-release \
        openssh-client \
        git \
        make \
        gnupg && \
    export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
    echo "deb https://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" > /etc/apt/sources.list.d/google-cloud-sdk.list && \
    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
    apt-get update && \
    apt-get install -y google-cloud-sdk=${CLOUD_SDK_VERSION}-0 \
        google-cloud-sdk-app-engine-python=${CLOUD_SDK_VERSION}-0 \
        google-cloud-sdk-app-engine-python-extras=${CLOUD_SDK_VERSION}-0 \
        google-cloud-sdk-app-engine-java=${CLOUD_SDK_VERSION}-0 \
        google-cloud-sdk-datalab=${CLOUD_SDK_VERSION}-0 \
        google-cloud-sdk-datastore-emulator=${CLOUD_SDK_VERSION}-0 \
        google-cloud-sdk-pubsub-emulator=${CLOUD_SDK_VERSION}-0 \
        google-cloud-sdk-firestore-emulator=${CLOUD_SDK_VERSION}-0 \
        kubectl && \
    gcloud --version && \
    docker --version && kubectl version --client
# >>> LOCAL HACK START
# #todo Removed the following packages from the `apt-get install` above as we cannot build them locally
#8 29.36 E: Unable to locate package google-cloud-sdk-app-engine-go
#8 29.37 E: Version '339.0.0-0' for 'google-cloud-sdk-bigtable-emulator' was not found
#8 29.37 E: Unable to locate package google-cloud-sdk-spanner-emulator
#8 29.37 E: Unable to locate package google-cloud-sdk-cbt
#8 29.37 E: Unable to locate package google-cloud-sdk-kpt
#8 29.37 E: Unable to locate package google-cloud-sdk-local-extract
#    google-cloud-sdk-app-engine-go=${CLOUD_SDK_VERSION}-0 \
#    google-cloud-sdk-bigtable-emulator=${CLOUD_SDK_VERSION}-0 \
#    google-cloud-sdk-spanner-emulator=${CLOUD_SDK_VERSION}-0 \
#    google-cloud-sdk-cbt=${CLOUD_SDK_VERSION}-0 \
#    google-cloud-sdk-kpt=${CLOUD_SDK_VERSION}-0 \
#    google-cloud-sdk-local-extract=${CLOUD_SDK_VERSION}-0 \
# <<< LOCAL HACK END
RUN apt-get install -qqy \
        gcc \
        python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install pyopenssl
RUN git config --system credential.'https://source.developers.google.com'.helper gcloud.sh
VOLUME ["/root/.config", "/root/.kube"]
If you were to save this file as Dockerfile.CloudSdk.arm64, you can then run a docker build on an ARM machine (in our case, an Apple M1 machine) to produce your ARM Docker image:
docker build -f Dockerfile.CloudSdk.arm64 -t yourorg.com/cloud-sdk-docker-arm:latest .
Voila! You now have a reasonably featured Google Cloud SDK Docker image that will run beautifully on an ARM architecture :)
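As a quick smoke test of the resulting image (using the tag from the build command above):

docker run --rm -it yourorg.com/cloud-sdk-docker-arm:latest gcloud --version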
If you have python or python3, along with pip and pip3, try:
pip install --upgrade google-cloud
Hope that helps.
tekk@rack:~ $ uname -a
Linux rack 4.9.59-v7+ #1047 SMP Sun Oct 29 12:19:23 GMT 2017 armv7l GNU/Linux
It worked for me.