It seems that the Google Monitoring Agent (powered by Stackdriver) should be installed on each Node (i.e. each compute instance, i.e. each machine) of a Kubernetes cluster.
However, the new plugins (Nginx, Redis, Elasticsearch, ...) need those agents to know the IPs of these services. That means having kube-proxy running and configured, which in turn would mean running the Google Monitoring Agent in a Pod.
These two requirements conflict: on one side the agent monitors the entire machine, on the other it monitors services running on one or more machines.
Can these Stackdriver plugins work on a Google Container Engine (GKE) / Kubernetes cluster?
To monitor each machine (memory, CPU, disk, ...) it's possible to install the agent on each node (i.e. on each Compute Engine instance of your GKE cluster). Note that this won't work with auto-scaling, in the sense that re-created nodes won't have the agent installed.
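For example, installation on a node can use the standard Stackdriver install script (the same one that appears in the Dockerfile answers below); a rough sketch:
curl -O https://repo.stackdriver.com/stack-install.sh
sudo bash stack-install.sh --write-gcm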
To monitor services (requests per second, client connections, ...) it's possible to run the agent plugin in another container, so that for example an Nginx Pod runs two containers:
Nginx
Google Monitoring Agent together with the Nginx plugin
Note: Not fully tested yet.
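A minimal sketch of what such a two-container Pod spec could look like (the agent image name is a placeholder for an image built with the agent plus the Nginx plugin; not tested):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-monitoring
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
    - name: stackdriver-agent
      image: gcr.io/<project_id>/stackdriver-agent-nginx:<your_version>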
You can install the StackDriver Agent in your Dockerfile.
I have been able to get this working for a couchdb container as follows:
FROM klaemo/couchdb
# prerequisites for the Stackdriver install script
RUN apt-get update
RUN apt-get install curl lsb-release libyajl2 -y
# download the Stackdriver agent install script
RUN curl -O https://repo.stackdriver.com/stack-install.sh
# CouchDB plugin configuration for the agent
COPY couchdb.conf /opt/stackdriver/collectd/etc/collectd.d/couchdb.conf
# install and start the agent at container start, then start CouchDB
CMD bash stack-install.sh --write-gcm && service stackdriver-agent restart && couchdb
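Assuming that Dockerfile, building and running the image would look something like this (the image name is arbitrary):
docker build -t couchdb-stackdriver .
docker run -d -p 5984:5984 couchdb-stackdriver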
I tried to use a Stackdriver container in a pod to collect stats about Nginx/uWSGI in the same pod.
I have some findings that may not be all that helpful; they're just for your reference.
To create the Stackdriver image, you can refer to the Dockerfile created by Keto:
https://hub.docker.com/r/keto/stackdriver/~/dockerfile/
FROM centos:centos7
MAINTAINER Mikael Keto
# add stackdriver repository
RUN curl -o /etc/yum.repos.d/stackdriver.repo https://repo.stackdriver.com/stackdriver-el7.repo
# install stackdriver
RUN yum -y install initscripts stackdriver-agent && yum clean all
RUN mkdir -p /var/lock/subsys; exit 0
ADD run.sh /run.sh
RUN chmod 755 /run.sh
CMD ["/run.sh"]
The run.sh looks like this:
#!/usr/bin/env bash
/opt/stackdriver/stack-config --write-gcm --no-start
/etc/init.d/stackdriver-agent start
while true; do
    sleep 60
    agent_pid=$(cat /var/run/stackdriver-agent.pid 2>/dev/null)
    ps -p $agent_pid > /dev/null 2>&1
    if [ $? != 0 ]; then
        echo "Stackdriver agent pid not found!"
        break
    fi
done
In the GKE/K8s deployment YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
...
      - name: stackdriver-agent
        image: gcr.io/<project_id>/stackdriver-agent:<your_version>
        command: ['/run.sh']
In my test, I found:
It reports stats based on [node_name] instead of [container_name].
It collects many system stats that are meaningful for a node, but since it runs in a pod, they are fairly pointless.
I was hoping to find some way to collect both the pod statistics and the node statistics that I need, but I didn't find an easy way to do that. What I did instead was gather them through the Google Python API client library, but that takes too much time.
There is another way that uses a Dockerfile.
When creating the Docker image, pre-install the libraries needed for the stackdriver-agent installation.
FROM mongo
RUN apt-get update && apt-get install -y curl lsb-release
# COPY credential
COPY gcloud-credential.json /etc/google/auth/application_default_credentials.json
ENV GOOGLE_APPLICATION_CREDENTIALS "/etc/google/auth/application_default_credentials.json"
# download the Stackdriver Agent installer
RUN curl -o /stack-install.sh https://repo.stackdriver.com/stack-install.sh
RUN chmod +x /stack-install.sh
# COPY stackdriver mongodb plugin
COPY mongodb.conf /opt/stackdriver/collectd/etc/collectd.d/mongodb.conf
Then install the agent using the Pod lifecycle (a postStart hook):
spec:
  containers:
    - image: your_mongo_image
      name: my-mongo
      ports:
        - containerPort: 27017
      lifecycle:
        postStart:
          exec:
            command: ["/stack-install.sh", "--write-gcm"]
Related
I have a VM instance in Google Cloud with Ubuntu 20.04 LTS, and I set it to allow HTTP traffic.
I need to set up Label Studio (https://github.com/heartexlabs/label-studio) in this VM so anyone can access it by just typing the VM's public IP.
I already tried building it with docker:
sudo docker build -t heartexlabs/label-studio:latest .
But when I run it with:
sudo docker run -d -p 80:80 heartexlabs/label-studio:latest
it doesn't work. Here's the output of the container list:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e728edbe6d5 heartexlabs/label-studio:latest "./tools/run.sh" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, 8080/tcp gracious_shamir
I also tried to install it with pip and run it with:
label-studio start --host 34.66.116.52 --port 80 testproject
If anyone has experience with Google Cloud VMs and can help set this up with Docker or with a WSGI server, I'd appreciate it.
Try this:
sudo docker run --rm -d -p 80:8080 -v `pwd`/my_project:/label-studio/my_project --name label-studio heartexlabs/label-studio:latest label-studio start my_project --init
I was able to access it from the external IP with this. Note the port mapping: the container listens on 8080 (visible in your docker ps output), so the host's port 80 has to map to container port 8080 rather than 80:80.
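The same setup as a docker-compose sketch, in case that is easier to manage on the VM (the project directory name is an assumption):
version: "3"
services:
  label-studio:
    image: heartexlabs/label-studio:latest
    command: label-studio start my_project --init
    ports:
      - "80:8080"   # host port 80 -> container port 8080, where Label Studio listens
    volumes:
      - ./my_project:/label-studio/my_project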
I am trying to set up the Amazon CloudWatch Agent as a Docker container. This is an on-premise installation, so it's running locally, not inside AWS Kubernetes or anything of the sort.
I've set up a basic Dockerfile, an agent.json, and a .aws/ folder for credentials, and I'm using docker-compose build to set it up and then launch it, but I keep running into problems because the container does not contain or run systemctl, so I cannot start the service using AWS's own documented command:
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s
This fails with an error when I try to run the container:
cloudwatch_1 | /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl: line 262: systemctl: command not found
cloudwatch_1 | unknown init system
I've tried running the start-amazon-cloudwatch-agent binary inside bin/ as well, but no luck, and there is no documentation on this.
Basically the issue is: how can I run this as a service or as a foreground process so the container stays up? Anyone have any clues? Below is my code:
dockerfile
FROM amazonlinux:2.0.20190508
RUN yum -y install https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
COPY agent.json /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
CMD /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
agent.json
{
  "agent": {
    "metrics_collection_interval": 60,
    "region": "eu-west-1",
    "logfile": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log",
    "debug": true
  }
}
The .aws/ folder contains config and credentials, but I never got far enough for the agent to actually try to make a connection.
Just use the official image (docker pull amazon/cloudwatch-agent); it will handle all of this for you.
If you insist on using your own, try the following:
FROM amazonlinux:2.0.20190508
RUN yum -y install https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
COPY agent.json /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json
ENV RUN_IN_CONTAINER=True
ENTRYPOINT ["/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent"]
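For reference, the resulting image could be built and run roughly like this (the image name and host paths are assumptions; the mounted .aws directory supplies the on-premise credentials, similar to the compose example below):
docker build -t my-cloudwatch-agent .
docker run -d -v $(pwd)/.aws:/root/.aws my-cloudwatch-agent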
Use the official AWS Docker image; here is an example docker-compose file:
version: "3.8"
services:
agent:
image: amazon/cloudwatch-agent:1.247350.0b251814
volumes:
- ./config/log-collect.json:/opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json # agent config
- ./aws:/root/.aws # required for authentication
- ./log:/log # sample log
- ./etc:/opt/aws/amazon-cloudwatch-agent/etc # for debugging the config of AWS of container
From the config above, only the first two volume mounts are required; the third and fourth are for debugging purposes.
If you are interested in learning what each volume does, you can read more at https://medium.com/@gusdecool/setup-aws-cloudwatch-agent-on-premise-server-part-1-31700e81ab8
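For illustration, a minimal log-collect.json that ships a single file to CloudWatch Logs might look roughly like this (the file path, log group, and stream name are placeholders):
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/log/sample.log",
            "log_group_name": "my-onpremise-logs",
            "log_stream_name": "sample-stream"
          }
        ]
      }
    }
  }
}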
I'm using Docker containers and docker-compose to create ELK containers. After the containers are created, I need to inject a log file into Logstash and display it via Kibana.
I hadn't worked with Docker until three days ago. I've been working on this problem, searched at least 10 websites plus YouTube, and still can't figure out what I should do.
I succeeded in creating the Docker containers and installing docker-compose.
I have pulled docker-elk from git, so I have ready-made YAML files for docker-compose, Logstash, Kibana, and Elasticsearch. I have tried to push a file into Logstash, but I can't tell whether I did it right, or how to check it at all.
I saw an option to check the IP addresses of the running containers and reach them via ip:5061 and ip:9200, but nothing has worked.
I have installed Docker and pulled docker-elk:
sudo amazon-linux-extras install docker
Download docker-elk:
git clone https://github.com/deviantony/docker-elk
Downloaded docker-compose:
sudo curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo mv /usr/local/bin/docker-compose /usr/bin/docker-compose
sudo chmod +x /usr/bin/docker-compose
and created the ELK containers. I tried two commands; the second one worked better:
sudo docker-compose -d
sudo docker-compose -f /full addres/ docker-compose.yml up
I expect to see the log file I injected into Logstash displayed in a Kibana graph.
What you need is a log shipper like Filebeat, and that does not come with the ELK stack. After you configure Filebeat to send logs to Logstash, you will be able to see the logs.
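As an illustration (not part of the docker-elk repo), a minimal filebeat.yml that ships a local log file to the Logstash container could look like this, assuming your Logstash pipeline has a beats input listening on port 5044:
filebeat.inputs:
  - type: log
    paths:
      - /path/to/your/app.log   # the file you want to inject

output.logstash:
  hosts: ["localhost:5044"]     # Logstash's published beats port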
So here is what I want to do.
1. Push to master in git
2. Have gitlab-ci hear that push and start a pipeline
3. The pipeline builds the code and pushes a Docker container to the GitLab registry
4. The pipeline logs into a DigitalOcean droplet via ssh
5. The pipeline pulls the Docker container from the GitLab registry
6. The pipeline starts the container
I can get up to step 4 no problem. But step 4 just fails every which way. I've tried the ssh key approach:
https://gitlab.com/gitlab-examples/ssh-private-key/blob/master/.gitlab-ci.yml
But that did not work.
So I tried a plain text password approach like this:
image: gitlab/dind:latest

before_script:
  - apt-get update -y && apt-get install sshpass

stages:
  - deploy

deploy:
  stage: deploy
  script:
    - sshpass -p "mypassword" ssh root@x.x.x.x 'echo $HOME'
This version just exits with code 1, like so:
Pseudo-terminal will not be allocated because stdin is not a terminal.
ln: failed to create symbolic link '/sys/fs/cgroup/systemd/name=systemd': Operation not permitted
/usr/local/bin/wrapdocker: line 113: 54 Killed docker daemon $DOCKER_DAEMON_ARGS &> /var/log/docker.log
Timed out trying to connect to internal docker host.
Is there a better way to do this? How can I at the very least access my droplet from inside the gitlab-ci build environment?
I just answered this related question: Create react app + Gitlab CI + Digital Ocean droplet - Pipeline succeeds but Docker container is deleted right after
Here's the solution he is using to set up the SSH credentials:
before_script:
  ## Install ssh agent (so we can access the Digital Ocean Droplet) and run it.
  - apk update && apk add openssh-client
  - eval $(ssh-agent -s)
  ## Write the environment variable value to the agent store, create the ssh directory and give the right permissions to it.
  - echo "$SECRETS_DIGITAL_OCEAN_DROPLET_SSH_KEY" | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  ## Make sure that ssh will trust the new host, instead of asking
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  ## Test it!
  - ssh -t ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} 'echo $HOME'
Code credit goes to https://stackoverflow.com/users/6655011/leonardo-sarmento-de-castro
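Once the SSH access works, steps 5 and 6 could be finished with something along these lines in the deploy job (registry path, image, and container name are placeholders, not a tested configuration):
deploy:
  stage: deploy
  script:
    # log the droplet's Docker into the GitLab registry, then pull and (re)start the container
    - >
      ssh ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
      "docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} registry.gitlab.com
      && docker pull registry.gitlab.com/<group>/<project>:latest
      && (docker rm -f myapp || true)
      && docker run -d --name myapp -p 80:80 registry.gitlab.com/<group>/<project>:latest"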
Actually I want to use my own STUN/TURN server instance, and I want to use Amazon EC2. If anybody has any idea about this, please share the steps to create it or any reference link to follow.
SSH into your EC2 instance, then run the commands below to install and start the TURN server.
The simple way:
sudo apt-get install coturn
If instead you want the latest cutting edge, you can download the source code from their downloads page and install it yourself, for example:
sudo -i # skip if you are already in admin mode
apt-get update && apt-get install libssl-dev libevent-dev libhiredis-dev make -y # install the dependencies
wget -O turn.tar.gz http://turnserver.open-sys.org/downloads/v4.5.0.3/turnserver-4.5.0.3.tar.gz # Download the source tar
tar -zxvf turn.tar.gz # unzip
cd turnserver-*
./configure
make && make install
sample command for running TURN server:
turnserver -a -o -v -n -u user:root -p 3478 -L INT_IP -r someRealm -X EXT_IP/INT_IP --no-dtls --no-tls
command description:
-X - your amazon instance's external IP, internal IP: EXT_IP/INT_IP
-p - port to be used, default 3478
-a - Use long-term credentials mechanism
-o - Run server process as daemon
-v - 'Moderate' verbose mode.
-n - no configuration file
--no-dtls - Do not start DTLS listeners
--no-tls - Do not start TLS listeners
-u - user credentials to be used
-r - default realm to be used, need for TURN REST API
In your WebRTC app, you can use the TURN server like this:
{
  url: 'turn:user@EXT_IP:3478',
  credential: 'root'
}
One method to install a turnserver on Amazon EC2 would be to choose Debian and to install the coturn package, which is the successor of the RFC5766-server.
The configuration file at /etc/turnserver.conf includes EC2 specific instructions. The information provided within this file is very exhaustive in general and should answer the majority of configuration questions.
Once configured, the coturn server can be stopped and started the same way as any other service.
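For reference, the relevant /etc/turnserver.conf entries on EC2 might look roughly like this (the realm and credentials are placeholders; EXT_IP/INT_IP use the same mapped-address form as the command-line example above):
# /etc/turnserver.conf (sketch)
listening-port=3478
# EC2 instances sit behind NAT: public (elastic) IP / private instance IP
external-ip=EXT_IP/INT_IP
# long-term credentials mechanism with a static user
lt-cred-mech
user=user:root
realm=someRealm
fingerprint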