How to run heaptrack on long-running server applications - C++

I have a C++ gRPC server image running on GKE Kubernetes and I was trying to profile it with heaptrack.
In the Docker image I installed heaptrack via apt-get. Leaving out unrelated stuff, the Dockerfile looks like this:
FROM ubuntu:20.04 as build
.....
RUN apt-get update && apt install -y software-properties-common && \
    apt-get -y --no-install-recommends install \
    ....
    heaptrack
    ...
ENTRYPOINT ["heaptrack", "./grpc_server"]
This creates a Docker image which I store on Google Container Registry.
I then deploy the image via a YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: grpc-server
        image: grpc_server_container_repo
        resources:
          requests:
            memory: "54Gi"
            cpu: "14"
          limits:
            memory: "64Gi"
            cpu: "16"
        ports:
        - name: grpc
          containerPort: 8080
After I deployed the image I opened a shell in the container with the command
kubectl exec -it app-6c88bd5854-98dg4 -c grpc-server -- /bin/bash
and I saw the file
heaptrack.grpc_server.1.gz
even though the server was still running.
I opened this file using heaptrack_gui but it shows the total runtime as ~2s. I made a couple of requests to the server and this file is never updated again. I tried running
ps -aux
in the container and I can see
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 2608 1728 ? Ss 08:49 0:00 /bin/sh /usr/bin/heaptrack ./grpc_server
root 20 0.4 0.0 1817048 31340 ? Sl 08:49 0:06 ./grpc_server
root 113 0.0 0.0 4240 3516 pts/0 Ss 09:08 0:00 /bin/bash
root 125 0.0 0.0 6136 2928 pts/0 R+ 09:12 0:00 ps -aux
It seems like I have two running instances of the server, one with heaptrack and the other without. I'm not sure what's going on here and was hoping someone could point me in some direction on how to profile a running server on k8s with heaptrack.
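One direction I was considering (assuming the apt-packaged heaptrack supports attach mode via GDB, which I haven't verified) is attaching heaptrack to the already running server process from inside the container instead of relying on the ENTRYPOINT wrapper:
# inside the container: attach to the actual server process (PID 20 in the ps output above)
heaptrack -p 20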

Related

AWS CLI image run from docker-compose shuts down after one second

I tried to run the AWS CLI image from Amazon via docker-compose.
version: '3.1'
services:
  web:
    image: amazon/aws-cli:latest
    stdin_open: true # equivalent of -i
    tty: true # equivalent of -t
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          cpus: "2"
          memory: 2048M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    environment:
      - HTTP_PROXY=http://ipadresses:port
      - HTTPS_PROXY=https://ipadresses:port
    ports:
      - "8080:8080"
After one second, the container stops running. I looked at the official documentation from Amazon but I cannot find an answer to my question: why does it stop so quickly? Could someone please help me understand this behavior?
amazon/aws-cli:latest is a Docker image that provides only the aws command as its starting command, which means that if you don't override the command executed by Docker, the container will execute aws and then stop.
To execute commands against AWS using that Docker image you need to provide your own command; e.g. in this compose file I'm executing aws help by adding command: help to the file.
version: '3.1'
services:
  web:
    image: amazon/aws-cli:latest
    stdin_open: true # equivalent of -i
    tty: true # equivalent of -t
    command: help
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          cpus: "2"
          memory: 2048M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    environment:
      - HTTP_PROXY=http://ipadresses:port
      - HTTPS_PROXY=https://ipadresses:port
    ports:
      - "8080:8080"
Alternatively, you could override the entrypoint to force the container to stay alive, so that afterwards you can exec commands against your container, e.g.:
version: '3.1'
services:
  web:
    image: amazon/aws-cli:latest
    stdin_open: true # equivalent of -i
    tty: true # equivalent of -t
    entrypoint: tail -f /dev/null
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          cpus: "2"
          memory: 2048M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    environment:
      - HTTP_PROXY=http://ipadresses:port
      - HTTPS_PROXY=https://ipadresses:port
    ports:
      - "8080:8080"
After you run this compose file you can (in another terminal) inspect the name of the created container with docker ps and then execute commands against it using docker exec -it <your_container_name> aws help (note that in this case I'm sending the command aws help to the running container).
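Concretely, that looks like this (<your_container_name> is a placeholder; use whatever name docker ps reports for the web service):
# find the name of the container started by compose
docker ps
# send an AWS CLI command into the running container
docker exec -it <your_container_name> aws help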

Why is the port not being mapped correctly? [duplicate]

name: Rspec
on: [push]
jobs:
  build:
    runs-on: [self-hosted, linux]
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
        env:
          discovery.type: single-node
        options: >-
          --health-cmd "curl http://localhost:9200/_cluster/health"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
      redis:
        image: redis
        options: --entrypoint redis-server
    steps:
      - uses: actions/checkout@v2
      - name: running tests
        run: |
          sleep 60
          curl -X GET http://elasticsearch:9200/
I am running the tests self-hosted. On the host I can see the containers (redis and elasticsearch) with docker ps when they come up for the test.
I opened a shell in the redis container, installed curl and ran curl -X GET http://elasticsearch:9200/, and I saw an OK response within the 60 seconds (the wait time for the services to come up).
In the "running tests" step I get the error message "Could not resolve host: elasticsearch".
So inside the redis service/container I can resolve the host elasticsearch, but in the "running tests" step I cannot. What can I do?
You have to map the ports of your service containers and use localhost:host-port as the address in your steps running on the GitHub Actions runner.
If you configure the job to run directly on the runner machine and your step doesn't use a container action, you must map any required Docker service container ports to the Docker host (the runner machine). You can access the service container using localhost and the mapped port.
https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions#jobsjob_idservices
name: Rspec
on: [push]
jobs:
  build:
    runs-on: [self-hosted, linux]
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
        env:
          discovery.type: single-node
        options: >-
          --health-cmd "curl http://localhost:9200/_cluster/health"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
        ports:
          # <port on host>:<port on container>
          - 9200:9200
      redis:
        image: redis
        options: --entrypoint redis-server
    steps:
      - uses: actions/checkout@v2
      - name: running tests
        run: |
          sleep 60
          curl -X GET http://localhost:9200/
Alternative:
Also run your job in a container. Then the job has to access the service containers by hostname.
name: Rspec
on: [push]
jobs:
  build:
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
        env:
          discovery.type: single-node
        options: >-
          --health-cmd "curl http://localhost:9200/_cluster/health"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
      redis:
        image: redis
        options: --entrypoint redis-server
    # Containers must run in Linux based operating systems
    runs-on: [self-hosted, linux]
    # Docker Hub image that this job executes in, pick any image that works for you
    container: node:10.18-jessie
    steps:
      - uses: actions/checkout@v2
      - name: running tests
        run: |
          sleep 60
          curl -X GET http://elasticsearch:9200/

docker-compose No such command: convert error

I'm trying to follow this tutorial on AWS ECS integration that mentions the Docker command docker compose convert, which is supposed to generate an AWS CloudFormation template.
However, when I run this command, it doesn't appear to exist.
$ docker-compose convert
No such command: convert
#...
$ docker compose convert
docker: 'compose' is not a docker command.
See 'docker --help'
$ docker context create ecs myecscontext
"docker context create" requires exactly 1 argument.
See 'docker context create --help'.
Usage: docker context create [OPTIONS] CONTEXT
Create a context
$ docker --version
Docker version 19.03.13, build 4484c46
$ docker-compose --version
docker-compose version 1.25.5, build unknown
$ docker version
Client:
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.8
 Git commit:        4484c46
 Built:             Thu Oct 15 18:34:11 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.11
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.12
  Git commit:       77e06fd
  Built:            Mon Jun 8 20:24:59 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
$ docker info
Client:
 Debug Mode: false

Server:
 Containers: 12
  Running: 3
  Paused: 0
  Stopped: 9
 Images: 149
 Server Version: 19.03.11
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version:
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.8.0-29-generic
 Operating System: Ubuntu Core 16
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 7.202GiB
 Name: HongLee
 ID: GZ5R:KQDD:JHOJ:KCUF:73AE:N3NY:MWXS:ABQ2:2EVY:4ABJ:H375:J64V
 Docker Root Dir: /var/snap/docker/common/var-lib-docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
Any ideas?
To get the ECS integration, you need to be using an ECS docker context. First, enable the experimental flag in /etc/docker/daemon.json
// /etc/docker/daemon.json
{
  "experimental": true
}
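For the flag to take effect the Docker daemon has to be restarted; on a typical systemd-based host that would be:
sudo systemctl restart docker
(with the snap-packaged Docker shown in the docker info output above, it would likely be sudo snap restart docker instead).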
Then create the context:
docker context create ecs myecscontext
docker context use myecscontext
$ docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock [redacted] (default) swarm
myecscontext * ecs
Now run convert:
$ docker compose convert
WARN[0000] services.build: unsupported attribute
AWSTemplateFormatVersion: 2010-09-09
Resources:
  AdminwebService:
    DependsOn:
      - AdminwebTCP80Listener
    Properties:
      Cluster:
      ...
You're running on Ubuntu. The /usr/bin/docker that is installed (even with the latest docker-ce 20.10.6) does not enable the docker compose subcommand. It is enabled by default in Docker Desktop for Windows or Mac.
See the Linux installation instructions at https://github.com/docker/compose-cli to download and configure it so that docker compose works.
There's a curl|bash script for Ubuntu, or you can just download the latest release, put that docker executable into a PATH directory that comes before /usr/bin/, and make sure the original docker is available as com.docker.cli, e.g. ln -s /usr/bin/docker ~/bin/com.docker.cli.
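A rough sketch of that manual setup (the release asset name and the ~/bin location are assumptions; check the compose-cli releases page for the current names):
# download the compose-cli wrapper binary into a directory that will come before /usr/bin in PATH
mkdir -p ~/bin
curl -L https://github.com/docker/compose-cli/releases/latest/download/docker-linux-amd64 -o ~/bin/docker
chmod +x ~/bin/docker
# keep the original CLI reachable as com.docker.cli, which the wrapper delegates to
ln -s /usr/bin/docker ~/bin/com.docker.cli
# put ~/bin ahead of /usr/bin, then check that the compose subcommand is now available
export PATH=$HOME/bin:$PATH
docker compose --help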

Istio 1.6.5 gateway timeout errors

Intermittently we are seeing gateway timeout (504) errors when accessing the application from the browser. We upgraded Istio from 1.4.3 to 1.6.5. There was no issue with 1.4.3.
Basically, if you want to upgrade Istio from 1.4.x to 1.6.x you should first upgrade from 1.4.x to 1.5.x, then upgrade from 1.5.x to 1.6.x.
I have followed a thread on Istio discuss about upgrades created by @laurentiuspurba.
I have changed it a little for your use case, so an upgrade from 1.4.3 to 1.5.0, then from 1.5.0 to 1.6.8.
Take a look at the steps below; before using them on your environment I would suggest testing them on some test environment first.
1. Follow the Istio documentation and download istioctl 1.4.3, 1.5.0 and 1.6.8 with:
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.4.3 sh -
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.0 sh -
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.6.8 sh -
2. Add istioctl 1.4.3 to your path:
cd istio-1.4.3
export PATH=$PWD/bin:$PATH
3. Install Istio 1.4.3:
istioctl manifest generate > $HOME/generated-manifest.yaml
kubectl create namespace istio-system
kubectl apply -f generated-manifest.yaml
4. Check that everything works correctly:
kubectl get pod -n istio-system
kubectl get svc -n istio-system
istioctl version
5. Add istioctl 1.5.0 to your path:
cd istio-1.5.0
export PATH=$PWD/bin:$PATH
6. Install the Istio operator for the future upgrade:
istioctl operator init
7. Prepare IstioOperator.yaml:
nano IstioOperator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: default
  tag: 1.5.0
8. Before the upgrade, run the commands below:
kubectl -n istio-system delete service/istio-galley deployment.apps/istio-galley
kubectl delete validatingwebhookconfiguration.admissionregistration.k8s.io/istio-galley
9. Upgrade from 1.4.3 to 1.5.0 with istioctl upgrade and the prepared IstioOperator.yaml:
istioctl upgrade -f IstioOperator.yaml
10. After the upgrade, run the commands below:
kubectl -n istio-system delete deployment istio-citadel istio-galley istio-pilot istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete service istio-citadel istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete horizontalpodautoscaler.autoscaling/istio-pilot horizontalpodautoscaler.autoscaling/istio-telemetry
kubectl -n istio-system delete pdb istio-citadel istio-galley istio-pilot istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete deployment istiocoredns
kubectl -n istio-system delete service istiocoredns
11. Check that everything works correctly:
kubectl get pod -n istio-system
kubectl get svc -n istio-system
istioctl version
12. I have deployed the bookinfo app to check that everything works correctly:
kubectl label namespace default istio-injection=enabled
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
13. Results:
curl -v xx.xx.xxx.xxx/productpage | grep HTTP
HTTP/1.1 200 OK
istioctl version
client version: 1.5.0
control plane version: 1.5.0
data plane version: 1.5.0 (8 proxies)
14. Add istioctl 1.6.8 to your path:
cd istio-1.6.8
export PATH=$PWD/bin:$PATH
15. Prepare IstioOperator.yaml:
nano IstioOperator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: default
  tag: 1.6.8
16. Upgrade from 1.5.0 to 1.6.8 with istioctl upgrade and the prepared IstioOperator.yaml:
istioctl upgrade -f IstioOperator.yaml
17. To upgrade the Istio data plane, you will need to re-inject it.
If you’re using automatic sidecar injection, you can upgrade the sidecar by doing a rolling update for all the pods:
kubectl rollout restart deployment --namespace <namespace with auto injection>
If you’re using manual injection, you can upgrade the sidecar by executing:
kubectl apply -f <(istioctl kube-inject -f <original application deployment yaml>)
18. Results:
curl -v xx.xx.xxx.xxx/productpage | grep HTTP
HTTP/1.1 200 OK
istioctl version
client version: 1.6.8
control plane version: 1.6.8
data plane version: 1.6.8 (8 proxies)
Hope you find this useful. If you have any questions let me know.
The default timeout is 15 seconds.
You can set an explicit timeout in the VirtualService.
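For example, a per-route timeout can be set on the HTTP route of a VirtualService like this (a minimal sketch; the names, host and port are placeholders for your own application):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp                # placeholder
spec:
  hosts:
  - myapp.example.com        # placeholder host
  gateways:
  - myapp-gateway            # placeholder gateway
  http:
  - route:
    - destination:
        host: myapp          # placeholder destination service
        port:
          number: 8080
    timeout: 30s             # overrides the 15s default for this route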

Docker connection refused hanging Django

When I run my container, it just hangs on the next line, and if I run
curl http://0.0.0.0:8000/
I get
Failed to connect to 0.0.0.0 port 8000: Connection refused
This is my Dockerfile:
FROM python:3.6.1
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN pip3 install -r requirements.txt
CMD ["python3", "dockerizing/manage.py", "runserver", "0.0.0.0:8000"]
I also tried doing it through a docker-compose.yml file and again nothing happens. I've searched a lot and haven't found a solution. This is the docker-compose.yml:
version: "3"
services:
web:
image: app1
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "8000:8000"
networks:
- webnet
networks:
webnet:
By the way, if I run docker ps with my app1 image I get this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e9633657f060 app1 "python3 dockerizi..." 5 seconds ago Up 5 seconds friendly_dijkstra
When I deploy the service with the django-compose.yml and run docker ps I get this:
MacBook-Pro-de-Jesus:docker-django Almaral$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
13677a71d9d5 app1:latest "python3 dockerizin..." 15 seconds ago Up 11 seconds getstartedlab_web.1.cq3zqmpfsii5g6m5r9qsnmtb1
c6693118ef70 app1:latest "python3 dockerizin..." 16 seconds ago Up 12 seconds getstartedlab_web.4.r472oh80s4zd1yymj447f1df6
f3822e47970b app1:latest "python3 dockerizin..." 16 seconds ago Up 12 seconds getstartedlab_web.2.lkp43v9h30esjohcnf3pe31hi
f66a4038ebdf app1:latest "python3 dockerizin..." 16 seconds ago Up 12 seconds getstartedlab_web.5.xxu01ruebd84tnlxmoymsu0vo
e3d31c419c11 app1:latest "python3 dockerizin..." 16 seconds ago Up 13 seconds getstartedlab_web.3.uqswgirmg22sjnekzmf5b4xo7
Your docker ps output shows nothing in the PORTS column. That means that there's no port forwarding from the host to the container.
[...] STATUS PORTS NAMES
[...] Up 5 seconds friendly_dijkstra
If you use the command docker run to run your container, you should explicitly specify the port number on both the host and the container using the option -p hostPort:containerPort:
docker run -p 8000:8000 app1
Now, running docker ps should show port forwarding.
[...] STATUS PORTS NAMES
[...] Up 5 seconds 0.0.0.0:8000->8000/tcp friendly_dijkstra
If you are using docker-compose to start your containers, the host and container ports are already configured in your docker-compose.yml file, so you don't need a command line option.
docker-compose up web
To use docker-compose, you have to install it on the host. It's a Python module, so you can install it with pip: pip install docker-compose.
In your docker-compose config file, modify your port mapping from 8000:8000 to 127.0.0.1:8000:8000.
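A minimal sketch of what that change looks like in the compose file (only the relevant part of the web service shown):
services:
  web:
    ports:
      # publish the container port on the host's loopback interface only
      - "127.0.0.1:8000:8000"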