Using --net=host in Tekton sidecars - kubectl

I am creating a Tekton project which spawns Docker images which in turn run a few kubectl commands. I have accomplished this by using a docker:dind image as a sidecar in Tekton and setting
securityContext:
  privileged: true
env:
However, one of the tasks is failing, since it needs an equivalent of --net=host in the docker run command.
I have tried to set a podTemplate with hostNetwork: true, but then the task with the sidecar fails to start the Docker daemon.
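For reference, this is a minimal sketch of the kind of TaskRun podTemplate I tried (the resource names here are illustrative, not from my real project):
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: my-taskrun        # illustrative name
spec:
  taskRef:
    name: my-task         # illustrative name
  podTemplate:
    hostNetwork: true     # pod-level equivalent of docker run --net=host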
Any idea how I could implement --net=host in the Task YAML file? It would be really helpful.
Snippet of my task with the sidecar:
sidecars:
  - image: mypvtreg:exv1
    name: mgmtserver
    args:
      - --storage-driver=vfs
      - --userland-proxy=false
      # - --net=host
    securityContext:
      privileged: true
    env:
      # Write generated certs to the path shared with the client.
      - name: DOCKER_TLS_CERTDIR
        value: /certs
    volumeMounts:
      - mountPath: /certs

As commented by @SYN: using docker:dind as a sidecar, the builder container executing in your Task steps should connect to 127.0.0.1. That's how you would talk to your dind sidecar.
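A minimal sketch of a step wired that way (the step name, image, and volume name below are assumptions for illustration, matching the DOCKER_TLS_CERTDIR=/certs setting from the sidecar above):
steps:
  - name: build             # hypothetical step name
    image: docker:latest    # any image that ships the docker CLI
    env:
      # Talk to the dind sidecar over loopback, with TLS certs from the shared volume.
      - name: DOCKER_HOST
        value: tcp://127.0.0.1:2376
      - name: DOCKER_CERT_PATH
        value: /certs/client
      - name: DOCKER_TLS_VERIFY
        value: "1"
    script: |
      docker version
    volumeMounts:
      - mountPath: /certs
        name: dind-certs    # hypothetical volume name, shared with the sidecar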

Related

GCP Helm Cloud Builder

Just curious, why isn't there an officially supported Helm cloud builder? It seems like a very common requirement, yet I'm not seeing one in the list here:
https://github.com/GoogleCloudPlatform/cloud-builders
I was previously using alpine/helm in my cloudbuild.yaml for my Helm deployment as follows:
steps:
  # Build app image
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - -t
      - $_IMAGE_REPO/$_CONTAINER_NAME:$COMMIT_SHA
      - ./cloudbuild/$_CONTAINER_NAME/
  # Push my-app image to Google Cloud Registry
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - $_IMAGE_REPO/$_CONTAINER_NAME:$COMMIT_SHA
  # Configure a kubectl workspace for this project
  - name: gcr.io/cloud-builders/kubectl
    args:
      - cluster-info
    env:
      - CLOUDSDK_COMPUTE_REGION=$_CUSTOM_REGION
      - CLOUDSDK_CONTAINER_CLUSTER=$_CUSTOM_CLUSTER
      - KUBECONFIG=/workspace/.kube/config
  # Deploy with Helm
  - name: alpine/helm
    args:
      - upgrade
      - -i
      - $_CONTAINER_NAME
      - ./cloudbuild/$_CONTAINER_NAME/k8s
      - --set
      - image.repository=$_IMAGE_REPO/$_CONTAINER_NAME,image.tag=$COMMIT_SHA
      - -f
      - ./cloudbuild/$_CONTAINER_NAME/k8s/values.yaml
    env:
      - KUBECONFIG=/workspace/.kube/config
      - TILLERLESS=false
      - TILLER_NAMESPACE=kube-system
      - USE_GKE_GCLOUD_AUTH_PLUGIN=True
timeout: 1200s
substitutions:
  # substitutionOption: ALLOW_LOOSE
  # dynamicSubstitutions: true
  _CUSTOM_REGION: us-east1
  _CUSTOM_CLUSTER: demo-gke
  _IMAGE_REPO: us-east1-docker.pkg.dev/fakeproject/my-docker-repo
  _CONTAINER_NAME: app2
options:
  logging: CLOUD_LOGGING_ONLY
  # In this option we are providing the worker pool name that we have created in the previous step
  workerPool: 'projects/fakeproject/locations/us-east1/workerPools/cloud-build-pool'
And this was working with no issues. Then recently it just started failing with the following error, so I'm guessing a change was made recently:
Error: Kubernetes cluster unreachable: Get "https://10.10.2.2/version": getting credentials: exec: executable gke-gcloud-auth-plugin not found"
I get this error regularly on VMs and can work around it by setting USE_GKE_GCLOUD_AUTH_PLUGIN=True, but that does not seem to fix the issue here if I add it to the env section. So I'm looking for recommendations on how to use Helm with Cloud Build. alpine/helm was just something I randomly tried, and it was working for me up until now, but there are probably better solutions out there.
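One idea I have not verified yet (a rough sketch; the base image and package name are assumptions on my part) is to build a custom Helm builder image that bakes the plugin in:
# Hypothetical custom Helm builder with the GKE auth plugin preinstalled
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:slim
RUN apt-get update && \
    apt-get install -y curl google-cloud-sdk-gke-gcloud-auth-plugin && \
    curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
ENV USE_GKE_GCLOUD_AUTH_PLUGIN=True
ENTRYPOINT ["helm"]
But I'd still prefer an officially supported approach if one exists.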
Thanks!

Deploying an ECS application to AWS using docker compose

I am following the AWS tutorial on deploying an ECS application using docker compose.
When I run docker compose up, I only receive the message docker UpdateInProgress User Initiated, but nothing else happens:
[+] Running 0/0
- docker UpdateInProgress User Initiated 0.0s
Previously, this worked fine and all the ECS resources (cluster, task definitions, services, load balancer) had been created.
For some reason, now, this does not work anymore (although I have not changed my docker-compose.yml file).
docker-compose.yml:
version: '3'
services:
  postgres:
    image: ${AWS_DOCKER_REGISTRY}/postgres
    networks:
      - my-network
    ports:
      - "5432:5432"
    volumes:
      - postgres:/data/postgres
  server:
    image: ${AWS_DOCKER_REGISTRY}/server
    networks:
      - my-network
    env_file:
      - .env
    ports:
      - "${PORT}:${PORT}"
    depends_on:
      - postgres
    entrypoint: "/server/run.sh"
  pgadmin:
    image: ${AWS_DOCKER_REGISTRY}/pgadmin
    networks:
      - my-network
    depends_on:
      - postgres
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:${PGADMIN_PORT:-5050}"
networks:
  my-network:
    #driver: bridge
volumes:
  postgres:
  pgadmin:
I also switched to the correct Docker context before (docker context use my-aws-context).
And I have updated to the latest version of Docker Desktop for Windows and AWS CLI.
Has someone already had a similar problem?
From the message it appears that you are trying to compose up a stack that already exists (on AWS), so Compose is trying to update the existing CloudFormation (CFN) stack. Can you check if this is the case? You have a couple of options if that is what's happening: 1) delete the CFN stack (either in AWS or with docker compose down), or 2) launch docker compose up with the flag --project-name string (where string is an arbitrary name of your choice). By default, Compose uses the directory name as the project name, so if you compose up twice it will try to work on the same stack.
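For example (a sketch; the project name below is arbitrary):
# Option 1: tear down the existing stack, then bring it up again
docker compose down
docker compose up
# Option 2: deploy under a fresh stack name
docker compose --project-name my-new-stack up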

Not able to run Elasticsearch in Docker on an Amazon EC2 instance

I am trying to run Elasticsearch 7.7 in a Docker container on a t2.medium instance. I went through this SO question and the official ES docs on installing ES using Docker, but even after setting discovery.type: single-node it's not bypassing the bootstrap checks mentioned in several posts.
My elasticsearch.yml file:
cluster.name: scanner
node.name: node-1
network.host: 0.0.0.0
discovery.type: single-node
cluster.initial_master_nodes: node-1 # tried explicitly giving this but no luck
xpack.security.enabled: true
My Dockerfile
FROM docker.elastic.co/elasticsearch/elasticsearch:7.7.0
COPY elasticsearch.yml /usr/share/elasticsearch/elasticsearch.yml
USER root
RUN chmod go-w /usr/share/elasticsearch/elasticsearch.yml
RUN chown root:elasticsearch /usr/share/elasticsearch/elasticsearch.yml
USER elasticsearch
And this is how I am building and running the image.
docker build -t es:latest .
docker run --ulimit nofile=65535:65535 -p 9200:9200 es:latest
And the relevant error logs:
75", "message": "bound or publishing to a non-loopback address, enforcing bootstrap checks" }
ERROR: [1] bootstrap checks failed
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log
Elasticsearch in a single node:
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    container_name: elasticsearch
    environment:
      - node.name=vibhuvi-node
      - discovery.type=single-node
      - cluster.name=vibhuvi-es-data-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - vibhuviesdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
volumes:
  vibhuviesdata:
    driver: local
Run
docker-compose up -d
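Once it is up, you can sanity-check that the single node formed a cluster (port as mapped in the compose file above):
curl http://localhost:9200/_cluster/health?pretty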

How to fix "unable to prepare context: unable to evaluate symlinks in Dockerfile path" error in CircleCI

I'm setting up CircleCI to automatically build/deploy to AWS ECR & ECS.
But the build fails due to there being no Dockerfile.
Maybe this is because I set up docker-compose for multiple Docker images.
But I don't know how to resolve this issue.
Is there no way to use a Dockerfile instead of docker-compose?
frontend: React
backend: Golang
CI tool: CircleCI
db: MySQL
article
 ├ .circleci
 ├ client
 ├ api
 └ docker-compose.yml
I set up .circleci/config.yml as follows:
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@6.0.0
  aws-ecs: circleci/aws-ecs@0.0.8
workflows:
  build_and_push_image:
    jobs:
      - aws-ecr/build-and-push-image:
          region: AWS_REGION
          account-url: AWS_ECR_ACCOUNT_URL
          repo: 'article-ecr-jpskgc'
          tag: '${CIRCLE_SHA1}'
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: 'article-task-jpskgc'
          cluster-name: 'article-cluster-jpskgc'
          service-name: 'article-service-jpskgc'
          container-image-name-updates: 'container=article-container-jpskgc,tag=${CIRCLE_SHA1}'
Here is the source code on GitHub:
https://github.com/jpskgc/article
I expect the build/deploy via CircleCI to ECR/ECS to succeed, but it actually fails.
This is the error log on circle-ci.
Build docker image
Exit code: 1
#!/bin/bash -eo pipefail
docker build \
\
-f Dockerfile \
-t $AWS_ECR_ACCOUNT_URL/article-ecr-jpskgc:${CIRCLE_SHA1} \
.
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/circleci/project/Dockerfile: no such file or directory
Exited with code 1
You must use a Dockerfile; check out the documentation for the orb you are using. Please read through it here. Also, docker-compose ≠ docker, so I will confirm that one cannot be used as a substitute for the other.
Given your docker-compose.yml, I have a few suggestions for your general setup and CI.
For reference here is the docker-compose.yml in question:
version: '3'
services:
  db:
    image: mysql
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: article
      MYSQL_USER: docker
      MYSQL_PASSWORD: docker
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '3050:80'
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./api
    volumes:
      - ./api:/app
    ports:
      - 2345:2345
    depends_on:
      - db
    tty: true
    environment:
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
    ports:
      - 3000:3000
From the above we have the various components, just as you have stated:
MySQL Database
Nginx Loadbalancer
Client App
API Server
Here are my recommendations for each component:
MySQL Database
Since you are deploying to AWS, I recommend deploying a MySQL instance on the free tier; please follow this documentation: https://aws.amazon.com/rds/free. With this you can remove your database from CI, which is recommended, as ECS is not the ideal service to run a MySQL server.
Nginx Loadbalancer
Because you are using ECS, this is not required, as AWS handles all load balancing for you; the Nginx service is redundant.
Client App
Because this is a React application, you shouldn't deploy it to ECS; that is not cost effective. You would rather deploy it to Amazon S3. There are many resources on how to do this. You may follow this guide, though you may have to make a few changes based on the structure of your repository.
This will reduce your overall cost, and it makes more sense than an entire Docker container running just to serve static files.
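As a rough sketch of that flow (the bucket name below is a placeholder, not from your repository):
npm run build                                      # produce the static React build
aws s3 sync build/ s3://your-app-bucket --delete   # upload it to S3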
API Server
This is the only thing that should be running in ECS, and all you need to do is point to the correct Dockerfile in your configuration for it to be built and pushed successfully.
You may therefore edit your CircleCI config as follows, assuming we are using the same Dockerfile as in your docker-compose.yml:
build_and_push_image:
  jobs:
    - aws-ecr/build-and-push-image:
        region: AWS_REGION
        dockerfile: Dockerfile.dev
        path: ./api
        account-url: AWS_ECR_ACCOUNT_URL
        repo: 'article-ecr-jpskgc'
        tag: '${CIRCLE_SHA1}'
Things to Note
My answer does not include:
How to load balance your API service; please follow these docs on how to do so: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
Details on setting up the MySQL server; it is assumed you will follow the AWS documentation provided above.
Things you must do:
Point your client app to the API server; this will probably require a code change, from what I've seen.
I want to stress yet again that you must load balance your API server according to these docs.
You do not need to edit your docker-compose.yml.

How can I get traefik to work on my cloud architecture?

Okay, so I've spent a day on my EC2 instance with Traefik and Docker set up, but it doesn't seem to be working as described in the docs. I can get the whoami example running, but that doesn't really illustrate what I'm looking for.
For my example, I have three AWS API Gateway endpoints and I need to point them to my EC2 IP address, where my Traefik frontend setup routes them to some backend, though I'm still uncertain what kind of backend to use.
I can't seem to find a good YAML example that clearly illustrates something to suit my purpose and needs.
Can anyone point me in the right direction? Any good example Docker YAML or configuration for my example below? Thanks!
I took this article as a guide to provision a Docker installation with Traefik.
EDIT: Before this, create a docker network called proxy.
$ docker network create proxy
version: '3'
networks:
  proxy:
    external: true
  internal:
    external: false
services:
  reverse-proxy:
    image: traefik:latest # The official Traefik docker image
    command: --api --docker --acme.email="your-email" # Enables the web UI and tells Træfik to listen to docker
    restart: always
    labels:
      - traefik.frontend.rule=Host:traefik.your-server.net
      - traefik.port=8080
    networks:
      - proxy
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD/traefik.toml:/etc/traefik/traefik.toml
      - $PWD/acme.json:/acme.json
  db:
    image: mariadb:10.3
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: r00tPassw0rd
    volumes:
      - vol-db:/var/lib/mysql
    networks:
      - internal # you do not need to expose this via traefik, so keep it on the internal network
    labels:
      - traefik.enable=false
  api-1:
    image: your-api-image
    restart: always
    networks:
      - internal
      - proxy
    labels:
      - "traefik.docker.network=proxy"
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:api1.yourdomain.com"
      - "traefik.port=80"
      - "traefik.protocol=http"
  api-2:
    image: your-api-2-image
    restart: always
    networks:
      - internal
      - proxy
    labels:
      - "traefik.docker.network=proxy"
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:api2.yourdomain.com"
      - "traefik.port=80"
      - "traefik.protocol=http"
volumes:
  vol-db: # top-level declaration so the db service's named volume resolves
Note: Use this if you want to enable SSL as well. Please note that this might not work on a local server, as Let's Encrypt cannot complete the challenge for the SSL setup.
Create a blank file acme.json and set its permissions to 0600:
touch acme.json
chmod 0600 acme.json
After setting up everything,
docker-compose config # this is optional though.
and then,
docker-compose up
I have posted my traefik.toml here
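For reference, a minimal Traefik v1 traefik.toml along these lines would look like this (a sketch; the email and the HTTP-to-HTTPS redirect are assumptions, adjust to your setup):
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    # Optional: redirect all HTTP traffic to HTTPS
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

# Let's Encrypt (ACME) configuration; certificates are stored in acme.json
[acme]
email = "your-email"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
  [acme.httpChallenge]
  entryPoint = "http"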
I hope this helps.
Let me know if you face any issues.
Regards,
Kushal.