Not able to run Elasticsearch in Docker on an Amazon EC2 instance

I am trying to run Elasticsearch 7.7 in a Docker container on a t2.medium instance. I went through this SO question and the official ES docs on installing ES with Docker, but even after setting discovery.type: single-node it is not bypassing the bootstrap checks mentioned in several posts.
My elasticsearch.yml file
cluster.name: scanner
node.name: node-1
network.host: 0.0.0.0
discovery.type: single-node
cluster.initial_master_nodes: node-1 # tried explicitly giving this but no luck
xpack.security.enabled: true
My Dockerfile
FROM docker.elastic.co/elasticsearch/elasticsearch:7.7.0
COPY elasticsearch.yml /usr/share/elasticsearch/elasticsearch.yml
USER root
RUN chmod go-w /usr/share/elasticsearch/elasticsearch.yml
RUN chown root:elasticsearch /usr/share/elasticsearch/elasticsearch.yml
USER elasticsearch
And this is how I am building and running the image.
docker build -t es:latest .
docker run --ulimit nofile=65535:65535 -p 9200:9200 es:latest
And the relevant error logs:
75", "message": "bound or publishing to a non-loopback address,
enforcing bootstrap checks" } ERROR: 1 bootstrap checks failed 1:
the default discovery settings are unsuitable for production use; at
least one of [discovery.seed_hosts, discovery.seed_providers,
cluster.initial_master_nodes] must be configured ERROR: Elasticsearch
did not exit normally - check the logs at
/usr/share/elasticsearch/logs/docker-cluster.log

Run Elasticsearch as a single node with Docker Compose:
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    container_name: elasticsearch
    environment:
      - node.name=vibhuvi-node
      - discovery.type=single-node
      - cluster.name=vibhuvi-es-data-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - vibhuviesdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
volumes:
  vibhuviesdata:
    driver: local
Run
docker-compose up -d
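Once the container is up, a quick sanity check from the EC2 host (assuming port 9200 is published as above and security is not enabled) is:

# Basic node info and cluster health
curl -s http://localhost:9200
curl -s http://localhost:9200/_cluster/health?pretty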

Related

Deploying an ECS application to AWS using docker compose

I am following the AWS tutorial on deploying an ECS application using docker compose.
When I run docker compose up, I only receive the message docker UpdateInProgress User Initiated, but nothing else happens:
[+] Running 0/0
- docker UpdateInProgress User Initiated 0.0s
Previously, this worked fine and all the ECS resources (cluster, task definitions, services, load balancer) had been created.
For some reason, now, this does not work anymore (although I have not changed my docker-compose.yml file).
docker-compose.yml:
version: '3'
services:
  postgres:
    image: ${AWS_DOCKER_REGISTRY}/postgres
    networks:
      - my-network
    ports:
      - "5432:5432"
    volumes:
      - postgres:/data/postgres
  server:
    image: ${AWS_DOCKER_REGISTRY}/server
    networks:
      - my-network
    env_file:
      - .env
    ports:
      - "${PORT}:${PORT}"
    depends_on:
      - postgres
    entrypoint: "/server/run.sh"
  pgadmin:
    image: ${AWS_DOCKER_REGISTRY}/pgadmin
    networks:
      - my-network
    depends_on:
      - postgres
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:${PGADMIN_PORT:-5050}"
networks:
  my-network:
    #driver: bridge
volumes:
  postgres:
  pgadmin:
I also switched to the correct Docker context before (docker context use my-aws-context).
And I have updated to the latest version of Docker Desktop for Windows and AWS CLI.
Did someone already have a similar problem?
From the message it appears that you are trying to compose up a stack that already exists (on AWS), so it is trying to update the CFN stack. Can you check if this is the case? You have a couple of options if that is what's happening: 1) delete the CFN stack (either in AWS or with docker compose down), or 2) launch docker compose up with the flag --project-name string (where string is an arbitrary name of your choice); see the commands sketched below. By default Compose will use the directory name as the project name, so if you compose up twice it will try to work on the same stack.
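A rough sketch of those two options (the project name is a placeholder):

# Option 1: remove the existing CloudFormation stack, then redeploy
docker compose down
docker compose up

# Option 2: deploy under a different project/stack name
docker compose --project-name my-other-stack up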

Docker: compose file is incompatible with Amazon ECS

I am trying to deploy my Docker image to AWS ECS. I have created the ECR repository and completed all the required steps up to pushing the image to ECR.
My docker-compose.yaml looks like this
version: '3'
services:
  djangoapp:
    image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/abc:latest # uri after pushing the image
    build: .
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
    networks:
      - nginx_network
      - database1_network
    depends_on:
      - database1
  nginx:
    image: nginx:1.13
    ports:
      - 80:5000
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
    depends_on:
      - djangoapp
    networks:
      - nginx_network
  database1:
    image: postgres:10
    env_file:
      - config/db/database1_env
    networks:
      - database1_network
    volumes:
      - database1_volume:/var/lib/postgresql/data
networks:
  nginx_network:
    driver: bridge
  database1_network:
    driver: bridge
volumes:
  database1_volume:
  static_volume:  # <-- declare the static volume
  media_volume:  # <-- declare the media volume
I am trying to run the command:
docker ecs compose -n abc up
And I get the following error:
WARN[0000] services.build: unsupported attribute
WARN[0000] services.volumes: unsupported attribute
ERRO[0000] published port can't be set to a distinct value than container port: incompatible attribute
WARN[0000] services.volumes: unsupported attribute
WARN[0000] services.env_file: unsupported attribute
WARN[0000] services.volumes: unsupported attribute
WARN[0000] networks.driver: unsupported attribute
WARN[0000] networks.driver: unsupported attribute
compose file is incompatible with Amazon ECS
I am using the latest version of Docker, i.e. 19.03.08, and the latest aws-cli/2.0.39.
I was facing the same trouble and was able to get past it by removing all attributes in docker-compose.yaml except for image; a minimal sketch follows after the list below.
In your djangoapp service, under the image attribute, you set the value to a URI with the comment "the uri after pushing the image". Presumably this is the URI of a locally built Docker image of djangoapp, which was pushed to an ECR repository.
Since you already built and pushed the djangoapp image to ECR, just leave the image attribute and comment out all the other attributes listed in the error message from docker-compose.yaml:
build
volumes
...
In my case it helped.
The list of supported attributes: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose-parameters.html
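As an illustration, a stripped-down compose file along those lines might look like this (keeping only the image URI from the question; everything else is removed):

version: '3'
services:
  djangoapp:
    image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/abc:latest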
Docker decided to not allow for this. This is what they said on a Slack convo: "We decided to not support this as this would only apply to ingress traffic, not service-to-service, which would be both confusing and inconsistent with local development".
I'm not sure what you're trying to achieve, but it looks like you want to have a django service backed by a postgres database, and you want to use nginx as a reverse proxy to forward requests to django?
If you are trying to do that, then just use a single network for your cluster, and get rid of the driver option(s) for the networks. Removing those driver options will get rid of the networks.driver errors. The networks in Docker Compose map to EC2 security groups. They're not like the private/bridged/NAT networks you would use in VM management tools like VMware or Hyper-V. Putting all of your services on the same network will allow them to communicate with each other, if that's what you're looking for. They will also be able to communicate out to the internet, but only those that have ports set will be reachable directly from the internet.
Regarding ports, only symmetrical port mapping is supported in ECS. This means you can't map one external port to a different internal port. They have to be the same. So your nginx configuration must use either 80, or 5000 - not both. Fixing that will get rid of your 3rd error.
As @Maksym specified, you don't need the build option under your djangoapp service. You already have the image built and are just deploying it.
For your volumes, you're trying to mount a host path with .:/opt/services/djangoapp/src. When you use docker compose to deploy to ECS, it uses CloudFormation to deploy your stack and each of your services runs "serverless", so there isn't a host to mount a path from. In your case, since you're building the djangoapp image yourself, just update your Dockerfile to copy the desired contents into the /opt/services/djangoapp/src folder as part of the image build. Do the same with your nginx service: create your own nginx image that includes the files you want in /etc/nginx/conf.d, push it to ECR, and then use that image in your compose yaml. One of those files should be a configuration to reverse proxy port 80 to your djangoapp:5000 port.
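For example, a minimal sketch of such a custom nginx image (assuming the reverse-proxy configuration lives in the ./config/nginx/conf.d directory from the question):

# Dockerfile for the custom nginx image
FROM nginx:1.13
# Bake the reverse-proxy configuration into the image instead of bind-mounting it
COPY ./config/nginx/conf.d /etc/nginx/conf.d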
For your other errors, I'm not sure. The configuration for env_file looks fine, as do your named volume mappings like static_volume:/opt/services/djangoapp/static. I don't know if updating the version heading at the top of your file to 3.7 will help.
In the end, your file should look similar to this:
version: '3.7'
services:
  djangoapp:
    image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/abc:latest # uri after pushing the image
    volumes:
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
    networks:
      - app_network
    depends_on:
      - database1
  nginx:
    image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/my-nginx:latest
    ports:
      - 80
    volumes:
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
    depends_on:
      - djangoapp
    networks:
      - app_network
  database1:
    image: postgres:10
    env_file:
      - config/db/database1_env
    networks:
      - app_network
    volumes:
      - database1_volume:/var/lib/postgresql/data
networks:
  app_network:
    name: app_network
volumes:
  database1_volume:
    name: app_database1_volume
  static_volume:
    name: app_static_volume
  media_volume:
    name: app_media_volume
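With a compose file along those lines, the deployment itself with a recent Docker version that ships the ECS integration is roughly (the context name is a placeholder):

docker context create ecs myecscontext
docker context use myecscontext
docker compose up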

Run Elasticsearch on AWS EC2 with Docker

I'm trying to run Elasticsearch with Docker on an AWS EC2 instance, but after it starts it stops within a few seconds. Does anyone have any experience with what the problem could be?
This is my Elasticsearch config in the docker-compose.yaml:
elasticsearch:
  build:
    context: ./elasticsearch
    args:
      - ELK_VERSION=${ELK_VERSION}
  volumes:
    - elasticsearch:/usr/share/elasticsearch/data
  environment:
    - cluster.name=laradock-cluster
    - node.name=laradock-node
    - bootstrap.memory_lock=true
    - discovery.type=single-node
    - "ES_JAVA_OPTS=-Xms7g -Xmx7g"
    - xpack.security.enabled=false
    - xpack.monitoring.enabled=false
    - xpack.watcher.enabled=false
    - cluster.initial_master_nodes=laradock-node
  ulimits:
    memlock:
      soft: -1
      hard: -1
    nofile:
      soft: 65536
      hard: 65536
  ports:
    - "${ELASTICSEARCH_HOST_HTTP_PORT}:9200"
    - "${ELASTICSEARCH_HOST_TRANSPORT_PORT}:9300"
  depends_on:
    - php-fpm
  networks:
    - frontend
    - backend
And This is my Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:7.5.1
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch discovery-ec2
EXPOSE 9200 9300
Also, I did sysctl -w vm.max_map_count=655360 on my AWS EC2 instance
Note: my AWS EC2 instance runs Ubuntu 18.04
Thanks
I am not sure about your docker-compose.yaml, as you are not referring to it in your Dockerfile, but I am able to reproduce the issue. I launched the same Ubuntu 18.04 instance in my AWS account and used your Dockerfile to launch an ES Docker container using the commands below:
docker build --tag=elasticsearch-custom .
docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom
And my docker container was also stopping just after starting up as shown below:
ubuntu#ip-172-31-32-95:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
03cde4a19389 elasticsearch-custom "/usr/local/bin/dock…" 33 seconds ago Exited (78) 6 seconds ago mystifying_napier
When checked the logs on console, when starting the docker, I found below error:
ERROR: [1] bootstrap checks failed [1]: the default discovery settings
are unsuitable for production use; at least one of
[discovery.seed_hosts, discovery.seed_providers,
cluster.initial_master_nodes] must be configured
This is a very well-known error and can easily be resolved just by adding -e "discovery.type=single-node" to the docker run command. After adding this to the docker run command as below:
docker run -e "discovery.type=single-node" -ti -v /usr/share/elasticsearch/data elasticsearch-custom
it's running fine, as shown below:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
191fc3dceb5a elasticsearch-custom "/usr/local/bin/dock…" 8 minutes ago Up 8 minutes 9200/tcp, 9300/tcp recursing_elgamal
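If you also need to reach Elasticsearch from outside the container (e.g. on port 9200 of the EC2 host), the same flag can be combined with port publishing; a sketch, assuming the same locally built image name:

docker run -e "discovery.type=single-node" -p 9200:9200 -p 9300:9300 -ti -v /usr/share/elasticsearch/data elasticsearch-custom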

How to fix "unable to prepare context: unable to evaluate symlinks in Dockerfile path" error in CircleCI

I'm setting up CircleCI to automatically build and deploy to AWS ECR & ECS.
But the build fails because no Dockerfile is found.
Maybe this is because I set up docker-compose for multiple Docker images, but I don't know how to resolve this issue.
Is there no way to do this other than creating a Dockerfile instead of docker-compose?
front: React
backend: Golang
ci-tool: circle-ci
db: mysql
article
 ├ .circleci
 ├ client
 ├ api
 └ docker-compose.yml
I set .circleci/config.yml.
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@6.0.0
  aws-ecs: circleci/aws-ecs@0.0.8
workflows:
  build_and_push_image:
    jobs:
      - aws-ecr/build-and-push-image:
          region: AWS_REGION
          account-url: AWS_ECR_ACCOUNT_URL
          repo: 'article-ecr-jpskgc'
          tag: '${CIRCLE_SHA1}'
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: 'article-task-jpskgc'
          cluster-name: 'article-cluster-jpskgc'
          service-name: 'article-service-jpskgc'
          container-image-name-updates: 'container=article-container-jpskgc,tag=${CIRCLE_SHA1}'
Here is the source code in github.
https://github.com/jpskgc/article
I expect the build/deploy via CircleCI to ECR/ECS to succeed, but it actually fails.
This is the error log on circle-ci.
Build docker image
Exit code: 1
#!/bin/bash -eo pipefail
docker build \
\
-f Dockerfile \
-t $AWS_ECR_ACCOUNT_URL/article-ecr-jpskgc:${CIRCLE_SHA1} \
.
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/circleci/project/Dockerfile: no such file or directory
Exited with code 1
You must use a Dockerfile; check out the documentation for the orb you are using (please read through it here). Also, docker-compose ≠ docker, so I can confirm that one cannot be used as a substitute for the other.
Given your docker-compose.yml, I have a few suggestions for your general setup and CI.
For reference here is the docker-compose.yml in question:
version: '3'
services:
  db:
    image: mysql
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: article
      MYSQL_USER: docker
      MYSQL_PASSWORD: docker
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '3050:80'
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./api
    volumes:
      - ./api:/app
    ports:
      - 2345:2345
    depends_on:
      - db
    tty: true
    environment:
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
    ports:
      - 3000:3000
From the above we have the various components, just as you have stated:
MySQL Database
Nginx Loadbalancer
Client App
API Server
Here are my recommendations for each component:
MySQL Database
Since you are deploying to AWS I recommend deploying a MySQL instance on the free tier, please follow this documentation: https://aws.amazon.com/rds/free. With this you can remove your database from CI, which is recommended as ECS is not the ideal service to run a MySQL server.
Nginx Loadbalancer
Because you are using ECS, this is not required: AWS handles all the load balancing for you, so the Nginx container is redundant.
Client App
Because this is a React application, you shouldn't deploy it to ECS; that is not cost-effective. You would rather deploy it to Amazon S3. There are many resources on how to do this. You may follow this guide, though you may have to make a few changes based on the structure of your repository.
This will reduce your overall cost and it makes more sense than an entire Docker container running just to serve static files.
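A rough sketch of that approach (the bucket name and build directory are placeholders, and the bucket must be set up for static website hosting):

# Build the React app and sync the static files to S3
npm run build
aws s3 sync build/ s3://your-react-app-bucket --delete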
API Server
This is the only thing that should be running in ECS, and all you need to do is point to the correct Dockerfile in your configuration for it to be built and pushed successfully.
You may therefore edit your circle ci config as follows, assuming we are using the same Dockerfile in your docker-compose.yml:
build_and_push_image:
  jobs:
    - aws-ecr/build-and-push-image:
        region: AWS_REGION
        dockerfile: Dockerfile.dev
        path: ./api
        account-url: AWS_ECR_ACCOUNT_URL
        repo: 'article-ecr-jpskgc'
        tag: '${CIRCLE_SHA1}'
Things to Note
My answer does not include:
How to load balance your API service please follow these docs on how to do so: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
Details on setting up the MySQL server; it is assumed you will follow the AWS documentation provided above.
Things you must do:
Point your client app to the API server; this will probably require a code change from what I've seen.
I want to stress yet again that you must load balance your API server according to these docs.
You do not need to edit your docker-compose.yml

Django Docker/Kubernetes Postgres data not appearing

I just tried switching from docker-compose to docker stacks/kubernetes. In compose I was able to specify where the postgres data volume was and the data persisted nicely.
volumes:
  - ./postgres-data:/var/lib/postgresql/data
I tried doing the same thing with the stack file; I can connect to the pod and use psql to see the schema, but none of the data entered from docker-compose is there.
Any ideas why this might be?
Here's the stack.yml
version: '3.3'
services:
  django:
    image: image
    build:
      context: .
      dockerfile: docker/Dockerfile
    deploy:
      replicas: 5
    environment:
      - DJANGO_SETTINGS_MODULE=config.settings.local
      - SECRET_KEY=password
      - NAME=postgres
      - USER=postgres
      - HOST=db
      - PASSWORD=password
      - PORT=5432
    volumes:
      - .:/application
    command: ["gunicorn", "--bind 0.0.0.0:8000", "config.wsgi"]
    ports:
      - "8000:8000"
    links:
      - db
  db:
    image: mdillon/postgis:9.6-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
You didn't mention how your cluster is provisioned, where it is running, etc., so I will assume we're talking about local tests here. If so, you probably have local docker/docker-compose and minikube installed.
If that is the case, please keep in mind that minikube runs in its own VM, so it will not be affected by changes you make on your host with e.g. docker, as it has its own filesystem inside the VM.
Hint: you can run docker against the Docker daemon of minikube if you first run eval $(minikube docker-env), as sketched below.
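A quick sketch of that workflow (the image name and tag are placeholders):

# Point the local docker CLI at minikube's Docker daemon for this shell session
eval $(minikube docker-env)
# Images built now end up inside the minikube VM and are visible to its pods
docker build -t myapp:dev .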
For Docker stacks, run the docker service inspect command below; it should show the mount point of the Postgres container.
docker service inspect --format='{{range .Spec.TaskTemplate.ContainerSpec.Mounts}} {{.Source}}{{end}}' <StackName>
Fixed in the last Docker Edge update.