Run Elasticsearch on AWS EC2 with Docker

I'm trying to run Elasticsearch with Docker on an AWS EC2 instance, but a few seconds after it starts, the container stops. Does anyone have experience with what the problem could be?
This is my Elasticsearch config in the docker-compose.yaml:
elasticsearch:
  build:
    context: ./elasticsearch
    args:
      - ELK_VERSION=${ELK_VERSION}
  volumes:
    - elasticsearch:/usr/share/elasticsearch/data
  environment:
    - cluster.name=laradock-cluster
    - node.name=laradock-node
    - bootstrap.memory_lock=true
    - discovery.type=single-node
    - "ES_JAVA_OPTS=-Xms7g -Xmx7g"
    - xpack.security.enabled=false
    - xpack.monitoring.enabled=false
    - xpack.watcher.enabled=false
    - cluster.initial_master_nodes=laradock-node
  ulimits:
    memlock:
      soft: -1
      hard: -1
    nofile:
      soft: 65536
      hard: 65536
  ports:
    - "${ELASTICSEARCH_HOST_HTTP_PORT}:9200"
    - "${ELASTICSEARCH_HOST_TRANSPORT_PORT}:9300"
  depends_on:
    - php-fpm
  networks:
    - frontend
    - backend
And this is my Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:7.5.1
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch discovery-ec2
EXPOSE 9200 9300
Also, I did sysctl -w vm.max_map_count=655360 on my AWS EC2 instance
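A minimal sketch of making that setting persist across reboots (assuming a standard Ubuntu layout; not something I originally did):
echo "vm.max_map_count=655360" | sudo tee -a /etc/sysctl.conf   # persist the kernel setting
sudo sysctl -p                                                   # reload it without rebooting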
Note: my AWS EC2 instance runs Ubuntu 18.04.
Thanks

I am not sure about your docker-compose.yaml, since it is not referenced by your Dockerfile, but I was able to reproduce the issue. I launched the same Ubuntu 18.04 in my AWS account and used your Dockerfile to launch an ES docker container using the commands below:
docker build --tag=elasticsearch-custom .
docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom
And my docker container was also stopping just after starting up as shown below:
ubuntu@ip-172-31-32-95:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
03cde4a19389 elasticsearch-custom "/usr/local/bin/dock…" 33 seconds ago Exited (78) 6 seconds ago mystifying_napier
When checked the logs on console, when starting the docker, I found below error:
ERROR: [1] bootstrap checks failed [1]: the default discovery settings
are unsuitable for production use; at least one of
[discovery.seed_hosts, discovery.seed_providers,
cluster.initial_master_nodes] must be configured
This is a very well-known error and can easily be resolved just by adding -e "discovery.type=single-node" to the docker run command. After adding this to the docker run command as below:
docker run -e "discovery.type=single-node" -ti -v /usr/share/elasticsearch/data elasticsearch-custom
it's running fine, as shown below:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
191fc3dceb5a elasticsearch-custom "/usr/local/bin/dock…" 8 minutes ago Up 8 minutes 9200/tcp, 9300/tcp recursing_elgamal
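To reach Elasticsearch from outside the container you would also publish the HTTP port; a small sketch of that plus a health check (the -p mapping and the curl call are assumptions, not part of the original commands):
docker run -e "discovery.type=single-node" -p 9200:9200 -ti -v /usr/share/elasticsearch/data elasticsearch-custom
curl http://localhost:9200   # should return the node/cluster JSON banner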

Related

Docker-compose - Cannot connect to Postgres

It looks like a common issue: can't connect to Postgres from a Django app in Docker Compose.
I tried several solutions from the web, but I'm probably missing something I cannot see.
The error I got is:
django.db.utils.OperationalError: could not translate host name "db" to address: Try again
Where the "db" should be the name of the docker-compose service and which must setup in the .env.
My docker-compose.yml:
version: '3.3'
services:
  web:
    build: .
    container_name: drf_app
    volumes:
      - ./src:/drf
    links:
      - db:db
    ports:
      - 9090:8080
    env_file:
      - /.env
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5432:5432
volumes:
  postgres_data:
My .env:
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=mydb
SQL_USER=myuser
SQL_PASSWORD=mypass
SQL_HOST=db #this one should match the service name
SQL_PORT=5432
As far as I know, web and db should automatically see each other on the same network, but this doesn't happen.
Inspecting the IP addresses with ifconfig in each container: the Django app has 172.17.0.2 and the db has 172.19.0.2. They are not able to ping each other.
The result of docker ps command:
400879d47887 postgres:13-alpine "docker-entrypoint.s…" 38 minutes ago Up 38 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp backend_db_1
I really cannot figure out the issue, so am I missing something?
I'm writing this to save anyone in the future from the same issue.
After countless tries, I started thinking that nothing was wrong from the pure Docker perspective: I was right.
SOLUTION: My only remaining suspect was the execution inside a virtual machine, and indeed executing the same docker image on the host worked like a charm!
The networking issue was related to the VM (VirtualBox, Ubuntu 20.04).
I do not know if there is a way to make docker-compose work inside a VM, so any suggestion is appreciated.
You said in a comment:
The command I run is the following: docker run -it --entrypoint /bin/sh backend_web
Docker Compose creates several Docker resources, including a default network. If you separately docker run a container it doesn't see any of these resources. The docker run container is on the "default bridge network" and can't use present-day Docker networking capabilities, but the docker-compose container is on a separate "user-defined bridge network" named backend_default. That's why you're seeing a couple of the symptoms you do: the two networks have separate IPv4 CIDR ranges, and Docker's container DNS resolution only happens for the current network.
There's no reason to start a container with an interactive shell and then start your application within it (just as you wouldn't normally run python and then manually call main() from its REPL). Just start the entire application:
docker-compose up -d
If you do happen to need an interactive shell to debug your container setup or to run some manual tasks like database migrations, you can use docker-compose run for this. This honors most, but not all, of the settings in the docker-compose.yml file. In particular you can't docker-compose run an interactive shell and start your application from it, since it ignores ports:.
# Typical use: run database migrations
docker-compose run web \
./manage.py migrate
# For debugging: run an interactive shell
docker-compose run web bash
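Alternatively, if you do want a one-off docker run container that can resolve db, attach it to the Compose-created network explicitly; a sketch, assuming the network is named backend_default as described above:
docker run -it --network backend_default --entrypoint /bin/sh backend_web
# inside this shell, "db" resolves via Docker's DNS on the Compose network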

Not able to run Elasticsearch in docker on amazon Ec2 instance

I am trying to run Elasticsearch 7.7 in a Docker container on a t2.medium instance. I went through this SO question and the official ES docs on installing ES with Docker, but even after setting discovery.type: single-node it's not bypassing the bootstrap checks mentioned in several posts.
My elasticsearch.yml file
cluster.name: scanner
node.name: node-1
network.host: 0.0.0.0
discovery.type: single-node
cluster.initial_master_nodes: node-1 // tried explicitly giving this but no luck
xpack.security.enabled: true
My Dockerfile
FROM docker.elastic.co/elasticsearch/elasticsearch:7.7.0
COPY elasticsearch.yml /usr/share/elasticsearch/elasticsearch.yml
USER root
RUN chmod go-w /usr/share/elasticsearch/elasticsearch.yml
RUN chown root:elasticsearch /usr/share/elasticsearch/elasticsearch.yml
USER elasticsearch
And this is how I am building and running the image.
docker build -t es:latest .
docker run --ulimit nofile=65535:65535 -p 9200:9200 es:latest
And relevant error logs
75", "message": "bound or publishing to a non-loopback address,
enforcing bootstrap checks" } ERROR: 1 bootstrap checks failed 1:
the default discovery settings are unsuitable for production use; at
least one of [discovery.seed_hosts, discovery.seed_providers,
cluster.initial_master_nodes] must be configured ERROR: Elasticsearch
did not exit normally - check the logs at
/usr/share/elasticsearch/logs/docker-cluster.log
Elasticsearch in a single node
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    container_name: elasticsearch
    environment:
      - node.name=vibhuvi-node
      - discovery.type=single-node
      - cluster.name=vibhuvi-es-data-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - vibhuviesdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
volumes:
  vibhuviesdata:
    driver: local
Run
docker-compose up -d
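Once the container is up, a quick sanity check against the published port (a sketch; assumes the 9200:9200 mapping above):
curl http://localhost:9200                # should return the cluster name and version JSON
docker-compose logs -f elasticsearch      # watch startup and bootstrap-check output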

How to mount docker volume which is running in EC2 instance to EBS volume? What are the steps for that?

Currently an EC2 instance is running, and Docker containers are running inside it.
I have 2 containers running in the EC2 instance:
1 - Application container
2 - Database (CouchDB) container
I need to store the database data on EBS volumes.
I have a docker-compose file and I'm bringing up the containers using the 'docker-compose up' command.
version: '2.1'
services:
  web:
    build: .
    ports:
      - "4200:4200"
      - "7020:7020"
    depends_on:
      couchdb:
        condition: "service_healthy"
    volumes:
      - ./app:/usr/src/app/app
  couchdb:
    image: "couchdb:1.7.1"
    ports:
      - "5984:5984"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5984/"]
      interval: 10s
      timeout: 90s
      retries: 9
You need to bind-mount a directory on the EC2 instance into the CouchDB container.
Where to Store Data
Important note: There are several ways to store data used by
applications that run in Docker containers. We encourage users of the
couchdb images to familiarize themselves with the options available.
Create a data directory on a suitable volume on your host system, e.g.
/home/ec2-user/data. Start your couchdb container like this:
$ docker run --name mycouchdb -v /home/ec2-user/data:/opt/couchdb/data -d couchdb:tag
The -v /home/ec2-user/data:/opt/couchdb/data part of the command mounts the /home/ec2-user/data directory from the
underlying host system as /opt/couchdb/data inside the container,
where CouchDB by default will write its data files.
CouchDB Store Data
Update:
In the case of docker-compose
couchdb:
  image: "couchdb:1.7.1"
  volumes:
    - /home/ec2-user/data:/opt/couchdb/data
  ports:
    - "5984:5984"

Django Docker/Kubernetes Postgres data not appearing

I just tried switching from docker-compose to docker stacks/kubernetes. In compose I was able to specify where the postgres data volume was and the data persisted nicely.
volumes:
  - ./postgres-data:/var/lib/postgresql/data
I tried doing the same thing with the stack file. I can connect to the pod and use psql to see the schema, but none of the data entered from docker-compose is there.
Any ideas why this might be?
Here's the stack.yml
version: '3.3'
services:
  django:
    image: image
    build:
      context: .
      dockerfile: docker/Dockerfile
    deploy:
      replicas: 5
    environment:
      - DJANGO_SETTINGS_MODULE=config.settings.local
      - SECRET_KEY=password
      - NAME=postgres
      - USER=postgres
      - HOST=db
      - PASSWORD=password
      - PORT=5432
    volumes:
      - .:/application
    command: ["gunicorn", "--bind 0.0.0.0:8000", "config.wsgi"]
    ports:
      - "8000:8000"
    links:
      - db
  db:
    image: mdillon/postgis:9.6-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
You didn't mention how your cluster is provisioned, where it is running, etc., so I will assume we're talking about local tests here. If so, you probably have local docker/docker-compose and minikube installed.
If that is the case, keep in mind that minikube runs in its own VM, so it will not be affected by changes you make on your host with e.g. docker, as it has its own filesystem in the VM.
Hint: you can run docker against docker daemon of minikube if you first run eval $(minikube docker-env)
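For example (a sketch):
eval $(minikube docker-env)
docker ps   # now lists the containers running inside the minikube VM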
For Docker stacks, run the docker service inspect command; it should show the mount point of the Postgres container.
docker service inspect --format='{{range .Spec.TaskTemplate.ContainerSpec.Mounts}} {{.Source}}{{end}}' <StackName>
Fixed in the last Docker Edge update.

Docker for AWS and Selenium Grid - Connection refused / No route to host (Host unreachable)

What I am trying to achieve is a scalable and on-demand test infrastructure using Selenium Grid.
I can get everything up and running, but the nodes cannot reach the hub and I end up with "Connection refused" / "No route to host (Host unreachable)" errors.
Here are all the pieces:
Docker for AWS (CloudFormation Stack)
docker-selenium
Docker compose file (below)
The "implied" software used are:
Docker swarm
Stacks
Here is what I can accomplish:
Create, log into, and ping all hosts & nodes within the stack, following the guidelines here: deploy Docker for AWS
Deploy using the compose file at the end of this question by running:
docker stack deploy -c docker-compose.yml grid
View Selenium Grid console using the public facing DNS name automatically provided by AWS (upon successful creation of the stack). Here is a helpful entry on the subject: Docker Swarm Mode.
Here are the contents of the compose file I am using:
version: '3'
services:
  hub:
    image: selenium/hub:3.4.0-chromium
    ports:
      - 4444:4444
    networks:
      - selenium
    environment:
      - JAVA_OPTS=-Xmx1024m
    deploy:
      update_config:
        parallelism: 1
        delay: 10s
      placement:
        constraints: [node.role == manager]
  chrome:
    image: selenium/node-chrome:3.4.0-chromium
    networks:
      - selenium
    depends_on:
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    deploy:
      placement:
        constraints: [node.role == worker]
  firefox:
    image: selenium/node-firefox:3.4.0-chromium
    networks:
      - selenium
    depends_on:
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    deploy:
      placement:
        constraints: [node.role == worker]
networks:
  selenium:
Any guidance on this issue will be greatly appreciated. Thank you.
I have also tried opening up ports across the swarm:
swarm-exec docker service update --publish-add 5555:5555 grid
A quick Google brought up https://github.com/SeleniumHQ/docker-selenium/issues/255. You need to add the following to the Chrome and Firefox nodes:
entrypoint: bash -c 'SE_OPTS="-host $$HOSTNAME" /opt/bin/entry_point.sh'
This is because the containers have two IP addresses in Swarm Mode and the nodes are picking up the wrong address and advertising that to the hub. This change will have the nodes advertise their hostname so the hub can find the nodes by DNS instead.
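Applied to the compose file above, the chrome service would look roughly like this (the firefox node gets the same entrypoint line); a sketch based on the linked issue:
chrome:
  image: selenium/node-chrome:3.4.0-chromium
  entrypoint: bash -c 'SE_OPTS="-host $$HOSTNAME" /opt/bin/entry_point.sh'
  networks:
    - selenium
  depends_on:
    - hub
  environment:
    - HUB_PORT_4444_TCP_ADDR=hub
    - HUB_PORT_4444_TCP_PORT=4444
  deploy:
    placement:
      constraints: [node.role == worker]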