LocalStack SNS subscription over HTTP cannot connect to API endpoint

I am having a problem trying to mock SNS for my application. I want to use SNS and subscribe my API endpoint to a topic. The error I am getting is:
Could not connect to the endpoint URL: "http://localhost:6010/webhook"
Here is the docker-compose file I am using:
version: "3.8"
services:
localstack:
container_name: localstack
image: localstack/localstack
ports:
- "127.0.0.1:4566:4566"
environment:
- LEGACY_INIT_DIR=1
- SERVICES=sqs,sns
- AWS_DEFAULT_REGION=eu-west-1
- AWS_ACCESS_KEY_ID=testUser
- AWS_SECRET_ACCESS_KEY=testAccessKey
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- "${TMPDIR:-/var/lib/localstack}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
- "./localstack-scripts:/docker-entrypoint-initaws.d/"
networks:
- backend
my-api:
container_name: my-api
image: api-image
command: ["Proj.Api.Host.dll", "Proj.Api.Host"]
build:
context: ./../../
dockerfile: api/Dockerfile
env_file:
- ./../../docker/.env_default
environment:
ASPNETCORE_ENVIRONMENT: "Development"
ports:
- "6010:80"
networks:
- backend
restart: unless-stopped
networks:
backend:
driver: bridge
And here is the script that runs from the initaws folder:
awslocal sns create-topic --name test-topic --region=eu-west-1
awslocal sns subscribe --topic-arn "arn:aws:sns:eu-west-1:000000000000:test-topic" --region=eu-west-1 --protocol http --endpoint-url=http://localhost:6010/webhook
Can someone help me figure out what the problem is here?

I have resolved this issue using host.docker.internal: from inside the LocalStack container, localhost refers to LocalStack itself rather than to the host machine, so the subscription endpoint has to point at host.docker.internal instead. Hope that helps someone else with this problem. Cheers.
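For reference, a sketch of what the corrected init script can look like. One thing worth noting: the AWS CLI's --endpoint-url option sets the address of the AWS API itself (here, LocalStack), while the subscriber's address belongs in --notification-endpoint, so the original command was pointing the CLI at the webhook. This sketch assumes the api container's port 80 is published as 6010 on the host, as in the compose file above:

# awslocal already targets LocalStack, so no --endpoint-url is needed;
# the webhook address goes in --notification-endpoint, which is reachable
# from inside the LocalStack container via host.docker.internal.
awslocal sns create-topic --name test-topic --region eu-west-1
awslocal sns subscribe \
  --topic-arn "arn:aws:sns:eu-west-1:000000000000:test-topic" \
  --region eu-west-1 \
  --protocol http \
  --notification-endpoint http://host.docker.internal:6010/webhook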

Related

AWS ElasticBeanstalk failed to deploy Django/Postgres app

I'm having a hard time deploying my app, built with Django, Postgres, DjangoQ, Redis and ES, on AWS Elastic Beanstalk using docker-compose.yml.
I used the EB CLI (eb init, eb create); it reports that the environment launched successfully, but I still have the following problems.
On the EC2 instance, the postgres, django-q and es containers described in the docker-compose file below are never created; only the django, redis and nginx containers are found there.
The environment variables that I specified in the docker-compose.yml file aren't being passed to the django container on EC2, so I can't run Django there.
I'm pretty lost and not sure where to even start fixing these problems. Any insight will be very much appreciated.
version: '3'
services:
  django:
    build:
      context: .
      dockerfile: docker/Dockerfile
    command: gunicorn --bind 0.0.0.0:5000 etherscan_project.wsgi:application
    env_file: .env
    volumes:
      - $PWD:/srv/app/:delegated
    depends_on:
      - redis
      - db
      - es
  django-q:
    build:
      context: .
      dockerfile: docker/Dockerfile
    command: >
      sh -c "python manage.py makemigrations &&
      python manage.py migrate &&
      python manage.py qcluster"
    env_file: .env
    volumes:
      - $PWD:/srv/app/:delegated
    depends_on:
      - redis
      - db
      - django
      - es
  db:
    image: postgres:latest
    expose:
      - 5432
    env_file: .env
    volumes:
      - ./docker/volumes/postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $POSTGRES_DB"]
      interval: 10s
      timeout: 5s
      retries: 5
  redis:
    image: redis:latest
    expose:
      - 6379
    ports:
      - 6379:6379
    volumes:
      - ./docker/volumes/redis:/data
  nginx:
    container_name: nginx
    image: nginx:1.13
    ports:
      - 80:80
    depends_on:
      - db
      - django
      - redis
    volumes:
      - ./docker/nginx:/etc/nginx/conf.d
      - $PWD:/srv/app/:delegated
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
    ports:
      - 9200:9200
      - 9300:9300
    expose:
      - 9200
      - 9300
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./docker/volumes/es:/usr/share/elasticsearch/data
volumes:
  app-files:
    driver_opts:
      type: nfs
      device: $PWD
      o: bind
Can you confirm that your environment variables are being used correctly? A common mistake with EB and docker-compose is assuming that your .env file works the same way in EB as it does in docker-compose, when it does not. I have made that mistake before. Check out the docs: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html#docker-env-cfg.env-variables
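For illustration, and going by those docs as I read them: in a docker-compose deployment, environment properties set in the EB console end up in a generated .env file in the application root, which each service must reference explicitly. A sketch mirroring the django service above (hypothetical, to help rule the .env handling in or out):

# Sketch: reference the EB-generated .env so console-configured
# environment properties actually reach the container.
django:
  build:
    context: .
    dockerfile: docker/Dockerfile
  env_file: .env   # locally: your own .env; on EB: the file EB generates

If the variables still don't arrive, declaring the critical ones explicitly under an environment: key removes any dependence on the .env file.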

VueJS + Django Rest Framework in dockers

I have a VueJS frontend and a Django Rest Framework backend which are independent (Django does not serve my VueJS app).
Locally they work well together, but after deploying them to the server with docker-compose they no longer communicate. I can see my frontend, but the axios requests time out.
Here is my docker-compose:
version: '3'
networks:
  intern:
    external: false
  extern:
    external: true
services:
  backend:
    image: #from_registry
    container_name: Backend
    env_file:
      - ../.env
    depends_on:
      - db
    networks:
      - intern
    volumes:
      - statics:/app/static_assets/
      - medias:/app/media/
    expose:
      - "8000"
  db:
    image: "postgres:latest"
    container_name: Db
    environment:
      POSTGRES_PASSWORD: ****
    networks:
      - intern
    volumes:
      - pgdb:/var/lib/postgresql/data
  frontend:
    image: from_registry
    container_name: Frontend
    volumes:
      - statics:/home/app/web/staticfiles
      - medias:/home/app/web/mediafiles
    env_file:
      - ../.env.local
    depends_on:
      - backend
    networks:
      - intern
      - extern
    labels:
      - traefik.http.routers.site.rule=Host(`dev.x-fantasy.com`)
      - traefik.http.routers.site.tls=true
      - traefik.http.routers.site.tls.certresolver=lets-encrypt
      - traefik.port=80
volumes:
  pgdb:
  statics:
  medias:
In my axios configuration I set:
baseURL="http://backend:8000"
My frontend tries to access this URL but gets a timeout error.
In the console I have an error
xhr.js:177 POST https://backend:8000/api/v1/token/login net::ERR_TIMED_OUT
It seems that https is used in place of http. Could the problem come from there?
Any idea how to make them communicate?
Thanks

How to use Docker Volumes?

I am completely new to Docker. I am currently using an open-source tool that ships as a Docker application, and I need to make some changes to the existing application and have them reflected when it runs. I did a lot of searching and found that this can be done with the help of Docker volumes, but I was unable to follow any of the articles on the web or the documentation. Any suggestions will be appreciated.
docker-compose.yml:
version: "3.3"
services:
cvat_db:
container_name: cvat_db
image: postgres:10-alpine
networks:
default:
aliases:
- db
restart: always
environment:
POSTGRES_USER: root
POSTGRES_DB: cvat
POSTGRES_HOST_AUTH_METHOD: trust
volumes:
- cvat_db:/var/lib/postgresql/data
cvat_redis:
container_name: cvat_redis
image: redis:4.0-alpine
networks:
default:
aliases:
- redis
restart: always
cvat:
container_name: cvat
image: cvat/server
restart: always
depends_on:
- cvat_redis
- cvat_db
build:
context: .
args:
http_proxy:
https_proxy:
no_proxy: nuclio,${no_proxy}
socks_proxy:
USER: "django"
DJANGO_CONFIGURATION: "production"
TZ: "Etc/UTC"
CLAM_AV: "no"
environment:
DJANGO_MODWSGI_EXTRA_ARGS: ""
ALLOWED_HOSTS: '*'
CVAT_REDIS_HOST: "cvat_redis"
CVAT_POSTGRES_HOST: "cvat_db"
volumes:
- cvat_data:/home/django/data
- cvat_keys:/home/django/keys
- cvat_logs:/home/django/logs
- cvat_models:/home/django/models
cvat_ui:
container_name: cvat_ui
image: cvat/ui
restart: always
build:
context: .
args:
http_proxy:
https_proxy:
no_proxy:
socks_proxy:
dockerfile: Dockerfile.ui
networks:
default:
aliases:
- ui
depends_on:
- cvat
cvat_proxy:
container_name: cvat_proxy
image: nginx:stable-alpine
restart: always
depends_on:
- cvat
- cvat_ui
environment:
CVAT_HOST: localhost
ports:
- "8080:80"
volumes:
- ./cvat_proxy/nginx.conf:/etc/nginx/nginx.conf:ro
- ./cvat_proxy/conf.d/cvat.conf.template:/etc/nginx/conf.d/cvat.conf.template:ro
command: /bin/sh -c "envsubst '$$CVAT_HOST' < /etc/nginx/conf.d/cvat.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
networks:
default:
ipam:
driver: default
config:
- subnet: 172.28.0.0/24
volumes:
cvat_db:
cvat_data:
cvat_keys:
cvat_logs:
cvat_models:
Docker volumes are mostly used as a way to persist data outside of your container: if you mount a volume and store data in it, the data will not be erased when the container is destroyed. To mount a volume, add -v <directory on your machine>:<directory in your container> to your docker run command. That may cover part of your requirement, but it most likely won't be enough.
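For example, a minimal sketch (the host path and container path are placeholders):

# Bind-mount a host directory into the container: edits made on the host
# show up inside the container, and the data outlives the container.
docker run -v $(pwd)/my-config:/app/config cvat/server

In a docker-compose.yml the same mount is a line under the service's volumes: key, e.g. - ./my-config:/app/config.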
If your assignment requires you to change, for instance, the behaviour of the application, then you have to rebuild the Docker image and use the rebuilt image in your docker-compose.yml.
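As a concrete sketch against the compose file above: after editing the source in the build context, rebuild just the cvat service's image and recreate its container:

# Rebuild the cvat image from the local build context, then restart the service.
docker-compose build cvat
docker-compose up -d cvat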

Undefined volume when deploying docker container to ECS

I'm following this guide and currently trying to run my compose app using docker ecs compose up, but I'm getting this error:
% docker ecs compose up
service "feature-test-web" refers to undefined volume : invalid compose project
Here's my docker-compose.yml:
version: '3.7'
x-web:
  &web
  build: ./web
  volumes:
    - ./web:/app
    - /app/node_modules
x-api:
  &api
  build: ./api
  volumes:
    - ./api:/app
  env_file:
    - ./api/.env
  depends_on:
    - postgres
    - redis
  links:
    - mailcatcher
services:
  web:
    <<: *web
    environment:
      - API_HOST=http://localhost:3000
    ports:
      - "1234:1234"
    depends_on:
      - api
  api:
    <<: *api
    ports:
      - "3000:3000"
    stdin_open: true
    tty: true
  postgres:
    image: postgres:11.2-alpine
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=portal
      - POSTGRES_PASS=portal
    ports:
      - 8000:5432
    restart: on-failure
    healthcheck:
      test: "exit 0"
  redis:
    image: redis:5.0.4-alpine
    ports:
      - '6379:6379'
  sidekiq:
    build: ./api
    env_file:
      - ./api/.env
    depends_on:
      - postgres
      - redis
  mailcatcher:
    image: schickling/mailcatcher
    ports:
      - '1080:1080'
  feature-test-api:
    <<: *api
    depends_on:
      - selenium
      - feature-test-web
  feature-test-web:
    <<: *web
    environment:
      - API_HOST=http://feature-test-api:3210
  selenium:
    image: selenium/standalone-chrome-debug:3.141.59-neon
    volumes:
      - /dev/shm:/dev/shm
    ports:
      - 5901:5900
What am I missing? Running docker compose up by itself works, and I'm able to go to localhost:1234 to see the app running. I'm trying to deploy this to AWS, but it's been very difficult; if I'm doing this wrong, any pointers to the right way would be much appreciated as well.
As mentioned in the comments, the volume mounts won't work on ECS, because the cluster won't have a local copy of your code.
So as a first step, remove the entire volumes section.
Second, you'll need to build a Docker image from your code, push it to a Docker registry of your liking, and then reference it in your docker-compose as follows:
x-api:
  &api
  image: 12345.abcd.your-region.amazonaws.com/your-docker-repository
  env_file:
    - ./api/.env
  depends_on:
    - postgres
    - redis
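For the build-and-push step, a sketch assuming an ECR repository (the account ID, region and repository name are placeholders, matching the snippet above):

# Authenticate Docker to ECR, then build, tag and push the API image.
aws ecr get-login-password --region your-region | docker login --username AWS --password-stdin 12345.dkr.ecr.your-region.amazonaws.com
docker build -t your-docker-repository ./api
docker tag your-docker-repository:latest 12345.dkr.ecr.your-region.amazonaws.com/your-docker-repository:latest
docker push 12345.dkr.ecr.your-region.amazonaws.com/your-docker-repository:latest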
My answer here could give you more insight into how I deploy to ECS.

Traefik2 route multiple services to multiple subpaths in flask

I have an issue with multiple services served by Flask.
If I comment out service app1 or app2, everything works great.
But when I run the two services simultaneously and curl localhost/app1, I get a 404 error.
curl localhost/app2 works fine.
Here's the docker-compose.yml.
version: "2.4"
services:
traefik:
image: "traefik:v2.1.3"
container_name: "traefik"
command:
#- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
ports:
- "80:80"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
app1:
image: pytorch:1.4.0
build:
context: .
target: prod
dockerfile: ./app1/Dockerfile
labels:
- traefik.enable=true
- "traefik.http.routers.app1.rule=Path(`/app1`)"
- "traefik.http.routers.app1.entrypoints=web"
app2:
image: pytorch:1.4.0
build:
context: .
target: prod
dockerfile: ./app2/Dockerfile
labels:
- traefik.enable=true
- "traefik.http.routers.app2.rule=Path(`/app2`)"
- "traefik.http.routers.app2.entrypoints=web"