So I ran sudo docker-compose up with the following .yaml file:
version: "3"
services:
localstack:
image: localstack/localstack:latest
ports:
- "4563-4599:4563-4599"
- "8080:8080"
environment:
- DOCKER_HOST=unix:///var/run/docker.sock
- SERVICES=s3,es,s3,ssm
- DEFAULT_REGION=us-east-1
- DATA_DIR=.localstack
- AWS_ENDPOINT=http://localstack:4566
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /tmp/localstack:/tmp/localstack
networks:
- my_localstack_network
networks:
my_localstack_network:
Then I created an ES domain:
aws es create-elasticsearch-domain --domain-name MyEsDomain --endpoint-url=http://localhost:4566
and got the following output:
{
    "DomainStatus": {
        "DomainId": "000000000000/MyEsDomain",
        "DomainName": "MyEsDomain",
        "ARN": "arn:aws:es:us-east-1:000000000000:domain/MyEsDomain",
        "Created": true,
        "Deleted": false,
        "Endpoint": "MyEsDomain.us-east-1.es.localhost.localstack.cloud:4566",
        "Processing": true,
        "UpgradeProcessing": false,
        "ElasticsearchVersion": "7.10",
        "ElasticsearchClusterConfig": {
            "InstanceType": "m3.medium.elasticsearch",
            "InstanceCount": 1,
            "DedicatedMasterEnabled": true,
            "ZoneAwarenessEnabled": false,
            "DedicatedMasterType": "m3.medium.elasticsearch",
            "DedicatedMasterCount": 1,
            "WarmEnabled": false
        },
...
When I try to hit the ES server through port 4571, I get an "empty reply":
curl localhost:4571
curl: (52) Empty reply from server
I also tried port 4566 and got back {"status": "running"}.
It looks like Elasticsearch never started on my machine.
LocalStack versions from 0.14.0 onward removed port 4571; see https://github.com/localstack/localstack/releases/tag/v0.14.0
Try using the localstack/localstack-full image.
localstack/localstack is the light version, which does not include Elasticsearch.
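With recent LocalStack, traffic goes through the edge port (4566) instead of a dedicated Elasticsearch port. As a hedged check, assuming the domain endpoint returned above resolves (the localhost.localstack.cloud names normally do), you could query the cluster directly:
# _cluster/health is a standard Elasticsearch API endpoint.
curl http://MyEsDomain.us-east-1.es.localhost.localstack.cloud:4566/_cluster/health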
I am configuring Kubernetes on AWS EC2.
I use Elastic's Packetbeat to get the geolocation of clients accessing the service.
Istio is used as the service mesh for Kubernetes, and a CLB is used as the load balancer.
I want to know the client IP accessing the service and the domain address the client accesses.
My packetbeat.yml:
setup.dashboards.enabled: true
setup.template.enabled: true
setup.template.settings:
  index.number_of_shards: 2
packetbeat.interfaces.device: eth0
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.auto_promiscuous_mode: true
packetbeat.interfaces.with_vlans: true
packetbeat.protocols:
  - type: dhcpv4
    ports: [67, 68]
  - type: dns
    ports: [53]
    include_authorities: true
    include_additionals: true
  - type: http
    ports: [80, 5601, 8081, 8002, 5000, 8000, 8080, 9200]
    send_request: true
    send_response: true
    send_headers: ["User-Agent", "Cookie", "Set-Cookie"]
    real_ip_header: "X-Forwarded-For"
  - type: mysql
    ports: [3306, 3307]
  - type: memcache
    ports: [11211]
  - type: redis
    ports: [6379]
  - type: pgsql
    ports: [5432]
  - type: thrift
    ports: [9090]
  - type: mongodb
    ports: [27017]
  - type: cassandra
    ports: [9042]
  - type: tls
    ports: [443, 993, 995, 5223, 8443, 8883, 9243, 15021, 15443, 32440]
    send_request: true
    send_response: true
    send_all_headers: true
    include_body_for: ["text/html", "application/json"]
packetbeat.procs.enabled: true
packetbeat.flows:
  timeout: 30s
  period: 10s
  fields: ["server.domain"]
processors:
  - include_fields:
      fields:
        - source.ip
        - server.domain
  - add_docker_metadata:
  - add_host_metadata:
  - add_cloud_metadata:
  - add_kubernetes_metadata:
      host: ${HOSTNAME}
      indexers:
        - ip_port:
      matchers:
        - field_format:
            format: '%{[ip]}:%{[port]}'
            # with version 7 of Packetbeat use the following line instead of the one above.
            #format: '%{[destination.ip]}:%{[destination.port]}'
output.elasticsearch:
  hosts: ${ELASTICSEARCH_ADDRESS}
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
  pipeline: geoip-info
setup.kibana:
  host: 'https://myhost:443'
My CLB listener has the proxy protocol enabled.
But Packetbeat doesn't give me the data I want.
Search result for a TLS log:
"client": {
"port": 1196,
"ip": "10.0.0.83"
},
"network": {
"community_id": "1:+ARNMwsOGxkBkrmWfCVawtA1GKo=",
"protocol": "tls",
"transport": "tcp",
"type": "ipv4",
"direction": "egress"
},
"destination": {
"port": 8443,
"ip": "10.0.1.77",
"domain": "my host domain"
},
Search result for flow.final: true:
"event": {
"duration": 1051434189423,
"kind": "event",
"start": "2022-10-28T05:25:14.171Z",
"action": "network_flow",
"end": "2022-10-28T05:42:45.605Z",
"category": [
"network_traffic",
"network"
],
"type": [
"connection"
],
"dataset": "flow"
},
"source": {
"geo": {
"continent_name": "Asia",
"region_iso_code": "KR",
"city_name": "Has-si",
"country_iso_code": "KR",
"country_name": "South Korea",
"region_name": "Gg",
"location": {
"lon": 126.8168,
"lat": 37.2072
}
},
"port": 50305,
"bytes": 24174,
"ip": "my real ip address",
"packets": 166
},
I can find each piece if I search separately, but the two records have no point of contact.
I would like to see a log that combines the two above: the domain the client accesses plus the real client IP.
Please help me.
I've spent a lot of time trying to set up AWS Secrets Manager with the JDBC Kafka connector in Docker Compose locally, but I didn't succeed.
I've tested the other parts separately and they are OK; only AWS Secrets Manager fails. In this context three files are used (init-secrets-manager.sh | test-jdbc-source-connector.json | docker-compose.yml).
The secret is created successfully:
aws-localstack | {
aws-localstack | "ARN": "arn:aws:secretsmanager:us-east-1:000000000000:secret:bd-test-FRhcf",
aws-localstack | "Name": "bd-test",
aws-localstack | "VersionId": "5cb8132b-cf25-44d0-ae81-bd5570ef01aa"
aws-localstack | }
I'm getting this error when setting up the connector (from connect-distributed.log of the Landoop image):
Caused by: com.amazonaws.services.secretsmanager.model.AWSSecretsManagerException: The security token included in the request is invalid. (Service: AWSSecretsManager; Status Code: 400; Error Code: UnrecognizedClientException; Request ID: 16b70666-0536-49d6-aa32-1d5bf21b615d)
init-secrets-manager.sh
aws --endpoint http://localhost:4566 secretsmanager create-secret --name bd-test --description "Some secret" --secret-string '{"user":"adm","password":"adm","decryptionkey":"123"}' --tags '[{"Key":"user", "Value":"adm"},{"Key":"password","Value":"adm"},{"Key":"decryptionkey","Value":"123"}]' --region us-east-1
test-jdbc-source-connector.json
{
    "name": "test-jdbc-source-connector",
    "config": {
        "name": "test-jdbc-source-connector",
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "tasks.max": "1",
        "poll.interval.ms": "10000",
        "timestamp.delay.interval.ms": "80000",
        "batch.max.rows": "1000",
        "topic.prefix": "customer_topic_test",
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.use.latest.version": "true",
        "value.converter.enhanced.avro.schema.support": "true",
        "value.converter.schema.registry.url": "http://localhost:8081",
        "value.converter.auto.register.schemas": "true",
        "value.converter.schemas.enable": "true",
        "mode": "incrementing",
        "incrementing.column.name": "id",
        "query": "SELECT ID, convert_from(decrypt(NAME, '${aws:bd-test:decryptionkey}'::bytea,'aes'),'UTF-8') as NAME FROM CUSTOMER",
        "schema.pattern": "adm",
        "connection.url": "jdbc:postgresql://postgres:5432/customer?currentSchema=adm",
        "connection.user": "${aws:bd-test:user}",
        "connection.password": "${aws:bd-test:password}",
        "connection.attempts": "5",
        "transforms": "createKey",
        "transforms.createKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
        "transforms.createKey.fields": "id"
    }
}
docker-compose.yml
version: '3.3'
services:
  aws-localstack:
    container_name: aws-localstack
    hostname: aws-localstack
    image: localstack/localstack
    ports:
      - 4566:4566
      - 8080:8080
    environment:
      - SERVICES=secretsmanager
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
      - DOCKER_HOST=unix:///var/run/docker.sock
      - DEFAULT_REGION=us-east-1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - LS_LOG=trace
    volumes:
      - ./localstack:/tmp/localstack
      - /var/run/docker.sock:/var/run/docker.sock
      - ./localstack/init-secrets-manager.sh:/docker-entrypoint-initaws.d/init-secrets-manager.sh
    networks:
      - network_test
  postgresql:
    image: postgres:11.5-alpine
    container_name: postgres
    hostname: postgres
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: adm
      POSTGRES_PASSWORD: adm
      POSTGRES_DB: customer
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - test_postgresql_data:/var/lib/postgresql/data
      - ./postgresql/init.sh:/docker-entrypoint-initdb.d/init.sh
    networks:
      - network_test
  kafka:
    container_name: kafka
    hostname: kafka
    image: landoop/fast-data-dev
    ports:
      - 9092:9092
      - 8081:8081
      - 8082:8082
      - 8083:8083
      - 2181:2181
      - 3030:3030
      - 9581:9581
      - 9582:9582
      - 9583:9583
      - 9584:9584
    environment:
      - CONNECT_CONFIG_PROVIDERS=aws
      - CONNECT_CONFIG_PROVIDERS_AWS_CLASS=io.lenses.connect.secrets.providers.AWSSecretProvider
      - CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_ACCESS_KEY=test
      - CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_AUTH_METHOD=credentials
      - CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_REGION=us-east-1
      - CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_SECRET_KEY=test
      - CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
    volumes:
      - ./localstack:/tmp/localstack
    networks:
      - network_test
volumes:
  test_postgresql_data:
networks:
  network_test:
    driver: bridge
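For debugging, a hedged sanity check run from any container attached to network_test (the hostname aws-localstack and the "test" credentials come from the compose file above):
# get-secret-value is the standard Secrets Manager read call; if this works
# but the connector still fails with UnrecognizedClientException, the provider
# is likely not pointed at the LocalStack endpoint and is hitting real AWS.
AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test \
aws --endpoint-url http://aws-localstack:4566 --region us-east-1 \
  secretsmanager get-secret-value --secret-id bd-test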
While deploying a .NET app as Docker with the Multi-container option in Elastic Beanstalk, I'm getting an error like:
2021-05-20 01:26:55 ERROR ECS task stopped due to: Task failed to start. (traveltouchapi: CannotPullContainerError: Error response from daemon: pull access denied for traveltouchapi, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
postgres_image: )
2021-05-20 01:26:58 ERROR Failed to start ECS task after retrying 2 times.
2021-05-20 01:27:00 ERROR [Instance: i-0844a50e307bd8b23] Command failed on instance. Return code: 1 Output: .
Environment details for: TravelTouchApi-dev3
Application name: TravelTouchApi
Region: ap-south-1
Deployed Version: app-c1ba-210520_065320
Environment ID: e-i9t6f6vszk
Platform: arn:aws:elasticbeanstalk:ap-south-1::platform/Multi-container Docker running on 64bit Amazon Linux/2.26.0
Tier: WebServer-Standard-1.0
CNAME: TravelTouchApi-dev3.ap-south-1.elasticbeanstalk.com
Updated: 2021-05-20 01:23:27.384000+00:00
My Dockerfile is
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
# Install Node.js
RUN curl -fsSL https://deb.nodesource.com/setup_14.x | bash - \
    && apt-get install -y \
        nodejs \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /src/TravelTouchApi
COPY ["TravelTouchApi.csproj", "./"]
RUN dotnet restore "TravelTouchApi.csproj"
COPY . .
WORKDIR "/src/TravelTouchApi"
RUN dotnet build "TravelTouchApi.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "TravelTouchApi.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "TravelTouchApi.dll"]
My docker-compose.yml is
version: '3.4'
networks:
  traveltouchapi-dev:
    driver: bridge
services:
  traveltouchapi:
    image: traveltouchapi:latest
    depends_on:
      - "postgres_image"
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80:80"
    environment:
      DB_CONNECTION_STRING: "host=postgres_image;port=5432;database=blogdb;username=bloguser;password=bloguser"
    networks:
      - traveltouchapi-dev
  postgres_image:
    image: postgres:latest
    ports:
      - "5432"
    restart: always
    volumes:
      - db_volume:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: "bloguser"
      POSTGRES_PASSWORD: "bloguser"
      POSTGRES_DB: "blogdb"
    networks:
      - traveltouchapi-dev
volumes:
  db_volume:
My Dockerrun.aws.json
{
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        {
            "environment": [
                {
                    "name": "POSTGRES_USER",
                    "value": "bloguser"
                },
                {
                    "name": "POSTGRES_PASSWORD",
                    "value": "bloguser"
                },
                {
                    "name": "POSTGRES_DB",
                    "value": "blogdb"
                }
            ],
            "essential": true,
            "image": "postgres:latest",
            "memory": 200,
            "mountPoints": [
                {
                    "containerPath": "/var/lib/postgresql/data",
                    "sourceVolume": "Db_Volume"
                }
            ],
            "name": "postgres_image",
            "portMappings": [
                {
                    "containerPort": 5432
                }
            ]
        },
        {
            "environment": [
                {
                    "name": "DB_CONNECTION_STRING",
                    "value": "host=postgres_image;port=5432;database=blogdb;username=bloguser;password=bloguser"
                }
            ],
            "essential": true,
            "image": "traveltouchapi:latest",
            "name": "traveltouchapi",
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80
                }
            ],
            "memory": 200
        }
    ],
    "family": "",
    "volumes": [
        {
            "host": {
                "sourcePath": "db_volume"
            },
            "name": "Db_Volume"
        }
    ]
}
I think you are missing the login step before deploying the application.
Can you try this command before deploying?
aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_DEFAULT_ACCID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
Also, the image name in Dockerrun.aws.json must contain the full repo/tag name, e.g. 'natheesh/traveltouchapi:latest'.
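As a hedged sketch of that flow (assuming an ECR repository named traveltouchapi already exists in your account; $AWS_DEFAULT_ACCID and $AWS_DEFAULT_REGION are the same placeholders as in the login command above):
# Tag the locally built image with the full registry URI, then push it.
docker tag traveltouchapi:latest $AWS_DEFAULT_ACCID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/traveltouchapi:latest
docker push $AWS_DEFAULT_ACCID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/traveltouchapi:latest
The "image" field in Dockerrun.aws.json then references that same URI instead of the bare traveltouchapi:latest.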
I have a multi-container app which I want to deploy on ElasticBeanstalk. Below are my files.
Dockerfile
FROM python:2.7
WORKDIR /app
ADD . /app
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y \
        apt-utils \
        git \
        python \
        python-dev \
        libpcre3 \
        libpcre3-dev \
        python-setuptools \
        python-pip \
        nginx \
        supervisor \
        default-libmysqlclient-dev \
        python-psycopg2 \
        libpq-dev \
        sqlite3 && \
    pip install -U pip setuptools && \
    rm -rf /var/lib/apt/lists/*
RUN pip install -r requirements.txt
EXPOSE 8000
RUN chmod +x entry_point.sh
docker-compose.yml
version: "2"
services:
db:
restart: always
container_name: docker_test-db
image: postgres:9.6
expose:
- "5432"
mem_limit: 10m
environment:
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=docker_test
redis:
restart: always
image: redis:3.0
expose:
- "6379"
mem_limit: 10m
web:
# replace username/repo:tag with your name and image details
restart: always
build: .
image: docker_test
container_name: docker_test-container
ports:
- "8000:8000"
environment:
- DATABASE=db
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=docker_test
mem_limit: 500m
depends_on:
- db
- redis
entrypoint: ./entry_point.sh
command: gunicorn docker_test.wsgi:application -w 2 -b :8000 --timeout 120 --graceful-timeout 120 --worker-class gevent
celery:
image: docker_test
container_name: docker_test-celery
command: celery -A docker_test worker -l info
links:
- db
- redis
mem_limit: 10m
depends_on:
- web
cbeat:
image: docker_test
container_name: docker_test-cbeat
command: celery beat --loglevel=info
links:
- db
- redis
mem_limit: 10m
depends_on:
- web
It works fine when I run it on my local system, but when I upload it to Elastic Beanstalk, it gives me the following error:
ECS task stopped due to: Essential container in task exited. (celery:
db: cbeat: web: CannotPullContainerError: API error (404): pull access
denied for docker_test, repository does not exist or may require
'docker login' redis: )
I transformed docker-compose.yml to Dockerrun.aws.json using container-transform. For the above file, my Dockerrun.aws.json is the following:
{
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        {
            "command": [
                "celery",
                "beat",
                "--loglevel=info"
            ],
            "essential": true,
            "image": "docker_test",
            "links": [
                "db",
                "redis"
            ],
            "memory": 10,
            "name": "cbeat"
        },
        {
            "command": [
                "celery",
                "-A",
                "docker_test",
                "worker",
                "-l",
                "info"
            ],
            "essential": true,
            "image": "docker_test",
            "links": [
                "db",
                "redis"
            ],
            "memory": 10,
            "name": "celery"
        },
        {
            "environment": [
                {
                    "name": "POSTGRES_NAME",
                    "value": "postgres"
                },
                {
                    "name": "POSTGRES_USER",
                    "value": "postgres"
                },
                {
                    "name": "POSTGRES_PASSWORD",
                    "value": "postgres"
                },
                {
                    "name": "POSTGRES_DB",
                    "value": "docker_test"
                }
            ],
            "essential": true,
            "image": "postgres:9.6",
            "memory": 10,
            "name": "db"
        },
        {
            "essential": true,
            "image": "redis:3.0",
            "memory": 10,
            "name": "redis"
        },
        {
            "command": [
                "gunicorn",
                "docker_test.wsgi:application",
                "-w",
                "2",
                "-b",
                ":8000",
                "--timeout",
                "120",
                "--graceful-timeout",
                "120",
                "--worker-class",
                "gevent"
            ],
            "entryPoint": [
                "./entry_point.sh"
            ],
            "environment": [
                {
                    "name": "DATABASE",
                    "value": "db"
                },
                {
                    "name": "POSTGRES_NAME",
                    "value": "postgres"
                },
                {
                    "name": "POSTGRES_USER",
                    "value": "postgres"
                },
                {
                    "name": "POSTGRES_PASSWORD",
                    "value": "postgres"
                },
                {
                    "name": "POSTGRES_DB",
                    "value": "docker_test"
                }
            ],
            "essential": true,
            "image": "docker_test",
            "memory": 500,
            "name": "web",
            "portMappings": [
                {
                    "containerPort": 8000,
                    "hostPort": 8000
                }
            ]
        }
    ],
    "family": "",
    "volumes": []
}
How can I resolve this problem?
Please push the image "docker_test" to either Docker Hub or ECR so Beanstalk can pull it from there. Currently it exists only on your local machine, and the ECS agent doesn't know about it.
1. Tag and push the docker_test image to a registry like Docker Hub or ECR.
2. Update the image repo URL in Dockerrun.aws.json.
3. Allow Beanstalk to pull the image.
A minimal sketch of steps 1 and 2 follows.
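A hedged sketch, assuming a Docker Hub account named youruser (the account name is a placeholder):
# Step 1: tag the local image with a registry-qualified name and push it.
docker tag docker_test youruser/docker_test:latest
docker push youruser/docker_test:latest
Step 2 then means changing every "image": "docker_test" entry in Dockerrun.aws.json to "image": "youruser/docker_test:latest".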
I'm not that familiar with EB, but I am pretty familiar with ECR and ECS.
I usually get that error when I try to pull an image from an empty repo on ECR; in other words, the ECR repo was created but you haven't pushed any Docker images to it yet.
This can also happen when you try to pull an image from ECR and it can't find the version number of the image in the tag. I suggest that you change your docker-compose.yml file to use the latest version of the images. This means that everywhere you mention the image docker_test you will need to suffix it with ":latest".
Something like this:
image: docker_test:latest
I will post my whole docker-compose.yml I made for you at the end of the reply.
I would suggest that you have a look at this doc: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html. See the section "Using Images from an Amazon ECR Repository"; it explains how you can resolve the docker login issue.
I hope that helps. Please reply if you have any questions regarding this.
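For ECR specifically, "allowing Beanstalk to pull" usually comes down to giving the instance role ECR read access. A hedged sketch, assuming the default Beanstalk instance role name (aws-elasticbeanstalk-ec2-role) has not been customized:
# AmazonEC2ContainerRegistryReadOnly is the AWS-managed read-only ECR policy.
aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly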
version: "2"
services:
db:
restart: always
container_name: docker_test-db
image: postgres:9.6
expose:
- "5432"
mem_limit: 10m
environment:
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=docker_test
redis:
restart: always
image: redis:3.0
expose:
- "6379"
mem_limit: 10m
web:
# replace username/repo:tag with your name and image details
restart: always
build: .
image: docker_test:latest
container_name: docker_test-container
ports:
- "8000:8000"
environment:
- DATABASE=db
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=docker_test
mem_limit: 500m
depends_on:
- db
- redis
entrypoint: ./entry_point.sh
command: gunicorn docker_test.wsgi:application -w 2 -b :8000 --timeout 120 --graceful-timeout 120 --worker-class gevent
celery:
image: docker_test
container_name: docker_test-celery
command: celery -A docker_test worker -l info
links:
- db
- redis
mem_limit: 10m
depends_on:
- web
cbeat:
image: docker_test:latest
container_name: docker_test-cbeat
command: celery beat --loglevel=info
links:
- db
- redis
mem_limit: 10m
depends_on:
- web
For some reason the environment variables, although I've configured them in my ECS task, are not set in the running container. What am I missing? Why are the values empty?
I have the following AWS::ECS::TaskDefinition:
AirflowWebTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: !Join ['', [!Ref 'AWS::StackName', -dl-airflow-web]]
    ContainerDefinitions:
      - Name: dl-airflow-web
        Cpu: '10'
        Essential: 'true'
        Image: companyname-docker-snapshot-local.jfrog.io/docker-airflow:1.0
        Command: ['webserver']
        Memory: '1024'
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-group: !Ref 'AirflowCloudwatchLogsGroup'
            awslogs-region: !Ref 'AWS::Region'
            awslogs-stream-prefix: dl-airflow-web
        PortMappings:
          - ContainerPort: 8080
        Environment:
          - Name: LOAD_EX
            Value: n
          - Name: EXECUTOR
            Value: Celery
          - Name: MYQL_HOST
            Value: !Ref 'RDSDNSName'
          - Name: MYSQL_PORT
            Value: !Ref 'RDSPort'
          - Name: MYSQL_DB
            Value: !Ref 'AirflowDBName'
          - Name: USERNAME
            Value: !Ref 'AirflowDBUser'
          - Name: PASSWORD
            Value: !Ref 'AirflowDBPassword'
And I am using a docker image which is a fork of https://github.com/puckel/docker-airflow. The entrypoint for the image inspects environment variables as follows:
#!/usr/bin/env bash
AIRFLOW_HOME="/usr/local/airflow"
CMD="airflow"
TRY_LOOP="20"
: ${MYSQL_HOST:="default-mysql"}
: ${MYSQL_PORT:="3306"}
Where the $MYSQL_* variables are set to a default if they have not been set in the docker run command.
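The : ${MYSQL_HOST:="default-mysql"} lines are the standard bash default-assignment idiom; a quick sketch of the mechanism:
# If the variable is unset or empty, assign the default; otherwise leave it alone.
unset MYSQL_HOST
: ${MYSQL_HOST:="default-mysql"}
echo "$MYSQL_HOST"    # prints: default-mysql
MYSQL_HOST=mysql
: ${MYSQL_HOST:="default-mysql"}
echo "$MYSQL_HOST"    # prints: mysql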
When I run the container image from docker-compose using the configuration below, it works and the environment variables are all set:
webserver:
  image: companyname-docker-snapshot-local.jfrog.io/docker-airflow:1.0
  environment:
    - LOAD_EX=n
    - EXECUTOR=Celery
    - MYSQL_HOST=mysql
    - MYSQL_PORT=3306
    - USERNAME=dev-user
    - PASSWORD=dev-secret-pw
    - SQS_HOST=sqs
    - SQS_PORT=9324
    - AWS_DYNAMODB_ENDPOINT=http://dynamodb:8000
  ports:
    - "8090:8080"
  command: webserver
And the following command in my entrypoint.sh:
echo "$(date) - Checking for MYSQL (host: $MYSQL_HOST, port: $MYSQL_PORT) connectivity"
Logs this output:
Fri Jun 2 12:55:26 UTC 2017 - Checking for MYSQL (host: mysql, port: 3306) connectivity
But inspecting my cloudwatch logs shows this output with the default values:
Fri Jun 2 14:15:03 UTC 2017 - Checking for MYSQL (host: default-mysql, port: 3306) connectivity
But I can ssh into the EC2 host, run docker inspect [container_id] and verify that the environment variables are set:
Config": {
"Hostname": "...",
"Domainname": "",
"User": "airflow",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"5555/tcp": {},
"8080/tcp": {},
"8793/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"MYSQL_PORT=3306",
"PASSWORD=rds-secret-pw",
"USERNAME=rds-user",
"EXECUTOR=Celery",
"LOAD_EX=n",
"MYQL_HOST=rds-cluster-name.cluster-id.aws-region.rds.amazonaws.com",
"MYSQL_DB=db-name",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"DEBIAN_FRONTEND=noninteractive",
"TERM=linux",
"LANGUAGE=en_US.UTF-8",
"LANG=en_US.UTF-8",
"LC_ALL=en_US.UTF-8",
"LC_CTYPE=en_US.UTF-8",
"LC_MESSAGES=en_US.UTF-8"
],
"Cmd": [
"webserver"
],
"Image": "companyname-docker-snapshot-local.jfrog.io/docker-airflow:1.0",
"Volumes": null,
"WorkingDir": "/usr/local/airflow",
"Entrypoint": [
"/entrypoint.sh"
],
"OnBuild": null,
"Labels": {
"com.amazonaws.ecs.cluster": "...",
"com.amazonaws.ecs.container-name": "...",
"com.amazonaws.ecs.task-arn": "...",
"com.amazonaws.ecs.task-definition-family": "...",
"com.amazonaws.ecs.task-definition-version": "16"
}
},
And if I run:
$ docker exec [container-id] echo $MYSQL_HOST
The output is empty
Your task definition defines the environment variable MYQL_HOST (note the missing S). You got it right in the Docker Compose file; it's just the CloudFormation template. Fix it and it should be fine.
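That is, in the task definition above, the entry becomes:
- Name: MYSQL_HOST    # was MYQL_HOST
  Value: !Ref 'RDSDNSName'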