AWS EC2 ELB Docker Routing - amazon-web-services

I am running into issues running docker-compose because of an Elastic Load Balancer. The setup is: the ELB forwards 443 -> TCP 80, and Docker publishes 0.0.0.0:80->4444/tcp.
However, the server doesn't seem to be hit and I get DNS_PROBE_FINISHED_NXDOMAIN.
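For reference, that mapping would look something like this in docker-compose (a sketch; the service name loom is taken from the container name further down, the image is a placeholder):
```
# docker-compose.yml sketch: publish host port 80 to container port 4444
version: '2'
services:
  loom:
    image: example/loom:latest   # placeholder image
    ports:
      - "80:4444"
```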
I'm trying to verify whether this is a Docker setup issue. Docker version is 1.12.6 and docker-compose version is 1.12.0.
Is it normal for the bridge config to not have a Gateway defined?
```
[root@loom-server1 ec2-user]# docker network inspect 8f1b234bfb0b
[
{
"Name": "bridge",
"Id": "8f1b234bfb0b6c41962265299871cd8053757ec145f8e3f6b63960b71ceb3690",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16"
}
]
},
"Internal": false,
"Containers": {
"bbd2b84545a0e3519e37fb4015eea45637b75ccaa1dd362aff68ff41f3118055": {
"Name": "dockercompose_loom_1",
"EndpointID": "b7da2d31ff2503846d4f621bf355b8522afb8dabd1f02ca638c9ef032afefa76",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
```
The weird part is that it's able to load assets, e.g. ec2-52-53-84-186.us-west-1.compute.amazonaws.com/assets/js/homepage.js (the link may be up or down as I experiment with instances).
This is all running on OpsWorks.
Any insight or help would be appreciated.
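A few checks that separate the DNS, ELB, and host layers may help narrow this down (a sketch; the hostname and container IP are taken from the output above, the rest is generic):
```
# On the EC2 instance: is the published port listening and answering?
sudo docker ps --format '{{.Names}}: {{.Ports}}'   # expect 0.0.0.0:80->4444/tcp
curl -v http://localhost/                          # host -> container via the published port
curl -v http://172.17.0.2:4444/                    # talk to the container directly

# From a workstation: NXDOMAIN is a DNS failure, so the request never
# reached the ELB or the instance at all.
dig +short ec2-52-53-84-186.us-west-1.compute.amazonaws.com
```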

Related

AWS Elastic Beanstalk 'CannotPullContainerError' for private docker image (multi-instance)

I have a single-instance Elastic Beanstalk environment which runs a Docker image hosted as a private image on Docker Hub. This works fine. I am trying to create a new multi-container environment which runs the exact same image (plus one other, not included in my code example here). In the multi-container environment, I cannot get Elastic Beanstalk to launch my Docker image; I get the following error:
ECS task stopped due to: Task failed to start. (img1_name: img2_name: CannotPullContainerError: Error response from daemon: pull access denied for user/repo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied)
Here is the dockerrun for my single-instance environment:
```
{
"AWSEBDockerrunVersion": "1",
"Authentication": {
"Bucket": "my_bucket",
"Key": ".dockercfg"
},
"Image": {
"Name": "user/repo:tag",
"Update": "true"
},
"Ports": [
{
"ContainerPort": 5000,
"HostPort": 443
}
],
"Logging": "/var/log/nginx"
}
```
And here is the .dockercfg file:
```
{
"auths": {
"https://index.docker.io/v1/": {
"auth": "my_token"
}
}
}
```
Again, the above works fine.
My multi-instance dockerrun file is as follows:
```
{
"AWSEBDockerrunVersion": "2",
"authentication": {
"bucket": "my_bucket",
"key": ".dockercfg"
},
"containerDefinitions": [
{
"name": "img_name",
"image": "user/repo:tag",
"essential": true,
"memoryReservation": 128,
"portMappings": [
{
"hostPort": 80,
"containerPort": 5000
}
]
}
],
"Logging": "/var/log/nginx"
}
```
I have SSHed into my Elastic Beanstalk instance and run the following to check that it is able to access the .dockercfg from my S3 bucket:
aws s3api get-object --bucket mybucket --key dockercfg dockercfg
I have also tried various different formats for the .dockercfg file including...
```
{
    "https://index.docker.io/v1/": {
        "auth": "zq212MzEXAMPLE7o6T25Dk0i",
        "email": "email@example.com"
    }
}
```
```
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "zq212MzEXAMPLE7o6T25Dk0i"
        }
    }
}
```
I'm tearing my hair out over this. I've found a few similar threads here and on the AWS forums, but nothing seems to resolve my issue. Any help greatly appreciated.
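One way to narrow this down is to test the pull path by hand on the multi-container instance, since that environment pulls through the ECS agent (a sketch; the image name is the placeholder from the question):
```
# On the multi-container (ECS-backed) instance, test the pull path manually:
sudo docker login                    # same Docker Hub credentials as in .dockercfg
sudo docker pull user/repo:tag       # placeholder image name from the question

# The ECS agent's logs usually say why the automated pull was denied:
sudo ls /var/log/ecs/
sudo tail -n 100 /var/log/ecs/ecs-agent.log*
```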

Docker - Communication failure between containers on same network

I'm deploying an Angular - Django app on a DigitalOcean droplet. It's composed of 3 Docker containers:
cards_front: the Angular front-end
cards_api: the Django REST Framework back-end
cards_db: the Postgres database
They're all on the same network:
```
[
{
"Name": "ivan_cards_api_network",
"Id": "ddbd3524e02a7c918f6e09851731e015fdb7e8647358c5ed0c4cd949cf651fd9",
"Created": "2018-10-09T23:44:33.293036243Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"0d3144b27eaf6d7320357b6d703566e489f672b09b61dba0caf311c6e1c4711c": {
"Name": "cards_front",
"EndpointID": "47b1f8f42c4d18afeafeb9da502fd0197e726f29bd6d3d3c2960b44737bd579a",
"MacAddress": "02:42:ac:16:00:04",
"IPv4Address": "172.22.0.4/16",
"IPv6Address": ""
},
"3e9233f4bfc023632aaf13a146d1a50f75b4944503d9f226cf81140e92ccb532": {
"Name": "cards_api",
"EndpointID": "34d4780dc6f907a8cb9621223d6effe0a0aac1662d5272ae4a5104ba7f3808c4",
"MacAddress": "02:42:ac:16:00:03",
"IPv4Address": "172.22.0.3/16",
"IPv6Address": ""
},
"e5e208a20523c2d41433b850dc64db175de8ee7d0d156e2917c12fd8ebdf97ab": {
"Name": "cards_db",
"EndpointID": "8a8f44bbcdf2f95e716e2763e33bed31e1d2bdbfae7f6d78c8dee33de426a7ef",
"MacAddress": "02:42:ac:16:00:02",
"IPv4Address": "172.22.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "cards_api_network",
"com.docker.compose.project": "ivan",
"com.docker.compose.version": "1.22.0"
}
}
]
```
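For context, a compose file that produces a network like the one above would look roughly like this (a sketch; only the project, service, and network names come from the inspect output, the images and ports are placeholders):
```
# Sketch of the docker-compose.yml implied by the labels above (project "ivan")
version: '3'
services:
  cards_db:
    image: postgres            # placeholder image
    networks: [cards_api_network]
  cards_api:
    image: ivan/cards_api      # placeholder image
    ports:
      - "8000:8000"            # port assumed
    networks: [cards_api_network]
  cards_front:
    image: ivan/cards_front    # placeholder image
    ports:
      - "80:80"                # port assumed
    networks: [cards_api_network]
networks:
  cards_api_network:
```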
ALLOWED_HOSTS in the Django settings is set to ['*'].
When I test the Angular front-end in the browser, Chrome's developer tools show:
GET http://localhost:8000/themes net::ERR_CONNECTION_RESET
So, the Angular container is failing to communicate with the Django container.
But if I do a curl localhost:8000/themes from inside the DO droplet I get a response.
I know there's something missing on the network configuration, but I can't figure out what it is.
Thank you
EDIT:
If I do a curl from inside the Angular container to the Django container I get a response (the empty array):
```
root@90cea47dd13d:/# curl 172.22.0.3:8000/themes
[]
```

HTTPS on Elastic Beanstalk (Docker Multi-container)

I've been looking around and haven't found much content on best practices for setting up HTTPS/SSL on Amazon Elastic Beanstalk with a multi-container Docker environment.
There is a bunch of material on single-container configuration, but nothing on multi-container.
My Dockerrun.aws.json looks like this:
```
{
"AWSEBDockerrunVersion": 2,
"volumes": [
{
"name": "app-frontend",
"host": {
"sourcePath": "/var/app/current/app-frontend"
}
},
{
"name": "app-backend",
"host": {
"sourcePath": "/var/app/current/app-backend"
}
}
],
"containerDefinitions": [
{
"name": "app-backend",
"image": "xxxxx/app-backend",
"memory": 512,
"mountPoints": [
{
"containerPath": "/app/app-backend",
"sourceVolume": "app-backend"
}
],
"portMappings": [
{
"containerPort": 4000,
"hostPort": 4000
}
],
"environment": [
{
"name": "PORT",
"value": "4000"
},
{
"name": "MIX_ENV",
"value": "dev"
},
{
"name": "PG_PASSWORD",
"value": "xxxx"
},
{
"name": "PG_USERNAME",
"value": "xx"
},
{
"name": "PG_HOST",
"value": "xxxxx"
}
]
},
{
"name": "app-frontend",
"image": "xxxxxxx/app-frontend",
"memory": 512,
"links": [
"app-backend"
],
"command": [
"npm",
"run",
"production"
],
"mountPoints": [
{
"containerPath": "/app/app-frontend",
"sourceVolume": "app-frontend"
}
],
"portMappings": [
{
"containerPort": 3000,
"hostPort": 80
}
],
"environment": [
{
"name": "REDIS_HOST",
"value": "xxxxxx"
}
]
}
],
"family": ""
}
```
My thinking thus far is I would need to bring an nginx container into the mix in order to proxy the two services and handle things like mapping different domain names to different services.
Would I go the usual route of just setting up nginx and configuring the SSL as normal, or is there a better way, like I've seen for single containers using the .ebextensions method (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-docker.html)?
This is more of an idea (I haven't actually done this and am not sure if it would work), but the components all appear to be available to create an ALB that could direct traffic to one process or another based on path rules.
Here is what I am thinking could be done via .ebextensions config files, based on the options available at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html:
Use aws:elasticbeanstalk:environment:process:default to make sure the default application port and health check are set the way you intend (let's say port 80 is your default in this case).
Use aws:elasticbeanstalk:environment:process:process_name to create a backend process that goes to your second service (port 4000 in this case).
Create a rule for your backend with aws:elbv2:listenerrule:backend which would use something like /backend/* as the path.
Create the SSL listener with aws:elbv2:listener:443 (example at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-applicationloadbalancer.html) that uses this new backend rule.
I am not sure if additional rules need to be created for the default listener of aws:elbv2:listener:default. It seems like the default might just match /*, so in this case anything sent to /backend/* would go to the port 4000 container and anything else would go to the port 3000 container.
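In .ebextensions terms, those steps might look roughly like this (a sketch only; the file name, certificate ARN, ports, and rule priority are assumptions, not tested config):
```
# .ebextensions/alb-https.config (file name assumed)
option_settings:
  # Requires the environment to use an Application Load Balancer
  aws:elasticbeanstalk:environment:
    LoadBalancerType: application
  # Default process: the container published on host port 80
  aws:elasticbeanstalk:environment:process:default:
    Port: '80'
    Protocol: HTTP
  # Second process for the container published on host port 4000
  aws:elasticbeanstalk:environment:process:backend:
    Port: '4000'
    Protocol: HTTP
  # Path rule that sends /backend/* to the backend process
  aws:elbv2:listenerrule:backend:
    PathPatterns: /backend/*
    Process: backend
    Priority: 1
  # HTTPS listener using an ACM certificate; everything else falls through to default
  aws:elbv2:listener:443:
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:us-east-1:123456789012:certificate/example-id
    DefaultProcess: default
    Rules: backend
```
The Rules key on the 443 listener is what attaches the backend path rule to the HTTPS listener.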
You will definitely need an nginx container, for the simple fact that a multi-container Elastic Beanstalk setup does not provide one by default. The reason you see single-container setups on Elastic Beanstalk with these .ebextensions configs is that for that type of setup Elastic Beanstalk does provide nginx.
The benefit of having your own nginx container is that you won't need a frontend container (assuming you are serving static files). You can write your nginx config so that it serves static files directly.
Here is my Dockerrun file:
```
{
"AWSEBDockerrunVersion": 2,
"volumes": [
{
"name": "dist",
"host": {
"sourcePath": "/var/app/current/frontend/dist"
}
},
{
"name": "nginx-proxy-conf",
"host": {
"sourcePath": "/var/app/current/compose/production/nginx/nginx.conf"
}
}
],
"containerDefinitions": [
{
"name": "backend",
"image": "abc/xyz",
"essential": true,
"memory": 256,
},
{
"name": "nginx-proxy",
"image": "nginx:latest",
"essential": true,
"memory": 128,
"portMappings": [
{
"hostPort": 80,
"containerPort": 80
}
],
"depends_on": ["backend"],
"links": [
"backend"
],
"mountPoints": [
{
"sourceVolume": "dist",
"containerPath": "/var/www/app/frontend/dist",
"readOnly": true
},
{
"sourceVolume": "awseb-logs-nginx-proxy",
"containerPath": "/var/log/nginx"
},
{
"sourceVolume": "nginx-proxy-conf",
"containerPath": "/etc/nginx/nginx.conf",
"readOnly": true
}
]
}
]
}
```
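To make the static-file point concrete, the nginx.conf mounted above could look something like this (a minimal sketch; the backend port 4000 and the /api/ prefix are assumptions, only the paths and the backend link come from the Dockerrun):
```
# Minimal nginx.conf sketch for the nginx-proxy container
events {}

http {
  include /etc/nginx/mime.types;

  upstream backend {
    server backend:4000;   # "backend" resolves via the container link; port assumed
  }

  server {
    listen 80;

    # Serve the built frontend straight from the mounted volume
    root /var/www/app/frontend/dist;
    index index.html;

    location / {
      try_files $uri $uri/ /index.html;
    }

    # Proxy API traffic to the backend container (prefix assumed)
    location /api/ {
      proxy_pass http://backend;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}
```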
I also highly recommend using AWS services for setting up your SSL: Route 53 and Certificate Manager. They play nicely together and, if I understand correctly, this allows you to terminate SSL at the load balancer level.

"Invalid configuration for registry" error when executing "eb local run"

I think this is a very easy-to-fix problem, but I just can't seem to solve it! I've spent a good amount of time looking for leads on Google/SO but couldn't find a solution.
When executing eb local run, I'm getting this error:
Invalid configuration for registry
```
$ eb local run
ERROR: InvalidConfigFile :: Invalid configuration for registry 12345678.dkr.ecr.eu-west-1.amazonaws.com
```
The image lines in my Dockerrun.aws.json are as follows:
```
{
"AWSEBDockerrunVersion": 2,
"volumes": [
{
"name": "frontend",
"host": {
"sourcePath": "/var/app/current/frontend"
}
},
{
"name": "backend",
"host": {
"sourcePath": "/var/app/current/backend"
}
},
{
"name": "nginx-proxy-conf",
"host": {
"sourcePath": "/var/app/current/config/nginx"
}
},
{
"name": "nginx-proxy-content",
"host": {
"sourcePath": "/var/app/current/content/"
}
},
{
"name": "nginx-proxy-ssl",
"host": {
"sourcePath": "/var/app/current/config/ssl"
}
}
],
"containerDefinitions": [
{
"name": "backend",
"image": "123456.dkr.ecr.eu-west-1.amazonaws.com/backend:latest",
"Update": "true",
"essential": true,
"memory": 512,
"mountPoints": [
{
"containerPath": "/app/backend",
"sourceVolume": "backend"
}
],
"portMappings": [
{
"containerPort": 4000,
"hostPort": 4000
}
],
"environment": [
{
"name": "PORT",
"value": "4000"
},
{
"name": "MIX_ENV",
"value": "dev"
},
{
"name": "PG_PASSWORD",
"value": "xxsaxaax"
},
{
"name": "PG_USERNAME",
"value": "
},
{
"name": "PG_HOST",
"value": "123456.dsadsau89das.eu-west-1.rds.amazonaws.com"
},
{
"name": "FE_URL",
"value": "http://develop1.com"
}
]
},
{
"name": "frontend",
"image": "123456.dkr.ecr.eu-west-1.amazonaws.com/frontend:latest",
"Update": "true",
"essential": true,
"memory": 512,
"links": [
"backend"
],
"command": [
"npm",
"run",
"production"
],
"mountPoints": [
{
"containerPath": "/app/frontend",
"sourceVolume": "frontend"
}
],
"portMappings": [
{
"containerPort": 3000,
"hostPort": 3000
}
],
"environment": [
{
"name": "REDIS_HOST",
"value": "www.eample.com"
}
]
},
{
"name": "nginx-proxy",
"image": "nginx",
"essential": true,
"memory": 128,
"portMappings": [
{
"hostPort": 80,
"containerPort": 3000
}
],
"links": [
"backend",
"frontend"
],
"mountPoints": [
{
"sourceVolume": "nginx-proxy-content",
"containerPath": "/var/www/html"
},
{
"sourceVolume": "awseb-logs-nginx-proxy",
"containerPath": "/var/log/nginx"
},
{
"sourceVolume": "nginx-proxy-conf",
"containerPath": "/etc/nginx/conf.d",
"readOnly": true
},
{
"sourceVolume": "nginx-proxy-ssl",
"containerPath": "/etc/nginx/ssl",
"readOnly": true
}
]
}
],
"family": ""
}
```
It seems that you have a broken Docker registry auth config file. In your home directory, this file, ~/.docker/config.json, should look something like:
```
{
    "auths": {
        "https://1234567890.dkr.ecr.us-east-1.amazonaws.com": {
            "auth": "xxxxxx"
        }
    }
}
```
That file is generated with the docker login command (related to aws ecr get-login).
Check that. I say this because you are hitting an exception here:
```
for registry, entry in six.iteritems(entries):
    if not isinstance(entry, dict):
        # (...)
        if raise_on_error:
            raise errors.InvalidConfigFile(
                'Invalid configuration for registry {0}'.format(registry)
            )
        return {}
```
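If the config file does turn out to be malformed or stale, one way to regenerate the registry entry (a sketch, assuming AWS CLI v1, which provides aws ecr get-login, and the eu-west-1 registry from the error message) is:
```
# Print a docker login command for the ECR registry, then run its output
aws ecr get-login --no-include-email --region eu-west-1

# Or, for a plain Docker Hub entry
docker login
```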
This is due to outdated dependencies in the current version of the awsebcli tool. They pinned "docker-py (>=1.1.0,<=1.7.2)", which does not support the newer credential helper formats. The latest version of docker-py is the first one to properly support the latest credential helper format, and until the AWS EB CLI developers update docker-py to 2.4.0 (https://github.com/docker/docker-py/releases/tag/2.4.0) this will remain broken.
The first problem is that it's not valid JSON: the PG_USERNAME field does not have the closing quote.
```
{
    "name": "PG_USERNAME",
    "value": "
},
```
Should be
```
{
    "name": "PG_USERNAME",
    "value": ""
},
```
The next thing to check is whether your Beanstalk instance profile has access to the ECR registry.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html
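For that check, a quick way to grant ECR read access is to attach AWS's managed read-only policy to the environment's instance role (a sketch; the role name below is the Elastic Beanstalk default and may differ in your account):
```
# Attach AWS's managed read-only ECR policy to the instance profile role
aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
```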
Specifies the Docker base image on an existing Docker repository from which you're building a Docker container. Specify the value of the Name key in the format &lt;organization&gt;/&lt;image name&gt; for images on Docker Hub, or &lt;site&gt;/&lt;organization name&gt;/&lt;image name&gt; for other sites.
When you specify an image in the Dockerrun.aws.json file, each instance in your Elastic Beanstalk environment will run docker pull on that image and run it. Optionally include the Update key. The default value is "true" and instructs Elastic Beanstalk to check the repository, pull any updates to the image, and overwrite any cached images.
Do not specify the Image key in the Dockerrun.aws.json file when using a Dockerfile. Elastic Beanstalk will always build and use the image described in the Dockerfile when one is present.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html
Test to make sure you can access your ECR registry outside of Elastic Beanstalk as well.
```
$ docker pull aws_account_id.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest
latest: Pulling from amazonlinux
8e3fa21c4cc4: Pull complete
Digest: sha256:59895a93ba4345e238926c0f4f4a3969b1ec5aa0a291a182816a4630c62df769
Status: Downloaded newer image for aws_account_id.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest
```
http://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-pull-ecr-image.html

Seemingly Random Timeouts using Packer with Ansible on AWS

I am baking AWS AMIs using Packer and Ansible. It seems that after about 6 minutes or so of Packer running, the process fails. Aside from it taking about 6 minutes, I can't find a logical explanation for what is happening. The Ansible playbook fails at different points along the way, but always about 6 minutes after I launch Packer.
I always get an Ansible error when I hit this issue: either "Timeout (12s) waiting for privilege escalation prompt:" or "Connection to 127.0.0.1 closed.\r\n".
Is there a way to extend the timeouts associated with a playbook or Packer builder?
Packer file contents:
```
{
"provisioners": [{
"type": "ansible",
"playbook_file": "../ansible/nexus.yml"
}],
"builders": [{
"type": "amazon-ebs",
"region": "us-east-2",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "amzn-ami-hvm*-gp2",
"root-device-type": "ebs"
},
"owners": ["137112412989"],
"most_recent": true
},
"instance_type": "t2.medium",
"ami_virtualization_type": "hvm",
"ssh_username": "ec2-user",
"ami_name": "Nexus (Latest Amazon Linux Base) {{isotime \"2006-01-02T15-04-05-06Z\"| clean_ami_name}}",
"ami_description": "Nexus AMI",
"run_tags": {
"AmiName": "Nexus",
"AmiCreatedBy": "Packer"
},
"tags": {
"Name": "Nexus",
"CreatedBy": "Packer"
}
}]
}
```
I solved this issue by adding the following parameters to the Ansible configuration file:
```
[defaults]
forks = 1

[ssh_connection]
ssh_args = -o ControlMaster=no -o ControlPath=none -o ControlPersist=no
pipelining = false
```
Hope it helps.
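If you'd rather not rely on a global ansible.cfg, the ansible provisioner in the Packer template accepts equivalent settings through extra_arguments and ansible_env_vars (a sketch; the 60-second timeout value is illustrative):
```
"provisioners": [{
  "type": "ansible",
  "playbook_file": "../ansible/nexus.yml",
  "extra_arguments": ["--timeout=60"],
  "ansible_env_vars": [
    "ANSIBLE_SSH_ARGS=-o ControlMaster=no -o ControlPersist=no",
    "ANSIBLE_PIPELINING=False"
  ]
}]
```
extra_arguments is passed straight to ansible-playbook, so --timeout here raises the SSH connection timeout rather than any per-task limit.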