"Invalid configuration for registry" error when executing "eb local run" - amazon-web-services

I think this is a very easy problem to fix, but I just can't seem to solve it! I've spent a good amount of time looking for leads on Google/SO but couldn't find a solution.
When executing eb local run, I'm getting this error:
$ eb local run
ERROR: InvalidConfigFile :: Invalid configuration for registry 12345678.dkr.ecr.eu-west-1.amazonaws.com
My Dockerrun.aws.json, including the image lines, is as follows:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "frontend",
      "host": {
        "sourcePath": "/var/app/current/frontend"
      }
    },
    {
      "name": "backend",
      "host": {
        "sourcePath": "/var/app/current/backend"
      }
    },
    {
      "name": "nginx-proxy-conf",
      "host": {
        "sourcePath": "/var/app/current/config/nginx"
      }
    },
    {
      "name": "nginx-proxy-content",
      "host": {
        "sourcePath": "/var/app/current/content/"
      }
    },
    {
      "name": "nginx-proxy-ssl",
      "host": {
        "sourcePath": "/var/app/current/config/ssl"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "123456.dkr.ecr.eu-west-1.amazonaws.com/backend:latest",
      "Update": "true",
      "essential": true,
      "memory": 512,
      "mountPoints": [
        {
          "containerPath": "/app/backend",
          "sourceVolume": "backend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 4000,
          "hostPort": 4000
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "4000"
        },
        {
          "name": "MIX_ENV",
          "value": "dev"
        },
        {
          "name": "PG_PASSWORD",
          "value": "xxsaxaax"
        },
        {
          "name": "PG_USERNAME",
          "value": "
        },
        {
          "name": "PG_HOST",
          "value": "123456.dsadsau89das.eu-west-1.rds.amazonaws.com"
        },
        {
          "name": "FE_URL",
          "value": "http://develop1.com"
        }
      ]
    },
    {
      "name": "frontend",
      "image": "123456.dkr.ecr.eu-west-1.amazonaws.com/frontend:latest",
      "Update": "true",
      "essential": true,
      "memory": 512,
      "links": [
        "backend"
      ],
      "command": [
        "npm",
        "run",
        "production"
      ],
      "mountPoints": [
        {
          "containerPath": "/app/frontend",
          "sourceVolume": "frontend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 3000
        }
      ],
      "environment": [
        {
          "name": "REDIS_HOST",
          "value": "www.eample.com"
        }
      ]
    },
    {
      "name": "nginx-proxy",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 3000
        }
      ],
      "links": [
        "backend",
        "frontend"
      ],
      "mountPoints": [
        {
          "sourceVolume": "nginx-proxy-content",
          "containerPath": "/var/www/html"
        },
        {
          "sourceVolume": "awseb-logs-nginx-proxy",
          "containerPath": "/var/log/nginx"
        },
        {
          "sourceVolume": "nginx-proxy-conf",
          "containerPath": "/etc/nginx/conf.d",
          "readOnly": true
        },
        {
          "sourceVolume": "nginx-proxy-ssl",
          "containerPath": "/etc/nginx/ssl",
          "readOnly": true
        }
      ]
    }
  ],
  "family": ""
}

It seems that you have a broken docker-registry auth config file. In your home directory, the file ~/.docker/config.json should look something like:
{
  "auths": {
    "https://1234567890.dkr.ecr.us-east-1.amazonaws.com": {
      "auth": "xxxxxx"
    }
  }
}
That file is generated by the docker login command (see also aws ecr get-login).
Check that. I say this because you are hitting this exception:
for registry, entry in six.iteritems(entries):
    if not isinstance(entry, dict):
        # (...)
        if raise_on_error:
            raise errors.InvalidConfigFile(
                'Invalid configuration for registry {0}'.format(registry)
            )
        return {}
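If the file is malformed, the simplest check is to move it aside and let docker login regenerate it. A minimal sketch, assuming AWS CLI v1 and the eu-west-1 region from the error message (newer CLI v1 releases support --no-include-email; drop the flag on older ones):
# Back up the possibly broken auth config.
$ mv ~/.docker/config.json ~/.docker/config.json.bak
# aws ecr get-login prints a docker login command; eval runs it.
$ eval "$(aws ecr get-login --no-include-email --region eu-west-1)"
# Confirm an "auths" entry for your registry now exists.
$ cat ~/.docker/config.json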

This is due to outdated dependencies in the current version of the awsebcli tool. It pins "docker-py (>=1.1.0,<=1.7.2)", which does not support the newer credential helper formats. docker-py 2.4.0 is the first release to properly support the newer credential helper format, and until the AWS EB CLI developers update the docker-py dependency to 2.4.0 (https://github.com/docker/docker-py/releases/tag/2.4.0), this will remain broken.
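Until that happens, a workaround I have seen suggested (an assumption on my part, not an official fix) is to strip the credential-helper keys that the pinned docker-py cannot parse, then log in again so a plain base64 auth entry is written. A sketch, assuming jq is installed:
# Remove the credsStore/credHelpers keys that docker-py <= 1.7.2 trips on.
$ jq 'del(.credsStore, .credHelpers)' ~/.docker/config.json > /tmp/config.json \
    && mv /tmp/config.json ~/.docker/config.json
# Re-authenticate so a plain "auths" entry is stored in the file.
$ eval "$(aws ecr get-login --no-include-email --region eu-west-1)"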

First, the file is not valid JSON: the PG_USERNAME value is missing its closing quote.
{
  "name": "PG_USERNAME",
  "value": "
},
Should be
{
  "name": "PG_USERNAME",
  "value": ""
},
The next thing to check is whether your Beanstalk instance profile has access to the ECR registry.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html
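If the profile is missing ECR permissions, attaching the managed read-only policy is usually enough. A sketch, assuming the default instance profile role name aws-elasticbeanstalk-ec2-role:
# Grant the Beanstalk instance profile read access to ECR.
$ aws iam attach-role-policy \
    --role-name aws-elasticbeanstalk-ec2-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly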
Specifies the Docker base image on an existing Docker repository from which you're building a Docker container. Specify the value of the Name key in the format <organization>/<image name> for images on Docker Hub, or <site>/<organization name>/<image name> for other sites.
When you specify an image in the Dockerrun.aws.json file, each instance in your Elastic Beanstalk environment will run docker pull on that image and run it. Optionally include the Update key. The default value is "true" and instructs Elastic Beanstalk to check the repository, pull any updates to the image, and overwrite any cached images.
Do not specify the Image key in the Dockerrun.aws.json file when using a Dockerfile. Elastic Beanstalk will always build and use the image described in the Dockerfile when one is present.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html
Test to make sure you can access your ECR repository outside of Elastic Beanstalk as well:
$ docker pull aws_account_id.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest
latest: Pulling from amazonlinux
8e3fa21c4cc4: Pull complete
Digest: sha256:59895a93ba4345e238926c0f4f4a3969b1ec5aa0a291a182816a4630c62df769
Status: Downloaded newer image for aws_account_id.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest
http://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-pull-ecr-image.html

Related

AWS service can't start task, but starting task manually works

Until now I had a backend running standalone tasks. I now want to switch to services that start my tasks. For two of the tasks I need direct access to them, so I tried using ServiceConnect.
When I run the task standalone, it starts. When I start a service without ServiceConnect with the same task inside, it also starts. When I enable ServiceConnect, I get this error message in the 'Deployments and events' tab of the service:
service (...) was unable to place a task because no container instance met all of its requirements.
The closest matching container-instance (...) is missing an attribute required by your task.
For more information, see the Troubleshooting section of the Amazon ECS Developer Guide.
When I check the attributes of all free container instances with:
ecs-cli check-attributes --task-def some-task-definition --container-instances ... --cluster some-cluster
I just get:
Container Instance    Missing Attributes
heyvie-backend-dev    None
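(For reference, the plain AWS CLI can dump the same information; Service Connect reportedly requires a recent ECS agent that advertises an attribute along the lines of ecs.capability.service-connect-v1, so it is worth checking whether anything like that shows up here:)
# List every attribute the cluster's container instances advertise.
$ aws ecs list-attributes \
    --cluster some-cluster \
    --target-type container-instance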
My task definition looks like this:
{
  "family": "some-task-definition",
  "taskRoleArn": "arn:aws:iam::...:role/ecsTaskExecutionRole",
  "executionRoleArn": "arn:aws:iam::...:role/ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "cpu": "1024",
  "memory": "982",
  "containerDefinitions": [
    {
      "name": "...",
      "image": "...",
      "essential": true,
      "healthCheck": {
        "command": ["..."],
        "startPeriod": 20,
        "retries": 3
      },
      "portMappings": [
        {
          "name": "somePortName",
          "containerPort": 4321
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "...",
          "containerPath": "..."
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "...",
          "awslogs-region": "eu-...",
          "awslogs-stream-prefix": "..."
        }
      }
    }
  ],
  "volumes": [
    {
      "name": "...",
      "efsVolumeConfiguration": {
        "fileSystemId": "...",
        "rootDirectory": "/",
        "transitEncryption": "ENABLED"
      }
    }
  ],
  "requiresCompatibilities": ["EC2"]
}
My service definition looks like this:
{
  "cluster": "some-cluster",
  "serviceName": "...",
  "taskDefinition": "some-task-definition",
  "desiredCount": 1,
  "launchType": "EC2",
  "deploymentConfiguration": {
    "maximumPercent": 100,
    "minimumHealthyPercent": 0
  },
  "placementConstraints": [
    {
      "type": "distinctInstance"
    }
  ],
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": [
        ...
      ],
      "securityGroups": ["..."],
      "assignPublicIp": "DISABLED"
    }
  },
  "serviceConnectConfiguration": {
    "enabled": true,
    "namespace": "someNamespace",
    "services": [
      {
        "portName": "somePortName",
        "clientAliases": [
          {
            "port": 4321
          }
        ]
      }
    ]
  },
  "schedulingStrategy": "REPLICA",
  "enableECSManagedTags": true,
  "propagateTags": "SERVICE"
}
I also added this to the user data of my launch template:
#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_ENABLE_TASK_IAM_ROLE=true
ECS_CLUSTER=some-cluster
EOF
Did anyone experience something similar, or does anyone know what could cause this issue?
I used ServiceDiscovery instead. I think it's the easiest way to replace the dynamic IP address of a task in a service (the IP address changes on every restart, which is probably what you're trying to avoid?).
With ServiceDiscovery you create a new DNS record, and instead of ip-address:port you can just use serviceNameOfNamespace.namespace. to connect to a task. ServiceDiscovery worked without any problem on an existing task.
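For example, with a namespace someNamespace and a service registered as someService (hypothetical names; the port is taken from the task definition above), another task in the VPC could simply do:
# Cloud Map creates a private DNS record for the service,
# so no hard-coded task IP is needed.
$ curl http://someService.someNamespace:4321/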
Hope that helps. I don't really know if there are any benefits to ServiceConnect beyond higher connection counts and retry functionality, so if anybody knows more about the differences between the two, I'm happy to learn.

How to deploy multi-container on Elastic Beanstalk (AWS)?

I tried to deploy this app, which consists of a Flask API and a MongoDB database mounted to a volume.
What am I doing wrong? I tried to upload the Dockerrun.aws.json file to Beanstalk, but I keep getting this error:
[Instance: i-0f9dd8d8d30059929] Command failed on instance. An unexpected error has occurred [ErrorCode: 0000000001].
This is my Dockerrun.aws.json file:
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"essential": true,
"image": "nielshoogeveen1990/image-classifier:latest",
"links": [
"db"
],
"name": "api",
"memory": 128,
"portMappings": [
{
"containerPort": 5000,
"hostPort": 5000
}
]
},
{
"essential": true,
"image": "mongo:3.6.4",
"mountPoints": [
{
"containerPath": "/var/lib/mysql/data",
"sourceVolume": "Db-Data"
}
],
"name": "db",
"memory": 128
}
],
"family": "",
"volumes": [
{
"host": {
"sourcePath": "db-data"
},
"name": "Db-Data"
}
]
}

Prisma error when trying to run with elastic beanstalk

I have a Prisma project that works fine locally when I run $ docker-compose up. I converted the docker-compose.yml file to Dockerrun.aws.json. But now when I try to run the project locally via $ eb local run, I get an error:
mysql_1 | Version: '5.7.24' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
prisma_1 | Exception in thread "main" java.sql.SQLTransientConnectionException: database - Connection is not available, request timed out after 5001ms.
Below is my Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": "2",
  "containerDefinitions": [
    {
      "environment": [
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "prisma"
        }
      ],
      "essential": true,
      "memory": 128,
      "image": "mysql:5.7",
      "mountPoints": [
        {
          "containerPath": "/var/lib/mysql",
          "sourceVolume": "Mysql"
        }
      ],
      "name": "mysql",
      "portMappings": [
        {
          "containerPort": 3306,
          "hostPort": 3306
        }
      ]
    },
    {
      "environment": [
        {
          "name": "PRISMA_CONFIG",
          "value": "port: 4466\ndatabases:\n default:\n connector: mysql\n host: mysql\n port: 3306\n user: root\n password: prisma\n migrations: true\n"
        }
      ],
      "essential": true,
      "memory": 128,
      "image": "prismagraphql/prisma:1.21",
      "name": "prisma",
      "portMappings": [
        {
          "containerPort": 4466,
          "hostPort": 4466
        }
      ]
    }
  ],
  "family": "",
  "volumes": [
    {
      "host": {
        "sourcePath": "mysql"
      },
      "name": "Mysql"
    }
  ]
}
The error message leads me to believe there's an issue connecting the prisma container to the mysql instance. If I had to guess, it's the PRISMA_CONFIG value, but I'm not 100% sure. Can someone tell me what I'm doing wrong here?
You cannot have those \n escapes in there. YAML cares about real newlines and spaces.
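To see exactly what the prisma container receives, you can print the PRISMA_CONFIG value with the escapes expanded and eyeball the resulting YAML indentation. A sketch, assuming jq is installed:
# Extract PRISMA_CONFIG from the Dockerrun file; jq -r renders
# the \n escapes as real newlines.
$ jq -r '.containerDefinitions[]
         | select(.name == "prisma").environment[]
         | select(.name == "PRISMA_CONFIG").value' Dockerrun.aws.json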

HTTPS on Elastic Beanstalk (Docker Multi-container)

I've been looking around and haven't found much content regarding best practices for setting up HTTPS/SSL on Amazon Elastic Beanstalk with a multi-container Docker environment.
There is plenty of material for single-container configurations, but nothing for multi-container.
My Dockerrun.aws.json looks like this:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "app-frontend",
      "host": {
        "sourcePath": "/var/app/current/app-frontend"
      }
    },
    {
      "name": "app-backend",
      "host": {
        "sourcePath": "/var/app/current/app-backend"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "app-backend",
      "image": "xxxxx/app-backend",
      "memory": 512,
      "mountPoints": [
        {
          "containerPath": "/app/app-backend",
          "sourceVolume": "app-backend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 4000,
          "hostPort": 4000
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "4000"
        },
        {
          "name": "MIX_ENV",
          "value": "dev"
        },
        {
          "name": "PG_PASSWORD",
          "value": "xxxx"
        },
        {
          "name": "PG_USERNAME",
          "value": "xx"
        },
        {
          "name": "PG_HOST",
          "value": "xxxxx"
        }
      ]
    },
    {
      "name": "app-frontend",
      "image": "xxxxxxx/app-frontend",
      "memory": 512,
      "links": [
        "app-backend"
      ],
      "command": [
        "npm",
        "run",
        "production"
      ],
      "mountPoints": [
        {
          "containerPath": "/app/app-frontend",
          "sourceVolume": "app-frontend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 80
        }
      ],
      "environment": [
        {
          "name": "REDIS_HOST",
          "value": "xxxxxx"
        }
      ]
    }
  ],
  "family": ""
}
My thinking so far is that I would need to bring an nginx container into the mix to proxy the two services and handle things like mapping different domain names to different services.
Would I go the usual route of setting up nginx and configuring SSL as normal, or is there a better way, like the .ebextensions method I've seen used for single containers (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-docker.html)?
This is more of an idea (I haven't actually done this and am not sure it would work), but the components all appear to be available to create an ALB that directs traffic to one process or another based on path rules.
Here is what I am thinking could be done via .ebextensions config files, based on the options available from http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html (sketched as a config file after this list):
Use aws:elasticbeanstalk:environment:process:default to make sure the default application port and health check are set the way you intend (let's say port 80 is your default in this case).
Use aws:elasticbeanstalk:environment:process:process_name to create a backend process that goes to your second service (port 4000 in this case).
Create a rule for your backend with aws:elbv2:listenerrule:backend, which would use something like /backend/* as the path.
Create the SSL listener with aws:elbv2:listener:443 (example at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-applicationloadbalancer.html) that uses this new backend rule.
I am not sure if additional rules need to be created for the default listener of aws:elbv2:listener:default. It seems like the default might just match /*, so anything sent to /backend/* would go to the port 4000 container and everything else goes to the port 3000 container.
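Sketched as an .ebextensions config file (untested; the option names come from the general options page linked above, and the certificate ARN is a placeholder):
$ cat <<'EOF' > .ebextensions/alb-https.config
option_settings:
  aws:elasticbeanstalk:environment:process:default:
    Port: '80'
    Protocol: HTTP
  aws:elasticbeanstalk:environment:process:backend:
    Port: '4000'
    Protocol: HTTP
  aws:elbv2:listenerrule:backend:
    PathPatterns: /backend/*
    Process: backend
  aws:elbv2:listener:443:
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:eu-west-1:123456789012:certificate/placeholder
    Rules: backend
EOF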
You will definitely need an nginx container, for the simple fact that a multi-container Elastic Beanstalk setup does not provide one by default. The reason you see single-container setups with these .ebextensions configs is that for that type of setup Elastic Beanstalk does provide nginx.
The benefit of having your own nginx container is that you won't need a frontend container (assuming you are serving static files). You can write your nginx config so that it serves static files directly, as sketched below.
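For example, an nginx.conf along these lines (a hypothetical sketch; the /api/ path and backend port are assumptions) would serve the built frontend from the shared volume and proxy API calls to the linked backend container:
$ cat <<'EOF' > compose/production/nginx/nginx.conf
events {}
http {
  include /etc/nginx/mime.types;
  server {
    listen 80;
    # Static frontend build, mounted via the "dist" volume below.
    root /var/www/app/frontend/dist;
    location / { try_files $uri $uri/ /index.html; }
    # Everything under /api/ goes to the linked backend container.
    location /api/ { proxy_pass http://backend:4000; }
  }
}
EOF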
Here is my Dockerrun file:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "dist",
      "host": {
        "sourcePath": "/var/app/current/frontend/dist"
      }
    },
    {
      "name": "nginx-proxy-conf",
      "host": {
        "sourcePath": "/var/app/current/compose/production/nginx/nginx.conf"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "abc/xyz",
      "essential": true,
      "memory": 256
    },
    {
      "name": "nginx-proxy",
      "image": "nginx:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "depends_on": ["backend"],
      "links": [
        "backend"
      ],
      "mountPoints": [
        {
          "sourceVolume": "dist",
          "containerPath": "/var/www/app/frontend/dist",
          "readOnly": true
        },
        {
          "sourceVolume": "awseb-logs-nginx-proxy",
          "containerPath": "/var/log/nginx"
        },
        {
          "sourceVolume": "nginx-proxy-conf",
          "containerPath": "/etc/nginx/nginx.conf",
          "readOnly": true
        }
      ]
    }
  ]
}
I also highly recommend using AWS services for setting up your SSL: Route 53 and Certificate Manager. They play nicely together, and if I understand correctly, they let you terminate SSL at the load balancer level.

Can I specify a file instead of a directory to send to the container in the Dockerrun.aws.json file?

I cannot find the reference documentation for the available fields in Dockerrun.aws.json. I'm trying to import /dev/log from the host into the container so that I can centralize logs to logstash.
From the example, we can see that there are "HostDirectory" and "ContainerDirectory" fields, but I can't find any analogue such as "HostFile"/"ContainerFile".
How can I specify a single file to be shared with an elastic-beanstalk-enabled docker container?
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my-bucket",
    "Key": "mydockercfg"
  },
  "Image": {
    "Name": "janedoe/image",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "1234"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/var/app/mydb",
      "ContainerDirectory": "/etc/mysql"
    }
  ],
  "Logging": "/var/log/nginx"
}
You can mount a host file as a data volume: https://docs.docker.com/userguide/dockervolumes/#mount-a-host-file-as-a-data-volume
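In plain Docker this is just a bind mount whose source happens to be a file rather than a directory, e.g.:
# Bind-mounting a single file works the same way as a directory.
$ docker run --rm -v /dev/log:/dev/log alpine ls -l /dev/log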
So you should be able to use the "Volumes" section to mount your file into your container:
"Volumes": [
{
"HostDirectory": "/dev/log",
"ContainerDirectory": "/dev/log"
}
]