I'm new to AWS and Elastic Beanstalk. I'm trying to test a multi-container Docker deployment with a simple Spring Boot Docker image https://hub.docker.com/r/springcloud/eureka/ just to see something working for now.
I'm uploading a very simple Dockerrun.aws.json file to the Beanstalk console:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "eureka1",
      "image": "springcloud/eureka",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8761
        }
      ]
    }
  ]
}
The springcloud/eureka image starts the server on port 8761 by default, and I'm mapping the host's port 80 to the container's port 8761.
So opening the application's URL (something like http://sample-env-1.xxx.eu-central-1.elasticbeanstalk.com/ ) should display the Eureka server interface... It doesn't. The browser just shows its standard "Unable to connect" page.
The logs don't appear to indicate an error... Or at least I can't see any obvious one.
It turns out the problem was setting the "memory" parameter to 128, which probably wasn't enough. Switching it to "memoryReservation": 128 made it work.
"memory" sets a hard limit (the container is killed if it exceeds it), while "memoryReservation" sets a soft limit. Prefer the soft limit if you're not sure about the container's memory requirements.
Related
I need to get the SYS_PTRACE kernel capability on my Docker container. Here's the Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "some-bucket",
    "Key": "somekey"
  },
  "Image": {
    "Name": "somename",
    "Update": "true"
  },
  "Ports": [
    {
      "HostPort": 80,
      "ContainerPort": 80
    },
    a few more ports
  ]
}
Remember, this is Amazon Linux 2, which is a whole new distribution and EB platform. We're not using Docker Compose (where you could add that to the yml, shown below for reference).
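(For reference, if we were on the Compose-based platform, the equivalent would be something along these lines in docker-compose.yml; the service name is a placeholder:)

services:
  myservice:
    cap_add:
      - SYS_PTRACE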
I tried just adding in the following section:
"linuxParameters": {
"capabilities": {
"add": ["SYS_PTRACE"]
}
}
It was simply ignored.
Thanks!
It seems to me that this setting is not supported in v1. Looking at the docs under "Docker platform Configuration - without Docker Compose" [1], linuxParameters is not listed among the "Valid keys and values for the Dockerrun.aws.json v1 file". You might have to switch to v2 by using multi-container Docker. The docs for v2 state that "the container definition and volumes sections of Dockerrun.aws.json use the same formatting as the corresponding sections of an Amazon ECS task definition file". [2]
It looks like your code above would work in v2 because it is a valid task definition section, see [3].
[1] https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/single-container-docker-configuration.html
[2] https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
[3] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
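A minimal sketch of what that could look like in v2, assuming v2 really does pass linuxParameters through to ECS (the container name is a placeholder, the image name is kept from the question, and I haven't verified this on a live environment):

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "somename",
      "essential": true,
      "memoryReservation": 128,
      "linuxParameters": {
        "capabilities": {
          "add": ["SYS_PTRACE"]
        }
      }
    }
  ]
}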
I am using a Docker environment in an Elastic Beanstalk cluster but am having trouble with the open files limit. I verified that on the host my open files limit is 65535, but in the Docker container the soft limit is 1024 and the hard limit is 4096. I'd like to increase these limits inside the container, but when I tried to do that manually I got an error, even as root:
root@4020d4faf5fc:/# ulimit -n 20000
bash: ulimit: open files: cannot modify limit: Operation not permitted
A similar thread also shares some ideas, but those seem to be about increasing the limit on the host rather than in the container.
You would need the SYS_RESOURCE Linux capability to set ulimit from within the container, which would typically be specified using the --cap-add flag with docker run.
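Outside of Beanstalk, for illustration, that would look something like this (the image name is a placeholder):

# grant the capability so the limit can be raised from inside the container
docker run --cap-add=SYS_RESOURCE myimage
# or set the limit directly at run time instead
docker run --ulimit nofile=20000:20000 myimage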
With Elastic Beanstalk this can be accomplished in the following ways:
If you are already using docker-compose, add it to your compose file as usual (under the services.<your service> key):
ulimits:
  nofile:
    soft: 20000
    hard: 20000
If you use Dockerrun.aws.json version 1 for single-container Docker environments, see Task Definition Resource Limits:
{
  "AWSEBDockerrunVersion": "1",
  ...
  "ulimits": [
    {
      "name": "nofile",
      "softLimit": 20000,
      "hardLimit": 20000
    }
  ]
}
If you use Dockerrun.aws.json version 2 for multi-container Docker environments, this gist may be useful:
{
  "AWSEBDockerrunVersion": "2",
  "containerDefinitions": [
    {
      ...
      "ulimits": [
        {
          "hardLimit": 20000,
          "name": "nofile",
          "softLimit": 20000
        }
      ]
    }
  ]
}
See also the Elastic Beanstalk Docker docs.
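Once deployed, a quick way to verify that the new limit took effect (assuming shell access to the instance; the container name is a placeholder):

# the soft limit reported inside the container should now be 20000
docker exec mycontainer sh -c 'ulimit -n'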
I'm getting asymmetrical container discoverability with multi-container Docker on AWS. Namely, the first container can find the second, but the second cannot find the first.
I have a multicontainer docker deployment on AWS Elastic Beanstalk. Both containers are running Node servers using identical initial code, and are built with identical Dockerfiles. Everything is up to date.
Anonymized version of my Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "firstContainer",
      "image": "firstContainerImage",
      "essential": true,
      "memoryReservation": 196,
      "links": [
        "secondContainer",
        "redis"
      ],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ]
    },
    {
      "name": "secondContainer",
      "image": "secondContainerImage",
      "essential": true,
      "memoryReservation": 196,
      "environment": [],
      "links": [
        "redis"
      ]
    },
    {
      "name": "redis",
      "image": "redis:4.0-alpine",
      "essential": true,
      "memoryReservation": 128
    }
  ]
}
The firstContainer proxies a subset of requests to secondContainer on port 8080, via the address http://secondContainer:8080, which works completely fine. However, if I try to send a request the other way, from secondContainer to http://firstContainer:8080, I get a "Bad Address" error of one sort or another. This is true both from within the servers running on these containers, and directly from the containers themselves using wget. It's also true when trying different exposed ports.
If I add "firstContainer" to the "links" field of the second container's Dockerrun file, I get an error.
My local setup, using docker-compose, does not have this problem at all.
Anyone know what the cause of this is? How can I get symmetrical discoverability on an AWS multicontainer deployment?
I got a response from AWS support on the topic.
The links are indeed one-way, which is an unfortunate limitation. They recommended taking one of two approaches:
Use a shared filesystem and write the IP addresses of the containers to a file, which could then be used by your application to access the containers.
Use the AWS Fargate service with ECS Service Discovery, which lets you automatically create DNS records for the tasks and makes them discoverable within your VPC.
I opted for a 3rd approach, which was to have the container that can discover the rest send out pings to inform the others of its docker-network IP address.
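A rough sketch of that idea, run inside the discovering container on startup (the /register endpoint and payload here are entirely hypothetical; your application would have to implement something like it):

# announce our docker-network IP to the container that cannot link back
MY_IP=$(hostname -i)
curl -s -X POST "http://secondContainer:8080/register" \
  -H "Content-Type: application/json" \
  -d "{\"host\": \"$MY_IP\"}"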
I'm exploring another option which is a mix of links, port mappings and extraHosts.
{
  "name": "grafana",
  "image": "docker.pkg.github.com/safecast/reporting2/grafana:latest",
  "memoryReservation": 128,
  "essential": true,
  "portMappings": [
    {
      "hostPort": 3000,
      "containerPort": 3000
    }
  ],
  "links": [
    "renderer"
  ],
  "mountPoints": [
    {
      "sourceVolume": "grafana",
      "containerPath": "/etc/grafana",
      "readOnly": true
    }
  ]
},
{
  "name": "renderer",
  "image": "grafana/grafana-image-renderer:2.0.0",
  "memoryReservation": 128,
  "essential": true,
  "portMappings": [
    {
      "hostPort": 8081,
      "containerPort": 8081
    }
  ],
  "mountPoints": [],
  "extraHosts": [
    {
      "hostname": "grafana",
      "ipAddress": "172.17.0.1"
    }
  ]
}
This allows grafana to resolve renderer via links as usual, while the renderer container resolves grafana to the host IP (172.17.0.1, the default Docker bridge gateway), which has port 3000 bound back to the grafana container.
So far it seems to work. The portMappings on renderer might not be required, but I'm still working out the kinks.
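A quick sanity check of the reverse path, assuming shell access to the instance and that wget exists in the renderer image (Grafana serves a health endpoint at /api/health):

# from inside renderer, "grafana" now resolves to 172.17.0.1
docker exec renderer wget -qO- http://grafana:3000/api/health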
I have two microservice images, a Go REST API and a React frontend, which are both in my AWS ECR. I'll be using Elastic Beanstalk. Since I believe they'll be on the same machine, I configured the React app to fetch data from the API on localhost:8080. Below are the Dockerfiles for both. They worked in my dev environment, so I pushed them to my ECR.
Dockerfile for Golang Rest API
FROM golang
ADD . /go/src/vsts/project/gorestapi
WORKDIR /go/src/vsts/project/gorestapi
RUN go get github.com/golang/dep/cmd/dep
RUN dep ensure
RUN go install .
ENTRYPOINT /go/bin/gorestapi
EXPOSE 8080
Dockerfile for React Frontend App
FROM node:8.4.0-alpine
WORKDIR /usr/src/app
ENV NPM_CONFIG_LOGLEVEL warn
RUN npm i -g serve
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD serve -s build/app -p 3000
I don't know if a volume still needs to be declared, or mount points. All I know is that I need to set the Dockerrun JSON version to 2 and provide the name, image and port mappings in the container definitions. Most of the samples are confusing, since none of them shows an app from a private repo, and they all have those volumes, mountPoints and links that I don't really understand how to use. I tried the one below, but it did not work.
Edit: I changed Dockerrun.aws.json, expecting the volume host sourcePath to be the path on my machine. Please correct me if I'm wrong.
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "webapp",
      "host": {
        "sourcePath": "/webapp"
      }
    },
    {
      "name": "gorestapi",
      "host": {
        "sourcePath": "/gorestapi"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "gorestapi",
      "image": "<acctId>.dkr.ecr.us-east-1.amazonaws.com/dev/gorestapi:latest",
      "essential": true,
      "memory": 512,
      "portMappings": [
        {
          "hostPort": 8080,
          "containerPort": 8080
        }
      ],
      "links": [
        "webapp"
      ],
      "mountPoints": [
        {
          "sourceVolume": "gorestapi",
          "containerPath": "/go/src/vsts/project/gorestapi"
        }
      ]
    },
    {
      "name": "webapp",
      "image": "<acctId>.dkr.ecr.us-east-1.amazonaws.com/dev/webapp:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [
        {
          "hostPort": 3000,
          "containerPort": 3000
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "webapp",
          "containerPath": "/usr/src/app"
        }
      ]
    }
  ]
}
Did I specify the correct values for the paths?
Yes, you still need to define the volumes and their mount points if you want to use them. The most comprehensive guide I know of for setting up a multi-container environment is straight out of the official docs, and it includes a private-repository section. Hoping that helps >> https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html#create_deploy_docker_v2config_dockerrun
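One thing worth noting: since the application code is baked into the images at build time, mounting (mostly empty) host paths over those same directories would actually hide it. A minimal sketch without volumes may be all that's needed here (untested; the <acctId> placeholder is kept from the question):

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "gorestapi",
      "image": "<acctId>.dkr.ecr.us-east-1.amazonaws.com/dev/gorestapi:latest",
      "essential": true,
      "memory": 512,
      "portMappings": [
        {
          "hostPort": 8080,
          "containerPort": 8080
        }
      ]
    },
    {
      "name": "webapp",
      "image": "<acctId>.dkr.ecr.us-east-1.amazonaws.com/dev/webapp:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [
        {
          "hostPort": 3000,
          "containerPort": 3000
        }
      ]
    }
  ]
}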
Also consider looking into EC2 Container Service (ECS). Elastic Beanstalk uses it behind the scenes to run multi-container environments, but it can be easier to run them directly on ECS.
I am using Elastic Beanstalk with a single Docker container. I am using DataDog (statsd) to push metrics from the Docker container. I have a datadog-agent running on the host machine, which is effectively a statsd server listening on port 8125. The issue I am facing is reaching that agent from the container.
What I have tried is:
EXPOSE 8125/udp in the Dockerfile, which obviously didn't work.
Added Dockerrun.aws.json with:
{
  "AWSEBDockerrunVersion": "1",
  "portMappings": [
    {
      "hostPort": 8125,
      "containerPort": 8125
    }
  ]
}
But the issue is that portMappings seems to have been added in v2, which is not available for single-container Docker.
Thanks in Advance
Try:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "8125"
    }
  ]
}
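If mapping the port still doesn't get traffic to the agent, another angle (untested here) is to point the app at the Docker bridge gateway on the host side, 172.17.0.1 by default, as in the grafana/renderer example above. Delivery can be spot-checked from inside the container with netcat:

# send a test counter metric over UDP to the agent on the host
echo -n "custom.test.metric:1|c" | nc -u -w1 172.17.0.1 8125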