I have a very simple docker-compose setup for locust. It consists of one master (which is basically a web server providing the UI for the client) and one slave (a client that actually performs the load testing, which is what locust is for).
version: "3"
services:
locust-master:
image: chapkovski/locust
ports:
- "80:80"
environment:
LOCUST_MODE: master
locust-slave:
image: chapkovski/locust
"links": [
"locust-master"
]
environment:
LOCUST_MODE: slave
LOCUST_MASTER_HOST: locust-master
LOCUST_MASTER_PORT: 5557
Everything works on AWS ECS. But now I would like to have multiple slaves connected to the same master, and I can't figure out how to do this: when I try to scale up the tasks, I get an error because the ports are already in use. That is expected, since scaling up this task definition makes the ECS agent start several masters on the same port.
When I try to split the master and slave into two separate tasks, so that I can scale up only the 'slave' one, they can no longer communicate and the master does not see any clients.
So what is the correct way to scale up only the 'client' part if, say, I need 20 clients and one master?
You cannot scale services that use a predefined host port; if you do, you will get an error that the ports are already in use.
You have two options to resolve this issue.
Option 1: run one service per EC2 instance (not ideal, but a workaround).
Option 2: dynamic port binding.
With the second option, the ECS agent assigns a dynamic host port that does not conflict with any occupied port, so you can scale to as many tasks as you want.
You need to set the host port to 0 in the port mappings section of the task definition.
understanding-dynamic-port-mapping-in-amazon-ecs-with-application-load-balancer
"portMappings": [
{
"containerPort": 3000,
"hostPort": 0
}
]
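For the locust setup specifically, another way around the clash is to note that only the master publishes a fixed host port; if master and slave are split into separate services, only the slave service needs to be scaled. Below is a minimal compose sketch under that assumption (the replicas value is just an example, and it assumes the name locust-master resolves from the slave tasks, e.g. via links in the same task definition or ECS service discovery):
services:
  locust-master:
    image: chapkovski/locust
    ports:
      - "80:80"                          # only the master needs a fixed host port
    environment:
      LOCUST_MODE: master
  locust-slave:
    image: chapkovski/locust
    environment:
      LOCUST_MODE: slave
      LOCUST_MASTER_HOST: locust-master  # assumes this name resolves (links or service discovery)
      LOCUST_MASTER_PORT: 5557
    deploy:
      replicas: 20                       # scale only the slaves; they publish no host ports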
Related
I am deploying the following relatively simple docker-compose.yml file on AWS ECS via the Docker CLI.
It uses the Tomcat server image, which can also be replaced by any other container that does not exit on startup.
services:
tomcat:
image: tomcat:9.0
command: catalina.sh run
ports:
- target: 8080
published: 8080
x-aws-protocol: http
Commands used
docker context use mycontextforecs
docker compose up
The cluster, services, tasks, target groups, security groups, and application load balancer are automatically created as expected.
But the security group created by AWS ECS allows inbound traffic on ALL ports by default instead of only the exposed 8080.
The security group rule has the description "tomcat:8080/ on default network", but its port range is "All" instead of 8080.
I've read the following and some other Stack Overflow links but could not find an answer.
https://docs.docker.com/cloud/ecs-compose-features/
https://docs.docker.com/cloud/ecs-architecture/
https://docs.docker.com/cloud/ecs-integration/
I understand that the default "Fargate" launch type gets a public IP assigned.
But why does ECS allow traffic on all ports?
If I add another service to the docker-compose file, the default security group is shared between both of them.
As a result, anyone can telnet into the port exposed by the service due to this security group rule.
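One possible workaround, sketched from the ECS integration docs linked above (where compose networks are mapped to EC2 security groups): declare the default network as an existing, manually tightened security group, so the generated wide-open one is not attached. The security group ID below is a placeholder for one you create yourself that only allows inbound 8080:
services:
  tomcat:
    image: tomcat:9.0
    command: catalina.sh run
    ports:
      - target: 8080
        published: 8080
        x-aws-protocol: http

networks:
  default:
    external: true
    name: sg-0123456789abcdef0   # placeholder: pre-created security group allowing only inbound 8080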
My Fargate task keeps stopping after it's started and doesn't output any logs (the awslogs driver is selected).
The container does start up and stay running when I run it with Docker locally.
Docker-compose file:
version: '2'
services:
asterisk:
build: .
container_name: asterisk
restart: always
ports:
- 10000-10099:10000-10099/udp
- 5060:5060/udp
Dockerfile:
FROM debian:10.7
RUN {stuff-that-works-is-here}
# Keep Asterisk running in the foreground
ENTRYPOINT ["asterisk", "-f"]
# SIP port
EXPOSE 5060/udp
# RTP ports
EXPOSE 10000-10099/udp
My task execution role has full CloudWatch access for debugging.
Click on the ECS task instance and expand the container section; the error should be shown there.
The awslogs driver alone is not enough.
Unfortunately, Fargate doesn't create the log group for you unless you tell it to.
See Creating a log group at https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
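If the task is driven from the compose file above, here is a minimal sketch of a logging section that asks the awslogs driver to create the group (the region and group name are placeholders, and this assumes your deployment path passes these options through to the task definition):
services:
  asterisk:
    build: .
    restart: always
    logging:
      driver: awslogs
      options:
        awslogs-region: us-east-1      # placeholder region
        awslogs-group: /ecs/asterisk   # placeholder log group name
        awslogs-create-group: "true"   # create the group instead of failing when it is missing
The linked AWS page also notes that the task execution role needs permission to create log groups (logs:CreateLogGroup) for this option to work.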
I had a similar problem, and the cause was the Health Check.
ECS doesn't have health checks for UDP, so when you open a UDP port and deploy with Docker (docker compose), it creates a health check pointing to a TCP port. Since no TCP port was open in that range, the container kept restarting because of the failing health check.
I had to add a custom Resource to docker-compose:
x-aws-cloudformation:
Resources:
AsteriskUDP5060TargetGroup:
Type: "AWS::ElasticLoadBalancingV2::TargetGroup"
Properties:
HealthCheckProtocol: TCP
HealthCheckPort: 8088
Basically, the health check for a UDP port points to a TCP port. It's a "hack" to work around this problem when the deployment is done with Docker.
I have a very simple docker-compose file for locust (a Python package for load testing). It starts a 'master' service and a 'slave' service. Everything works perfectly locally, but when I deploy it to AWS ECS the 'slave' can't find the master.
services:
my-master:
image: chapkovski/locust
ports:
- "80:80"
env_file:
- .env
environment:
LOCUST_MODE: master
my-slave:
image: chapkovski/locust
env_file:
- .env
environment:
LOCUST_MODE: slave
LOCUST_MASTER_HOST: my-master
So apparently, when running on ECS, the my-slave service needs to refer to the master by something other than my-master. What's wrong here?
Everything works perfectly locally but when I deploy it to AWS ECS
a 'slave' can't find a master.
I assume the slave needs to access the master. Both must be in the same task definition to be addressed like this, or you can explore service discovery.
"links": [
"master"
]
links
Type: string array
Required: no
The link parameter allows containers to communicate with each other
without the need for port mappings. Only supported if the network mode
of a task definition is set to bridge. The name:internalName construct
is analogous to name:alias in Docker links.
Note
This parameter is not supported for Windows containers or tasks using the awsvpc network mode.
Important
Containers that are collocated on a single container instance may be
able to communicate with each other without requiring links or host
port mappings. Network isolation is achieved on the container instance
using security groups and VPC settings.
"links": ["name:internalName", ...]
container_definition_network
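As a sketch of the first suggestion above (both containers in one task definition, deployed from a single compose file with bridge networking, which is an assumption about your setup), the compose file would gain a links entry on the slave (env_file omitted for brevity):
services:
  my-master:
    image: chapkovski/locust
    ports:
      - "80:80"
    environment:
      LOCUST_MODE: master
  my-slave:
    image: chapkovski/locust
    links:
      - my-master            # only valid with bridge network mode, per the docs quoted above
    environment:
      LOCUST_MODE: slave
      LOCUST_MASTER_HOST: my-master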
I have set up a Logstash cluster in Google Cloud that sits behind a load balancer and uses autoscaling (when the load gets too high, new instances are started automatically).
Unfortunately, this does not work properly with Filebeat. Filebeat only hits the Logstash VMs that existed before Filebeat was started.
Example:
Let's assume I initially have these 3 Logstash hosts running:
Host1
Host2
Host3
When I start up Filebeat, it correctly distributes the messages to Host1, Host2 and Host3.
Now autoscaling kicks in and spins up 2 more instances, Host4 and Host5.
Unfortunately Filebeat still only sends messages to Host1, Host2 and Host3. The new hosts, Host4 and Host5, are ignored.
When I now restart Filebeat it sends messages to all 5 hosts!
So it seems Filebeat only sends messages to the hosts that were already running when Filebeat started up.
My filebeat.yml looks like this:
filebeat.inputs:
- type: log
paths:
...
...
output.logstash:
hosts: ["logstash-loadbalancer:5044", "logstash-loadbalancer:5044"]
worker: 1
ttl: 2s
loadbalance: true
I have added the same host (the load balancer) twice because I've read in the forums that otherwise Filebeat won't load-balance messages, and I can confirm that.
But load balancing still does not seem to work properly; e.g., the TTL seems not to be respected, because Filebeat keeps targeting the same connections.
Is my configuration wrong? Bug in Filebeat?
Hope you have already resolved this problem. In case you haven't, you should set pipelining to 0 as below (ttl only works if pipelining is set to 0):
output.logstash:
hosts: ["logstash-loadbalancer:5044", "logstash-loadbalancer:5044"]
worker: 1
ttl: 2s
loadbalance: true
pipelining: 0
I'm new to both Docker and AWS. I just created my very first Docker image. The application is a backend microservice with REST controllers persisting data in a MySQL database. I've manually created the database in RDS, and after running the container locally, the REST APIs work fine in Postman.
Here is the Dockerfile:
FROM openjdk:8-jre-alpine
MAINTAINER alireza.online
COPY ./target/Practice-1-1.0-SNAPSHOT.jar /myApplication/
COPY ./target/libs/ /myApplication/libs/
EXPOSE 8080
CMD ["java", "-jar", "./myApplication/Practice-1-1.0-SNAPSHOT.jar"]
Then I deployed the docker image via AWS Beanstalk. Here is the Dockerrun.aws.json:
{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "aliam/backend",
"Update": "true"
},
"Ports": [
{
"ContainerPort": "8080"
}
],
"Logging": "/var/log/nginx"
}
And everything went well.
But now I'm getting "502 Bad Gateway" in Postman when trying to call "backend.us-east-2.elasticbeanstalk.com/health".
I checked the log on Beanstalk and realized that the application has problem connecting to the RDS database:
"Could not create connection to database server. Attempted reconnect 3 times. Giving up."
What I tried to do to solve the problem:
1- I tried to assign the same security group that the EC2 instance is using to my RDS instance, but it didn't work.
2- I tried to add more inbound rules to the security group for the public and private IPs of the EC2 instance, but I was not sure which port and CIDR to define, and I couldn't make it work.
Any comment would be highly appreciated.
Here are the resources in your stack:
LoadBalancer -> EC2 instance(s) -> MySQL database
All of them need to have SecurityGroups assigned to them, allowing connections on the right ports to the upstream resources.
So, if you assign sg-1234 security group to your EC2 instances, and sg-5678 to your RDS database, there must be a rule existing in the sg-5678 allowing inbound connections from sg-1234 (no need for CIDRs, you can open a connection from SG to SG). The typical MySQL port is 3306.
Similarly, the load balancer (which is automatically created for you by Elastic Beanstalk) must have access to your EC2 instance's port 8080. Furthermore, if you want to access your instances via the "backend.us-east-2.elasticbeanstalk.com/health" domain name, the load balancer has to listen on port 80 and have a target group of your instances on port 8080.
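If you manage the groups as infrastructure-as-code, the database rule described above can be expressed roughly like this (a CloudFormation-style sketch; the group IDs are the hypothetical ones from the explanation above):
DatabaseIngressFromAppServers:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: sg-5678                  # the security group attached to the RDS instance
    IpProtocol: tcp
    FromPort: 3306                    # default MySQL port
    ToPort: 3306
    SourceSecurityGroupId: sg-1234    # the security group attached to the EC2 instances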
Hope this helps!