I'm running two ECS services on EC2, built from images in ECR. One image is Fluentd and the other is the normal backend service.
When I set the log driver to fluentd in ECS and provide the Fluentd address in the console, I get:
Cannot start service backend: failed to initialize logging driver: dial tcp:*ip-address*:22422: i/o timeout
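For reference, the log configuration in the task definition looks roughly like this (a CloudFormation-style sketch; the Fluentd host and image name are placeholders):

TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ContainerDefinitions:
      - Name: backend
        Image: account-id.dkr.ecr.region.amazonaws.com/backend:latest   # placeholder image
        LogConfiguration:
          LogDriver: fluentd
          Options:
            fluentd-address: "fluentd-host:22422"   # placeholder host; must be reachable when the container starts
            tag: backend

As I understand it, a dial i/o timeout at container start generally means the Docker daemon on the instance cannot reach that address at all, e.g. the Fluentd container isn't publishing that port or a security group is blocking it.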
I'm trying to set up remote write from a Prometheus server inside AWS EKS to Amazon Managed Prometheus.
I've set up the remote write like this:
serviceAccounts:
  server:
    name: amp-iamproxy-ingest-service-account
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::accountID:role/amp-iamproxy-ingest-role
server:
  remoteWrite:
    - url: https://aps-workspaces.region.amazonaws.com/workspaces/ws-id/api/v1/remote_write
      sigv4:
        region: $region
      queue_config:
        max_samples_per_send: 1500
        max_shards: 200
        capacity: 6000
And I can confirm by checking the Prometheus server logs on EKS that it connects just fine; there are no errors related to the remote write operation. But when I check the data on the remote Prometheus server (Amazon Managed Prometheus), I am not getting every metric that is being scraped by the local Prometheus server. For example:
When I query the container_cpu_usage_seconds_total metric on the local Prometheus server, I get back scraped data just fine. When I do the same on Amazon Managed Prometheus, I get no data at all; it's blank.
But when I query kube_pod_container_status_running, I get scraped data back from both Prometheus servers, the local one and the remote write destination (Amazon Managed Prometheus).
Has anyone had an issue like this before, where Prometheus only remote writes some metrics to the destination server?
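One thing worth checking while debugging (a hedged suggestion; exact metric names can differ slightly between Prometheus versions) is the local server's own remote-write queue metrics, to see whether samples are failing or being dropped, e.g. by a write_relabel_configs rule, before they ever reach AMP:

# rate of samples that failed to send to the remote endpoint
rate(prometheus_remote_storage_samples_failed_total[5m])

# rate of samples dropped from the remote-write queue before sending
rate(prometheus_remote_storage_samples_dropped_total[5m])

If both stay at zero while a metric is missing remotely, the gap is more likely on the ingest side (for example AMP limits) than in the local queue.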
I am trying to deploy a web application on AWS Fargate as well as AWS Elastic Beanstalk.
My docker-compose file looks like this (just an example; please focus on the ports):
services:
  application-gateway:
    image: "gcr.io/docker-public/application:latest"
    container_name: application-name
    ports:
      - "443:9443"
      - "8443:8443"
**Issue with AWS Fargate**
I need to know how to map these ports. Bridge network mode doesn't get enabled, and I can only see the container port. How do I change the host port?
I can see that the public Docker image gets deployed to Fargate, but how do I access the application's DNS URL?
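On Fargate the task runs in awsvpc network mode, where the host port must equal the container port, so a mapping like "443:9443" can't be expressed there. A minimal compose sketch of what Fargate can accept (assuming the 443-to-9443 translation is moved to a load balancer listener instead):

services:
  application-gateway:
    image: "gcr.io/docker-public/application:latest"
    ports:
      - "9443:9443"   # host port must equal container port in awsvpc mode
      - "8443:8443"

The application is then reached via the task's public IP or, more typically, via a load balancer whose 443 listener forwards to target port 9443; Fargate tasks don't get a DNS name on their own.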
**Issue faced in AWS Elastic Beanstalk**
I was able to deploy the application in a single-instance environment, but I am unable to deploy it in a load-balanced environment. Again I suspect the issue is with the ports on the load balancer, even though I have opened these ports in the security group.
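In a load-balanced Beanstalk environment, the listener and the instance process port are configured separately from the container port mapping. A hedged sketch of what I believe the .ebextensions option settings would look like (the certificate ARN is a placeholder):

option_settings:
  aws:elasticbeanstalk:environment:process:default:
    Port: '9443'        # port the instances actually serve on
    Protocol: HTTPS
  aws:elbv2:listener:443:
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:region:account-id:certificate/cert-id   # placeholder ARN
    DefaultProcess: default

With something like this, the ALB listens on 443 and forwards to the process on 9443, so opening ports in the security group alone isn't enough.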
Thanks,
I am deploying the following relatively simple docker-compose.yml file on AWS ECS via the Docker CLI.
It uses the Tomcat server image, which can also be replaced by any other container that does not exit on startup.
services:
  tomcat:
    image: tomcat:9.0
    command: catalina.sh run
    ports:
      - target: 8080
        published: 8080
        x-aws-protocol: http
Commands used:
docker context use mycontextforecs
docker compose up
The cluster, service, task, target group, security group, and application load balancer are automatically created as expected.
But the security group created by AWS ECS allows inbound traffic on ALL ports by default, instead of only the exposed 8080.
Following is a screenshot of the security group, which also has the comment "tomcat:8080/ on default network", but the port range is "All" instead of 8080.
I've read the following and some other Stack Overflow links but could not find an answer:
https://docs.docker.com/cloud/ecs-compose-features/
https://docs.docker.com/cloud/ecs-architecture/
https://docs.docker.com/cloud/ecs-integration/
I understand that the default "Fargate" launch type gets a public IP assigned.
But why does ECS allow traffic on all ports?
If I add another service to the docker-compose file, the default security group gets shared between both of them.
As a result, anyone can telnet into the ports exposed by either service because of this security group rule.
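In case it's useful to answerers: the workaround I'm experimenting with (hedged, not verified) is to inspect the generated template with docker compose convert and then narrow the ingress rule with an x-aws-cloudformation overlay in the compose file. The logical ID below is a guess and should be copied from the convert output:

x-aws-cloudformation:
  Resources:
    Default8080Ingress:        # guessed logical ID; take the real one from `docker compose convert`
      Properties:
        IpProtocol: tcp
        FromPort: 8080         # restrict ingress to the published port only
        ToPort: 8080
        CidrIp: 0.0.0.0/0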
I'm running the SonarQube Docker image on AWS ECS (EC2 instances). The container is up and running and listening on port 9000, with the logs below:
2021.03.17 15:50:55 INFO app[][o.s.a.SchedulerImpl] Process[web] is up
2021.03.17 15:50:55 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='ce', ipcIndex=3, logFilenamePrefix=ce]] from [/opt/sonarqube]: /opt/java/openjdk/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/temp -XX:-OmitStackTraceInFastThrow --add-opens=java.base/java.util=ALL-UNNAMED -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/common/*:/opt/sonarqube/lib/jdbc/postgresql/postgresql-42.2.17.jar org.sonar.ce.app.CeServer /opt/sonarqube/temp/sq-process3880305865950565845properties
2021.03.17 15:51:01 INFO app[][o.s.a.SchedulerImpl] Process[ce] is up
2021.03.17 15:51:01 INFO app[][o.s.a.SchedulerImpl] SonarQube is up
I'm using awsvpc network mode. I'm using an application load balancer, and as per the screenshot below the target groups are healthy, but I still cannot access SonarQube using the load balancer URL.
Error:
Please advise, thanks.
ALB security group screenshot:
Your ALB inbound rule only allows access from the listed security group, which blocks your attempt to reach the load balancer URL.
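A minimal sketch of how the ALB security group could be opened up instead (the group ID is a placeholder; narrow the CIDR if the ALB shouldn't be public):

# allow anyone to reach the ALB's HTTP listener (placeholder group ID)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 \
  --cidr 0.0.0.0/0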
I am unable to connect my Docker worker to the Docker Swarm manager.
I have created multiple AWS EC2 instances and made one of them a manager with docker swarm init --listen-addr 0.0.0.0:2377, and I am trying to join it from the other EC2 instances as workers with docker swarm join 0.0.0.0:2377, but it gives me an error:
"Error response from daemon: Timeout was reached before node joined. The attempt to join the swarm will continue in the background."
I need my Docker Swarm manager to list all the nodes, including the manager and workers, via docker node ls.
To resolve this problem I needed to open the respective ports on both the worker and manager instances.
I discovered some information while resolving this question:
TCP port 2377 is the default port used for cluster management communication, so add a custom TCP rule for port 2377 in the security group of your AWS EC2 instances.
TCP port 2376 is for secure Docker client communication. This port is required for Docker Machine to work; Docker Machine is used to orchestrate Docker hosts.
TCP port 2377 is used for communication between the nodes of a Docker Swarm cluster. It only needs to be opened on manager nodes.
TCP and UDP port 7946 are for communication among nodes (container network discovery).
UDP port 4789 is for overlay network traffic (container ingress networking).
Kindly note: aside from those ports, port 22 (for SSH traffic) and any other ports needed for specific services on the cluster have to be open. A sketch of the corresponding security group rules follows below.
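A minimal AWS CLI sketch of opening these ports (the security group ID and CIDR are placeholders; in practice, scope the source to the swarm's own security group rather than the whole VPC):

SG=sg-0123456789abcdef0   # placeholder security group ID

aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 2377 --cidr 10.0.0.0/16   # swarm management (managers only)
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 7946 --cidr 10.0.0.0/16   # node discovery
aws ec2 authorize-security-group-ingress --group-id $SG --protocol udp --port 7946 --cidr 10.0.0.0/16   # node discovery
aws ec2 authorize-security-group-ingress --group-id $SG --protocol udp --port 4789 --cidr 10.0.0.0/16   # overlay network traffic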
You need to use a real IP address in the docker swarm join command.
"0.0.0.0" is not a real IP address; it's an alias for "all (local) IP addresses", not something you can connect to.
1. Run this command on the manager node:

docker swarm join-token worker

2. Then run the command obtained from the step above on the worker node.
Example:

root@ubuntu:~# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0akniaryx9xg8mmb08rbd42kwntigfkyk33vt7ac0wrehn58mk-5voo7jfl3kl40yl4cmvf16lgt 10.0.10.4:2377

root@ubuntu:~#

Run on the worker node:

docker swarm join --token SWMTKN-1-0akniaryx9xg8mmb08rbd42kwntigfkyk33vt7ac0wrehn58mk-5voo7jfl3kl40yl4cmvf16lgt 10.0.10.4:2377
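Once the worker has joined, the node list can be verified from the manager (workers cannot list nodes):

# run on the manager node
docker node ls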