How to run the AWS CloudWatch agent inside a Docker container?

I've been trying to run the CloudWatch agent inside a container, but it fails with the error "unknown init system".
I'm aware of the Docker logging driver, but I'm trying to use the AWS agent if possible.

You can use the CloudWatch agent in Docker. Here is the official Docker image, amazon/cloudwatch-agent:
docker run -it --rm amazon/cloudwatch-agent
https://hub.docker.com/r/amazon/cloudwatch-agent
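To run the agent non-interactively with your own configuration, a minimal sketch, assuming the agent config path /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json used by the sample Dockerfile below and a config.json in the current directory (the region is a placeholder to adapt):
docker run -d --name cloudwatch-agent \
  -e AWS_REGION=us-east-1 \
  -v "$(pwd)/config.json":/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json \
  amazon/cloudwatch-agent
The container also needs AWS credentials, e.g. from an instance profile, a task role, or mounted credentials.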
Building Your Own CloudWatch Agent Docker Image
You can build your own CloudWatch agent Docker image by referring to
the Dockerfile located at
https://github.com/aws-samples/amazon-cloudwatch-container-insights/blob/master/cloudwatch-agent-dockerfile/Dockerfile.
FROM debian:latest as build

# Install the certificates and curl needed to download the agent package
RUN apt-get update && \
    apt-get install -y ca-certificates curl && \
    rm -rf /var/lib/apt/lists/*

# Download and install the agent, then strip tools that are not needed
# inside a container (config wizard, ctl script, config downloader)
RUN curl -O https://s3.amazonaws.com/amazoncloudwatch-agent/debian/amd64/latest/amazon-cloudwatch-agent.deb && \
    dpkg -i -E amazon-cloudwatch-agent.deb && \
    rm -rf /tmp/* && \
    rm -rf /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard && \
    rm -rf /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl && \
    rm -rf /opt/aws/amazon-cloudwatch-agent/bin/config-downloader

# Copy only the agent and CA certificates into a minimal final image
FROM scratch
COPY --from=build /tmp /tmp
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=build /opt/aws/amazon-cloudwatch-agent /opt/aws/amazon-cloudwatch-agent

ENV RUN_IN_CONTAINER="True"
ENTRYPOINT ["/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent"]
See the AWS documentation section ContainerInsights-build-docker-image for details.
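To build and run the image from that Dockerfile, roughly (the image tag is a placeholder):
docker build -t my-cloudwatch-agent .
docker run -d --name cloudwatch-agent my-cloudwatch-agent
As with the official image, supply an agent configuration and AWS credentials at run time.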

You can make use of the Amazon ECS container agent.
The Amazon ECS container agent allows container instances to connect
to your cluster. The Amazon ECS container agent is included in the
Amazon ECS-optimized AMIs, but you can also install it on any Amazon
EC2 instance that supports the Amazon ECS specification. The Amazon
ECS container agent is only supported on Amazon EC2 instances.
Installing the Amazon ECS Container Agent
Make sure that the EC2 instance has an IAM role that allows access to ECS before you run the init command.
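On Amazon Linux 2, for example, installing the agent and joining a cluster looks roughly like this (a sketch based on the AWS install docs; the cluster name is a placeholder):
# Point the agent at your cluster (defaults to "default")
echo "ECS_CLUSTER=my-cluster" | sudo tee -a /etc/ecs/ecs.config
# Install and start the ECS agent
sudo amazon-linux-extras install -y ecs
sudo systemctl enable --now --no-block ecs.service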

Related

How to run codedeploy agent installation script in AWS ECS?

I have an AWS ECS cluster defined with a service that uses the Replica service type. It creates an EC2 instance with a Docker container. I can access it through the browser and all this stuff...
The issue is that I have to connect through ssh to the EC2 instance and run:
sudo yum update -y
sudo yum install -y ruby
sudo yum install -y wget
cd /home/ec2-user
wget https://aws-codedeploy-eu-west-1.s3.eu-west-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
It installs the CodeDeploy agent, so I can connect GitHub to the instance and CI/CD code.
I would like to set this up automatically on every server that the ECS definition creates. For example, if I stop the EC2 instance, the cluster raises a new EC2 instance, which doesn't have this agent...
I saw that I should configure my Amazon ECS container instance with user data, but I am not able to find this option, and I am not quite sure whether it runs on the EC2 instance or in the Docker container itself.
Based on the comments, the solution was to use a Launch Template or Launch Configuration.
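A user-data script in the Launch Template can run the same installation steps on every new instance. A minimal sketch, reusing the commands from the question (the S3 region is from the question; user data runs as root, so sudo is unnecessary):
#!/bin/bash
# Install the CodeDeploy agent on first boot
yum update -y
yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-eu-west-1.s3.eu-west-1.amazonaws.com/latest/install
chmod +x ./install
./install auto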

Docker compose commands are not working from user data?

I'm trying to install Drupal on an AWS EC2 instance by using Terraform. I have created a script file in which I define the Docker installation and the S3 location of docker-compose.yml, and after that I run the docker-compose up -d command in the script. I call the script file in the user data. Everything works fine on the new AWS EC2 instance except that the Docker containers do not start up. The docker-compose file has been downloaded to the instance, but no containers are actively running. If I then log in to the instance and run the command, both the Drupal and MySQL containers start and the Drupal website is in an active state, but the same command from the script is not working.
#! /bin/bash
sudo yum update -y
sudo yum install -y docker
sudo usermod -a -G docker ec2-user
sudo curl -L https://github.com/docker/compose/releases/download/1.21.0/docker-compose-`uname -s`-`uname -m` | sudo tee /usr/local/bin/docker-compose > /dev/null
sudo chmod +x /usr/local/bin/docker-compose
sudo service docker start
sudo chkconfig docker on
aws s3 cp s3://xxxxxx/docker-compose.yml /home/ec2-user/
docker-compose up -d
I had the same issue. Solved it with:
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
User-data scripts run as root with a minimal PATH that typically does not include /usr/local/bin, so the docker-compose binary is not found during boot; the symlink puts it on the default PATH.

Is it possible to SSH into FARGATE managed container instances?

I used to connect to EC2 container instances following these steps, https://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance-connect.html, and I am wondering how I can connect to FARGATE-managed container instances instead.
Looking at that issue on GitHub, https://github.com/aws/amazon-ecs-cli/issues/143, I think it's not possible to docker exec from a remote host into a container on ECS Fargate. You can try to run an SSH daemon and your main process in one container, e.g. using systemd (https://docs.docker.com/config/containers/multi-service_container/), and connect to your container using SSH, but generally that's not a good idea in the container world.
Starting from the middle of March 2021, executing a command in an ECS container is possible when the container runs in AWS Fargate. Check the blog post Using Amazon ECS Exec to access your containers on AWS Fargate and Amazon EC2.
Quick checklist:
Enable command execution in the service.
Make sure to use the latest platform version in the service.
Add the ssmmessages permissions to the task IAM role (the role the task runs as, not the task execution role).
Force a new deployment for the service so that tasks run with command execution enabled; see the sketch after this checklist.
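A sketch of the enable-and-redeploy step with the AWS CLI (cluster and service names are placeholders):
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --enable-execute-command \
  --force-new-deployment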
AWS CLI command to run bash inside the container:
aws ecs execute-command \
--region eu-west-1 \
--cluster [cluster-name] \
--task [task id, for example 0f9de17a6465404e8b1b2356dc13c2f8] \
--container [container name from the task definition] \
--command "/bin/bash" \
--interactive
The setup explained above should allow you to run the /bin/bash command and get an interactive shell into the container running on AWS Fargate. Please check the documentation Using Amazon ECS Exec for debugging for more details.
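For the ssmmessages item from the checklist, a minimal sketch of an inline policy attached to the task role (the role and policy names are placeholders; the actions are the ones ECS Exec's SSM channel uses):
cat > ecs-exec-ssm.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    }
  ]
}
EOF
aws iam put-role-policy --role-name my-task-role \
  --policy-name ecs-exec-ssm --policy-document file://ecs-exec-ssm.json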
It is possible, but not straightforward.
In short: install SSH, don't expose the SSH port outside the VPC, add a bastion host, and SSH through the bastion.
A little bit more detail:
Spin up sshd with password-less authentication (Docker instructions).
Fargate task: expose port 22.
Configure your VPC (instructions).
Create an EC2 bastion host.
From there, SSH into your task's IP address.
Enable execute command on the service:
aws ecs update-service --cluster <Cluster> --service <Service> --enable-execute-command
Connect to the Fargate task:
aws ecs execute-command --cluster <Cluster> \
--task <taskId> \
--container <ContainerName> \
--interactive \
--command "/bin/sh"
Ref - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html
Here is an example of adding SSH/sshd to your container to gain direct access:
# Dockerfile
FROM alpine:latest

# Install the OpenSSH server
RUN apk update && apk add --no-cache openssh

COPY sshd_config /etc/ssh/sshd_config

# Collect all public keys into root's authorized_keys
RUN mkdir -p /root/.ssh/
COPY authorized-keys/ /root/.ssh/authorized-keys/
RUN cat /root/.ssh/authorized-keys/*.pub > /root/.ssh/authorized_keys
RUN chown -R root:root /root/.ssh && chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys

COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
RUN ln -s /usr/local/bin/docker-entrypoint.sh /

# We have to set a password to be let in for root - MAKE THIS STRONG.
RUN echo 'root:THEPASSWORDYOUCREATED' | chpasswd

EXPOSE 22
ENTRYPOINT ["docker-entrypoint.sh"]
# docker-entrypoint.sh
#!/bin/sh
if [ "$SSH_ENABLED" = true ]; then
    if [ ! -f "/etc/ssh/ssh_host_rsa_key" ]; then
        # generate fresh rsa key
        ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
    fi
    if [ ! -f "/etc/ssh/ssh_host_dsa_key" ]; then
        # generate fresh dsa key
        ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
    fi

    # prepare run dir
    if [ ! -d "/var/run/sshd" ]; then
        mkdir -p /var/run/sshd
    fi

    /usr/sbin/sshd

    # make the container's environment visible to SSH sessions
    env | grep '_\|PATH' | awk '{print "export " $0}' >> /root/.profile
fi

exec "$@"
More details here: https://github.com/jenfi-eng/sshd-docker
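To actually get in, build the image and run it with SSH enabled; a quick sketch (the image name and host port are placeholders):
docker build -t my-sshd-app .
docker run -d -e SSH_ENABLED=true -p 2222:22 my-sshd-app
ssh -p 2222 root@localhost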

CoreOS AWS userdata "docker run" on startup won't run

I'm trying to setup CoreOS on AWS to run specific commands on boot to download our DCOS cluster's info tarball and run scripts contained within it. These scripts help add the instance as an "agent" to our DC/OS cluster.
However, I don't seem to be able to get the docker run commands to run. I do see that the user data creates tee's output file (which remains empty) and the /opt/dcos_install_tmp/ directory (which also remains empty).
The docker run commands here download an "awscli" container, fetch packages from S3 (using IAM instance profile credentials), and spit it out to the CoreOS file system.
Installing AWS CLI on CoreOS didn't seem straightforward (there's no package manager, no python), so I had to resort to this.
If I login to the instance and run the same commands by putting them in a script, I have absolutely no issues.
I check "journalctl --identifier=coreos-cloudinit" and found nothing to indicate issues. It just reports:
15:58:34 Parsing user-data as script
There is no "boot" log file for CoreOS in /var/log/ unlike in other AMIs.
I'm really stuck right now and would love some nudges in the right direction.
Here's my userdata (which I post as text during instance boot):
#!/bin/bash
/usr/bin/docker run -it --name cli governmentpaas/awscli aws s3 cp s3://<bucket>/dcos/dcos_preconfig.sh /root && /usr/bin/docker cp cli:/root/dcos_preconfig.sh . && /usr/bin/docker rm cli | tee -a /root/userdatalog.txt
/usr/bin/docker run -it --name cli governmentpaas/awscli aws s3 cp s3://<bucket>/dcos/dcos-install.tar /root && /usr/bin/docker cp cli:/root/dcos-install.tar . && /usr/bin/docker rm cli | tee -a /root/userdatalog.txt
sudo mkdir -p /opt/dcos_install_tmp
sudo tar xf dcos-install.tar -C /opt/dcos_install_tmp | tee -a /root/userdatalog.txt
sudo /bin/bash /opt/dcos_install_tmp/dcos_install.sh slave | tee -a /root/userdatalog.txt
Remove the -t flag from the docker run commands.
I had a similar problem: DigitalOcean: How to run Docker command on newly created Droplet via Java API
The problem ended up being the -t flag in the docker run command. It doesn't work because there is no TTY attached when user data runs. Remove the flag and it runs fine.
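Applied to the first command from the question, that would look like this (a sketch; the <bucket> placeholder is from the question):
/usr/bin/docker run -i --name cli governmentpaas/awscli \
    aws s3 cp s3://<bucket>/dcos/dcos_preconfig.sh /root && \
  /usr/bin/docker cp cli:/root/dcos_preconfig.sh . && \
  /usr/bin/docker rm cli | tee -a /root/userdatalog.txt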

How to automatically run consul agent and registrator container on scaling ECS instance

I have created a Consul cluster of three nodes. Now I need to run Consul agent and Registrator containers, and join the Consul agent to one of the Consul server nodes, whenever I bring up or scale out an ECS instance on which I'm running my microservices.
I have automated the rest of the deployment process with rolling updates, but I have to manually start the Consul agent and Registrator whenever I scale out an ECS instance.
Does anyone have an idea how we can automate this?
Create a task definition with two containers, consul-client and registrator.
Run aws ecs start-task in your user data.
This AWS post focuses on this.
Edit: Since you mentioned an ECS instance, I assume you already have the necessary IAM role set on the instance.
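A rough user-data sketch of that approach, assuming an ECS-optimized AMI where the agent starts on boot (the cluster and task-definition names are placeholders; the agent's introspection endpoint on port 51678 reports this instance's container-instance ARN):
#!/bin/bash
# Wait until the ECS agent is up and the introspection endpoint answers
until curl -s http://localhost:51678/v1/metadata > /dev/null; do sleep 5; done
# Extract this instance's container-instance ARN from the metadata JSON
ARN=$(curl -s http://localhost:51678/v1/metadata \
  | sed -n 's/.*"ContainerInstanceArn":"\([^"]*\)".*/\1/p')
# Place the consul-agent/registrator task on this specific instance
aws ecs start-task --cluster my-cluster \
  --task-definition consul-registrator \
  --container-instances "$ARN" --region eu-west-1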
Create an ELB in front of your Consul servers, or use an Elastic IP, so that the address doesn't change.
Then in the user data:
#!/bin/bash
consul_host=consul.mydomain.local
# start the agent; use -d rather than -it, since user data has no TTY (see the -t caveat above)
docker run -d --restart=always -p 8301:8301 -p 8301:8301/udp -p 8400:8400 -p 8500:8500 -p 53:53/udp \
  -v /opt/consul:/data -v /var/run/docker.sock:/var/run/docker.sock -v /etc/consul:/etc/consul \
  -h $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --name consul-agent progrium/consul \
  -join $consul_host -advertise $(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
# start the registrator
docker run -d --restart=always -v /var/run/docker.sock:/tmp/docker.sock \
  -h $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --name consul-registrator \
  gliderlabs/registrator:latest -ip $(curl -s http://169.254.169.254/latest/meta-data/local-ipv4) \
  consul://$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4):8500
Note: this snippet assumes your setup is all locally reachable, etc. It's adapted from the CloudFormation templates in this blog post and this link.