Docker nfs4 mount on Elastic Beanstalk - amazon-web-services

I am stuck accessing an nfs4 share inside a Docker container running on Elastic Beanstalk.
Netshare is up and running on the EC2 instance that runs the Docker container. Mounting the nfs share manually on the instance works; I can access the share on the EC2 instance without problems.
However, when I run a container and try to mount an nfs4 volume, the files do not appear inside the container.
Here is what I do. First, start the netshare daemon on the Docker host:
sudo ./docker-volume-netshare nfs
INFO[0000] == docker-volume-netshare :: Version: 0.18 - Built: 2016-05-27T20:14:07-07:00 ==
INFO[0000] Starting NFS Version 4 :: options: ''
Then, on the Docker host, start the Docker container, using -v to create a volume that mounts the nfs4 share:
sudo docker run --volume-driver=nfs -v ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com/home/ec2-user/nfs-share/templates:/home/ec2-user/xxx -ti aws_beanstalk/current-app /bin/bash
root@0a0c3de8a97e:/usr/src/app#
That worked, according to the netshare daemon:
INFO[0353] Mounting NFS volume ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com:/home/ec2-user/nfs-share/templates on /var/lib/docker-volumes/netshare/nfs/ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com/home/ec2-user/nfs-share/templates
So I try listing the contents of /home/ec2-user/xxx inside the newly launched container - but it's empty?!
root@0a0c3de8a97e:/usr/src/app# ls /home/ec2-user/xxx/
root@0a0c3de8a97e:/usr/src/app#
Strangely enough, the nfs volume has been mounted correctly on the host:
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo ls -lh /var/lib/docker-volumes/netshare/nfs/ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com/home/ec2-user/nfs-share/templates | head -3
total 924K
drwxr-xr-x 5 ec2-user ec2-user 4,0K 29. Dez 14:12 file1
drwxr-xr-x 4 ec2-user ec2-user 4,0K 9. Mai 17:20 file2
Could this be a permission problem? Both the nfs server and client are using the ec2-user user/group. The docker container is running as root.
What am I missing?
UPDATE
If I start the container in --privileged mode, mounting the nfs share directly inside the container becomes possible:
sudo docker run --privileged -it aws_beanstalk/current-app /bin/bash
mount -t nfs4 ec2-xxxx-xxxx-xxxx-xxxx.us-west-2.compute.amazonaws.com:/home/ec2-user/nfs-share/templates /mnt/
ls -lh /mnt | head -3
total 924K
drwxr-xr-x 5 500 500 4.0K Dec 29 14:12 file1
drwxr-xr-x 4 500 500 4.0K May 9 17:20 file2
Unfortunately, this does not solve the problem, because Elastic Beanstalk does not allow privileged containers (unlike ECS).
UPDATE 2
Here's another workaround (sketched as commands below):
1. Mount the nfs share on the host into /target.
2. Restart Docker on the host.
3. Run the container: docker run -it -v /target:/mnt image /bin/bash
/mnt is now populated as expected.
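For reference, a rough sketch of that workaround as shell commands on the Docker host (the hostname, export path, and image name are the placeholders from the question):
# mount the nfs share on the host into /target
sudo mkdir -p /target
sudo mount -t nfs4 ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com:/home/ec2-user/nfs-share/templates /target
# restart Docker so the daemon picks up the new mount
sudo service docker restart
# bind-mount the already-mounted directory into the container
sudo docker run -it -v /target:/mnt aws_beanstalk/current-app /bin/bash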

@sebastian's "UPDATE 2" got me on the right track (thanks @sebastian).
But for others who may reach this question via Google like I did, here's exactly how I was able to automatically mount an EFS (NFSv4) file system on Elastic Beanstalk and make it available to containers.
Add this .config file:
# .ebextensions/01-efs-mount.config
commands:
  01umount:
    command: umount /mnt/efs
    ignoreErrors: true
  02mkdir:
    command: mkdir /mnt/efs
    ignoreErrors: true
  03mount:
    command: mount -t nfs4 -o vers=4.1 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).EFS_FILE_SYSTEM_ID.efs.AWS_REGION.amazonaws.com:/ /mnt/efs
  04restart-docker:
    command: service docker stop && service docker start
  05restart-ecs:
    command: docker start ecs-agent
Then eb deploy. After the deploy finishes, SSH to your EB EC2 instance and verify that it worked:
ssh ec2-user@YOUR_INSTANCE_IP
ls -la /mnt/efs
You should see the files in your EFS filesystem. However, you still need to verify that the mount is readable and writable within containers.
sudo docker run -v /mnt/efs:/nfs debian:jessie ls -la /nfs
You should see the same file list.
sudo docker run -v /mnt/efs:/nfs debian:jessie touch /nfs/hello
sudo docker run -v /mnt/efs:/nfs debian:jessie ls -la /nfs
You should see the file list plus the new hello file.
ls -la /mnt/efs
You should see the hello file outside of the container as well.
Finally, here's the equivalent of -v /mnt/efs:/nfs in your Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "image": "AWS_ID.dkr.ecr.AWS_REGION.amazonaws.com/myimage:latest",
      "memory": 128,
      "mountPoints": [
        {
          "containerPath": "/nfs",
          "sourceVolume": "efs"
        }
      ],
      "name": "myimage"
    }
  ],
  "volumes": [
    {
      "host": {
        "sourcePath": "/mnt/efs"
      },
      "name": "efs"
    }
  ]
}

Related

Adding ec2-user to docker group

I want to install docker on my ec2 instance.
sudo yum install docker -y
I came to know that this command automatically creates a group 'docker', which has root privileges by default. So I added my ec2-user to this group to execute commands without 'sudo':
sudo usermod -aG docker ec2-user
Now this means ec2-user has root privileges.
But if I want to start the docker service, why should I use
sudo systemctl start docker
instead of simply
systemctl start docker
The above command gives me an error:
Failed to start docker.service: The name org.freedesktop.PolicyKit1 was not provided by any .service files
See system logs and 'systemctl status docker.service' for details.
Please help!
Because docker is a system service managed by systemd. Membership in the docker group only gives you access to the Docker daemon socket (which is why docker commands work without sudo); it does not grant permission to manage systemd units, which is what the PolicyKit error is about. So you must use sudo or run the command as the root user.
Alternatively, you can use
sudo systemctl enable docker
and the docker service will start automatically after every reboot.
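A minimal sketch of the distinction, assuming you log out and back in after the usermod so the new group membership takes effect:
id ec2-user                  # "docker" should now appear in the group list
docker ps                    # talks to the daemon socket, works without sudo
sudo systemctl start docker  # managing the systemd unit still requires root
sudo systemctl enable docker # start the service automatically on every boot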

Can't install CodeDeploy in Lightsail instance with Amazon Linux 2

As I wasn't particularly satisfied with only being able to use Amazon Linux (I wanted to use Amazon Linux 2 as well), I created two instances, one with each OS version, and added the same script:
mkdir /etc/codedeploy-agent/
mkdir /etc/codedeploy-agent/conf
cat <<EOT >> /etc/codedeploy-agent/conf/codedeploy.onpremises.yml
---
aws_access_key_id: ACCESS
aws_secret_access_key: SECRET
iam_user_arn: arn:aws:iam::525221857828:user/GeneralUser
region: eu-west-2
EOT
wget https://aws-codedeploy-us-west-2.s3.us-west-2.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
The difference I noted between the two is that on the Amazon Linux 2 instance the folder /etc/codedeploy-agent/conf/ has only one file,
while on Amazon Linux it has two.
Knowing this, I created a new file with the same name on the Amazon Linux 2 instance
touch codedeployagent.yml
, changed its permissions from
-rw-r--r-- 1 root root 261 Oct 2 10:43 codedeployagent.yml
to
-rwxr-xr-x 1 root root 261 Oct 2 10:43 codedeployagent.yml
, and added the same content
:log_aws_wire: false
:log_dir: '/var/log/aws/codedeploy-agent/'
:pid_dir: '/opt/codedeploy-agent/state/.pid/'
:program_name: codedeploy-agent
:root_dir: '/opt/codedeploy-agent/deployment-root'
:verbose: false
:wait_between_runs: 1
:proxy_uri:
:max_revisions: 5
and then rebooted the machine. Still, this didn't fix the issue: when I run
sudo service codedeploy-agent status
I still get
Redirecting to /bin/systemctl status codedeploy-agent.service Unit
codedeploy-agent.service could not be found.
I also ensured all the updates were in place and rebooted the machine, but that didn't work either.
Here are the details of my setup for Amazon Linux 2 instances to deploy CodeDeployGitHubDemo (based on a past question).
1. CodeDeploy agent
I used the following as UserData (you may need to adjust the region if it's not us-east-1):
#!/bin/bash
yum update -y
yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
./install auto
It did not require hard-coding credentials. The above works perfectly fine on the Amazon Linux 2 instances that I've used.
2. Instance role
Your instance needs a role suitable for CodeDeploy. I used an EC2 instance role with the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
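If you prefer the CLI, a hypothetical way to attach this policy inline to the instance role (the role name, policy name, and file name are placeholders; the JSON above is assumed to be saved as s3-read-policy.json):
aws iam put-role-policy \
    --role-name my-codedeploy-ec2-role \
    --policy-name codedeploy-s3-read \
    --policy-document file://s3-read-policy.json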
3. Deployment group
I had three test instances in an AutoScaling group called myasg.
4. Deployment
I deployed from S3 without a Load Balancer.
5. Results
No issues were found and the deployment was successful. The website was running (you need to open port 80 in the security groups).
Update
For manual installation on Amazon Linux 2, you can sudo su - to become root after login and run:
mkdir -p /etc/codedeploy-agent/conf
cat <<EOT >> /etc/codedeploy-agent/conf/codedeploy.onpremises.yml
---
aws_access_key_id: ACCESS
aws_secret_access_key: SECRET
iam_user_arn: arn:aws:iam::525221857828:user/GeneralUser
region: eu-west-2
EOT
yum install -y wget ruby
wget https://aws-codedeploy-us-west-2.s3.us-west-2.amazonaws.com/latest/install
chmod +x ./install
env AWS_REGION=eu-west-2 ./install rpm
To check its status:
systemctl status codedeploy-agent
With this you should get something like this:
● codedeploy-agent.service - AWS CodeDeploy Host Agent
   Loaded: loaded (/usr/lib/systemd/system/codedeploy-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-10-03 07:18:57 UTC; 3s ago
  Process: 3609 ExecStart=/bin/bash -a -c [ -f /etc/profile ] && source /etc/profile; /opt/codedeploy-agent/bin/codedeploy-agent start (code=exited, status=0/SUCCESS)
 Main PID: 3623 (ruby)
   CGroup: /system.slice/codedeploy-agent.service
           ├─3623 codedeploy-agent: master 3623
           └─3627 codedeploy-agent: InstanceAgent::Plugins::CodeDeployPlugin::CommandPo...
Oct 03 07:18:57 ip-172-26-8-137.eu-west-2.compute.internal systemd[1]: Starting AWS Cod...
Oct 03 07:18:57 ip-172-26-8-137.eu-west-2.compute.internal systemd[1]: Started AWS Code...
Hint: Some lines were ellipsized, use -l to show in full.
If you run
sudo service codedeploy-agent status
you'll get the following (meaning it's working as expected):
The AWS CodeDeploy agent is running as PID 3623
To start it if it's not running:
systemctl start codedeploy-agent

Docker compose commands are not working from user data?

I'm trying to install Drupal on an AWS EC2 instance using Terraform. I created a script that installs Docker, downloads the docker-compose.yml from its S3 location, and then runs docker-compose up -d, and I call that script from the user data. On the new EC2 instance everything works fine except that the Docker containers do not start: the docker-compose file is downloaded to the instance, but no containers are running. If I then log into the instance and run the command manually, both the Drupal and MySQL containers start and the Drupal website comes up, but the same command from the script does not work.
#! /bin/bash
sudo yum update -y
sudo yum install -y docker
sudo usermod -a -G docker ec2-user
sudo curl -L https://github.com/docker/compose/releases/download/1.21.0/docker-compose-`uname -s`-`uname -m` | sudo tee /usr/local/bin/docker-compose > /dev/null
sudo chmod +x /usr/local/bin/docker-compose
sudo service docker start
sudo chkconfig docker on
aws s3 cp s3://xxxxxx/docker-compose.yml /home/ec2-user/
docker-compose up -d
I had the same issue. Solved it with:
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
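The symlink most likely helps because cloud-init runs user data with a minimal PATH that typically does not include /usr/local/bin. As an alternative sketch (the compose file path is assumed from the question), call the binary by its full path:
# run compose via its absolute path so the script does not rely on PATH
/usr/local/bin/docker-compose -f /home/ec2-user/docker-compose.yml up -d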

Is it possible to SSH into FARGATE managed container instances?

I used to connect to EC2 container instances following these steps: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance-connect.html. I'm wondering how I can connect to FARGATE-managed container instances instead.
Looking at this issue on GitHub, https://github.com/aws/amazon-ecs-cli/issues/143, I think it's not possible to docker exec from a remote host into a container on ECS Fargate. You can try running an SSH daemon alongside your main process in one container, e.g. using systemd (https://docs.docker.com/config/containers/multi-service_container/), and connect to your container using SSH, but generally that's not a good idea in the container world.
Starting from mid-March 2021, executing a command in an ECS container is possible when the container runs on AWS Fargate. Check the blog post Using Amazon ECS Exec to access your containers on AWS Fargate and Amazon EC2.
Quick checklist:
Enable command execution in the service.
Make sure to use the latest platform version in the service.
Add ssmmessages:.. permissions to the task IAM role (see the policy sketch after this list).
Force a new deployment of the service so tasks run with command execution enabled.
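For reference, a hypothetical inline policy grant for the SSM Messages actions that ECS Exec uses, attached via the CLI (the role and policy names are placeholders):
aws iam put-role-policy \
    --role-name my-task-role \
    --policy-name ecs-exec-ssmmessages \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": [
          "ssmmessages:CreateControlChannel",
          "ssmmessages:CreateDataChannel",
          "ssmmessages:OpenControlChannel",
          "ssmmessages:OpenDataChannel"
        ],
        "Resource": "*"
      }]
    }'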
AWS CLI command to run bash inside the instance:
aws ecs execute-command \
    --region eu-west-1 \
    --cluster [cluster-name] \
    --task [task id, for example 0f9de17a6465404e8b1b2356dc13c2f8] \
    --container [container name from the task definition] \
    --command "/bin/bash" \
    --interactive
The setup explained above should allow you to run the /bin/bash command and get an interactive shell into the container running on AWS Fargate. Please check the documentation Using Amazon ECS Exec for debugging for more details.
It is possible, but not straightforward.
In short: install SSH, don't expose the SSH port outside the VPC, add a bastion host, and SSH through the bastion.
In a little more detail:
spin up sshd with password-less authentication (Docker instructions)
Fargate task: expose port 22
configure your VPC (instructions)
create an EC2 bastion host
from there, SSH into your task's IP address
Enable execute command on the service:
aws ecs update-service --cluster <Cluster> --service <Service> --enable-execute-command
Connect to the Fargate task:
aws ecs execute-command --cluster <Cluster> \
    --task <taskId> \
    --container <ContainerName> \
    --interactive \
    --command "/bin/sh"
Ref - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html
Here is an example of adding SSH/sshd to your container to gain direct access:
# Dockerfile
FROM alpine:latest
# install the OpenSSH server
RUN apk update && apk add --no-cache openssh
COPY sshd_config /etc/ssh/sshd_config
RUN mkdir -p /root/.ssh/
# copy the public keys and build root's authorized_keys from them
COPY authorized-keys/ /root/.ssh/authorized-keys/
RUN cat /root/.ssh/authorized-keys/*.pub > /root/.ssh/authorized_keys
RUN chown -R root:root /root/.ssh && chmod -R 600 /root/.ssh
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
RUN ln -s /usr/local/bin/docker-entrypoint.sh /
# We have to set a password to be let in for root - MAKE THIS STRONG.
RUN echo 'root:THEPASSWORDYOUCREATED' | chpasswd
EXPOSE 22
ENTRYPOINT ["docker-entrypoint.sh"]
#!/bin/sh
# docker-entrypoint.sh
if [ "$SSH_ENABLED" = true ]; then
    if [ ! -f "/etc/ssh/ssh_host_rsa_key" ]; then
        # generate fresh rsa key
        ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
    fi
    if [ ! -f "/etc/ssh/ssh_host_dsa_key" ]; then
        # generate fresh dsa key
        ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
    fi
    # prepare run dir
    if [ ! -d "/var/run/sshd" ]; then
        mkdir -p /var/run/sshd
    fi
    /usr/sbin/sshd
    # export the container's environment so SSH sessions can see it
    env | grep '_\|PATH' | awk '{print "export " $0}' >> /root/.profile
fi
exec "$@"
More details here: https://github.com/jenfi-eng/sshd-docker
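A hypothetical build-and-connect flow with such an image (the image name and address are placeholders; on Fargate you would typically reach the task's private IP through a bastion host, and your sshd_config must permit the chosen login method):
docker build -t my-sshd-app .
docker run -d -p 22:22 -e SSH_ENABLED=true my-sshd-app
ssh root@<container-or-task-ip>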

CoreOS AWS userdata "docker run" on startup won't run

I'm trying to set up CoreOS on AWS to run specific commands on boot to download our DCOS cluster's info tarball and run the scripts contained within it. These scripts help add the instance as an "agent" to our DC/OS cluster.
However, I don't seem to be able to get the docker run commands to run. I do see that the userdata creates tee's output file (which remains empty) and the /opt/dcos_install_tmp/ directory (which also remains empty).
The docker run commands here download an "awscli" container, fetch packages from S3 (using IAM instance profile credentials), and spit it out to the CoreOS file system.
Installing AWS CLI on CoreOS didn't seem straightforward (there's no package manager, no python), so I had to resort to this.
If I login to the instance and run the same commands by putting them in a script, I have absolutely no issues.
I checked "journalctl --identifier=coreos-cloudinit" and found nothing to indicate issues. It just reports:
15:58:34 Parsing user-data as script
There is no "boot" log file for CoreOS in /var/log/, unlike in other AMIs.
I'm really stuck right now and would love some nudges in the right direction.
Here's my userdata (which I provide as text at instance boot):
#!/bin/bash
/usr/bin/docker run -it --name cli governmentpaas/awscli aws s3 cp s3://<bucket>/dcos/dcos_preconfig.sh /root && /usr/bin/docker cp cli:/root/dcos_preconfig.sh . && /usr/bin/docker rm cli | tee -a /root/userdatalog.txt
/usr/bin/docker run -it --name cli governmentpaas/awscli aws s3 cp s3://<bucket>/dcos/dcos-install.tar /root && /usr/bin/docker cp cli:/root/dcos-install.tar . && /usr/bin/docker rm cli | tee -a /root/userdatalog.txt
sudo mkdir -p /opt/dcos_install_tmp
sudo tar xf dcos-install.tar -C /opt/dcos_install_tmp | tee -a /root/userdatalog.txt
sudo /bin/bash /opt/dcos_install_tmp/dcos_install.sh slave | tee -a /root/userdatalog.txt
Remove the -t flag from the docker run commands.
I had a similar problem: DigitalOcean: How to run Docker command on newly created Droplet via Java API
The problem ended up being the -t flag in the docker run command. Apparently it doesn't work because there is no terminal attached when the script runs, or something along those lines. Remove the flag and it runs fine.
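Applied to the question's user data, the first command would then look roughly like this (the -i flag can also be dropped, since nothing is read from stdin in a boot script; the bucket name stays elided as in the question):
/usr/bin/docker run --name cli governmentpaas/awscli aws s3 cp s3://<bucket>/dcos/dcos_preconfig.sh /root && /usr/bin/docker cp cli:/root/dcos_preconfig.sh . && /usr/bin/docker rm cli | tee -a /root/userdatalog.txt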