How to configure permissions between EFS and EC2 - amazon-web-services

I'm trying to use CloudFormation to setup a mongod instance using EFS storage, and I'm having problems understanding how to configure the file system permissions to make it work.
The EFS is not going to be accessed by any existing systems, so I can configure it exactly as I need to.
I was trying to use the following AWS example as a starting point ...
https://docs.amazonaws.cn/en_us/AWSCloudFormation/latest/UserGuide/aws-resource-efs-accesspoint.html
AccessPointResource:
  Type: 'AWS::EFS::AccessPoint'
  Properties:
    FileSystemId: !Ref FileSystemResource
    PosixUser:
      Uid: "13234"
      Gid: "1322"
      SecondaryGids:
        - "1344"
        - "1452"
    RootDirectory:
      CreationInfo:
        OwnerGid: "708798"
        OwnerUid: "7987987"
        Permissions: "0755"
      Path: "/testcfn/abc"
In the above example, they seem to have assigned arbitrary group and user IDs. What I'm trying to figure out is, given the above, how would the user accounts on the EC2 instance need to be configured to allow full read/write access?
I've got to the point where I'm able to mount the access point, but I haven't been able to successfully write to it.
What I've tried...
Created a new user on the EC2, and assigned the uid and gid like so...
sudo usermod -u 13234 testuser1
sudo groupmod -g 1322 testuser1
I then sudo to that user and try writing a file to the mount point... No luck
I then tried assigning the uid and gid like so...
sudo usermod -u 7987987 testuser1
sudo groupmod -g 708798 testuser1
Again, no luck writing a file.
What I'm really looking for is the simplest configuration where I can have a single EC2 user have full read/write access to an EFS folder. It will be a new EFS and new EC2, so I have full control over how it's setup, if that helps.
Possibly the examples assume some existing knowledge of the workings of NFS, which I may be lacking.

Just in case it helps anyone, I ended up defining my Access Point like so...
AccessPointResource:
  Type: 'AWS::EFS::AccessPoint'
  Properties:
    FileSystemId: !Ref FileSystemResource
    PosixUser:
      Uid: "9960"
      Gid: "9940"
    RootDirectory:
      CreationInfo:
        OwnerGid: "9940"
        OwnerUid: "9960"
        Permissions: "0755"
      Path: "/mongo-db"
and then in my userdata for the mongodb server EC2, I added this...
sudo groupadd --system --gid 9940 mongod
sudo useradd --system --uid 9960 --gid 9940 mongod
I'm not actually sure if the gid and uid above need to match what I've defined in the AccessPoint, but it seems to make it easier, as then the server will show the owner of the files as "mongod mongod".
I mounted the EFS like so...
sudo mkdir -p /mnt/var/lib/mongo
sudo mount -t efs -o tls,accesspoint=${AccessPointId} ${FileSystemId}: /mnt/var/lib/mongo
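To sanity-check the setup, something like the following should work once the mount succeeds (the paths and the mongod user are the ones from the snippets above). Note that when a PosixUser is set on the access point, EFS performs all operations through that access point as that uid/gid regardless of the client-side user, so matching the local uid/gid mainly makes the ownership display nicely as "mongod mongod":

# Quick write test against the mounted access point (names from the snippets above)
ls -ld /mnt/var/lib/mongo                           # should show uid 9960 / gid 9940
sudo -u mongod touch /mnt/var/lib/mongo/write-test
sudo -u mongod rm /mnt/var/lib/mongo/write-test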
I'm still a bit confused about the original AWS provided example. If my understanding is correct, it seems it would always create a root directory which cannot be written to.
Perhaps someone can clarify where it might be helpful to have the root directory owned by a different user to the one specified in the PosixUser.

Related

How to edit EC2 user data with SSM without overwriting the things the end user has entered?

Here is the problem description:
The SSM agent doesn't start after booting an EC2 instance with SUSE or Red Hat images. The SSM agent is installed but has not started. This problem does not occur on Amazon Linux or Windows.
Here is a description of how to add and start the SSM agent with EC2 user data:
https://aws.amazon.com/de/premiumsupport/knowledge-center/install-ssm-agent-ec2-linux/
First question: Is this a bug? It doesn't really make sense to me that the agent is installed but not started on two images (which I have checked).
Second question: Is it possible to add the user data with SSM documents without overwriting what the customer may already have entered in the user data?
-> For example: the user wants to start some things after booting the machine, but we also need the SSM agent.
That's the part we need for SUSE:
#!/bin/bash
mkdir /tmp/ssm
cd /tmp/ssm
wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
sudo rpm --install amazon-ssm-agent.rpm
sudo systemctl enable amazon-ssm-agent
sudo systemctl start amazon-ssm-agent
We have tried to write an SSM automation document to figure out which OS the new EC2 instance is running (this part is already working).
For example, we get "SUSE" as part of "PlatformDetails", so the automation can move on with the steps for SUSE Linux systems. At this step we would need to set the user data.
This hasn't worked with:
- name: ChooseOSforCommands
  action: 'aws:branch'
  inputs:
    Choices:
      - NextStep: SUSE
        Variable: '{{GetOSType.InstanceOS}}'
        Contains: SUSE
- name: SUSE
  action: aws:executeAwsApi
  onFailure: Abort
  inputs:
    Service: ec2
    Api: ModifyInstanceAttribute
    InstanceId: "{{InstanceId}}"
    UserData: ${file(.suse12)}
How can the commands for SUSE be added in YAML form?
Is it possible to find out what the user has already entered and combine it with our commands?
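One possible sketch (outside the automation document) of combining the customer's existing user data with the SSM agent bootstrap, using plain AWS CLI calls. The instance ID and file names are illustrative, the instance must be stopped before its user data can be changed, and depending on your CLI version and settings you may need to base64-encode the file yourself before passing it back:

# Fetch the user data the customer already set (returned base64-encoded)
aws ec2 describe-instance-attribute \
  --instance-id "$INSTANCE_ID" \
  --attribute userData \
  --query 'UserData.Value' --output text | base64 -d > combined-user-data.sh

# Append the SUSE SSM-agent bootstrap commands shown above
cat ssm-agent-suse.sh >> combined-user-data.sh

# Write the combined script back while the instance is stopped
aws ec2 modify-instance-attribute \
  --instance-id "$INSTANCE_ID" \
  --attribute userData \
  --value file://combined-user-data.sh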

Django can't access Azure mounted storage

I am running my Django app (Python 2.7, Django 1.11) on an Azure server using AKS (Kubernetes).
I have a persistent storage volume mounted at /data/media.
When I try to upload files through my app, I get the following error:
Exception Value: [Errno 13] Permission denied: '/data/media/uploads/<some_dir>'
Exception Location: /usr/local/lib/python2.7/os.py in makedirs, line 157
The problematic line in os.py is the one trying to create a directory: mkdir(name, mode).
When I use kubectl exec -it <my-pod> bash to access the pod (user is root), I can easily cd into the /data/media directory, create sub-folders and see them reflected in the Azure portal. So my mount is perfectly fine.
I tried chmod-ing /data/media, but that does not work. It seems I cannot change the permissions of the folders on the mounted persistent volume, nor can I add users or change groups. So there is no problem accessing the volume from my pod, but since Django is not running as root, it cannot access it.
How do I resolve this? Thanks.
It turns out that since the Azure file share mount is actually owned by the k8s cluster, the Docker containers running in the pods only mount it as an entry point but cannot modify its permissions since they do not own it.
The reason it started happening now is explained here:
... it turned out that the default directory mode and file mode differs between Kubernetes versions. So while the access mode is 0777 for Kubernetes v1.6.x, v1.7.x, in case of v1.8.6 or above it is 0755
So for me the fix was adding the required access permissions for the mounted volume to k8s spec like so:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <volumeName>
  annotations:
    volume.beta.kubernetes.io/storage-class: <className>
spec:
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
  accessModes:
    - ReadWriteMany
...
Note: I used 0777 as an example. You should carefully set what's right for you.
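A quick way to confirm the new mount options took effect from inside the pod (the pod name and path are the ones used in this question):

kubectl exec -it <my-pod> -- ls -ld /data/media      # should now show the mode set via dir_mode
kubectl exec -it <my-pod> -- touch /data/media/perm-test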
Hope this helps anyone.

Mount s3fs as docker volume

I want to add my Amazon S3 bucket to my Docker swarm. I've seen many "possible" solutions on the internet, but I can't piece them together to add the contents of my bucket as a volume.
So the last thing I tried was the command stated here (Is s3fs not able to mount inside docker container?):
docker run --rm -t -i --privileged -e AWS_ACCESS_KEY_ID=XXXX -e AWS_SECRET_ACCESS_KEY=XXXX -e AWS_STORAGE_BUCKET_NAME=XXXX docker.io/panubo/s3fs bash
It works pretty well, but if I exit bash the container stops and I can't do anything with it. Is it possible to make this container stay up and add it as a volume?
Or would it be better to just mount the bucket on my Docker host and then add it as a local volume?
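For that second idea (mounting the bucket on the host and handing it to Docker as an ordinary bind mount), a hedged sketch with s3fs-fuse could look like this; the bucket name, mount point, and credential file are illustrative:

# Key pair for s3fs-fuse in the format ACCESS_KEY_ID:SECRET_ACCESS_KEY
echo "AKIA...EXAMPLE:EXAMPLESECRET" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the bucket on the Docker host; allow_other lets processes other than the
# mounting user (e.g. the container) read the FUSE mount
sudo mkdir -p /mnt/my-bucket
sudo s3fs my-bucket /mnt/my-bucket -o passwd_file=$HOME/.passwd-s3fs -o allow_other

# Hand the mounted directory to a container as a regular bind mount
docker run --rm -it -v /mnt/my-bucket:/data alpine ls /data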
I've made it!
The configuration looks like this:
docker-compose.yml
volumes:
  s3data:
    driver: local

services:
  s3vol:
    image: elementar/s3-volume
    command: /data s3://{BUCKET NAME}
    environment:
      - BACKUP_INTERVAL={INTERVAL IN MINUTES (2m)}
      - AWS_ACCESS_KEY_ID={KEY}
      - AWS_SECRET_ACCESS_KEY={SECRET}
    volumes:
      - s3data:/data
And after inserting this into the docker-compose file, you can use the S3 storage as a volume, like this:
docker-compose.yml
linux:
  image: {IMAGE}
  volumes:
    - s3data:/data
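With the service names above, bringing it up and checking the shared volume could look like:

docker-compose up -d
docker-compose exec linux ls /data    # the synced bucket contents should show up here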
Hope this helps some of you in the future!
Cheers.

Creating user in ubuntu from AWS

Using AWS (Amazon Web Services), I have created an Ubuntu 16.10 instance and I am able to log in using a .pem file like this:
ssh -i key.pem ubuntu@52.16.73.14.54
After I am logged in, I can see that I am able to execute:
sudo su
(with no password); however, the file /etc/sudoers does NOT contain any reference to the current user: ubuntu.
How can I create another user with exactly the same behavior (without touching the sudoers file) from the terminal in a NON-interactive way?
I tried:
sudo useradd -m -c "adding a test user" -G sudo,adm -s /bin/bash testuser
But after I become "testuser" if I invoke:
sudo su
I have to provide a password, which is exactly what I want to avoid.
You can't do this without touching the sudo configuration, because the ubuntu user is specifically given passwordless access.
$ for group in `groups ubuntu`; do sudo grep -r ^[[:space:]]*[^#]*$group[[:space:]] /etc/sudoers* ; done
/etc/sudoers.d/90-cloud-init-users:ubuntu ALL=(ALL) NOPASSWD:ALL
/etc/sudoers.d/90-cloud-init-users:ubuntu ALL=(ALL) NOPASSWD:ALL
/etc/sudoers:%sudo ALL=(ALL:ALL) ALL
But what you can do is create a new sudoers file without touching any existing files. sudo is typically configured these days to read all the configurations in a directory, usually /etc/sudoers.d/, precisely so that one failing config doesn't affect the rest of sudo.
In your case, you might want to give an admin group passwordless sudo access rather than your individual user. Then you can grant access to other users in the future without changing the sudo config.
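A minimal sketch of that approach, using a hypothetical cloudadmins group and the testuser from the question (always syntax-check a sudoers drop-in before relying on it):

# Create an admin group and add the new user to it
sudo groupadd cloudadmins
sudo usermod -aG cloudadmins testuser

# Grant the group passwordless sudo via its own drop-in file
echo '%cloudadmins ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/91-cloudadmins
sudo chmod 440 /etc/sudoers.d/91-cloudadmins

# Verify the syntax so a broken file can't lock you out of sudo
sudo visudo -cf /etc/sudoers.d/91-cloudadmins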

What is the best way to pass AWS credentials to a Docker container?

I am running a Docker container on Amazon EC2. Currently, I have added the AWS credentials to the Dockerfile. Could you please let me know the best way to do this?
A lot has changed in Docker since this question was asked, so here's an attempt at an updated answer.
First, specifically with AWS credentials on containers already running inside of the cloud, using IAM roles as Vor suggests is a really good option. If you can do that, then give his answer another +1 and skip the rest of this.
Once you start running things outside of the cloud, or have a different type of secret, there are two key places that I recommend against storing secrets:
Environment variables: when these are defined on a container, every process inside the container has access to them, they are visible via /proc, apps may dump their environment to stdout where it gets stored in the logs, and most importantly, they appear in clear text when you inspect the container.
In the image itself: images often get pushed to registries where many users have pull access, sometimes without any credentials required to pull the image. Even if you delete the secret from one layer, the image can be disassembled with common Linux utilities like tar and the secret can be found from the step where it was first added to the image.
So what other options are there for secrets in Docker containers?
Option A: If you need this secret only during the build of your image, cannot use the secret before the build starts, and do not have access to BuildKit yet, then a multi-stage build is the best of the bad options. You would add the secret to the initial stages of the build, use it there, then copy the output of that stage without the secret to your release stage, and only push that release stage to the registry servers. This secret is still in the image cache on the build server, so I tend to use this only as a last resort.
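A rough sketch of what such a multi-stage build could look like; the image tags, bucket path, and artifact name are placeholders, and the build args still land in the build cache of the first stage, which is exactly the caveat above:

# Build stage: the secret is only available here
FROM python:3 AS build
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
RUN pip install awscli && \
    aws s3 cp s3://example-bucket/artifact.tar.gz /tmp/artifact.tar.gz

# Release stage: only the fetched artifact is copied forward, not the credentials
FROM python:3
COPY --from=build /tmp/artifact.tar.gz /opt/artifact.tar.gz

Built with something like: docker build --build-arg AWS_ACCESS_KEY_ID=... --build-arg AWS_SECRET_ACCESS_KEY=... -t your_image .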
Option B: Also during build time, if you can use BuildKit which was released in 18.09, there are currently experimental features to allow the injection of secrets as a volume mount for a single RUN line. That mount does not get written to the image layers, so you can access the secret during build without worrying it will be pushed to a public registry server. The resulting Dockerfile looks like:
# syntax = docker/dockerfile:experimental
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
And you build it with a command in 18.09 or newer like:
DOCKER_BUILDKIT=1 docker build -t your_image --secret id=aws,src=$HOME/.aws/credentials .
Option C: At runtime on a single node, without Swarm Mode or other orchestration, you can mount the credentials as a read only volume. Access to this credential requires the same access that you would have outside of docker to the same credentials file, so it's no better or worse than the scenario without docker. Most importantly, the contents of this file should not be visible when you inspect the container, view the logs, or push the image to a registry server, since the volume is outside of that in every scenario. This does require that you copy your credentials on the docker host, separate from the deploy of the container. (Note, anyone with the ability to run containers on that host can view your credential since access to the docker API is root on the host and root can view the files of any user. If you don't trust users with root on the host, then don't give them docker API access.)
For a docker run, this looks like:
docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image
Or for a compose file, you'd have:
version: '3'
services:
  app:
    image: your_image
    volumes:
      - $HOME/.aws/credentials:/home/app/.aws/credentials:ro
Option D: With orchestration tools like Swarm Mode and Kubernetes, we now have secrets support that's better than a volume. With Swarm Mode, the file is encrypted on the manager filesystem (though the decryption key is often there too, allowing the manager to be restarted without an admin entering a decrypt key). More importantly, the secret is only sent to the workers that need the secret (running a container with that secret), it is only stored in memory on the worker, never disk, and it is injected as a file into the container with a tmpfs mount. Users on the host outside of swarm cannot mount that secret directly into their own container, however, with open access to the docker API, they could extract the secret from a running container on the node, so again, limit who has this access to the API. From compose, this secret injection looks like:
version: '3.7'
secrets:
  aws_creds:
    external: true
services:
  app:
    image: your_image
    secrets:
      - source: aws_creds
        target: /home/user/.aws/credentials
        uid: '1000'
        gid: '1000'
        mode: 0700
You turn on swarm mode with docker swarm init for a single node, then follow the directions for adding additional nodes. You can create the secret externally with docker secret create aws_creds $HOME/.aws/credentials. And you deploy the compose file with docker stack deploy -c docker-compose.yml stack_name.
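Put together, the commands mentioned above look roughly like this (the secret and stack names match the compose file):

docker swarm init                                       # turn on swarm mode on a single node
docker secret create aws_creds $HOME/.aws/credentials   # create the secret externally
docker stack deploy -c docker-compose.yml stack_name    # deploy the stack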
I often version my secrets using a script from: https://github.com/sudo-bmitch/docker-config-update
Option E: Other tools exist to manage secrets, and my favorite is Vault because it gives the ability to create time limited secrets that automatically expire. Every application then gets its own set of tokens to request secrets, and those tokens give them the ability to request those time limited secrets for as long as they can reach the vault server. That reduces the risk if a secret is ever taken out of your network since it will either not work or be quick to expire. The functionality specific to AWS for Vault is documented at https://www.vaultproject.io/docs/secrets/aws/index.html
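As a rough illustration of that workflow (the role name and policy file are hypothetical; see the linked docs for the authoritative steps):

# Enable the AWS secrets engine and give Vault credentials it can use to mint users
vault secrets enable aws
vault write aws/config/root access_key=AKIA...EXAMPLE secret_key=EXAMPLESECRET region=us-east-1

# Define a role describing what the generated credentials may do
vault write aws/roles/deploy credential_type=iam_user policy_document=@deploy-policy.json

# Each read returns a fresh, time-limited access key / secret key pair
vault read aws/creds/deploy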
The best way is to use IAM Role and do not deal with credentials at all. (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html )
Credentials can be retrieved from http://169.254.169.254..... Since this is a private IP address, it is accessible only from EC2 instances.
All modern AWS client libraries "know" how to fetch, refresh, and use credentials from there, so in most cases you don't even need to know about it. Just run EC2 with the correct IAM role and you're good to go.
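You can check what the SDKs will pick up by querying the instance metadata service from the instance itself (with IMDSv2 enforced, a session token header is additionally required, and its default hop limit of 1 can block access from inside containers):

# List the IAM role attached to the instance, then fetch its temporary credentials
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>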
As an option, you can pass them at runtime as environment variables (i.e. docker run -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage).
You can access these environment variables by running printenv at the terminal.
Yet another approach is to create a temporary read-only volume in docker-compose.yaml. The AWS CLI and SDKs (like boto3 or the AWS SDK for Java, etc.) look for the default profile in the ~/.aws/credentials file.
If you want to use other profiles, you just need to also export the AWS_PROFILE variable before running the docker-compose command.
export AWS_PROFILE=some_other_profile_name
version: '3'
services:
  service-name:
    image: docker-image-name:latest
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
    volumes:
      - ~/.aws/:/root/.aws:ro
In this example, I used the root user in the container. If you are using another user, just change /root/.aws to that user's home directory.
:ro stands for a read-only Docker volume.
This is very helpful when you have multiple profiles in the ~/.aws/credentials file and are also using MFA. It is also helpful when you want to test the container locally before deploying it to ECS, where you have IAM roles, but locally you don't.
Another approach is to pass the keys from the host machine to the docker container. You may add the following lines to the docker-compose file.
services:
  web:
    build: .
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
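For this to work, the variables have to be exported in the shell that runs docker-compose, for example (values are placeholders):

export AWS_ACCESS_KEY_ID=AKIA...EXAMPLE
export AWS_SECRET_ACCESS_KEY=EXAMPLESECRET
export AWS_DEFAULT_REGION=us-east-1
docker-compose up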
The following one-liner works for me even when my credentials are set up by aws-okta or saml2aws:
$ docker run -v$HOME/.aws:/root/.aws:ro \
-e AWS_ACCESS_KEY_ID \
-e AWS_CA_BUNDLE \
-e AWS_CLI_FILE_ENCODING \
-e AWS_CONFIG_FILE \
-e AWS_DEFAULT_OUTPUT \
-e AWS_DEFAULT_REGION \
-e AWS_PAGER \
-e AWS_PROFILE \
-e AWS_ROLE_SESSION_NAME \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_SESSION_TOKEN \
-e AWS_SHARED_CREDENTIALS_FILE \
-e AWS_STS_REGIONAL_ENDPOINTS \
amazon/aws-cli s3 ls
Please note that for advanced use cases you might need to allow rw (read-write) permissions, so omit the ro (read-only) limitation when mounting the .aws volume in -v$HOME/.aws:/root/.aws:ro
Volume mounting is noted in this thread, but as of docker-compose v3.2+ you can use a bind mount.
For example, if you have a file named .aws_creds in the root of your project:
In your service in the compose file, do this for volumes:
volumes:
  # normal volume mount, already shown in thread
  - ./.aws_creds:/root/.aws/credentials
  # way 2, note this requires docker-compose v3.2+
  - type: bind
    source: .aws_creds                  # from local
    target: /root/.aws/credentials      # to the container location
Using this idea, you can store your Docker images publicly on Docker Hub, because your AWS credentials will not physically be in the image. To have them associated, you must have the correct directory structure locally where the container is started (i.e., pulled from Git).
You could create ~/aws_env_creds containing:
touch ~/aws_env_creds
chmod 777 ~/aws_env_creds
vi ~/aws_env_creds
Add these values (replace the keys with your own):
AWS_ACCESS_KEY_ID=AK_FAKE_KEY_88RD3PNY
AWS_SECRET_ACCESS_KEY=BividQsWW_FAKE_KEY_MuB5VAAsQNJtSxQQyDY2C
Press "esc" to save the file.
Run and test the container:
my_service:
  build: .
  image: my_image
  env_file:
    - ~/aws_env_creds
If someone still faces the same issue after following the instructions in the accepted answer, make sure that you are not passing environment variables from two different sources. In my case, I was passing environment variables to docker run both via a file and as command-line parameters, which caused the variables passed as parameters to have no effect.
So the following command did not work for me:
docker run --env-file ./env.list -e AWS_ACCESS_KEY_ID=ABCD -e AWS_SECRET_ACCESS_KEY=PQRST IMAGE_NAME:v1.0.1
Moving the aws credentials into the mentioned env.list file helped.
For a PHP Apache Docker container, the following command works:
docker run --rm -d -p 80:80 --name my-apache-php-app -v "$PWD":/var/www/html -v ~/.aws:/.aws --env AWS_PROFILE=mfa php:7.2-apache
Based on some of the previous answers, I built my own setup as follows.
My project structure:
├── Dockerfile
├── code
│   └── main.py
├── credentials
├── docker-compose.yml
└── requirements.txt
My docker-compose.yml file:
version: "3"
services:
app:
build:
context: .
volumes:
- ./credentials:/root/.aws/credentials
- ./code:/home/app
My Dockerfile:
FROM python:3.8-alpine
RUN pip3 --no-cache-dir install --upgrade awscli
RUN mkdir /app
WORKDIR /home/app
CMD python main.py
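With that layout, a run could look like the following; aws configure list is just a hedged sanity check that the CLI inside the container sees the mounted credentials file:

docker-compose up --build                        # builds the image and runs main.py with the mounted credentials
docker-compose run --rm app aws configure list   # shows where the CLI found its credentials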