How to open permissions for Deployment volumeMount mapped to EFS

After creating the AWS EFS file system, I mapped it into one of my deployment's containers as the /data/files directory:
volumeMounts:
  - name: efs-persistent-storage
    mountPath: /data/files
    readOnly: false
volumes:
  - name: efs-persistent-storage
    nfs:
      server: fs-1234.efs.us-west-2.amazonaws.com
      path: /files
I am now able to delete, create, and modify the files stored on the EFS drive. But running a .sh script that tries to copy the files fails, saying that the permissions of the /data/files directory don't allow it to create the files.
I double-checked the directory permissions, and they are all open. How can I make this work?
Maybe the problem is that I am mapping directly to the EFS server fs-1234.efs.us-west-2.amazonaws.com? Would it give me more options if I used a PersistentVolumeClaim instead?
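For comparison, here is a minimal sketch of what the PersistentVolume/PersistentVolumeClaim route could look like with the same NFS endpoint (the names efs-pv and efs-claim are placeholders; note that this only changes how the volume is wired up, not the underlying EFS permission model):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv                      # placeholder name
spec:
  capacity:
    storage: 5Gi                    # EFS is elastic, but the field is required
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: fs-1234.efs.us-west-2.amazonaws.com
    path: /files
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim                   # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""              # bind to the pre-created PV instead of a dynamic class
  resources:
    requests:
      storage: 5Gi
# In the Deployment, the volume would then reference the claim instead of the NFS server:
#   volumes:
#     - name: efs-persistent-storage
#       persistentVolumeClaim:
#         claimName: efs-claim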

Related

How can I add folders and files to a docker container running on ECS?

Assuming I have the following docker-compose
version: "3.9"
services:
skyrimserver:
container_name: skyrimserver
image: tiltedphoques/st-reborn-server:latest
ports:
- "10578:10578/udp"
volumes:
- /opt/docker/skyrimserver/config:/home/server/config
- /opt/docker/skyrimserver/logs:/home/server/logs
- /opt/docker/skyrimserver/Data:/home/server/Data
restart: unless-stopped
I would like to have the folders specified under volumes created and filled with some files before running it. It seems like the Docker ECS integration automatically creates an EFS file system, which is empty. How can I hook into that to add files upon creation?
EDIT: A nice solution would be the ability to change the config on the fly and reload it within the game, rather than having to restart the whole server because a new Docker image includes new configuration files.
docker compose is a tool for defining and running Docker containers. When you define volumes in the YAML file, it is similar to running docker with the --mount argument, and there is no pre-existing volume on that instance for you to mount.
What you need with ECS is a fully functioning image with all the source files in it. Using a Dockerfile like this to build the image might work:
FROM tiltedphoques/st-reborn-server
# COPY sources are resolved relative to the build context
COPY /opt/docker/skyrimserver/config /home/server/config
COPY /opt/docker/skyrimserver/logs /home/server/logs
COPY /opt/docker/skyrimserver/Data /home/server/Data
EXPOSE 10578
Edit: In case you still want to use ECS, you can try to SSH into that container, or use Systems Manager to connect to the container and put your files in /home/server/. I strongly advise AGAINST this method because EFS is ridiculously slow when dumping data into it.
You should switch to EBS-backed EC2 instances instead. That is easy to scale with an Auto Scaling group and suits your use case.
Hi, hope this helps: just put a dot in front of your location:
version: "3.9"
services:
skyrimserver:
container_name: skyrimserver
image: tiltedphoques/st-reborn-server:latest
ports:
- "10578:10578/udp"
volumes:
- ./opt/docker/skyrimserver/config:/home/server/config
- ./opt/docker/skyrimserver/logs:/home/server/logs
- ./opt/docker/skyrimserver/Data:/home/server/Data
restart: unless-stopped

How to configure permissions between EFS and EC2

I'm trying to use CloudFormation to set up a mongod instance using EFS storage, and I'm having problems understanding how to configure the file system permissions to make it work.
The EFS is not going to be accessed by any existing systems, so I can configure it exactly as I need to.
I was trying to use the following AWS example as a starting point ...
https://docs.amazonaws.cn/en_us/AWSCloudFormation/latest/UserGuide/aws-resource-efs-accesspoint.html
AccessPointResource:
  Type: 'AWS::EFS::AccessPoint'
  Properties:
    FileSystemId: !Ref FileSystemResource
    PosixUser:
      Uid: "13234"
      Gid: "1322"
      SecondaryGids:
        - "1344"
        - "1452"
    RootDirectory:
      CreationInfo:
        OwnerGid: "708798"
        OwnerUid: "7987987"
        Permissions: "0755"
      Path: "/testcfn/abc"
In the above example, they seem to have assigned arbitrary group and user IDs. What I'm trying to figure out is: given the above, how would the user accounts on the EC2 instance need to be configured to allow full read/write access?
I've got to the point where I'm able to mount the access point, but I haven't been able to successfully write to it.
What I've tried...
Created a new user on the EC2, and assigned the uid and gid like so...
sudo usermod -u 13234 testuser1
sudo groupmod -g 1322 testuser1
I then sudo to that user and try writing a file to the mount point... No luck
I then tried assigning the uid and gid like so...
sudo usermod -u 7987987 testuser1
sudo groupmod -g 708798 testuser1
Again, no luck writing a file.
What I'm really looking for is the simplest configuration where a single EC2 user has full read/write access to an EFS folder. It will be a new EFS and a new EC2 instance, so I have full control over how it's set up, if that helps.
Possibly the examples assume some existing knowledge of the workings of NFS, which I may be lacking.
Just in case it helps anyone, I ended up defining my Access Point like so...
AccessPointResource:
  Type: 'AWS::EFS::AccessPoint'
  Properties:
    FileSystemId: !Ref FileSystemResource
    PosixUser:
      Uid: "9960"
      Gid: "9940"
    RootDirectory:
      CreationInfo:
        OwnerGid: "9940"
        OwnerUid: "9960"
        Permissions: "0755"
      Path: "/mongo-db"
and then in my userdata for the mongodb server EC2, I added this...
sudo groupadd --system --gid 9940 mongod
sudo useradd --system --uid 9960 --gid 9940 mongod
I'm not actually sure if the gid and uid above need to match what I've defined in the AccessPoint, but it seems to make it easier, as then the server will show the owner of the files as "mongod mongod".
I mounted the EFS like so...
sudo mkdir -p /mnt/var/lib/mongo
sudo mount -t efs -o tls,accesspoint=${AccessPointId} ${FileSystemId}: /mnt/var/lib/mongo
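Putting those pieces together, the user data might sit in the CloudFormation template roughly like this (just a sketch; the resource name MongoInstanceResource and the omitted properties are placeholders, and amazon-efs-utils needs to be installed for mount -t efs to work):

MongoInstanceResource:
  Type: 'AWS::EC2::Instance'
  Properties:
    # ... ImageId, InstanceType, networking, etc. omitted ...
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        # create the account that matches the access point's PosixUser (9960:9940)
        groupadd --system --gid 9940 mongod
        useradd --system --uid 9960 --gid 9940 mongod
        # mount through the access point
        mkdir -p /mnt/var/lib/mongo
        mount -t efs -o tls,accesspoint=${AccessPointResource} ${FileSystemResource}: /mnt/var/lib/mongo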
I'm still a bit confused about the original AWS-provided example. If my understanding is correct, it seems it would always create a root directory which cannot be written to.
Perhaps someone can clarify where it might be helpful to have the root directory owned by a different user from the one specified in the PosixUser.
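For what it's worth, annotating the original AWS example may explain that behaviour (the comments are my reading of it, not from the docs):

PosixUser:               # every request through the access point runs as 13234:1322
  Uid: "13234"
  Gid: "1322"
RootDirectory:
  CreationInfo:          # but the root directory itself is created owned by 7987987:708798
    OwnerGid: "708798"
    OwnerUid: "7987987"
    Permissions: "0755"  # group and others get r-x only, so 13234:1322 cannot create files there
  Path: "/testcfn/abc"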

lambda + efs - mounting vs access point

I am trying to use AWS Lambda and EFS together so I can perform operations that exceed the default Lambda storage limit of 500 MB. I am confused about the difference between the local mount path and the access point.
Is the local mount path where the file system is mounted within the Lambda's own file system, and the access point (which also has its own path) the location that an application would reference in code? Or does it not actually matter which path is referenced?
For example:
AccessPointResource:
  Type: 'AWS::EFS::AccessPoint'
  Properties:
    FileSystemId: !Ref FileSystemResource
    PosixUser:
      Uid: "1000"
      Gid: "1000"
    RootDirectory:
      CreationInfo:
        OwnerGid: "1000"
        OwnerUid: "1000"
        Permissions: "0777"
      Path: "/myefs"
is how I create the access point, and the mount path is what I have specified directly on the Lambda for testing.
I guess my main confusion is: why are there two paths, what is the difference between them, and which one should I use in my Lambda?
Your EFS can have many directories on it:
/myefs
/myefs2
/myefs3
/myefs4
/important
/images
Your AccessPointResource will only enable access to /myefs. This folder will basically be the root for anyone who uses the access point. No other folder will be exposed through this access point.
/mnt/efs is the mount folder in the Lambda container. So your function will be able to access /myefs, mounted in its local directory tree under the name /mnt/efs.
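For reference, a rough sketch of how the two paths meet on the Lambda side in CloudFormation (the function name MyLambdaFunction and the omitted properties are placeholders; the function also needs a VpcConfig that can reach the file system's mount targets):

MyLambdaFunction:
  Type: 'AWS::Lambda::Function'
  Properties:
    # ... Runtime, Handler, Code, Role, VpcConfig omitted ...
    FileSystemConfigs:
      - Arn: !GetAtt AccessPointResource.Arn   # the access point, whose root is /myefs on EFS
        LocalMountPath: /mnt/efs               # the path the function's code reads and writes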
The mount path has to be the same as the access point root directory - in your case you should change the local mount path from "/mnt/efs" to "/mnt/myefs" (or, if you want the mount path to be "/mnt/efs", you should change the access point root directory to "efs").

Django can't access Azure mounted storage

I am running my Django app (Python 2.7, Django 1.11) on an Azure server using AKS (Kubernetes).
I have a persistent storage volume mounted at /data/media.
When I try to upload files through my app, I get the following error:
Exception Value: [Errno 13] Permission denied: '/data/media/uploads/<some_dir>'
Exception Location: /usr/local/lib/python2.7/os.py in makedirs, line 157
The problematic line in os.py is the one trying to create a directory: mkdir(name, mode).
When I use kubectl exec -it <my-pod> bash to access the pod (user is root), I can easily cd into the /data/media directory, create sub-folders and see them reflected in the Azure portal. So my mount is perfectly fine.
I tried chmod-ing /data/media, but that does not work. It seems like I cannot change the permissions of the folders on the mounted persistent volume, nor can I add users or change groups. So it seems there is no problem accessing the volume from my pod, but since Django is not running as root, it cannot access it.
How do I resolve this? Thanks.
It turns out that since the Azure file share mount is actually owned by the k8s cluster, the Docker containers running in the pods only mount it as an entry point but cannot modify its permissions since they do not own it.
The reason it started happening now is explained here:
... it turned out that the default directory mode and file mode differ between Kubernetes versions. So while the access mode is 0777 for Kubernetes v1.6.x and v1.7.x, in the case of v1.8.6 or above it is 0755
So for me the fix was adding the required access permissions for the mounted volume to k8s spec like so:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <volumeName>
  annotations:
    volume.beta.kubernetes.io/storage-class: <className>
spec:
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
  accessModes:
    - ReadWriteMany
  ...
** I wrote 0777 as an example. You should carefully set what's right for you.
Hope this helps anyone.

How can I deploy from Bitbucket to AWS to different instances and different folders?

I have two instances in AWS: one for production and one for homologation (staging). I deploy automatically with CodeDeploy. I have two branches on Bitbucket, master and homolog. When I commit to homolog, the deploy must go to the homologation instance, and when I merge into master, the deploy must go to the production instance.
To do the automatic deploy from Bitbucket to AWS, there is a series of files that configure the deploy details. One of these files is appspec.yml. According to AWS, it is only possible to have a single appspec.yml file.
In its basic form, the file has the following structure:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  AfterInstall:
    - location: deploy-scripts/install_dependencies.py
      timeout: 300
      runas: root
The problem is that I have a different destination folder for each instance.
If I deploy to the homolog instance, the destination folder should be /var/www/html, and for the production instance it should be /var/www/html/test/.
I tried to do it as below:
version: 0.0
os: linux
files:
  - source: /
    destination: deploy-scripts/destination.py
hooks:
  AfterInstall:
    - location: deploy-scripts/install_dependencies.py
      timeout: 300
      runas: root
This is destination.py:
import os

def get_destination():
    if os.environ['APPLICATION_NAME'] == 'ahimsa-store-homolog':
        return '/var/www/html/'
    elif os.environ['APPLICATION_NAME'] == 'ahimsa-store':
        return '/var/www/html/teste/'
The above option does not work. How can I do this?
The files section of appspec.yml doesn't run scripts. Instead:
1. Move the required files to a temporary folder using the files section.
2. Create a script that moves these files from the temporary location to the required destination, depending on your requirements. As you suggested, using os.environ['APPLICATION_NAME'], for example.
3. Make sure your script sets the correct file permissions after moving the files.
4. Include that script in the AfterInstall section, so the script can find the new files "installed" in the temporary location you chose. Also make sure it runs before installing the dependencies! (See the sketch after this list.)
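A rough sketch of what that could look like (the temporary location /tmp/deploy-staging and the script deploy-scripts/move_files.py are hypothetical names, not from the question):

version: 0.0
os: linux
files:
  - source: /
    destination: /tmp/deploy-staging              # temporary location
hooks:
  AfterInstall:
    - location: deploy-scripts/move_files.py      # hypothetical: picks the real destination from APPLICATION_NAME
      timeout: 300
      runas: root
    - location: deploy-scripts/install_dependencies.py
      timeout: 300
      runas: root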
Another option is to have a different appspec in each branch. It would make merging more difficult, but it could help.