Lambda + EFS - mounting vs access point

I am trying to use AWS Lambda and EFS together so I can perform operations that exceed the default Lambda storage limit of 512 MB. I am confused about the difference between the local mount path and the access point.
Is "local mount path" a term for where the file system is mounted within the existing file system, and is the access point (which also has its own path) the location that an application would reference in code? Or does it not actually matter which path is referenced?
For example, this is how I create the access point:
AccessPointResource:
  Type: 'AWS::EFS::AccessPoint'
  Properties:
    FileSystemId: !Ref FileSystemResource
    PosixUser:
      Uid: "1000"
      Gid: "1000"
    RootDirectory:
      CreationInfo:
        OwnerGid: "1000"
        OwnerUid: "1000"
        Permissions: "0777"
      Path: "/myefs"
The local mount path I have specified directly on the Lambda function for testing.
I guess my main confusion is: why are there two paths, what is the difference between them, and which one should I use in my Lambda?

Your EFS can have many directories on it:
/myefs
/myefs2
/myefs3
/myefs4
/important
/images
Your AccessPointResource will only enable access to /myefs. This folder will basically be the root for anyone who uses the access point; no other folder will be exposed through this access point.
/mnt/efs is the mount folder inside the Lambda container. So your function will be able to access /myefs, mounted in its local directory tree under the name /mnt/efs.
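To make that concrete, here is a minimal sketch (not from the question) of how the two paths meet in the function's own CloudFormation resource; MyFunction, MyFunctionRole, MountTargetResource and the subnet/security group IDs are placeholders:
MyFunction:                               # hypothetical function name
  Type: 'AWS::Lambda::Function'
  DependsOn: MountTargetResource          # assumed EFS mount target resource
  Properties:
    Runtime: python3.9
    Handler: index.handler
    Role: !GetAtt MyFunctionRole.Arn      # hypothetical execution role
    Code:
      ZipFile: |
        import os
        def handler(event, context):
            # the code only ever sees the *local* mount path
            return os.listdir('/mnt/efs')
    VpcConfig:                            # EFS access requires the function to run in a VPC
      SubnetIds:
        - subnet-11111111                 # placeholder
      SecurityGroupIds:
        - sg-22222222                     # placeholder
    FileSystemConfigs:
      - Arn: !GetAtt AccessPointResource.Arn   # which EFS directory is exposed: /myefs
        LocalMountPath: /mnt/efs               # where it appears inside Lambda (must start with /mnt/)
So the access point's Path chooses which part of the file system is shared, and LocalMountPath chooses the name your code uses for it.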

The local mount path does not have to mirror the access point's root directory; it only has to start with /mnt/. The access point's Path ("/myefs") decides which EFS directory is exposed, while the local mount path ("/mnt/efs") decides where that directory appears inside the Lambda container. Matching the names (e.g. a local mount path of "/mnt/myefs" for the "/myefs" directory) is purely a readability choice.

Related

How to configure permissions between EFS and EC2

I'm trying to use CloudFormation to set up a mongod instance using EFS storage, and I'm having problems understanding how to configure the file system permissions to make it work.
The EFS is not going to be accessed by any existing systems, so I can configure it exactly as I need to.
I was trying to use the following AWS example as a starting point ...
https://docs.amazonaws.cn/en_us/AWSCloudFormation/latest/UserGuide/aws-resource-efs-accesspoint.html
AccessPointResource:
  Type: 'AWS::EFS::AccessPoint'
  Properties:
    FileSystemId: !Ref FileSystemResource
    PosixUser:
      Uid: "13234"
      Gid: "1322"
      SecondaryGids:
        - "1344"
        - "1452"
    RootDirectory:
      CreationInfo:
        OwnerGid: "708798"
        OwnerUid: "7987987"
        Permissions: "0755"
      Path: "/testcfn/abc"
In the above example, they seem to have assigned arbitrary group and user IDs. What I'm trying to figure out is, given the above, how the user accounts on the EC2 instance would need to be configured to allow full read/write access.
I've got to the point where I'm able to mount the access point, but I haven't been able to successfully write to it.
What I've tried...
Created a new user on the EC2, and assigned the uid and gid like so...
sudo usermod -u 13234 testuser1
sudo groupmod -g 1322 testuser1
I then sudo to that user and try writing a file to the mount point... No luck
I then tried assigning the uid and gid like so...
sudo usermod -u 7987987 testuser1
sudo groupmod -g 708798 testuser1
Again, no luck writing a file.
What I'm really looking for is the simplest configuration in which a single EC2 user has full read/write access to an EFS folder. It will be a new EFS and a new EC2 instance, so I have full control over how it's set up, if that helps.
Possibly the examples assume some existing knowledge of the workings of NFS, which I may be lacking.
Just in case it helps anyone, I ended up defining my Access Point like so...
AccessPointResource:
  Type: 'AWS::EFS::AccessPoint'
  Properties:
    FileSystemId: !Ref FileSystemResource
    PosixUser:
      Uid: "9960"
      Gid: "9940"
    RootDirectory:
      CreationInfo:
        OwnerGid: "9940"
        OwnerUid: "9960"
        Permissions: "0755"
      Path: "/mongo-db"
and then in my user data for the MongoDB server EC2 instance, I added this...
sudo groupadd --system --gid 9940 mongod
sudo useradd --system --uid 9960 --gid 9940 mongod
I'm not actually sure if the gid and uid above need to match what I've defined in the AccessPoint, but it seems to make it easier, as then the server will show the owner of the files as "mongod mongod".
I mounted the EFS like so...
sudo mkdir -p /mnt/var/lib/mongo
sudo mount -t efs -o tls,accesspoint=${AccessPointId} ${FileSystemId}: /mnt/var/lib/mongo
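If you need the mount to survive reboots, the matching /etc/fstab entry (a sketch only, assuming amazon-efs-utils is installed, which the efs mount type requires) would look something like:
${FileSystemId}: /mnt/var/lib/mongo efs _netdev,tls,accesspoint=${AccessPointId} 0 0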
I'm still a bit confused about the original AWS-provided example. If my understanding is correct, it seems it would always create a root directory which cannot be written to.
Perhaps someone can clarify when it might be helpful to have the root directory owned by a different user than the one specified in the PosixUser.

How to open permissions for Deployment volumeMount mapped to EFS

After creating the AWS EFS file system, I went ahead and mapped it into one of my deployment's containers as a volume at /data/files:
volumeMounts:
  - name: efs-persistent-storage
    mountPath: /data/files
    readOnly: false
volumes:
  - name: efs-persistent-storage
    nfs:
      server: fs-1234.efs.us-west-2.amazonaws.com
      path: /files
I am now able to create, modify and delete the files stored on the EFS drive. But running a .sh script that tries to copy files fails, saying that the permissions on the /data/files directory don't allow it to create the files.
I double-checked the directory permissions, and they are all open. How can I make this work?
Maybe the problem is that I am mapping directly to the EFS server fs-1234.efs.us-west-2.amazonaws.com? Would it give me more options if I used a Persistent Volume Claim instead?
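For reference, the Persistent Volume Claim route mentioned above might look roughly like the following sketch; the names efs-pv and efs-pvc are made up, and the server/path are the ones from the question. It doesn't by itself change file permissions, but it gives you one place to attach mount options and a claim you can reuse across deployments.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv                      # hypothetical name
spec:
  capacity:
    storage: 5Gi                    # required by the API; EFS does not enforce it
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: fs-1234.efs.us-west-2.amazonaws.com
    path: /files
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc                     # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""              # bind to the manually created PV above
  resources:
    requests:
      storage: 5Gi
The deployment's volumes: section would then reference the claim (persistentVolumeClaim with claimName: efs-pvc) instead of pointing at the NFS server directly.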

Django can't access Azure mounted storage

I am running my Django app (Python 2.7, Django 1.11) on an Azure server using AKS (Kubernetes).
I have a persistent storage volume mounted at /data/media.
When I try to upload files through my app, I get the following error:
Exception Value: [Errno 13] Permission denied: '/data/media/uploads/<some_dir>'
Exception Location: /usr/local/lib/python2.7/os.py in makedirs, line 157
The problematic line in os.py is the one trying to create a directory: mkdir(name, mode).
When I use kubectl exec -it <my-pod> bash to access the pod (user is root), I can easily cd into the /data/media directory, create sub-folders and see them reflected in the Azure portal. So my mount is perfectly fine.
I tried chmod-ing /data/media, but that does not work. It seems I cannot change the permissions of the folders on the mounted persistent volume, nor can I add users or change groups. So there is no problem accessing the volume from my pod, but since Django is not running as root, it cannot access it.
How do I resolve this? Thanks.
It turns out that since the Azure file share mount is actually owned by the k8s cluster, the Docker containers running in the pods only mount it as an entry point but cannot modify its permissions since they do not own it.
The reason it started happening now is explained here:
... it turned out that the default directory mode and file mode differ between Kubernetes versions. So while the access mode is 0777 for Kubernetes v1.6.x and v1.7.x, in the case of v1.8.6 or above it is 0755
So for me the fix was adding the required access permissions for the mounted volume to k8s spec like so:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <volumeName>
  annotations:
    volume.beta.kubernetes.io/storage-class: <className>
spec:
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
  accessModes:
    - ReadWriteMany
...
** I wrote 0777 as an example. You should carefully set what's right for you.
Hope this helps anyone.

cloud-init cc_mounts.py ignores AWS EFS mounts

I am deploying an Amazon Linux AMI to EC2, and have the following directive in my user_data:
packages:
  - amazon-efs-utils
mounts:
  - [ "fs-12345678:/", "/mnt/efs", "efs", "tls", "0", "0" ]
I am expecting this to add the appropriate line to my /etc/fstab and mount the Amazon EFS filesystem. However, this does not work. Instead I see the following in my /var/log/cloud-init.log log file:
May 10 15:16:51 cloud-init[2524]: cc_mounts.py[DEBUG]: Attempting to determine the real name of fs-12345678:/
May 10 15:16:51 cloud-init[2524]: cc_mounts.py[DEBUG]: Ignoring nonexistent named mount fs-12345678:/
May 10 15:16:51 cloud-init[2524]: cc_mounts.py[DEBUG]: changed fs-12345678:/ => None
If I manually add the expected entry to my /etc/fstab, I can indeed mount the filesystem as expected.
I've found a couple of bugs online that talk about similar things, but they're all either not quite the same problem, or they claim to be patched and fixed.
I need this file system to be mounted by the time I start executing scripts via the cloud_final_modules stage, so it would be highly desirable to have the mounts: directive work rather than having to do nasty hacky things in my later startup scripts.
Can anybody suggest what I am doing wrong, or if this is just not supported?
It is clear that the cloud-init mounts module (cc_mounts.py) does not support the efs "device" name.
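A workaround some people fall back to (a sketch only, not a verified fix; because the runcmd script also executes in the final stage, it may not run before your other startup scripts) is to write the fstab entry by hand and mount it explicitly:
#cloud-config
packages:
  - amazon-efs-utils
runcmd:
  # the entry cc_mounts.py refuses to generate, added manually
  - echo "fs-12345678:/ /mnt/efs efs _netdev,tls 0 0" >> /etc/fstab
  - mkdir -p /mnt/efs
  - mount /mnt/efs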

Can I specify puppet's root execution directory at 'puppet apply' runtime?

Suppose I'm working in AWS, and have an EBS volume attached to an instance. That volume is a copy of a root volume, insofar as it was created by snapshotting the root volume of another instance.
I'd like to run puppet against my EBS volume, but not hardcode its mounted path into my puppet manifests. Suppose it were mounted at /tmp/new-root-vol. Is there any way to run puppet apply against that path without specifying it in the manifest itself?
To put it another way, how could I get this manifest snippet to create /tmp/new-root-vol/testfile without knowing the /tmp/new-root-vol namespace until runtime?
file { 'testfile':
  path    => '/testfile',
  content => 'Hello, volume',
}
One possibility might be chroot. This feature request suggests it might work, as long as the puppet executable is accessible from the new root.
I don't know of any way to do exactly that. However, you can define a variable in your site.pp (or in another manifest you include, such as constants.pp) and use it:
site.pp:
$root_dir = '/'

node default {
  include your_manifests
}
Your file:
file { 'testfile':
  path    => "${root_dir}testfile",
  content => 'Hello, volume',
}
You could even change the root directory per node too.
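If you'd rather not edit site.pp at all, another sketch (assuming a Puppet version where the $facts hash is available; new_root_dir is a made-up fact name) relies on Facter exposing environment variables prefixed with FACTER_ as facts, so the prefix can be supplied at runtime:
# site.pp - new_root_dir comes from the environment; unset means the real root
$root_dir = $facts['new_root_dir']

file { 'testfile':
  path    => "${root_dir}/testfile",   # '/testfile' when the fact is unset
  content => 'Hello, volume',
}
and then run it as:
FACTER_new_root_dir=/tmp/new-root-vol puppet apply site.pp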