Issue with mounting EFS access point from an AWS ECS Fargate task

I am trying to launch a Fargate task that mounts an EFS access point. The EFS file system's mount target is in the same subnet, VPC, and security group as my ECS service.
Still, I keep getting the same error: "failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve "fs-0b5a160420b31f547.efs.us-east-1.amazonaws.com" - check that your file system ID is correct, and ensure that the VPC has an EFS mount target for this file system ID"
The security groups for both ECS and EFS allow port 2049 inbound and outbound.

I found the solution on my own: I had not enabled DNS names on the VPC, as the setting was disabled. I changed it to enabled and it works.
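For reference, a rough sketch of enabling both VPC DNS attributes from the CLI; the VPC ID is a placeholder, and the API accepts only one attribute per call:
# vpc-0123456789abcdef0 is a placeholder; EFS DNS names need both attributes enabled
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames "{\"Value\":true}"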

Related

Access AWS EFS from two different VPC's in same account

I have an EFS file system with two mount targets, one in us-east-1a and another in us-east-1b, both in the same VPC. Now I need to add a mount target in a different VPC but in the same account. When I try to create the mount target I get the error below:
aws efs create-mount-target --file-system-id fs-abcdef --subnet-id subnet-156fd195808k8l --security-groups sg-99b88u518a368dp
An error occurred (MountTargetConflict) when calling the CreateMountTarget operation: requested subnet for new mount target is not in the same VPC as existing mount targets
Is there a way I can use the EFS in two different VPCs?
VPC peering or a Transit Gateway is enough for an NFS client in a different VPC to connect to an EFS file system in a separate VPC.
Only one mount target per AZ is needed for a given EFS file system. The error shows that you already have a mount target for this specific EFS.
To connect your NFS client, you can follow the AWS-provided documentation.
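For example, once the peering or Transit Gateway routing is in place, the client in the other VPC can mount using a mount target's IP address instead of the EFS DNS name (which does not resolve from the peered VPC by default); the IP address and mount path below are placeholders:
# 10.0.1.25 stands in for the IP of a mount target in the VPC where the EFS lives
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 10.0.1.25:/ /mnt/efs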

EFS mount in ECS "Failed to resolve"

I am trying to create a Fargate container with an EFS volume mounted via an access point, all created through CloudFormation. I can see the EFS file system in the console, but the ECS task fails with:
Failed to resolve "fs-XXX.efs.eu-west-2.amazonaws.com" - check that your file system ID is correct
Before adding the access point, mounting worked. I need the access point because the container runs as a non-root user.
The VPC has DNS and hostname lookup enabled.
Here is the cloudformation template:
https://pastebin.com/CgtvV17B
The problem was a missing EFS mount target: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-efs-mounttarget.html
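If you are unsure whether mount targets exist at all, a quick check from the CLI (the file system ID is a placeholder):
# An empty MountTargets list explains the "Failed to resolve" error
aws efs describe-mount-targets --file-system-id fs-0123456789abcdef0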
I think the Fargate tasks can't reach the EFS file system. Check that the EFS subnets are reachable from the Fargate tasks (deployed in the same subnets at least) and that the route tables are configured correctly. Also check that the security groups of the ECS tasks and EFS are configured correctly (the EFS side must allow TCP 2049).
Also check the Fargate platform version; EFS support requires platform version 1.4.0 or later.
Try deploying an EC2 instance with the same configuration (same VPC and subnet properties) and check whether it can reach the EFS, for example as shown below.
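From such a test instance, something along these lines (the file system ID and region are placeholders) checks both DNS resolution and reachability of the NFS port:
# Should resolve to a mount target IP if VPC DNS settings and mount targets are in place
nslookup fs-0123456789abcdef0.efs.eu-west-2.amazonaws.com
# Should report the port open if security groups and NACLs allow TCP 2049
nc -zv fs-0123456789abcdef0.efs.eu-west-2.amazonaws.com 2049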

How to grant EFS mount target access to DataSync Agent on-premise?

We have an on-premises DataSync agent (VM image) running, and an EFS file system with a mount target.
We want to grant the agent access to the mount target in order to run sync tasks. However, there does not seem to be any security group assignable to the agent to which we could grant access to the mount target.
So, currently, we grant public access to the mount target. Is there any way to lock this down to the agent?
If the agent were running on an EC2 instance, the instance itself could have a security group assigned, but there does not appear to be any equivalent when the agent runs on-premises.
Turns out, I had a misconception.
DataSync locations have a security group assigned, which is used when running DataSync tasks against that location, and it is that security group that needs to be granted access in the EFS mount target's security group.
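For illustration, the security group is attached when the EFS location is created, roughly like this (all ARNs are placeholders):
# SecurityGroupArns references the group that the mount target's security group must allow on TCP 2049
aws datasync create-location-efs \
  --efs-filesystem-arn arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0 \
  --ec2-config SubnetArn=arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0,SecurityGroupArns=arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0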

Why have the same security group, and how can an instance in a VPC have a different one?

I would like to create an EFS file system in AWS, and the documentation says I can attach it only to instances that have the same security group as my VPC.
How do I find the security group of my VPC?
Suppose it is the default one and my instances have different security groups, created at different times by different wizards. How can an instance belong to the VPC but have a different security group than that VPC?
Amazon Elastic File System (EFS) is a regional service. If you create an EFS file system in a particular region (e.g. us-east-1), you can create multiple EC2 instances in different Availability Zones of that same region to read and write data on it.
All the EC2 instances in a particular region must belong to a VPC and a subnet (unless you use EC2-Classic). A VPC maps to a region and a subnet maps to an Availability Zone. You can set up mount targets in the Availability Zones of your VPC, so that EC2 instances can connect to the EFS via a mount target and share the same file system.
Have a look at the mount target diagram in the AWS documentation.
Now, how can we make sure that our EFS can only be accessed by a certain set of EC2 instances and not by all the instances from all the subnets?
This is where security groups come in handy. We can assign security groups to the EFS mount targets so that only EC2 instances with the given security group attached can access the EFS via the mount target. Any other EC2 instances, in different security groups, cannot access the EFS. This is how we restrict access to EFS.
So, when you mount the EFS on an EC2 instance, you have to add the same security group that is on the EFS mount target to the EC2 instance.
Both an Amazon EC2 instance and a mount target have associated security groups. These security groups act as a virtual firewall that controls the traffic between them. If you don't provide a security group when creating a mount target, Amazon EFS associates the default security group of the VPC with it.
Regardless, to enable traffic between an EC2 instance and a mount target (and thus the file system), you must configure the following rules in these security groups:
The security groups you associate with a mount target must allow inbound access for the TCP protocol on the NFS port from all EC2 instances on which you want to mount the file system.
Each EC2 instance that mounts the file system must have a security group that allows outbound access to the mount target on the NFS port.
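Translated into CLI terms, those two rules could look roughly like this (both group IDs are placeholders; the first is on the mount target, the second on the instance):
# Mount target side: allow NFS in from the instances' security group
aws ec2 authorize-security-group-ingress --group-id sg-0aaa1111bbbb2222c --protocol tcp --port 2049 --source-group sg-0bbb3333cccc4444d
# Instance side: allow NFS out to the mount target's security group
aws ec2 authorize-security-group-egress --group-id sg-0bbb3333cccc4444d --ip-permissions 'IpProtocol=tcp,FromPort=2049,ToPort=2049,UserIdGroupPairs=[{GroupId=sg-0aaa1111bbbb2222c}]'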
Read more about EFS security groups here.
Hope this helps.

AWS EFS connection timeout at mount

I am following this tutorial to mount EFS on an AWS EC2 instance, but when I execute the mount command
sudo mount -t nfs4 -o vers=4.1 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).[EFS-ID].efs.[region].amazonaws.com:/ efs
I get a connection timeout every time.
mount.nfs4: Connection timed out
What may be the problem here?
Thanks in advance!
I found the accepted answer here to be incorrect and insecure, and Bao's answer above is very close, except you don't need an NFS inbound rule on your EC2 (mount target) security group. You just need a security group assigned to your EC2 instance (even with no rules) so that your EFS security group can be limited to that security group... you know, for security! Here's what I found works:
Create a new security group for your EC2 instance. Name it EFS Target, and leave all the rules blank
Create a new security group for your EFS mount. Name it EFS Mount, and in this one add the inbound rule for NFS. Set the SOURCE for this rule to the EFS Target security group you created above. This limits EFS to only accepting connections from EC2 instances that have the EFS Target security group assigned (see below). If you're not worried about that, you can select "Any" from the Source dropdown and it'll work just the same, without the added level of security
Go to the EC2 console, and add the EFS Target group to your EC2 instance, assuming you're adding the extra security
Go to the EFS Console, select your EFS and choose Manage File System Access
For each EFS Mount Target (availability zone), you need to add the EFS Mount security group and remove the VPC Default group (if you haven't already)
The mount command in the AWS documentation should work now
I don't like how they mixed vernacular here, with EC2 being called a mount target while EFS also has individual mount targets per Availability Zone. It makes the documentation very confusing, but following the steps above (sketched below) allowed me to mount an EFS securely on an Ubuntu server.
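A rough CLI equivalent of those console steps, with every ID a placeholder:
# "EFS Target" group for the instance: left empty, it only exists to be referenced
aws ec2 create-security-group --group-name "EFS Target" --description "EC2 side of EFS access" --vpc-id vpc-0123456789abcdef0
# Give the "EFS Mount" group an NFS (2049) inbound rule sourced from "EFS Target", as in the earlier sketch,
# then swap the default group on each mount target for it (one call per Availability Zone)
aws efs modify-mount-target-security-groups --mount-target-id fsmt-0123456789abcdef0 --security-groups sg-0123456789abcdef0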
Add an inbound rule of type NFS (port 2049) to the security group that your EC2 instances and EFS are running with. It works for me.
Bao
A different answer here, as I faced a very similar error and none of the answers fit.
I was trying to mount an NFS share like the one below (in my case EKS was doing that on my behalf, but I tested the very same command manually on the worker node with the same result):
[root@host ~]# mount -t nfs fs-abc1234.efs.us-east-1.amazonaws.com:/persistentvolumes /mnt/test
Output was: mount.nfs: Connection timed out
When I simply tried the same command, but using / as the path:
[root@host ~]# mount -t nfs fs-abc1234.efs.us-east-1.amazonaws.com:/ /mnt/test
It worked like a charm!
I really don't understand how a wrong or missing path can lead to a timeout kind of error, but that was the only thing that fixed the problem for me; all the network configuration remained the same.
As I was using EKS/Kubernetes, I decided to mount /, which works, and then use subPath to change the volume mount point in the container configuration.
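If the underlying cause is simply that the persistentvolumes directory does not yet exist on the file system (a guess; the timeout message doesn't prove it), one way to create it outside Kubernetes is a one-off root mount, reusing the file system ID from the example above:
# Mount the root of the file system, create the subdirectory, then remount the subpath
sudo mount -t nfs fs-abc1234.efs.us-east-1.amazonaws.com:/ /mnt/test
sudo mkdir -p /mnt/test/persistentvolumes
sudo umount /mnt/test
sudo mount -t nfs fs-abc1234.efs.us-east-1.amazonaws.com:/persistentvolumes /mnt/test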
I had the same problem. Following the Amazon AWS guides it worked for one of my servers, but another one wouldn't mount the EFS volume. Analyzing the local server's messages log, I found that outgoing TCP traffic was blocked even though the associated security group allowed all outgoing traffic (any port, any destination, etc.). Adding a security group rule to allow TCP connections from the EC2 host to the EFS service on port 2049 had no effect, while adding a specific rule to the local iptables firewall did the job and resolved the issue. I can't figure out why there was this discrepancy, but it worked for me. As far as I know the local iptables firewall should not need to be touched, and it should get its rules directly from the security group in the AWS console.
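A minimal iptables rule along those lines, assuming the OUTPUT chain is what blocks the traffic, would be:
# Insert at the top of OUTPUT so it takes effect before any blocking rule
sudo iptables -I OUTPUT -p tcp --dport 2049 -j ACCEPT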
Same issue here. After a while I noticed it picks 3 random subnets for the mount points, one per AZ.
I was unlucky: one of those subnets didn't have the correct NACL.
After assigning the correct subnet/security group per mount point, it immediately worked fine using both DNS and IP.
I got the answer.
This happens when the subnet is blocking the traffic.
Go to the subnets you selected while creating the EFS and allow the traffic to the relevant target systems:
Check the EFS file system's subnet.
Go to that subnet.
Add a rule.
Allow all traffic (or restrict it to your specific target systems).
This worked in my case.
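To see which network ACL is associated with the EFS subnet and what it currently allows, something like this works (the subnet ID is a placeholder):
# Shows the NACL attached to the subnet, including its inbound and outbound rules
aws ec2 describe-network-acls --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0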
Follow these steps:
Create a security group allowing NFS traffic inbound.
Note the Availability Zone of the EC2 instance that will be used for mounting.
Go to EFS, select the file system, open Network, edit the security groups for the mount target corresponding to the EC2 instance's Availability Zone (step 2), and add the security group from step 1.
For me it was simply that an EC2 instance's disk was full.
I cleaned it up, rebooted the instance, and it worked.
To check your disk usage: df -h or du -h --max-depth=1 /