aws efs connection timeout at mount - amazon-web-services

I am following this tutorial to mount EFS on an AWS EC2 instance, but when I execute the mount command
sudo mount -t nfs4 -o vers=4.1 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).[EFS-ID].efs.[region].amazonaws.com:/ efs
I get a connection timeout every time.
mount.nfs4: Connection timed out
What may be the problem here?
Thanks in advance!

I found the accepted answer here to be incorrect and insecure, and Bao's answer above is very close - except you don't need an NFS inbound rule on your EC2 (mount target) security group. You just need a security group assigned to your EC2 instance (even with no rules) so that your EFS security group can be limited to that security group... you know, for security! Here's what I found works:
Create a new security group for your EC2 instance. Name it EFS Target, and leave all the rules blank.
Create a new security group for your EFS mount. Name it EFS Mount, and in this one add the inbound rule for NFS. Set the SOURCE for this rule to the EFS Target security group you created above. This limits EFS to accepting connections only from EC2 instances that have the EFS Target security group assigned (see below). If you're not worried about that, you can select "Any" from the Source dropdown and it'll work just the same, without the added level of security.
Go to the EC2 console and add the EFS Target group to your EC2 instance, assuming you're adding the extra security.
Go to the EFS console, select your EFS and choose Manage File System Access.
For each EFS mount target (availability zone), add the EFS Mount security group and remove the VPC default group (if you haven't already).
The mount command in the AWS documentation should work now.
I don't like how they mixed vernacular here: the EC2 instance is referred to as a mount target, but EFS also has individual mount targets for each availability zone. It makes their documentation very confusing, but following the steps above allowed me to mount an EFS file system securely on an Ubuntu server.
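If you prefer the CLI, the security-group part of the steps above looks roughly like this (the VPC ID is a placeholder and the group IDs come back from the create calls; a sketch of the same idea, not an exact transcript of the console steps):
# empty security group for the EC2 instance
aws ec2 create-security-group --group-name EFS-Target --description "EC2 instances allowed to reach EFS" --vpc-id vpc-0123456789abcdef0
# security group for the EFS mount targets
aws ec2 create-security-group --group-name EFS-Mount --description "EFS mount targets" --vpc-id vpc-0123456789abcdef0
# allow NFS into the EFS-Mount group only from instances carrying the EFS-Target group
aws ec2 authorize-security-group-ingress --group-id <sg-of-EFS-Mount> --protocol tcp --port 2049 --source-group <sg-of-EFS-Target>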

Add an inbound rule of type NFS (port 2049) to the security group that your EC2 instances and EFS are running on. It works for me.
Bao

A different answer here, as I faced a very similar error and none of the answers fit.
I was trying to mount an NFS share like below (in my case EKS was doing that on my behalf, but I tested the very same command manually on the worker node with the same result):
[root@host ~]# mount -t nfs fs-abc1234.efs.us-east-1.amazonaws.com:/persistentvolumes /mnt/test
Output was: mount.nfs: Connection timed out
When I simply tried the same command, but using / as the path:
[root@host ~]# mount -t nfs fs-abc1234.efs.us-east-1.amazonaws.com:/ /mnt/test
It worked like a charm!
I really do not understand how a wrong or missing path can lead to a timeout kind of error, but that was the only thing that fixed the problem for me; all the network configuration remained the same.
As I was using EKS/Kubernetes, I decided to mount /, which works, and then use subPath to change the volume mounting point in the container configuration.
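If the subdirectory really has to be mounted directly, one possible workaround (a sketch, assuming the directory simply did not exist on the file system yet; the path names follow the example above) is to mount the root once, create the directory, and then retry the subdirectory mount:
sudo mount -t nfs fs-abc1234.efs.us-east-1.amazonaws.com:/ /mnt/test
sudo mkdir -p /mnt/test/persistentvolumes    # create the directory the subpath mount expects
sudo umount /mnt/test
sudo mount -t nfs fs-abc1234.efs.us-east-1.amazonaws.com:/persistentvolumes /mnt/test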

I had the same problem. Following the Amazon AWS guides, it worked for one of my servers, but another one would not mount the EFS volume. Analyzing the local server's messages log, I found that the outgoing TCP traffic was BLOCKED even though the associated Security Group was set to allow any outgoing traffic (on any port, to any external address, etc.). Setting a rule on the Security Group to allow TCP connections from the EC2 host to the EFS service on port 2049 had no effect, while setting a specific rule in the local iptables firewall did the job and resolved the issue. I can't figure out why there was this discrepancy, but it worked for me. As far as I know, the local iptables firewall should not need to be touched, and it should obtain its rules directly from the Security Group in the AWS console.
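For reference, the kind of iptables change described above looks roughly like this (a sketch; adjust the chain and rule position to your own ruleset):
sudo iptables -I OUTPUT 1 -p tcp --dport 2049 -j ACCEPT   # insert an accept rule for outbound NFS before any rule that might drop it
sudo iptables -L OUTPUT -n --line-numbers                 # inspect the chain to see which rule was blocking the traffic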

Same issue here. After a while I noticed it picks 3 random subnets for the mount targets, one per AZ.
I was unlucky: one of these subnets didn't have the correct NACL.
After assigning the correct subnet/SG per mount target, it immediately worked fine using both DNS and IP.
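To see which network ACL is attached to a mount target's subnet, something like this should work (the subnet ID is a placeholder):
aws ec2 describe-network-acls --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0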

I got the answer.
This happens when the subnet is blocking the traffic.
Go to the subnets (the ones you selected while creating the EFS) and allow the traffic to the particular target systems.
Check the EFS file system's subnets.
Go to the subnet.
Add a rule.
Allow all traffic (you can restrict it to your specific target systems).
This worked in my case.
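If the blocking rule turns out to be in the subnet's network ACL, an allow rule can be added from the CLI along these lines (the ACL ID, rule number, and CIDR are placeholders; note that NACLs are stateless, so return traffic on ephemeral ports needs an outbound allow as well):
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 --ingress --rule-number 110 --protocol tcp --port-range From=2049,To=2049 --cidr-block 10.0.0.0/16 --rule-action allow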

Follow these steps
Create a security group allowing NFS traffic inbound.
Note down the Availability Zone of the EC2 instance that will be used for mounting.
Go to EFS - select the file system - Network - edit the security group of the mount target in that Availability Zone (Step 2) - add the security group from Step 1.
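The same change can be made from the CLI once you know the mount target ID (all IDs below are placeholders):
# list the mount targets and note the one in your instance's Availability Zone
aws efs describe-mount-targets --file-system-id fs-0123456789abcdef0
# attach the security group from Step 1 to that mount target
aws efs modify-mount-target-security-groups --mount-target-id fsmt-0123456789abcdef0 --security-groups sg-0123456789abcdef0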

For me it was simply that an EC2 disk was full.
I cleaned it up, rebooted the instance, and it worked.
To check your disk usage, use: df -h or du -h --max-depth=1 /

Related

Network interface of AWS EFS fails to be associated with a static IP

My issue is basically what is said in this question, except it's about EFS rather than EC2, and I can't solve my problem with Route 53, as is suggested there.
I have an EFS instance and I try to mount it locally on my Windows machine (over WSL running Ubuntu 22.04.1 LTS) like so:
sudo mount -t efs -o tls,accesspoint=fsap-08fa969084c23b344 fs-003f3467bf1e15b13:/ efs
This results in the following:
Failed to resolve "fs-003f3467bf1e15b13.efs.us-east-1.amazonaws.com" - check that your file system ID is correct, and ensure that the VPC has an EFS mount target for this file system ID.
See https://docs.aws.amazon.com/console/efs/mount-dns-name for more detail.
Attempting to lookup mount target ip address using botocore. Failed to import necessary dependency botocore, please install botocore first.
It seems that the issue arises from the fact that I'm not trying to access EFS from an EC2 instance in my VPC, but from the public internet, where the DNS name fs-003f3467bf1e15b13.efs.us-east-1.amazonaws.com can't be resolved and the private IP 172.31.43.109 obviously can't be reached.
Therefore, I want to assign a static Elastic IP to the network interface of EFS, so I can access it publicly, but I get the following error:
Failed to associate address with eni-0fa8cf69d68b7bb01: You do not have permission to access the specified resource.
I don't think that I "do not have permission" because I'm the owner of the account and I have the AdministratorAccess IAM policy.
Is there a way to make EFS publicly accessible or mount it in any other way on my own machine?
That's not a supported configuration on AWS. You can't assign a public IP to EFS. You need to look into SSH tunneling, or a VPN connection into the VPC, in order to mount an EFS volume from outside the VPC.
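If you only need occasional access from your own machine, one workaround (a sketch, assuming you have an EC2 instance in the same VPC to use as a bastion; the user name, host name, and mount point are placeholders) is to tunnel NFS over SSH - the bastion resolves the EFS DNS name from inside the VPC:
ssh -f -N -L 2049:fs-003f3467bf1e15b13.efs.us-east-1.amazonaws.com:2049 ec2-user@my-bastion-host
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,port=2049 localhost:/ /mnt/efs
Note that this is a plain NFS mount; the efs-utils tls and accesspoint options won't work through the tunnel.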
My guess is that AWS doesn't allow me to make EFS publicly available because that might make their AWS Transfer Family product obsolete, since it seems to solve the same problem - using EFS outside the cloud.
That's a very cynical take on things. In actuality Amazon simply designed EFS to be a service that complemented their compute services (EC2, ECS, EKS, Fargate, and Lambda). They did not design it to be a global, public NFS mountable file system.

EFS mount in ECS "Failed to resolve"

I am trying to create a Fargate container with an EFS volume mounted via an access point, all created through CloudFormation. I see the EFS created in the portal, however the ECS task is failing with:
Failed to resolve "fs-XXX.efs.eu-west-2.amazonaws.com" - check that your file system ID is correct
Before adding the access point, the mounting worked. I need the access point since the container uses a non-root user.
The VPC has DNS and hostname lookup enabled.
Here is the cloudformation template:
https://pastebin.com/CgtvV17B
The problem was a missing EFS mount target: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-efs-mounttarget.html
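For reference, the missing piece is an AWS::EFS::MountTarget resource in the CloudFormation template (one per subnet the tasks run in), or from the CLI something like this (IDs are placeholders):
aws efs create-mount-target --file-system-id fs-0123456789abcdef0 --subnet-id subnet-0123456789abcdef0 --security-groups sg-0123456789abcdef0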
I think the Fargate tasks can't reach the EFS file system. Check that the EFS subnets are reachable from the Fargate tasks (deployed in the same subnets, at least) and that the route tables are well configured. Check that the security groups of the ECS tasks and EFS are well configured (make sure your EFS security group allows TCP 2049).
Also check the Fargate platform version; I think it only works with EFS for versions 1.4.0 and later.
Try deploying an EC2 instance with the same configuration (same VPC and subnet properties) and check whether it can reach the EFS.

Unable to SSH into AWS EC2 Instance: Operation timed out

Please help! I've spent multiple days trying to ssh into my EC2 instance.
I'd been able to do this for the first 24 or so hours. Then as I was adding dependencies to my instance I got booted. Now I'm unable to get back in. At one point my Public DNS changed but I've accounted for this.
My security groups, VPCs, internet gateways, route tables, subnets, firewall, etc. seem to all be in order too.
What is the issue here? Please advise!
Test connectivity to SSH
Create another EC2 instance in the same subnet as the target EC2 instance.
Make sure the egress rules allow all outbound traffic, and that inbound to port 22 is allowed.
Copy the SSH private key to ~/.ssh/ and make sure to remove group/other read/write permissions.
Install telnet or nc if not installed on the new EC2 instance.
Test the connectivity to the target EC2 instance from the new one.
telnet ${TARGET_HOST_IP} 22
If this works and you can connect, then the SSH server is up and running. If not, the SSH server is not running, or port 22 is not open.
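For example, from the new instance (the key file name, user name, and target IP are placeholders):
chmod 600 ~/.ssh/my-key.pem                              # remove group/other permissions on the key
nc -vz ${TARGET_HOST_IP} 22                              # alternative to telnet for testing port 22
ssh -i ~/.ssh/my-key.pem -v ec2-user@${TARGET_HOST_IP}   # -v shows at which stage the connection fails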
If somehow the SSH server is down, there are a few ways to try to fix it.
See User is reporting that they're unable to SSH into an EC2 instance in AWS?
for options such as mounting the root EBS volume on another EC2 instance, or using user data to reconfigure.
Login to EC2 from EC2 console
Connect Using the Browser-based Client
If you can log in, make sure the SSH server is up and running. Then make sure ~/.ssh/authorized_keys has the public key. Check /var/log/auth.log or /var/log/secure to verify whether the login gets denied when you try to SSH into the EC2 instance from outside.
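For instance (the log file name depends on the distribution):
sudo tail -f /var/log/auth.log                    # Debian/Ubuntu
sudo grep -i sshd /var/log/secure | tail -n 50    # Amazon Linux / RHEL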
Clone to investigate or to replace
If you can, shut down the EC2 instance, take an EBS snapshot of the root volume, then mount it on another EC2 instance you can SSH into, and investigate dmesg and the /var/log files for any errors that may prevent SSH connections. Verify the SSH server configuration and the ~/.ssh/ files.
Or simply copy the contents you need from the EBS snapshot to a new EC2 instance and replace the original one with the new one.
AWS is clear that to create a snapshot of the root EBS volume, the instance needs to be shut down. Otherwise the integrity of the snapshot is not assured.
Update
To restore the SSH public key or the permissions of the ~/.ssh folder, also see User is reporting that they're unable to SSH into an EC2 instance in AWS?

How to grant EFS mount target access to DataSync Agent on-premise?

We have an on-premise DataSync agent (VM image) running, and an EFS file system with a mount target.
We want to grant the agent access to the mount target in order to run sync tasks. However, there does not seem to be any security group assignable to the agent that we could use to grant access to the mount target.
So, currently, we grant public egress access to the mount target. Is there any way to nail this down to the agent?
If the agent was running on an EC2 instance, the instance itself could have a security group assigned, but there does not appear to be any alternative when the agent is running on-premise.
Turns out, I had a misconception.
DataSync locations have a security group assigned, which is used when running DataSync tasks with that location. It is that security group that needs to be allowed access in the EFS mount target's security group.

AWS switch from EBS to EFS

I was thinking about switching from AWS Elastic Block Storage to AWS Elastic Filesystem (mainly for the easy scalability, also shareable storage seems nice).
At the moment I have one debian EC2 instance with one EBS volume. What's the easiest way to transfer my data from EBS to EFS?
The fastest way to achieve this is to mount the EFS file system on your EC2 instance (the one with the EBS volume) and then transfer the data from EBS to EFS.
Follow this guide for mounting EFS on your EC2 instance: https://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html
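On Amazon Linux, the mount itself looks roughly like this with the amazon-efs-utils helper (the file system ID and mount point are placeholders; on Debian you would either build efs-utils from source or use a plain nfs4 mount as shown in the guide):
sudo yum install -y amazon-efs-utils
sudo mkdir -p /mnt/efs
sudo mount -t efs -o tls fs-0123456789abcdef0:/ /mnt/efs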
EFS is good for sharing data between multiple EC2 instances, but you would still want to use EBS for the root drive (boot volume) of your instance.
You cannot boot from an EFS volume.
You mention that you have "one debian EC2 instance with one EBS volume". However, it is generally best to keep data separate from the boot volume (eg in a database, an S3 bucket or in EFS). This allows the instance to be recreated from an AMI in case of problems, without losing data.
If you wish to move/copy data to an EFS volume, just use normal filesystem commands (eg cp -r).
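For example (the paths are placeholders; rsync preserves ownership and permissions and can be re-run to pick up files that changed during the first pass):
sudo mkdir -p /mnt/efs/data
sudo rsync -aP /data/ /mnt/efs/data/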
I think you can also use AWS DataSync to copy data from an existing folder to an EFS-mounted folder.
(1) You need to set up an NFS service using the instance that your EBS is attached to.
cf. https://linuxhint.com/install-and-configure-nfs-server-ubuntu-22-04/ for an example and step-by-step instructions.
You can test that your NFS server works by using another instance and mounting it there using /etc/fstab (I think the link above shows you how to do that).
You will need the IP address of your NFS server (for me, this is 10.0.33.5).
(2) You will need to deploy a DataSync agent - this is a new instance. It needs lots of RAM (so, expensive) - e.g. m1.xlarge - especially if your EBS is big and has many thousands of files. Look here for that: https://docs.aws.amazon.com/datasync/latest/userguide/deploy-agents.html#ec2-deploy-agent
You now have a DataSync agent instance (which should be in the same subnet and AZ as your NFS instance) showing in your EC2 console. You will need its private IP address (for me, this is 10.0.33.111).
(3) You need to create an AWS endpoint (in VPC). You are going to add one with AWS Services - search for and choose 'datasync', and add it to the subnets that your NFS server and agent are on. Once that is created, you will need the endpoint's IP address in the subnet/AZ that you are using (for me, this is 10.0.33.222).
(4) You will need to get your agent activation key. SSH into an instance (like your NFS server) on the same subnet, then get your key using the URL below with your region (mine is eu-west-1) and the two IP addresses you have recorded. Do not use MY ones!!
curl "http://10.0.33.111/?gatewayType=SYNC&activationRegion=eu-west-1&privateLinkEndpoint=10.0.33.222&endpointType=PRIVATE_LINK&no_redirect"
If all is well you will get a long Activation Key string like XXXX-XXXX-XXXX-XXXX
(5) Now you need to add all this to your DataSync Agents list (the hypervisor is Amazon EC2, using a VPC endpoint with AWS PrivateLink; the endpoint should show automatically) and paste in your activation key from step 4 above. You should now see an active agent in your Agents list (in DataSync).
(6) Now you can create a Location that uses that agent. Select NFS and your agent, then enter the IP address of your NFS server (from step 1, e.g. 10.0.33.5) and the mount path (the same as what you used in the /etc/exports file on the instance where your EBS is attached, e.g. /mnt/mydrive).
(7) NOW you can create a DataSync task from your NFS location to your EFS location.
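If you prefer the CLI for this last step, it looks roughly like this (the account ID and the location and task ARNs are placeholders for the ones created above):
aws datasync create-task --source-location-arn arn:aws:datasync:eu-west-1:123456789012:location/loc-0123456789abcdef0 --destination-location-arn arn:aws:datasync:eu-west-1:123456789012:location/loc-0fedcba9876543210 --name ebs-to-efs
aws datasync start-task-execution --task-arn arn:aws:datasync:eu-west-1:123456789012:task/task-0123456789abcdef0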