Access AWS EFS from two different VPCs in the same account - amazon-web-services

I have an EFS file system. I have created two mount targets, one for us-east-1a and another for us-east-1b, both in the same VPC. Now I have a requirement to add a mount target in a different VPC in the same account. When I try to create the mount target I get the below error:
aws efs create-mount-target --file-system-id fs-abcdef --subnet-id subnet-156fd195808k8l --security-groups sg-99b88u518a368dp
An error occurred (MountTargetConflict) when calling the CreateMountTarget operation: requested subnet for new mount target is not in the same VPC as existing mount targets
Is there a way I can use the EFS in two different VPCs?

VPC peering or a Transit Gateway is enough for an NFS client in a different VPC to connect to an EFS file system in a separate VPC.
Only one mount target per AZ is needed for a given EFS file system, and all of a file system's mount targets must live in a single VPC. The error shows that you already have mount targets for this file system in another VPC.
To connect your NFS client, you can follow the AWS documentation.
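For example, once the VPCs are peered, the file system's regional DNS name still only resolves inside the EFS's own VPC, so clients in the peered VPC typically mount by the mount target's private IP address. A minimal sketch, assuming the fs-abcdef ID from the question and region us-east-1 (the lookup and mount commands are commented out since they need live credentials):

```shell
# fs-abcdef is the file-system ID from the question; us-east-1 is an assumption.
FS_ID="fs-abcdef"
REGION="us-east-1"

# The regional DNS name resolves only inside the file system's own VPC:
EFS_DNS="${FS_ID}.efs.${REGION}.amazonaws.com"
echo "Not resolvable from the peered VPC: ${EFS_DNS}"

# From the peered VPC, look up a mount target's private IP and mount by IP:
#   MT_IP=$(aws efs describe-mount-targets --file-system-id "$FS_ID" \
#           --query 'MountTargets[0].IpAddress' --output text)
#   sudo mount -t nfs4 -o nfsvers=4.1,hard,timeo=600,retrans=2,noresvport \
#       "$MT_IP":/ /mnt/efs
```

Mounting by IP skips DNS entirely, which is why it works across a peering connection or Transit Gateway as long as routing and security groups allow TCP 2049.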

Related

How to mount AWS EC2 files on AWS EKS as persistent volume?

I have an AWS EC2 instance (EC2-A) and Amazon Managed Blockchain running in a VPC (VPC-A).
This EC2-A instance has some files and certificates (required for executing transactions on the blockchain).
EC2-A has EBS storage, which can be mounted on only one EC2 instance at a time.
Transactions can only be executed against the blockchain network from EC2-A, since they're in the same VPC-A.
I have an AWS EKS (Kubernetes) cluster running in VPC-B.
How can I access the files and certificates of EC2-A from a pod in my k8s cluster? I also have another pod that will act as a blockchain client, executing transactions against the blockchain network, which is in VPC-A.
Both VPC-A and VPC-B are in the same AWS account.
Mounting a folder/files on an EC2 instance into a pod running in EKS is not supported. For your use case, you can easily share folders/files using EFS, if not S3. If you are only allowed to do pod-to-EC2 communication, you need a way for these resources to reach each other, either by public IP or VPC peering. Then you can run sftp, scp... any kind of off-the-shelf file-sharing software you know best for file exchange.
You need to connect the two VPCs with VPC peering; then you can install an NFS server on your EC2 instance and write a PV and PVC pointing to the NFS server on that EC2.
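A minimal sketch of that PV/PVC pair, with hypothetical names, a placeholder private IP (10.0.1.50) for EC2-A reachable over the peering connection, and an assumed NFS export path of /data:

```yaml
# All names, the server IP, and the export path below are illustrative assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ec2a-nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.1.50   # EC2-A's private IP, reachable from VPC-B via peering
    path: /data         # directory exported by the NFS server on EC2-A
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ec2a-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""   # empty string so the claim binds to the static PV above
  resources:
    requests:
      storage: 5Gi
```

Pods in VPC-B then reference `ec2a-nfs-pvc` in their volume spec to read the files and certificates.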

EFS mount in ECS "Failed to resolve"

I am trying to create a Fargate container with an EFS volume mounted via an access point, all created through CloudFormation. I see the EFS in the console, however the ECS task is failing with:
Failed to resolve "fs-XXX.efs.eu-west-2.amazonaws.com" - check that your file system ID is correct
Before adding the access point, the mounting worked. I need the access point since the container uses a non-root user.
The VPC has DNS and hostname lookup enabled.
Here is the CloudFormation template:
https://pastebin.com/CgtvV17B
The problem was a missing EFS mount target: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-efs-mounttarget.html
I think the Fargate tasks can't reach the EFS file system. Check that the EFS subnets are reachable from Fargate (deployed in the same subnets, at least) and that the route tables are configured correctly. Check that the security groups of the ECS task and the EFS are configured correctly (in particular, that your EFS security group allows inbound TCP 2049).
Also check the Fargate platform version; EFS only works with platform version 1.4.0 or later.
Try deploying an EC2 instance with the same configuration (same VPC and subnet properties) and check whether it can reach the EFS.
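Following that last suggestion, a quick reachability check from a test EC2 instance in the same subnets might look like this. The fs-XXX hostname is the placeholder from the error message, and the diagnostic commands are commented out since they only make sense inside the live VPC:

```shell
# Hostname from the error message (fs-XXX left as the question's placeholder).
EFS_HOST="fs-XXX.efs.eu-west-2.amazonaws.com"

# DNS must resolve inside the VPC, which requires a mount target in it:
#   nslookup "$EFS_HOST"
# Security groups and route tables must allow the NFS port through:
#   nc -zv "$EFS_HOST" 2049

echo "checking ${EFS_HOST}:2049"
```

If `nslookup` fails here too, the problem is the missing mount target (or VPC DNS settings) rather than the ECS task definition.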

How to mount AWS EFS on an EC2 instance in a different availability zone?

I am trying to mount EFS on an EC2 instance. I have created the EFS in a private subnet and the EC2 in a public subnet. The private and public subnets are in different availability zones, for example us-east-1a and us-east-1b.
I am able to connect the EC2 and EFS if I put both of them in the public subnet. The official AWS docs say:
"Ensure that there's an Amazon EFS mount target in the same
Availability Zone as the Amazon EC2 instance"
I don't want to put the EFS in a public subnet.
When mounting the EFS to the EC2 I am getting this error message:
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-b3XXXXXXXXXXXXXXXXX.amazonaws.com:/ /mnt/wordpress
mount.nfs4: Failed to resolve server fs-b3XXXXXXXXXXXXXXXXX.amazonaws.com: No address associated with hostname
The dhcp and dns related settings for VPC are all turned on.
I don't want to put the EFS in a public subnet.
That's good. You shouldn't, although it technically would not matter, because EFS endpoints are private even when placed in a public subnet.
But if you only have two subnets -- one public, one private -- in a VPC, then they almost certainly should be in the same availability zone. Traffic crossing AZ boundaries is billable per gigabyte, which is why you should avoid mounting EFS across zone boundaries. The EFS DNS name resolves to the mount target in the instance's own availability zone, so resolution fails when that zone has no mount target; this error appears to be protecting you from yourself.
As noted, you probably shouldn't have one subnet in one AZ and one in another without a compelling reason, so fixing that is one solution. Another solution is to simply add a new private subnet in the correct zone and create a mount target there. EFS has no problem crossing subnet boundaries within a zone, and there is no bandwidth charge in that case.
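The second fix can be sketched roughly as follows: create a private subnet in the instance's AZ, then add a mount target in it. All IDs and the CIDR are placeholders, and the commands are commented out since they need live credentials:

```shell
# Example value: the AZ where the EC2 instance actually runs.
AZ="us-east-1a"

# Create a private subnet in that AZ (placeholder VPC ID and CIDR):
#   aws ec2 create-subnet --vpc-id vpc-placeholder \
#       --cidr-block 10.0.3.0/24 --availability-zone "$AZ"
# Add an EFS mount target in the new subnet (placeholder IDs):
#   aws efs create-mount-target --file-system-id fs-placeholder \
#       --subnet-id subnet-placeholder --security-groups sg-placeholder

echo "mount target planned for ${AZ}"
```

With a mount target in the instance's own AZ, the `fs-....amazonaws.com` name resolves and the original `mount -t nfs4` command works unchanged.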

Amazon EFS within different zones

Is it possible to use an EFS in AWS for several instances located in different regions?
If not, is it possible to do something like that using the AWS console? Latency or throughput between the EC2 instances and the network volume doesn't matter.
EFS can be accessed through Direct Connect or VPN. Establish a VPN connection between the regions and you can mount the EFS with the IP address of the corresponding mount target.

Why must EFS and instances have the same security group, and how can an instance in a VPC have a different one?

I would like to create an EFS in AWS, and the documentation says that I can attach it only to instances that have the same security group as my VPC.
How do I find the security group of my VPC?
Suppose it is the default one, and my instances have different security groups, created at different times by different wizards. How can an instance belong to the VPC but have a different security group than that VPC?
Amazon Elastic File System (EFS) is a regional service. If you create an EFS in a particular region (e.g. us-east-1), then you can create multiple EC2 instances in different availability zones of that same region to access the EFS to read and write data.
All EC2 instances in a particular region (e.g. us-east-1) must belong to a VPC and a subnet (unless you use EC2-Classic). A VPC maps to a region and a subnet maps to an availability zone. You can set up mount targets in the availability zones of your VPC, so that EC2 instances can connect to the EFS via a mount target and share the same file system.
Have a look at the following image from AWS Documentation.
Now, how can we make sure that our EFS can only be accessed by a certain set of EC2 instances, and not all the instances from all the subnets?
This is where security groups come in handy. We can assign security groups to the EFS mount targets so that only EC2 instances to which the given security group is attached can access the EFS via the mount target. Any other EC2 instances in a different security group cannot access the EFS. This is how we restrict access to EFS.
So, when you are mounting the EFS on an EC2 instance, you have to add the EFS's security group to the EC2 instance as well.
Both an Amazon EC2 instance and a mount target have associated security groups. These security groups act as a virtual firewall that controls the traffic between them. If you don't provide a security group when creating a mount target, Amazon EFS associates the default security group of the VPC with it.
Regardless, to enable traffic between an EC2 instance and a mount target (and thus the file system), you must configure the following rules in these security groups:
The security groups you associate with a mount target must allow inbound access for the TCP protocol on the NFS port from all EC2 instances on which you want to mount the file system.
Each EC2 instance that mounts the file system must have a security group that allows outbound access to the mount target on the NFS port.
Read more about EFS security groups here.
Hope this helps.