EFS can't be mounted on EC2 instances - amazon-web-services

I'm trying to mount an EFS file system on my EC2 instances.
I followed this walkthrough closely, but the EFS does not mount when I use the DNS name.
When I use the IP it works, but the files created by instance 1 do not appear inside the mounted folder on instance 2. In other words, the EFS is not really shared.
Please help.
For information, DNS settings are enabled in the VPC.
EFS and EC2 are in the same VPC.
The EFS security group has an ingress rule that allows the EC2 security group on port 2049.
What else should I check?
root@ip:~# mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $EC2_AVAIL_ZONE.fs-4644458f.efs.$REGION.amazonaws.com:/ /efs-mount-point
mount.nfs4: Failed to resolve server eu-west-1a.fs-4644458f.efs.eu-west-1.amazonaws.com: Name or service not known
root@ip:~#
root@ip:~# mount -a -t nfs4
mount.nfs4: Failed to resolve server eu-west-1a.fs-4644458f.efs.eu-west-1.amazonaws.com: Name or service not known
root@ip:~#
root@ip:~# mount -a
mount.nfs4: Failed to resolve server eu-west-1a.fs-4644458f.efs.eu-west-1.amazonaws.com: Name or service not known
root@ip:~#
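In case it helps, this is how I verified that DNS resolution and DNS hostnames are enabled on the VPC (the VPC ID below is a placeholder); both return "Value": true:
# both attributes must be true for EFS DNS names to resolve inside the VPC
aws ec2 describe-vpc-attribute --vpc-id vpc-xxxxxxxx --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-xxxxxxxx --attribute enableDnsHostnames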

If you have a custom DNS server, you may need to redirect DNS queries for AWS domains to the Amazon DNS server:
echo "server=/amazonaws.com/169.254.169.253" > /etc/dnsmasq.d/amazonaws.com.conf
echo "prepend domain-name-servers 127.0.0.1;" >> /etc/dhcp/dhclient.conf
service dnsmasq restart
service network restart
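After the restart, you can verify that the EFS hostname resolves through dnsmasq (the file system ID is the one from the question):
# should print the mount target's private IP once the redirect is in place
dig @127.0.0.1 +short eu-west-1a.fs-4644458f.efs.eu-west-1.amazonaws.com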

Related

AWS NFS mount volume issue in kubernetes cluster (EKS)

I am using AWS EKS. When I try to mount EFS to my EKS cluster, I get the following error:
Warning FailedMount 3m1s kubelet Unable to attach or mount volumes: unmounted volumes=[nfs-client-root], unattached volumes=[nfs-client-root nfs-client-provisioner-token-8bx56]: timed out waiting for the condition
Warning FailedMount 77s kubelet MountVolume.SetUp failed for volume "nfs-client-root" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/b07f3f15-b655-435c-8ec1-8d14b8690c1d/volumes/kubernetes.io~nfs/nfs-client-root --scope -- mount -t nfs 172.31.26.154:/mnt/nfs_share/ /var/lib/kubelet/pods/b07f3f15-b655-435c-8ec1-8d14b8690c1d/volumes/kubernetes.io~nfs/nfs-client-root
Output: Running scope as unit run-23226.scope.
mount.nfs: Connection timed out
I also tried to connect to an external NFS server and got the same warning message.
I have opened inbound rules to allow all traffic in the EKS cluster, EFS, and NFS security groups.
If the problem is that the nodes are missing nfs-common, please let me know the steps to install the nfs-common package on the nodes.
Since I am using AWS EKS, I am unable to log in to the nodes.
When creating an EC2 machine for an external NFS server, you must add it to the VPC used by the EKS cluster and include it in the security group that the nodes use to communicate with each other; see the sketch below.
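As a sketch, with placeholder security group IDs, allowing the node security group to reach the NFS server on TCP port 2049 with the AWS CLI could look like this:
# allow NFS (TCP 2049) from the EKS node security group into the NFS server's group
aws ec2 authorize-security-group-ingress \
    --group-id sg-nfs-server-placeholder \
    --protocol tcp \
    --port 2049 \
    --source-group sg-eks-nodes-placeholder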

Unable to attach EFS to EC2 instances. We tried various ways to mount, but it throws the same error

Unable to attach EFS to EC2 instances. We tried various ways to mount it, but it throws the same error. Logs:
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-c04ksakbe520.efs.us-wsjn-1.amazonaws.com:/ efs
mount.nfs4: Connection timed out
We found that the VPC was using a custom DNS server in the DHCP option set to resolve the company's on-premises URLs. In order to mount an EFS file system using its DNS name, the connecting EC2 instance must be inside a VPC and must be configured to use the DNS server provided by Amazon [1]. Using the IP address of the mount target in the same Availability Zone as the instance (us-east-1a), we were able to mount the EFS [2] with the following command:
mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 83.23.23.4:/ efs-mount-point
Then we added the following line to /etc/fstab to mount the EFS automatically on boot:
83.23.23.4:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0
We tested it successfully by running "mount -a".
[1] Mounting on Amazon EC2 with a DNS Name - https://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-dns-name.html
[2] Mounting File Systems Without the EFS Mount Helper - https://docs.aws.amazon.com/efs/latest/ug/mounting-fs-old.html
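If you need the mount target IP for a given Availability Zone, the EFS API can list it (the file system ID below is a placeholder):
# lists each mount target's Availability Zone and private IP address
aws efs describe-mount-targets \
    --file-system-id fs-xxxxxxxx \
    --query 'MountTargets[*].[AvailabilityZoneName,IpAddress]' \
    --output table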

'unknown host' error when trying to link EB and EFS from different regions

Today my EC2 instance had some trouble; EB did its job, created a new instance, and terminated the old one.
The problem is that my /mnt/efs folder is empty.
I tried to mount it by hand, but I get this error:
unknown host fs-xxx.efs.eu-central-1.amazonaws.com
Here is my command to mount the volume:
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-846896dd.efs.eu-central-1.amazonaws.com:/ /mnt/efs
Another important thing: my Elastic Beanstalk environment and EFS are not in the same region:
EB -> Paris
EFS -> Frankfurt
I found that I cannot link EFS and EB across regions because they must be in the same region, so I moved EB to Frankfurt.
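For anyone hitting the same issue, you can confirm which region a file system actually lives in by listing EFS file systems per region:
# the file system only shows up in the region that owns it
aws efs describe-file-systems --region eu-central-1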

Accessing GCP Memorystore from local machines

What's the best way to access Memorystore from local machines during development? Is there something like Cloud SQL Proxy that I can use to set up a tunnel?
You can spin up a Compute Engine instance and use port forwarding to connect to your Redis machine.
For example if your Redis machine has internal IP address 10.0.0.3 you'd do:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
gcloud compute ssh redis-forwarder -- -N -L 6379:10.0.0.3:6379
As long as you keep the SSH tunnel open, you can connect to localhost:6379.
Update: this is now officially documented:
https://cloud.google.com/memorystore/docs/redis/connecting-redis-instance#connecting_from_a_local_machine_with_port_forwarding
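With the tunnel open, a quick connectivity check (assuming redis-cli is installed on your local machine) looks like this:
# PONG confirms the tunnel reaches the Memorystore instance
redis-cli -h 127.0.0.1 -p 6379 ping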
I created a VM on Google Cloud:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
then SSHed into it and installed haproxy:
sudo su
apt-get install haproxy
then added the following to the config file /etc/haproxy/haproxy.cfg, below the existing contents:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
then restarted haproxy:
/etc/init.d/haproxy restart
I was then able to connect to Memorystore from my local machine for development.
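A quick check from the local machine (the VM's external IP below is a placeholder):
# PONG confirms haproxy is forwarding to Memorystore
redis-cli -h <vm-external-ip> -p 6379 ping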
You can spin up a Compute Engine instance and set up haproxy using the official haproxy Docker image; haproxy will then forward your TCP requests to Memorystore.
For example, I want to access a Memorystore instance with IP 10.0.0.12, so I added the following haproxy config:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server 10.0.0.12:6379 check
Now you can access Memorystore from your local machine using the following command:
redis-cli -h <your-haproxy-public-ip-address> -p 6379
Note: replace <your-haproxy-public-ip-address> with your actual haproxy IP address.
I hope that helps you solve your problem.
This post builds on earlier ones and should help you bypass firewall issues.
Create a virtual machine in the same region (and zone, to be safe) as your Memorystore instance. On this machine:
Add a network tag, which we will use to create a firewall rule allowing traffic on port 6379.
Add an external IP, which you will use to access this VM.
SSH into this machine and install haproxy:
sudo su
apt-get install haproxy
Add the following below the existing config in the /etc/haproxy/haproxy.cfg file:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
Restart haproxy:
/etc/init.d/haproxy restart
Now create a firewall rule that allows traffic on port 6379 to the VM (a sketch follows below). Ensure:
It has the same target tag as the network tag we created on the VM.
It allows traffic on port 6379 for the TCP protocol.
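A sketch of such a rule, with placeholder network and tag names:
# allow Redis traffic to VMs tagged redis-proxy on the my-network network
gcloud compute firewall-rules create allow-redis-6379 \
    --network my-network \
    --allow tcp:6379 \
    --target-tags redis-proxy \
    --source-ranges 0.0.0.0/0
For development you may want to narrow --source-ranges to your own IP.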
Now you should be able to connect remotely like so:
redis-cli -h [VM IP] -p 6379
Memorystore does not allow connections from local machines, and the other routes (Compute Engine, App Engine) are expensive, especially if your project is small or still in development. I suggest you create a Cloud Function to talk to Memorystore; it is a serverless service, which means a lower fee to execute. I wrote a small tool for this, and the result is similar to running it on a local machine. You can check whether it helps you.
Like @Christiaan answered above, it almost worked for me, but I needed to check a few other things to make it work well.
Firstly, in my case, Redis is running in a specific network other than the default one, so I had to create the jump box inside the same network (let's call it my-network).
Secondly, I needed to apply a firewall rule to open port 22 in that network.
Putting all the needed commands together, it looks like this:
gcloud compute firewall-rules create default-allow-ssh --project=my-project --network my-network --allow tcp:22 --source-ranges 0.0.0.0/0
gcloud compute instances create jump-box --machine-type=f1-micro --project my-project --zone europe-west1-b --network my-network
gcloud compute ssh jump-box --project my-project --zone europe-west1-b -- -N -L 6379:10.177.174.179:6379
Then I had access to Redis locally on port 6379.

Docker Container/AWS EC2 Public DNS Refusing to Connect

I am unable to connect to my EC2 instance via its public DNS in a browser, even though port 80 is open for inbound and outbound traffic in the security groups "default" and "launch-wizard-1".
It may be important to note that I have a Docker image running on the instance, which I launched with:
docker run -d -p 80:80 elasticsearch
I'm under the impression this forwards port 80 of the container to port 80 of the EC2 instance. Is that correct?
The problem was that Elasticsearch serves HTTP over port 9200.
So the correct command was:
docker run -d -p 80:9200 elasticsearch
The command was run under root.
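A quick way to confirm the new mapping works (the public DNS below is a placeholder):
# Elasticsearch answers GET / with a JSON banner (cluster name, version, tagline)
curl http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com/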