I'm using CDH 4.3.0 and I have mounted HDFS using FUSE on my edge node. The FUSE mount point automatically changes the permissions from root:root to hdfs:hadoop.
When I try to export it over NFS, it throws a permissions error at me. Can anyone help me get around this? I've read somewhere that this works only in kernel versions 2.6.27 and above, and mine is 2.6.18... Is this true?
My mount command gives this output for the HDFS FUSE mount:
fuse on /hdfs-root/hdfs type fuse (rw,nosuid,nodev,allow_other,allow_other,default_permissions)
My /etc/exports looks like this:
/hdfs-root/hdfs/user *(fsid=0,rw,wdelay,anonuid=101,anongid=492,sync,insecure,no_subtree_check,no_root_squash)
My /etc/fstab looks like this:
hadoop-fuse-dfs#hdfs:: /hdfs-root/hdfs fuse allow_other,usetrash,rw 2 0
:/hdfs-root/hdfs/user /hdfsbkp nfs rw
The first line is for FUSE, and the second line mounts the NFS export of /hdfs-root/hdfs/user at /hdfsbkp.
When I run mount -a, I get the following error:
"mount: :/hdfs-root/hdfs/user failed, reason given by server: Permission denied"
Also, I tried to change the ownership of the FUSE mount back to root:root, but it wouldn't let me do that either. By the way, we are using Kerberos as the authentication method to access Hadoop.
Any help is really appreciated!!!
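In case it helps with debugging, one way to double-check what the server is actually exporting (a sketch, assuming the NFS server is this same edge node and using the paths from the question):
exportfs -rav                 # re-export everything in /etc/exports and list what is exported
showmount -e localhost        # /hdfs-root/hdfs/user should appear in this list
tail -n 50 /var/log/messages  # rpc.mountd usually logs why it refused a mount request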
I have an EC2 instance with a 20GB root volume. I attached an additional 200GB volume for partitioning to comply with DISA/NIST 800-53 STIG by creating separate partitions for directories such as /home, /var, and /tmp, as well as others required by company guidelines. I'm using Red Hat Enterprise Linux 7.5. I rarely do this and haven't done it for years, so I'm willing to accept stupid solutions.
I've followed multiple tutorials using various methods and get the same result each time. The short version (from my understanding) is that the OS cannot access certain files/directories on these newly mounted partitions.
Example: when I mount, say, /var/log/audit, the audit daemon fails.
"Job for auditd.service failed because the control process exited with error code. See..."
systemctl status auditd says "Failed to start Security Auditing Service". I am also unable to log in via public key when I mount /home, but these problems go away when I unmount the new partitions. journalctl -xe reveals "Could not open dir /var/log/audit (Permission denied)".
Permission for each dir is:
drwxr-xr-x root root /var
drwxr-xr-x root root /var/log
drws------ root root /var/log/audit
Which is consistent with the base OS image, which DOES work when the partition isn't mounted.
What I did:
-Created an EC2 instance with 2 volumes (EBS)
-Partitioned the new volume /dev/xvdb
-Formatted the partitions as ext4
-Created /etc/fstab entries for the partitions
-Mounted the partitions to a temporary place in /mnt, then copied the contents using rsync -av <source> <dest>
-Unmounted the new partitions and updated fstab to reflect the actual mount locations, e.g. /var/log/audit (a command sketch of these steps follows below)
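A rough sketch of those steps, run as root, assuming a single partition /dev/xvdb1 and /var/log/audit as the example target (device names, temporary mount point, and fstab options are assumptions based on the description above):
parted -s /dev/xvdb -- mklabel gpt mkpart primary ext4 1MiB 100%   # partition the new volume
mkfs.ext4 /dev/xvdb1                                               # format the new partition
mkdir -p /mnt/newvol && mount /dev/xvdb1 /mnt/newvol               # temporary mount point
rsync -av /var/log/audit/ /mnt/newvol/                             # copy the existing contents
umount /mnt/newvol
echo '/dev/xvdb1 /var/log/audit ext4 defaults 0 2' >> /etc/fstab   # permanent mount location
mount /var/log/audit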
I've tried:
-Variations such as different disk utilities (e.g. fdisk, parted)
-Using different partition schemes (GPT, DOS [default], Windows basic [default for the root volume, not sure why], etc.)
-Using the root volume from an EC2 instance, detaching it, attaching it to another instance as a second volume, and only repartitioning
Ownership, permissions, and directory structures are exactly the same, as far as I can tell, between the original directory and the directory created on the new partitions.
I know it sounds stupid, but did you try restarting the instance after adding the new mounts?
The reason I suggest this is that Linux caches the directory/file path to inode mapping. When you change a mount, I am not sure whether that cache is invalidated, and that could possibly be the reason for the errors.
Also, though it is written for Ubuntu, have a look at: https://help.ubuntu.com/community/Partitioning/Home/Moving
I am trying to mount a persistent disk with data to a VM to use it in Google Datalab. So far no success; ideally I would like to see my files in the Datalab notebook.
First, I added the disk in VM settings with Read/Write mode.
Second, I ran lsblk to see what disks there are.
Then I tried this: sudo mount -o ro /dev/sdd /mnt/disks/z
I got this error:
wrong fs type, bad option, bad superblock on /dev/sdd,
P.S. I used the disk I want to mount on another VM and downloaded some data onto it. It was formatted as an NTFS disk.
Any ideas?
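One thing worth checking first (device name taken from above) is what filesystem is actually on the disk, since mount needs a matching type:
sudo lsblk -f        # shows the filesystem type detected on each block device
sudo blkid /dev/sdd  # should report TYPE="ntfs" for this disk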
I would need to create a file system, as noted in the following document:
$ mkfs.vfat /dev/sdX2
Creates a new partition, without creating a new file system on that partition.
Command: mkpart [part-type fs-type name] start end
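A rough sketch of those two steps, assuming /dev/sdd from the question and using ext4 rather than vfat as an example choice; note that mkfs erases whatever is currently on the disk, so copy the NTFS data off first:
sudo parted -s /dev/sdd -- mklabel gpt mkpart primary ext4 1MiB 100%   # mkpart creates the partition only
sudo mkfs.ext4 /dev/sdd1                                               # create the file system (destroys existing data)
sudo mkdir -p /mnt/disks/z
sudo mount /dev/sdd1 /mnt/disks/z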
I have created a standard persistent disk and successfully mounted it in Read/Write configuration on a node in my Kubernetes cluster.
I would now like to populate that disk with some content I currently have locally. The scp tool in the gcloud SDK seems like the ideal way to do this.
However, when I run:
gcloud compute scp ~/Desktop/subway-explorer-api/logbooks.sqlite gke-webapp-default-pool-49338587-d78l:/mnt/subway-explorer-datastore --zone us-central1-a
I get:
scp: /mnt/subway-explorer-datastore: Read-only file system
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
My questions are:
What is my error here? Why is the disk being reported as being read-only?
How do I fix it?
Is this indeed a good use of the gcloud scp utility (I got here by looking at this answer), or is there a better way to do this?
I misinterpreted the folder the disk got mounted to, and I was trying to write to a folder that didn't actually exist. The error message led me to misdiagnose this as a permissions error, when in reality it was operator error.
For further details see this answer on the Unix StackExchange.
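For anyone hitting the same thing, one way to confirm where the disk is actually mounted on the node before copying (a sketch; the instance name and zone are copied from the question):
gcloud compute ssh gke-webapp-default-pool-49338587-d78l --zone us-central1-a -- df -h
gcloud compute ssh gke-webapp-default-pool-49338587-d78l --zone us-central1-a -- "mount | grep /mnt"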
What do I want to do?
Step1: Mount a S3 Bucket to an EC2 Instance.
Step2: Install an FTP server on the EC2 instance and tunnel FTP requests to files in the bucket.
What have I done so far?
create bucket
create security group with open input ports (FTP:20,21 - SSH:22 - some more)
connect to ec2
And ran the following commands:
wget https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/s3fs/s3fs-1.74.tar.gz
tar -xvzf s3fs-1.74.tar.gz
yum update all
yum install gcc libstdc++-devel gcc-c++ fuse fuse-devel curl-devel libxml2-devel openssl-devel mailcap
cd s3fs-1.74
./configure --prefix=/usr
make
make install
vi /etc/passwd-s3fs # set access:secret keys
chmod 640 /etc/passwd-s3fs
mkdir /s3bucket
cd /s3bucket
And cd answers: Transport endpoint is not connected
I don't know what's wrong. Maybe I am using the wrong user? But currently I only have one user (for test reasons) besides root.
The next step would be the FTP tunnel, but for now I'd like to get this working.
I have now followed these instructions: https://github.com/s3fs-fuse/s3fs-fuse
I guess they are calling the API in the background too, but it works the way I wanted.
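For reference, the basic usage from that README looks roughly like this (the bucket name is a placeholder; the credentials file is the one created above):
s3fs mybucket /s3bucket -o passwd_file=/etc/passwd-s3fs -o allow_other   # "mybucket" is a placeholder
mount | grep s3fs                                                        # verify the mount before cd'ing into it
df -h /s3bucket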
One possible solution to mount S3 to an EC2 instance is to use the new file gateway.
Check out these:
https://aws.amazon.com/about-aws/whats-new/2017/02/aws-storage-gateway-supports-running-file-gateway-in-ec2-and-adds-file-share-security-options/
http://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html
Point 1
Whilst the other answerer is correct in saying that S3 is not built for this, it is not true to say a bucket cannot be mounted (I'd seriously consider finding a better way to solve your problem, however).
That being said, you can use s3fs-fuse to mount S3 buckets within EC2. There are plenty of good reasons not to do this, detailed here.
Point 2
From there it's just a case of setting up a standard FTP server, since the bucket now appears to your system as if it were any other file system (mostly).
vsftpd could be a good choice for this. I'd have a go at both and then post separate questions with any specific problems you run into, but this should give you a rough outline to work from. (Well, in reality I'd have a go at neither and would use S3 via app code consuming the API, but still.)
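If you do go the vsftpd route, a minimal starting point on a yum-based instance (an assumption; passive-mode, user, and chroot configuration are still up to you) would be:
yum install -y vsftpd
service vsftpd start
chkconfig vsftpd on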
I want to upload with a Django ImageField to NFS storage, but I get this error:
[Errno 37] No locks available
This is in /etc/fstab:
173.203.221.112:/home/user/project/media/uploads/ /home/user/project/media/uploads nfs rw,bg,hard,lock,intr,tcp,vers=3,wsize=8192,rsize=8192 0 0
I also tried to patch Django to use flock() instead of lockf(), but it still doesn't work.
http://code.djangoproject.com/ticket/9400
Any idea what's wrong?
I had this messy issue once, and after losing a lot of time looking for an answer I found this solution: rpc.statd
I had to execute that command on both sides of the NFS mount; in my case that was my computer and a virtual machine.
Some information about this command can be found here:
Linux command: rpc.statd - NSM status monitor
In case that is not enough: sometimes when I faced this issue I had to start the statd service manually because it wasn't running. The other way to fix the problem is to execute the following command on both sides of the NFS mount:
service statd start
After executing the command on both sides, the locking problem should disappear.
Some more information on NFS software can be found here:
Archlinux wiki: NFS
You could check whether nfslock is running on both the NFS server and client machines; it is responsible for managing the locks.
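On a RHEL/CentOS-style system (an assumption), a quick check on both the client and the server might look like this:
service nfslock status      # on some distros the service is called statd instead
rpcinfo -p | grep status    # rpc.statd should be registered with the portmapper
service nfslock start       # start it if it is not running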