I have tried all the answers provided in similar questions but none is helpful.
I installed s3fs so that I can mount an S3 bucket. After the installation, I performed the following steps:
Step 1: Create the mount point for the S3 bucket: mkdir -p /var/s3fs-drive-fs
Step 2: Mount the S3 bucket in the new directory with the IAM role by running the following command: s3fs myresearchdatasets /var/s3fs-drive-fs -o iam_role=EC2-to-S3-Buckets-Role -o allow_other. This works fine.
However, I found that the bucket disappears each time I reboot the system, which means I have to run the command above to remount the S3 bucket after every restart.
I found steps to set up an automatic mount at reboot by editing the fstab file with the line below:
s3fs myresearchdatasets /var/s3fs-drive-fs fuse _netdev,allow_other,iam_role=EC2-to-S3-Buckets-Role,umask=777, 0 0
To check whether the fstab entry is working correctly, I tried mount /var/s3fs-drive-fs/
but I got the following error: "mount: can't find /var/s3fs-drive-fs/ in /etc/fstab"
Can anyone help me please?
The first field should include the mount type and the bucket name, e.g.,
s3fs#mybucket /path/to/mountpoint fuse _netdev,allow_other 0 0
The s3fs README has other examples.
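Applied to the question's setup, the corrected entry would presumably look like this (a sketch reusing the bucket, role, and mount point from the question; note the s3fs# prefix in the first field and the removal of the stray comma before the final 0 0):
s3fs#myresearchdatasets /var/s3fs-drive-fs fuse _netdev,allow_other,iam_role=EC2-to-S3-Buckets-Role 0 0
With that in place, mount /var/s3fs-drive-fs should find the entry without a reboot.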
I have created an FSx for Lustre file system in AWS, with a Data Repository Association to an S3 bucket. I am looking to create an EC2 instance that has the FSx file share mounted and contains the files that are in the S3 bucket as locally mounted files.
I am creating the EC2 instance from a launch template (shown below), and I do see the FSx folder present; however, I do not see any files within the folder.
My FSx AWS resource is:
My launch template user-data is:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="
--==MYBOUNDARY==
Content-Type: text/cloud-config; charset="us-ascii"
runcmd:
- region=us-west-2
- amazon-linux-extras install -y lustre2.10
- mkdir -p /data/fsx
- mount -t lustre fs-xxxxxx.fsx.us-west-2.amazonaws.com@tcp:/fsx /data/fsx
--==MYBOUNDARY==--
When I ssh into the EC2 instance created with the launch template, I see an empty folder where my FSx file share should be mounted:
$ ls -lah /data/fsx/
total 0
drwxr-xr-x 2 root root 6 Jan 13 21:40 .
drwxr-xr-x 3 root root 17 Jan 13 21:40 ..
Does anyone have any pointers as to why my /data/fsx folder is empty, and how I can get it populated with the data from the S3 bucket in my FSx data repository path?
From what you've described, it might be multiple issues related to the AWS configuration and the OS and/or Lustre configuration. I would follow these troubleshooting steps:
When you ssh into the instance, check whether you can list or get files from the bucket. For that, you can use the aws s3 commands, like this:
aws s3 ls s3://<bucket-name>/
aws s3 cp s3://<bucket-name>/<some-file> ./
If you get permission denied, check whether the needed permissions are assigned to the EC2 instance's IAM profile. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#attach-iam-role for more info.
If the S3 permissions are set up correctly, then check the Lustre logs and folder permissions.
User data commands run only during the initial boot by default. If you have rebooted the instance, the mount command in the user data did not run again. To mount the file system after every reboot, you should add an entry to the /etc/fstab file.
There are also some configuration settings (networking) mentioned in the official guide that help get the file system mounted:
https://docs.aws.amazon.com/fsx/latest/LustreGuide/mount-fs-auto-mount-onreboot.html
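Following the pattern in that guide, the fstab entry would presumably look like this (a sketch assuming the file system DNS name and mount name from the question's user data):
fs-xxxxxx.fsx.us-west-2.amazonaws.com@tcp:/fsx /data/fsx lustre defaults,noatime,flock,_netdev 0 0
The _netdev option matters here: it delays the mount until networking is up, which a Lustre mount needs.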
I set up a RoboMaker project on AWS. When I am on the virtual desktop of my development environment, I would like to work with the files from my S3 bucket. I figured mounting the S3 bucket via s3fs is the best option. When using the aws s3 CLI I can interact with the buckets from the EC2 instance without any problem, but for my current project I need them to be mounted.
I followed this tutorial. My command to mount the bucket finally looks like this:
s3fs my-bucket /path/to/local/folder -o iam_role="my-mounting-role" -o url=https://s3.eu-central-1.amazonaws.com -o endpoint=eu-central-1 -o use_path_request_style -o allow_other -o uid=1000 -o gid=1000
Now the command apparently executes without any problem, but when I look in the local folder where the bucket should be mounted, it is still empty. This confuses me a little, since even when I change the bucket name to something that does not exist, or change the IAM role to something that does not exist, the command still executes without any error feedback. I am a bit lost as to where to start looking for the error. Is there an s3fs expert out here who could help me troubleshoot this issue?
Thanks a lot!
Try the foreground and debug options while mounting: -f --debug
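For example, the question's command with those options added would presumably be (same assumed bucket, role, and paths as above; s3fs also accepts -o dbglevel=info for more verbose messages):
s3fs my-bucket /path/to/local/folder -f --debug -o dbglevel=info -o iam_role="my-mounting-role" -o url=https://s3.eu-central-1.amazonaws.com -o endpoint=eu-central-1 -o use_path_request_style -o allow_other -o uid=1000 -o gid=1000
Running in the foreground keeps the process attached to the terminal, so problems such as a bad bucket name or role show up immediately instead of being swallowed when the mount daemonizes.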
I want to access the files in my GCP bucket from Colab. I followed these instructions
As you can see from the screenshots, there is nothing in the folder after mounting. What am I missing? The Data folder in my bucket is full of data.
It turns out you can't mount the bucket name plus a path within the bucket. Removing the /Data/ after the name of the bucket led to a successful mount. Since my bucket name is hellonearth, the command is just:
!gcsfuse --implicit-dirs hellonearth myfolder
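If you do want only that subdirectory visible at the mount point, gcsfuse has an --only-dir flag that mounts a single directory within the bucket; presumably this would cover the original goal (a sketch assuming the Data folder from the question):
!gcsfuse --implicit-dirs --only-dir Data hellonearth myfolder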
You are using the commands in the wrong way.
after !apt -qq install gcsfuse
run:
!mkdir folderOnColab
!gcsfuse folderOnBucket folderOnColab
(Note: gcsfuse expects the bare bucket name, not a gs:// URL.)
Then run !ls instead of just ls.
I have an EC2 instance in which I want to download a file.
I was able to download it using the s3cmd command:
s3cmd sync s3://<Bucket>/filename /tmp
100% downloaded
But using aws s3 sync, it is not working:
aws s3 sync s3://<Bucket>/filename /tmp
NO OUTPUT
Why does one work but not the other?
This happened to me. I didn't have the right permissions for the folder that I was trying to copy to. The aws command didn't give me any errors, though; it just paused for a while, making it seem like the command was doing something, and then completed with no output.
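Worth noting as well: aws s3 sync operates on prefixes (directories), not single object keys, so pointing it at one file typically matches nothing and exits silently. For a single file, aws s3 cp is the usual tool, and --debug shows what either command is doing under the hood (the commands below reuse the placeholders from the question):
aws s3 cp s3://<Bucket>/filename /tmp/
aws s3 sync s3://<Bucket>/ /tmp --debug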
Redhat with Fuse 2.4.8
S3FS version 1.59
From the AWS online management console I can browse the files on the S3 bucket.
When I log in (ssh), I cannot access my /s3 folder.
Also, the command /usr/bin/s3fs -o allow_other bucket /s3
returns: s3fs: unable to access MOUNTPOINT /s3: Transport endpoint is not connected
What could be the reason? How can I fix it? Does this folder need to be unmounted and then mounted again?
Thanks!
Well, the solution was simple: unmount the directory and mount it again. The "transport endpoint is not connected" error was solved by unmounting the /s3 folder and then mounting it again.
Command to unmount
fusermount -u /s3
Command to mount
/usr/bin/s3fs -o allow_other bucketname /s3
Takes 3 minutes to sync.
I don't recommend accessing S3 via quick-and-dirty FUSE drivers.
S3 isn't really designed to act as a file system;
see this SO answer for a nice summary.
You would probably never dare to mount a Linux mirror website just because it holds files. This is comparable.
Let your process write files to your local fs, then sync your S3 bucket with tools like cron and s3cmd.
If you insist on using s3fs:
echo "yourawskey:yourawssecret" | sudo tee /etc/passwd-s3fs
sudo chmod 640 /etc/passwd-s3fs
sudo /usr/bin/s3fs yours3bucket /yourmountpoint -ouse_cache=/tmp
Verify with mount
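For example, the active mount should show up in the output of:
mount | grep s3fs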
Source: http://code.google.com/p/s3fs/wiki/FuseOverAmazon
I was using old security credentials. Regenerating the security credentials (AccessId, AccessKey) solved the issue.
This was a permissions issue on the bucket for me. Adding the "list" and "view" permissions for "everyone" in the AWS UI allowed bucket access.
If you don't want to allow everyone access, then make sure you are using the AWS credentials associated with a user that has access to the bucket in S3Fuse.
I had this problem and found that the bucket name can only contain lowercase characters. Trying to access a bucket named "BUCKET1" via https://BUCKET1.s3.amazonaws.com or https://bucket1.s3.amazonaws.com will both fail, but if the bucket is called "bucket1", https://bucket1.s3.amazonaws.com will succeed.
So it is not enough to lowercase the name on the s3fs command line; you MUST also create the bucket with a lowercase name.
Just unmount the directory and reboot the server if you have already made the changes in /etc/fstab that mount the directory automatically.
To unmount: sudo umount /dir
In /etc/fstab this line should be present; only then will it mount automatically after reboot:
s3fs#bucketname /s3 fuse allow_other,nonempty,use_cache=/tmp/cache,multireq_max=500,uid=505,gid=503 0 0
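To test the entry without a full reboot, mount -a applies everything in /etc/fstab that isn't already mounted:
sudo umount /s3
sudo mount -a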
This issue could be due to the policy attached to the IAM user. Make sure the IAM user has AdministratorAccess.
I faced the same issue, and changing the policy to AdministratorAccess fixed it.
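If full AdministratorAccess is more than you want to grant, a scoped S3 policy is usually enough for s3fs. A minimal sketch, assuming a hypothetical bucket named mybucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}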