Amazon S3 with s3fs and fuse, transport endpoint is not connected

Redhat with Fuse 2.4.8
S3FS version 1.59
From the AWS online management console I can browse the files in the S3 bucket.
When I log in (via ssh) and go to my /s3 folder, I cannot access it.
Also, the command: "/usr/bin/s3fs -o allow_other bucket /s3"
returns: s3fs: unable to access MOUNTPOINT /s3: Transport endpoint is not connected
What could be the reason? How can I fix it? Does this folder need to be unmounted and then mounted again?
Thanks !

Well, the solution was simple: unmount and remount the directory. The "transport endpoint is not connected" error was solved by unmounting the s3 folder and then mounting it again.
Command to unmount
fusermount -u /s3
Command to mount
/usr/bin/s3fs -o allow_other bucketname /s3
Takes 3 minutes to sync.
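For reference, a minimal sketch of a remount helper built from the two commands above (the bucket name "bucket" and mount point /s3 are taken from this question; the lazy-unmount fallback is an assumption for the case where fusermount alone cannot detach a stale mount):
#!/bin/sh
# remount an s3fs mount after a "Transport endpoint is not connected" error
MOUNTPOINT=/s3
BUCKET=bucket
# detach the stale mount; fall back to a lazy unmount if fusermount fails
fusermount -u "$MOUNTPOINT" 2>/dev/null || sudo umount -l "$MOUNTPOINT"
# mount it again with the same options as before
/usr/bin/s3fs -o allow_other "$BUCKET" "$MOUNTPOINT"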

I don't recommend accessing S3 via quick-and-dirty FUSE drivers.
S3 isn't really designed to act as a file system;
see this SO answer for a nice summary.
You would probably never dare to mount a Linux mirror website just because it holds files; this is comparable.
Let your process write files to your local filesystem, then sync them to your S3 bucket with tools like cron and s3cmd (see the sketch below).
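For example, a hedged sketch of such a sync job (the bucket name, local path, and schedule are assumptions):
# crontab entry: push the local output directory to S3 every 5 minutes
*/5 * * * * s3cmd sync /var/app/output/ s3://yourbucket/output/ >> /var/log/s3sync.log 2>&1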
If you insist on using s3fs:
echo "yourawskey:yourawssecret" | sudo tee /etc/passwd-s3fs   # tee, because a redirection after "sudo echo" would run without root privileges
sudo chmod 640 /etc/passwd-s3fs
sudo /usr/bin/s3fs yours3bucket /yourmountpoint -o use_cache=/tmp
Verify with mount
Source: http://code.google.com/p/s3fs/wiki/FuseOverAmazon

I was using old security credentials. Regenerating the security credentials (Access Key ID, Secret Access Key) solved the issue.

This was a permissions issue on the bucket for me. Adding the "list" and "view" permissions for "everyone" in the AWS UI allowed bucket access.
If you don't want to allow everyone access, then make sure you are using the AWS credentials associated with a user that has access to the bucket in S3Fuse.

I had this problem and I found that the bucket name can only contain lowercase characters. Trying to access a bucket named "BUCKET1" via https://BUCKET1.s3.amazonaws.com or https://bucket1.s3.amazonaws.com will both fail, but if the bucket is called "bucket1", https://bucket1.s3.amazonaws.com will succeed.
So it is not enough to lowercase the name on the s3fs command line; you MUST also create the bucket with a lowercase name (see the example below).
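For example, with the AWS CLI (assuming the CLI is installed and configured; bucket1 is the name from the example above):
aws s3 mb s3://bucket1        # creates the bucket with an all-lowercase name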

Just unmount the directory and reboot the server if you have already made changes in /etc/fstab that mount the directory automatically.
To unmount: sudo umount /dir
This line should be present in /etc/fstab; only then will it mount automatically after a reboot:
s3fs#bucketname /s3 fuse allow_other,nonempty,use_cache=/tmp/cache,multireq_max=500,uid=505,gid=503 0 0
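As a quick check (not part of the original answer; /s3 is the mount point from the fstab line above), you can test the entry without rebooting:
sudo umount /s3     # detach the current mount if one is present
sudo mount -a       # mount everything listed in /etc/fstab, including the s3fs entry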

This issue could be due to the policy attached to the IAM user; make sure the IAM user has AdministratorAccess.
I faced the same issue, and changing the policy to AdministratorAccess fixed it.
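If you would rather not grant full AdministratorAccess, here is a minimal sketch of a policy scoped to a single bucket that is typically sufficient for s3fs (the user name, policy name, and bucket name below are assumptions):
cat > /tmp/s3fs-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": "arn:aws:s3:::yourbucket" },
    { "Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"], "Resource": "arn:aws:s3:::yourbucket/*" }
  ]
}
EOF
# attach it to the IAM user whose keys are used by s3fs (user and policy names are assumptions)
aws iam put-user-policy --user-name s3fs-user --policy-name s3fs-bucket-access --policy-document file:///tmp/s3fs-policy.json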

Related

AWS: Mounting S3 bucket to EC2 via s3fs (iam role) executes but does not work

I set up a RoboMaker project on AWS. When I am on the virtual desktop of my development environment, I would like to work with the files from my S3 bucket. I figured mounting the S3 bucket via s3fs is the best option. When using the aws s3 CLI I can interact with the buckets from the EC2 instance without any problem, but for my current project I need them to be mounted.
I followed this tutorial. My command to mount the bucket finally looks like this:
s3fs my-bucket /path/to/local/folder -o iam_role="my-mounting-role" -o url=https://s3.eu-central-1.amazonaws.com -o endpoint=eu-central-1 -o use_path_request_style -o allow_other -o uid=1000 -o gid=1000
Now the command apparently executes without any problem, but when I look in my local folder where the bucket should be mounted to it is still empty. This confuses me a little, since even when I change the bucket name to something which does not exist or if I change the iam role to something which does not exist, the command still executes without error feedback. I am a bit lost where to start looking for the error. Is there some s3fs expert out here who could help me troubleshoot this issue?
Thanks a lot!
Try the foreground and debug options while mounting: -f --debug (see the example below).
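For example, reusing the mount command from the question (dbglevel and curldbg are standard s3fs-fuse debug options; whether your build supports them is an assumption):
s3fs my-bucket /path/to/local/folder \
  -o iam_role="my-mounting-role" \
  -o url=https://s3.eu-central-1.amazonaws.com \
  -o endpoint=eu-central-1 \
  -o use_path_request_style \
  -o allow_other -o uid=1000 -o gid=1000 \
  -f -o dbglevel=info -o curldbg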

How to mount GCP Bucket in Google Colab

I want to access the files in my GCP bucket from Colab. I followed these instructions
As you can see from the screenshots, there is nothing in the folder after mounting. What am I missing? The Data folder in my bucket is full of data.
Turns out you can't mount the bucket name + a path within the bucket. Removing the /Data/ after the name of the bucket led to a successful mount. So since my bucket name is hellonearth, the command is just:
!gcsfuse --implicit-dirs hellonearth myfolder
You are using the commands the wrong way.
After !apt -qq install gcsfuse,
run:
!mkdir folderOnColab
!gcsfuse folderOnBucket folderOnColab
Then run !ls instead of just ls.
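Putting it together, a minimal sketch of a Colab cell (the bucket name hellonearth and folder name myfolder come from the question above; authenticating with google.colab.auth is an assumption, needed when the bucket is not public):
from google.colab import auth
auth.authenticate_user()                      # grant this Colab runtime access to your GCP account

!mkdir -p myfolder
!gcsfuse --implicit-dirs hellonearth myfolder
!ls myfolder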

Automatically mounting S3 bucket using s3fs on Amazon CentOS

I have tried all the answers provided in similar questions but none is helpful.
I installed S3 Fuse so that I can mount an S3 bucket. After the installation, I performed the following steps:
Step 1: Create the mount point for the S3 bucket: mkdir -p /var/s3fs-drive-fs
Step 2: Then I am able to mount the S3 bucket in the new directory with the IAM role by running the following command: s3fs myresearchdatasets /var/s3fs-drive-fs -o iam_role=EC2-to-S3-Buckets-Role -o allow_other, and it works fine.
However, I found out that the bucket disappears each time I reboot the system, which means I have to run the command above to remount the S3 bucket each time after restarting the system.
I found steps to set up an automatic mount at reboot by editing the fstab file with the line below:
s3fs myresearchdatasets /var/s3fs-drive-fs fuse_netdev,allow_other,iam_role=EC2-to-S3-Buckets-Role,umask=777, 0 0
To check whether the fstab is working correctly, I tried mount /var/s3fs-drive-fs/
but I got the following error: "mount: can't find /var/s3fs-drive-fs/ in /etc/fstab".
Can anyone help me please?
The first field should include the mount type and the bucket name, e.g.,
s3fs#mybucket /path/to/mountpoint fuse _netdev,allow_other 0 0
The s3fs README has other examples.
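Applied to the question above, the entry would look something like this (the bucket name, mount point, and IAM role are taken from the question; the remaining options are assumptions):
s3fs#myresearchdatasets /var/s3fs-drive-fs fuse _netdev,allow_other,iam_role=EC2-to-S3-Buckets-Role 0 0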

Mounting GCP buckets with write access

I can successfully mount my bucket using the following command
sudo mount -t gcsfuse -o rw,noauto,user,implicit_dirs,allow_other fakebucket thebucket/
I can go into the bucket and find the subfolders etc.; however, I can't write anything:
touch: cannot touch 'aaa': Permission denied
I have tried various parameters in gcsfuse, for example rw,noauto,user,implicit_dirs,allow_other. I even tried a regular chmod command afterwards:
sudo chmod -R 777 thebucket/
with no error, but the permissions have not changed, nor can I write into the bucket.
Thank you in advance,
Have you checked if your instance has the required API access scopes to write to storage?
By default the access scope for storage is "Read only"; this is why you can mount the bucket and list the contents but not write to it.
You can edit the scopes from the web interface (after stopping the instance and editing it), or with this command:
gcloud beta compute instances set-scopes INSTANCE_NAME --scopes=storage-full
Be sure to add all the scopes you need: the command above replaces the existing scopes, leaving only read/write access to the storage API.
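A sketch of the full sequence (INSTANCE_NAME and ZONE are placeholders; the instance must be stopped before its scopes can be changed):
gcloud compute instances stop INSTANCE_NAME --zone=ZONE
gcloud beta compute instances set-scopes INSTANCE_NAME --zone=ZONE --scopes=storage-full
gcloud compute instances start INSTANCE_NAME --zone=ZONE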

Uploading file to AWS from local machine

How can I use the scp command to upload a file to an AWS server?
I have the .pem file in /Downloads on my local machine.
I am trying to copy a file to the /images folder on the AWS server.
What command can I use?
Thanks,
You can use plain scp:
scp -i ~/Downloads/file.pem local_image_file user@ec2_elastic_ip:/home/user/images/
You need to attach an Elastic IP to the EC2 instance, open port 22 to your local machine's IP in the EC2 instance's security group, and use the right user (it can be ec2-user, admin, or ubuntu; look at the AMI documentation).
Diego's answer works. However, if you don't know your Elastic IP, you can simply scp using the following command (check the order of arguments):
scp -i path-to-your-identifier.pem file-to-be-copied ubuntu@public-IP:/required-path
Just for reference, here ubuntu is your AWS user and public-IP is something like 54.2xx.xxx.xxx, e.g. 54.200.100.100.
(If the order is messed up, with the filename before the identity file, you'll get a Permission denied (publickey). lost connection error.)
Also, keep in mind the permissions of the .pem file: they should be 400 or 600, not public to all (see the example below).
Hope it helps!
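For example, using the key path from the first answer above:
chmod 400 ~/Downloads/file.pem    # the key must not be readable by other users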
There are a number of ways to achieve what you want:
use s3cmd: http://s3tools.org/s3cmd
or use Cyberduck: http://cyberduck.ch/
or write a tool using the Amazon Java API.
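For instance, a minimal s3cmd sketch (the bucket name and file paths are assumptions; note this uploads to S3 rather than to the instance's filesystem):
s3cmd --configure                                  # one-time setup: prompts for your AWS access keys
s3cmd put ~/images/photo.jpg s3://yourbucket/images/photo.jpg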
You can try the kitten utility, which is a wrapper around boto3. You can easily upload/download files and run commands on an EC2 server, or on multiple servers at once for that matter.
kitten put -i ~/.ssh/key.pem cat.jpg /tmp [SERVER NAME][SERVER IP]
where the server name is e.g. ubuntu or ec2-user, etc.
This will upload the cat.jpg file to the /tmp directory of the server.
This is the correct way to upload from local to remote:
scp -i "zeus_aws.pem" ~/Downloads/retail_market_db-07-01-2021.sql ubuntu@ec2-3-108-200-27.us-east-2.compute.amazonaws.com:/var/www/
Using the instance's public DNS name like this could be a better approach.
Another alternative to scp is rsync.
Some of the benefits of rsync:
faster - uploads only the deltas
enable compression
you can exclude some files from the upload
resume
limit the transfer bandwidth
The rsync command:
rsync -ravze "ssh -i /home/your-user/your-key.pem " --exclude '.env' --exclude '.git/' /var/www/your-folder-to-upload/* ubuntu@xx.xxx.xxx.xxx:/var/www/your-remote-folder
Now, in case you find this syntax a little bit verbose, you can use aws-upload, which does all of the above with tab completion.