Problems mounting an S3 bucket with s3fs - amazon-web-services

I am trying to mount an S3 bucket on an AWS EC2 instance following these instructions. I was able to install the dependencies via yum, clone the git repository, and then make and install the s3fs tool.
Furthermore, I ensured my AWSACCESSKEYID and AWSSECRETACCESSKEY values were in several locations (because I could not get the tool to work, and searching for an answer suggested placing the file in different locations).
~/.passwd-s3fs
/etc/.passwd-s3fs
~/.bash_profile
For the .passwd-s3fs files, I set the permissions as follows.
chmod 600 ~/.passwd-s3fs
chmod 640 /etc/.passwd-s3fs
Additionally, the .passwd-s3fs files have the content as suggested in this format: AWSACCESSKEYID:AWSSECRETACCESSKEY.
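In other words, the setup amounts to the following (placeholder values stand in for the real key pair):
# create the credentials file in the KEY:SECRET format and lock down its permissions
echo "AWSACCESSKEYID:AWSSECRETACCESSKEY" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
# system-wide copy with the permissions mentioned above
sudo cp ~/.passwd-s3fs /etc/.passwd-s3fs
sudo chmod 640 /etc/.passwd-s3fs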
I have also logged out and back in just to make sure the changes took effect. When I execute the command /usr/bin/s3fs bucketname /mnt, I get the following response.
s3fs: MOUNTPOINT: /mnt permission denied.
When I run the same command with sudo, e.g. sudo /usr/bin/s3fs mybucket /mnt, I get the following message.
s3fs: could not determine how to establish security credentials.
I am using s3fs v1.84 on the following AMI ami-0ff8a91507f77f867 (Amazon Linux AMI 2018.03.0.20180811 x86_64 HVM GP2). From the AWS Console for S3, my bucket's name is NOT mybucket but something just as simple (I am wondering if there's anything special I have to do with naming).
Additionally, my AWS access and secret key pair was generated from the IAM web interface for a user placed in the admin group, which has the AdministratorAccess policy defined below.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}
Any ideas on what's going on? Did I miss a step?

After tinkering a bit, I found the following helps.
/usr/bin/s3fs mybucket /mnt -o passwd_file=.passwd-s3fs -o allow_other
Note that I specify the .passwd-s3fs file's location. And also note that I allow others to view the mount. Additionally, I had to modify /etc/fuse.conf to enable user_allow_other.
# mount_max = 1000
user_allow_other
To test, I typed in touch /mnt/README.md and then observed the file in my S3 bucket (web UI).
I am a little disappointed that this problem is not better documented. I would have expected the default home location or /etc to be where the tool looks for the .passwd-s3fs file, but that's not the case. Additionally, sudo (as suggested by a link I did not bookmark) forces the tool to look in ~/home/root, which does not exist.
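For anyone who also wants the mount to come back after a reboot, an /etc/fstab entry along these lines should work (a sketch; the passwd file path assumes the default ec2-user home directory and the same bucket and mount point as above):
# /etc/fstab entry: mount the bucket at boot once the network is up
mybucket /mnt fuse.s3fs _netdev,allow_other,passwd_file=/home/ec2-user/.passwd-s3fs 0 0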

For me it was a mismatch between the IAM role specified while mounting and the IAM role attached to the EC2 server.
The EC2 instance was launched with role2, and I was mounting with
/usr/local/bin/s3fs -o allow_other mybucket /mnt/s3fs/mybucketfolder -o iam_role='role1'
which did not throw any error but did not mount.
PS: I do not have any access keys or an S3 password file on the EC2 server.
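A quick way to catch this kind of mismatch is to check which role the instance actually has before mounting; a rough sketch (and, if your s3fs version supports it, iam_role=auto picks up the attached role automatically):
# print the IAM role attached to this instance via the metadata service
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# mount using the detected role instead of hard-coding a name
/usr/local/bin/s3fs -o allow_other -o iam_role=auto mybucket /mnt/s3fs/mybucketfolder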

Related

gsutil rsync with s3 buckets gives InvalidAccessKeyId error

I am trying to copy all the data from an AWS S3 bucket to a GCS bucket. According to this answer, the rsync command should be able to do that, but I am receiving the following error when I try:
Caught non-retryable exception while listing s3://my-s3-source/: AccessDeniedException: 403 InvalidAccessKeyId
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>{REDACTED}</AWSAccessKeyId><RequestId>{REDACTED}</RequestId><HostId>{REDACTED}</HostId></Error>
CommandException: Caught non-retryable exception - aborting rsync
This is the command I am trying to run
gsutil -m rsync -r s3://my-s3-source gs://my-gcs-destination
I have the AWS CLI installed, which works fine with the same AccessKeyId and lists buckets as well as objects in the bucket.
Any idea what I am doing wrong here?
gsutil can work with both Google Storage and S3.
gsutil rsync -d -r s3://my-aws-bucket gs://example-bucket
You just need to configure it with both Google and AWS S3 credentials. For the S3 side, add the Amazon S3 credentials to ~/.aws/credentials, or store your AWS credentials in the .boto configuration file for gsutil. However, when you're accessing an Amazon S3 bucket with gsutil, the Boto library uses your ~/.aws/credentials file to override other credentials, such as any that are stored in ~/.boto.
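As a sketch, the AWS side is just a standard credentials file with placeholder values (written here via a heredoc; the same keys can go into the [Credentials] section of ~/.boto instead):
# make the S3 credentials available to gsutil/boto
mkdir -p ~/.aws
cat >> ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF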
=== 1st update ===
Also make sure you have the correct IAM permissions on the GCP side and the correct AWS IAM credentials. Depending on whether you have a prior version of Migrate for Compute Engine (formerly Velostrata), use this documentation and make sure you set up the VPN, IAM credentials, and AWS network. If you are using the current version (5.0), use the following documentation to check that everything is configured correctly.

AWS CloudFormation Windows 2016 EC2 S3 silent install

I have an architecture created with CloudFormation (written in JSON) that uses a Windows 2016 EC2 server and S3. I have 7 executables uploaded to my S3 bucket. I can manually silently install everything from a PowerShell for AWS prompt once I remote into the EC2 instance, one at a time. I even have it all in a .ps1 file that runs correctly in PowerShell for AWS.
I am now trying to get this to install silently when the EC2 instance is created, and I just can't do it, and I can't understand why. The JSON code looks correct. As you can see, I first download everything from the S3 bucket, switch to the c:\TEMP directory where it was all downloaded, and then run the executables in unattended install mode. I don't get any errors in my CloudFormation template; it runs "successfully." The problem is that nothing happens. Is it a permissions thing? Any help is welcome and appreciated. Thanks!
Under the AWS::EC2::Instance section I have the UserData section looking something like this (I shortened the executable names below):
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"<powershell>\n",
"copy-S3Object -BucketName mySilentInstallBucket -KeyPrefix * -LocalFolder c:\\TEMP\\",
"\n",
"cd c:\\TEMP\\",
"\n",
"firefox.exe -S ",
"\n",
"notepadpp.exe /S",
"\n",
"Git.exe /SILENT",
"\n",
"</powershell>"
]]}}
This troubleshooting doc will cover the various reasons you may not be able to connect to S3: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-access-s3-bucket/
To connect to your S3 buckets from your EC2 instances, you need to do the following:
Create an AWS Identity and Access Management (IAM) profile role that grants access to Amazon S3.
Attach the IAM instance profile to the instance.
Validate permissions on your S3 bucket.
Validate network connectivity from the EC2 instance to Amazon S3.
Validate access to S3 buckets.
The CloudFormation template won't fail based on UserData execution exceptions.
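Since CloudFormation reports success either way, it helps to verify the instance profile and the bucket contents yourself from the AWS CLI; a sketch with a placeholder instance ID and the bucket name from the question:
# confirm an instance profile is actually attached to the instance
aws ec2 describe-iam-instance-profile-associations --filters Name=instance-id,Values=i-0123456789abcdef0
# confirm the installers are really in the bucket the UserData downloads from
aws s3 ls s3://mySilentInstallBucket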

aws ec2 modern.ie image upload

Has anyone been successful in uploading a modern.ie VMDK image to AWS EC2?
I've tried via the ec2-import-instance command:
ec2-import-instance IE10.Win7.For.Windows.VMWare\IE10_-_Win7-disk1.vmdk -f vmdk -t t2.small -a i386 -b xxxx --subnet subnet-xxxxx -p Windows -o %AWS_ACCESS_KEY% -w %AWS_SECRET_KEY% ...
but once I described the import, I got: ClientError: Unsupported Windows OS
After some reading, I attempted to create an AMI via the AWS CLI after uploading the file to S3, creating the policies, etc.:
aws ec2 import-image --cli-input-json "{ \"Description\": \"ModernIE Win7IE10\", \"DiskContainers\": [ { \"Description\": \"First CLI task\",
\"UserBucket\": { \"S3Bucket\": \"xxx_temp\", \"S3Key\" : \"IE10_-_Win7-disk1.vmdk\" } } ], \"LicenseType\": \"BYOL\", \"Architecture\": \"i386\", \"Platform\": \"Windows\"}"
But when describing the import, I get: "StatusMessage": "ClientError: Disk validation failed [Invalid S3 source location]"
I've even made the bucket URL public!
Anyone have any ideas?
Thanks!
Use the AWS CLI to test that error:
aws s3 ls s3://xxx_temp
If you do not see the IE10_-_Win7-disk1.vmdk listed there, then the S3 upload is your problem. Re-verify your S3 key.
Also check the bucket policy and make sure the configured IAM user for your CLI has access to that bucket.
If you're seeing the Unsupported Windows OS error, I would check the prerequisites very carefully.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites-ImportInstance.html
Not all operating systems can be imported. I frequently have an issue importing a Linux VM where I've upgraded the kernel version and it becomes "Unsupported". The importer is very picky.
During the import process you can use the identifier returned from the import command to follow its status like so:
aws ec2 describe-import-image-tasks --cli-input-json "{\"ImportTaskIds\":[\"$IMPORT_ID\"]}"
I have been most successful converting the VM to an OVA first, uploading THAT to S3 and running the import command against that.
If you are using VirtualBox you can do that from the command line:
vboxmanage export ${VM_NAME} -o MyExportedVM.ova;
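A rough sketch of that OVA route with the AWS CLI, reusing the xxx_temp bucket from the question (names are placeholders):
# upload the exported OVA to S3
aws s3 cp MyExportedVM.ova s3://xxx_temp/MyExportedVM.ova
# import it, pointing the disk container at the OVA object
aws ec2 import-image --description "ModernIE Win7IE10" --license-type BYOL --disk-containers "Description=OVA import,Format=ova,UserBucket={S3Bucket=xxx_temp,S3Key=MyExportedVM.ova}"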

aws s3 sync not working as expected

I'm trying to set up a one-way directory sync process from one local PC to an AWS EC2 instance via S3.
Both machines are Windows.
I tried using the command line interface.
On the local machine:
aws s3 sync source_dir s3://bucket --region eu-central-1
This command seems to work well. If there is nothing new, nothing is synced. So far so good.
On the AWS instance:
aws s3 sync s3://bucket target_dir --region eu-central-1
With this command, I have an issue. Whenever I run it, there is always something to download (it seems to always be the same set of files; perhaps it is all of them, but it looks like a subset). My expectation was that, once in sync, running the command again would produce no downloads.
I granted these permissions in the policy:
"Action": [
"s3:GetObject",
"s3:GetObjectAcl",
"s3:ListBucket",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::bucket_name",
"arn:aws:s3:::bucket_name/*"
]
Am I missing anything in this setup? My goal is that the second sync downloads nothing when there is nothing new to download.
You appear to be doing two separate syncs:
From a local machine to Amazon S3
From Amazon S3 to an Amazon EC2 instance
The problem might be related to timestamps. Amazon EC2 instances always operate in UTC, which might differ from the time zone of the originating local machine.
If you run the S3->EC2 sync and then run it again immediately, there should be no files copied the second time. If files ARE copied, try updating your AWS CLI to the latest version. If problems persist, try syncing from EC2->S3 and then try S3->EC2 again.
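If the same files keep coming down even when nothing has changed, it can also help to change how sync decides what differs; a sketch using flags the AWS CLI offers for exactly this:
# compare by size only, ignoring timestamp drift between S3 and the instance
aws s3 sync s3://bucket target_dir --region eu-central-1 --size-only
# or require exact timestamp matches rather than "remote newer than local"
aws s3 sync s3://bucket target_dir --region eu-central-1 --exact-timestamps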

Amazon S3 with s3fs and fuse, transport endpoint is not connected

Red Hat with FUSE 2.4.8
S3FS version 1.59
From the AWS online management console, I can browse the files in the S3 bucket.
When I log in (via SSH) and go to my /s3 folder, I cannot access it.
Also, the command "/usr/bin/s3fs -o allow_other bucket /s3"
returns: s3fs: unable to access MOUNTPOINT /s3: Transport endpoint is not connected
What could be the reason? How can I fix it? Does this folder need to be unmounted and then mounted again?
Thanks!
Well, the solution was simple: unmount and remount the directory. The "Transport endpoint is not connected" error was solved by unmounting the s3 folder and then mounting it again.
Command to unmount
fusermount -u /s3
Command to mount
/usr/bin/s3fs -o allow_other bucketname /s3
Takes 3 minutes to sync.
I don't recommend accessing S3 via quick-and-dirty FUSE drivers.
S3 isn't really designed to act as a file system;
see this SOF answer for a nice summary.
You would probably never dare to mount a Linux mirror website just because it holds files; this is comparable.
Let your process write files to your local filesystem, then sync your S3 bucket with tools like cron and s3cmd.
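A minimal sketch of that approach (paths and bucket name are placeholders):
# crontab entry: push the local directory to the bucket every 15 minutes
*/15 * * * * s3cmd sync /var/data/ s3://yourbucket/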
If you insist on using s3fs...
echo "yourawskey:yourawssecret" | sudo tee /etc/passwd-s3fs
sudo chmod 640 /etc/passwd-s3fs
sudo /usr/bin/s3fs yours3bucket /yourmountpoint -ouse_cache=/tmp
Verify with mount
Source: http://code.google.com/p/s3fs/wiki/FuseOverAmazon
I was using an old security credential before. Regenerating the security credentials (AccessId, AccessKey) solved the issue.
This was a permissions issue on the bucket for me. Adding the "list" and "view" permissions for "everyone" in the AWS UI allowed bucket access.
If you don't want to allow everyone access, then make sure you are using the AWS credentials associated with a user that has access to the bucket in S3Fuse.
I had this problem and found that the bucket name can only contain lowercase characters. Trying to access a bucket named "BUCKET1" via https://BUCKET1.s3.amazonaws.com or https://bucket1.s3.amazonaws.com will fail in both cases, but if the bucket is called "bucket1", https://bucket1.s3.amazonaws.com will succeed.
So it is not enough to lowercase the name on the s3fs command line; you MUST also create the bucket with a lowercase name.
Just unmount the directory and reboot the server if you have already made changes in /etc/fstab that mount the directory automatically.
To unmount: sudo umount /dir
This line should be present in /etc/fstab; only then will it mount automatically after a reboot.
s3fs#bucketname /s3 fuse allow_other,nonempty,use_cache=/tmp/cache,multireq_max=500,uid=505,gid=503 0 0
This issue could be due to the policy attached to the IAM user. Make sure the IAM user has AdministratorAccess.
I faced the same issue, and changing the policy to AdministratorAccess fixed it.