I can successfully mount my bucket using the following command
sudo mount -t gcsfuse -o rw,noauto,user,implicit_dirs,allow_other fakebucket thebucket/
I can go into the bucket, see the subfolders, etc.; however, I can't write anything:
touch: cannot touch 'aaa': Permission denied
I have tried various gcsfuse parameters, for example rw,noauto,user,implicit_dirs,allow_other. I even tried a regular chmod command afterwards:
sudo chmod -R 777 thebucket/
It ran with no error, but the permissions have not changed and I still can't write into the bucket.
Thank you in advance,
Have you checked if your instance has the required API access scopes to write to storage?
By default the access scope to storage is "Read only", this is why you can mount the bucket and list the contents but not write to it.
To edit the scopes you can use the web interface (after stopping the instance and editing it) or this command:
gcloud beta compute instances set-scopes INSTANCE_NAME --scopes=storage-full
Be sure to add all the scopes you need: the command above replaces the existing scopes, leaving only read/write access to the Storage API.
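To double-check, you can list the scopes currently attached to the instance's service account (a quick sketch; INSTANCE_NAME and ZONE are placeholders):
gcloud compute instances describe INSTANCE_NAME --zone=ZONE --format="yaml(serviceAccounts)"
Look for https://www.googleapis.com/auth/devstorage.full_control or devstorage.read_write in the output.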
I would like to share a GitHub project SSH key pair with all new instances that I create, so that it's possible to git clone and launch the program from the user data file without having to SSH into the instance.
This is quite easy to do on GCP, but I'm not sure how to do any of that with AWS EC2 instances.
Edit: In GCP I would simply use Secret Manager, which is shared between instances.
Since you mention that you'd use Secret Manager on Google Cloud, it seems reasonable to suggest the AWS Secrets Manager service.
Set your private key as a Secret, and grant access to it with an IAM role attached to the EC2 instance. Then install the AWS CLI package before building the AMI, and you can use it to fetch the secret on first boot with a User Data script.
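A minimal user data sketch along those lines (the secret name github-deploy-key and the file paths are assumptions, and the instance's IAM role must allow secretsmanager:GetSecretValue):
#!/bin/bash
# fetch the GitHub private key from Secrets Manager (hypothetical secret name)
mkdir -p /root/.ssh
aws secretsmanager get-secret-value --secret-id github-deploy-key --query SecretString --output text > /root/.ssh/id_rsa
chmod 600 /root/.ssh/id_rsa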
Because I find AWS Secrets Manager hard to use and expensive compared to GCP, here's the solution I ended up using.
This is my user data file that is passed to the instance on creation.
sudo mkdir -p ~/.ssh
sudo touch ~/.ssh/id_rsa
sudo echo "-----BEGIN OPENSSH PRIVATE KEY-----
My GitHub private key" >> ~/.ssh/id_rsa
sudo chmod 700 ~/.ssh/
sudo chmod 600 ~/.ssh/id_rsa
# add GitHub to known_hosts so the clone is non-interactive
ssh-keyscan github.com >> ~/.ssh/known_hosts
# clone over SSH so the key above is actually used (your-user/your-repo are placeholders)
git clone git@github.com:your-user/your-repo.git
# other commands go here
Note that it will add this to the root user.
Not the cleanest solution, but it works well.
Edit: sudo shouldn't be required because it all runs as root.
I accidentally ran "sudo make chmod -R 777 /" on my GCP instance, and now I can't access it over SSH anymore (neither from the terminal nor the browser):
Permissions 0777 for '/Users/username/.ssh/id_rsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
How can I access my VM and restore it?
As suggested by @John Hanley, you must create a new instance to avoid serious problems in the future with this broken VM.
To solve the permissions issue with ~/.ssh/id_rsa.pub, you can follow the documentation Running startup scripts and/or the article suggested by @John Hanley to execute sudo chmod 644 ~/.ssh/id_rsa.pub, or follow the instructions from this article to connect to your instance via the serial console and then run sudo chmod 644 ~/.ssh/id_rsa.pub to set the proper permissions.
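If you go the serial console route, enabling and connecting to it would look roughly like this (a sketch; INSTANCE_NAME and ZONE are placeholders):
$ gcloud compute instances add-metadata INSTANCE_NAME --zone=ZONE --metadata=serial-port-enable=TRUE
$ gcloud compute connect-to-serial-port INSTANCE_NAME --zone=ZONE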
Keep in mind that restoring SSH access won't solve all the other possible issues with your VM caused by sudo make chmod -R 777 /, so you can skip that step and follow the instructions below instead:
To move your data from the broken VM to a new VM you can follow these steps:
create a snapshot of the boot disk of the broken instance
$ gcloud compute disks snapshot BROKEN_INSTANCE_BOOT_DISK_NAME --snapshot-names=TEMPORARY_SNAPSHOT_NAME
create a temporary disk from the snapshot
$ gcloud compute disks create TEMPORARY_DISK_NAME --source-snapshot=TEMPORARY_SNAPSHOT_NAME
create a new instance
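for example (a sketch; the name and zone are placeholders, and you can add image or machine-type flags as needed):
$ gcloud compute instances create NEW_INSTANCE_NAME --zone=YOUR_ZONE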
attach the temporary disk to the new instance
$ gcloud compute instances attach-disk NEW_INSTANCE_NAME --disk=TEMPORARY_DISK_NAME
mount the temporary disk
$ sudo su -
$ mkdir /mnt/TEMPORARY_DISK
$ mount /dev/disk/by-id/scsi-0Google_PersistentDisk_TEMPORARY_DISK_NAME /mnt/TEMPORARY_DISK
copy the data from the temporary disk to the new instance
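for example, to copy a user's home directory (the paths are only an illustration):
$ cp -a /mnt/TEMPORARY_DISK/home/YOUR_USER /home/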
unmount the temporary disk:
$ sudo umount /dev/disk/by-id/scsi-0Google_PersistentDisk_TEMPORARY_DISK_NAME
detach the temporary disk
$ gcloud compute instances detach-disk NEW_INSTANCE_NAME --disk=TEMPORARY_DISK_NAME
I'm trying to make a bucket in Google Cloud Storage public, but I'm receiving this error:
Error
Sorry, there’s a problem. If you entered information, check it and try again. Otherwise, the problem might clear up on its own, so check back later.
Tracking Number: 8176737072451350548
Send feedback
I'm granting the Storage Object Viewer role to allUsers. I'm doing this directly in the console.
On GCP, use this shell command inside your project:
$ gsutil defacl set public-read gs://your-bucket-name
Afterwards, use:
$ gsutil ls -L -b gs://your-bucket-name
to see the ACL configuration of your bucket.
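If your bucket uses uniform bucket-level access (so ACLs are disabled), the IAM-based equivalent would be something like this, with your-bucket-name as a placeholder:
$ gsutil iam ch allUsers:objectViewer gs://your-bucket-name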
https://codelabs.developers.google.com/codelabs/cloud-upload-objects-to-cloud-storage/#0
I am trying to build a Docker image and I need to copy some files from S3 to the image.
Inside the Dockerfile I am using:
Dockerfile
FROM library/ubuntu:16.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
# Copy files from S3 inside docker
RUN aws s3 cp s3://filepath_on_s3 /tmp/
However, aws requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
I know I can probably pass them using ARG. But, is it a bad idea to pass them to the image at build time?
How can I achieve this without storing the secret keys in the image?
In my opinion, IAM Roles are the best way to delegate S3 permissions to Docker containers.
Create the role from IAM -> Roles -> Create Role -> choose the service that will use this role, select EC2 -> Next -> select your S3 policies, and the role should be created.
Attach the role to a running or stopped instance from Actions -> Instance Settings -> Attach/Replace Role.
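If you prefer the CLI, a rough equivalent is sketched below (the role, profile, and instance IDs are placeholders):
aws iam create-instance-profile --instance-profile-name my-s3-profile
aws iam add-role-to-instance-profile --instance-profile-name my-s3-profile --role-name my-s3-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=my-s3-profile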
This worked successfully in Dockerfile:
RUN aws s3 cp s3://bucketname/favicons /var/www/html/favicons --recursive
I wanted to build upon @Ankita Dhandha's answer.
In the case of Docker you are probably looking to use ECS.
IAM Roles are absolutely the way to go.
When running locally, use a locally tailored Dockerfile and mount your AWS CLI ~/.aws directory to the root user's ~/.aws directory in the container (this allows it to use your, or a custom IAM user's, CLI credentials to mimic the ECS behavior for local testing).
# locally tailored Dockerfile: install the AWS CLI v2
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl unzip
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
    unzip awscliv2.zip && \
    ./aws/install

# local system: run the container with your CLI credentials bind-mounted (IMAGE_NAME is a placeholder)
docker run --mount type=bind,source="$HOME/.aws",target=/root/.aws IMAGE_NAME
Role Types
EC2 Instance Roles define the global actions any instance can perform. An example would be having access to S3 to download ecs.config to /etc/ecs/ecs.config during your custom user-data.sh setup.
Use the ECS Task Definition to define a Task Role and a Task Execution Role.
Task Roles are used for a running container. An example would be a live web app that is moving files in and out of S3.
Task Execution Roles are for deploying the task. An example would be downloading the ECR image and deploying it to ECS, downloading an environment file from S3 and exporting it to the Docker container.
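As a rough sketch of where those two roles are wired in when registering a task definition (the family name, role ARNs, and container definition are placeholders, not a complete definition):
aws ecs register-task-definition --family my-web-app \
    --task-role-arn arn:aws:iam::123456789012:role/myTaskRole \
    --execution-role-arn arn:aws:iam::123456789012:role/myTaskExecutionRole \
    --container-definitions '[{"name":"app","image":"my-ecr-image:latest","memory":512,"essential":true}]'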
General Role Propagation
The C# SDK, for example, documents a list of locations it will search, in order, to obtain credentials. Not every SDK behaves exactly like this, but many do, so you have to research it for your situation.
reference: https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/creds-assign.html
Plain text credentials fed into either the target system or environment variables.
CLI AWS credentials and a profile set in the AWS_PROFILE environment variable.
Task Execution Role used to deploy the docker task.
The running task will use the Task Role.
When the running task has no permissions for the current action it will attempt to elevate into the EC2 instance role.
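For local testing, selecting a named profile via the environment variable mentioned above might look like this (the profile and bucket names are placeholders):
AWS_PROFILE=myprofile aws s3 ls s3://my-bucket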
Blocking EC2 instance role access
Because the EC2 instance role commonly needs access for custom system setup, such as configuring ECS, it's often desirable to block your tasks from accessing this role. This is done by blocking the tasks' access to the EC2 metadata endpoints, which are well-known endpoints (169.254.169.254) in any AWS VPC.
reference: https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-ec2-metadata/
AWS VPC Network Mode
# ecs.config
ECS_AWSVPC_BLOCK_IMDS=true
Bind Network Mode
# ec2-userdata.sh
# install dependencies
yum install -y aws-cli iptables-services
# setup ECS dependencies
aws s3 cp s3://my-bucket/ecs.config /etc/ecs/ecs.config
# setup IPTABLES
iptables --insert FORWARD 1 -i docker+ --destination 169.254.169.254/32 --jump DROP
iptables --append INPUT -i docker+ --destination 127.0.0.1/32 -p tcp --dport 51679 -j ACCEPT
service iptables save
Many people pass in the details through args, which I see as being fine and the way I would personally do it. I think you can over-engineer certain processes, and I think this is one of them.
Example docker run with args:
docker run -e AWS_ACCESS_KEY_ID=123 -e AWS_SECRET_ACCESS_KEY=1234 IMAGE_NAME  # IMAGE_NAME is a placeholder for your image
That said, I can see why some companies want to hide this away and fetch it from a private API or something. This is why AWS has created IAM roles - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html.
With a role, the credentials are retrieved from the instance metadata service (a private IP address that only the instance itself can access), meaning you never have to store your credentials in the image itself.
Personally I think it's overkill for what you are trying to do; if someone hacks your image they can dump the credentials out and still get access to those details. Passing them in as args is safe as long as you protect yourself as you should anyway.
You should configure your credentials in the ~/.aws/credentials file:
~$ cat .aws/credentials
[default]
aws_access_key_id = AAAAAAAAAAAAAAAAAAAAAAAAAAAAa
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBB
Red Hat with FUSE 2.4.8
S3FS version 1.59
From the AWS online management console I can browse the files in the S3 bucket.
When I log in (over SSH) and go to my /s3 folder, I cannot access it.
Also, the command "/usr/bin/s3fs -o allow_other bucket /s3"
returns: s3fs: unable to access MOUNTPOINT /s3: Transport endpoint is not connected
What could be the reason? How can I fix it? Does this folder need to be unmounted and then mounted again?
Thanks!
Well, the solution was simple: unmount and remount the directory. The "transport endpoint is not connected" error was solved by unmounting the s3 folder and then mounting it again.
Command to unmount
fusermount -u /s3
Command to mount
/usr/bin/s3fs -o allow_other bucketname /s3
Takes 3 minutes to sync.
I don't recommend accessing S3 via quick and dirty FUSE drivers.
S3 isn't really designed to act as a file system;
see this SO answer for a nice summary.
You would probably never dare to mount a Linux mirror website just because it holds files; this is comparable.
Let your process write files to your local filesystem, then sync your S3 bucket with tools like cron and s3cmd.
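A minimal sketch of that approach (the bucket name, local path, and schedule are placeholders):
# crontab entry: sync local data to S3 every 10 minutes
*/10 * * * * s3cmd sync /var/data/ s3://yourbucket/data/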
If you insist on using s3fs...
echo "yourawskey:yourawssecret" | sudo tee /etc/passwd-s3fs
sudo chmod 640 /etc/passwd-s3fs
sudo /usr/bin/s3fs yours3bucket /yourmountpoint -ouse_cache=/tmp
Verify with mount
Source: http://code.google.com/p/s3fs/wiki/FuseOverAmazon
I was using old security credentials before. Regenerating the security credentials (Access Key ID, Secret Access Key) solved the issue.
This was a permissions issue on the bucket for me. Adding the "list" and "view permissions" for "everyone" in the AWS UI allowed bucket access.
If you don't want to allow everyone access, then make sure you are using the AWS credentials associated with the user that has access to the bucket in S3Fuse
I had this problem and I found that the bucket name can only have lowercase characters. Trying to access a bucket named "BUCKET1" via https://BUCKET1.s3.amazonaws.com or https://bucket1.s3.amazonaws.com will both fail, but if the bucket is called "bucket1", https://bucket1.s3.amazonaws.com will succeed.
So it is not enough to lowercase the name on the s3fs command line; you MUST also create the bucket with a lowercase name.
Just unmount the directory and reboot the server if you have already made changes in /etc/fstab that mount the directory automatically.
To unmount: sudo umount /dir
In /etc/fstab this line should be present; only then will it mount automatically after reboot:
s3fs#bucketname /s3 fuse allow_other,nonempty,use_cache=/tmp/cache,multireq_max=500,uid=505,gid=503 0 0
This issue could be due to the policy attached to the IAM user; make sure the IAM user has AdministratorAccess.
I faced the same issue, and changing the policy to AdministratorAccess fixed it.