access multiple buckets in s3

My manager has an AWS account, and using his credentials we create a bucket per employee. Now I want to access another bucket through the command line. Is it possible to access two buckets (mine and one more) so that I can upload and download my files to whichever bucket I want? I have the access key for both buckets, but I am still not able to access both buckets simultaneously.
I have already tried changing the access key and secret key in my s3 config, but it didn't serve the purpose.
I have already been granted the ACL for the new bucket.
Thanks

The best you can do without a single access key that has permissions for both buckets is to create a separate .s3cfg file. I'm assuming you're using s3cmd.
s3cmd --configure -c .s3cfg_bucketname
Will allow you to create a new configuration in the config file .s3cfg_bucketname. From then on, when you want to access that bucket, you just add the command-line flag to specify which configuration to use:
s3cmd -c .s3cfg_bucketname ls
Of course you could add a bash function to your .bashrc (now I'm assuming bash... lots of assumptions! Let me know if I'm wrong, please) to make it even simpler:
function s3bucketname() {
    # pass all arguments through to s3cmd, using the alternate config file
    s3cmd -c ~/.s3cfg_bucketname "$@"
}
Usage:
s3bucketname ls
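For example, you could move files between the two buckets from the same shell; the bucket names below are placeholders, not ones from the question:
s3cmd get s3://my-own-bucket/report.csv          # uses the default ~/.s3cfg
s3bucketname put report.csv s3://other-bucket/   # uses ~/.s3cfg_bucketname via the function above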

I'm not sure which command-line tool you are using. If you are using Timothy Kay's tool,
you will find that the documentation lets you set the access key and secret key as environment variables, not only in a config file, so you can set them on the command line before the put command.
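If that is the tool in question, a minimal sketch could look like the following; the variable names shown are the common AWS ones and are an assumption here, so check the tool's documentation for the exact names it expects:
export AWS_ACCESS_KEY_ID=<key for the other bucket>    # assumed variable name
export AWS_SECRET_ACCESS_KEY=<matching secret key>     # assumed variable name
# then run the tool's put command against the other bucket as usual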

I am one of the developers of Bucket Explorer. You can use its Transfer panel with two sets of credentials and perform operations between your bucket and the other one.
For more details, read Copy Move Between two different accounts.

Related

How to disable directory listing in Google Cloud

We're using Google Cloud Storage as our CDN.
However, any visitor can list all files by browsing to: http://ourcdn.storage.googleapis.com/
How do we disable listing while keeping all files under the bucket publicly readable by default?
We previously set the default ACL using
gsutil defacl ch -g AllUsers:READ
In the GCP dashboard:
go into your bucket
click the "Permissions" tab
in the member list, find "allUsers" and change its role from Storage Object Viewer to Storage Legacy Object Reader
Listing should then be disabled.
Update:
As @Devy's comment points out, see the note below:
Note: roles/storage.objectViewer includes permission to list the objects in the bucket. If you don't want to grant listing publicly, use roles/storage.legacyObjectReader.
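A command-line sketch of the same role change with gsutil, assuming a placeholder bucket name:
gsutil iam ch -d allUsers:objectViewer gs://your-bucket      # remove the role that allows listing
gsutil iam ch allUsers:legacyObjectReader gs://your-bucket   # keep objects publicly readable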
Upload an empty index.html file to the root of your bucket. Open the bucket settings, click Edit website configuration, and set index.html as the Main Page.
This will prevent the directory listing.
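The same configuration can be done from the command line; this sketch assumes the bucket is named ourcdn, as in the question's URL:
gsutil cp index.html gs://ourcdn/           # upload the empty index.html
gsutil web set -m index.html gs://ourcdn    # serve it as the main page instead of a listing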
Your defacl looks good. The problem is most likely that for some reason AllUsers must also have READ, WRITE, or FULL_CONTROL on the bucket itself. You can clear those with a command like this:
gsutil acl ch -d AllUsers gs://bucketname
Your command set the default object ACL on the bucket to READ, which means that objects will be accessible by anyone. To prevent users from listing the objects, you need to make sure users don't have an ACL on the bucket itself.
gsutil acl ch -d AllUsers gs://yourbucket
should accomplish this. You may need to run a similar command for AllAuthenticatedUsers; just take a look at the bucket ACL with
gsutil acl get gs://yourbucket
and it should be clear.
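For example, to inspect the bucket ACL and drop the extra grant mentioned above if it is present:
gsutil acl get gs://yourbucket                           # look for public grants on the bucket itself
gsutil acl ch -d AllAuthenticatedUsers gs://yourbucket   # remove that grant too, if listed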

How do I share a bucket across accounts so that new files do not need permissions re-applied?

I currently apply permissions to a bucket like this:
gsutil acl ch -u service@account:O gs://my-bucket/
gsutil acl ch -r -u service@account:O gs://my-bucket/*
Then when I add a file, the above permissions don't get applied and I have to reapply them.
Is there any way to apply the permissions to all new files added to bucket? I would want to share these files with 5 different projects.
Another way to do this is to share a user across multiple projects.
In the project where you created the bucket, create a service account that only has the right to access the bucket. Then share this account with the other four projects.
With this you won't have to reapply rights each time, and you can still access your data.
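A rough sketch of that setup with gcloud and gsutil; the project, account, and bucket names here are placeholders:
gcloud iam service-accounts create bucket-reader --project=my-bucket-project
gsutil iam ch serviceAccount:bucket-reader@my-bucket-project.iam.gserviceaccount.com:objectViewer gs://my-bucket
The other four projects then authenticate as that service account (for example, with a downloaded key file) whenever they need to read the bucket, so no per-object ACLs have to be reapplied.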

wget parameters.yml from a non-public S3 bucket

I recently moved my app to Elastic Beanstalk, and I am running Symfony3. There is a mandatory parameters.yml file that has to be populated with environment variables.
I'd like to wget the parameters.yml from a private S3 bucket, limiting access to instances only.
I know I can set the environment variables directly on the environment, but I have some very sensitive values there, and environment variables get leaked into my logging system, which is very bad.
I also have multiple environments, such as workers, using the same environment variables, and copy-pasting them is quite annoying.
So I am wondering if it's possible to have the app wget it on deploy. I know how to do that, but I can't seem to configure the S3 bucket to only allow access from instances.
Yep, that definitely can be done, and there are different ways of doing it depending on what approach you want to take. I would suggest using .ebextensions to create an IAM role, grant that role access to your bucket, and then, after the package is unzipped on the instance, copy the object from S3 using the instance role:
Create a custom IAM role using the AWS console or .ebextensions custom resources and grant that role access to the objects in your bucket.
Related read
Using the .ebextensions mentioned above, set aws:autoscaling:launchconfiguration in option_settings to specify the instance profile you created before.
Again using .ebextensions, use the container_commands option to run the aws s3 cp command, as sketched below.
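The copy step itself could be as simple as the following, run from a container_commands entry on the instance; the bucket name and destination path here are hypothetical:
# runs with the instance profile's credentials, so no access keys are stored in the app
aws s3 cp s3://my-private-config-bucket/parameters.yml app/config/parameters.yml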

Copied S3 Bucket is not public by default

I have two S3 buckets -
production
staging
I want to periodically refresh the staging bucket so it has all the latest production objects for testing, so I used the aws-cli as follows -
aws s3 sync s3://production s3://staging
So now both buckets have the exact same files.
However, for any given file/object, the production link works and the staging doesn't
e.g.
This works: https://s3-us-west-1.amazonaws.com/production/users/photos/000/001/001/medium/my_file.jpg
This doesn't: https://s3-us-west-1.amazonaws.com/staging/users/photos/000/001/001/medium/my_file.jpg
The staging bucket's objects are not publicly accessible; they are private by default.
Is there a way to correct or avoid this with the aws-cli? I know I can change the bucket policy itself, but it was previously working for all the files that were there, so I'm wondering what it is about copying the files over that changed their visibility.
Thanks!
You should be able to add the --acl flag:
aws s3 sync s3://production s3://staging --acl public-read
As mentioned in the docs, the private ACL is the default.
Just did some more research.
Frédéric's answer is correct, but just wanted to expand on that a bit more.
aws s3 sync isn't really a true "sync" by default. It just goes through each file in the source bucket and copies it into the target bucket:
It won't overwrite a file if a target file with the same name already exists. I looked for a --force flag to force the overwrite, but apparently none exists.
It won't delete "extra" files in the target directory by default (i.e. a file that does not exist in the source directory). The --delete flag will allow you to do that.
It does not copy over permissions by default. It's true that --acl public-read will set the target permissions to publicly readable, but that has two problems: (1) it just blindly sets that for all files, which you may not want, and (2) it doesn't work when you have several files with varying permissions.
There's an issue about it here, and a PR that's open but still un-merged as of today.
So if you're trying to do a full blind refresh like me for testing purposes, the best option is to:
Completely empty the target staging bucket by right-clicking it in the console and clicking Empty.
Run the sync and blindly set everything as public-read (other visibility options are available; see the documentation here): aws s3 sync s3://production s3://staging --acl public-read
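For reference, a command-line version of those two steps (using the bucket names from the question) might look like this:
aws s3 rm s3://staging --recursive                           # empty the staging bucket
aws s3 sync s3://production s3://staging --acl public-read   # re-copy everything as publicly readable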

Simultaneous public and private credential access

I have received a username, an access key ID, and a secret access key for a public dataset on Amazon S3 (public for authorised users). I have been using s3cmd with my private account and S3 buckets. How can I configure s3cmd so that I can use both my existing private credentials and the new public-dataset credentials at the same time?
When first configuring s3cmd you probably ran s3cmd --configure and input your access and secret keys. This saves the credentials to a file ~/.s3cfg looking something like this:
[default]
access_key=your_access_key
...bunch of options...
secret_key=your_secret_key
s3cmd accepts the -c flag to point at a config file. Set up two config files, one with your first set of credentials (for example, ~/.s3cfg-private) and one with the other set (for example, ~/.s3cfg-public). Then you can use:
s3cmd -c ~/.s3cfg-public ls s3://my-public-bucket
s3cmd -c ~/.s3cfg-private ls s3://my-private-bucket
For convenience, leave the credentials you need most frequently in the file named ~/.s3cfg as it will be used by default.
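For example, a hypothetical workflow that pulls a file from the shared dataset with one set of credentials and stores a copy in your own bucket with the other:
s3cmd -c ~/.s3cfg-public get s3://my-public-bucket/dataset.csv     # download with the dataset credentials
s3cmd -c ~/.s3cfg-private put dataset.csv s3://my-private-bucket/  # upload with your private credentials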