I have received a username, an access key ID, and a secret access key for a public dataset on Amazon S3 (public for authorised users). I have been using s3cmd with my private account and S3 buckets. How can I configure s3cmd so that I can use both my existing private credentials and the new public-dataset credentials?
When first configuring s3cmd you probably ran s3cmd --configure and entered your access and secret keys. This saves the credentials to a file, ~/.s3cfg, that looks something like this:
[default]
access_key=your_access_key
...bunch of options...
secret_key=your_secret_key
s3cmd accepts the -c flag to point at a config file. Set up two config files, one with your first set of credentials (for example, ~/.s3cfg-private) and one with the other set (for example, ~/.s3cfg-public). Then you can use:
s3cmd -c ~/.s3cfg-public ls s3://my-public-bucket
s3cmd -c ~/.s3cfg-private ls s3://my-private-bucket
For convenience, leave the credentials you need most frequently in the file named ~/.s3cfg as it will be used by default.
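As a sketch, each file would follow the same [default] layout shown above; the key values here are placeholders, and the other options written by --configure are omitted:

```ini
# ~/.s3cfg-private
[default]
access_key = YOUR_PRIVATE_ACCESS_KEY
secret_key = YOUR_PRIVATE_SECRET_KEY
```

The ~/.s3cfg-public file would have the same shape, containing the dataset credentials you received.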
Related
My C++ program must access both public and private AWS buckets. The public buckets don't belong to me, so when the program tries to access them with my credentials visible, I get errors like the following:
Aws::S3::S3Errors::INVALID_ACCESS_KEY_ID
"InvalidAccessKeyId"
"The AWS Access Key Id you provided does not exist in our records."
If I manually hide my credentials like this
mv ~/.aws/credentials ~/.aws/credentials-hidden
before running the program I can successfully list and get the public objects. But then, the program can't access my private buckets.
I've searched S3Client and ClientConfiguration for an option to disable and re-enable credential checks but haven't found one.
Please tell me how this is done.
I found a solution. To access public buckets without hiding my ~/.aws/credentials file, I can create an S3Client with empty credentials:
Aws::Auth::AWSCredentials empty_credentials { };
Aws::S3::S3Client s3_client { empty_credentials, config };
Cyberduck version: 7.9.2
Cyberduck is designed to access non-public AWS buckets. It asks for:
Server
Port
Access Key ID
Secret Access Key
The Registry of Open Data on AWS provides this information for an open dataset (using the example at https://registry.opendata.aws/target/):
Resource type: S3 Bucket
Amazon Resource Name (ARN): arn:aws:s3:::gdc-target-phs000218-2-open
AWS Region: us-east-1
AWS CLI Access (No AWS account required): aws s3 ls s3://gdc-target-phs000218-2-open/ --no-sign-request
Is there a version of s3://gdc-target-phs000218-2-open that can be used in Cyberduck to connect to the data?
If the bucket is public, any AWS credentials will suffice. So as long as you can create an AWS account, you only need to create an IAM user for yourself with programmatic access, and you are all set.
No doubt, it's a pain because creating an AWS account needs your credit (or debit) card! But see https://stackoverflow.com/a/44825406/1094109
I tried this with s3://gdc-target-phs000218-2-open and it worked.
For RODA buckets that provide public access to specific prefixes, you'd need to edit the path to suit. E.g. s3://cellpainting-gallery/cpg0000-jump-pilot/source_4/ (this is a RODA bucket maintained by us, yet to be released fully)
NOTE: The screenshots originally accompanying this answer showed a different URL that's no longer operational. The correct URL is s3://cellpainting-gallery/cpg0000-jump-pilot/source_4/
No, it's explicitly stated in the documentation that
You must obtain the login credentials [in order to connect to Amazon S3 in Cyberduck]
My goal is to access my s3 buckets from the command line using my AWS educate account.
I expected to get a list of my S3 buckets in the command prompt. I typed this command: aws s3 ls
I actually received an error message saying Invalid Access Key ID.
The keys shown do not match the key on the home page of my AWS Educate account.
How do I change the keys listed to match the ones on my AWS Educate home page? I think if I correct this, I will be able to access my S3 buckets with the AWS CLI.
Run:
aws configure
And follow the prompts to configure a new Access Key and Secret Access Key.
If this isn't working, there are two other things you can check:
Have you set any of the environment variables listed at https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html? These can override the values set using aws configure.
If that fails, check that $HOME/.aws/credentials is not write protected, or even try updating the credentials manually.
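To see whether any such variables are set in the current shell, here is a quick local check (a sketch using the standard AWS CLI variable names):

```shell
# List AWS-related environment variables that take precedence over
# the files written by `aws configure`.
found=0
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_PROFILE AWS_DEFAULT_PROFILE; do
  if [ -n "$(printenv "$v")" ]; then
    echo "$v is set and overrides the credentials file"
    found=1
  fi
done
if [ "$found" -eq 0 ]; then
  echo "no overriding AWS_* variables found"
fi
```

If any variable is reported, unset it (or make sure it holds the keys you intend) before retrying aws s3 ls.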
I want to set up a recursive sync from a Linux machine (Fedora) to an AWS S3 bucket. I am logged into Linux as root and have an AWS access key and secret associated with a specific AWS user, "Lisa".
I have installed aws-cli and s3cmd and attempted to configure both. I have verified that the ~/.aws/config and ~/.aws/credentials files both have a default profile and a "Lisa" profile with access key and secret pairs. I receive errors stating that access is denied and that the access key and secret pair was not found. I have researched this on the web and verified that there are no environment variables that could be overriding the config and credentials files. I have also granted full access permissions on the bucket, created through the AWS Console, to all logged-in users. I have not rotated the keys, as they were first created a week ago, and I was able to log in and set up the AWS Console using that same key pair.
What else should I be doing before rotating the keys?
It looks like you haven't configured your AWS credentials correctly. Make sure that you have the correct access keys in your credentials file. If you don't specify a profile, the AWS CLI uses the default profile.
~/.aws/credentials
[default]
aws_access_key_id=AKIAIDEFAULTKEY
aws_secret_access_key=Mo9T7WNO….
[Lisa]
aws_access_key_id=AKIAILISASKEY
aws_secret_access_key=H0XevhnC….
This command uses the default profile:
aws s3 ls
This command uses the Lisa profile:
aws s3 ls --profile Lisa
You can set an environment variable to override the default profile.
export AWS_DEFAULT_PROFILE=Lisa
Now this command uses the profile Lisa:
aws s3 ls
If you don't know which profile is active, you can just invoke the following command:
aws sts get-caller-identity
You seem to have several terms intermixed, so it's worth knowing the difference:
Username and password are used to log in to the web-based management console. They are short, so they are human-readable and easy to remember.
Access Key (starting with AKIA) and Secret Key are used for making API calls. They are also used by the AWS CLI (which makes API calls on your behalf).
Key pair consists of a public and private key, used for authenticating SSH connections. Each key is a very long block of text.
You mention that an Access Key is not found. This could be because the wrong type of credential is being provided.
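To illustrate the difference in shape, the snippet below uses AWS's published example credentials (they are not real keys; the username and PEM header are likewise only illustrative):

```shell
# Console login: short and human-readable
username="lisa"
# API credentials: access key ID starts with AKIA; the secret is a 40-character string
access_key_id="AKIAIOSFODNN7EXAMPLE"
secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
# SSH key pair: a long PEM-encoded block, beginning with a header like this
ssh_key_header="-----BEGIN RSA PRIVATE KEY-----"

# A cheap sanity check on which kind of credential you are holding:
case "$access_key_id" in
  AKIA*) echo "this is an API access key ID, not a console password" ;;
esac
```

If your tool reports "Access Key not found", comparing what you pasted against these shapes is a quick way to spot a mixed-up credential.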
My manager has an AWS account, and using his credentials we create buckets per employee. Now I want to access another bucket through the command line. So is it possible to access two buckets (mine and one more)? I have the access key for both buckets, but I am still not able to access both buckets simultaneously so that I can upload and download my files to whichever bucket I want.
I have already tried changing the access key and secret in my s3config, but it didn't serve the purpose.
I have already been granted the ACL for that new bucket.
Thanks
The best you can do without having a single access key that has permissions for both buckets is create a separate .s3cfg file. I'm assuming you're using s3cmd.
s3cmd --configure -c .s3cfg_bucketname
This will allow you to create a new configuration in the config file .s3cfg_bucketname. From then on, when you want to access that bucket, you just have to add the command-line flag specifying which configuration to use:
s3cmd -c .s3cfg_bucketname ls
Of course, you could add a bash function to your .bashrc (now I'm assuming bash... lots of assumptions! Let me know if I'm wrong, please) to make it even simpler:
function s3bucketname(){
    s3cmd -c ~/.s3cfg_bucketname "$@"   # "$@" forwards all arguments; "$#" would only be the argument count
}
Usage:
s3bucketname ls
I'm not sure which command-line tool you are using. If you are using Timothy Kay's tool, you will find that the documentation allows you to set the access key and secret key as environment variables, not only in a config file, so you can set them on the command line before the put command.
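As a generic sketch of that pattern (the key value is AWS's published example, and the actual upload command is omitted), variables set on the same line as a command apply only to that one invocation:

```shell
# The variable is visible to the child command but does not persist
# in the surrounding shell afterwards.
AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE" sh -c 'echo "child sees: $AWS_ACCESS_KEY_ID"'
echo "parent still has: ${AWS_ACCESS_KEY_ID:-<unset>}"
```

This makes it easy to run two uploads back to back with different credentials without touching any config file.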
I am one of the developers of Bucket Explorer. You can use its Transfer panel with two sets of credentials and perform operations between your bucket and the other bucket.
For more details, read Copy Move Between two different account