I need to access a Cloudian S3 bucket and copy certain files to my local directory. What I was given was the following info:
• Access key: 5x4x3x2x1xxx
• Secret key: ssssssssssss
• Region: us-east-1
• S3 endpoint: https://s3-aaa.xxx.bbb.net
• Storage path: store/STORE1/
First I configure a profile called cloudian, which prompts for the info above:
aws configure --profile cloudian
Then I run a sample command. For instance, to copy a file:
aws --profile=cloudian --endpoint-url= https://s3-aaa.xxx.bbb.net s3 cp s3://store/STORE1/FILE.mp4 /home/myfile.mp4
But it keeps waiting, no output, no errors, nothing. Am I doing anything wrong? Is there anything missing?
If you have set the profile properly, this should work:
aws --profile cloudian s3 cp s3://s3-aaa/store/STORE1/FILE.mp4 /home/myfile.mp4
where s3-aaa is the name of the bucket.
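One likely cause visible in the question itself: there is a space after `--endpoint-url=`, so the URL is parsed as a separate positional argument instead of the endpoint value. A minimal sketch of the same copy with the endpoint passed correctly, assuming (as the question's storage path suggests) that store is the bucket name:

```shell
# Pass the endpoint with no stray space after --endpoint-url; the
# bucket/path placeholders are the ones from the question.
aws --profile cloudian --endpoint-url https://s3-aaa.xxx.bbb.net \
    s3 cp s3://store/STORE1/FILE.mp4 /home/myfile.mp4
```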
I am trying to migrate my AWS ElastiCache Redis cluster from one region to another. I followed this article.
• Took a backup of the source-region Redis.
• Created an S3 bucket in the same source region and added the necessary ACL permissions (adding the canonical account details).
• Exported the backup to the source S3 bucket (2 files, since there are 2 shards).
• Copied the .rdb files to the destination S3 bucket.
• Added the same permissions (canonical details).
• Tried to create the new Redis cluster in the destination region by importing from the S3 bucket, but got the error below.
Error:
Unable to fetch metadata from S3 for bucket: s3: and object: /Bucket/backup.rdb. Please verify the input.
I was given this info:
AWS s3 Bucket
ci****a.open
Amazon Resource Name (ARN)
arn:aws:s3:::ci****a.open
AWS Region
US West (Oregon) us-west-2
How am I supposed to download the folder without an Access Key ID and Secret Access Key?
I tried with the CLI and it still asks me for an Access Key ID and Secret Access Key.
I usually use S3 Browser, but it also asks for an Access Key ID and Secret Access Key.
With the CLI you have to pass --no-sign-request so that credentials are skipped. This will only work if the objects and/or the bucket are public.
CLI S3 commands, such as cp, require an S3 URL, not an S3 ARN:
s3://bucket-name
You can construct it yourself from the ARN, since the bucket name is part of the ARN. In your case it would be ci****a.open:
s3://ci****a.open
So you can try the following to copy everything to the current working folder:
aws s3 cp s3://ci****a.open . --recursive --no-sign-request
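Before copying, it can help to confirm that the objects really are publicly accessible; a quick check, using the same bucket name as above:

```shell
# List the bucket anonymously; if this fails with AccessDenied, the
# objects are not public and --no-sign-request cannot help.
aws s3 ls s3://ci****a.open --no-sign-request
```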
I'm new to AWS S3. I need to access a Cloudian S3 bucket and copy files within a bucket to my local directory. What I was given was four pieces of info in the following format:
• Access key: 5x4x3x2x1xxx
• Secret key: ssssssssssss
• S3 endpoint: https://s3-aaa.xxx.bbb.net
• Storage path: store/STORE1/
When I try a simple command like ls, I get this error:
aws s3 ls s3-aaa.xxx.bbb.net or aws s3 ls https://s3-aaa.xxx.bbb.net:
An error occurred (NoSuchBucket) when calling the ListObjectsV2 operation: The specified bucket does not exist
What are the right commands to access the bucket and copy a file to my local directory?
It looks like you are missing your bucket name - you should be able to see it on the AWS S3 console.
You should also be able to use either the cp or sync command like so:
aws s3 cp s3://SOURCE_BUCKET_NAME/s3/file/key SomeDrive:/path/to/my/local/directory
Or:
aws s3 sync s3://SOURCE_BUCKET_NAME/s3/file/key SomeDrive:/path/to/my/local/directory
You may also need to check the permissions on the s3 bucket.
More info:
aws s3 sync: https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
aws s3 cp: https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
aws s3 permissions: https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-access-default-encryption/
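Since this is a Cloudian endpoint rather than AWS proper, the commands above would also need the endpoint passed explicitly. A sketch for discovering the bucket name in the first place, assuming the cloudian profile from the question has been configured:

```shell
# List the buckets available on the Cloudian endpoint; the name shown
# here is what goes after s3:// in the cp/sync commands above.
aws --profile cloudian --endpoint-url https://s3-aaa.xxx.bbb.net s3 ls
```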
I am trying to download all the available files from my S3 bucket to my local machine. I have installed the AWS CLI and then used aws configure to set up the access key and secret key. I am facing an issue while trying to execute the following command:
$ aws s3 sync s3://tempobjects .
Setup commands
LAMU02XRK97:s3 vsing$ export AWS_ACCESS_KEY_ID=*******kHXE
LAMU02XRK97:s3 vsing$ export AWS_SECRET_ACCESS_KEY=******Ssv
LAMU02XRK97:s3 vsing$ aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key ****************kHXE shared-credentials-file
secret_key ****************pSsv shared-credentials-file
region us-east-1 config-file ~/.aws/config
Error:
LAMU02XRK97:s3 vsing$ aws s3 sync s3://tempobjects .
fatal error: An error occurred (InvalidAccessKeyId) when calling the ListObjectsV2 operation: The AWS Access Key Id you provided does not exist in our records.
I have replicated the scenario; to make it work, you need to make sure that the user you are using for the CLI has the same access keys configured in IAM.
Below is what is configured in the AWS CLI, and below that is what is configured in AWS IAM for the same user (screenshots not reproduced here).
The access key ending with QYHP is configured in both places, and hence it works fine for me.
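A quick way to compare the two sides from the command line (the IAM user name below is a placeholder for your own):

```shell
# The key the CLI will actually sign requests with:
aws configure list

# The keys that exist in IAM for the user (replace the placeholder name):
aws iam list-access-keys --user-name my-iam-user

# If the two do not match, the CLI key was deleted or rotated in IAM,
# which produces exactly the InvalidAccessKeyId error above.
```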
I have the AWS cli installed on an EC2 instance, and I configured it by running aws configure and giving it my AWSAccessKeyId and AWSSecretKey keys so if I run the command aws s3 ls it returns the name of my S3 bucket (call it "mybucket").
But, if I then try aws s3 cp localfolder/ s3://mybucket/ --recursive I get an error that looks like
A client error (AccessDenied) occurred when calling the CreateMultipartUpload operation: Anonymous users cannot initiate multipart uploads. Please authenticate.
I thought that by running aws configure and giving it my root key I was effectively giving the AWS CLI everything it needs to authenticate. Is there something I am missing about copying to an S3 bucket, as opposed to listing buckets?
Thought I would add a very similar issue I had, where I could list buckets but could not write to a given bucket; it returned the error:
An error occurred (AccessDenied) when calling the
CreateMultipartUpload operation: Access Denied
If the bucket uses server-side encryption you'll need to add the --sse flag to be able to write to this bucket.
https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
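For example, the failing recursive copy from the question with SSE requested (AES256 is one of the values the --sse flag accepts; the bucket name is the one from the question):

```shell
# Request server-side encryption on upload; a bucket policy that
# mandates SSE can otherwise reject the CreateMultipartUpload call.
aws s3 cp localfolder/ s3://mybucket/ --recursive --sse AES256
```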
Root access keys and secret keys have full control and full privileges to interact with AWS. Please try running aws configure again to recheck the settings, and try again.
PS: using root access keys is strongly discouraged; consider creating an IAM user instead (which can be given admin privileges, like root) and using its keys.
If you have the environment variables AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID and AWS_REGION set, the AWS CLI gives them higher precedence than the credentials you specify with aws configure.
So, in my case, bash command unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY solved the problem.
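You can confirm which source the CLI is picking up: aws configure list reports env in the Type column when environment variables win.

```shell
# Show where each credential value is coming from ("env" means the
# environment variables are overriding the shared credentials file):
aws configure list

# Drop the environment overrides so the configured profile applies:
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
```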