I'm new to AWS S3. I need to access a Cloudian S3 bucket and copy files from the bucket to my local directory. What I was given were 4 pieces of info in the following format:
• Access key: 5x4x3x2x1xxx
• Secret key: ssssssssssss
• S3 endpoint: https://s3-aaa.xxx.bbb.net
• Storage path: store/STORE1/
When I try a simple command like ls, either aws s3 ls s3-aaa.xxx.bbb.net or aws s3 ls https://s3-aaa.xxx.bbb.net, I get this error:
An error occurred (NoSuchBucket) when calling the ListObjectsV2 operation: The specified bucket does not exist
What are the right commands to access the bucket and copy a file to my local directory?
It looks like you are missing your bucket name - you should be able to see it in your S3 console (for a Cloudian endpoint that is typically Cloudian's own management console rather than the AWS one), or you can ask whoever gave you the credentials.
You should also be able to use either the cp or sync command like so:
aws s3 cp s3://SOURCE_BUCKET_NAME/s3/file/key SomeDrive:/path/to/my/local/directory
Or:
aws s3 sync s3://SOURCE_BUCKET_NAME/s3/file/key SomeDrive:/path/to/my/local/directory
You may also need to check the permissions on the s3 bucket.
More info:
aws s3 sync: https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
aws s3 cp: https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
aws s3 permissions: https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-access-default-encryption/
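Since this is a Cloudian endpoint rather than AWS itself, the client also has to be pointed at that endpoint (the --endpoint-url flag on the CLI, or endpoint_url in boto3); otherwise the bucket name is looked up on AWS, which is likely why the ls above returned NoSuchBucket. As a minimal boto3 sketch, assuming the endpoint and keys from the question are valid and guessing from the storage path store/STORE1/ that the bucket might be called store, you could first list what the credentials can actually see:

import boto3

# Point the client at the Cloudian endpoint instead of AWS
client = boto3.client(
    "s3",
    endpoint_url="https://s3-aaa.xxx.bbb.net",
    aws_access_key_id="5x4x3x2x1xxx",
    aws_secret_access_key="ssssssssssss",
)

# Discover which buckets these credentials can see
for bucket in client.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Then list objects under the storage path you were given
# ("store" as the bucket name is only a guess based on store/STORE1/)
response = client.list_objects_v2(Bucket="store", Prefix="STORE1/")
for obj in response.get("Contents", []):
    print(obj["Key"])

The CLI equivalent of the first step would be aws s3 ls --endpoint-url https://s3-aaa.xxx.bbb.net with the same credentials configured.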
I am trying to migrate my AWS ElastiCache Redis from one region to another region. Followed this article:
Took a backup of the Redis in the source region.
Created an s3 bucket in the same source region and provided the necessary ACL permissions (adding the canonical account details).
Exported the backup to the source s3 bucket (2 files, since there are 2 shards).
Copied the .rdb files to the destination s3 bucket.
Added the same permissions (canonical details).
Tried to create the new Redis in the destination region by seeding it from the s3 bucket, but I get the error below.
Error:
Unable to fetch metadata from S3 for bucket: s3: and object: /Bucket/backup.rdb. Please verify the input.
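The bucket: s3: and object: /Bucket/backup.rdb in that message suggests the import form may have been given the full s3://Bucket/backup.rdb URI where only the bucket name and object path were expected, so that is worth re-checking first. For the copy step itself, here is a minimal boto3 sketch of moving the shard .rdb files between the two buckets (all names below are placeholders, not values from the question):

import boto3

source_bucket = "my-source-backup-bucket"             # bucket in the source region (placeholder)
destination_bucket = "my-destination-backup-bucket"   # bucket in the destination region (placeholder)
rdb_keys = ["backup-0001.rdb", "backup-0002.rdb"]     # one .rdb per shard (placeholders)

s3 = boto3.resource("s3")
for key in rdb_keys:
    # Server-side copy of each shard's .rdb into the destination bucket
    s3.Bucket(destination_bucket).copy({"Bucket": source_bucket, "Key": key}, key)
    print("copied s3://{}/{} -> s3://{}/{}".format(source_bucket, key, destination_bucket, key))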
I was given this info:
AWS s3 Bucket
ci****a.open
Amazon Resource Name (ARN)
arn:aws:s3:::ci****a.open
AWS Region
US West (Oregon) us-west-2
How am I supposed to download the folder without an Access Key ID and Secret Access Key?
I tried with the CLI and it still asks me for an Access Key ID and Secret Access Key.
I usually use S3 Browser, but it also asks for an Access Key ID and Secret Access Key.
"I tried with the CLI and it still asks me for an Access Key ID and Secret Access Key."
For the CLI you have to use --no-sign-request so that credentials are skipped. This will only work if the objects and/or your bucket are public.
CLI S3 commands, such as cp, require an S3 URL, not an S3 ARN:
s3://bucket-name
You can build it yourself from the ARN, since the bucket name is part of the ARN. In your case it would be ci****a.open:
s3://ci****a.open
So you can try the following to copy everything to the current working folder:
aws s3 cp s3://ci****a.open . --recursive --no-sign-request
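If you would rather script it, the boto3 equivalent of --no-sign-request is an unsigned client. A minimal sketch, using the bucket name from the ARN above and flattening object keys into local file names:

import boto3
from botocore import UNSIGNED
from botocore.config import Config

bucket_name = "ci****a.open"  # bucket name taken from the ARN above

# Unsigned client: no Access Key ID / Secret Access Key required,
# but this only works if the bucket/objects are public.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket_name):
    for obj in page.get("Contents", []):
        local_name = obj["Key"].replace("/", "_")  # flatten the key into a file name
        s3.download_file(bucket_name, obj["Key"], local_name)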
I need to access a Cloudian S3 bucket and copy certain files to my local directory. What I was given were these pieces of info in the following format:
• Access key: 5x4x3x2x1xxx
• Secret key: ssssssssssss
• Region: us-east-1
• S3 endpoint: https://s3-aaa.xxx.bbb.net
• Storage path: store/STORE1/
What I do first is run aws configure to create a profile called cloudian, which asks for the info above:
aws configure --profile cloudian
And then I run a sample command. For instance to copy a file:
aws --profile=cloudian --endpoint-url= https://s3-aaa.xxx.bbb.net s3 cp s3://store/STORE1/FILE.mp4 /home/myfile.mp4
But it keeps waiting, no output, no errors, nothing. Am I doing anything wrong? Is there anything missing?
If you have set the profile properly, this should work (keep the --endpoint-url option, with no space after the =, since the request has to go to the Cloudian endpoint rather than to AWS):
aws --profile cloudian --endpoint-url=https://s3-aaa.xxx.bbb.net s3 cp s3://BUCKET_NAME/STORE1/FILE.mp4 /home/myfile.mp4
where BUCKET_NAME is the name of your bucket (given the storage path store/STORE1/, it may simply be store).
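If the CLI still hangs, a quick way to take it out of the picture is a short boto3 check against the same endpoint and profile (a sketch; using store as the bucket name is only an assumption based on the storage path you were given):

import boto3

session = boto3.Session(profile_name="cloudian")
client = session.client("s3", endpoint_url="https://s3-aaa.xxx.bbb.net")

# Download the file; adjust the bucket/key split if "store" is not actually the bucket
client.download_file("store", "STORE1/FILE.mp4", "/home/myfile.mp4")

If this errors out instead of hanging, the message should tell you whether the bucket name, the credentials, or the endpoint is the problem.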
I have access to both AWS and Google Cloud Platform.
Is it possible to do the following:
List Google Cloud Storage bucket using aws-cli
PUT a CSV file to Google Cloud Storage bucket using aws-cli
GET an object(s) from Google Cloud Storage bucket using aws-cli
It is possible. Per the GCP documentation:
The Cloud Storage XML API is interoperable with ... services such as Amazon Simple Storage Service (Amazon S3)
To do this you need to enable Interoperability on the Settings screen in the Google Cloud Storage console. From there you can create a storage access key.
Configure the AWS CLI with those keys, i.e. aws configure.
You can then use the aws s3 command with the --endpoint-url flag set to https://storage.googleapis.com.
For example:
MacBook-Pro:~$ aws s3 --endpoint-url https://storage.googleapis.com ls
2018-02-09 14:43:42 foo.appspot.com
2018-02-09 14:43:42 bar.appspot.com
2018-05-02 20:03:08 etc.appspot.com
aws s3 --endpoint-url https://storage.googleapis.com cp test.md s3://foo.appspot.com
upload: ./test.md to s3://foo.appspot.com/test.md
I had a requirement to copy objects from GC storage bucket to S3 using AWS Lambda.
The Python boto3 library allows listing and downloading objects from a GC bucket.
Below is sample Lambda code that copies the "sample-data.csv" object from a GC bucket to an s3 bucket (as "sample-data-s3.csv").
import boto3
import io

s3 = boto3.resource('s3')
google_access_key_id = "GOOG1EIxxMYKEYxxMQ"
google_access_key_secret = "QifDxxMYSECRETKEYxxVU1oad1b"
gc_bucket_name = "my_gc_bucket"

def get_gcs_objects(google_access_key_id, google_access_key_secret, gc_bucket_name):
    """Gets GCS objects using the boto3 SDK."""
    client = boto3.client(
        "s3",
        region_name="auto",
        endpoint_url="https://storage.googleapis.com",
        aws_access_key_id=google_access_key_id,
        aws_secret_access_key=google_access_key_secret,
    )

    # Call GCS to list objects in gc_bucket_name
    response = client.list_objects(Bucket=gc_bucket_name)

    # Print object names
    print("Objects:")
    for blob in response["Contents"]:
        print(blob)

    # Download one object from GCS into memory and upload it to the s3 bucket
    s3_object = s3.Object('my_aws_s3_bucket', 'sample-data-s3.csv')
    f = io.BytesIO()
    client.download_fileobj(gc_bucket_name, "sample-data.csv", f)
    s3_object.put(Body=f.getvalue())

def lambda_handler(event, context):
    get_gcs_objects(google_access_key_id, google_access_key_secret, gc_bucket_name)
You can loop through response["Contents"] to download all the objects from the GC bucket.
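For example, that loop might look like this (a sketch, reusing the client and placeholder names from the snippet above; files go to /tmp, the only writable path in Lambda):

def download_all_gcs_objects(client, gc_bucket_name):
    """Download every object in the GC bucket to /tmp."""
    # Note: list_objects returns at most 1000 keys per call
    response = client.list_objects(Bucket=gc_bucket_name)
    for blob in response.get("Contents", []):
        key = blob["Key"]
        local_path = "/tmp/" + key.replace("/", "_")  # flatten the key into a file name
        client.download_file(gc_bucket_name, key, local_path)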
Hope this helps someone who wants to use AWS Lambda to transfer objects from a GC bucket to an s3 bucket.
~$ aws configure
AWS Access Key ID [****************2ZL8]:
AWS Secret Access Key [****************obYP]:
Default region name [None]: us-east-1
Default output format [None]:
~$ aws s3 ls --endpoint-url=<east-region-url>
2019-02-18 12:18:05 test
~$ aws s3 cp test.py s3://<bucket-name> --endpoint-url=<east-region-url>
~$ aws s3 mv s3://<bucket-name>/<filename> test1.txt --endpoint-url=<east-region-url>
Unfortunately, this is not possible.
Could you maybe update your question with why you want to do this? Maybe we know of an alternative solution to your question.
I want to sync data between two s3 buckets.
The problem is that each one is owned by different AWS accounts (i.e. access key id and secret access key).
I tried to make the destination bucket publicly writable, but I still get
fatal error: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
How to solve this?
I solved it by giving the source bucket's AWS account permission to write to the destination bucket.
I went to the "Permissions" tab of the destination bucket, then "Access for other AWS accounts", and granted permissions to the source bucket's AWS account using the account's email address.
Then I copied the files using the AWS CLI (don't forget to grant full control to the recipient account!):
aws s3 cp s3://<source_bucket>/<folder_path>/ s3://<destination_bucket> --recursive --profile <source_AWSaccount_profile> --grants full=emailaddress=<destination_account_emailaddress>
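If you prefer doing the same from boto3, here is a minimal sketch (the profile name, bucket names, prefix, and grantee email address are all placeholders):

import boto3

# Use the profile that holds the source account's credentials (placeholder name)
session = boto3.Session(profile_name="source-account")
s3 = session.client("s3")

source_bucket = "source-bucket-name"            # placeholder
destination_bucket = "destination-bucket-name"  # placeholder
prefix = "folder_path/"                         # placeholder

# Copy every object under the prefix, granting the destination account full control
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=source_bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=destination_bucket,
            Key=obj["Key"],
            CopySource=source_bucket + "/" + obj["Key"],
            GrantFullControl='emailaddress="destination-account@example.com"',
        )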