I'm running through the Redshift tutorials on the AWS site, and I can't access their sample data buckets with the COPY command. I know I'm using the right Key and Secret Key, and have even generated new ones to try, without success.
The error from S3 is S3ServiceException:Access Denied,Status 403,Error AccessDenied. Amazon says this is related to permissions for a bucket, but they don't specify credentials to use for accessing their sample buckets, so I assume they're open to the public?
Anyone got a fix for this or am I misinterpreting the error?
I was misinterpreting the error. The buckets are publicly accessible and you just have to give your IAM user access to the S3 service.
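For anyone else who hits this: attaching S3 read access to the IAM user whose keys you pass to COPY was all it took. A rough sketch with the AWS CLI (the user name is just an example; the AWS-managed read-only policy is one simple option):

# grant the IAM user read access to S3 (user name is a placeholder)
aws iam attach-user-policy \
    --user-name redshift-tutorial-user \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess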
Related
I have an old archive folder on an on-premises Windows server that I need to put into an S3 bucket, but I'm having issues; it's more my limited knowledge of AWS, to be honest, but I'm trying.
I have created the S3 bucket and was able to attach it to the server using net share (AWS gives you the command via the AWS gateway), and I gave it a drive letter. I then tried to use robocopy to copy the data, but it didn't like the drive letter for some reason.
I then read that I could use the AWS CLI, so I tried something like:
aws s3 sync z: s3://archives-folder1
I get: fatal error: Unable to locate credentials
I guess I need to put some credentials somewhere (.aws), but after reading too many documents I'm not sure what to do at this point. Could someone advise?
Maybe there is a better way.
Thanks
You do not need to 'attach' the S3 bucket to your system. You can simply use the AWS CLI to communicate directly with Amazon S3.
First, however, you need to provide the AWS CLI with a set of AWS credentials that can be used to access the bucket. You can do this with:
aws configure
It will ask for an Access Key and Secret Key. You can obtain these from the Security Credentials tab when viewing your IAM User in the IAM management console.
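After that, you can run the sync directly against the local folder, without mapping the bucket to a drive letter. A rough sketch (the local path is just an example; the bucket name is from your command):

# one-time credential setup; writes to the .aws folder in your user profile
aws configure

# copy the archive folder straight to the bucket
aws s3 sync "D:\old-archive" s3://archives-folder1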
I have an Amazon S3 bucket that is being used by CloudTrail.
However, the S3 bucket is not visible in S3.
When I click on the bucket in CloudTrail, it links to S3 but I get access denied.
The bucket is currently in use by CloudTrail, and based on the icons that seems to be working fine.
So, it seems this is an existing bucket but I cannot access it!
I also tried to access the S3 bucket with the root account, but the same issue occurs there.
Please advise on how I would regain access.
Just because CloudTrail has access to the bucket doesn't mean your account does too.
You would need to talk to whoever manages your security and request access, or, if this is your account, make sure you are logged in with credentials that have the proper access.
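If it is your account and your user simply lacks S3 permissions on that bucket, one option is an inline policy that allows listing and reading it. A rough sketch with the AWS CLI (the bucket and user names are placeholders):

# minimal policy allowing list/read on the trail bucket (names are examples)
cat > cloudtrail-bucket-read.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::my-cloudtrail-logs" },
    { "Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-cloudtrail-logs/*" }
  ]
}
EOF

# attach it inline to your IAM user, then try listing the bucket again
aws iam put-user-policy --user-name my-user \
    --policy-name cloudtrail-bucket-read \
    --policy-document file://cloudtrail-bucket-read.json
aws s3 ls s3://my-cloudtrail-logs/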
I am trying to deploy JIRA on AWS, but am having a hard time setting it up. I couldn't find any document on how to troubleshoot the following errors.
First one is:
S3 error: Access Denied For more information check
I made the S3 bucket public and was able to bypass this error, but I don't want it to be public. Since I'm creating a whole new stack, I don't have any instance details I can use to allow it permission to the S3 bucket.
Is there any way to troubleshoot this error without adjusting the bucket to be public?
After bypassing the previous error, I was getting this error:
S3 error: The specified key does not exist.
I couldn't find how to troubleshoot this issue. What needs to be done to fix this error?
The Access Denied error indicates that you do not have permissions to access content in Amazon S3. The normal way of providing these permissions is (a rough sketch follows the steps):
Create an IAM Role
Assign permission to the role sufficient to access the S3 bucket
Assign the Role to the Amazon EC2 instance running the software
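For example, those three steps look roughly like this with the AWS CLI (role, profile, and instance IDs are placeholders, and the managed read-only policy is just one way to grant S3 access):

# trust policy letting EC2 assume the role
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
EOF

# 1. create the role
aws iam create-role --role-name jira-s3-access --assume-role-policy-document file://ec2-trust.json

# 2. give the role permission to read the bucket
aws iam attach-role-policy --role-name jira-s3-access \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# 3. wrap the role in an instance profile and attach it to the EC2 instance
aws iam create-instance-profile --instance-profile-name jira-s3-access
aws iam add-role-to-instance-profile --instance-profile-name jira-s3-access --role-name jira-s3-access
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=jira-s3-access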
The specified key does not exist error basically means File Not Found.
If you want any further troubleshooting tips, you'll need to provide details of what you are doing (e.g. the commands used) and what specific errors you are receiving.
You may also wish to read:
Getting started with JIRA Data Center on AWS - Atlassian Documentation
JIRA on AWS - Quick Start
I need a solution for this issue: we have loaded many images onto the server, and none of the URLs are working.
Make sure that you have set up proper ACLs for your images. Enable Read permissions for everyone on all images, or you can set up a bucket policy.
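For example, with the AWS CLI you can make a single uploaded image publicly readable, or allow public reads on everything in the bucket with a bucket policy. A rough sketch (bucket and key names are placeholders, and Block Public Access must not be overriding this):

# make one uploaded image publicly readable
aws s3api put-object-acl --bucket my-image-bucket --key images/photo1.jpg --acl public-read

# or allow public reads for every object with a bucket policy
cat > public-read.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForImages",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-image-bucket/*"
    }
  ]
}
EOF
aws s3api put-bucket-policy --bucket my-image-bucket --policy file://public-read.json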
I am new to Amazon EMR and Hadoop in general. I am currently trying to set up a Pig job on an EMR cluster and to import and export data from S3. I have set up a bucket in S3 with my data, named "datastackexchange". In an attempt to begin copying the data to Pig, I have used the following command:
ls s3://datastackexchange
And I am met with the following error message:
AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).
I presume I am missing some critical steps (presumably involving setting up the access keys). As I am very new to EMR, could someone please explain what I need to do to get rid of this error and allow me to use my S3 data in EMR?
Any help is greatly appreciated - thank you.
As you correctly observed, your EMR instances do not have the privileges to access the S3 data. There are many ways to specify the AWS credentials to access your S3 data, but the correct way is to create IAM role(s) for accessing your S3 data.
Configure IAM Roles for Amazon EMR explains the steps involved.
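If you just want the standard setup, the AWS CLI can create the default EMR roles for you, and a cluster launched with them can read s3:// paths from Pig without embedding access keys in the URL. A rough sketch (release label, instance type, and count are examples only):

# create the default EMR service role and EC2 instance profile
aws emr create-default-roles

# launch a cluster that uses those roles and includes Pig
aws emr create-cluster --name "pig-cluster" \
    --release-label emr-5.36.0 \
    --applications Name=Pig \
    --use-default-roles \
    --instance-type m5.xlarge --instance-count 3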