Background
I am attempting to upload a file to an AWS S3 bucket in Jenkins. I am using the steps/closures provided by the AWS Steps plugin. I am using an Access Key ID and an Access Key Secret and storing them as a username and password, respectively, in Credential Manager.
Code
Below is the code I am using in a declarative pipeline script:
sh('echo "test" > someFile') // create a scratch file to upload
withAWS(credentials: 'AwsS3', region: 'us-east-1') {
    s3Upload(file: 'someFile', bucket: 'ec-sis-integration-test', acl: 'BucketOwnerFullControl')
}
sh('rm -f someFile') // clean up the scratch file
Here is a screenshot of the credentials as they are stored globally in Credential Manager.
Issue
Whenever I run the pipeline, I get the following error:
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 5N9VEJBY5MDZ2K0W; S3 Extended Request ID: AJmuP635cME8m035nA6rQVltCCJqHDPXsjVk+sLziTyuAiSN23Q1j5RtoQwfHCDXAOexPVVecA4=; Proxy: null), S3 Extended Request ID: AJmuP635cME8m035nA6rQVltCCJqHDPXsjVk+sLziTyuAiSN23Q1j5RtoQwfHCDXAOexPVVecA4=
Does anyone know why this isn't working?
Troubleshooting
I have verified that the Access Key ID and Access Key Secret combination works by testing it out through a small Java application I wrote (a rough equivalent of that check is sketched after the snippet below). Additionally, I set the ID/secret via Java system properties (through the script console), but I still get the same error:
System.setProperty("aws.accessKeyId", "<KEY_ID>")
System.setProperty("aws.secretKey", "<KEY_SECRET>")
I also tried changing the credential type from username/password to AWS Credentials, as seen below. It made no difference.
It might be a bucket and object ownership issue. Check whether the credentials you are using actually allow you to upload to the bucket ec-sis-integration-test.
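For example, a minimal sketch of that check with boto3 (bucket name and ACL taken from the question; the key values are placeholders):

# Attempt the same upload outside Jenkins: same bucket, same canned ACL.
# A 403 here confirms the keys lack upload permission on the bucket itself.
import boto3

s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    aws_access_key_id="<KEY_ID>",
    aws_secret_access_key="<KEY_SECRET>",
)
s3.put_object(
    Bucket="ec-sis-integration-test",
    Key="someFile",
    Body=b"test",
    ACL="bucket-owner-full-control",
)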
Related
I am trying to add external storage, an AWS S3 bucket, to my Nextcloud. However, this is not possible, because I get the following error message:
Exception: Creation of bucket "nextcloud-modul346" failed. Error executing "CreateBucket" on "http://nextcloud-modul346.s3.eu-west-1.amazonaws.com/"; AWS HTTP error: Client error: `PUT http://nextcloud-modul346.s3.eu-west-1.amazonaws.com/` resulted in a `403 Forbidden` response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided (truncated...)
InvalidAccessKeyId (client): The AWS Access Key Id you provided does not exist in our records. - <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>ASIARERFVIEWRBG5WD63</AWSAccessKeyId><RequestId>M6BN3MC6F0214DQM</RequestId><HostId>gVf0nUVJXQDL2VV50pP0qSzbTi+N+8OMbgvj4nUMv10pg/T5VVccb4IstfopzzhuxuUCtY+1E58=</HostId></Error>
However, I cannot use IAM users or groups, as this is blocked by my organization. Also, I work with the AWS Learner Lab and I have to use S3.
As credentials, I have specified in Nextcloud the aws_access_key_id and aws_secret_access_key from the Learner Lab. However, I cannot connect with them. This post hasn't helped either.
Does anyone know a solution to this problem which does not involve IAM?
Thanks for any help!
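One way to sanity-check the Learner Lab keys outside Nextcloud is a minimal boto3 sketch like the one below (all values are placeholders). Note that it also passes aws_session_token: ASIA-prefixed access keys, like the one in the error above, are temporary credentials and are rejected without their matching token.

# Reproduce the failing CreateBucket call directly. ASIA-prefixed access keys
# are temporary, so the matching session token must be supplied as well.
import boto3

s3 = boto3.client(
    "s3",
    region_name="eu-west-1",
    aws_access_key_id="<aws_access_key_id>",
    aws_secret_access_key="<aws_secret_access_key>",
    aws_session_token="<aws_session_token>",  # issued alongside the Learner Lab keys
)
s3.create_bucket(
    Bucket="nextcloud-modul346",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)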
I have a Spark-on-Hadoop application that writes to AWS S3.
The problem is that I am using AssumedRoleCredentialProvider as the credential provider, because I have a role ARN:
spark.sparkContext._jsc.hadoopConfiguration().set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider")
spark.sparkContext._jsc.hadoopConfiguration().set("fs.s3a.assumed.role.arn","arn:aws:iam::321849:role/some_role")
But I also have a session token to be used:
spark.sparkContext._jsc.hadoopConfiguration().set('fs.s3a.access.key','xxxx')
spark.sparkContext._jsc.hadoopConfiguration().set('fs.s3a.secret.key','xxxx')
spark.sparkContext._jsc.hadoopConfiguration().set('fs.s3a.session.token','xxxx')
As I am not providing org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider as the credential provider, I hit the error below even though I have a valid session token:
The security token included in the request is invalid. (Service: AWSSecurityTokenService; Status Code: 403; Error Code: InvalidClientTokenId;
Any pointers on how to solve this for both scenarios?
AssumedRoleCredentialProvider needs full credentials, as it talks to STS to get credentials for the given role. AFAIK you can't talk to STS with session creds.
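As a sketch (same S3A settings as in the question, key values are placeholders; the two options are alternatives, so pick whichever matches the credentials you actually hold):

hc = spark.sparkContext._jsc.hadoopConfiguration()

# Option A: keep AssumedRoleCredentialProvider, which itself calls STS,
# and give it full (non-session) keys to authenticate that call.
hc.set("fs.s3a.aws.credentials.provider",
       "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider")
hc.set("fs.s3a.assumed.role.arn", "arn:aws:iam::321849:role/some_role")
hc.set("fs.s3a.access.key", "<full access key>")
hc.set("fs.s3a.secret.key", "<full secret key>")

# Option B: you already hold session credentials, so skip the role assumption
# and hand them to TemporaryAWSCredentialsProvider directly.
hc.set("fs.s3a.aws.credentials.provider",
       "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider")
hc.set("fs.s3a.access.key", "<session access key>")
hc.set("fs.s3a.secret.key", "<session secret key>")
hc.set("fs.s3a.session.token", "<session token>")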
But the Hadoop utility tool cloudstore can generate a set of session keys and give you the values in the form of Hadoop XML, Spark defaults, and env vars for bash and fish. I use it when I need to use my creds on a remote cluster.
https://github.com/steveloughran/cloudstore/blob/trunk/src/main/site/sessionkey.md
I'm running Spark 2.4 on an EC2 instance. I am assuming an IAM role and setting the key/secret key/token in the sparkSession.sparkContext.hadoopConfiguration, along with the credentials provider as "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider".
When I try to read a dataset from S3 (using s3a, which is also set in the Hadoop config), I get an error that says:
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 7376FE009AD36330, AWS Error Code: null, AWS Error Message: Forbidden
Read command:
val myData = sparkSession.read.parquet("s3a://myBucket/myKey")
I've repeatedly checked the S3 path, and it's correct. My assumed IAM role has the right privileges on the S3 bucket. The only thing I can figure at this point is that Spark has some sort of hidden credential-chain ordering, and even though I have set the credentials in the Hadoop config, it is still grabbing credentials from somewhere else (my instance profile???). But I have no way to diagnose that.
Any help is appreciated. Happy to provide any more details.
spark-submit will pick up your env vars and set them as the fs.s3a access + secret + session key, overwriting any you've already set.
If you only want to use the IAM credentials, just set fs.s3a.aws.credentials.provider to com.amazonaws.auth.InstanceProfileCredentialsProvider; it'll be the only one used
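In config terms that is a single setting (a sketch, using the same hadoopConfiguration handle as in the question):

# Pin the S3A credential chain to the instance profile alone, so keys picked
# up from env vars can no longer shadow it.
spark.sparkContext._jsc.hadoopConfiguration().set(
    "fs.s3a.aws.credentials.provider",
    "com.amazonaws.auth.InstanceProfileCredentialsProvider",
)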
Further Reading: Troubleshooting S3A
I am attempting to set up AWS with an Elastic Beanstalk instance I previously created. I have entered the various details into the config file; however, when I try to use aws.push I get the error message:
Updating the AWS Elastic Beanstalk environment x-xxxxxx...
Error: Failed to get the Amazon S3 bucket name
I have checked the credentials in IAM and I have full administrator privileges. When I run eb status show green, I get the following message:
InvalidClientTokenId. The security token included in the request is invalid.
Run aws configure again and re-enter your credentials. It's likely you're using old (disabled or deleted) access/secret keys, or you accidentally swapped the access key with the secret key.
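A quick way to see what the stored keys actually resolve to (a sketch, assuming boto3 is installed):

# Prints the account and ARN behind the currently configured keys.
# With stale or swapped keys this raises InvalidClientTokenId instead.
import boto3

print(boto3.client("sts").get_caller_identity())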
For me, it was that my system clock was off by more than 5 minutes. Updating the time fixed the issue.
I'm trying to test out AWS S3 in Eclipse using Java. I'm just trying to execute the Amazon S3 sample, but it doesn't recognise my credentials, and I'm sure my credentials are legitimate. It gives me the following error:
===========================================
Getting Started with Amazon S3
===========================================
Listing buckets
Caught an AmazonServiceException, which means your request made it to Amazon S3, but was rejected with an error response for some reason.
Error Message: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 057D91D336C1FASC, AWS Error Code: InvalidAccessKeyId, AWS Error Message: The AWS Access Key Id you provided does not exist in our records.
HTTP Status Code: 403
AWS Error Code: InvalidAccessKeyId
Error Type: Client
Request ID: 057D91D336C1FASC
A little update here: there's a credentials file that AWS creates on the computer; in my case it was /Users/macbookpro/.aws/credentials. The file in this location decides the default accessKeyId and so on. Go ahead and update it.
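For reference, that file uses the standard AWS profile format; a minimal sketch with placeholder values:

[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>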
So I ran into the same issue, but I think I figured it out.
I was using Node.js, but I think the problem is the same, since the issue is how they have structured the config object.
In JavaScript, if you run this on the backend:
var aws = require('aws-sdk');
aws.config.accessKeyId = "Key bablbalab";
console.log(aws.config.accessKeyId);
You will find it prints out something different, because the correct way of setting the accessKeyId isn't what they provide in the official website tutorial:
aws.config.accessKeyId = "balbalb";
or
aws.config.loadFromPath('./awsConfig.json');
or any of that.
If you log the entire aws.config, you will find the correct way is:
console.log(aws.config);
console.log(aws.config.credentials.secretAccessKey);
aws.config.credentials.secretAccessKey = "Key balbalab";
You see the structure of the object? That's where the inconsistency is.