I am trying to upload a file to an Amazon S3 bucket using Java.
I have the policy below, which should reject any file that is not encrypted.
{
  "Version": "2012-10-17",
  "Id": "PutObjPolicy",
  "Statement": [
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::file-upload-test/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}
Now, when I try to use the code below, I get an Access Denied exception.
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: F287836ABBF2B486)
The Java code I am using is:
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(fileSize);
// Sets the x-amz-server-side-encryption: AES256 header on the upload
objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
objectMetadata.setContentType(contentType);
PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, fileName, inputStream, objectMetadata);
s3client.putObject(putObjectRequest);
Is this Java code the correct way to achieve this?
If yes, what modifications are required?
If no, should I use client-side encryption instead?
Please suggest.
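For reference, this is a minimal, self-contained sketch of what I am attempting (assuming the AWS SDK for Java v1; the class name, object key, and payload are placeholders, and the bucket name is the one from the policy above). It also checks the SSE algorithm echoed back in the response:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.PutObjectResult;
import java.io.ByteArrayInputStream;

public class SseUploadCheck {
    public static void main(String[] args) {
        byte[] payload = "hello".getBytes();

        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(payload.length);
        // Adds the x-amz-server-side-encryption: AES256 header that the bucket policy checks for
        metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
        metadata.setContentType("text/plain");

        AmazonS3 s3client = AmazonS3ClientBuilder.standard().build();
        PutObjectResult result = s3client.putObject(
                new PutObjectRequest("file-upload-test", "example.txt",
                        new ByteArrayInputStream(payload), metadata));

        // Should print AES256 if the header was accepted and applied by S3
        System.out.println(result.getSSEAlgorithm());
    }
}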
Most threads on similar topics advise creating an IAM role to assign to the Lambda function and creating a bucket-level policy in the S3 console that allows access for that role.
I have created both, as shown below.
Role for my Lambda function:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmtlamda",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::referencedataepiko/*",
        "arn:aws:s3:::referencedataepiko"
      ]
    }
  ]
}
I have a policy configured at the bucket level:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::************:role/$role_name_lamda"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::referencedataepiko/*",
        "arn:aws:s3:::referencedataepiko"
      ]
    }
  ]
}
I thought these would have been sufficient :) After applying the above configuration I tested the Lambda from its Test tab, but I still get the same error: "error": "Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied.."
I would appreciate knowing what is missing above.
(Note: there is no server-side encryption on the S3 buckets.)
One more addition to the post: I am using the AWS SDK for Java to write the Lambda function. I have tried a few combinations for setting up the S3 client object, and I am including them here in case this is part of the problem.
//com.amazonaws.services.s3.model.Region region = com.amazonaws.services.s3.model.Region.US_East_2;
//AmazonS3 client = AmazonS3ClientBuilder.standard().build();
//AmazonS3 client = AmazonS3ClientBuilder.standard().withRegion(Regions.AP_SOUTH_1).build();
Creds creds = new Creds();
AmazonS3 client = new AmazonS3Client(new BasicAWSCredentials(creds.getAWSAccessKeyId(), creds.getAWSSecretKey()))
.withRegion(Region.getRegion(Regions.US_EAST_1));
The problem is here:
Creds creds = new Creds();
AmazonS3 client = new AmazonS3Client(new BasicAWSCredentials(creds.getAWSAccessKeyId(), creds.getAWSSecretKey()))
.withRegion(Region.getRegion(Regions.US_EAST_1));
You are creating a Creds object and in the next line calling its getAWSSecretKey and getAWSAccessKey methods; those values might be null.
There is no need to pass the access key and secret key separately when creating a client for any AWS service inside a Lambda function. You can try this:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new DefaultAWSCredentialsProviderChain())
        .build();
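If the bucket lives in a specific region, you can also set the region on the builder. A sketch, reusing the AP_SOUTH_1 region from your commented-out line (adjust it to wherever the bucket actually is):

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Inside Lambda the credentials come from the function's execution role,
// so the default provider chain picks them up automatically.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new DefaultAWSCredentialsProviderChain())
        .withRegion(Regions.AP_SOUTH_1) // example region; use the bucket's region
        .build();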
I have an EMR application that runs inside a VPC, in a private subnet with NAT enabled.
The application can read files from buckets in the same account, but when it tries to read a file in a bucket in another AWS account it gets Access Denied.
The EMR application has the following policy:
S3FullAccess:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*",
        "s3-object-lambda:*"
      ],
      "Resource": "*"
    }
  ]
}
EMR Spark packages settings:
--conf spark.jars.packages=org.apache.hadoop:hadoop-aws:3.2.0
Spark job:
from pyspark.sql import SparkSession
spark = (SparkSession.builder
    .config("spark.hadoop.fs.s3a.fast.upload", True)
    .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:3.2.0")
    .enableHiveSupport().getOrCreate()
)
df_from_another_s3_account_bucket = spark.read.parquet('s3a://bucket_account_b/path/file.parquet')
This execution is returning the error:
java.nio.file.AccessDeniedException: s3a://bucket_account_b/path/file.parquet: getFileStatus on s3a://bucket_account_b/path/file.parquet: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden;
I have tried using credentials in the spark configs:
.config("fs.s3a.access.key", key_bucket_account_b)
.config("fs.s3a.secret.key", secret_bucket_account_b)
I have also tried creating a bucket policy that allows the EMR AWS account, but I still got the same error.
{
  "Version": "2012-10-17",
  "Id": "Policy1543283",
  "Statement": [
    {
      "Sid": "Stmt1412820423",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::emr_aws_account:root"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket_account_b"
    }
  ]
}
What else could I try?
I have set up an S3 backend for Terraform state following this excellent answer by Austin Davis. I followed the suggestion by Matt Lavin to add a policy encrypting the bucket.
Unfortunately, with that bucket policy in place, terraform state list now throws:
Failed to load state: AccessDenied: Access Denied status code: 403, request id: XXXXXXXXXXXXXXXX, host id: XXXX...
I suspect I'm missing something on the Terraform side: either passing or configuring an option to encrypt the communication, or an additional policy entry that allows reading the encrypted state.
This is the policy added to the tf-state bucket:
{
  "Version": "2012-10-17",
  "Id": "RequireEncryption",
  "Statement": [
    {
      "Sid": "RequireEncryptedTransport",
      "Effect": "Deny",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": "*"
    },
    {
      "Sid": "RequireEncryptedStorage",
      "Effect": "Deny",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      },
      "Principal": "*"
    }
  ]
}
I would start by removing that bucket policy, and just enable the newer default bucket encryption setting on the S3 bucket. If you still get access denied after doing that, then the IAM role you are using when you run Terraform is missing some permissions.
I use the CloudTrail bucket to run Athena queries and I keep getting this error:
Your query has the following error(s):
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: A57070510EFFB74B; S3 Extended Request ID: v56qfenqDD8d5oXUlkgfExqShUqlxwRwTQHR1S0PmHpp7WH+cz0x8D2pPLPkRoGz2o428hmOV1U=), S3 Extended Request ID: v56qfenqDD8d5oXUlkgfExqShUqlxwRwTQHR1S0PmHpp7WH+cz0x8D2pPLPkRoGz2o428hmOV1U= (Path: s3://cf-account-foundation-cloudtrailstack-trailbucket-707522222211/AWSLogs/707522222211/CloudTrail/ca-central-1/2019/01/11/707522222211_CloudTrail_ca-central-1_20190111T0015Z_XE4JGGZLQTNS334S.json.gz)
This query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 56a188d5-9a10-4c30-a701-42c243c154c6.
The query is:
SELECT * FROM "default"."cloudtrail_table_logs" limit 10;
This is the S3 bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck20150319",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudtrail.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::cf-account-foundation-cloudtrailstack-trailbucket-707522222211"
    },
    {
      "Sid": "AWSCloudTrailWrite20150319",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudtrail.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::cf-account-foundation-cloudtrailstack-trailbucket-707522222211/AWSLogs/707522222211/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}
The S3 bucket is in region eu-central-1 (Frankfurt), the same as the Athena table I am querying.
My IAM user has administrator permissions.
I get the same error when I manually try to open a file in this bucket:
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>5D49DF767D01F32C</RequestId><HostId>9Vd/MvDy5/AJYExs6BXoZbuMxxjxTCfFqzaMTQDwyrgyVZpdL+AgDihiZu3k17PWEYOJ19I8sbQ=</HostId></Error>
I don't know what is going on here. One more detail: the bucket has SSE-KMS encryption, but that alone should not prevent queries against it.
I get the same error even when I make the bucket public.
Does anyone have a clue?
I have read/write/admin access to an S3 bucket I created. I can create objects in there and delete them as expected.
Other folders exist in the bucket that were transferred there from another AWS account. I can't download any items from these folders.
When I click on the files there is info stating "Server side encryption: Access denied". When I attempt to remove this encryption it fails with the message:
Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 93A26842904FFB2D; S3 Extended Request ID: OGQfxPPcd6OonP/CrCqfCIRQlMmsc8DwmeA4tygTGuEq18RbIx/psLiOfEdZHWbItpsI+M1yksQ=)
I'm confused as to what the issue is. I am the root user/owner of the bucket and would have thought I would be able to change the permissions/encryption of this material.
Thanks
You must ensure that you, and not the other AWS accounts that upload to it, remain the owner of the objects in the S3 bucket.
Example S3 bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allowNewDataToBeUploaded",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::$THE_EXTERNAL_ACCOUNT_NUMBER:root"
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::$THE_BUCKET_NAME/*"
    },
    {
      "Sid": "ensureThatWeHaveOwnershipOfAllDataUploaded",
      "Effect": "Deny",
      "Principal": {
        "AWS": "arn:aws:iam::$THE_EXTERNAL_ACCOUNT_NUMBER:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::$THE_BUCKET_NAME/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}
The external account must also use the x-amz-acl header in their request:
ObjectMetadata metaData = new ObjectMetadata();
metaData.setContentLength(byteArrayLength);
// Grants the bucket owner full control over the uploaded object
metaData.setHeader("x-amz-acl", "bucket-owner-full-control");
s3Client.putObject(new PutObjectRequest(bucketNameAndFolder, fileKey, fileContentAsInputStream, metaData));
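As a variant (a sketch against the same AWS SDK for Java v1 used above, reusing the variable names from the snippet), the canned ACL can also be set directly on the request instead of through the raw header:

import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

ObjectMetadata metaData = new ObjectMetadata();
metaData.setContentLength(byteArrayLength);

// Equivalent to sending the x-amz-acl: bucket-owner-full-control header
PutObjectRequest request = new PutObjectRequest(bucketNameAndFolder, fileKey, fileContentAsInputStream, metaData)
        .withCannedAcl(CannedAccessControlList.BucketOwnerFullControl);
s3Client.putObject(request);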
Additional reading:
https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
AWS S3 Server side encryption Access denied error
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-access/
https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object.html
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUTacl.html
This is an interesting problem. I've seen this before when the KMS key required to decrypt the files isn't available or accessible. You can try moving the KMS key from the old account to the new account, or making the old account's key accessible to the new account.
https://aws.amazon.com/blogs/security/share-custom-encryption-keys-more-securely-between-accounts-by-using-aws-key-management-service/