We recently migrated from AWS SDK for Java 1.x to 2.x, and the S3 put operation is now failing. The S3 bucket has encryption enabled (using an AWS KMS key).
Below is the code I am using, along with the error it produces:
Server Side Encryption with AWS KMS managed key requires HTTP header x-amz-server-side-encryption : aws:kms (Service: S3, Status Code: 400, Request ID: xxx, Extended Request ID: xxx/rY9ydIzxi3NROPiM=)
Update: I figured it out myself. For anyone who wants to connect to an AWS S3 bucket with a KMS key using AWS SDK 2.x for Java, the code below should work:
Map<String, String> metadata = new HashMap<>();
metadata.put("x-amz-server-side-encryption", "aws:kms");

PutObjectRequest request = PutObjectRequest.builder()
        //.bucketKeyEnabled(true)
        .bucket(bucketName)
        .key(key)
        .metadata(metadata)
        // serverSideEncryption(...) is what adds the required
        // x-amz-server-side-encryption: aws:kms header to the request
        .serverSideEncryption(ServerSideEncryption.AWS_KMS)
        .ssekmsKeyId("arn:aws:kms:xxxxx")
        .build();
// (Optional) write the same data to a local file
File outputFile = new File("filename");
try (PrintWriter pw = new PrintWriter(outputFile)) {
    data.stream().forEach(pw::println);
}

awsS3Client.putObject(request,
        RequestBody.fromBytes(String.join(System.lineSeparator(), data)
                .getBytes(StandardCharsets.UTF_8)));
Related
I'm trying to hit the LocalStack S3 service with the AWS SDK. It works fine with the AWS CLI, but the AWS SDK behaves oddly: it prepends the bucket name to the endpoint URL and then fails with "unable to connect".
[![INTELLIJ debug][1]][1]
The code is as follows:
public void testS3() {
    final String localStackS3URL = "http://localhost:4566";
    final String REGION = "us-east-1";
    final AwsClientBuilder.EndpointConfiguration endpoint =
            new AwsClientBuilder.EndpointConfiguration(localStackS3URL, REGION);
    final AmazonS3 client = AmazonS3ClientBuilder.standard()
            .withEndpointConfiguration(endpoint)
            .build();
    if (!client.doesBucketExistV2("test")) {
        client.createBucket("test");
    }
}
Can anyone help me understand what is wrong here? It works with the AWS CLI, but the AWS SDK is prefixing the bucket name strangely.
[![cmd aws cli][2]][2]
Thanks in advance
[1]: https://i.stack.imgur.com/wMI8D.png
[2]: https://i.stack.imgur.com/L0jLV.png
Try adding the HTTP client parameter while building the S3 client; it worked for me:
httpClient(UrlConnectionHttpClient.builder().build())
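The bucket-name prefixing itself comes from virtual-hosted-style addressing (the SDK resolves `http://test.localhost:4566` instead of `http://localhost:4566/test`). A commonly suggested option for LocalStack is to force path-style access; a sketch against the v1 builder used in the question:

```java
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class LocalStackS3 {
    public static AmazonS3 buildClient() {
        // Force path-style addressing (http://localhost:4566/test) instead of
        // virtual-hosted style (http://test.localhost:4566), which does not
        // resolve locally.
        return AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration("http://localhost:4566", "us-east-1"))
                .withPathStyleAccessEnabled(true)
                .build();
    }
}
```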
I would like to download a video file from AWS S3 using a Cognito identity pool for authentication. I created an identity pool role in AWS and added it to my code. Unfortunately, I get a 403 (Access Denied) error.
My code looks like this:
CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(
        activity,
        "eu-xxxxxx-x:xxxxxxx-xxx-xxxx-xx-xxxx", // identity pool ID
        Regions.EU_XXXXXX_X                     // region
);

s3Client = new AmazonS3Client(credentialsProvider, RegionUtils.getRegion("eu-xxxxxx-x"));

TransferUtility transferUtility = TransferUtility.builder()
        .context(activity)
        .s3Client(s3Client)
        .build();

TransferNetworkLossHandler.getInstance(activity);

TransferObserver downloadObserver = transferUtility.download(
        BUCKETNAME,
        "videos/example.mp4",
        new File(activity.getFilesDir(), "example.mp4"));
Do you have any idea what configuration is required on AWS to make a download possible?
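For reference, the identity pool's authenticated role needs at least `s3:GetObject` on the objects being downloaded. A minimal policy sketch (the bucket name and prefix are placeholders to adapt):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::BUCKETNAME/videos/*"
    }
  ]
}
```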
I am trying to set up an email subscription to an SNS topic in AWS through Spring Boot and publish a message to the topic, but I am facing an issue configuring the AWS access key and secret key. Do I need to create a separate IAM user for accessing SNS, or is it fine to use an existing IAM user with administrator access to the SNS topic? Below is the configuration file I created.
@Configuration
public class AmazonSNSConfiguration {

    @Bean
    @Primary
    public AmazonSNSClient amazonSNSClient() {
        return (AmazonSNSClient) AmazonSNSClientBuilder
                .standard()
                .withRegion(Regions.US_EAST_1)
                .withCredentials(
                        new AWSStaticCredentialsProvider(
                                new BasicAWSCredentials(
                                        "**********",
                                        "***********"
                                )
                        )
                )
                .build();
    }
}
When working with Spring Boot and the Java SNS V2 API, you can create an IAM role that has a policy allowing SNS use.
Also, there is no reason to hard-code the key credentials in the Java code. You can create an SnsClient object like this:
Region region = Region.US_WEST_2;
SnsClient snsClient = SnsClient.builder()
.region(region)
.build();
This code assumes you have set the credentials in a file named credentials located in ~/.aws. See this doc (under the heading "Configure credentials") for more information:
Get started with the AWS SDK for Java 2.x
Here is an AWS developer tutorial that steps you through creating a Spring Boot app that uses SNS to build a pub/sub app:
Creating a Publish/Subscription Spring Boot Application
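Once the client is built this way, publishing to the topic is a single call. A sketch; the topic ARN below is a placeholder:

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;
import software.amazon.awssdk.services.sns.model.PublishResponse;

public class SnsPublishExample {
    public static void main(String[] args) {
        // Credentials come from the default provider chain (env vars,
        // ~/.aws/credentials, or an instance/task role) -- nothing hard-coded.
        try (SnsClient snsClient = SnsClient.builder().region(Region.US_EAST_1).build()) {
            PublishRequest request = PublishRequest.builder()
                    .topicArn("arn:aws:sns:us-east-1:123456789012:MyTopic") // placeholder ARN
                    .message("Hello from Spring Boot")
                    .build();
            PublishResponse response = snsClient.publish(request);
            System.out.println("Message ID: " + response.messageId());
        }
    }
}
```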
Background
I am attempting to upload a file to an AWS S3 bucket from Jenkins, using the steps/closures provided by the AWS Steps plugin. I am using an Access Key ID and Secret Access Key, stored as a username and password, respectively, in the Credentials Manager.
Code
Below is the code I am using in a declarative pipeline script:
sh('echo "test" > someFile')
withAWS(credentials:'AwsS3', region:'us-east-1') {
s3Upload(file:'someFile', bucket:'ec-sis-integration-test', acl:'BucketOwnerFullControl')
}
sh('rm -f someFile')
Here is a screenshot of the credentials as they are stored globally in Credential Manager.
Issue
Whenever I run the pipeline, I get the following error:
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 5N9VEJBY5MDZ2K0W; S3 Extended Request ID: AJmuP635cME8m035nA6rQVltCCJqHDPXsjVk+sLziTyuAiSN23Q1j5RtoQwfHCDXAOexPVVecA4=; Proxy: null), S3 Extended Request ID: AJmuP635cME8m035nA6rQVltCCJqHDPXsjVk+sLziTyuAiSN23Q1j5RtoQwfHCDXAOexPVVecA4=
Does anyone know why this isn't working?
Troubleshooting
I have verified that the Access Key ID and Secret combination works by testing it in a small Java application I wrote. Additionally, I set the ID/secret via Java system properties (through the script console), but still get the same error.
System.setProperty("aws.accessKeyId", "<KEY_ID>")
System.setProperty("aws.secretKey", "<KEY_SECRET>")
I also tried changing the credential type from username/password to AWS credentials, as seen below. It made no difference.
It might be a bucket or object ownership issue. Check whether the credentials you use actually allow you to upload to the bucket ec-sis-integration-test.
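One way to check is to confirm which principal the Jenkins credentials actually resolve to, e.g. with STS GetCallerIdentity. A sketch using the v1 SDK (the same generation the plugin uses), assuming the key pair is visible to the default credentials chain:

```java
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.GetCallerIdentityRequest;
import com.amazonaws.services.securitytoken.model.GetCallerIdentityResult;

public class WhoAmI {
    public static void main(String[] args) {
        AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.defaultClient();
        GetCallerIdentityResult identity = sts.getCallerIdentity(new GetCallerIdentityRequest());
        // The ARN shows exactly which user/role S3 sees, so you can compare it
        // against the bucket policy and the IAM policies attached to it.
        System.out.println("Account: " + identity.getAccount());
        System.out.println("ARN:     " + identity.getArn());
    }
}
```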
I have access only to a specific folder in an S3 bucket.
For the S3 client builder, I was using the following code to upload to the specified folder in the bucket:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(region)
        .build();
(I am running the code from a server which has access to S3, so credentials are not required; I was able to view the buckets and folders from the CLI.)
putObjectRequest = new PutObjectRequest(awsBucketName, fileName, multipartfile.getInputStream(), null);
I even tried giving the bucket name along with the prefix, because I have access only to the specific folder; I was still getting Access Denied (status 403).
So for the S3 client builder, I tried using an endpoint configuration rather than just specifying the region, and got the following error:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(endpointConfiguration)
        .withClientConfiguration(new ClientConfiguration().withProtocol(Protocol.HTTP))
        .build();
com.amazonaws.SdkClientException: Unable to execute HTTP request: null
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1114)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1064)
What should I do, or how should I verify that it correctly maps to the bucket and folder?
I got it working when I used a presigned URL (refer to the AWS presigned URL documentation, which also has example code for Java) to upload to a folder for which you have access (that is, when you don't have access to the whole bucket).
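The presigned-URL approach can be sketched with the v1 SDK used above; the bucket name, object key, and 15-minute expiry below are placeholder assumptions:

```java
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import java.net.URL;
import java.util.Date;

public class PresignedUploadSketch {
    public static URL presignedPutUrl(AmazonS3 s3Client, String bucketName, String objectKey) {
        // The signer only needs s3:PutObject on the key prefix, not
        // bucket-level permissions, for the signed PUT to succeed.
        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest(bucketName, objectKey)
                        .withMethod(HttpMethod.PUT)
                        .withExpiration(new Date(System.currentTimeMillis() + 15 * 60 * 1000)); // 15 minutes
        return s3Client.generatePresignedUrl(request);
    }
}
```

Anyone holding the returned URL can then upload with a plain HTTP PUT, no AWS credentials required.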