Springboot Communication with S3 bucket - amazon-web-services

I have a problem I haven't been able to resolve for some hours, so I've decided to post it in the hope that someone can help.
Basically, I'm trying to connect my Spring Boot application to an S3 bucket I created in the AWS console and store some images I supply.
Here is my Java code:
@Service
public class AmazonClient {

    private AmazonS3 s3client;

    @Value("${amazonProperties.endpointUrl}")
    private String endpointUrl;
    @Value("${amazonProperties.bucketName}")
    private String bucketName;
    @Value("${amazonProperties.accessKey}")
    private String accessKey;
    @Value("${amazonProperties.secretKey}")
    private String secretKey;
    @Value("${amazonProperties.region}")
    private String region;

    @PostConstruct
    private void initializeAmazon() {
        System.out.println(this.accessKey + '\n' + this.secretKey + '\n' + Regions.fromName(this.region));
        BasicAWSCredentials credentials = new BasicAWSCredentials(this.accessKey, this.secretKey);
        AmazonS3 amazonS3Client = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.fromName(region))
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .build();
        this.s3client = amazonS3Client;
        System.out.println(this.s3client);
    }

    private void uploadFileTos3bucket(String fileName, File file) {
        s3client.putObject(new PutObjectRequest(bucketName, fileName, file)
                .withCannedAcl(CannedAccessControlList.PublicRead));
    }

    public String uploadFile(MultipartFile multipartFile) {
        String fileUrl = "";
        try {
            File file = convertMultiPartToFile(multipartFile);
            String fileName = generateFileName(multipartFile);
            fileUrl = endpointUrl + "/" + bucketName + "/" + fileName;
            uploadFileTos3bucket(fileName, file);
            file.delete();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return fileUrl;
    }
}
Getting com.amazonaws.services.s3.model.AmazonS3Exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: TDH3Q44GGPGGY2DD; S3 Extended Request ID: oQOOA9bu3bhoHNudJEOFkto1aXEjxKVfblnFWAJ2eJmFTG6mnKSHycHeQpbLP4kYITG9pQxNx9o=; Proxy: null), S3 Extended Request ID: oQOOA9bu3bhoHNudJEOFkto1aXEjxKVfblnFWAJ2eJmFTG6mnKSHycHeQpbLP4kYITG9pQxNx9o=
I get this error when I call the upload method through a Postman POST request intercepted by the controller.
The thing is, I can't create an IAM user because I'm working on a restricted $100 AWS education account provided by my school. The S3FullAccess permission is granted to a predefined role assigned to the current user. Consequently, the AWS CLI command aws s3 ls lists the bucket fine with the same credential configuration; however, I can't put any object into the bucket.
It's driving me crazy because everything seems to work on the Spring Boot side: the println shows that the access key, secret key, and region are correctly fetched from my application.yml:
amazonProperties:
  endpointUrl: https://s3.amazonaws.com
  accessKey: XXXXXX
  secretKey: XXXXXX
  bucketName: image-bucket-cpe
  region: us-east-1
If someone can help me, I will be forever grateful.
Thanks in advance
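One thing worth checking (an assumption, since the keys are redacted): a restricted account that only exposes a role usually hands out temporary STS credentials, which consist of an access key, a secret key, and a session token. BasicAWSCredentials has no field for the session token, and S3 then rejects the key with exactly this InvalidAccessKeyId error even though the same credentials work for the CLI. AWS access key IDs encode their type in the first four letters, which makes this easy to spot; AccessKeyKind below is a hypothetical helper name, not part of the AWS SDK:

```java
// Sketch: "AKIA" prefixes mark long-term IAM user keys (fine with
// BasicAWSCredentials); "ASIA" prefixes mark temporary STS keys, which
// additionally require a session token (BasicSessionCredentials in the
// v1 SDK, or letting DefaultAWSCredentialsProviderChain read the same
// profile the CLI uses).
public class AccessKeyKind {

    public static boolean needsSessionToken(String accessKeyId) {
        return accessKeyId != null && accessKeyId.startsWith("ASIA");
    }

    public static void main(String[] args) {
        System.out.println(needsSessionToken("ASIAIOSFODNN7EXAMPLE")); // true
        System.out.println(needsSessionToken("AKIAIOSFODNN7EXAMPLE")); // false
    }
}
```

If the accessKey in application.yml starts with ASIA, switching to new BasicSessionCredentials(accessKey, secretKey, sessionToken), or simply to new DefaultAWSCredentialsProviderChain(), may resolve the 403.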

Related

Access Denied when listing object using amazon s3 client

From an AWS Lambda I want to list the objects inside an S3 bucket. When testing the function locally I get an access-denied error.
public async Task<string> FunctionHandler(string input, ILambdaContext context)
{
    var secretKey = "***";
    var uid = "***";
    var bucketName = "my-bucket-name";
    AmazonS3Client s3Client = new AmazonS3Client(uid, secretKey);
    ListObjectsRequest listObjectsRequest = new ListObjectsRequest();
    listObjectsRequest.BucketName = bucketName;
    var listObjectsResponse = await s3Client.ListObjectsAsync(listObjectsRequest);
    // exception is thrown
    ...
}
Amazon.S3.AmazonS3Exception: Access Denied at
Amazon.Runtime.Internal.HttpErrorResponseExceptionHandler.HandleExceptionStream(IRequestContext
requestContext, IWebResponseData httpErrorResponse,
HttpErrorResponseException exception, Stream responseStream) at
Amazon.Runtime.Internal.HttpErrorResponseExceptionHandler.HandleExceptionAsync(IExecutionContext
executionContext, HttpErrorResponseException exception) ....
The bucket I'm using in this example, "my-bucket-name", is publicly accessible and it has
Any idea?
First of all, IAM policies are the preferred way to control access to S3 buckets.
For S3 permissions it is always very important to distinguish between bucket-level actions and object-level actions, and also who is calling the action. In your code I can see that you use ListObjects, which is a bucket-level action, so that is OK.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html
What did catch my eye is the following:
var secretKey = "***";
var uid = "***";
var bucketName = "my-bucket-name";
AmazonS3Client s3Client = new AmazonS3Client(uid, secretKey);
That means that you are accessing the bucket with explicit user credentials rather than through an AWS role. And even in your screenshot you can see that "Authenticated users group (anyone with an AWS account)" does not have any permissions assigned.
If you already have a role, I would suggest granting the read-bucket permissions to that particular role (user) via an IAM policy. Adding a read ACL for your AWS user should help as well.
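A minimal sketch of such a policy (the Sid is an illustrative placeholder and the bucket name is taken from the question; s3:ListBucket is the IAM action behind the ListObjects call, and it applies to the bucket ARN, not to objects inside it):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListMyBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket-name"
    }
  ]
}
```

To also download objects you would add a second statement allowing s3:GetObject on "arn:aws:s3:::my-bucket-name/*".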

Connect to S3 bucket without giving region name

I need to download objects from an S3 bucket, and I have the following information to access it: an access key, a secret key, and a bucket endpoint. There is no region name.
I was using the MinIO package to access the bucket and was able to do so using the code below:
public void getS3BucketObject() throws IOException, NoSuchAlgorithmException,
        InvalidKeyException, ServerException, InsufficientDataException, InternalException,
        InvalidResponseException, XmlParserException, ErrorResponseException {
    // creating minioClient to access S3 bucket
    minioClient = MinioClient.builder()
            .endpoint(s3BucketEndpoint)
            .credentials(s3BucketAccessKey, s3BucketSecretKey)
            .build();
    // check for bucket existence
    boolean found = minioClient.bucketExists(BucketExistsArgs.builder().bucket(s3BucketName).build());
    if (!found) {
        System.out.println("Bucket doesn't exist");
    } else {
        System.out.println("Bucket exists");
    }
}
But I need to use the AWS SDK instead of MinIO, and I'm not sure how to proceed, since I don't have region information and I don't know how to pass the endpoint in the configuration settings, though I tried the code below.
final BasicAWSCredentials credentials = new BasicAWSCredentials(s3BucketAccessKey, s3BucketSecretKey);
final AmazonS3 s3client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .build(); // note: build() was missing; without it this doesn't compile
boolean found = s3client.doesBucketExistV2(s3BucketName);
if (found) {
    System.out.println("The bucket is available");
} else {
    System.out.println("The bucket doesn't exist");
}
Have you tried using the withRegion(AWS_S3_REGION) option while creating the s3client with the builder?
AmazonS3 s3client = AmazonS3ClientBuilder.standard()
        .withRegion("us-east-1")
        .build();
Ref. https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3ClientBuilder.html
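If all you have is the endpoint, you can often recover the region from the endpoint host name itself, since regional S3 endpoints embed it (s3.&lt;region&gt;.amazonaws.com, or the legacy s3-&lt;region&gt;.amazonaws.com form). A sketch under that assumption; EndpointRegion is a hypothetical helper, and the pattern only covers the common public endpoint formats:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: extract the region token from a public S3 endpoint URL.
// Handles "s3.us-west-2.amazonaws.com" and legacy "s3-us-west-2.amazonaws.com";
// the bare global endpoint "s3.amazonaws.com" is treated as us-east-1.
public class EndpointRegion {

    private static final Pattern REGIONAL =
            Pattern.compile("s3[.-]([a-z]{2}-[a-z]+-\\d)\\.amazonaws\\.com");

    public static String regionOf(String endpoint) {
        Matcher m = REGIONAL.matcher(endpoint);
        if (m.find()) {
            return m.group(1);
        }
        if (endpoint.contains("s3.amazonaws.com")) {
            return "us-east-1"; // global endpoint resolves to us-east-1
        }
        throw new IllegalArgumentException("Unrecognised S3 endpoint: " + endpoint);
    }

    public static void main(String[] args) {
        System.out.println(regionOf("https://s3.eu-west-1.amazonaws.com"));  // eu-west-1
        System.out.println(regionOf("https://s3-ap-south-1.amazonaws.com")); // ap-south-1
        System.out.println(regionOf("https://s3.amazonaws.com"));            // us-east-1
    }
}
```

With the region in hand you can also pass the endpoint explicitly via the builder's withEndpointConfiguration(new EndpointConfiguration(endpoint, region)), which is the SDK's supported way to use a custom endpoint.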

How to use DefaultAWSCredentialsProviderChain() in java code to fetch credentials from instance profile and allow access to s3 bucket

I am working on a requirement where I want to connect to an S3 bucket from a Spring Boot application.
When connecting from my local environment I set loadCredentials(true), which uses Amazon STS to fetch temporary credentials via a role and allows access to the S3 bucket.
When deploying to the QA/PRD environment I set loadCredentials(false), which uses the DefaultAWSCredentialsProviderChain class to fetch credentials from the instance profile (a role assigned to the EC2 instance) and allows access to the S3 bucket. My code is:
@Configuration
public class AmazonS3Config {

    static String clientRegion = "ap-south-1";
    static String roleARN = "arn:aws:iam::*************:role/awss3acess";
    static String roleSessionName = "bucket_storage_audit";
    String bucketName = "testbucket";

    // Depending on the environment, set to true (local) or false (qa and prd)
    private static AWSCredentialsProvider loadCredentials(boolean isLocal) {
        final AWSCredentialsProvider credentialsProvider;
        if (isLocal) {
            AWSSecurityTokenService stsClient = AWSSecurityTokenServiceAsyncClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();
            AssumeRoleRequest assumeRoleRequest = new AssumeRoleRequest()
                    .withDurationSeconds(3600)
                    .withRoleArn(roleARN)
                    .withRoleSessionName(roleSessionName);
            AssumeRoleResult assumeRoleResult = stsClient.assumeRole(assumeRoleRequest);
            Credentials creds = assumeRoleResult.getCredentials();
            credentialsProvider = new AWSStaticCredentialsProvider(
                    new BasicSessionCredentials(creds.getAccessKeyId(),
                            creds.getSecretAccessKey(),
                            creds.getSessionToken()));
        } else {
            System.out.println("inside default");
            credentialsProvider = new DefaultAWSCredentialsProviderChain();
        }
        return credentialsProvider;
    }

    // Amazon s3client bean returns an instance of s3client
    @Bean
    public AmazonS3 s3client() {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.fromName(clientRegion))
                .withCredentials(loadCredentials(false))
                .build();
        return s3Client;
    }
}
My question: since the instance-profile credentials rotate every 12 hours, my application will fail after 12 hours.
What can I do in my code to keep this from happening?
You can use ProfileCredentialsProvider directly instead of DefaultAWSCredentialsProviderChain, as there is no need in your case to chain credential providers.
As for your question: AWSCredentialsProvider has a refresh() method that re-reads the configuration. When an authentication exception occurs while using the S3 client, you can call refresh() first and then retry.
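The refresh-and-retry idea can be sketched generically. RefreshingRetry is a hypothetical helper, not an SDK class; the call and the refresh action are passed in so the sketch stays runnable without the AWS SDK, but in real code `refresh` would be credentialsProvider::refresh and `call` the S3 operation:

```java
import java.util.function.Supplier;

// Sketch of "refresh credentials, then retry once" around a client call.
// On the first failure it runs the refresh action (e.g. re-reading rotated
// instance-profile credentials) and retries; a second failure propagates.
public class RefreshingRetry {

    public static <T> T callWithRefresh(Supplier<T> call, Runnable refresh) {
        try {
            return call.get();
        } catch (RuntimeException firstFailure) {
            refresh.run();     // e.g. credentialsProvider.refresh()
            return call.get(); // one retry; rethrows if it still fails
        }
    }

    public static void main(String[] args) {
        int[] attempts = {0};
        String result = callWithRefresh(
                () -> {
                    if (attempts[0]++ == 0) {
                        throw new RuntimeException("expired credentials");
                    }
                    return "ok";
                },
                () -> System.out.println("refreshing credentials"));
        System.out.println(result); // ok
    }
}
```

In practice the v1 SDK's instance-profile provider refreshes its credentials automatically in the background, so an explicit retry like this is mostly a safety net for the window right around rotation.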

PreSignedUrl could not be authenticated. (Service: AmazonRDS; Status Code: 400; Error Code - while copy RDS Snapshot to different region

I have a Lambda function which copies an RDS snapshot from eu-west-3 to eu-central-1.
Here is my code:
public class CopySnapshot implements RequestHandler<String, String> {

    public String handleRequest(String input, Context context) {
        AmazonRDS client = AmazonRDSClientBuilder.standard().build();
        DescribeDBSnapshotsRequest request = new DescribeDBSnapshotsRequest()
                .withDBInstanceIdentifier(System.getenv("DB_IDENTIFIER"))
                .withSnapshotType(System.getenv("SNAPSHOT_TYPE"))
                .withIncludeShared(true)
                .withIncludePublic(false);
        DescribeDBSnapshotsResult response = client.describeDBSnapshots(request);
        System.out.println("Found the snapshot " + response);

        // Get the latest snapshot
        List<DBSnapshot> list = response.getDBSnapshots();
        if (list.size() > 0) {
            DBSnapshot d = list.get(list.size() - 1);
            String snapshotArn = d.getDBSnapshotArn();
            System.out.println(snapshotArn);
            AmazonRDS client_dr_region = AmazonRDSClientBuilder.standard()
                    .withRegion(Regions.EU_CENTRAL_1)
                    .build();
            SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yy-MM-dd-HH-mm");
            CopyDBSnapshotRequest copyDbSnapshotRequest = new CopyDBSnapshotRequest()
                    .withSourceDBSnapshotIdentifier(snapshotArn)
                    .withSourceRegion("eu-west-3")
                    .withKmsKeyId(System.getenv("OTHER_KMS_KEY_ID"))
                    .withTargetDBSnapshotIdentifier("dr-snapshot-copy" + "-" + simpleDateFormat.format(new Date()));
            DBSnapshot response_snapshot_copy = client_dr_region
                    .copyDBSnapshot(copyDbSnapshotRequest)
                    .withKmsKeyId(System.getenv("OTHER_KMS_KEY_ID"))
                    .withSourceRegion("eu-west-3");
            System.out.println("Snapshot request submitted successfully " + response_snapshot_copy);
            return "Snapshot copy request successfully submitted";
        } else {
            return "No Snapshot found";
        }
    }
}
While executing the code it shows below error:
{
"errorMessage": "PreSignedUrl could not be authenticated. (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 7f794176-a21f-448e-acb6-8a5832925cab)",
"errorType": "com.amazonaws.services.rds.model.AmazonRDSException",
"stackTrace": [
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1726)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1381)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1127)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:784)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:745)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)",
"com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)",
"com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)",
"com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)",
"com.amazonaws.services.rds.AmazonRDSClient.doInvoke(AmazonRDSClient.java:9286)",
"com.amazonaws.services.rds.AmazonRDSClient.invoke(AmazonRDSClient.java:9253)",
"com.amazonaws.services.rds.AmazonRDSClient.invoke(AmazonRDSClient.java:9242)",
"com.amazonaws.services.rds.AmazonRDSClient.executeCopyDBSnapshot(AmazonRDSClient.java:1262)",
"com.amazonaws.services.rds.AmazonRDSClient.copyDBSnapshot(AmazonRDSClient.java:1234)",
"fr.aws.rds.CopySnapshot.handleRequest(CopySnapshot.java:59)",
"fr.aws.rds.CopySnapshot.handleRequest(CopySnapshot.java:19)"
]
}
From an environment variable I am fetching the KMS key ID for eu-central-1, which is the destination region for the snapshot copy.
The Lambda has full permissions on KMS (for trial purposes), but it does not work.
I added an inline policy to the specific Lambda role with describe and create-grant on the key (full ARN mentioned), but it still shows the same error.
The key is enabled, so I'm not sure why I get this error.
Many thanks for your valuable feedback.
I resolved this by adding one more attribute to the request: the source region.
CopyDBSnapshotRequest copyDbSnapshotRequest = new CopyDBSnapshotRequest()
        .withSourceDBSnapshotIdentifier(snapshotArn)
        .withSourceRegion(System.getenv("SOURCE_REGION"))
        .withKmsKeyId(System.getenv("OTHER_KMS_KEY_ID"))
        .withTargetDBSnapshotIdentifier("dr-snapshot-copy" + "-" + simpleDateFormat.format(new Date()));
DBSnapshot response_snapshot_copy = client_dr_region
        .copyDBSnapshot(copyDbSnapshotRequest)
        .withKmsKeyId(System.getenv("OTHER_KMS_KEY_ID"))
        .withSourceRegion(System.getenv("SOURCE_REGION"));
and voila, it worked

Location constraint exception while trying to access bucket from aws ec2 client

I am trying to create a bucket from a Java web application. My Tomcat is configured on an AWS EC2 instance. It gives the following error when it tries to connect to AWS S3:
com.amazonaws.services.s3.model.AmazonS3Exception:
The unspecified location constraint is incompatible for the region specific endpoint this request was sent to.
(Service: Amazon S3; Status Code: 400;..).
This is the code sample:
public class FileOperationsUtil {

    private final BasicAWSCredentials awsCreds = new BasicAWSCredentials("xyz", "zyz");
    private final AmazonS3 s3Client = new AmazonS3Client(awsCreds);
    private final String bucketName = "grex-prod";
    // public static final Region ap-south-1;

    public void uploadFile(InputStream fileInputStream,
            String fileUploadLocation, String fileName) throws IOException {
        s3Client.setRegion(Region.getRegion(Regions.AP_SOUTH_1));
        // Region apsouth1 = Region.getRegion(Regions.ap-south-1);
        // s3Client.setRegion(apsouth1);
        // s3Client.setRegion(Region.getRegion(Regions.ap-south-1));
        // s3Client.create_bucket(bucket, CreateBucketConfiguration={'LocationConstraint': 'ap-northeast-2'})
        s3Client.createBucket(bucketName);
        File fileToUpload = new File(fileUploadLocation);
        fileToUpload.mkdirs();

        // Full file path
        String fullFilePath = (fileUploadLocation + fileName);
        ObjectMetadata meta = new ObjectMetadata();
        // meta.setContentLength(contents.length);
        meta.setContentType("image/png");

        // Upload files to a specific AWS S3 bucket
        s3Client.putObject(new PutObjectRequest("grex-prod", fullFilePath,
                fileInputStream, meta)
                .withCannedAcl(CannedAccessControlList.Private));
    }

    public void deleteFolder(String oldFullFilePath) {
        ObjectListing objects = s3Client.listObjects(bucketName, oldFullFilePath);
        for (S3ObjectSummary objectSummary : objects.getObjectSummaries()) {
            s3Client.deleteObject(bucketName, objectSummary.getKey());
        }
        s3Client.deleteObject(bucketName, oldFullFilePath);
    }
}
In your example above:
s3Client.setRegion(Region.getRegion(Regions.AP_SOUTH_1));
// Region apsouth1 = Region.getRegion(Regions.ap-south-1); // s3Client.setRegion(apsouth1); // s3Client.setRegion(Region.getRegion(Regions.ap-south-1));
//s3Client.create_bucket(bucket, CreateBucketConfiguration={'LocationConstraint': 'ap-northeast-2'})
Both the region and the LocationConstraint should match. If you want to create the bucket in ap-south-1, then both should be set to that value.
The error you received was due to the two values not matching; in other words, you connected to one region (likely ap-south-1) and then tried to create a bucket intended to exist in another region (ap-northeast-2).
If you exclude the LocationConstraint, the location where the bucket is created is based entirely on the region you are connected to. By using the LocationConstraint you can ensure you are not trying to create the bucket in a region other than the one you intended.
There are some rules when using the LocationConstraint:
Region us-east-1 is compatible with any LocationConstraint value.
With any other region, the region and the LocationConstraint have to be the same.
In short:
With us-east-1 only, you can use any region as the LocationConstraint to create the bucket in the desired region (region='us-east-1', locationConstraint=<desired region>).
With every other region, the region and the LocationConstraint must match (region='us-west-2', locationConstraint='us-west-2'); any other combination will throw "code: IllegalLocationConstraintException".
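The two rules above can be condensed into a small check. BucketLocationRules.isCompatible is a hypothetical helper that mirrors the rules as stated, not an AWS API:

```java
// Sketch of the region / LocationConstraint compatibility rules:
// us-east-1 accepts any constraint; every other region requires an
// exactly matching constraint, otherwise S3 rejects the CreateBucket
// request with IllegalLocationConstraintException.
public class BucketLocationRules {

    public static boolean isCompatible(String clientRegion, String locationConstraint) {
        return "us-east-1".equals(clientRegion)
                || clientRegion.equals(locationConstraint);
    }

    public static void main(String[] args) {
        System.out.println(isCompatible("us-east-1", "ap-south-1"));     // true
        System.out.println(isCompatible("us-west-2", "us-west-2"));      // true
        System.out.println(isCompatible("ap-south-1", "ap-northeast-2")); // false
    }
}
```

Running a check like this before calling createBucket makes the failure mode in the question (connected to ap-south-1, constraint ap-northeast-2) visible before the request ever reaches S3.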