I'm using the code suggested in the Amazon documentation for uploading files to Amazon S3 buckets. The code runs on some machines, but on others it never gets past the build() line.
Here is the code:
private static AWSBucketManager instance = null;
private final AmazonS3 s3;
private String clientRegion = Settings.getSettingValue("AWS_REGION");
private String secretKey = Settings.getSettingValue("AWS_SECRETKEY");
private String accessKey = Settings.getSettingValue("AWS_ACCESSKEY");

private AWSBucketManager()
{
    LoggingService.writeToLog("Login to aws bucket with basic creds", LogModule.Gateway, LogLevel.Info);
    BasicAWSCredentials creds = new BasicAWSCredentials(accessKey, secretKey);
    LoggingService.writeToLog("Created basic creds for aws bucket", LogModule.Gateway, LogLevel.Info);
    s3 = AmazonS3ClientBuilder.standard()
            .withRegion(clientRegion)
            .withCredentials(new AWSStaticCredentialsProvider(creds))
            .build();
    LoggingService.writeToLog("Logged in successfully to aws bucket with basic creds", LogModule.Gateway, LogLevel.Info);
}
public static AWSBucketManager getInstance()
{
    if (instance == null)
    {
        instance = new AWSBucketManager();
    }
    return instance;
}
Any idea what is going wrong, or how I can debug it with logs?
The problem occurred because my code is obfuscated with ProGuard.
Adding the following rules solved the issue:
-keep class org.apache.commons.logging.** { *; }
-keepattributes Signature,Annotation
-keep class com.amazonaws.** { *; }
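For the logging part of the question, a small sketch that wraps the build() call so whatever it throws is written to the log before being rethrown (reusing the LoggingService call from the question):

try {
    s3 = AmazonS3ClientBuilder.standard()
            .withRegion(clientRegion)
            .withCredentials(new AWSStaticCredentialsProvider(creds))
            .build();
} catch (Throwable t) {
    // Catch Throwable so Errors such as NoClassDefFoundError (typical when a class was stripped) are logged too.
    LoggingService.writeToLog("S3 client build failed: " + t, LogModule.Gateway, LogLevel.Info);
    throw t;
}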
I was unable to find any documentation on how to empty an Amazon S3 bucket programmatically in Java. Any help would be appreciated.
To delete Amazon S3 objects using Java, you can use the AWS SDK for Java v2. To empty a bucket, first get a list of all objects in the bucket using this code:
public static void listBucketObjects(S3Client s3, String bucketName) {
    try {
        ListObjectsRequest listObjects = ListObjectsRequest
                .builder()
                .bucket(bucketName)
                .build();

        ListObjectsResponse res = s3.listObjects(listObjects);
        List<S3Object> objects = res.contents();

        for (S3Object myValue : objects) {
            System.out.print("\n The name of the key is " + myValue.key());
            System.out.print("\n The object is " + calKb(myValue.size()) + " KBs"); // calKb is a helper that converts bytes to KB
            System.out.print("\n The owner is " + myValue.owner());
        }
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
For each object, you can delete it using this code:
public static void deleteBucketObjects(S3Client s3, String bucketName, String objectName) {
    ArrayList<ObjectIdentifier> toDelete = new ArrayList<>();
    toDelete.add(ObjectIdentifier.builder().key(objectName).build());

    try {
        DeleteObjectsRequest dor = DeleteObjectsRequest.builder()
                .bucket(bucketName)
                .delete(Delete.builder().objects(toDelete).build())
                .build();
        s3.deleteObjects(dor);
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
    System.out.println("Done!");
}
You can find these examples and many others in the AWS GitHub repository here:
Amazon S3 Java code examples
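Putting the two together, here is a rough sketch of emptying a bucket with the same SDK for Java v2 types used above. Note that listObjects returns at most 1,000 keys per call, so for larger buckets you would repeat this until the bucket is empty:

public static void emptyBucket(S3Client s3, String bucketName) {
    try {
        // List the objects currently in the bucket (up to 1,000 per call).
        ListObjectsResponse res = s3.listObjects(
                ListObjectsRequest.builder().bucket(bucketName).build());

        // Collect one ObjectIdentifier per key.
        List<ObjectIdentifier> toDelete = new ArrayList<>();
        for (S3Object object : res.contents()) {
            toDelete.add(ObjectIdentifier.builder().key(object.key()).build());
        }

        if (!toDelete.isEmpty()) {
            // Delete them all in a single DeleteObjects request.
            s3.deleteObjects(DeleteObjectsRequest.builder()
                    .bucket(bucketName)
                    .delete(Delete.builder().objects(toDelete).build())
                    .build());
        }
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
    }
}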
I have a Spring Boot application with a POST endpoint that accepts two types of files. Based on the file category, I need to write them to S3 buckets in different regions. For example, a category 1 file should be written to the Frankfurt (eu-central-1) bucket and a category 2 file to the Ohio (us-east-2) bucket. Spring Boot accepts a static region (cloud.aws.region.static=eu-central-1) through property configuration, and the connection is established at startup, so the AmazonS3 client bean is already created with a connection to Frankfurt.
I need to containerize this entire setup and deploy it in a Kubernetes pod.
What is the recommended way to establish connections and write objects to buckets in different regions? How should I implement this? I'm looking for a way to resolve the region dynamically rather than a statically created bean per region.
Below is a working piece of code that connects to the Frankfurt bucket and PUTs the object.
@Service
public class S3Service {

    @Autowired
    private AmazonS3 amazonS3Client;

    public void putObject(MultipartFile multipartFile) {
        ObjectMetadata objectMetaData = new ObjectMetadata();
        objectMetaData.setContentType(multipartFile.getContentType());
        objectMetaData.setContentLength(multipartFile.getSize());
        try {
            PutObjectRequest putObjectRequest = new PutObjectRequest("example-bucket", multipartFile.getOriginalFilename(), multipartFile.getInputStream(), objectMetaData);
            this.amazonS3Client.putObject(putObjectRequest);
        } catch (IOException e) {
            /* Handle Exception */
        }
    }
}
Updated Code (20/08/2021)
@Component
public class AmazoneS3ConnectionFactory {

    private static final Logger LOGGER = LoggerFactory.getLogger(AmazoneS3ConnectionFactory.class);

    @Value("${example.aws.s3.regions}")
    private String[] regions;

    @Autowired
    private DefaultListableBeanFactory beanFactory;

    @Autowired
    private AWSCredentialsProvider credentialProvider;

    @PostConstruct
    public void init() {
        for (String region : this.regions) {
            String amazonS3BeanName = region + "_" + "amazonS3";
            if (!this.beanFactory.containsBean(amazonS3BeanName)) {
                AmazonS3ClientBuilder builder = AmazonS3ClientBuilder.standard().withPathStyleAccessEnabled(true)
                        .withCredentials(this.credentialProvider).withRegion(region).withChunkedEncodingDisabled(true);
                AmazonS3 awsS3 = builder.build();
                this.beanFactory.registerSingleton(amazonS3BeanName, awsS3);
                LOGGER.info("Bean " + amazonS3BeanName + " did not exist. Created and registered it.");
            }
        }
    }

    /**
     * Returns the {@link AmazonS3} client for a region. Uses the default {@link AWSCredentialsProvider}.
     */
    public AmazonS3 getConnection(String region) {
        String amazonS3BeanName = region + "_" + "amazonS3";
        return this.beanFactory.getBean(amazonS3BeanName, AmazonS3.class);
    }
}
My service layer will call getConnection() and get the AmazonS3 object to operate on.
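For illustration, this is roughly what that service-layer call could look like, reusing the putObject code from above with the factory; the class name and the way region/bucket are resolved from the file category are placeholders:

@Service
public class MultiRegionS3Service {

    @Autowired
    private AmazoneS3ConnectionFactory connectionFactory;

    // region and bucket are assumed to be derived from the file category by the caller.
    public void putObject(String region, String bucket, MultipartFile multipartFile) {
        ObjectMetadata objectMetaData = new ObjectMetadata();
        objectMetaData.setContentType(multipartFile.getContentType());
        objectMetaData.setContentLength(multipartFile.getSize());
        try {
            AmazonS3 client = connectionFactory.getConnection(region);
            client.putObject(new PutObjectRequest(bucket,
                    multipartFile.getOriginalFilename(),
                    multipartFile.getInputStream(),
                    objectMetaData));
        } catch (IOException e) {
            /* Handle Exception */
        }
    }
}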
The only option I am aware of is to create a different S3Client with S3ClientBuilder, one for each region. You would need to register them as Spring beans with different names so that you can autowire them later.
Update (19/08/2021)
The following should work (sorry for the Kotlin code but it is faster to write):
Class that may contain your configuration for each region.
class AmazonS3Properties(val accessKeyId: String,
                         val secretAccessKey: String,
                         val region: String,
                         val bucket: String)
Configuration for S3 that will create two S3Clients and store the bucket for each region (needed later):
@Configuration
class AmazonS3Configuration(private val s3Properties: Map<String, AmazonS3Properties>) {

    lateinit var buckets: Map<String, String>

    @PostConstruct
    fun init() {
        // Keep the bucket name per region key for later lookups.
        buckets = s3Properties.mapValues { it.value.bucket }
    }

    @Bean(name = ["regionA"])
    fun regionA(): S3Client {
        val regionAProperties = s3Properties.getValue("region-a")
        val awsCredentials = AwsBasicCredentials.create(regionAProperties.accessKeyId, regionAProperties.secretAccessKey)
        return S3Client.builder().region(Region.of(regionAProperties.region)).credentialsProvider { awsCredentials }.build()
    }

    @Bean(name = ["regionB"])
    fun regionB(): S3Client {
        val regionBProperties = s3Properties.getValue("region-b")
        val awsCredentials = AwsBasicCredentials.create(regionBProperties.accessKeyId, regionBProperties.secretAccessKey)
        return S3Client.builder().region(Region.of(regionBProperties.region)).credentialsProvider { awsCredentials }.build()
    }
}
Service that will target one of the regions (Region A)
@Service
class RegionAS3Service(private val amazonS3Configuration: AmazonS3Configuration,
                       @field:Qualifier("regionA") private val amazonS3Client: S3Client) {

    fun save(region: String, byteArrayOutputStream: ByteArrayOutputStream) {
        val inputStream = ByteArrayInputStream(byteArrayOutputStream.toByteArray())
        val contentLength = byteArrayOutputStream.size().toLong()
        amazonS3Client.putObject(
            PutObjectRequest.builder().bucket(amazonS3Configuration.buckets[region]).key("whatever-key").build(),
            RequestBody.fromInputStream(inputStream, contentLength))
    }
}
I am using ASP.NET Core and the AWSSDK.S3 NuGet package.
I am able to upload a file by providing the accessKeyID, secretKey, bucketName, and region, like this:
var credentials = new BasicAWSCredentials(accessKeyID, secretKey);
using (var client = new AmazonS3Client(credentials, RegionEndpoint.USEast1))
{
    var request = new PutObjectRequest
    {
        AutoCloseStream = true,
        BucketName = bucketName,
        InputStream = storageStream,
        Key = fileName
    };
    // upload the file
    await client.PutObjectAsync(request);
}
But now I am given only a URL to upload the file:
11.11.11.111:/aa-bb-cc-dd-useast1
How do I upload a file through this URL? I am new to AWS, and I would be grateful for some help.
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class UploadFileMPUHighLevelAPITest
    {
        private const string bucketName = "*** provide bucket name ***";
        private const string filePath = "*** provide the full path name of the file to upload ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            UploadFileAsync().Wait();
        }

        private static async Task UploadFileAsync()
        {
            try
            {
                var fileTransferUtility = new TransferUtility(s3Client);
                // Option 1. Upload a file. The file name is used as the object key name.
                await fileTransferUtility.UploadAsync(filePath, bucketName);
                Console.WriteLine("Upload 1 completed");
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
    }
}
https://docs.aws.amazon.com/AmazonS3/latest/dev/HLuploadFileDotNet.html
You can use the provided access point in place of the bucket name.
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/S3/TPutObjectRequest.html
I cannot access S3 through Java code, but I can through the AWS CLI.
I am using the AWS SDK with credentials for MinIO.
// import statements
public class S3Application {
    private static final AWSCredentials credentials;
    private static String bucketName;

    static {
        // put your accesskey and secretkey here
        credentials = new BasicAWSCredentials(
                "Q3AM3UQ867SPQQA43P2F",
                "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
        );
    }

    public static void main(String[] args) throws IOException {
        // set up the client
        AmazonS3 s3Client = AmazonS3ClientBuilder
                .standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://play.min.io:9000", "us-east-1"))
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .build();
        AWSS3Service awsService = new AWSS3Service(s3Client);
    }
}
This is the log for the code above:
Exception in thread "main" com.amazonaws.SdkClientException: Unable to execute HTTP request: Connection reset
...
Caused by: java.net.SocketException: Connection reset
...
... 13 more
Process finished with exit code 1
You might have to set path-style access to true.
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#withPathStyleAccessEnabled-java.lang.Boolean-
Code like this might work:
// import statements
public class S3Application {
    private static final AWSCredentials credentials;
    private static String bucketName;

    static {
        // put your accesskey and secretkey here
        credentials = new BasicAWSCredentials(
                "Q3AM3UQ867SPQQA43P2F",
                "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
        );
    }

    public static void main(String[] args) throws IOException {
        // set up the client
        AmazonS3 s3Client = AmazonS3ClientBuilder
                .standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://play.min.io:9000", "us-east-1"))
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withPathStyleAccessEnabled(true)
                .build();
        AWSS3Service awsService = new AWSS3Service(s3Client);
    }
}
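Once the client builds, a quick way to confirm that path-style access against play.min.io actually works is to list the buckets before handing the client to AWSS3Service, for example:

// Simple connectivity check: prints the buckets visible to these credentials.
for (Bucket bucket : s3Client.listBuckets()) {
    System.out.println(bucket.getName());
}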
I'm trying to execute a query in AWS Athena using the Java API:
public class AthenaClientFactory
{
    String accessKey = "access";
    String secretKey = "secret";
    BasicAWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);

    private final AmazonAthenaClientBuilder builder = AmazonAthenaClientBuilder.standard()
            .withRegion(Regions.US_WEST_1)
            .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
            .withClientConfiguration(new ClientConfiguration().withClientExecutionTimeout(10));

    public AmazonAthena createClient()
    {
        return builder.build();
    }
}
private static String submitAthenaQuery(AmazonAthena client) {
    QueryExecutionContext queryExecutionContext = new QueryExecutionContext().withDatabase("my_db");
    ResultConfiguration resultConfiguration = new ResultConfiguration().withOutputLocation("my_bucket");

    StartQueryExecutionRequest startQueryExecutionRequest = new StartQueryExecutionRequest()
            .withQueryString("select * from my_db limit 3;")
            .withQueryExecutionContext(queryExecutionContext)
            .withResultConfiguration(resultConfiguration);

    StartQueryExecutionResult startQueryExecutionResult = client.startQueryExecution(startQueryExecutionRequest);
    return startQueryExecutionResult.getQueryExecutionId();
}
public void run() throws InterruptedException {
    AthenaClientFactory factory = new AthenaClientFactory();
    AmazonAthena client = factory.createClient();
    String queryExecutionId = submitAthenaQuery(client);
}
But the call to startQueryExecution throws an exception. The exception is:
Client execution did not complete before the specified timeout configuration.
Has anyone encountered something similar?
The problem was withClientExecutionTimeout(10).
Increasing this value to 5000 solved the issue.
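For reference, the builder from the question with the larger client-execution timeout (the SDK default for this setting is 0, which disables the client-side execution timeout entirely):

private final AmazonAthenaClientBuilder builder = AmazonAthenaClientBuilder.standard()
        .withRegion(Regions.US_WEST_1)
        .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
        // 10 ms was far too short for Athena; 5000 ms gives the request time to complete.
        .withClientConfiguration(new ClientConfiguration().withClientExecutionTimeout(5000));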