Exception when trying to connect to AWS Athena using JAVA API - amazon-web-services

I'm trying to execute a query in AWS Athena using the Java API:
public class AthenaClientFactory
{
    String accessKey = "access";
    String secretKey = "secret";
    BasicAWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);

    private final AmazonAthenaClientBuilder builder = AmazonAthenaClientBuilder.standard()
            .withRegion(Regions.US_WEST_1)
            .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
            .withClientConfiguration(new ClientConfiguration().withClientExecutionTimeout(10));

    public AmazonAthena createClient()
    {
        return builder.build();
    }
}
private static String submitAthenaQuery(AmazonAthena client) {
    QueryExecutionContext queryExecutionContext = new QueryExecutionContext().withDatabase("my_db");
    ResultConfiguration resultConfiguration = new ResultConfiguration().withOutputLocation("my_bucket");
    StartQueryExecutionRequest startQueryExecutionRequest = new StartQueryExecutionRequest()
            .withQueryString("select * from my_db limit 3;")
            .withQueryExecutionContext(queryExecutionContext)
            .withResultConfiguration(resultConfiguration);
    StartQueryExecutionResult startQueryExecutionResult = client.startQueryExecution(startQueryExecutionRequest);
    return startQueryExecutionResult.getQueryExecutionId();
}

public void run() throws InterruptedException {
    AthenaClientFactory factory = new AthenaClientFactory();
    AmazonAthena client = factory.createClient();
    String queryExecutionId = submitAthenaQuery(client);
}
But the startQueryExecution call throws an exception.
The exception is:
Client execution did not complete before the specified timeout configuration.
Has anyone encountered something similar?

The problem was in withClientExecutionTimeout(10). The client execution timeout is specified in milliseconds, so this gave the entire call only 10 ms to complete. Increasing this value to 5000 solved the issue.
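For reference, a minimal sketch of the corrected builder, reusing the names from the question (5000 ms is the value from the answer; any sufficiently generous limit works):

private final AmazonAthenaClientBuilder builder = AmazonAthenaClientBuilder.standard()
        .withRegion(Regions.US_WEST_1)
        .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
        // The timeout is in milliseconds: allow up to 5 seconds for the whole call.
        .withClientConfiguration(new ClientConfiguration().withClientExecutionTimeout(5000));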

Related

How to setTimeout for JsonStreamWrite api

I have a Java Spring Boot application which uses JsonStreamWrite of the Google BigQuery Storage Write API to write data to BigQuery.
I want to time out the stream write if it takes more than a minute to insert into BigQuery.
Here is my sample code:
public void createWriteStream(String table, JSONArray jsonArr) throws IOException, Descriptors.DescriptorValidationException, InterruptedException {
    BigQueryWriteClient bqClient = BigQueryWriteClient.create();
    WriteStream stream = WriteStream.newBuilder().setType(WriteStream.Type.COMMITTED).build();
    TableName tableName = TableName.of("ProjectId", "DataSet", table);
    CreateWriteStreamRequest createWriteStreamRequest =
            CreateWriteStreamRequest.newBuilder()
                    .setParent(tableName.toString())
                    .setWriteStream(stream)
                    .build();
    WriteStream writeStream = bqClient.createWriteStream(createWriteStreamRequest);
    JsonStreamWriter jsonStreamWriter = JsonStreamWriter
            .newBuilder(writeStream.getName(), writeStream.getTableSchema())
            .build();
    jsonStreamWriter.append(jsonArr);
}
Does BigQuery provide any such configuration to time out the insert?
Try this code:
public void createWriteStream(String table, JSONArray jsonArr) throws IOException, Descriptors.DescriptorValidationException, InterruptedException {
    BigQueryWriteSettings.Builder bigQueryWriteSettingsBuilder = BigQueryWriteSettings.newBuilder();
    bigQueryWriteSettingsBuilder
            .createWriteStreamSettings()
            .setRetrySettings(
                    bigQueryWriteSettingsBuilder
                            .createWriteStreamSettings()
                            .getRetrySettings()
                            .toBuilder()
                            .setTotalTimeout(Duration.ofMinutes(1))
                            .build());
    BigQueryWriteSettings bigQueryWriteSettings = bigQueryWriteSettingsBuilder.build();
    BigQueryWriteClient bqClient = BigQueryWriteClient.create(bigQueryWriteSettings);
    WriteStream stream = WriteStream.newBuilder().setType(WriteStream.Type.COMMITTED).build();
    TableName tableName = TableName.of("ProjectId", "DataSet", table);
    CreateWriteStreamRequest createWriteStreamRequest =
            CreateWriteStreamRequest.newBuilder()
                    .setParent(tableName.toString())
                    .setWriteStream(stream)
                    .build();
    WriteStream writeStream = bqClient.createWriteStream(createWriteStreamRequest);
    JsonStreamWriter jsonStreamWriter = JsonStreamWriter
            .newBuilder(writeStream.getName(), writeStream.getTableSchema())
            .build();
    jsonStreamWriter.append(jsonArr);
}
I referred to the documentation on BigQueryWriteSettings for this implementation.
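Note that setTotalTimeout above bounds the CreateWriteStream call and its retries rather than the append itself. If the goal is to cap the append too, one option (a sketch, not a BigQuery configuration: it simply bounds the client-side wait on the com.google.api.core.ApiFuture that append returns) is:

ApiFuture<AppendRowsResponse> future = jsonStreamWriter.append(jsonArr);
try {
    // ApiFuture extends java.util.concurrent.Future, so the wait can be bounded.
    AppendRowsResponse response = future.get(1, TimeUnit.MINUTES);
} catch (TimeoutException e) {
    future.cancel(true); // give up on this append after one minute
} catch (ExecutionException | InterruptedException e) {
    throw new IOException("append failed", e);
}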

DynamoDB client with auto refresh credentials

I'm creating a DynamoDB client using IAM credentials obtained via STS AssumeRole.
@Provides
public DynamoDbEnhancedClient ddbClientProvider() {
    final AWSSecurityTokenServiceClientBuilder stsClientBuilder = AWSSecurityTokenServiceClientBuilder.standard()
            .withClientConfiguration(new ClientConfiguration());
    final AssumeRoleRequest assumeRoleRequest = new AssumeRoleRequest().withRoleSessionName("some.session.name");
    assumeRoleRequest.setRoleArn("arnRole");
    final AssumeRoleResult assumeRoleResult = stsClientBuilder.build().assumeRole(assumeRoleRequest);
    final Credentials creds = assumeRoleResult.getCredentials();
    final AwsSessionCredentials sessionCredentials = AwsSessionCredentials.create(creds.getAccessKeyId(),
            creds.getSecretAccessKey(), creds.getSessionToken());
    final AwsCredentialsProviderChain credsProvider = AwsCredentialsProviderChain.builder()
            .credentialsProviders(StaticCredentialsProvider.create(sessionCredentials))
            .build();
    final DynamoDbClient ddbClient = DynamoDbClient.builder().region(Region.US_EAST_1)
            .credentialsProvider(credsProvider).build();
    final DynamoDbEnhancedClient ddbEnhancedClient =
            DynamoDbEnhancedClient.builder().dynamoDbClient(ddbClient).build();
    return ddbEnhancedClient;
}
The main Lambda handler looks like this:
public class LambdaMainHandler {
    DynamoDbEnhancedClient ddbClient;

    @Inject
    public LambdaMainHandler(final DynamoDbEnhancedClient client) {
        this.ddbClient = client;
    }

    public LambdaResponse processRequest(final LambdaRequest request) {
        QueryResponse queryResponse = ddbClient.query("...");
        return LambdaResponse.builder().setContent(queryResponse.getByteBuffer()).build();
    }
}
I'm using the DDB client in the LambdaMainHandler constructor.
Since this is running in Lambda behind API Gateway, how do I make sure the credentials are refreshed when they expire while the handler is executing?
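One possible direction, sketched under the assumption that the whole stack moves to the AWS SDK for Java v2 (the snippet above mixes the v1 STS client with v2 DynamoDB types): v2 provides StsAssumeRoleCredentialsProvider, which re-runs the AssumeRole call automatically as the session credentials near expiry, so the provider is built once and every later request signs with fresh credentials. The role ARN and session name below are the placeholders from the question.

// v2 STS client used by the auto-refreshing provider.
StsClient stsClient = StsClient.builder().region(Region.US_EAST_1).build();

AssumeRoleRequest assumeRoleRequest = AssumeRoleRequest.builder()
        .roleArn("arnRole")
        .roleSessionName("some.session.name")
        .build();

// Re-assumes the role and swaps in fresh session credentials before expiry.
AwsCredentialsProvider credsProvider = StsAssumeRoleCredentialsProvider.builder()
        .stsClient(stsClient)
        .refreshRequest(assumeRoleRequest)
        .build();

DynamoDbClient ddbClient = DynamoDbClient.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(credsProvider)
        .build();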

How to connect different Regional S3 Bucket from a Spring Boot Application?

I have a Spring Boot application with a POST end-point that accepts 2 types of files. Based on the file category, I need to write them to S3 buckets in different regions. Example: a Category 1 file should be written to Frankfurt (eu-central-1) and a Category 2 file to Ohio (us-east-2). Spring Boot accepts a static region (cloud.aws.region.static=eu-central-1) through property configuration, and the connection is established at startup, so the AmazonS3 client bean is already created with a connection to Frankfurt.
I need to containerize this entire setup and deploy it in a Kubernetes pod.
What is the recommendation for establishing connections and writing objects to different regional buckets? How should I implement this? I'm looking for a dynamic region-finding solution rather than a statically created bean per region.
Below is a working piece of code that connects to the Frankfurt bucket and PUTs the object.
@Service
public class S3Service {

    @Autowired
    private AmazonS3 amazonS3Client;

    public void putObject(MultipartFile multipartFile) {
        ObjectMetadata objectMetaData = new ObjectMetadata();
        objectMetaData.setContentType(multipartFile.getContentType());
        objectMetaData.setContentLength(multipartFile.getSize());
        try {
            PutObjectRequest putObjectRequest = new PutObjectRequest("example-bucket", multipartFile.getOriginalFilename(), multipartFile.getInputStream(), objectMetaData);
            this.amazonS3Client.putObject(putObjectRequest);
        } catch (IOException e) {
            /* Handle Exception */
        }
    }
}
Updated Code (20/08/2021)
@Component
public class AmazonS3ConnectionFactory {

    private static final Logger LOGGER = LoggerFactory.getLogger(AmazonS3ConnectionFactory.class);

    @Value("${example.aws.s3.regions}")
    private String[] regions;

    @Autowired
    private DefaultListableBeanFactory beanFactory;

    @Autowired
    private AWSCredentialsProvider credentialProvider;

    @PostConstruct
    public void init() {
        for (String region : this.regions) {
            String amazonS3BeanName = region + "_" + "amazonS3";
            if (!this.beanFactory.containsBean(amazonS3BeanName)) {
                AmazonS3ClientBuilder builder = AmazonS3ClientBuilder.standard().withPathStyleAccessEnabled(true)
                        .withCredentials(this.credentialProvider).withRegion(region).withChunkedEncodingDisabled(true);
                AmazonS3 awsS3 = builder.build();
                this.beanFactory.registerSingleton(amazonS3BeanName, awsS3);
                LOGGER.info("Bean " + amazonS3BeanName + " did not exist. Created and registered it.");
            }
        }
    }

    /**
     * Returns {@link AmazonS3} for a region. Uses the default {@link AWSCredentialsProvider}.
     */
    public AmazonS3 getConnection(String region) {
        String amazonS3BeanName = region + "_" + "amazonS3";
        return this.beanFactory.getBean(amazonS3BeanName, AmazonS3.class);
    }
}
My service layer will call getConnection() to get the AmazonS3 object for the right region and operate on it, as sketched below.
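A minimal sketch of that service-layer call, assuming the AmazonS3ConnectionFactory above; the service name and the example-bucket value are illustrative only:

@Service
public class MultiRegionS3Service {

    @Autowired
    private AmazonS3ConnectionFactory connectionFactory;

    public void putObject(String region, MultipartFile multipartFile) throws IOException {
        // Pick the client that was registered for this region at startup.
        AmazonS3 amazonS3 = connectionFactory.getConnection(region);
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentType(multipartFile.getContentType());
        metadata.setContentLength(multipartFile.getSize());
        amazonS3.putObject(new PutObjectRequest("example-bucket", multipartFile.getOriginalFilename(),
                multipartFile.getInputStream(), metadata));
    }
}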
The only option that I am aware of is to create a different S3Client with S3ClientBuilder for each region. You would need to register them as Spring beans with different names so that you can later autowire them.
Update (19/08/2021)
The following should work (sorry for the Kotlin code but it is faster to write):
A class that may hold your configuration for each region:
class AmazonS3Properties(val accessKeyId: String,
                         val secretAccessKey: String,
                         val region: String,
                         val bucket: String)
Configuration for S3 that creates the two S3Clients and stores the bucket name for each region (needed later):
@Configuration
class AmazonS3Configuration(private val s3Properties: Map<String, AmazonS3Properties>) {

    lateinit var buckets: Map<String, String>

    @PostConstruct
    fun init() {
        buckets = s3Properties.mapValues { it.value.bucket }
    }

    @Bean(name = ["regionA"])
    fun regionA(): S3Client {
        val regionAProperties = s3Properties.getValue("region-a")
        val awsCredentials = AwsBasicCredentials.create(regionAProperties.accessKeyId, regionAProperties.secretAccessKey)
        return S3Client.builder().region(Region.of(regionAProperties.region)).credentialsProvider { awsCredentials }.build()
    }

    @Bean(name = ["regionB"])
    fun regionB(): S3Client {
        val regionBProperties = s3Properties.getValue("region-b")
        val awsCredentials = AwsBasicCredentials.create(regionBProperties.accessKeyId, regionBProperties.secretAccessKey)
        return S3Client.builder().region(Region.of(regionBProperties.region)).credentialsProvider { awsCredentials }.build()
    }
}
A service that targets one of the regions (region A):
@Service
class RegionAS3Service(private val amazonS3Configuration: AmazonS3Configuration,
                       @field:Qualifier("regionA") private val amazonS3Client: S3Client) {

    fun save(region: String, byteArrayOutputStream: ByteArrayOutputStream) {
        val inputStream = ByteArrayInputStream(byteArrayOutputStream.toByteArray())
        val contentLength = byteArrayOutputStream.size().toLong()
        amazonS3Client.putObject(
                PutObjectRequest.builder().bucket(amazonS3Configuration.buckets[region]).key("whatever-key").build(),
                RequestBody.fromInputStream(inputStream, contentLength))
    }
}

Cannot access S3 through Java Code. [But can through AWS CLI]

I cannot access S3 through Java code, but I can through the AWS CLI.
I am using the AWS SDK credentials classes against MinIO.
// import statements
public class S3Application {

    private static final AWSCredentials credentials;
    private static String bucketName;

    static {
        // put your access key and secret key here
        credentials = new BasicAWSCredentials(
                "Q3AM3UQ867SPQQA43P2F",
                "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
        );
    }

    public static void main(String[] args) throws IOException {
        // set up the client
        AmazonS3 s3Client = AmazonS3ClientBuilder
                .standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://play.min.io:9000", "us-east-1"))
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .build();
        AWSS3Service awsService = new AWSS3Service(s3Client);
    }
}
This is the log for the above-mentioned code:
Exception in thread "main" com.amazonaws.SdkClientException: Unable to execute HTTP request: Connection reset
...
Caused by: java.net.SocketException: Connection reset
...
... 13 more
Process finished with exit code 1
You might have to set path-style access to true. With the default virtual-hosted-style addressing, the SDK puts the bucket name into the host name (bucket.play.min.io), which a MinIO endpoint typically does not resolve; path-style keeps the bucket in the request path instead.
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#withPathStyleAccessEnabled-java.lang.Boolean-
Code like this might work:
// import statements
public class S3Application {

    private static final AWSCredentials credentials;
    private static String bucketName;

    static {
        // put your access key and secret key here
        credentials = new BasicAWSCredentials(
                "Q3AM3UQ867SPQQA43P2F",
                "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
        );
    }

    public static void main(String[] args) throws IOException {
        // set up the client
        AmazonS3 s3Client = AmazonS3ClientBuilder
                .standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://play.min.io:9000", "us-east-1"))
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withPathStyleAccessEnabled(true)
                .build();
        AWSS3Service awsService = new AWSS3Service(s3Client);
    }
}
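As a quick smoke test that the endpoint and credentials work, you could list the buckets after building the client (listBuckets is a plain SDK call; the only assumption is that the playground credentials are allowed to list them):

// Print every bucket visible to these credentials to confirm connectivity.
for (Bucket bucket : s3Client.listBuckets()) {
    System.out.println(bucket.getName());
}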

Amazon Elasticsearch service 403-forbidden error

I am having trouble fetching results from my Amazon Elasticsearch cluster using the AWS Java SDK and IAM user credentials. When the PATH string is "/" I can fetch the result correctly, but with a different path, e.g. "/private-search", I get a 403 Forbidden error. Even for the path that has public access I get a 403 Forbidden error for this IAM user, although it works if I remove the "signer.sign(requestToSign, credentials);" line in the performSigningSteps method (for the public resource only).
My policy in AWS gives this IAM user access to everything in my Elasticsearch service. Also, what can I do to avoid hard-coding the access key and secret key in the source code?
private static final String SERVICE_NAME = "es";
private static final String REGION = "region-name";
private static final String HOST = "host-name";
private static final String ENDPOINT_ROOT = "http://" + HOST;
private static final String PATH = "/private-search";
private static final String ENDPOINT = ENDPOINT_ROOT + PATH;
private static String accessKey = "IAmUserAccesskey";
private static String secretKey = "IAmUserSecretkey";
public static void main(String[] args) {
    // Generate the request
    Request<?> request = generateRequest();
    // Perform Signature Version 4 signing
    performSigningSteps(request);
    // Send the request to the server
    sendRequest(request);
}

private static Request<?> generateRequest() {
    Request<?> request = new DefaultRequest<Void>(SERVICE_NAME);
    request.setContent(new ByteArrayInputStream("".getBytes()));
    request.setEndpoint(URI.create(ENDPOINT));
    request.setHttpMethod(HttpMethodName.GET);
    return request;
}

private static void performSigningSteps(Request<?> requestToSign) {
    AWS4Signer signer = new AWS4Signer();
    signer.setServiceName(requestToSign.getServiceName());
    signer.setRegionName(REGION);
    AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
    signer.sign(requestToSign, credentials);
}

private static void sendRequest(Request<?> request) {
    ExecutionContext context = new ExecutionContext();
    ClientConfiguration clientConfiguration = new ClientConfiguration();
    AmazonHttpClient client = new AmazonHttpClient(clientConfiguration);
    MyHttpResponseHandler<Void> responseHandler = new MyHttpResponseHandler<Void>();
    MyErrorHandler errorHandler = new MyErrorHandler();
    Void response = client.execute(request, responseHandler, errorHandler, context);
}
public static class MyHttpResponseHandler<T> implements HttpResponseHandler<AmazonWebServiceResponse<T>> {

    @Override
    public AmazonWebServiceResponse<T> handle(com.amazonaws.http.HttpResponse response) throws Exception {
        InputStream responseStream = response.getContent();
        String responseString = convertStreamToString(responseStream);
        System.out.println(responseString);
        AmazonWebServiceResponse<T> awsResponse = new AmazonWebServiceResponse<T>();
        return awsResponse;
    }

    @Override
    public boolean needsConnectionLeftOpen() {
        return false;
    }
}

public static class MyErrorHandler implements HttpResponseHandler<AmazonServiceException> {

    @Override
    public AmazonServiceException handle(com.amazonaws.http.HttpResponse response) throws Exception {
        System.out.println("In exception handler!");
        AmazonServiceException ase = new AmazonServiceException("exception.");
        ase.setStatusCode(response.getStatusCode());
        ase.setErrorCode(response.getStatusText());
        return ase;
    }

    @Override
    public boolean needsConnectionLeftOpen() {
        return false;
    }
}
public static String convertStreamToString(InputStream is) throws IOException {
    // To convert the InputStream to a String we use the
    // Reader.read(char[] buffer) method. We iterate until the
    // Reader returns -1, which means there is no more data to
    // read. We use the StringWriter class to produce the string.
    if (is != null) {
        Writer writer = new StringWriter();
        char[] buffer = new char[1024];
        try {
            Reader reader = new BufferedReader(new InputStreamReader(is, "UTF-8"));
            int n;
            while ((n = reader.read(buffer)) != -1) {
                writer.write(buffer, 0, n);
            }
        } finally {
            is.close();
        }
        return writer.toString();
    }
    return "";
}
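On avoiding hard-coded keys: a minimal sketch using the SDK's DefaultAWSCredentialsProviderChain, which resolves credentials from environment variables, JVM system properties, the shared ~/.aws/credentials file, or an attached instance/container role, so nothing secret has to live in the source:

// Resolves credentials from env vars, system properties, the shared
// credentials file, or instance-profile/container credentials, in that order.
private static final AWSCredentialsProvider CREDENTIALS_PROVIDER =
        DefaultAWSCredentialsProviderChain.getInstance();

private static void performSigningSteps(Request<?> requestToSign) {
    AWS4Signer signer = new AWS4Signer();
    signer.setServiceName(requestToSign.getServiceName());
    signer.setRegionName(REGION);
    signer.sign(requestToSign, CREDENTIALS_PROVIDER.getCredentials());
}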