I'm following the AWS documentation on how to connect to Redshift: [generating user credentials][1].
But the get-cluster-credentials API requires a cluster identifier parameter, which I don't have for a serverless endpoint. What ID should I use?
EDIT:
[![enter image description here][2]][2]
This is the screen of a serverless endpoint dashboard. There is no cluster ID.
[1]: https://docs.aws.amazon.com/redshift/latest/mgmt/generating-user-credentials.html
[2]: https://i.stack.imgur.com/VzvIs.png
Look at this newer guide, which covers connecting to Amazon Redshift Serverless: https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-connecting.html
This section answers your question:
Connecting to the serverless endpoint with the Data API
You can also use the Amazon Redshift Data API to connect to the serverless endpoint. Leave off the cluster-identifier parameter in your AWS CLI calls to route your query to the serverless endpoint.
UPDATE
I wanted to test this to make sure that a successful connection can be made. I followed this doc to set up a Serverless instance.
Get started with Amazon Redshift Serverless
I loaded sample data and now have this.
Now I attempted to connect to it using software.amazon.awssdk.services.redshiftdata.RedshiftDataClient.
The Java V2 code:
    try {
        ExecuteStatementRequest statementRequest = ExecuteStatementRequest.builder()
            .database(database)
            .sql(sqlStatement)
            .build();

        ExecuteStatementResponse response = redshiftDataClient.executeStatement(statementRequest);
        return response.id();

    } catch (RedshiftDataException e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }
    return "";
}
Notice there is no cluster id or user. Only a database name (sample_data_dev). The call worked perfectly.
Here is the full code example that successfully queries data from a serverless instance using the AWS SDK for Java V2.
package com.example.redshiftdata;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.redshiftdata.model.*;
import software.amazon.awssdk.services.redshiftdata.RedshiftDataClient;
import software.amazon.awssdk.services.redshiftdata.model.DescribeStatementRequest;
import java.util.List;

/**
 * To run this Java V2 code example, ensure that you have setup your development environment, including your credentials.
 *
 * For information, see this documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class RetrieveDataServerless {

    public static void main(String[] args) {

        final String USAGE = "\n" +
            "Usage:\n" +
            "    RetrieveData <database> <sqlStatement> \n\n" +
            "Where:\n" +
            "    database - the name of the database (for example, sample_data_dev). \n" +
            "    sqlStatement - the sql statement to use. \n";

        String database = "sample_data_dev";
        String sqlStatement = "Select * from tickit.sales";

        Region region = Region.US_WEST_2;
        RedshiftDataClient redshiftDataClient = RedshiftDataClient.builder()
            .region(region)
            .build();

        String id = performSQLStatement(redshiftDataClient, database, sqlStatement);
        System.out.println("The identifier of the statement is " + id);
        checkStatement(redshiftDataClient, id);
        getResults(redshiftDataClient, id);
        redshiftDataClient.close();
    }

    public static void checkStatement(RedshiftDataClient redshiftDataClient, String sqlId) {
        try {
            DescribeStatementRequest statementRequest = DescribeStatementRequest.builder()
                .id(sqlId)
                .build();

            // Wait until the sql statement processing is finished.
            boolean finished = false;
            String status = "";
            while (!finished) {
                DescribeStatementResponse response = redshiftDataClient.describeStatement(statementRequest);
                status = response.statusAsString();
                System.out.println("..." + status);
                if (status.compareTo("FINISHED") == 0) {
                    break;
                }
                Thread.sleep(1000);
            }
            System.out.println("The statement is finished!");

        } catch (RedshiftDataException | InterruptedException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }

    public static String performSQLStatement(RedshiftDataClient redshiftDataClient,
                                             String database,
                                             String sqlStatement) {
        try {
            ExecuteStatementRequest statementRequest = ExecuteStatementRequest.builder()
                .database(database)
                .sql(sqlStatement)
                .build();

            ExecuteStatementResponse response = redshiftDataClient.executeStatement(statementRequest);
            return response.id();

        } catch (RedshiftDataException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
        return "";
    }

    public static void getResults(RedshiftDataClient redshiftDataClient, String statementId) {
        try {
            GetStatementResultRequest resultRequest = GetStatementResultRequest.builder()
                .id(statementId)
                .build();

            GetStatementResultResponse response = redshiftDataClient.getStatementResult(resultRequest);

            // Iterate through the List element where each element is a List object.
            List<List<Field>> dataList = response.records();

            // Print out the records.
            for (List list : dataList) {
                for (Object myField : list) {
                    Field field = (Field) myField;
                    String value = field.stringValue();
                    if (value != null)
                        System.out.println("The value of the field is " + value);
                }
            }

        } catch (RedshiftDataException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
}
Related
I'm trying to trigger a Dataflow job to process a CSV file and then save the data into PostgreSQL.
The pipeline is written in Java.
This is my pipeline code:
public class DataMappingService {

    DataflowPipelineOptions options;
    Pipeline pipeline;
    private String jdbcUrl;
    private String DB_NAME;
    private String DB_PRIVATE_IP;
    private String DB_USERNAME;
    private String DB_PASSWORD;
    private String PROJECT_ID;
    private String SERVICE_ACCOUNT;

    private static final String SQL_INSERT = "INSERT INTO upfit(upfitter_id, model_number, upfit_name, upfit_description, manufacturer, length, width, height, dimension_unit, weight, weight_unit, color, price, stock_number) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)";

    public DataMappingService() {
        DB_NAME = System.getenv("DB_NAME");
        DB_PRIVATE_IP = System.getenv("DB_PRIVATEIP");
        DB_USERNAME = System.getenv("DB_USERNAME");
        DB_PASSWORD = System.getenv("DB_PASSWORD");
        PROJECT_ID = System.getenv("GOOGLE_PROJECT_ID");
        SERVICE_ACCOUNT = System.getenv("SERVICE_ACCOUNT_EMAIL");
        jdbcUrl = "jdbc:postgresql://" + DB_PRIVATE_IP + ":5432/" + DB_NAME;

        System.out.println("jdbcUrl: " + jdbcUrl);
        System.out.println("dbUsername: " + DB_USERNAME);
        System.out.println("dbPassword: " + DB_PASSWORD);
        System.out.println("dbName: " + DB_NAME);
        System.out.println("projectId: " + PROJECT_ID);
        System.out.println("service account: " + SERVICE_ACCOUNT);

        options = PipelineOptionsFactory.as(DataflowPipelineOptions.class);
        options.setRunner(DataflowRunner.class);
        options.setProject(PROJECT_ID);
        options.setServiceAccount(SERVICE_ACCOUNT);
        options.setWorkerRegion("us-east4");
        options.setTempLocation("gs://upfit_data_flow_bucket/temp");
        options.setStagingLocation("gs://upfit_data_flow_bucket/binaries");
        options.setRegion("us-east4");
        options.setSubnetwork("regions/us-east-4/subnetworks/us-east4-public");
        options.setMaxNumWorkers(3);
    }

    public void processData(String gcsFilePath) {
        try {
            pipeline = Pipeline.create(options);
            System.out.println("pipelineOptions: " + pipeline.getOptions());

            pipeline.apply("ReadLines", TextIO.read().from(gcsFilePath))
                .apply("SplitLines", new SplitLines())
                .apply("SplitRecords", new SplitRecord())
                .apply("ReadUpfits", new ReadUpfits())
                .apply("write upfits", JdbcIO.<Upfit>write()
                    .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
                        "org.postgresql.Driver", jdbcUrl)
                        .withUsername(DB_USERNAME)
                        .withPassword(DB_PASSWORD))
                    .withStatement(SQL_INSERT)
                    .withPreparedStatementSetter(new JdbcIO.PreparedStatementSetter<Upfit>() {
                        private static final long serialVersionUID = 1L;

                        @Override
                        public void setParameters(Upfit element, PreparedStatement query) throws SQLException {
                            query.setInt(1, element.getUpfitterId());
                            query.setString(2, element.getModelNumber());
                            query.setString(3, element.getUpfitName());
                            query.setString(4, element.getUpfitDescription());
                            query.setString(5, element.getManufacturer());
                            query.setString(6, element.getLength());
                            query.setString(7, element.getWidth());
                            query.setString(8, element.getHeight());
                            query.setString(9, element.getDimensionsUnit());
                            query.setString(10, element.getWeight());
                            query.setString(11, element.getWeightUnit());
                            query.setString(12, element.getColor());
                            query.setString(13, element.getPrice());
                            query.setInt(14, element.getStockAmmount());
                        }
                    }));

            pipeline.run();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I have created a user-managed service account as the documentation says: https://cloud.google.com/dataflow/docs/concepts/security-and-permissions#specifying_a_user-managed_worker_service_account and I'm providing the service account email in the pipeline options.
The service account has the following roles:
roles/dataflow.worker
roles/storage.admin
iam.serviceAccounts.actAs
roles/dataflow.admin
Service Account Token Creator
When I upload the CSV file, the pipeline is triggered, but I'm getting the following error:
Workflow failed. Causes: Subnetwork https://www.googleapis.com/compute/v1/projects/project-name/regions/us-east-4/subnetworks/us-east4-public is not accessible to Dataflow Service account or does not exist
I know that the subnetwork exists, so I'm assuming it's a permission error. The VPC network is created by my organization, as we're not allowed to create our own.
I want to retrieve the role information for a role name, for example to get the exact ARN identifier.
Somehow the code below is not working. Sadly, there is no error message in CloudWatch.
import software.amazon.awssdk.services.iam.*;
import com.amazonaws.services.identitymanagement.model._
import com.amazonaws.services.identitymanagement.{AmazonIdentityManagementClient, AmazonIdentityManagement, AmazonIdentityManagementClientBuilder}
// ....
val iamClient = AmazonIdentityManagementClient
.builder()
.withRegion("eu-central-1")
.build()
val roleRequest = new GetRoleRequest();
roleRequest.setRoleName("InfrastructureStack-StandardRoleD-HBLE12VPTWQ")
val result = iamClient.getRole(roleRequest) // <-- Nothing happens after this line
println("wont execute this println statement")
Other services like CognitoIdentityProvider are working perfectly fine.
I also tried the builder pattern for the GetRoleRequest and IamClient.
I got this IAM V2 code working fine. As stated in my comment, set up your dev environment to use the AWS SDK for Java V2.
package com.example.iam;

import software.amazon.awssdk.services.iam.model.*;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.iam.IamClient;

public class GetRole {

    public static void main(String[] args) {

        final String USAGE = "\n" +
            "Usage:\n" +
            "    GetRole <roleName> \n\n" +
            "Where:\n" +
            "    roleName - the name of the IAM role to look up. \n\n";

        // if (args.length != 1) {
        //     System.out.println(USAGE);
        //     System.exit(1);
        // }

        String roleName = "DynamoDBAutoscaleRole"; // args[0];
        Region region = Region.AWS_GLOBAL;
        IamClient iam = IamClient.builder()
            .region(region)
            .build();

        getRoleInformation(iam, roleName);
        System.out.println("Done");
        iam.close();
    }

    public static void getRoleInformation(IamClient iam, String roleName) {
        try {
            GetRoleRequest roleRequest = GetRoleRequest.builder()
                .roleName(roleName)
                .build();

            GetRoleResponse response = iam.getRole(roleRequest);
            System.out.println("The ARN of the role is " + response.role().arn());

        } catch (IamException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }
}
In my project there is a need to create a share link for external users who don't have an AWS user. From my research I found a couple of ways to do this:
A bucket policy based on a tag
A Lambda that creates a signed URL every time a user requests the file
The question is: what is the best practice for doing this?
I need the download to be available until the user sharing the file stops it.
Thanks guys for any answers.
Using the AWS SDK, you can use the Amazon S3 presign functionality. You can perform this task in any of the supported programming languages (Java, JS, Python, etc.).
The following code shows how to generate a presigned URL for an object via the Amazon S3 Java V2 API.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.time.Duration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
import software.amazon.awssdk.services.s3.presigner.model.PresignedGetObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.utils.IoUtils;

/**
 * To run this AWS code example, ensure that you have set up your development environment, including your AWS credentials.
 *
 * For information, see this documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class GetObjectPresignedUrl {

    public static void main(String[] args) {

        final String USAGE = "\n" +
            "Usage:\n" +
            "    GetObjectPresignedUrl <bucketName> <keyName> \n\n" +
            "Where:\n" +
            "    bucketName - the Amazon S3 bucket name. \n\n" +
            "    keyName - a key name that represents a text file. \n\n";

        if (args.length != 2) {
            System.out.println(USAGE);
            System.exit(1);
        }

        String bucketName = args[0];
        String keyName = args[1];
        Region region = Region.US_WEST_2;
        S3Presigner presigner = S3Presigner.builder()
            .region(region)
            .build();

        getPresignedUrl(presigner, bucketName, keyName);
        presigner.close();
    }

    public static void getPresignedUrl(S3Presigner presigner, String bucketName, String keyName) {
        try {
            GetObjectRequest getObjectRequest = GetObjectRequest.builder()
                .bucket(bucketName)
                .key(keyName)
                .build();

            GetObjectPresignRequest getObjectPresignRequest = GetObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(10))
                .getObjectRequest(getObjectRequest)
                .build();

            // Generate the presigned request.
            PresignedGetObjectRequest presignedGetObjectRequest =
                presigner.presignGetObject(getObjectPresignRequest);

            // Log the presigned URL.
            System.out.println("Presigned URL: " + presignedGetObjectRequest.url());

            HttpURLConnection connection = (HttpURLConnection) presignedGetObjectRequest.url().openConnection();
            presignedGetObjectRequest.httpRequest().headers().forEach((header, values) -> {
                values.forEach(value -> {
                    connection.addRequestProperty(header, value);
                });
            });

            // Send any request payload that the service needs (not needed when isBrowserExecutable is true).
            if (presignedGetObjectRequest.signedPayload().isPresent()) {
                connection.setDoOutput(true);
                try (InputStream signedPayload = presignedGetObjectRequest.signedPayload().get().asInputStream();
                     OutputStream httpOutputStream = connection.getOutputStream()) {
                    IoUtils.copy(signedPayload, httpOutputStream);
                }
            }

            // Download the result of executing the request.
            try (InputStream content = connection.getInputStream()) {
                System.out.println("Service returned response: ");
                IoUtils.copy(content, System.out);
            }

        } catch (S3Exception e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
I want to fetch all the failed executions and re-trigger them dynamically.
PS: In the Step Functions definition I have a proper retry mechanism; now I want to rerun the failed executions dynamically.
I need to implement it in Java. Please help me with the approach.
Thanks in advance.
You can use the AWS Step Functions API to get a list of executions:
https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/sfn/SfnClient.html#listExecutions-
Then you can get a list of ExecutionListItem objects by calling the executions() method on the ListExecutionsResponse object (returned by the listExecutions method):
https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/sfn/model/ExecutionListItem.html
Using this object, you can do two things (see the sketch after this list):
1 - check the status - https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/sfn/model/ExecutionStatus.html
2 - get the state machine ARN value - https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/sfn/model/ExecutionListItem.html#stateMachineArn--
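Here is a minimal sketch of that listing step, assuming a hypothetical state machine ARN and region: it filters listExecutions to FAILED executions and, for each one, calls describeExecution to recover the original input, which you could then pass to the StartExecution code shown below.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sfn.SfnClient;
import software.amazon.awssdk.services.sfn.model.*;
import java.util.List;

public class ListFailedExecutions {

    public static void main(String[] args) {
        // Hypothetical state machine ARN - replace with your own.
        String stateMachineArn = "arn:aws:states:us-east-1:111122223333:stateMachine:MyStateMachine";

        SfnClient sfnClient = SfnClient.builder()
            .region(Region.US_EAST_1)
            .build();

        try {
            // Ask only for executions whose status is FAILED.
            ListExecutionsRequest listRequest = ListExecutionsRequest.builder()
                .stateMachineArn(stateMachineArn)
                .statusFilter(ExecutionStatus.FAILED)
                .build();

            List<ExecutionListItem> failed = sfnClient.listExecutions(listRequest).executions();
            for (ExecutionListItem item : failed) {
                // Recover the original input so the execution can be started again.
                String input = sfnClient.describeExecution(DescribeExecutionRequest.builder()
                    .executionArn(item.executionArn())
                    .build()).input();

                System.out.println("Failed execution: " + item.executionArn());
                System.out.println("Original input: " + input);
                // Pass item.stateMachineArn() and this input to StartExecution (see below).
            }
        } catch (SfnException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        } finally {
            sfnClient.close();
        }
    }
}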
Using the state machine ARN value, you can execute a state machine with the AWS Step Functions Java API V2:
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sfn.SfnClient;
import software.amazon.awssdk.services.sfn.model.*;
import java.io.FileReader;
import java.io.IOException;
import java.util.UUID;

public class StartExecution {

    public static void main(String[] args) {

        final String USAGE = "\n" +
            "Usage:\n" +
            "    StartExecution <stateMachineArn> <jsonFile>\n\n" +
            "Where:\n" +
            "    stateMachineArn - the ARN of the state machine.\n\n" +
            "    jsonFile - A JSON file that contains the values to pass to the workflow.\n";

        if (args.length != 2) {
            System.out.println(USAGE);
            System.exit(1);
        }

        String stateMachineArn = args[0];
        String jsonFile = args[1];
        Region region = Region.US_EAST_1;
        SfnClient sfnClient = SfnClient.builder()
            .region(region)
            .build();

        String exeArn = startWorkflow(sfnClient, stateMachineArn, jsonFile);
        System.out.println("The execution ARN is " + exeArn);
        sfnClient.close();
    }

    public static String startWorkflow(SfnClient sfnClient, String stateMachineArn, String jsonFile) {

        String json = getJSONString(jsonFile);

        // Specify the name of the execution by using a GUID value.
        UUID uuid = UUID.randomUUID();
        String uuidValue = uuid.toString();
        try {
            StartExecutionRequest executionRequest = StartExecutionRequest.builder()
                .input(json)
                .stateMachineArn(stateMachineArn)
                .name(uuidValue)
                .build();

            StartExecutionResponse response = sfnClient.startExecution(executionRequest);
            return response.executionArn();

        } catch (SfnException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
        return "";
    }

    private static String getJSONString(String path) {
        try {
            JSONParser parser = new JSONParser();
            JSONObject data = (JSONObject) parser.parse(new FileReader(path)); // path to the JSON file.
            String json = data.toJSONString();
            return json;
        } catch (IOException | org.json.simple.parser.ParseException e) {
            e.printStackTrace();
        }
        return "";
    }
}
My purpose is to list all records available in all hosted zones. I created a ListResourceRecordSetsRequest builder object and passed it to the response method to retrieve all the resource records, but for some reason I get an error.
My code, which is wrong and causing the error:
The line that starts with def response = .... is what's giving the error.
ArrayList<ResourceRecordSet> getResourceRecords(HostedZone hostedZone) {
    def request = ListResourceRecordSetsRequest.builder().hostedZoneId(hostedZone.id()).maxItems("1000").build()
    def response = route53Client.listResourceRecordSets(request as Consumer<ListResourceRecordSetsRequest.Builder>)
    return response.resourceRecordSets()
}
Error:
groovy.lang.MissingMethodException: No signature of method: software.amazon.awssdk.services.route53.model.ListResourceRecordSetsRequest.accept() is applicable for argument types: (software.amazon.awssdk.services.route53.model.ListResourceRecordSetsRequest$BuilderImpl) values: [software.amazon.awssdk.services.route53.model.ListResourceRecordSetsRequest$BuilderImpl#56eafaa0]
I appreciate the help in advance!
What AWS SDK are you using - V2? I just tested this with the AWS Java V2 API and it works fine. Pass the built ListResourceRecordSetsRequest directly to listResourceRecordSets (there is an overload that accepts the request object) instead of casting it to Consumer<ListResourceRecordSetsRequest.Builder>.
Full Java V2 example.
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.route53.Route53Client;
import software.amazon.awssdk.services.route53.model.*;
import java.util.List;

public class ListResourceRecordSets {

    public static void main(String[] args) {
        Region region = Region.AWS_GLOBAL;
        Route53Client route53Client = Route53Client.builder()
            .region(region)
            .build();

        listResourceRecord(route53Client);
        route53Client.close();
    }

    public static void listResourceRecord(Route53Client route53Client) {
        try {
            ListResourceRecordSetsRequest request = ListResourceRecordSetsRequest.builder()
                .hostedZoneId("XXXXX5442QZ69T8EEX5SZ")
                .maxItems("12")
                .build();

            ListResourceRecordSetsResponse listResourceRecordSets = route53Client.listResourceRecordSets(request);
            List<ResourceRecordSet> records = listResourceRecordSets.resourceRecordSets();
            for (ResourceRecordSet record : records) {
                System.out.println("The Record name is: " + record.name());
            }
        } catch (Route53Exception e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
}
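Since your goal is to list the records in all hosted zones, here is a minimal sketch under the same setup that first calls listHostedZones() and then lists the record sets of each zone with the same request used above. The page sizes and output format are illustrative, not prescriptive.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.route53.Route53Client;
import software.amazon.awssdk.services.route53.model.*;

public class ListAllResourceRecordSets {

    public static void main(String[] args) {
        Route53Client route53Client = Route53Client.builder()
            .region(Region.AWS_GLOBAL)
            .build();

        try {
            // Enumerate every hosted zone in the account.
            ListHostedZonesResponse zonesResponse = route53Client.listHostedZones();
            for (HostedZone zone : zonesResponse.hostedZones()) {
                System.out.println("Hosted zone: " + zone.name());

                // List the record sets of this zone (same call as in the example above).
                ListResourceRecordSetsRequest request = ListResourceRecordSetsRequest.builder()
                    .hostedZoneId(zone.id())
                    .maxItems("1000")
                    .build();

                ListResourceRecordSetsResponse recordsResponse = route53Client.listResourceRecordSets(request);
                for (ResourceRecordSet record : recordsResponse.resourceRecordSets()) {
                    System.out.println("    " + record.name() + " (" + record.typeAsString() + ")");
                }
            }
        } catch (Route53Exception e) {
            System.err.println(e.getMessage());
            System.exit(1);
        } finally {
            route53Client.close();
        }
    }
}

Both calls are paginated, so for accounts with more zones or records than one page returns, check isTruncated() on the responses (or use the listHostedZonesPaginator() method) and keep fetching.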