I cannot seem to update pre-existing UserAttributes on user "73", and I am not sure whether this behaviour is expected. Here is the code I am running:
Map<String, List<String>> userAttributes = new HashMap<>();
userAttributes.put("Inference", Arrays.asList("NEGATIVE"));
userAttributes.put("Gender", Arrays.asList("M"));
userAttributes.put("ChannelPreference", Arrays.asList("EMAIL"));
userAttributes.put("TwitterHandle", Arrays.asList("Nutter"));
userAttributes.put("Age", Arrays.asList("435"));
EndpointUser endpointUser = new EndpointUser().withUserId("73");
endpointUser.setUserAttributes(userAttributes);
EndpointRequest endpointRequest = new EndpointRequest().withUser(endpointUser);
UpdateEndpointResult updateEndpointResult = pinpoint.updateEndpoint(new UpdateEndpointRequest()
.withEndpointRequest(endpointRequest).withApplicationId("380c3902d4ds47bfb6f9c6749c6dc8bf").withEndpointId("a1fiy2gy+eghmsadj1vqew6+aa"));
System.out.println(updateEndpointResult.getMessageBody());
@David.Webster,
You can update the user attributes of an Amazon Pinpoint endpoint using the Java code snippet below, which I have tested and confirmed to work:
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.pinpoint.AmazonPinpoint;
import com.amazonaws.services.pinpoint.AmazonPinpointClientBuilder;
import com.amazonaws.services.pinpoint.model.EndpointRequest;
import com.amazonaws.services.pinpoint.model.EndpointUser;
import com.amazonaws.services.pinpoint.model.UpdateEndpointRequest;
import com.amazonaws.services.pinpoint.model.UpdateEndpointResult;

public class UpdateEndpointSample {
    public static void main(String[] args) throws IOException {
        // Try to update the endpoint.
        try {
            System.out.println("===============================================");
            System.out.println("Getting Started with Amazon Pinpoint "
                    + "using the AWS SDK for Java...");
            System.out.println("===============================================\n");

            // Initialize the Amazon Pinpoint client.
            AmazonPinpoint pinpointClient = AmazonPinpointClientBuilder.standard()
                    .withRegion(Regions.US_EAST_1).build();

            // Create a new user definition.
            EndpointUser jackchan = new EndpointUser().withUserId("73");

            // Assign custom user attributes.
            jackchan.addUserAttributesEntry("name", Arrays.asList("Jack", "Chan"));
            jackchan.addUserAttributesEntry("Inference", Arrays.asList("NEGATIVE"));
            jackchan.addUserAttributesEntry("ChannelPreference", Arrays.asList("EMAIL"));
            jackchan.addUserAttributesEntry("TwitterHandle", Arrays.asList("Nutter"));
            jackchan.addUserAttributesEntry("gender", Collections.singletonList("M"));
            jackchan.addUserAttributesEntry("age", Collections.singletonList("435"));

            // Add the user definition to the EndpointRequest that is passed to the Amazon Pinpoint client.
            EndpointRequest jackchanIphone = new EndpointRequest()
                    .withUser(jackchan);

            // Update the specified endpoint with Amazon Pinpoint.
            UpdateEndpointResult result = pinpointClient.updateEndpoint(new UpdateEndpointRequest()
                    .withEndpointRequest(jackchanIphone)
                    .withApplicationId("4fd13a407f274f10b4ec06cbc71738bd")
                    .withEndpointId("095A8688-7D79-43CE-BDCE-7DF713332BC3"));

            System.out.format("Update endpoint result: %s\n", result.getMessageBody().getMessage());
        } catch (Exception ex) {
            System.out.println("EndpointUpdate Failed");
            System.err.println("Error message: " + ex.getMessage());
            ex.printStackTrace();
        }
    }
}
Hope this helps
Related
I want to add data into Kinesis using a Spring Boot application and React. I am a complete beginner when it comes to Kinesis, AWS, etc., so a beginner-friendly guide would be appreciated.
To add data records into an Amazon Kinesis data stream from a Spring Boot app, you can use the AWS SDK for Java V2, specifically the Amazon Kinesis Java API. You can use the software.amazon.awssdk.services.kinesis.KinesisClient.
Because you are a beginner, I recommend that you read the AWS SDK for Java V2 Developer Guide to become familiar with how to work with this Java API. See Developer guide - AWS SDK for Java 2.x.
Here is a code example that shows you how to add data records using this service client. The accompanying GitHub repository has the other required classes (such as StockTrade and StockTradeGenerator).
package com.example.kinesis;
//snippet-start:[kinesis.java2.putrecord.import]
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordRequest;
import software.amazon.awssdk.services.kinesis.model.KinesisException;
import software.amazon.awssdk.services.kinesis.model.DescribeStreamRequest;
import software.amazon.awssdk.services.kinesis.model.DescribeStreamResponse;
//snippet-end:[kinesis.java2.putrecord.import]
/**
* Before running this Java V2 code example, set up your development environment, including your credentials.
*
* For more information, see the following documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class StockTradesWriter {
public static void main(String[] args) {
final String usage = "\n" +
"Usage:\n" +
" <streamName>\n\n" +
"Where:\n" +
" streamName - The Amazon Kinesis data stream to which records are written (for example, StockTradeStream)\n\n";
if (args.length != 1) {
System.out.println(usage);
System.exit(1);
}
String streamName = args[0];
Region region = Region.US_EAST_1;
KinesisClient kinesisClient = KinesisClient.builder()
.region(region)
.credentialsProvider(ProfileCredentialsProvider.create())
.build();
// Ensure that the Kinesis Stream is valid.
validateStream(kinesisClient, streamName);
setStockData( kinesisClient, streamName);
kinesisClient.close();
}
// snippet-start:[kinesis.java2.putrecord.main]
public static void setStockData( KinesisClient kinesisClient, String streamName) {
try {
// Repeatedly send stock trades with a 100-millisecond wait in between.
StockTradeGenerator stockTradeGenerator = new StockTradeGenerator();
// Put 50 records for this example.
int index = 50;
for (int x=0; x<index; x++){
StockTrade trade = stockTradeGenerator.getRandomTrade();
sendStockTrade(trade, kinesisClient, streamName);
Thread.sleep(100);
}
} catch (KinesisException | InterruptedException e) {
System.err.println(e.getMessage());
System.exit(1);
}
System.out.println("Done");
}
private static void sendStockTrade(StockTrade trade, KinesisClient kinesisClient,
String streamName) {
byte[] bytes = trade.toJsonAsBytes();
// The bytes could be null if there is an issue with the JSON serialization by the Jackson JSON library.
if (bytes == null) {
System.out.println("Could not get JSON bytes for stock trade");
return;
}
System.out.println("Putting trade: " + trade);
PutRecordRequest request = PutRecordRequest.builder()
.partitionKey(trade.getTickerSymbol()) // Use the ticker symbol as the partition key.
.streamName(streamName)
.data(SdkBytes.fromByteArray(bytes))
.build();
try {
kinesisClient.putRecord(request);
} catch (KinesisException e) {
System.err.println(e.getMessage());
}
}
private static void validateStream(KinesisClient kinesisClient, String streamName) {
try {
DescribeStreamRequest describeStreamRequest = DescribeStreamRequest.builder()
.streamName(streamName)
.build();
DescribeStreamResponse describeStreamResponse = kinesisClient.describeStream(describeStreamRequest);
if(!describeStreamResponse.streamDescription().streamStatus().toString().equals("ACTIVE")) {
System.err.println("Stream " + streamName + " is not active. Please wait a few moments and try again.");
System.exit(1);
}
} catch (KinesisException e) {
System.err.println("Error found while describing the stream " + streamName);
System.err.println(e);
System.exit(1);
}
}
// snippet-end:[kinesis.java2.putrecord.main]
}
I have recently been involved in a project where I have to leverage the QuickSight APIs and update a dashboard programmatically. I can perform all the other actions, but I am unable to update the dashboard from a template. I have tried a couple of different ideas, but all in vain.
Has anyone already worked with the UpdateDashboard API, or can anyone point me to detailed documentation that would help me understand what I am missing?
Thanks.
I got this to work using the AWS QuickSight Java V2 API. To make this work, you need to follow the quick start instructions here:
https://docs.aws.amazon.com/quicksight/latest/user/getting-started.html
You need to get these values:
account - your account number
dashboardId - the dashboard ID value
dataSetArn - the dataset ARN value
analysisArn - the analysis ARN value
Once you go through the above topics, you will have all of these resources and be ready to call UpdateDashboard. Here is a Java example that updates a dashboard.
package com.example.quicksight;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.quicksight.QuickSightClient;
import software.amazon.awssdk.services.quicksight.model.*;
/*
Before running this code example, follow the Getting Started with Data Analysis in Amazon QuickSight located here:
https://docs.aws.amazon.com/quicksight/latest/user/getting-started.html
This code example uses resources that you created by following that topic such as the DataSet Arn value.
*/
public class UpdateDashboard {
public static void main(String[] args) {
final String USAGE = "\n" +
"Usage: UpdateDashboard <account> <dashboardId> <>\n\n" +
"Where:\n" +
" account - the account to use.\n\n" +
" dashboardId - the dashboard id value to use.\n\n" +
" dataSetArn - the ARN of the dataset.\n\n" +
" analysisArn - the ARN of an existing analysis";
String account = "<account id>";
String dashboardId = "<dashboardId>";
String dataSetArn = "<dataSetArn>";
String analysisArn = "<Analysis Arn>";
QuickSightClient qsClient = QuickSightClient.builder()
.region(Region.US_EAST_1)
.build();
try {
DataSetReference dataSetReference = DataSetReference.builder()
.dataSetArn(dataSetArn)
.dataSetPlaceholder("Dataset placeholder2")
.build();
// Get a template ARN to use.
String arn = getTemplateARN(qsClient, account, dataSetArn, analysisArn);
DashboardSourceTemplate sourceTemplate = DashboardSourceTemplate.builder()
.dataSetReferences(dataSetReference)
.arn(arn)
.build();
DashboardSourceEntity sourceEntity = DashboardSourceEntity.builder()
.sourceTemplate(sourceTemplate)
.build();
UpdateDashboardRequest dashboardRequest = UpdateDashboardRequest.builder()
.awsAccountId(account)
.dashboardId(dashboardId)
.name("UpdateTest")
.sourceEntity(sourceEntity)
.themeArn("arn:aws:quicksight::aws:theme/SEASIDE")
.build();
UpdateDashboardResponse response = qsClient.updateDashboard(dashboardRequest);
System.out.println("Dashboard " + response.dashboardId() + " has been updated");
} catch (QuickSightException e) {
System.err.println(e.awsErrorDetails().errorMessage());
System.exit(1);
}
}
private static String getTemplateARN(QuickSightClient qsClient, String account, String dataset, String analysisArn) {
String arn = "";
try {
DataSetReference setReference = DataSetReference.builder()
.dataSetArn(dataset)
.dataSetPlaceholder("Dataset placeholder2")
.build();
TemplateSourceAnalysis templateSourceAnalysis = TemplateSourceAnalysis.builder()
.dataSetReferences(setReference)
.arn(analysisArn)
.build();
TemplateSourceEntity sourceEntity = TemplateSourceEntity.builder()
.sourceAnalysis(templateSourceAnalysis)
.build();
CreateTemplateRequest createTemplateRequest = CreateTemplateRequest.builder()
.awsAccountId(account)
.name("NewTemplate")
.sourceEntity(sourceEntity)
.templateId("a9a277fb-7239-4890-bc7a-8a3e82d67a37") // Specify a GUID value
.build();
CreateTemplateResponse response = qsClient.createTemplate(createTemplateRequest);
arn = response.arn();
} catch (QuickSightException e) {
System.err.println(e.awsErrorDetails().errorMessage());
System.exit(1);
}
return arn;
}
}
I have an S3 bucket xxx. I wrote a Lambda function to access data from the S3 bucket and write those details to an RDS PostgreSQL instance. I can do that with my code. I added a trigger to the Lambda function so that it is invoked when a file lands in S3.
But from my code I can only read the file named 'SampleData.csv'. Consider my code given below:
public class LambdaFunctionHandler implements RequestHandler<S3Event, String> {
private AmazonS3 s3 = AmazonS3ClientBuilder.standard().build();
public LambdaFunctionHandler() {}
// Test purpose only.
LambdaFunctionHandler(AmazonS3 s3) {
this.s3 = s3;
}
@Override
public String handleRequest(S3Event event, Context context) {
context.getLogger().log("Received event: " + event);
String bucket = "xxx";
String key = "SampleData.csv";
System.out.println(key);
try {
S3Object response = s3.getObject(new GetObjectRequest(bucket, key));
String contentType = response.getObjectMetadata().getContentType();
context.getLogger().log("CONTENT TYPE: " + contentType);
// Read the source file as text
AmazonS3 s3Client = new AmazonS3Client();
String body = s3Client.getObjectAsString(bucket, key);
System.out.println("Body: " + body);
System.out.println();
System.out.println("Reading as stream.....");
System.out.println();
BufferedReader br = new BufferedReader(new InputStreamReader(response.getObjectContent()));
// just saving the excel sheet data to the DataBase
String csvOutput;
try {
Class.forName("org.postgresql.Driver");
Connection con = DriverManager.getConnection("jdbc:postgresql://ENDPOINT:5432/DBNAME","USER", "PASSWORD");
System.out.println("Connected");
// Checking EOF
while ((csvOutput = br.readLine()) != null) {
String[] str = csvOutput.split(",");
String name = str[1];
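// Note: building SQL by string concatenation is vulnerable to injection; a PreparedStatement would be safer.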
String query = "insert into schema.tablename(name) values('"+name+"')";
Statement statement = con.createStatement();
statement.executeUpdate(query);
}
System.out.println("Inserted Successfully!!!");
}catch (Exception ase) {
context.getLogger().log(String.format(
"Error getting object %s from bucket %s. Make sure they exist and"
+ " your bucket is in the same region as this function.", key, bucket));
// throw ase;
}
return contentType;
} catch (Exception e) {
e.printStackTrace();
context.getLogger().log(String.format(
"Error getting object %s from bucket %s. Make sure they exist and"
+ " your bucket is in the same region as this function.", key, bucket));
throw e;
}
}
}
From my code you can see that I hard-coded key = "SampleData.csv". Is there any way to get the key inside a bucket without specifying a specific file name?
These links should help:
http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingObjectKeysUsingJava.html
You can list objects using a prefix and delimiter to find the key you are looking for without passing a specific file name; a short sketch follows.
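For illustration, here is a minimal sketch using the AWS SDK for Java v1 (which the question already uses); the bucket name "xxx" comes from the question, while the "uploads/" prefix is a hypothetical example:
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class ListKeysByPrefix {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // List every key under the given prefix ("uploads/" is a hypothetical example).
        ListObjectsV2Request request = new ListObjectsV2Request()
                .withBucketName("xxx")
                .withPrefix("uploads/");
        ListObjectsV2Result result;
        do {
            result = s3.listObjectsV2(request);
            for (S3ObjectSummary summary : result.getObjectSummaries()) {
                System.out.println(summary.getKey());
            }
            // Results come back in pages; keep going until the listing is complete.
            request.setContinuationToken(result.getNextContinuationToken());
        } while (result.isTruncated());
    }
}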
If you need to get the event details on S3, you can enable S3 event notifications to the Lambda function. Refer to the S3 event notifications documentation.
You can enable this as follows (a programmatic sketch follows these steps):
Click on 'Properties' inside your bucket.
Click on 'Events'.
Click 'Add notification'.
Give a name and select the type of event (e.g. Put, Delete, etc.).
Give a prefix and suffix if necessary, or leave them blank to match all events.
Then choose 'Send to' Lambda function and provide the Lambda ARN.
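Alternatively, the same notification can be configured programmatically. Here is a rough sketch with the SDK for Java v1; the configuration name and Lambda function ARN are hypothetical placeholders, and it assumes the function's resource policy already allows s3.amazonaws.com to invoke it:
import java.util.EnumSet;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketNotificationConfiguration;
import com.amazonaws.services.s3.model.LambdaConfiguration;
import com.amazonaws.services.s3.model.S3Event;

public class ConfigureBucketNotification {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        BucketNotificationConfiguration configuration = new BucketNotificationConfiguration();
        // "csv-upload" is a hypothetical notification name; replace the ARN with your function's ARN.
        configuration.addConfiguration("csv-upload",
                new LambdaConfiguration(
                        "arn:aws:lambda:ap-south-1:123456789012:function:MyCsvLoader",
                        EnumSet.of(S3Event.ObjectCreated)));
        // Attach the notification configuration to the bucket.
        s3.setBucketNotificationConfiguration("xxx", configuration);
    }
}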
Now the event details will be sent to the Lambda function in JSON format, and you can fetch the details from that JSON. The input will look like this:
{"Records":[{"eventVersion":"2.0","eventSource":"aws:s3","awsRegion":"ap-south-1","eventTime":"2017-11-23T09:25:54.845Z","eventName":"ObjectRemoved:Delete","userIdentity":{"principalId":"AWS:AIDAJASDFGZTLA6UZ7YAK"},"requestParameters":{"sourceIPAddress":"52.95.72.70"},"responseElements":{"x-amz-request-id":"A235BER45D4974E","x-amz-id-2":"glUK9ZyNDCjMQrgjFGH0t7Dz19eBrJeIbTCBNI+Pe9tQugeHk88zHOY90DEBcVgruB9BdU0vV8="},"s3":{"s3SchemaVersion":"1.0","configurationId":"sns","bucket":{"name":"example-bucket1","ownerIdentity":{"principalId":"AQFXV36adJU8"},"arn":"arn:aws:s3:::example-bucket1"},"object":{"key":"SampleData.csv","sequencer":"005A169422CA7CDF66"}}}]}
You can access the key as objectname = event['Records'][0]['s3']['object']['key'] (note: that snippet is Python; a Java equivalent is sketched below) and then send this info to RDS.
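In Java, since the handler in the question already receives an S3Event, the bucket and key can be read straight from the event record. A minimal sketch, assuming the aws-lambda-java-events library:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

public class KeyFromEventHandler implements RequestHandler<S3Event, String> {
    @Override
    public String handleRequest(S3Event event, Context context) {
        // Read the bucket and key from the first event record instead of hard-coding them.
        String bucket = event.getRecords().get(0).getS3().getBucket().getName();
        String key = event.getRecords().get(0).getS3().getObject().getKey();
        context.getLogger().log("Bucket: " + bucket + ", key: " + key);
        // Note: keys with special characters arrive URL-encoded and may need decoding.
        return key;
    }
}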
I am trying to publish a message on a topic with aws-java-sdk-iot, using AWSIotDataClient's publish method, on a Tomcat server (7.0.70) on an EC2 instance in the us-east-1 region.
I tried two different versions of aws-java-sdk-iot (1.11.255 and 1.10.77).
Ref:
https://github.com/aws/aws-sdk-java/tree/master/aws-java-sdk-iot
When the publish method is called, I see CPU utilization go to 80%, and I see OutOfMemory errors in my application logs and in catalina.out.
My Tomcat memory settings are -Xms2048m -Xmx2048m.
It would be great if anyone could point me in the right direction, or tell me if I am missing something when constructing the AWSIotDataClient or doing something wrong.
Thanks in advance.
Below is the snippet of the code using version 1.11.255 of aws-java-sdk-iot.
The code is in a singleton class; sendMessageToTopic is the method that gets invoked to send the message on a topic.
public Boolean sendMessageToTopic(String messagePayLoad, String topic) throws IoTPublishException{
Boolean result = false;
logger.info("sendMessageToTopic MessagePayload: "+messagePayLoad+", topic: "+topic);
try {
PublishRequest request = new PublishRequest();
ByteBuffer byteBufferMsg = stringToByteBuffer(messagePayLoad);
request.setPayload(byteBufferMsg);
request.setQos(1);
request.setSdkClientExecutionTimeout(10000);
request.setSdkRequestTimeout(10000);
request.setTopic(topic);
if(awsIotDataClient==null){
awsIotDataClient= config();
}
awsIotDataClient.publish(request);
logger.info("Publishing the request on the topic"+request);
result =true;
} catch (Exception e) {
logger.error("Error sending message to the aws-iot-gateway"+messagePayLoad,e);
throw new IoTPublishException(
"UNABLE_TO_POST_MESSAGE_IN_IOT_TOPIC",
"Unable to publish to topic: " + topic + ", for message: "
+ messagePayLoad, e);
}
logger.info("Exit sendMessageToTopic: result: "+result,",messagePayLoad: "+messagePayLoad+", topic:"+topic);
return result;
}
public AWSCredentialsProvider populateCreds() throws AmazonClientException{
AWSCredentials creds =null;
try {
if(instanceProvider==null){
try {
instanceProvider = new InstanceProfileCredentialsProvider(true);
creds = instanceProvider.getCredentials();
logger.debug("Creds : {} ",creds);
}catch (AmazonClientException e) {
logger.error("Error-AmazonClientException when loading AWS Creds from Instance Meta-Data ",e);
}catch (Exception e) {
logger.error("Error-GenericException when loading AWS Creds from Instance Meta-Data ",e);
}
if(instanceProvider ==null || creds==null){
profileProvider = new ProfileCredentialsProvider();
logger.info("Loaded creds from aws.credentials file in default directory");
}
}
} catch (Exception e) {
logger.error("Error-GenericException in populateCreds ",e);
throw new AmazonClientException(
"Cannot load the credentials from the credential profiles file. " +
"Please make sure that your credentials file is at the correct " +
"location (~/.aws/credentials), and is in valid format.",
e);
}
return ( (instanceProvider!=null && creds!=null)?instanceProvider:profileProvider);
}
protected AWSIotDataClient config() throws Exception{
try {
//Loading Via populateCreds: InstanceCredentialsProvider or ProfileCredentialsProvider
awsIotDataClient = (AWSIotDataClient) AWSIotDataClientBuilder.standard()
.withCredentials(populateCreds())
.withRegion(Regions.US_EAST_1)
.build();
} catch (Exception e) {
logger.error("Error-GenericException creating the AWSIotDataClient in config: ",e);
throw e;
}
return awsIotDataClient;
}
Update (02/12/2018): It turned out to be a Tomcat MaxPermSize issue, not an issue with the AWS SDK.
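For anyone who hits the same symptom, a minimal sketch of a Tomcat bin/setenv.sh that raises the limit; the 512m value is illustrative, and note that -XX:MaxPermSize applies to Java 7 and earlier (Java 8+ replaced the permanent generation with Metaspace):
# bin/setenv.sh (illustrative values; tune for your workload)
export CATALINA_OPTS="$CATALINA_OPTS -Xms2048m -Xmx2048m -XX:MaxPermSize=512m"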
I have an Amazon EC2 Linux instance set up and running for my Java web application to consume REST requests. The problem is that I am trying to use Google Cloud Vision in this application to recognize violence/nudity in users' pictures.
After accessing the EC2 instance in my terminal, I set GOOGLE_APPLICATION_CREDENTIALS with the following command, which I found in the documentation:
export GOOGLE_APPLICATION_CREDENTIALS=<my_json_path.json>
Here comes my first problem: when I restart my server and run 'echo $GOOGLE_APPLICATION_CREDENTIALS', the variable is gone. OK, I added it to .bash_profile and .bashrc, and now it persists.
But when I ran my application with the code below, to get the adult and violence status of my picture, I got the following error:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
My code is the following:
Controller:
if (SafeSearchDetection.isSafe(user.getId())) {
    if (!UserDB.updateUserProfile(user)) {
        throw new SQLException("Failed to Update");
    }
} else {
    throw new IOException("Explicit Content");
}
SafeSearchDetection.isSafe(int idUser):
String path = IMAGES_PATH + idUser + ".jpg";
try {
mAdultMedicalViolence = detectSafeSearch(path);
if(mAdultMedicalViolence.get(0) > 3)
return false;
else if(mAdultMedicalViolence.get(1) > 3)
return false;
else if(mAdultMedicalViolence.get(2) > 3)
return false;
} catch (IOException e) {
e.printStackTrace();
} catch (Exception e) {
e.printStackTrace();
}
return true;
detectSafeSearch(String path):
List<AnnotateImageRequest> requests = new ArrayList<AnnotateImageRequest>();
ArrayList<Integer> adultMedicalViolence = new ArrayList<Integer>();
ByteString imgBytes = ByteString.readFrom(new FileInputStream(path));
Image img = Image.newBuilder().setContent(imgBytes).build();
Feature feat = Feature.newBuilder().setType(Type.SAFE_SEARCH_DETECTION).build();
AnnotateImageRequest request = AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
requests.add(request);
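// ImageAnnotatorClient is AutoCloseable; in production code, close it when done (e.g. try-with-resources).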
ImageAnnotatorClient client = ImageAnnotatorClient.create();
BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
List<AnnotateImageResponse> responses = response.getResponsesList();
for (AnnotateImageResponse res : responses) {
if (res.hasError()) {
System.out.println("Error: "+res.getError().getMessage()+"\n");
return null;
}
SafeSearchAnnotation annotation = res.getSafeSearchAnnotation();
adultMedicalViolence.add(annotation.getAdultValue());
adultMedicalViolence.add(annotation.getMedicalValue());
adultMedicalViolence.add(annotation.getViolenceValue());
}
for(int content : adultMedicalViolence)
System.out.println(content + "\n");
return adultMedicalViolence;
My REST application was built on top of Tomcat 8. I had no success running the command:
System.getenv("GOOGLE_APPLICATION_CREDENTIALS")
(it returned null). I realized that my problem was with the environment variables of the Tomcat installation. To correct this, I just created a new file, setenv.sh, in Tomcat's bin directory with the content:
GOOGLE_APPLICATION_CREDENTIALS=<my_json_path.json>
And it worked!