How to create a log-based alert using the Monitoring API? - google-cloud-platform

The documentation says there is a link to an example of creating a log-based alert using the Monitoring API, but there is only an example of how to create one using the UI.
I am having trouble building a new AlertPolicy using the Java Monitoring API because there does not seem to be a condition builder for log-based alerts (using a log query filter). I don't think I should be using the Absent, Threshold, or MonitoringQueryLanguage options. How can I build the correct condition?

The three condition types MetricAbsence, MetricThreshold and MonitoringQueryLanguageCondition are conditions for metric-based alerting policies. Since we want to create a log-based alerting policy, the condition type must be LogMatch, per the documentation on conditions for log-based alerting policies.
The AlertPolicy.Condition.Builder class has methods to set the metric-based condition types, but nothing for the log-based condition type, LogMatch. Note that log-based alerting is a pre-GA (Preview) feature per the public docs, so the various client libraries may not support it yet.
However, when creating a new alert policy with projects.alertPolicies.create, we can add a condition of type conditionMatchedLog.
So it is recommended to use the UI or the API above, not the client library, until it supports this condition type.
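For reference, a rough sketch of what that REST call could look like from plain Java. This is an assumption-heavy illustration: it presumes Java 11+, a placeholder project ID, and an OAuth 2.0 access token (for example from gcloud auth print-access-token); the field names follow the projects.alertPolicies.create request body.

// Hypothetical sketch: POST a log-based alert policy to the Monitoring REST API.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateLogBasedAlert {
    public static void main(String[] args) throws Exception {
        String projectId = "my-project";   // placeholder: your project ID
        String accessToken = "ya29....";   // placeholder: e.g. from `gcloud auth print-access-token`

        // Minimal policy body with a conditionMatchedLog condition and the
        // notificationRateLimit that log-based alert policies require.
        String body = "{"
            + "\"displayName\": \"log-based-alert\","
            + "\"combiner\": \"OR\","
            + "\"conditions\": [{"
            + "  \"displayName\": \"error-log-match\","
            + "  \"conditionMatchedLog\": {\"filter\": \"severity = \\\"NOTICE\\\"\"}"
            + "}],"
            + "\"alertStrategy\": {\"notificationRateLimit\": {\"period\": \"300s\"}}"
            + "}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://monitoring.googleapis.com/v3/projects/" + projectId + "/alertPolicies"))
            .header("Authorization", "Bearer " + accessToken)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}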

As Ashish pointed out, this is not currently available in the client library. Hopefully it will be added soon, but in the meantime, for anyone who wants to add a log-based alert condition (conditionMatchedLog) using the Java client library, this is what the code looks like.
// UnknownFieldSet and ByteString come from com.google.protobuf.
public static UnknownFieldSet createConditionMatchedLog(String filter) {
    // example filter: "\nseverity = \"NOTICE\""
    // LogMatch.filter (string, field 1)
    UnknownFieldSet.Field logMatchString =
        UnknownFieldSet.Field.newBuilder()
            .addLengthDelimited(ByteString.copyFromUtf8(filter))
            .build();
    UnknownFieldSet logMatchFilter =
        UnknownFieldSet.newBuilder()
            .addField(1, logMatchString)
            .build();
    // Wrap the LogMatch message as the condition's conditionMatchedLog (field 20)
    UnknownFieldSet.Field logMatchFilterCombine =
        UnknownFieldSet.Field.newBuilder()
            .addLengthDelimited(logMatchFilter.toByteString())
            .build();
    UnknownFieldSet logMatchCondition =
        UnknownFieldSet.newBuilder()
            .addField(20, logMatchFilterCombine)
            .build();
    return logMatchCondition;
}
When constructing the actual alert policy, you also need to define the alert strategy.
public static UnknownFieldSet createAlertStrategyFieldSet(long periodTime) {
    // Builds the alertStrategy field (field 21), which log-based alert policies require.
    // The nesting mirrors alertStrategy.notificationRateLimit.period.
    // Innermost: the period duration's seconds (varint); periodTime must be a long
    UnknownFieldSet.Field period =
        UnknownFieldSet.Field.newBuilder()
            .addVarint(periodTime)
            .build();
    UnknownFieldSet periodSet =
        UnknownFieldSet.newBuilder()
            .addField(1, period)
            .build();
    // Wrap as notificationRateLimit.period
    UnknownFieldSet.Field periodSetField =
        UnknownFieldSet.Field.newBuilder()
            .addLengthDelimited(periodSet.toByteString())
            .build();
    UnknownFieldSet periodSetFieldUp =
        UnknownFieldSet.newBuilder()
            .addField(1, periodSetField)
            .build();
    // Wrap as alertStrategy.notificationRateLimit
    UnknownFieldSet.Field periodSetField2 =
        UnknownFieldSet.Field.newBuilder()
            .addLengthDelimited(periodSetFieldUp.toByteString())
            .build();
    UnknownFieldSet periodSetFieldUp2 =
        UnknownFieldSet.newBuilder()
            .addField(1, periodSetField2)
            .build();
    // Wrap as the policy's alertStrategy (field 21)
    UnknownFieldSet.Field periodSetField3 =
        UnknownFieldSet.Field.newBuilder()
            .addLengthDelimited(periodSetFieldUp2.toByteString())
            .build();
    UnknownFieldSet alertStrategy =
        UnknownFieldSet.newBuilder()
            .addField(21, periodSetField3)
            .build();
    return alertStrategy;
}
Putting it together
// "med" below is the caller's own request/model object from the original context.
UnknownFieldSet conditionMatchedLog = createConditionMatchedLog(filter);

// Construct the Condition object
AlertPolicy.Condition alertPolicyCondition =
    AlertPolicy.Condition.newBuilder()
        .setDisplayName(med.getConditions().get(index).getDisplayName())
        .setUnknownFields(conditionMatchedLog)
        .build();

// Build the alertStrategy field, since log-based alert policies require it
UnknownFieldSet alertStrategy =
    createAlertStrategyFieldSet(
        convertTimePeriodFromString(med.getAlertStrategy().getNotificationRateLimit().getPeriod()));

// Build the log-based alert policy
AlertPolicy alertPolicy =
    AlertPolicy.newBuilder()
        .setDisplayName(med.getDisplayName())
        .addConditions(alertPolicyCondition)
        .setCombiner(AlertPolicy.ConditionCombinerType.OR)
        .setEnabled(BoolValue.newBuilder().setValue(true).build())
        .setUnknownFields(alertStrategy)
        .build();
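Once the policy is built, you can submit it with the regular client. A minimal sketch, assuming the google-cloud-monitoring library and a placeholder project ID:

// Sketch: create the policy built above. AlertPolicyServiceClient is from
// com.google.cloud.monitoring.v3 and ProjectName from com.google.monitoring.v3.
try (AlertPolicyServiceClient alertClient = AlertPolicyServiceClient.create()) {
    AlertPolicy created = alertClient.createAlertPolicy(ProjectName.of("my-project"), alertPolicy);
    System.out.println("Created policy: " + created.getName());
}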

Related

Update attributes using UpdateItemEnhancedRequest - DynamoDbEnhancedClient

As per the DynamoDB Java SDK v2, the update operation can be performed as:
DynamoDbTable<Customer> mappedTable =
    enhancedClient.table("Customer", TableSchema.fromBean(Customer.class));
Key key = Key.builder()
    .partitionValue(keyVal)
    .build();
Customer customerRec = mappedTable.getItem(r -> r.key(key));
customerRec.setEmail(email);
mappedTable.updateItem(customerRec);
1. Shouldn't this cause two calls to DynamoDB?
2. What if, after fetching the record and before the updateItem call, another thread updates the record? Then we would have to put it into a transaction as well.
There is another way, using UpdateItemEnhancedRequest:
final var request = UpdateItemEnhancedRequest.builder(Customer.class)
    .item(updatedCustomerObj)
    .ignoreNulls(Boolean.TRUE)
    .build();
mappedTable.updateItem(request);
but this requires ignoreNulls(TRUE) and will not handle updates where a null value needs to be set.
What is the optimal way to do an update operation with the enhanced client?
For 1:
I don't like this part of the SDK, but the "non-enhanced" client can update without first querying for the entity.
UpdateItemRequest updateItemRequest = UpdateItemRequest.builder()
    .tableName("Customer")
    .key(Map.of(
        "PK", AttributeValue.builder().s("pk").build(),
        "SK", AttributeValue.builder().s("sk").build()))
    .attributeUpdates(Map.of(
        "email", AttributeValueUpdate.builder()
            .action(AttributeAction.PUT)
            .value(AttributeValue.builder().s("new-email@gmail.com").build())
            .build()))
    .build();
client.updateItem(updateItemRequest);
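As a side note, attributeUpdates is the older, legacy-style parameter; the same single-call update can also be written with an update expression. A sketch under the same v2 SDK, reusing the placeholder key and attribute names from above:

// Sketch: equivalent single-call update using an update expression (SDK v2).
UpdateItemRequest expressionUpdate = UpdateItemRequest.builder()
    .tableName("Customer")
    .key(Map.of(
        "PK", AttributeValue.builder().s("pk").build(),
        "SK", AttributeValue.builder().s("sk").build()))
    .updateExpression("SET email = :email")
    .expressionAttributeValues(Map.of(
        ":email", AttributeValue.builder().s("new-email@gmail.com").build()))
    .build();
client.updateItem(expressionUpdate);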
For 2:
Yes, you are absolutely right; that can happen and will lead to data inconsistency. To avoid it, you should use a conditional write/update with a version attribute. Luckily there is a Java annotation for this:
@DynamoDBVersionAttribute
public Long getVersion() { return version; }
public void setVersion(Long version) { this.version = version; }
More info here
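For the v2 enhanced client used in the question, the equivalent appears to be the @DynamoDbVersionAttribute annotation, which the enhanced client's VersionedRecordExtension uses to add the conditional check. A sketch, with a hypothetical bean:

// Sketch (v2 enhanced client): a version attribute on the mapped bean.
// The VersionedRecordExtension should then reject an update if another
// writer bumped the version first.
@DynamoDbBean
public class Customer {
    private Long version;

    @DynamoDbVersionAttribute
    public Long getVersion() { return version; }
    public void setVersion(Long version) { this.version = version; }

    // ... key attributes (@DynamoDbPartitionKey, etc.) and other fields go here
}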

How to make a PubSub triggered Cloud Function with message ordering using terraform?

I am trying to create a Cloud Function that is triggered by a Pub/Sub subscription, but I need message ordering enabled. I know to use the event_trigger block in the google_cloudfunctions_function block when creating a function linked to a subscription, but that does not allow the enable_message_ordering setting described under Pub/Sub. When using the subscription's push config, I don't know how to link the endpoint to the function.
So is there a way I can link the function to a subscription with message ordering enabled?
Can I just use the internal URL to the function as the push config URL?
You can't use background functions triggered by Pub/Sub together with message ordering (or filtering).
You have to deploy an HTTP function instead (take care: the signature of the function changes, and the format of the Pub/Sub message also changes slightly).
Then create a Pub/Sub push subscription and use the Cloud Functions URL as the endpoint. It is also best to add a service account on the subscription so that only it is allowed to call your function.
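For reference, a rough sketch of what the HTTP-triggered function can look like in Java with the Functions Framework. The push delivery wraps the Pub/Sub message in a JSON envelope, so you decode message.data yourself; the class names below come from functions-framework-api and Gson, and the envelope shape is my understanding of the push format.

// Sketch: HTTP function receiving a Pub/Sub push delivery.
// The push body looks like {"message": {"data": "<base64>", ...}, "subscription": "..."}.
import com.google.cloud.functions.HttpFunction;
import com.google.cloud.functions.HttpRequest;
import com.google.cloud.functions.HttpResponse;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import java.util.Base64;

public class ProcessEvent implements HttpFunction {
  @Override
  public void service(HttpRequest request, HttpResponse response) throws Exception {
    JsonObject envelope = JsonParser.parseReader(request.getReader()).getAsJsonObject();
    String data = envelope.getAsJsonObject("message").get("data").getAsString();
    String payload = new String(Base64.getDecoder().decode(data));
    // ... process the ordered message here ...
    response.setStatusCode(204); // ack the push delivery
  }
}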
For completeness, I wanted to add the Terraform I used to do this, in case others are looking.
# This is the HTTP function that processes the events from PubSub; note it is set as an HTTP trigger
resource "google_cloudfunctions_function" "processEvent" {
  name    = "processEvent"
  runtime = var.RUNTIME
  environment_variables = {
    GCP_PROJECT_ID = var.GCP_PROJECT_ID
    LOG_LEVEL      = var.LOG_LEVEL
  }
  available_memory_mb   = var.AVAILABLE_MEMORY
  timeout               = var.TIMEOUT
  source_archive_bucket = var.SOURCE_ARCHIVE_BUCKET
  source_archive_object = google_storage_bucket_object.processor-archive.name
  trigger_http          = true
  entry_point           = "processEvent"
}

# Define the topic
resource "google_pubsub_topic" "event-topic" {
  name = "event-topic"
}

# We need to create the subscription explicitly because we need to enable message ordering
resource "google_pubsub_subscription" "processEvent_subscription" {
  name                 = "processEvent_subscription"
  topic                = google_pubsub_topic.event-topic.name
  ack_deadline_seconds = 20
  push_config {
    push_endpoint = "https://${var.REGION}-${var.GCP_PROJECT_ID}.cloudfunctions.net/${google_cloudfunctions_function.processEvent.name}"
    oidc_token {
      # a new IAM service account is needed to allow the subscription to trigger the function
      service_account_email = "cloudfunctioninvoker@${var.GCP_PROJECT_ID}.iam.gserviceaccount.com"
    }
  }
  enable_message_ordering = true
}

How to set up Amazon DynamoDB Client on AWS (JAVA)

I'm trying to set up a client for Amazon DynamoDB in Java 8 and am running into this error when I try to run my Lambda function locally. I am trying to connect to an Amazon DynamoDB table I have already set up in the AWS Management Console.
Error trying to commit audit record:com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: InvalidSignatureException;
I am still new to AWS and trying to understand how it works. I am sure the credentials I provided match the ones I have.
AmazonDynamoDB client = AmazonDynamoDBClient.builder()
    .withRegion("us-east-2")
    .withCredentials(new AWSStaticCredentialsProvider(
        new BasicAWSCredentials("key", "private key")))
    .build();
DynamoDB dynamoDB = new DynamoDB(client);
Table table = dynamoDB.getTable("tableName");
Maybe you can try changing it according to the example in the AWS docs, without explicitly setting a credentials provider.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/CodeSamples.Java.html
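In other words, something along these lines, letting the default credentials provider chain (environment variables, ~/.aws/credentials, or the Lambda execution role) supply the keys. A sketch, keeping the region and table name from the question:

// Sketch: no explicit credentials provider; the default chain resolves them.
// Make sure the region matches where the table was created.
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
    .withRegion(Regions.US_EAST_2)
    .build();
DynamoDB dynamoDB = new DynamoDB(client);
Table table = dynamoDB.getTable("tableName");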
This is my code for creating a table, and it is working:
BasicAWSCredentials awsCreds = new BasicAWSCredentials("access_key_id", "secret_key_id");
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
    .withRegion(Regions.US_EAST_1)
    .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
    .build();
// AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
//     .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
//     .build();
DynamoDB dynamoDB = new DynamoDB(client);
String tableName = "Songs";
try {
    System.out.println("Attempting to create table; please wait...");
    Table table = dynamoDB.createTable(tableName,
        Arrays.asList(
            new KeySchemaElement("year", KeyType.HASH),    // Partition key
            new KeySchemaElement("title", KeyType.RANGE)), // Sort key
        Arrays.asList(
            new AttributeDefinition("year", ScalarAttributeType.N),
            new AttributeDefinition("title", ScalarAttributeType.S)),
        new ProvisionedThroughput(10L, 10L));
    table.waitForActive();
    System.out.println("Success. Table status: " + table.getDescription().getTableStatus());
} catch (Exception e) {
    System.err.println("Unable to create table: ");
    System.err.println(e.getMessage());
}

Using AmazonSQSExtendedClient with AmazonSQSAsync

So the way I understand it, by using AmazonSQSExtendedClient, payloads over 256 KB get stored in a temp S3 bucket (see the guide here). I'm having a bit of trouble implementing this with AmazonSQSAsync. Because of the way the current system is set up, I'm currently using AmazonSQSAsync, like so:
@Bean
@Primary
AmazonSQSAsync amazonSQSAsync() {
    return AmazonSQSAsyncClientBuilder
        .standard()
        .withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration(
                endpoint_url,
                signin_region))
        .build()
}
Now I want to replace this with the extended client to handle payloads that are over 256 KB. I have something like this so far:
@Bean
@Primary
AmazonSQS amazonSQSWithExtendedClient() {
    final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient()
    final BucketLifecycleConfiguration.Rule expirationRule =
        new BucketLifecycleConfiguration.Rule()
    expirationRule.withExpirationInDays(14).withStatus("Enabled")
    final BucketLifecycleConfiguration lifecycleConfig =
        new BucketLifecycleConfiguration().withRules(expirationRule)
    s3.setBucketLifecycleConfiguration(temp_bucket, lifecycleConfig)
    // Keep this part
    final ExtendedClientConfiguration extendedClientConfig =
        new ExtendedClientConfiguration()
            .withLargePayloadSupportEnabled(s3, temp_bucket)
    final AmazonSQS sqsExtended = new AmazonSQSExtendedClient(AmazonSQSClientBuilder
        .standard()
        .withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration(
                endpoint_url,
                signin_region))
        .build(), extendedClientConfig)
    return sqsExtended
}
Is there any way that I can make the extended client work with AmazonSQSAsync or at least have some workaround?
If you configure your JMS bean, you can solve this kind of problem.
Do something like this: https://craftingjava.com/blog/large-sqs-messages-jms-spring-boot/
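The gist of that approach, roughly sketched (assuming the amazon-sqs-java-messaging-lib dependency and the amazonSQSWithExtendedClient() bean from the question), is to expose the extended client through a JMS connection factory instead of AmazonSQSAsync:

// Sketch: wrap the extended (large-payload) client in an SQS JMS connection factory,
// then consume via Spring JMS listeners instead of AmazonSQSAsync.
@Bean
public SQSConnectionFactory sqsConnectionFactory() {
    return new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSWithExtendedClient());
}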

S3 CopyObjectRequest between regions

I'm trying to copy some objects between 2 S3 buckets that are in different regions.
I have this:
static void PutToDestination(string filename)
{
    var credentials = new Amazon.Runtime.BasicAWSCredentials(awsAccessKeyId, awsSecretKey);
    var client = new Amazon.S3.AmazonS3Client(credentials, Amazon.RegionEndpoint.GetBySystemName(awsS3RegionNameSource));

    CopyObjectRequest request = new CopyObjectRequest();
    request.SourceKey = filename;
    request.DestinationKey = filename;
    request.SourceBucket = awsS3BucketNameSource;
    request.DestinationBucket = awsS3BucketNameDest;

    try
    {
        CopyObjectResponse response = client.CopyObject(request);
    }
    catch (AmazonS3Exception x)
    {
        Console.WriteLine(x);
    }
}
I get "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint."
There doesn't seem to be a way to set separate endpoints for source and destination.
Is there a different method I should look at?
Thanks
Did you try this? https://github.com/aws/aws-sdk-java-v2/issues/2212 (provide the destination region when building the S3 client). Worked for me :)
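For anyone doing this from the Java SDK rather than .NET, that suggestion amounts to roughly the following sketch, with placeholder bucket names and region; the client is built for the destination bucket's region:

// Sketch: client pointed at the destination bucket's region, copying from the source bucket.
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
    .withCredentials(new ProfileCredentialsProvider())
    .withRegion(Regions.EU_WEST_1) // destination bucket's region (placeholder)
    .build();
s3.copyObject(new CopyObjectRequest(sourceBucketName, key, destinationBucketName, key));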
I think if you don't specify the region explicitly, then a cross-region copy should work.
See the documentation.
However, Transfer Acceleration will not work, and the copy will effectively be COPY = GET + PUT.
Here is the relevant excerpt from the documentation:
Important
Amazon S3 Transfer Acceleration does not support cross-region copies.
In your code, you are specifying the region like below:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
    .withCredentials(new ProfileCredentialsProvider())
    .withRegion(clientRegion)
    .build();
Instead, initialize the S3 client without a region, like below:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
    .withCredentials(new ProfileCredentialsProvider())
    .build();