How to retrieve EventDetails in EventBridge?

Using the AWS .NET SDK, I put an event on EventBridge and then try to track it in CloudWatch.
This is how I put the event:
using (var eventClient = new AmazonEventBridgeClient(credentials, RegionEndpoint.USEast1))
{
    PutEventsResponse result = await eventClient.PutEventsAsync(new PutEventsRequest
    {
        Entries = new List<PutEventsRequestEntry>
        {
            new PutEventsRequestEntry
            {
                DetailType = "TestEvent",
                EventBusName = "default",
                Source = "mySource",
                Detail = JsonConvert.SerializeObject(new TestClass { Message = "myMessage" }),
                Time = DateTime.UtcNow
            }
        }
    });
}
And this is what I see in the logs (screenshot not included): the event appears, but without the fields I set.
Can somebody explain why I don't see the Detail and DetailType I defined? Am I doing something wrong?
Thank you in advance.

OK, I finally found the solution. All I needed was to configure the input for my rule's target.
There I chose "Part of the matched event" and defined what I want to receive. But this option isn't available for a CloudWatch Logs target, so this answer isn't complete.
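For reference, the same target input can be configured with the .NET SDK instead of the console. This is a minimal sketch rather than my exact setup: the rule name and the target ARN are placeholders, and InputPath = "$.detail" corresponds to choosing "Part of the matched event":

using System.Collections.Generic;
using Amazon.EventBridge;
using Amazon.EventBridge.Model;

// Attach a target to an existing rule, forwarding only the "detail"
// portion of the matched event to it.
await eventClient.PutTargetsAsync(new PutTargetsRequest
{
    Rule = "myRule", // placeholder rule name
    Targets = new List<Target>
    {
        new Target
        {
            Id = "myTarget",
            Arn = "arn:aws:lambda:us-east-1:111122223333:function:myFunction", // placeholder target
            InputPath = "$.detail" // "Part of the matched event"
        }
    }
});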

Related

GCP API - Determining what role a resource instance has been created with

For the project I'm on, I am tasked with creating a testing app that uses Terraform to create a resource instance and then tests that it was created properly. The purpose is to test the Terraform script's result by validating certain characteristics of the created resource. That's the broad outline.
For several of these scripts a resource is assigned a role. It could be a PubSub subscription, DataCatalog, etc.
Example Terraform code for a Spanner Database assigning roles/spanner.databaseAdmin:
resource "google_spanner_database_iam_member" "user_database_admin" {
for_each = toset(var.iam_user_database_admins)
project = var.project
instance = var.instance_id
database = google_spanner_database.spanner_database.name
role = "roles/spanner.databaseAdmin"
member = "user:${each.key}"
}
So my question is this: is there a way, using a .NET GCP API, to determine that the role was assigned? I can test for permissions via a TestIamPermissions method off the client object, and that's what I'm currently doing, but that gives me a sometimes long list of possible permissions. Is there a way to ask "does this Spanner database have roles/spanner.databaseAdmin assigned?"
Here's an example of code testing for permissions on a PubSub Subscription:
TestIamPermissionsRequest subscriptionRequest = new TestIamPermissionsRequest
{
    ResourceAsResourceName = SubscriptionName.FromProjectSubscription(projectId, subscriptionId),
    Permissions =
    {
        "pubsub.subscriptions.get",
        "pubsub.subscriptions.delete",
        "pubsub.subscriptions.update"
    }
};
TestIamPermissionsResponse subscriptionResponse = publisher.TestIamPermissions(subscriptionRequest);
It seems like there ought to be a cleaner way to do this, but being somewhat new to GCP, I haven't found one yet. Suggestions are welcome.
Thought I should close this question off with what I eventually discovered. The proper question isn't what role is assigned to an instance of a resource, but which users have been allowed to use the resource, and with what role.
The proper call is GetIamPolicy, which is available in the APIs for all of the resources I've been working with. The problem was that I wasn't seeing anything because no user accounts had been assigned to the resource. I updated the Terraform script to assign a user to the resource with the required roles. When called after that, GetIamPolicy returns a Bindings array that lists the assigned roles and users. This was the information I needed; going down the path of TestIamPermissions was unneeded.
Here's an example of my use of this:
bool roleFound = false;
bool userFound = false;
bool exception = false;
try
{
    Policy policyResponse = Client.GetIamPolicy(Resource);
    foreach (var binding in policyResponse.Bindings)
    {
        if (AcceptedRoles.Contains(binding.Role))
            roleFound = true;
        foreach (var member in binding.Members)
        {
            // Members carry a type prefix such as "user:" or "group:";
            // strip it before comparing against the expected user list.
            string testUser = member;
            if (member.StartsWith("user:"))
                testUser = member.Substring(5);
            else if (member.StartsWith("group:"))
                testUser = member.Substring(6);
            if (Settings.UserTestList.Contains(testUser))
                userFound = true;
        }
    }
}
catch (Grpc.Core.RpcException)
{
    exception = true;
}
Assert.True(roleFound);
Assert.True(userFound);
Assert.False(exception);

GCP Cloud Tasks: shorten period for creating a previously created named task

We are developing a GCP Cloud Tasks based queue process that sends a status email whenever a particular Firestore doc write-trigger fires. We use Cloud Tasks so that a delay can be created (using the scheduledTime property, set 2 minutes in the future) before the email is sent, and to control dedup (by using a task name formatted as [firestore-collection-name]-[doc-id]), since the 'write' trigger on the Firestore doc can fire several times as the document is created and then quickly updated by backend cloud functions.
Once the task's delay period has been reached, the cloud task runs and the email is sent with the updated Firestore document info included, after which the task is deleted from the queue and all is good.
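For context, creating such a named, delayed task looks roughly like this with the Python client. This is a sketch of the setup described above; PROJECT, LOCATION, QUEUE, COLLECTION_NAME, DOC_ID, and the handler URI are placeholders:

from google.cloud import tasks_v2
from google.protobuf import timestamp_pb2
import datetime

client = tasks_v2.CloudTasksClient()
parent = client.queue_path(PROJECT, LOCATION, QUEUE)

# Schedule the task 2 minutes in the future.
schedule = timestamp_pb2.Timestamp()
schedule.FromDatetime(datetime.datetime.utcnow() + datetime.timedelta(minutes=2))

task = {
    # The explicit task name is what Cloud Tasks uses for de-duplication.
    'name': client.task_path(PROJECT, LOCATION, QUEUE, f'{COLLECTION_NAME}-{DOC_ID}'),
    'schedule_time': schedule,
    'app_engine_http_request': {
        'http_method': 'POST',
        'relative_uri': '/send-status-email',  # placeholder handler
    },
}
client.create_task(parent, task)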
Except:
If the user updates the Firestore doc (say 20 or 30 min later) we want to resend the status email but are unable to create the task using the same task-name. We get the following error:
409 The task cannot be created because a task with this name existed too recently. For more information about task de-duplication see https://cloud.google.com/tasks/docs/reference/rest/v2/projects.locations.queues.tasks/create#body.request_body.FIELDS.task.
This was unexpected, as the queue is empty at this point and the last task completed successfully. The documentation referenced in the error message says:
If the task's queue was created using Cloud Tasks, then another task
with the same name can't be created for ~1hour after the original task
was deleted or executed.
Question: is there some way this restriction can be bypassed, by lowering the amount of time or even removing the restriction altogether?
The short answer is no. As you've already pointed out, the docs are very clear about this behavior: you must wait about 1 hour to create a task with the same name as one that was previously created. Neither the API nor the client libraries allow you to decrease this time.
Having said that, I would suggest that instead of reusing the same task ID, you use a different one for each task and put an identifier in the body of the request. For example, using Python:
from google.cloud import tasks_v2
from google.protobuf import timestamp_pb2
import datetime

def create_task(project, queue, location, payload=None, in_seconds=None):
    client = tasks_v2.CloudTasksClient()
    parent = client.queue_path(project, location, queue)
    task = {
        'app_engine_http_request': {
            'http_method': 'POST',
            'relative_uri': '/task/' + queue
        }
    }
    if payload is not None:
        converted_payload = payload.encode()
        task['app_engine_http_request']['body'] = converted_payload
    if in_seconds is not None:
        d = datetime.datetime.utcnow() + datetime.timedelta(seconds=in_seconds)
        timestamp = timestamp_pb2.Timestamp()
        timestamp.FromDatetime(d)
        task['schedule_time'] = timestamp
    response = client.create_task(parent, task)
    print('Created task {}'.format(response.name))
    print(response)

# You can change DOCUMENT_ID to USER_ID or something else that identifies the task
create_task(PROJECT_ID, QUEUE, REGION, DOCUMENT_ID)
Facing a similar problem of needing to debounce multiple instances of Firestore write-trigger functions, we worked around the default Cloud Tasks task-name based dedup mechanism (still a constraint in Nov 2022) by building a small debounce "helper" using Firestore transactions.
We're using a helper collection _syncHelper_ to implement a delayed throttle for side effects of write-trigger fires; in the OP's case, send one email for all writes within 2 minutes.
In our case we are using the Firebase Functions task queue utils rather than interacting with Cloud Tasks directly, but that's immaterial to the solution. The key is to determine the task's execution time in advance and use that as the "dedup key":
async function enqueueTask(shopId) {
  const queueName = 'doSomething';
  const now = new Date();
  const next = new Date(now.getTime() + 2 * 60 * 1000);
  try {
    const shouldEnqueue = await getFirestore().runTransaction(async t => {
      const syncRef = getFirestore().collection('_syncHelper_').doc(<collection_id-doc_id>);
      const doc = await t.get(syncRef);
      let data = doc.data();
      if (data?.timestamp.toDate() > now) {
        return false;
      }
      await t.set(syncRef, { timestamp: Timestamp.fromDate(next) });
      return true;
    });
    if (shouldEnqueue) {
      let queue = getFunctions().taskQueue(queueName);
      await queue.enqueue(
        { timestamp: next.toISOString() },
        { scheduleTime: next }
      );
    }
  } catch {
    ...
  }
}
This will ensure a new task is enqueued only if the "next execution" time has passed.
The execution operation (also a cloud function in our case) will remove the sync data entry if it hasn't been changed since it was executed:
exports.doSomething = functions.tasks.taskQueue({
  retryConfig: {
    maxAttempts: 2,
    minBackoffSeconds: 60,
  },
  rateLimits: {
    maxConcurrentDispatches: 2,
  }
}).onDispatch(async data => {
  let { timestamp } = data;
  await sendYourEmailHere();
  await getFirestore().runTransaction(async t => {
    const syncRef = getFirestore().collection('_syncHelper_').doc(<collection_id-doc_id>);
    const doc = await t.get(syncRef);
    let syncData = doc.data();
    if (syncData?.timestamp.toDate() <= new Date(timestamp)) {
      await t.delete(syncRef);
    }
  });
});
This isn't a bulletproof solution (if the doSomething() execution function has high latency, for example), but it's good enough for 99% of our use cases.

AWS CodeBuild Branch filter option removed

We are using the AWS CodeBuild 'Branch filter' option to trigger a build only when a push to master is made. However, the 'Branch filter' option has apparently been removed recently and 'Webhook event filter groups' have been added. I expect they provide more functionality, but I cannot see how to reproduce the old 'Branch filter' behavior.
Can someone help?
I couldn't see this change flagged anywhere, but it worked for me setting the event type to PUSH and HEAD_REF to
refs/heads/branch-name
as per https://docs.aws.amazon.com/codebuild/latest/userguide/sample-github-pull-request.html
You need to use filter groups instead of branch_filters.
Example in Terraform (0.12+):
For feature branches:
resource "aws_codebuild_webhook" "feature" {
project_name = aws_codebuild_project.feature.name
filter_group {
filter {
type = "EVENT"
pattern = "PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED, PULL_REQUEST_REOPENED"
}
filter {
type = "HEAD_REF"
pattern = "^(?!^/refs/heads/master$).*"
exclude_matched_pattern = false
}
}
}
For the master branch:
resource "aws_codebuild_webhook" "master" {
project_name = aws_codebuild_project.master.name
filter_group {
filter {
type = "EVENT"
pattern = "PUSH"
}
filter {
type = "HEAD_REF"
pattern = "^refs/heads/master$"
exclude_matched_pattern = false
}
}
}
Both webhooks require their own aws_codebuild_project, so you will have two CodeBuild projects per repository.
branch_filter does not work in CodeBuild, although it is still configurable via the UI or the API; filter_groups are what carry the required logic.

How to retrieve AWS Cloudwatch metrics using AWSSDK.CloudWatch?

I'm trying to retrieve data about my load balancers using the AWSSDK.CloudWatch package, but I'm having no luck actually getting any values out of it. No matter what I try, the Values property of each metric data result in the response is an empty array.
AmazonCloudWatchClient client = new AmazonCloudWatchClient("MyAccessKeyId", "MySecretAccessKey", Amazon.RegionEndpoint.MyRegion);
GetMetricDataRequest request = new GetMetricDataRequest()
{
    StartTime = DateTime.UtcNow.AddHours(-12),
    EndTime = DateTime.UtcNow,
    MetricDataQueries = new List<MetricDataQuery>()
    {
        new MetricDataQuery()
        {
            Id = "myMetric",
            MetricStat = new MetricStat()
            {
                Metric = new Metric()
                {
                    Namespace = "AWS/ELB",
                    MetricName = "HealthyHostCount",
                    Dimensions = new List<Dimension>()
                    {
                        new Dimension()
                        {
                            Name = "LoadBalancerName",
                            Value = "MyLoadBalancerName"
                        }
                    }
                },
                Period = 300,
                Stat = "Sum",
                Unit = "None"
            }
        }
    },
    ScanBy = ScanBy.TimestampDescending,
    MaxDatapoints = 1000
};
GetMetricDataResponse response = client.GetMetricData(request);
I'm struggling to find any relevant examples of this. I'd prefer to be able to obtain this value per load balancer.
There are many things that could cause your query to return no data. This is how I would approach debugging it:
- Was the response 200 OK? If not, something is wrong with the query itself: a required parameter is missing, the credentials are not valid, or the policy does not allow GetMetricData calls.
- Is the metric name correct? The full metric name must be correct, and that includes the namespace, the metric name, and all of the dimensions. CloudWatch does not distinguish between a metric with no data and a metric that doesn't exist; you will just get no data back. This is a potential issue in your request: if your hosts are in a target group, you may need to specify the target group dimension.
- Is the region endpoint correct? Metrics are separated by region and you have to call the correct region endpoint.
- Are the credentials from the correct account?
- Is the unit correct? If you are not sure about the unit, don't specify it. This is the second thing that could be an issue with your request: this metric could have the unit Count. Try it without specifying the unit, as in the sketch after this list.
- Is the time range correct? Was the data being published for the time range you are requesting?
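To make the unit suggestion concrete, here is a minimal sketch of the same query with Unit left unset; the region, credentials, and load balancer name are the question's placeholders, and this is only the first variant worth trying, not a guaranteed fix:

using System;
using System.Collections.Generic;
using Amazon.CloudWatch;
using Amazon.CloudWatch.Model;

var request = new GetMetricDataRequest
{
    StartTime = DateTime.UtcNow.AddHours(-12),
    EndTime = DateTime.UtcNow,
    MetricDataQueries = new List<MetricDataQuery>
    {
        new MetricDataQuery
        {
            Id = "myMetric", // query ids must start with a lowercase letter
            MetricStat = new MetricStat
            {
                Metric = new Metric
                {
                    Namespace = "AWS/ELB",
                    MetricName = "HealthyHostCount",
                    Dimensions = new List<Dimension>
                    {
                        new Dimension { Name = "LoadBalancerName", Value = "MyLoadBalancerName" }
                    }
                },
                Period = 300,
                Stat = "Sum"
                // Unit deliberately omitted so CloudWatch matches any unit.
            }
        }
    }
};
GetMetricDataResponse response = client.GetMetricData(request);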

How to iterate over S3 file keys via CompletableFuture in AWS SDK 2.0?

Consider an example using the sync client and the old AWS SDK:
public void syncIterateObjects() {
    AmazonS3 s3Client = null; // assume an initialized client
    String marker = null;
    do {
        ObjectListing objects = s3Client.listObjects(
            new ListObjectsRequest()
                .withBucketName("bucket")
                .withPrefix("prefix")
                .withMarker(marker)
                .withDelimiter("/")
                .withMaxKeys(100)
        );
        marker = objects.getNextMarker();
    } while (marker != null);
}
Everything is clear; the do/while loop does the work. Now consider the async example with AWS SDK 2.0:
public void asyncIterateObjects() {
    S3AsyncClient client = S3AsyncClient.builder().build();
    final CompletableFuture<ListObjectsV2Response> response = client.listObjectsV2(ListObjectsV2Request.builder()
            .bucket("bucket")
            .prefix("prefix")
            .delimiter("/")
            .build())
        .thenApply(Function.identity());
    // what to do next ???
}
OK, I got a CompletableFuture, but how do I run a loop that passes the marker (nextContinuationToken in AWS SDK 2.0) from the previous future to the next one?
You have only one future; notice the type is a future of a single list response, not of all the objects. Now you have to decide whether to get the future right away or apply further transformations to it before getting it. Once you have a response, you can apply the same approach you used before with the while loop: check whether the listing is truncated and, if so, issue the next request with the continuation token from the previous response (see the sketch below).
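To make that concrete, here is a minimal sketch; the method and parameter names are mine, not the SDK's. It chains one listObjectsV2 call per page with thenCompose, feeding nextContinuationToken from each response into the next request:

import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.*;

// Recursively chains pages: each response's continuation token feeds the next request.
static CompletableFuture<Void> listAllKeys(S3AsyncClient client, String bucket,
                                           String prefix, String token,
                                           Consumer<S3Object> onObject) {
    ListObjectsV2Request.Builder builder = ListObjectsV2Request.builder()
            .bucket(bucket)
            .prefix(prefix)
            .delimiter("/")
            .maxKeys(100);
    if (token != null) {
        builder.continuationToken(token);
    }
    return client.listObjectsV2(builder.build()).thenCompose(response -> {
        response.contents().forEach(onObject);
        if (Boolean.TRUE.equals(response.isTruncated())) {
            return listAllKeys(client, bucket, prefix,
                    response.nextContinuationToken(), onObject);
        }
        return CompletableFuture.completedFuture(null);
    });
}

Note that SDK 2.x also ships a built-in paginator, client.listObjectsV2Paginator(request), which performs this token chaining for you and exposes the pages as a reactive publisher.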