How to retrieve the list of encoded files and paths after a completed job in MediaConvert?

As stated in the title, nothing in their API seems to provide a list of encoded files after a job is complete, which is crucial in the case of HLS encoding since I need to move them from S3 to another cloud provider.

MediaConvert emits CloudWatch Events [1] for job status changes. You can implement this workflow by capturing jobs that reach the COMPLETE status and triggering a Lambda function to gather the required S3 paths. The COMPLETE CloudWatch event provides playlistFilePaths and outputFilePaths, which contain the S3 paths of your main and variant playlists.
CloudWatch event pattern to capture all completed jobs:
{
    "source": [
        "aws.mediaconvert"
    ],
    "detail": {
        "status": [
            "COMPLETE"
        ]
    }
}
An example of the CloudWatch event payload can be found in the documentation [1].
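A minimal sketch of such a Lambda handler is below. It assumes the documented COMPLETE event shape, with playlistFilePaths and outputFilePaths nested under detail.outputGroupDetails; verify the field names against a real event from your account before relying on them.

# Sketch only: collect the S3 URIs from a MediaConvert COMPLETE event.
# Field names follow the documented payload (detail.outputGroupDetails);
# check them against an actual event before relying on them.
def lambda_handler(event, context):
    paths = []
    for group in event["detail"].get("outputGroupDetails", []):
        # HLS output groups list the main/variant playlists here
        paths.extend(group.get("playlistFilePaths", []))
        for output in group.get("outputDetails", []):
            # Individual rendition/segment files
            paths.extend(output.get("outputFilePaths", []))
    print(paths)  # s3://bucket/key URIs, ready for your copy step
    return paths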
== Resources ==
[1] https://docs.aws.amazon.com/mediaconvert/latest/ug/apple-hls-group.html

Related

google cloud platform -- creating alert policy -- how to specify message variable in alerting documentation markdown?

So I've created a logging alert policy on google cloud that monitors the project's logs and sends an alert if it finds a log that matches a certain query. This is all good and fine, but whenever it does send an email alert, it's barebones. I am unable to include anything useful in the email alert such as the actual message, the user must instead click on "View incident" and go to the specified timeframe of when the alert happened.
Is there no way to include the message? As far as I can tell from the GCP doc Using Markdown and variables in documentation templates, there isn't.
I'm only really able to use ${resource.label.x} which isn't really all that useful because it already includes most of that stuff by default in the alert.
Could I have something like ${jsonPayload.message}? It didn't work when I tried it.
Probably (!) not.
To be clear, the alerting policies track metrics (not logs) and you've created a log-based metric that you're using as the basis for an alert.
There's information loss between the underlying log (that contains e.g. jsonPayload) and the metric that's produced from it (which probably does not). You can create log-based metric labels using expressions that include the underlying log entry fields.
However, per the example in Google's docs, you'd want to consider a limited (enum) type for these values (e.g. HTTP status although that may be too broad too) rather than a potentially infinite jsonPayload.
It is possible. Suppose you need to pass the "jsonPayload.message" field from your GCP log into the documentation section of your policy. You need to use the "label_extractors" feature to extract your log message.
I will share a policy-creation template wherein "jsonPayload.message" is passed into the documentation section of the policy.
policy_json = {
    "display_name": "<policy_name>",
    "documentation": {
        "content": "I have extracted the log message: ${log.extracted_label.msg}",
        "mime_type": "text/markdown"
    },
    "user_labels": {},
    "conditions": [
        {
            "display_name": "<condition_name>",
            "condition_matched_log": {
                "filter": "<filter_condition>",
                "label_extractors": {
                    "msg": "EXTRACT(jsonPayload.message)"
                }
            }
        }
    ],
    "alert_strategy": {
        "notification_rate_limit": {
            "period": "300s"
        },
        "auto_close": "604800s"
    },
    "combiner": "OR",
    "enabled": True,
    "notification_channels": [
        "<notification_channel>"
    ]
}
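If you are creating the policy programmatically, a rough sketch using the google-cloud-monitoring client library is below; <project_id> and the placeholders inside policy_json are yours to fill in, and this sketch is an assumption about how you create the policy, not part of the original answer.

# Sketch: create the alert policy above with the google-cloud-monitoring client.
# Note: the duration strings ("300s", "604800s") follow the REST/JSON form;
# depending on the client version you may need datetime.timedelta values instead.
from google.cloud import monitoring_v3

client = monitoring_v3.AlertPolicyServiceClient()
policy = monitoring_v3.AlertPolicy(policy_json)  # build the message from the dict
created = client.create_alert_policy(
    name="projects/<project_id>",
    alert_policy=policy,
)
print(created.name)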

AWS Eventbridge: scheduling a CodeBuild job with environment variable overrides

When I launch an AWS CodeBuild project from the web interface, I can choose "Start Build" to start the build project with its normal configuration. Alternatively I can choose "Start build with overrides", which lets me specify, amongst others, custom environment variables for the build job.
From AWS EventBridge (events -> Rules -> Create rule), I can create a scheduled event to trigger the codebuild job, and this works. How though in EventBridge do I specify environment variable overrides for a scheduled CodeBuild job?
I presume it's possible somehow by using "additional settings" -> "Configure target input", which allows specification and templating of event JSON. I'm not sure though how to work out, beyond blind trial and error, what this JSON should look like (to override environment variables in my case). In other words, where do I find the JSON spec for events sent to CodeBuild?
There are a number of similar questions here: e.g. AWS EventBridge scheduled events with custom details? and AWS Cloudwatch (EventBridge) Event Rule for AWS Batch with Environment Variables, but I can't find the specifics for CodeBuild jobs. I've tried the CDK docs at e.g. https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_events_targets.CodeBuildProjectProps.html , but am little wiser. I've also tried capturing the events output by EventBridge, to see what the event WITHOUT overrides looks like, but have not managed. Submitting the below (and a few variations: e.g. as "detail") as an "input constant" triggers the job, but the environment variables do not take effect:
{
    "ContainerOverrides": {
        "Environment": [{
            "Name": "SOME_VAR",
            "Value": "override value"
        }]
    }
}
There is also the CodeBuild API reference at https://docs.aws.amazon.com/codebuild/latest/APIReference/API_StartBuild.html#API_StartBuild_RequestSyntax. EDIT: this seems to be the correct reference (as per my answer below).
The rule target's event input template should match the structure of the CodeBuild API StartBuild action input. In the StartBuild action, environment variable overrides have a key of "environmentVariablesOverride" and value of an array of EnvironmentVariable objects.
Here is a sample target input transformer with one constant env var and another whose value is taken from the event payload's detail-type:
Input path:
{ "detail-type": "$.detail-type" }
Input template:
{"environmentVariablesOverride": [
{"name":"MY_VAR","type":"PLAINTEXT","value":"foo"},
{"name":"MY_DYNAMIC_VAR","type":"PLAINTEXT","value":<detail-type>}]
}
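For reference, a sketch of attaching that transformer to the rule target with boto3 follows; the rule name, project ARN and role ARN are placeholders, not values from the question.

# Sketch: wire the CodeBuild project to the scheduled rule with the
# input transformer above (placeholders throughout).
import boto3

events = boto3.client("events")
events.put_targets(
    Rule="my-scheduled-rule",
    Targets=[{
        "Id": "codebuild-target",
        "Arn": "arn:aws:codebuild:eu-west-1:111122223333:project/my-project",
        "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-codebuild-role",
        "InputTransformer": {
            "InputPathsMap": {"detail-type": "$.detail-type"},
            "InputTemplate": (
                '{"environmentVariablesOverride": ['
                '{"name":"MY_VAR","type":"PLAINTEXT","value":"foo"},'
                '{"name":"MY_DYNAMIC_VAR","type":"PLAINTEXT","value":<detail-type>}]}'
            ),
        },
    }],
)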
I got this to work using an "input constant" like this:
{
    "environmentVariablesOverride": [{
        "name": "SOME_VAR",
        "type": "PLAINTEXT",
        "value": "override value"
    }]
}
In other words, you can ignore the fields in the sample events in EventBridge, and the overrides do not need to be specified in a "detail" field.
I used the CodeBuild "StartBuild" API docs at https://docs.aws.amazon.com/codebuild/latest/APIReference/API_StartBuild.html#API_StartBuild_RequestSyntax to find this format. I would presume (but have not tested) that other fields shown there would work similarly (and that the API reference for other services would work similarly when using EventBridge: can anyone confirm?).
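The same constant input can be set with boto3; a sketch with placeholder names and ARNs:

# Sketch: the "input constant" variant via boto3 (placeholders throughout).
import json
import boto3

events = boto3.client("events")
events.put_targets(
    Rule="my-scheduled-rule",
    Targets=[{
        "Id": "codebuild-target",
        "Arn": "arn:aws:codebuild:eu-west-1:111122223333:project/my-project",
        "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-codebuild-role",
        # Constant JSON passed straight through to StartBuild
        "Input": json.dumps({
            "environmentVariablesOverride": [
                {"name": "SOME_VAR", "type": "PLAINTEXT", "value": "override value"}
            ]
        }),
    }],
)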

Using CloudWatch Event : How to Pass JSON Object to CodeBuild as an Environment Variable

Summary: I can't specify a JSON object using CloudWatch target Input Transformer, in order to pass the object contents as an environment variable to a CodeBuild project.
Background:
I trigger an AWS CodeBuild job when an S3 bucket receives any new object. I have enabled CloudTrail for S3 operations so that I can use a CloudWatch rule that has my S3 bucket as an Event Source, with the CodeBuild project as a Target.
If I set up the 'Configure input' part of the Target, using Input Transformer, I can get single 'primitive' values from the event using the format below:
Input path textbox:
{"zip_file":"$.detail.requestParameters.key"}
Input template textbox:
{"environmentVariablesOverride": [ {"name":"ZIP_FILE", "value":<zip_file>}]}
And this works fine if I use 'simple' single strings.
However, for example, if I wish to obtain the entire 'resources' key, which is a JSON object, I need to have knowledge of each of the keys within, and the object structure, and manually recreate the structure for each key/value pair.
For example, the resources element in the Event is:
"resources": [
{
"type": "AWS::S3::Object",
"ARN": "arn:aws:s3:::mybucket/myfile.zip"
},
{
"accountId": "1122334455667799",
"type": "AWS::S3::Bucket",
"ARN": "arn:aws:s3:::mybucket"
}
],
I want the code in the buildspec in CodeBuild to do the heavy lifting and parse the JSON data.
If I specify in the input path textbox:
{"zip_file":"$.detail.resources"}
Then the CodeBuild project never gets triggered.
Is there a way to get the entire JSON object, identified by a specific key, as an environment variable?
Check this: CodeBuild targets support all the parameters allowed by the StartBuild API. You need to use environmentVariablesOverride in your JSON string.
{"environmentVariablesOverride": [ {"name":"ZIPFILE", "value":<zip_file>}]}
Please avoid using '_' in the environment variable name.
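If the object does arrive as a JSON string in the variable, parsing it inside the buildspec is straightforward; a sketch below, assuming the variable is the ZIPFILE from the answer above and holds the resources array:

# Sketch: let the build do the heavy lifting and parse the JSON held in an
# environment variable. The ZIPFILE name follows the answer above; adjust it
# to whatever your input template actually sets.
import json
import os

resources = json.loads(os.environ.get("ZIPFILE", "[]"))
for item in resources:
    print(item.get("type"), item.get("ARN"))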

Filtering Bitbucket webhooks for AWS CodePipeline

I'm trying to set up JSONPath filtering on incoming Bitbucket webhooks so the pipeline will only start when a push is made to the branch the pipeline is watching. The relevant parts of the webhook request body are:
{
    "push": {
        "changes": [
            {
                "new": {
                    "name": "repo_name"
                }
            }
        ]
    }
}
which I filter with $.push.changes[0].new.name and check to see if the filter result matches {Branch}. I've also tried explicitly setting the branch name rather than letting CodePipeline resolve it.
I've confirmed this correctly filters the request with jsonpath.com, but using this no executions are triggered. I am able to get executions to trigger by using a JSONPath that just matches the repository name, but this is not ideal as executions will start when pushes are made to other branches too. What am I doing wrong?
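For reference, the setup described corresponds to a webhook definition along these lines (a boto3 sketch that reproduces the configuration rather than fixing it; pipeline, action and authentication details are placeholders):

# Sketch of the webhook filter described in the question, via put_webhook.
# Names and authentication are placeholders; this mirrors the setup, it is
# not a fix for the non-triggering executions.
import boto3

codepipeline = boto3.client("codepipeline")
codepipeline.put_webhook(
    webhook={
        "name": "bitbucket-push-webhook",
        "targetPipeline": "my-pipeline",
        "targetAction": "Source",
        "filters": [{
            "jsonPath": "$.push.changes[0].new.name",
            "matchEquals": "{Branch}",
        }],
        "authentication": "UNAUTHENTICATED",
        "authenticationConfiguration": {},
    }
)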

AWS S3 - trigger on object created, function gets invoked continuously

I've created a Lambda function to read a file (input.csv) from an S3 bucket, make some changes to it, and save that file (output.csv) in the same bucket.
Note: I have not deleted the input.csv file in the bucket.
The Lambda function is triggered by the ObjectCreated (All) event. But the function is called continuously, an infinite number of times, as the input file is present in the bucket.
Is it supposed to happen like this, or is it a fault?
This is your fault :)
You have set up a recursive trigger - each time you update the file, you're actually writing a new copy of it, which triggers the event, etc.
This was a key warning in the initial demo when Lambda was released (an image is uploaded to S3, lambda is triggered to create a thumbnail - if that thumbnail is written to the same bucket, it will trigger again, etc)
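One common guard (an assumption on my part, not something stated in the question) is to have the handler bail out when the object it was invoked for is one of its own outputs; a sketch:

# Sketch: break the loop by ignoring objects the function wrote itself.
# The output.csv naming convention is taken from the question; the process()
# helper is hypothetical.
def lambda_handler(event, context):
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        if key.endswith("output.csv"):
            # This is our own output; skip it so we don't recurse.
            continue
        process(key)  # your existing CSV transformation

def process(key):
    ...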
As @chris has pointed out, you've triggered a recursive loop by having events triggered by an S3 PUT event, which in turn performs another PUT, calling the trigger again and again.
To avoid this problem, the simplest method is to use two S3 buckets - one for files to be placed prior to processing, and another for files to be placed post-processing.
If you don't want to use two S3 buckets, you can modify your trigger condition to include FilterRules (docs). This allows you to control the trigger such that it would only get executed when an object is placed in a certain "folder" in S3 (of course folders don't really exist in S3, they're just key prefixes).
Here's an example:
{
    "LambdaFunctionConfigurations": [
        {
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {
                            "Name": "Prefix",
                            "Value": "queue/"
                        }
                    ]
                }
            },
            "LambdaFunctionArn": "<lambda_func_arn>",
            "Id": "<lambda_func_name>:app.lambda_handler",
            "Events": [
                "s3:ObjectCreated:*"
            ]
        }
    ]
}
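A sketch of applying that configuration with boto3 follows; the bucket name and ARN are placeholders, and the Lambda must already permit s3.amazonaws.com to invoke it.

# Sketch: apply the notification configuration above (placeholders throughout).
# The Lambda's resource policy must already allow invocation by s3.amazonaws.com.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "Id": "csv-processor:app.lambda_handler",
            "LambdaFunctionArn": "arn:aws:lambda:eu-west-1:111122223333:function:csv-processor",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "prefix", "Value": "queue/"}  # boto3 docs use lowercase "prefix"
                    ]
                }
            },
        }]
    },
)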