EC2Config CloudWatch logs streaming not working

I was hoping someone could help with this: I'm trying to stream logs from a Windows Server 2012 instance with the EC2Config service installed.
I have followed this documentation:
https://aws.amazon.com/blogs/devops/using-cloudwatch-logs-with-amazon-ec2-running-microsoft-windows-server/
Unfortunately, nothing is streaming to CloudWatch Logs.
Here is the JSON I'm using:
{
    "EngineConfiguration": {
        "PollInterval": "00:00:15",
        "Components": [
            {
                "Id": "ApplicationEventLog",
                "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "LogName": "Application",
                    "Levels": "1"
                }
            },
            {
                "Id": "SystemEventLog",
                "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "LogName": "System",
                    "Levels": "7"
                }
            },
            {
                "Id": "SecurityEventLog",
                "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "LogName": "Security",
                    "Levels": "7"
                }
            },
            {
                "Id": "ETW",
                "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "LogName": "Microsoft-Windows-WinINet/Analytic",
                    "Levels": "7"
                }
            },
            {
                "Id": "IISLog",
                "FullName": "AWS.EC2.Windows.CloudWatch.IISLogOutput,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "LogDirectoryPath": "C:\\inetpub\\logs\\LogFiles\\W3SVC1",
                    "AccessKey": "",
                    "SecretKey": "",
                    "Region": "eu-west-1",
                    "LogGroup": "Web-Logs",
                    "LogStream": "IIStest"
                }
            },
            {
                "Id": "CustomLogs",
                "FullName": "AWS.EC2.Windows.CloudWatch.CustomLog.CustomLogInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "LogDirectoryPath": "C:\\CustomLogs\\",
                    "TimestampFormat": "MM/dd/yyyy HH:mm:ss",
                    "Encoding": "UTF-8",
                    "Filter": "",
                    "CultureName": "en-US",
                    "TimeZoneKind": "Local"
                }
            },
            {
                "Id": "PerformanceCounter",
                "FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "CategoryName": "Memory",
                    "CounterName": "Available MBytes",
                    "InstanceName": "",
                    "MetricName": "Memory",
                    "Unit": "Megabytes",
                    "DimensionName": "",
                    "DimensionValue": ""
                }
            },
            {
                "Id": "CloudWatchLogs",
                "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "AccessKey": "",
                    "SecretKey": "",
                    "Region": "eu-west-1",
                    "LogGroup": "Win2Test",
                    "LogStream": "logging-test"
                }
            },
            {
                "Id": "CloudWatch",
                "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "AccessKey": "",
                    "SecretKey": "",
                    "Region": "eu-west-1",
                    "NameSpace": "Windows/Default"
                }
            }
        ],
        "Flows": {
            "Flows": [
                "(ApplicationEventLog,SystemEventLog),CloudWatchLogs",
                "IISLog"
            ]
        }
    }
}
At the moment I only want to stream the IIS logs; from my understanding, the CloudWatch log group and log stream should be created automatically.

The issue is in the Flows section: the second component of the flow definition is missing.
Instead of
"Flows": {
    "Flows": [
        "(ApplicationEventLog,SystemEventLog),CloudWatchLogs",
        "IISLog"
    ]
}
it should be
[
    "(ApplicationEventLog,SystemEventLog),CloudWatchLogs",
    "IISLog,CloudWatchLogs"
]
The Flows section defines the source and target pairs for the components declared in the Components section: the first part of each entry says what to collect and how, the second says where to send it.
For example, consider the following snippet. Here ApplicationEventLog and SystemEventLog will be sent to CloudWatchLogs (this refers to the component with "Id": "CloudWatchLogs" defined in Components, not to AWS CloudWatch itself).
The second line defines a second flow, i.e. PerformanceCounter is sent to CloudWatch1:
"Flows": {
    "Flows": [
        "(ApplicationEventLog,SystemEventLog),CloudWatchLogs",
        "PerformanceCounter,CloudWatch1"
    ]
}
Hope this explains how the issue was resolved.
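As a sketch of how the asker's config could look once fixed (assuming the IIS logs should land in their own "Web-Logs" group; the CloudWatchLogsIIS Id below is a made-up name, and the output FullName is copied from the question's own CloudWatchLogs component), the delivery settings would move out of the IISLog input and into a second output component:
{
    "Id": "CloudWatchLogsIIS",
    "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
    "Parameters": {
        "AccessKey": "",
        "SecretKey": "",
        "Region": "eu-west-1",
        "LogGroup": "Web-Logs",
        "LogStream": "IIStest"
    }
}
The Flows section then pairs each input with its output:
"Flows": {
    "Flows": [
        "(ApplicationEventLog,SystemEventLog),CloudWatchLogs",
        "IISLog,CloudWatchLogsIIS"
    ]
}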

Looks like I made a few mistakes in the JSON file itself, specifically the Flows area.
Got this working now :)

Related

AWS Lambda VS Code launch.json

Recently AWS introduced launch-configuration support for SAM debugging in the AWS Toolkit for VS Code.
Ref: https://aws.amazon.com/blogs/developer/introducing-launch-configurations-support-for-sam-debugging-in-the-aws-toolkit-for-vs-code/
It means we can't use the templates.json file any more; instead we need to use launch.json to send an event to the Lambda.
I want to send a test event to a Lambda function (an SQS message).
Before launch configurations were introduced, templates.json had it like this (and it worked fine):
"templates": {
"xxxxxxxx/template.yaml": {
"handlers": {
"xxxxxxxxx.lambdaHandler": {
"event": {
"Records": [
{
"messageId": "xxxxxxxxxxxxxxxx",
"receiptHandle": "xxxxxxxxxxxxxxxx",
"body": "{\"operation\": \"publish\", \"data\": { \"__typename\": \"xxxxxxxxxxxxxxxx\", \"id\": \"xxxxxxxxxxxxxxxx\" }}",
"attributes": {
"ApproximateReceiveCount": "1",
"SentTimestamp": "xxxxxxxxxxxxxxxx",
"SequenceNumber": "xxxxxxxxxxxxxxxx",
"MessageGroupId": "xxxxxxxxxxxxxxxx",
"SenderId": "xxxxxxxxxxxxxxxx:LambdaFunctionTest",
"MessageDeduplicationId": "xxxxxxxxxxxxxxxx",
"ApproximateFirstReceiveTimestamp": "xxxxxxxxxxxxxxxx"
},
"messageAttributes": {
"environment": {
"DataType": "String",
"stringValue": "Dev"
}},
"md5OfBody": "xxxxxxxxxxxxxxxx",
"eventSource": "aws:sqs",
"eventSourceARN": "arn:aws:sqs:us-east-1:xxxxxxxxxxxxxxxx:xxx.fifo",
"awsRegion": "us-east-1"
}
]
},
"environmentVariables": {}
}
............
But in launch.json I pasted the Records in the following way and it is not accepting it (see also the attached screenshot).
{
    "configurations": [
        {
            "type": "aws-sam",
            "request": "direct-invoke",
            "name": "xxxxxxxx",
            "invokeTarget": {
                "target": "code",
                "projectRoot": "xxxxxxxx",
                "lambdaHandler": "xxxxxxxx.lambdaHandler"
            },
            "lambda": {
                "runtime": "nodejs12.x",
                "payload": {
                    "json": {
                        "Records": [
                            {
                                "messageId": "xxxxxxxxxxxxxxxx",
                                "receiptHandle": "xxxxxxxxxxxxxxxx",
                                "body": "{\"operation\": \"publish\", \"data\": { \"__typename\": \"xxxxxxxxxxxxxxxx\", \"id\": \"xxxxxxxxxxxxxxxx\" }}",
                                "attributes": {
                                    "ApproximateReceiveCount": "1",
                                    "SentTimestamp": "xxxxxxxxxxxxxxxx",
                                    "SequenceNumber": "xxxxxxxxxxxxxxxx",
                                    "MessageGroupId": "xxxxxxxxxxxxxxxx",
                                    "SenderId": "xxxxxxxxxxxxxxxx:LambdaFunctionTest",
                                    "MessageDeduplicationId": "xxxxxxxxxxxxxxxx",
                                    "ApproximateFirstReceiveTimestamp": "xxxxxxxxxxxxxxxx"
                                },
                                "messageAttributes": {
                                    "environment": {
                                        "DataType": "String",
                                        "stringValue": "Dev"
                                    }
                                },
                                "md5OfBody": "xxxxxxxxxxxxxxxx",
                                "eventSource": "aws:sqs",
                                "eventSourceARN": "arn:aws:sqs:us-east-1:xxxxxxxxxxxxxxxx:xxx.fifo",
                                "awsRegion": "us-east-1"
                            }
                        ]
                    }
                }
            }
        }
    ]
}
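A side note on the payload shape (a sketch only, not verified against every Toolkit version): the lambda.payload object in an aws-sam debug configuration reportedly also accepts an external file reference via path instead of an inline json value, which keeps a large Records document out of launch.json. The file name events/sqs-event.json below is hypothetical:
"lambda": {
    "runtime": "nodejs12.x",
    "payload": {
        "path": "events/sqs-event.json"
    }
}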

Taxonomy of event-driven Lambda messages?

I'm building event-driven AWS stacks with Lambda + API Gateway + SQS + SNS + S3 + DynamoDB.
One of my constant frustrations is that, if you bind any of the above to Lambda (either through event notifications or event source mappings), the formats of the event messages received by the Lambda are completely different: a message sent by S3 is completely different from one sent by SQS, which in turn is completely different from one sent by DynamoDB, and so on.
Normally I have to set up a CloudFormation stack with an event source + event source mapping + Lambda, then push a message onto the event source to see what message actually results. What a giant pain.
Is there not a single combined resource out there that lists the schema formats of the different event messages? Hoping someone can point me in the right direction.
The Lambda console provides some example events under Configure test event. Here are the examples from the console for the services you mentioned.
API Gateway (AWS proxy)
{
    "body": "eyJ0ZXN0IjoiYm9keSJ9",
    "resource": "/{proxy+}",
    "path": "/path/to/resource",
    "httpMethod": "POST",
    "isBase64Encoded": true,
    "queryStringParameters": {
        "foo": "bar"
    },
    "multiValueQueryStringParameters": {
        "foo": [
            "bar"
        ]
    },
    "pathParameters": {
        "proxy": "/path/to/resource"
    },
    "stageVariables": {
        "baz": "qux"
    },
    "headers": {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
        "Accept-Encoding": "gzip, deflate, sdch",
        "Accept-Language": "en-US,en;q=0.8",
        "Cache-Control": "max-age=0",
        "CloudFront-Forwarded-Proto": "https",
        "CloudFront-Is-Desktop-Viewer": "true",
        "CloudFront-Is-Mobile-Viewer": "false",
        "CloudFront-Is-SmartTV-Viewer": "false",
        "CloudFront-Is-Tablet-Viewer": "false",
        "CloudFront-Viewer-Country": "US",
        "Host": "1234567890.execute-api.us-east-1.amazonaws.com",
        "Upgrade-Insecure-Requests": "1",
        "User-Agent": "Custom User Agent String",
        "Via": "1.1 08f323deadbeefa7af34d5feb414ce27.cloudfront.net (CloudFront)",
        "X-Amz-Cf-Id": "cDehVQoZnx43VYQb9j2-nvCh-9z396Uhbp027Y2JvkCPNLmGJHqlaA==",
        "X-Forwarded-For": "127.0.0.1, 127.0.0.2",
        "X-Forwarded-Port": "443",
        "X-Forwarded-Proto": "https"
    },
    "multiValueHeaders": {
        "Accept": [
            "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
        ],
        "Accept-Encoding": [
            "gzip, deflate, sdch"
        ],
        "Accept-Language": [
            "en-US,en;q=0.8"
        ],
        "Cache-Control": [
            "max-age=0"
        ],
        "CloudFront-Forwarded-Proto": [
            "https"
        ],
        "CloudFront-Is-Desktop-Viewer": [
            "true"
        ],
        "CloudFront-Is-Mobile-Viewer": [
            "false"
        ],
        "CloudFront-Is-SmartTV-Viewer": [
            "false"
        ],
        "CloudFront-Is-Tablet-Viewer": [
            "false"
        ],
        "CloudFront-Viewer-Country": [
            "US"
        ],
        "Host": [
            "0123456789.execute-api.us-east-1.amazonaws.com"
        ],
        "Upgrade-Insecure-Requests": [
            "1"
        ],
        "User-Agent": [
            "Custom User Agent String"
        ],
        "Via": [
            "1.1 08f323deadbeefa7af34d5feb414ce27.cloudfront.net (CloudFront)"
        ],
        "X-Amz-Cf-Id": [
            "cDehVQoZnx43VYQb9j2-nvCh-9z396Uhbp027Y2JvkCPNLmGJHqlaA=="
        ],
        "X-Forwarded-For": [
            "127.0.0.1, 127.0.0.2"
        ],
        "X-Forwarded-Port": [
            "443"
        ],
        "X-Forwarded-Proto": [
            "https"
        ]
    },
    "requestContext": {
        "accountId": "123456789012",
        "resourceId": "123456",
        "stage": "prod",
        "requestId": "c6af9ac6-7b61-11e6-9a41-93e8deadbeef",
        "requestTime": "09/Apr/2015:12:34:56 +0000",
        "requestTimeEpoch": 1428582896000,
        "identity": {
            "cognitoIdentityPoolId": null,
            "accountId": null,
            "cognitoIdentityId": null,
            "caller": null,
            "accessKey": null,
            "sourceIp": "127.0.0.1",
            "cognitoAuthenticationType": null,
            "cognitoAuthenticationProvider": null,
            "userArn": null,
            "userAgent": "Custom User Agent String",
            "user": null
        },
        "path": "/prod/path/to/resource",
        "resourcePath": "/{proxy+}",
        "httpMethod": "POST",
        "apiId": "1234567890",
        "protocol": "HTTP/1.1"
    }
}
SQS
{
    "Records": [
        {
            "messageId": "19dd0b57-b21e-4ac1-bd88-01bbb068cb78",
            "receiptHandle": "MessageReceiptHandle",
            "body": "Hello from SQS!",
            "attributes": {
                "ApproximateReceiveCount": "1",
                "SentTimestamp": "1523232000000",
                "SenderId": "123456789012",
                "ApproximateFirstReceiveTimestamp": "1523232000001"
            },
            "messageAttributes": {},
            "md5OfBody": "7b270e59b47ff90a553787216d55d91d",
            "eventSource": "aws:sqs",
            "eventSourceARN": "arn:aws:sqs:us-east-1:123456789012:MyQueue",
            "awsRegion": "us-east-1"
        }
    ]
}
SNS
{
    "Records": [
        {
            "EventSource": "aws:sns",
            "EventVersion": "1.0",
            "EventSubscriptionArn": "arn:aws:sns:us-east-1:{{{accountId}}}:ExampleTopic",
            "Sns": {
                "Type": "Notification",
                "MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:ExampleTopic",
                "Subject": "example subject",
                "Message": "example message",
                "Timestamp": "1970-01-01T00:00:00.000Z",
                "SignatureVersion": "1",
                "Signature": "EXAMPLE",
                "SigningCertUrl": "EXAMPLE",
                "UnsubscribeUrl": "EXAMPLE",
                "MessageAttributes": {
                    "Test": {
                        "Type": "String",
                        "Value": "TestString"
                    },
                    "TestBinary": {
                        "Type": "Binary",
                        "Value": "TestBinary"
                    }
                }
            }
        }
    ]
}
S3 (put)
{
    "Records": [
        {
            "eventVersion": "2.0",
            "eventSource": "aws:s3",
            "awsRegion": "us-east-1",
            "eventTime": "1970-01-01T00:00:00.000Z",
            "eventName": "ObjectCreated:Put",
            "userIdentity": {
                "principalId": "EXAMPLE"
            },
            "requestParameters": {
                "sourceIPAddress": "127.0.0.1"
            },
            "responseElements": {
                "x-amz-request-id": "EXAMPLE123456789",
                "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
            },
            "s3": {
                "s3SchemaVersion": "1.0",
                "configurationId": "testConfigRule",
                "bucket": {
                    "name": "example-bucket",
                    "ownerIdentity": {
                        "principalId": "EXAMPLE"
                    },
                    "arn": "arn:aws:s3:::example-bucket"
                },
                "object": {
                    "key": "test/key",
                    "size": 1024,
                    "eTag": "0123456789abcdef0123456789abcdef",
                    "sequencer": "0A1B2C3D4E5F678901"
                }
            }
        }
    ]
}
DynamoDB
{
    "Records": [
        {
            "eventID": "c4ca4238a0b923820dcc509a6f75849b",
            "eventName": "INSERT",
            "eventVersion": "1.1",
            "eventSource": "aws:dynamodb",
            "awsRegion": "us-east-1",
            "dynamodb": {
                "Keys": {
                    "Id": {
                        "N": "101"
                    }
                },
                "NewImage": {
                    "Message": {
                        "S": "New item!"
                    },
                    "Id": {
                        "N": "101"
                    }
                },
                "ApproximateCreationDateTime": 1428537600,
                "SequenceNumber": "4421584500000000017450439091",
                "SizeBytes": 26,
                "StreamViewType": "NEW_AND_OLD_IMAGES"
            },
            "eventSourceARN": "arn:aws:dynamodb:us-east-1:123456789012:table/ExampleTableWithStream/stream/2015-06-27T00:48:05.899"
        },
        {
            "eventID": "c81e728d9d4c2f636f067f89cc14862c",
            "eventName": "MODIFY",
            "eventVersion": "1.1",
            "eventSource": "aws:dynamodb",
            "awsRegion": "us-east-1",
            "dynamodb": {
                "Keys": {
                    "Id": {
                        "N": "101"
                    }
                },
                "NewImage": {
                    "Message": {
                        "S": "This item has changed"
                    },
                    "Id": {
                        "N": "101"
                    }
                },
                "OldImage": {
                    "Message": {
                        "S": "New item!"
                    },
                    "Id": {
                        "N": "101"
                    }
                },
                "ApproximateCreationDateTime": 1428537600,
                "SequenceNumber": "4421584500000000017450439092",
                "SizeBytes": 59,
                "StreamViewType": "NEW_AND_OLD_IMAGES"
            },
            "eventSourceARN": "arn:aws:dynamodb:us-east-1:123456789012:table/ExampleTableWithStream/stream/2015-06-27T00:48:05.899"
        },
        {
            "eventID": "eccbc87e4b5ce2fe28308fd9f2a7baf3",
            "eventName": "REMOVE",
            "eventVersion": "1.1",
            "eventSource": "aws:dynamodb",
            "awsRegion": "us-east-1",
            "dynamodb": {
                "Keys": {
                    "Id": {
                        "N": "101"
                    }
                },
                "OldImage": {
                    "Message": {
                        "S": "This item has changed"
                    },
                    "Id": {
                        "N": "101"
                    }
                },
                "ApproximateCreationDateTime": 1428537600,
                "SequenceNumber": "4421584500000000017450439093",
                "SizeBytes": 38,
                "StreamViewType": "NEW_AND_OLD_IMAGES"
            },
            "eventSourceARN": "arn:aws:dynamodb:us-east-1:123456789012:table/ExampleTableWithStream/stream/2015-06-27T00:48:05.899"
        }
    ]
}

Alexa smart home skill development: Device discovery not working

I am currently developing a smart home skill for my blinds; however, I am unable to discover the device. Is there a way for me to validate my Discovery response message? I'm thinking there is some logical error in the JSON.
I'm using a Lambda function to perform the requests to my API using node-fetch and async/await, so I have tagged all JS functions as async; this could be another potential cause of the issue. I don't get any errors in CloudWatch either.
This is the response my Lambda function is sending:
{
    "event": {
        "header": {
            "namespace": "Alexa.Discovery",
            "name": "Discover.Response",
            "payloadVersion": "3",
            "messageId": "0a58ace0-e6ab-47de-b6af-b600b5ab8a7a"
        },
        "payload": {
            "endpoints": [
                {
                    "endpointId": "com-tobisoft-rollos-1",
                    "manufacturerName": "tobisoft",
                    "description": "Office Blinds",
                    "friendlyName": "Office Blinds",
                    "displayCategories": [
                        "INTERIOR_BLIND"
                    ],
                    "capabilities": [
                        {
                            "type": "AlexaInterface",
                            "interface": "Alexa.RangeController",
                            "instance": "Blind.Lift",
                            "version": "3",
                            "properties": {
                                "supported": [
                                    {
                                        "name": "rangeValue"
                                    }
                                ],
                                "proactivelyReported": true,
                                "retrievable": true
                            },
                            "capabilityResources": {
                                "friendlyNames": [
                                    {
                                        "@type": "asset",
                                        "value": {
                                            "assetId": "Alexa.Setting.Opening"
                                        }
                                    }
                                ]
                            },
                            "configuration": {
                                "supportedRange": {
                                    "minimumValue": 0,
                                    "maximumValue": 100,
                                    "precision": 1
                                },
                                "unitOfMeasure": "Alexa.Unit.Percent"
                            },
                            "semantics": {
                                "actionMappings": [
                                    {
                                        "@type": "ActionsToDirective",
                                        "actions": [
                                            "Alexa.Actions.Close"
                                        ],
                                        "directive": {
                                            "name": "SetRangeValue",
                                            "payload": {
                                                "rangeValue": 100
                                            }
                                        }
                                    },
                                    {
                                        "@type": "ActionsToDirective",
                                        "actions": [
                                            "Alexa.Actions.Open"
                                        ],
                                        "directive": {
                                            "name": "SetRangeValue",
                                            "payload": {
                                                "rangeValue": 1
                                            }
                                        }
                                    },
                                    {
                                        "@type": "ActionsToDirective",
                                        "actions": [
                                            "Alexa.Actions.Lower"
                                        ],
                                        "directive": {
                                            "name": "AdjustRangeValue",
                                            "payload": {
                                                "rangeValueDelta": 10,
                                                "rangeValueDeltaDefault": false
                                            }
                                        }
                                    },
                                    {
                                        "@type": "ActionsToDirective",
                                        "actions": [
                                            "Alexa.Actions.Raise"
                                        ],
                                        "directive": {
                                            "name": "AdjustRangeValue",
                                            "payload": {
                                                "rangeValueDelta": -10,
                                                "rangeValueDeltaDefault": false
                                            }
                                        }
                                    }
                                ],
                                "stateMappings": [
                                    {
                                        "@type": "StatesToValue",
                                        "states": [
                                            "Alexa.States.Closed"
                                        ],
                                        "value": 100
                                    },
                                    {
                                        "@type": "StatesToRange",
                                        "states": [
                                            "Alexa.States.Open"
                                        ],
                                        "range": {
                                            "value": 0
                                        }
                                    }
                                ]
                            }
                        },
                        {
                            "type": "AlexaInterface",
                            "interface": "Alexa",
                            "version": "3"
                        }
                    ]
                }
            ]
        }
    }
}
Thanks for any help.
Is there a way for me to validate my Discovery response message?
Yes, you could use the Alexa Smart Home Message JSON Schema. The schema can be used for message validation during skill development; it validates Smart Home skills (except the Video Skills API).
This is the response my Lambda function is sending
I've validated your response following these steps; the result: no errors found, and the JSON validates against the schema. So there's probably something else going on. I suggest getting in touch with Alexa Developer Contact Us.

AWS.EC2.Windows.CloudWatch.json Not Logging Free Disk Space

On our instance (i-568a3f14) running Windows Server 2012 Standard we are trying to log the free disk space in CloudWatch. I've put the file AWS.EC2.Windows.CloudWatch.json in the directory C:\Program Files\Amazon\Ec2ConfigService\Settings.
The contents of the .json file are as follows:
{
    "IsEnabled": true,
    "EngineConfiguration": {
        "PollInterval": "00:00:05",
        "Components": [
            {
                "Id": "PerformanceCounterDisk",
                "FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "CategoryName": "LogicalDisk",
                    "CounterName": "% Free Space",
                    "InstanceName": "C:",
                    "MetricName": "FreeDiskPercentage",
                    "Unit": "Percent",
                    "DimensionName": "InstanceId",
                    "DimensionValue": "i-568a3f14"
                }
            },
            {
                "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch",
                "Id": "CloudWatch",
                "Parameters": {
                    "AccessKey": "",
                    "NameSpace": "Windows/Default",
                    "Region": "eu-west-1a",
                    "SecretKey": ""
                }
            }
        ],
        "Flows": {
            "Flows": [
                "(PerformanceCounterDisk),CloudWatch"
            ]
        }
    }
}
I've restarted the Ec2Config service, but we still aren't getting any data logged. What am I missing, or what else should I try?
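One detail that stands out in the config above (an observation, not a confirmed fix): "eu-west-1a" is an Availability Zone, whereas the Region parameter expects a region name such as "eu-west-1". A sketch of the output component with only that value changed:
{
    "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch",
    "Id": "CloudWatch",
    "Parameters": {
        "AccessKey": "",
        "NameSpace": "Windows/Default",
        "Region": "eu-west-1",
        "SecretKey": ""
    }
}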

AWS Data Pipeline stuck on Waiting For Runner

My goal is to copy a table in a PostgreSQL database running on AWS RDS to a .csv file on Amazon S3. For this I use AWS Data Pipeline, and I found the following tutorial; however, when I follow all the steps my pipeline is stuck at "WAITING FOR RUNNER" (see screenshot). The AWS documentation states:
ensure that you set a valid value for either the runsOn or workerGroup
fields for those tasks
However, the "runsOn" field is set. Any idea why this pipeline is stuck?
And here is my definition file:
{
    "objects": [
        {
            "output": {
                "ref": "DataNodeId_Z8iDO"
            },
            "input": {
                "ref": "DataNodeId_hEUzs"
            },
            "name": "DefaultCopyActivity01",
            "runsOn": {
                "ref": "ResourceId_oR8hY"
            },
            "id": "CopyActivityId_8zaDw",
            "type": "CopyActivity"
        },
        {
            "resourceRole": "DataPipelineDefaultResourceRole",
            "role": "DataPipelineDefaultRole",
            "name": "DefaultResource1",
            "id": "ResourceId_oR8hY",
            "type": "Ec2Resource",
            "terminateAfter": "1 Hour"
        },
        {
            "*password": "xxxxxxxxx",
            "name": "DefaultDatabase1",
            "id": "DatabaseId_BWxRr",
            "type": "RdsDatabase",
            "region": "eu-central-1",
            "rdsInstanceId": "aqueduct30v05.cgpnumwmfcqc.eu-central-1.rds.amazonaws.com",
            "username": "xxxx"
        },
        {
            "name": "DefaultDataFormat1",
            "id": "DataFormatId_wORsu",
            "type": "CSV"
        },
        {
            "database": {
                "ref": "DatabaseId_BWxRr"
            },
            "name": "DefaultDataNode2",
            "id": "DataNodeId_hEUzs",
            "type": "SqlDataNode",
            "table": "y2018m07d12_rh_ws_categorization_label_postgis_v01_v04",
            "selectQuery": "SELECT * FROM y2018m07d12_rh_ws_categorization_label_postgis_v01_v04 LIMIT 100"
        },
        {
            "failureAndRerunMode": "CASCADE",
            "resourceRole": "DataPipelineDefaultResourceRole",
            "role": "DataPipelineDefaultRole",
            "pipelineLogUri": "s3://rutgerhofste-data-pipeline/logs",
            "scheduleType": "ONDEMAND",
            "name": "Default",
            "id": "Default"
        },
        {
            "dataFormat": {
                "ref": "DataFormatId_wORsu"
            },
            "filePath": "s3://rutgerhofste-data-pipeline/test",
            "name": "DefaultDataNode1",
            "id": "DataNodeId_Z8iDO",
            "type": "S3DataNode"
        }
    ],
    "parameters": []
}
Usually "WAITING FOR RUNNER" state implies that it is waiting for a resource (such as an EMR cluster). You seem to have not set 'workGroup' field. It means that you have specified "What" to do, but have not specified "who" should do it.