I have the following JSONPath in the Parameters section of my task:
"foo.$": "$.MapResult[].Payload[].data"
I tested it in the AWS console's data flow simulator and it worked fine: it returned a list of values for the "data" key from the Payload list, as expected. But when I tried to deploy it, I got:
The value for the field 'foo.$' must be a valid JSONPath or a valid intrinsic function call (at /States/...-Task/Parameters)
OK, I sorted it out. It seems like a bug in Step Functions, probably, but anyway: even though it worked in the data flow simulator and is valid JSONPath, to make it work in a state machine you need to add a wildcard inside the brackets, so it should look like this:
"foo.$": "$.MapResult[*].Payload[*].data"
How can I get the last 30 minutes of AWS CloudWatch logs that were written to a specific log stream, using the AWS CLI?
Can you describe what you already tried yourself and what you ran into? Looking at the AWS CLI command reference, it seems that you should be able to run "aws logs get-log-events --log-group-name <name of the group> --log-stream-name <name of the stream> --start-time <timestamp>" to get a list of events starting at a given UNIX timestamp (in milliseconds); calculating that timestamp should be fairly trivial.
In addition, based on your comment: you'll need to look into the AWS concept of pagination. Most AWS API calls (which the CLI also makes for you) retrieve a size-limited set of data and return a token if there is more data present. You can then make a subsequent call passing that token, which tells the service to return data starting at that token. Repeat this process until you no longer get a token back, at which point you know you have iterated over the full dataset.
For this specific CLI command, there is a flag.
--next-token (string)
The token for the next set of items to return. (You received this token from a previous call.)
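As a rough sketch, the same loop in boto3 might look like this (the log group and stream names are placeholders; startTime is in milliseconds):

import time
import boto3  # assumes AWS credentials and region are configured

logs = boto3.client("logs")

# 30 minutes ago, as a UNIX timestamp in milliseconds.
start_time = int((time.time() - 30 * 60) * 1000)

kwargs = {
    "logGroupName": "my-log-group",    # placeholder
    "logStreamName": "my-log-stream",  # placeholder
    "startTime": start_time,
    "startFromHead": True,
}

# get_log_events returns the same token once the stream is exhausted,
# so loop until nextForwardToken stops changing.
token = None
while True:
    if token:
        kwargs["nextToken"] = token
    resp = logs.get_log_events(**kwargs)
    for event in resp["events"]:
        print(event["timestamp"], event["message"])
    if resp["nextForwardToken"] == token:
        break
    token = resp["nextForwardToken"]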
Hope this helps?
I'm trying to run a PUT request using Postman to change the retention rules of a specific build definition in Azure DevOps and update the daysToKeep value.
But I keep getting the error:
"The request specifies pipeline ID 1722 but the supplied pipeline has ID 0."
Any idea where I went wrong?
In order to change/update any parameter on the build definition, first run a GET request.
The result JSON output should be used as the body for the PUT request.
Use this body and change/update the relevant parameter(s) you need.
It is very important to increase the "revision" parameter, located in the root of the JSON output, by 1 (for example, if the current value is 97, the next run should use 98).
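As a minimal sketch of that GET-then-PUT flow in Python (the organization, project, definition ID, PAT, and the exact retention field are assumptions; adjust them to the JSON you actually get back):

import requests

org, project, definition_id = "my-org", "my-project", 1722  # placeholders
url = (f"https://dev.azure.com/{org}/{project}"
       f"/_apis/build/definitions/{definition_id}?api-version=6.0")
auth = ("", "MY_PERSONAL_ACCESS_TOKEN")  # PAT as the basic-auth password

# 1. GET the current definition; the response becomes the PUT body.
definition = requests.get(url, auth=auth).json()

# 2. Change the parameter(s) you need and bump the revision by 1.
definition["retentionRules"][0]["daysToKeep"] = 30  # assumed field path
definition["revision"] += 1

# 3. PUT the modified definition back.
resp = requests.put(url, json=definition, auth=auth)
resp.raise_for_status()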
Special thanks to Tinxuanna for directing me to this solution!
@ShaiO Please take a look at this link: List pipelines (Azure DevOps). It may help you, because it gets a list of the pipelines.
Then you can create a pipeline following the documentation.
I am trying to use the newUUID() AWS IoT function in the AWS SiteWise service (as part of an alarm action). It returns a random 16-byte UUID, which I want to store as the value of a DynamoDB table's partition key column.
With reference to the attached screenshot, in the 'PartitionKeyValue' field I am trying to use the value returned by the newUUID() function, which will be passed to DynamoDB as part of the action trigger.
However, this gives an error as follows:
"Invalid Request exception: Failed to parse expression due to: Invalid expression. Unrecognized function: newUUID".
I understand the error, but I'm not sure how I can solve this and still use a random UUID generator. Kindly note that I do not want to use a timestamp, because multiple events could get triggered at the same time and hence share the same timestamp.
Any ideas on how I can use this function, or any other information that would help me achieve the above?
The docs you refer to say that function is all lowercase newuuid().
Perhaps that will work, but I believe that function is only available in IoT Core SQL Statements. I think with event notifications, you only have these expressions to work with, which is not much. Essentially, you need to get what you need from the alarm event itself.
You may need the alarm event to invoke Lambda, rather than directly write to DynamoDB. Your Lambda function can create a UUID and write the alarm record to DynamoDB using the SDKs.
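A minimal sketch of such a Lambda function, assuming a hypothetical table named "AlarmEvents" with partition key "pk":

import uuid
import boto3

table = boto3.resource("dynamodb").Table("AlarmEvents")  # hypothetical

def handler(event, context):
    # Generate the random partition key here instead of in the
    # SiteWise action expression.
    table.put_item(Item={
        "pk": str(uuid.uuid4()),  # random UUID partition key
        "payload": str(event),    # raw alarm event, for illustration
    })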
Can anyone explain why an AWS Glue Workflow would have empty default run properties and no graph when accessed from an SDK? When I view the same workflow in the AWS console, I can see the UI representation of the graph and the run properties.
Yet when I access the same workflow via the SDKs (I tried Java and boto3), the Workflow object shows empty default run properties and no graph. The accessor methods for these attributes return empty objects or null. For example,
with the java sdk
myWorkflow.getGraph() returns null
I know the workflow has several nodes, and I have run and modified the workflow many times via the console.
I've tried to research if this is a permissions issue but I can't find anything to back that up and I don't get an error. Any insights would be appreciated.
So there is an "IncludeGraph" parameter in the GetWorkflow request, which defaults to false. To get the graph returned with your workflow, you must set this parameter to true.
in Java:
......yourWorkflowRequest.withIncludeGraph(true)
in boto3:
.get_workflow(Name='the_workflow', IncludeGraph=True)
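Put together, a small boto3 sketch (the workflow name is a placeholder):

import boto3

glue = boto3.client("glue")

# IncludeGraph defaults to False, which is why Graph comes back empty.
resp = glue.get_workflow(Name="my-workflow", IncludeGraph=True)

workflow = resp["Workflow"]
print(workflow.get("DefaultRunProperties"))
print(workflow["Graph"]["Nodes"])  # populated only when IncludeGraph=True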
When I try to download all log files from an RDS instance, in some cases I get this error in my Python output:
An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed.
I handle the pagination and the throttling correctly (using the Marker parameter and the sleep function).
This is my call:
log_page = request_paginated(rds, DBInstanceIdentifier=id_rds, LogFileName=log, NumberOfLines=1000)
rds -> boto3 RDS client
And this is the definition of my function:
def request_paginated(rds, **kwargs):
    return rds.download_db_log_file_portion(**kwargs)
As I said, most of the time this function works, but sometimes it returns:
"An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed"
Can you help me please? :)
UPDATE: the problem is a known issue with downloading log files that contain non-printable characters. As soon as possible I will try the solution proposed by AWS support.
LATEST UPDATE: This is an extract of my discussion with the AWS support team:
There is a known issue with non-binary characters when using the boto-based AWS CLI; however, this issue is not present when using the older Java-based CLI.
There is currently no way to fix the issue that you are experiencing while using the boto-based AWS CLI; the workaround is to make the API call from the Java-based CLI.
The AWS team are aware of this issue and are working on a way to resolve it; however, they do not have an ETA for when this will be released.
So the solution is: use the Java API.
Giuseppe
http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html
InvalidParameterValue : An invalid or out-of-range value was supplied
for the input parameter.
An invalid parameter in boto means the data passed does not comply with what the API expects. It is probably an invalid name that you specified: possibly something wrong with your variable id_rds, or maybe your LogFileName, etc. You must comply with the function's argument requirements.
response = client.download_db_log_file_portion(
DBInstanceIdentifier='string',
LogFileName='string',
Marker='string',
NumberOfLines=123
)
(UPDATE)
For example, LogFileName must be the exact name of a file that exists inside the RDS instance.
For the log file, please make sure it EXISTS inside the instance. Use this AWS CLI command for a quick check:
aws rds describe-db-log-files --db-instance-identifier <my-rds-name>
Do check Marker (string) and NumberOfLines (integer) as well, for mismatched types or out-of-range values. Since they are not required, skip them first, then test with them later.
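For what it's worth, here is a rough boto3 sketch that first checks which log files exist and then downloads one portion by portion, following the Marker (the instance identifier and file name are placeholders):

import boto3

rds = boto3.client("rds")

# Verify the log file actually exists before trying to download it.
files = rds.describe_db_log_files(DBInstanceIdentifier="my-rds-name")
for f in files["DescribeDBLogFiles"]:
    print(f["LogFileName"], f["Size"])

# Download one file portion by portion, following the Marker.
marker = "0"
while True:
    resp = rds.download_db_log_file_portion(
        DBInstanceIdentifier="my-rds-name",
        LogFileName="error/mysql-error.log",  # placeholder file name
        Marker=marker,
        NumberOfLines=1000,
    )
    print(resp.get("LogFileData", ""), end="")
    if not resp["AdditionalDataPending"]:
        break
    marker = resp["Marker"]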