How to set today's date as the minimum value in the Input.Date element of an Adaptive Card.
When a user selects a date, all backdates and previous dates need to be blocked so that only today or later dates can be selected.
I used:
"type": "Input.Date",
"label": "Date",
"id": "ipDate",
"isRequired": true,
"errorMessage": "Please enter Date",
"separator": true,
"min": "LocalTimestamp (Date(YYYY-MM-DD))"
But it is not working.
Can anyone guide me on what expression I should use for the min value?
I want it to look like this:
Try the utcNow() function.
Here is an example where both the minimum and the default date are today.
If you need an offset, you can use addDays etc.
Check: https://learn.microsoft.com/en-us/azure/bot-service/adaptive-expressions/adaptive-expressions-prebuilt-functions?view=azure-bot-service-4.0#date-and-time-functions
{
"id": "startDate",
"type": "Input.Date",
"value": "${substring(utcNow(),0,10)}",
"min": "${substring(utcNow(),0,10)}",
"errorMessage": "Date cannot be empty"
},
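If you also want to block today itself and allow only dates strictly after today, the same idea with addDays should work, e.g. "min": "${substring(addDays(utcNow(), 1), 0, 10)}" (untested, but addDays is part of the same prebuilt date/time functions linked above).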
I need to get an APEX REST Data Source to parse my JSON, which has a nested array. I've read that nested JSON arrays are not supported, but there must be a way.
I have a REST API that returns data via JSON as per below. In APEX, I've created a REST Data Source following the tutorial on this Oracle blog link
However, Auto-Discovery does not 'discover' the nested array. It only returns the root-level data.
[ {
"order_number": "so1223",
"order_date": "2022-07-01",
"full_name": "Carny Coulter",
"email": "ccoulter2#ovh.net",
"credit_card": "3545556133694494",
"city": "Myhiya",
"state": "CA",
"zip_code": "12345",
"lines": [
{
"product": "Beans - Fava, Canned",
"quantity": 1,
"price": 1.99
},
{
"product": "Edible Flower - Mixed",
"quantity": 1,
"price": 1.50
}
]
},
{
"order_number": "so2244",
"order_date": "2022-12-28",
"full_name": "Liam Shawcross",
"email": "lshawcross5#exblog.jp",
"credit_card": "6331104669953298",
"city": "Humaitá",
"state": "NY",
"zip_code": "98670",
"lines": [
{
"order_id": 5,
"product": "Beans - Green",
"quantity": 2,
"price": 4.33
},
{
"order_id": 1,
"product": "Grapefruit - Pink",
"quantity": 5,
"price": 5.00
}
]
}
]
So in the JSON above, it only 'discovers' order_number through zip_code. The 'lines' array with the attributes order_id, product, quantity, and price does not get 'discovered'.
I found this SO question in which Carsten suggests creating the REST Data Source manually. I've tried changing the Row Selector to "." (a dot) and leaving it blank. That still returns only the root-level data.
Changing the Row Selector to 'lines' returns only one row from each 'lines' array.
So in the JSON example above, it would only 'discover':
{
"product": "Beans - Fava, Canned",
"quantity": 1,
"price": 1.99
}
{
"order_id": 5,
"product": "Beans - Green",
"quantity": 2,
"price": 4.33
}
and not the complete arrays.
This is how the Data Profile is set up when creating the Data Source manually.
There's another SO question with a similar situation, so I followed some of its steps, such as setting the data type for 'lines' to JSON Document. I feel I've tried almost every selector and data type, but obviously not enough.
The docs are not very helpful on this subject, and it's been difficult finding links on Google, Oracle Blogs, or SO.
My end goal is to have the two tables below auto-synchronizing from the API.
orders
  id             pk
  order_number   num
  order_date     date
  full_name      vc(200)
  email          vc(200)
  credit_card    num
  city           vc(200)
  state          vc(200)
  zip_code       num
lines
  order_id       fk -> orders
  product        vc(200)
  quantity       num
  price          num
view orders_view (orders + lines)
As you're correctly stating, REST Data Sources do not support nested arrays - a REST Source can only "extract" one flat table from the JSON response. In your example, the JSON as such is an array ("orders"). The Row Selector in the Data Profile would thus be "." (to select the "root node").
That gives you all the order attributes, but discovery will skip the lines array. However, you can manually add a column to the Data Profile with the JSON Document data type, using lines as the selector.
As a result, you still get a flat table from the REST Data Source, but that table contains a LINES column holding the "JSON fragment" with the order line items. You can then synchronize the REST Source to a local table ("REST Synchronization") and use some custom code to extract the JSON fragments into an ORDER_LINES child table.
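The extraction itself happens outside the REST Source; typical tools are JSON_TABLE in SQL or the APEX_JSON package. Purely as an illustration of the parent/child split you would end up with (not APEX code), here is a small Python sketch using a trimmed-down stand-in for your response:
import json

# hypothetical stand-in for the body returned by the REST API (see the question)
response_body = '''[{"order_number": "so1223", "full_name": "Carny Coulter",
                     "lines": [{"product": "Beans - Fava, Canned", "quantity": 1, "price": 1.99},
                               {"product": "Edible Flower - Mixed", "quantity": 1, "price": 1.50}]}]'''

orders, lines = [], []
for order in json.loads(response_body):
    # everything except the nested array becomes an ORDERS row
    orders.append({k: v for k, v in order.items() if k != 'lines'})
    for line in order['lines']:
        # carry the parent key so each ORDER_LINES row can reference its order
        lines.append({'order_number': order['order_number'], **line})

print(orders)  # flat ORDERS rows
print(lines)   # flat ORDER_LINES rows pointing back to their order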
Does that help?
I have a custom metric in CloudWatch that has a value of 1 or 0. I need to create a pie widget in a dashboard that represents the percentage of 1s and 0s over the selected time period. Is this possible with metric math only? If so, how? If not, how else?
Thanks.
If you're only publishing 0s and 1s, then the Average statistic gives you the percentage of 1s. Simply add your metric to the graph, set the statistic to Average, and set its id to m1.
For a pie chart you need two metrics: the percentage of 1s and the percentage of 0s. Since these are the only two values, the percentage of 0s is 1 - the percentage of 1s, which you can get by adding the metric math expression 1 - m1.
By default, a pie chart only displays the value of the last datapoint. You need to change this on the Options tab of the edit-graph view: the widget type should be Pie, and selecting "Time range" makes it show the value computed over the entire time range.
An example graph source would be:
{
"metrics": [
[ { "expression": "1-m1", "label": "zeros", "id": "e1", "region": "eu-west-1" } ],
[ YOUR METRIC DEFINITION, { "id": "m1", "label": "ones" } ]
],
"period": 60,
"view": "pie",
"stacked": false,
"stat": "Average",
"setPeriodToTimeRange": true,
"sparkline": false,
"region": "YOUR REGION"
}
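If you want to create the widget from code instead of the console, the same source can be pushed with boto3's put_dashboard. A sketch with a hypothetical metric (MyApp / MyBinaryMetric) and dashboard name standing in for yours:
import json
import boto3

# hypothetical metric definition standing in for "YOUR METRIC DEFINITION" above
metric = ["MyApp", "MyBinaryMetric"]

widget = {
    "type": "metric",
    "x": 0, "y": 0, "width": 6, "height": 6,
    "properties": {
        "metrics": [
            [{"expression": "1-m1", "label": "zeros", "id": "e1", "region": "eu-west-1"}],
            metric + [{"id": "m1", "label": "ones"}],
        ],
        "period": 60,
        "view": "pie",
        "stat": "Average",
        "setPeriodToTimeRange": True,   # show the value over the whole time range
        "region": "eu-west-1",
    },
}

boto3.client('cloudwatch').put_dashboard(
    DashboardName='binary-metric-demo',   # hypothetical dashboard name
    DashboardBody=json.dumps({"widgets": [widget]}),
)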
I want to build a dashboard that displays the percentage of uptime for each month of an Elastic Beanstalk service in my company.
So I used boto3's get_metric_data to retrieve the EnvironmentHealth CloudWatch metric data and calculate the percentage of non-severe time for my service.
from datetime import datetime
import boto3
SEVERE = 25
client = boto3.client('cloudwatch')
metric_data_queries = [
{
'Id': 'healthStatus',
'MetricStat': {
'Metric': {
'Namespace': 'AWS/ElasticBeanstalk',
'MetricName': 'EnvironmentHealth',
'Dimensions': [
{
'Name': 'EnvironmentName',
'Value': 'ServiceA'
}
]
},
'Period': 300,
'Stat': 'Maximum'
},
'Label': 'EnvironmentHealth',
'ReturnData': True
}
]
response = client.get_metric_data(
MetricDataQueries=metric_data_queries,
StartTime=datetime(2019, 9, 1),
EndTime=datetime(2019, 9, 30),
ScanBy='TimestampAscending'
)
health_data = response['MetricDataResults'][0]['Values']
total_times = len(health_data)
severe_times = health_data.count(SEVERE)
print(f'total_times: {total_times}')
print(f'severe_times: {severe_times}')
print(f'healthy percent: {1 - (severe_times/total_times)}')
Now I'm wondering how to show this percentage on a CloudWatch dashboard. I want to show something like the following:
Does anyone know how to get the healthy percentage I've calculated onto a CloudWatch dashboard?
Or is there another tool that is more appropriate for displaying the uptime of my service?
You can do math with CloudWatch metrics:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html
You can create a metric math expression with the metrics you have in metric_data_queries and get the result on the graph. Metric math also works with the GetMetricData API, so you could move the calculation you are doing into a MetricDataQuery and get the number you need directly from CloudWatch.
It looks like you need a number saying for what percentage of datapoints in the last month the metric value equaled 25; subtracting that from 100% gives the uptime.
You can calculate it like this (this is the graph source, which you can use on the Source tab in the CloudWatch console; make sure the region and the metric name match yours):
{
"metrics": [
[
"AWS/ElasticBeanstalk",
"EnvironmentHealth",
"EnvironmentName",
"ServiceA",
{
"label": "metric",
"id": "m1",
"visible": false,
"stat": "Maximum"
}
],
[
{
"expression": "25",
"label": "Value for severe",
"id": "severe_c",
"visible": false
}
],
[
{
"expression": "m1*0",
"label": "Constant 0 time series",
"id": "zero_ts",
"visible": false
}
],
[
{
"expression": "1-AVG(CEIL(ABS(m1-severe_c)/MAX(m1)))",
"label": "Percentage of times value equals severe",
"id": "severe_pct",
"visible": false
}
],
[
{
"expression": "(zero_ts+severe_pct)*100",
"label": "Service Uptime",
"id": "e1"
}
]
],
"view": "singleValue",
"stacked": false,
"region": "eu-west-1",
"period": 300
}
To explain what is going on there (what is the purpose of each element above, by id):
m1 - This is your original metric, with the stat set to Maximum.
severe_c - The constant you want to use for your SEVERE value.
zero_ts - Creates a constant time series with all values equal to zero. This is needed because constants can't be graphed and the final value is a constant, so to graph it we simply add the constant to this time series of zeros.
severe_pct - This is where you actually calculate the fraction of values that equal SEVERE.
m1-severe_c - Sets the datapoints with value equal to SEVERE to 0.
ABS(m1-severe_c) - Makes all values positive, keeping SEVERE datapoints at 0.
ABS(m1-severe_c)/MAX(m1) - Dividing by the maximum value ensures that all values are now between 0 and 1.
CEIL(ABS(m1-severe_c)/MAX(m1)) - Snaps all values different from 0 to 1, keeping SEVERE at 0.
AVG(CEIL(ABS(m1-severe_c)/MAX(m1))) - Because the series is now all 1s and 0s, with 0 meaning SEVERE, taking the average gives you the fraction of non-severe datapoints.
1-AVG(CEIL(ABS(m1-severe_c)/MAX(m1))) - Subtracting from 1 flips this around and gives you the fraction of severe datapoints, which is what severe_pct holds.
e1 - severe_pct is a scalar between 0 and 1, and you want the uptime as a time series between 0 and 100. This expression gives you that: (zero_ts+1-severe_pct)*100. Note that this is the only result that is returned; all other expressions have "visible": false.
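If you also want this number directly in your script (as mentioned above, metric math works with GetMetricData too), here is a rough boto3 sketch reusing your namespace, dimension, and dates, with the math from the graph source collapsed into a single expression:
from datetime import datetime
import boto3

client = boto3.client('cloudwatch')

response = client.get_metric_data(
    MetricDataQueries=[
        {
            'Id': 'm1',
            'MetricStat': {
                'Metric': {
                    'Namespace': 'AWS/ElasticBeanstalk',
                    'MetricName': 'EnvironmentHealth',
                    'Dimensions': [{'Name': 'EnvironmentName', 'Value': 'ServiceA'}],
                },
                'Period': 300,
                'Stat': 'Maximum',
            },
            'ReturnData': False,  # only used as input for the expression below
        },
        {
            'Id': 'uptime',
            # fraction of datapoints not equal to 25, scaled to a percentage;
            # m1*0 adds a zero time series so the scalar result can be returned
            'Expression': '(m1*0 + AVG(CEIL(ABS(m1-25)/MAX(m1))))*100',
            'Label': 'Service Uptime',
            'ReturnData': True,
        },
    ],
    StartTime=datetime(2019, 9, 1),
    EndTime=datetime(2019, 9, 30),
)
print(response['MetricDataResults'][0]['Values'])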
I've been following Miguel C's tutorial on setting up a DynamoDB table in Golang, but modified the JSON to look like this instead of using movies. I changed the Movie struct into a Fruit struct (so there is no more info), and in my schema I defined the partition key as "Name" and the sort key as "Price". But when I run my code it says
"ValidationException: One of the required keys was not given a value"
despite me printing out the input as
map[name:{
S: "bananas"
} price:{
N: "0.25"
}]
which clearly shows that the string "bananas" and the number 0.25 both have values in them.
My JSON looks like this:
[
{
"name": "bananas",
"price": 0.25
},
{
"name": "apples",
"price": 0.50
}
]
It was a capitalization issue: changing "name" to "Name" fixed it. DynamoDB attribute names are case-sensitive, so they have to match the key schema exactly.
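For illustration (using boto3 rather than the Go SDK, and a hypothetical Fruits table with keys "Name" (S) and "Price" (N)), the same error and fix look like this; in Go the equivalent fix is to make the marshalled attribute names, e.g. via struct tags, match the key schema:
from decimal import Decimal
import boto3

table = boto3.resource('dynamodb').Table('Fruits')  # hypothetical table name

# Fails with "One of the required keys was not given a value":
# "name"/"price" have values, but they don't match the case-sensitive key names.
# table.put_item(Item={'name': 'bananas', 'price': Decimal('0.25')})

# Works: attribute names match the key schema exactly.
table.put_item(Item={'Name': 'bananas', 'Price': Decimal('0.25')})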
I have one year of 15-minute interval data in my KairosDB. I need to do the following things sequentially:
- filter the data using a tag
- group the filtered data using a few tags. I am not specifying the tag values because I want the grouping by tag value to happen automatically at runtime.
- once grouped on those tags, sum the 15-minute interval data into monthly values.
I wrote this query to run from a Python script, based on information available on the KairosDB Google Code forum, but the aggregated values seem incorrect and the output looks skewed. I want to understand where I am going wrong. Here is my JSON query:
agg_query = {
"start_absolute": 1412136000000,
"end_absolute": 1446264000000,
"metrics":[
{
"tags": {
"insert_date": ["11/17/2015"]
},
"name": "gb_demo",
"group_by": [
{
"name": "time",
"range_size": {
"value": "1",
"unit": "months"
},
"group_count": "12"
},
{
"name": "tag",
"tags": ["usage_kind","building_snapshot_id","usage_point_id","interval"]
}
],
"aggregators": [
{
"name": "sum",
"sampling": {
"value": 1,
"unit": "months"
}
}
]
}
]
}
For reference: Data is something like this:
[[1441065600000,53488],[1441066500000,43400],[1441067400000,44936],[1441068300000,48736],[1441069200000,51472],[1441070100000,43904],[1441071000000,42368],[1441071900000,41400],[1441072800000,28936],[1441073700000,34896],[1441074600000,29216],[1441075500000,26040],[1441076400000,24224],[1441077300000,27296],[1441078200000,37288],[1441079100000,30184],[1441080000000,27824],[1441080900000,27960],[1441081800000,28056],[1441082700000,29264],[1441083600000,33272],[1441084500000,33312],[1441085400000,29360],[1441086300000,28400],[1441087200000,28168],[1441088100000,28944],[1443657600000,42112],[1443658500000,36712],[1443659400000,38440],[1443660300000,38824],[1443661200000,43440],[1443662100000,42632],[1443663000000,42984],[1443663900000,42952],[1443664800000,36112],[1443665700000,33680],[1443666600000,33376],[1443667500000,28616],[1443668400000,31688],[1443669300000,30872],[1443670200000,28200],[1443671100000,27792],[1443672000000,27464],[1443672900000,27240],[1443673800000,27760],[1443674700000,27232],[1443675600000,27824],[1443676500000,27264],[1443677400000,27328],[1443678300000,27576],[1443679200000,27136],[1443680100000,26856]]
This is a snapshot of some data from September and October 2015. When I run the query with a start timestamp in September, it sums the September data correctly, but it doesn't for October.
I believe your group-by on time will create groups by calendar month (January to December), but your sum aggregator will sum values over a running month starting from your start date... which seems a bit odd. Could that be the cause of what you see?
What is the data like? What is the aggregated result like?
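If that mismatch is the cause, one thing to try is aligning the query window to calendar month boundaries and letting the monthly sum do the bucketing on its own (the time group-by is dropped here, since the monthly sum already produces one value per month). A rough sketch covering roughly your original window, assuming the standard KairosDB REST endpoint and the requests library, with the metric name and tags taken from your query:
from datetime import datetime, timezone
import requests

def month_start_ms(year, month):
    # milliseconds since the epoch for midnight UTC on the first of the month
    return int(datetime(year, month, 1, tzinfo=timezone.utc).timestamp() * 1000)

agg_query = {
    # window aligned to calendar month boundaries, so each running 1-month
    # sum window coincides with a calendar month
    "start_absolute": month_start_ms(2014, 10),
    "end_absolute": month_start_ms(2015, 11),
    "metrics": [{
        "name": "gb_demo",
        "tags": {"insert_date": ["11/17/2015"]},
        "group_by": [
            {"name": "tag",
             "tags": ["usage_kind", "building_snapshot_id",
                      "usage_point_id", "interval"]}
        ],
        "aggregators": [
            {"name": "sum", "sampling": {"value": 1, "unit": "months"}}
        ]
    }]
}

resp = requests.post("http://localhost:8080/api/v1/datapoints/query", json=agg_query)
print(resp.json())
If your data is stored in local time, use local month boundaries instead of UTC, and if your KairosDB version supports the aggregator alignment options (align_sampling / align_start_time), those are worth checking in the docs as well.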