I have configured a Glue job to retry 3 times if it fails. Is there any way to pass the current retry count (1, 2, or 3) to the job? I am creating Glue jobs through Terraform, and I want to pass the retry count through default arguments, as shown below.
1st failed retry = 1 (pass 1 as a parameter when attempting the first retry)
2nd failed retry = 2 (pass 2 as a parameter when attempting the second retry)
3rd failed retry = 3 (pass 3 as a parameter when attempting the third retry)
max_retries is an optional parameter that sets the maximum number of times to retry the job if it fails.
resource "aws_glue_job" "bidb-cdc-data-load-glue-job" {
name = "bidb-cdc-data-load-glue-job"
role_arn = var.glue_s3_redshift_access_role
connections = [ "bidb-dev-datawarehouse-core-redshift-glue-connection" ]
default_arguments = {
"--Secret" = var.redshift_credentials_parameter_store
}
max_retries=1 }
I have this pre-request script, and I'm using the Collection Runner to send bulk requests each second:
const moment = require('moment');
postman.setEnvironmentVariable("requestid", moment().format("0223YYYYMMDDHHmmss000000"));
I need the “requestid” to be unique every time.
first request: "022320221115102036000001"
second request: "022320221115102037000002"
third request: "022320221115102038000003"
.
.
.
and so on until let’s say 1000 requests.
Basically, I need to make the last 6 digits dynamic.
Your answer can be found in this Postman request I've created for you. There are many ways to achieve this; given the little information provided, I've defaulted to the following:
Set a fixed prefix (everything before the last 6 digits)
Give a start number for the last 6 digits
If there IS NOT a previously stored variable, initialize one with the values above
If there IS a previously stored variable, just increment it by one
The variable date is your final result; current is just the counter
Here you can see sequential requests:
And here is the code, but I would test this directly on the request I've provided above:
// The initial 6-digit number to start with.
// A number starting with 9xxxxx makes String/Number conversions easier.
const BASELINE = '900000'
const PREFIX = '022320221115102036'

// The previously used value, if any
let current = pm.collectionVariables.get('current')

// If there's a previous number, increment it; otherwise use the baseline
if (isNaN(current)) {
    current = BASELINE
} else {
    current = Number(current) + 1
}

// Final number you want to use
const date = PREFIX + current

pm.collectionVariables.set('current', current)
pm.collectionVariables.set('date', date)

console.log(current)
console.log(date)
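Note that collection variables persist between runner sessions, so the counter keeps climbing across runs. If you want the sequence to restart from the baseline, one option (my suggestion, not part of the original answer) is to clear the stored counter first:
// Hypothetical reset: remove the stored counter so the next run starts from BASELINE again
pm.collectionVariables.unset('current')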
I'm trying to run a prediction using a SageMaker endpoint. The input format is comma-separated features and pipe-separated ('|') observations.
However, when I try to iterate over the input data and invoke the endpoint on every iteration, like this:
import json

import boto3

ENDPOINT_NAME = "my_endpoint"
runtime = boto3.client('runtime.sagemaker')

results = []
for r in request_body.split('|'):
    response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                       ContentType='text/csv',
                                       Body=r)
    result = json.loads(response['Body'].read().decode())
    results.append(result)
I get the following error:
ValidationError: an error occurred (ValidationError) when calling the InvokeEndpoint operation: 1 validation error detected: Value at 'body' failed to satisfy constraint: Member must not be null
As a sanity check, I ran:
for r in request_body.split('|'):
    print(r)
And I get the result I expect to get:
3.0,0.0,4795.0,0.0,1.0,24.0,30.0,25.0,3.0
3.0,2.0,3818.0,0.0,3.0,10.0,22.0,11.0,11.0
5.0,0.0,3565.0,0.0,1.0,79.0,89.0,80.0,-66.0
5.0,-1.0,3227.7,0.0,0.0,16.0,17.0,17.0,1.0
5.0,0.0,3375.0,0.0,2.0,21.0,45.0,22.0,6.0...etc
This leads me to believe that the logic for extracting the separate observations is sound, but somehow when I execute the call I get this null-value error.
The idea is to get ordered predictions so that I can later map them to an id that is not part of the training features and hence not in the dataset.
Thank you in advance.
I had the same issue. Check whether you are also passing an empty "r" to the endpoint.
request_body.split('|') will generate a list with each of the rows of the dataframe, but it will also include an empty element ('') if the string ends with a trailing '|'.
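A minimal sketch of one way to guard against that, assuming the same request_body string as in the question (the filtering step is my addition, not from the original answer):

import json

import boto3

ENDPOINT_NAME = "my_endpoint"
runtime = boto3.client('runtime.sagemaker')

# Drop empty entries such as the '' produced by a trailing '|'
rows = [r for r in request_body.split('|') if r.strip()]

results = []
for r in rows:
    response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                       ContentType='text/csv',
                                       Body=r)
    results.append(json.loads(response['Body'].read().decode()))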
I am using SiddhiQL for complex event processing. My use case is to check whether two events of a particular type with certain filters occur within 15 mins.
For example:
If event1[filter 1] and event2[filter 2] within 15 mins
Insert into output stream
In my case, either of the two events can occur first, and I need to check whether the second event arrives within 15 minutes of the first.
Is this possible in SiddhiQL?
EDIT #1: I have defined my streams and query below (the code below does not work):
define stream RegulatorStream(deviceID long, roomNo int, tempSet double);
#sink(type = 'log', prefix = "LOGGER")
define stream outputStream (roomNo int,rooomNo int);
from e1 = RegulatorStream[roomNo==23] and e2 = RegulatorStream[e1.deviceID == deviceID AND roomNo ==24] within 5 minutes
select e2.roomNo,123 as rooomNo
insert into outputStream;
In the above case, I need to alert when I receive events in my RegulatorStream with roomNo = 23 AND roomNo = 24 within 5 minutes, in any order, with the same deviceID.
How can this be achieved in SiddhiQL?
Yes, this can be achieved with Siddhi Patterns. Please refer to the documentation on Siddhi Patterns at https://siddhi.io/en/v5.1/docs/query-guide/#pattern and the examples at https://siddhi.io/en/v5.1/docs/examples/logical-pattern/.
You can use a logical pattern (the and/or operations) to match events regardless of their occurrence order.
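For illustration, a minimal, untested sketch along those lines (Siddhi 5.x syntax; the partition by deviceID is my workaround for the same-device constraint, since one event in an `and` pattern cannot filter on the other event's attributes when the arrival order is unknown):

define stream RegulatorStream (deviceID long, roomNo int, tempSet double);

@sink(type = 'log', prefix = "LOGGER")
define stream AlertStream (deviceID long);

-- Match one roomNo 23 event and one roomNo 24 event, in any order,
-- within 5 minutes, for the same deviceID.
partition with (deviceID of RegulatorStream)
begin
    from (e1=RegulatorStream[roomNo == 23] and e2=RegulatorStream[roomNo == 24]) within 5 min
    select e1.deviceID as deviceID
    insert into AlertStream;
end;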
I have an actor which receives WeatherConditions and pushes them (using OfferAsync) to a source queue. Currently it is set up to run for each item it receives (it stores each one to the DB).
public class StoreConditionsActor : ReceiveActor
{
public StoreConditionsActor(ITemperatureDataProvider temperatureDataProvider)
{
var materializer = Context.Materializer();
var source = Source.Queue<WeatherConditions>(10, OverflowStrategy.DropTail);
var graph = source
.To(Sink.ForEach<WeatherConditions>(conditions => temperatureDataProvider.Store(conditions)))
.Run(materializer);
Receive<WeatherConditions>(i =>
{
graph.OfferAsync(i);
});
}
}
What I would like to achieve is:
Run it only once every N minutes and store the average of the WeatherConditions received in that N-minute time window
If a received item matches a certain condition (i.e. its temperature is 30% higher than the previous item's temperature), process it immediately despite being "hidden" in the time window
I've been trying ConflateWithSeed, Buffer, and Throttle, but none of them seems to work (I'm a newbie in Akka / Akka Streams, so I may be missing something basic).
This answer uses Akka Streams and Scala, but perhaps it will inspire your Akka.NET solution.
The groupedWithin method could meet your first requirement:
import akka.actor.ActorSystem
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.{Keep, Sink, Source}
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("example")

val queue =
  Source.queue[Int](10, OverflowStrategy.dropTail)
    .groupedWithin(10, 1.second)
    .map(group => group.sum / group.size)
    .toMat(Sink.foreach(println))(Keep.left)
    .run()

Source(1 to 10000)
  .throttle(10, 1.second)
  .mapAsync(1)(queue.offer(_))
  .runWith(Sink.ignore)
In the above example, up to 10 integers per second are offered to the SourceQueue; the stream groups the incoming elements into one-second bundles and calculates the average of each bundle.
As for your second requirement, you could use sliding to compare each element with the previous one. The example below passes an element downstream only if it is at least 30% greater than the previous element:
val source: Source[Int, _] = ???
source
.sliding(2, 1)
.collect {
case Seq(a, b) if b >= 1.3 * a => b
}
.runForeach(println)
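For instance, plugging a concrete source into the sliding example (the sample values are mine, not from the original answer) would print 20 and 30, the only elements at least 30% greater than their predecessors:

import akka.actor.ActorSystem
import akka.stream.scaladsl.Source

implicit val system: ActorSystem = ActorSystem("example")

// Sliding pairs: (10,12), (12,20), (20,21), (21,30);
// only 20 >= 1.3 * 12 and 30 >= 1.3 * 21 pass the filter.
val source: Source[Int, _] = Source(List(10, 12, 20, 21, 30))

source
  .sliding(2, 1)
  .collect {
    case Seq(a, b) if b >= 1.3 * a => b
  }
  .runForeach(println)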
I am wondering if it's possible to get the total count of a list in Terraform. I've looked at the Terraform website and don't see anything for the total count, just the use of count.index.
An example list:
variable "testList" {
  type = "list"
  default = [
    {
      type = "test1"
    },
    {
      type = "test2"
    }
  ]
}
So here I want to get 2 as the total count of testList.
Thanks
You can use the Terraform built-in function length() to get the count.
count = "${length(var.testList)}"
For details, please go through the Terraform documentation:
terraform length list
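For example, a minimal sketch (the output name is mine, not from the original answer):

# Hypothetical output exposing the number of elements in var.testList
output "testList_count" {
  value = "${length(var.testList)}"
}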