How to launch a long-running SQLite query in Go? - unit-testing

I have code that interrupts an SQLite query, when needed, using a context with a deadline. My problem is writing a unit test for it: I need to launch a query that I know will run for a long time, ideally an infinite loop, and check that it is interrupted. I use https://github.com/mattn/go-sqlite3 to access SQLite 3 from Go.
For example, this:
with recursive rec as
(select 1 as n union all select n + 1 from rec)
select n from rec;
returns 1 immediately instead of looping the way it does in the SQLite console (is there something to do to enable CTEs?). I also found no sleep function or anything similar.
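For reference, a rough sketch of the kind of test I'm after (this assumes database/sql with this driver; the test name and the 100 ms deadline are placeholders, and wrapping the recursion in an aggregate such as count(*) is only an idea for keeping SQLite busy, since an aggregate cannot hand back a first row early):

package longquery_test

import (
    "context"
    "database/sql"
    "testing"
    "time"

    _ "github.com/mattn/go-sqlite3"
)

func TestQueryInterruptedByDeadline(t *testing.T) {
    db, err := sql.Open("sqlite3", ":memory:")
    if err != nil {
        t.Fatal(err)
    }
    defer db.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
    defer cancel()

    // count(*) forces the whole (infinite) recursion to be evaluated,
    // so the query cannot return its first row immediately.
    const q = `with recursive rec as
        (select 1 as n union all select n + 1 from rec)
        select count(*) from rec;`

    var n int64
    err = db.QueryRowContext(ctx, q).Scan(&n)
    if err == nil {
        t.Fatal("expected the query to be interrupted, got a result instead")
    }
    // Depending on the driver version, err may be context.DeadlineExceeded
    // or an SQLite "interrupted" error.
}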

Related

AWS Step Function Map with a Wait state in it

I want to use a Step Functions Map state, with this logic for each item in the input array:
Lambda: checks the status of something;
CHOICE: true -> move on | false -> wait X hours and check the status again.
Does this mean that if we set a max concurrency of 10, and all 10 concurrently running map iterations are in the wait stage because the status is false for all of them, my step function is essentially stuck and won't be able to continue until one or more of the statuses comes back true?
It's probably the case that you want to avoid using Max Concurrency with a Wait state inside the Map, but I just want to confirm this is the case.
My alternative approach is to simply stage it to be processed again later and not block it.
Thanks.

TopologyTestDriver with streaming groupByKey.windowedBy.reduce not working like kafka server [duplicate]

I'm trying to play with Kafka Streams to aggregate some attributes of People.
I have a Kafka Streams test like this:
val factory = new ConsumerRecordFactory[Array[Byte], Character]("input", new ByteArraySerializer(), new CharacterSerializer())
var i = 0
while (i != 5) {
  testDriver.pipeInput(
    factory.create("input",
      Character(123, 12), 15 * 10000L))
  i += 1
}
val output = testDriver.readOutput....
I'm trying to group the values by key like this:
streamBuilder.stream[Array[Byte], Character](inputKafkaTopic)
  .filter((key, _) => key == null)
  .mapValues(character => PersonInfos(character.id, character.id2, character.age)) // case class
  .groupBy((_, value) => CharacterInfos(value.id, value.id2)) // case class
  .count().toStream.print(Printed.toSysOut[CharacterInfos, Long])
When I run the code, I get this:
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 1
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 2
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 3
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 4
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 5
Why am I getting 5 rows instead of just one line with CharacterInfos and the count?
Doesn't groupBy just change the key?
If you use the TopologyTestDriver, caching is effectively disabled and thus every input record will always produce an output record. This is by design, because caching implies non-deterministic behavior, which makes it very hard to write an actual unit test.
If you deploy the code in a real application, the behavior will be different and caching will reduce the output load -- which intermediate results you will get is not defined (i.e., non-deterministic); compare Michael Noll's answer.
For your unit test it should not really matter: you can either test for all output records (i.e., all intermediate results), or put all output records into a key-value Map and only test for the last emitted record per key (if you don't care about the intermediate results).
Furthermore, you could use suppress() operator to get fine grained control over what output messages you get. suppress()—in contrast to caching—is fully deterministic and thus writing a unit test works well. However, note that suppress() is event-time driven, and thus, if you stop sending new records, time does not advance and suppress() does not emit data. For unit testing, this is important to consider, because you might need to send some additional "dummy" data to trigger the output you actually want to test for. For more details on suppress() check out this blog post: https://www.confluent.io/blog/kafka-streams-take-on-watermarks-and-triggers
Update: I didn't spot the line in the example code that refers to the TopologyTestDriver in Kafka Streams. My answer below is for the 'normal' KStreams application behavior, whereas the TopologyTestDriver behaves differently. See the answer by Matthias J. Sax for the latter.
This is expected behavior. Somewhat simplified, Kafka Streams by default emits a new output record as soon as a new input record is received.
When you are aggregating (here: counting) the input data, the aggregation result is updated (and thus a new output record produced) as soon as new input is received for the aggregation.
input record 1 ---> new output record with count=1
input record 2 ---> new output record with count=2
...
input record 5 ---> new output record with count=5
What to do about it: You can reduce the number of 'intermediate' outputs by configuring the size of the so-called record caches as well as the commit.interval.ms parameter. See Memory Management. However, how much reduction you will see depends not only on these settings but also on the characteristics of your input data, and because of that the extent of the reduction may also vary over time (think: could be 90% in the first hour of data, 76% in the second hour of data, etc.). That is, the reduction process is deterministic, but the resulting reduction amount is difficult to predict from the outside.
Note: When doing windowed aggregations (like windowed counts) you can also use the Suppress() API so that the number of intermediate updates is not only reduced, but there will only ever be a single output per window. However, in your use case/code the aggregation is not windowed, so you cannot use the Suppress API.
To help you understand why the setup is this way: You must keep in mind that a streaming system generally operates on unbounded streams of data, which means the system doesn't know 'when it has received all the input data'. So even the term 'intermediate outputs' is actually misleading: at the time the second input record was received, for example, the system believes that the result of the (non-windowed) aggregation is '2' -- it's the correct result to the best of its knowledge at this point in time. It cannot predict whether (or when) another input record might arrive.
For windowed aggregations (where Suppress is supported) this is a bit easier, because the window size defines a boundary for the input data of a given window. Here, the Suppress() API lets you trade off better latency with multiple outputs per window (default behavior, Suppress disabled) against longer latency with only a single output per window (Suppress enabled). In the latter case, if you have 1h windows, you will not see any output for a given window until 1h later, so to speak. For some use cases this is acceptable, for others it is not.

While loop implementation in Pentaho Kettle

I need guidance on implementing a WHILE loop with Kettle/PDI. The scenario is:
(1) I have some data (maybe thousands or hundreds of thousands of rows) in a table, to be validated against a remote server.
(2) Read them and look them up on the remote server; I use Modified Java Script for this, as the remote server lookup validation is defined in an external Java JAR file (I can use the "Change number of copies to start..." option on Modified Java Script and set it to 5 or 10).
(3) Update the result in the database table. There will be 50 to 60% connection failure cases in each session.
(4) Repeat steps 1 to 3 until everything is updated to success.
(5) Stop looping on the Nth cycle; this is to avoid very long or infinite looping, N may be 5 or 10.
How do I design such a WHILE loop in Pentaho Kettle?
Have you seen this link? It gives a pretty detailed explanation of how to implement a while loop.
You need a parent job with a sub-transformation for doing a check on the condition which will return a variable to the job on whether to abort or to continue.

Redis SortedSet: Does the ZUNIONSTORE command block other concurrency commands?

I want to create a temporary sorted set based on the original one in a timer; the interval may be 4 hours. I'm using the spring-data-redis API to do this.
ZUNIONSTORE tmp 2 A B AGGREGATE MAX
When the ZUNIONSTORE command is executing, will it block other commands like ZADD, ZREM, ZRANGE, or ZINCRBY on the sorted sets A or B? I don't know if this will cause concurrency problems; please give me some suggestions.

Timing commands in cpp project (windows)

I hope someone can help me with this (English is not my native language, so I'm sorry in advance for any grammar or spelling mistakes):
As part of a project I'm coding, I need to time some commands. More specifically: I have 2 sets of commands (let's call them set A and set B). I need to execute set A, then wait for a specific number of milliseconds (calculated in set A), then execute set B. I did it using the Sleep(time) command between the sets.
Now, I need to incorporate another set of commands (set C) that will run in a loop in the time between sets A and B, instead of simply doing nothing. Meaning, instead of being idle as before (waiting the specified number of milliseconds), the program should loop over set C - but the catch is that it has to loop C for exactly the same time it would have waited while idle.
How can I do this without using threads? (And generally keep it as simple as possible)
I guess the "work-time" for the set of commands in C is known. And C is a loop which can/shall finish when the wait time has expired.
In this case I'd suggest to use a performance counter to count down the wait time. Depending on what is calculated and what overhaed is introduced in C the accuracy to obtain can be in the microseconds range.
A sketch (DoA, DoC, and DoB stand for your sets of commands):
#include <windows.h>

LARGE_INTEGER freq, begin, now;
QueryPerformanceFrequency(&freq);   // counter ticks per second
const double delayMs = 1000.0;      // wait time, calculated in set A

DoA();
QueryPerformanceCounter(&begin);
QueryPerformanceCounter(&now);
// and now the C loop: run set C until the wait time has elapsed
while ((now.QuadPart - begin.QuadPart) * 1000.0 / freq.QuadPart < delayMs) {
    DoC();
    QueryPerformanceCounter(&now);
}
DoB();
Note: The counter values are converted into times by dividing by the counter frequency obtained from QueryPerformanceFrequency, as in the sketch above.