AnyLogic function using SelectOutputOut blocks with a while loop / if-statements

Good day!
I am facing the challenge of writing a function that allocates agents to SelectOutputOut blocks. Considering the various scenarios of using if...else statements in a function, I understand that all possibilities must be covered (as suggested here).
However, the problem is that I don't want the agent to leave the function before it gets an appropriate SelectOutputOut block. This situation may occur when none of the Service blocks (Network1, Network2 or Network3) has enough resources. In that case the agent needs to wait until some Service block has enough resources to serve it. For this purpose I tried to use a while loop, but it doesn't help.
The questions are:
How do I write the if-else statements so that the agent waits until some Service block has enough resources?
Does the select function monitor parameters that are outside of it? In other words: does it know about the states of the Service blocks during its execution?
Thank you.

What you need to do is have your agents wait in the queue, and then have a function that removes them from the queue and sends them to the correct Service block. The best way to do this is with an Enter block that you can send them to.
See example below
You then need to call this function in the On enter code of the queue as well as the On exit code of the Service blocks, so that a waiting agent is dispatched whenever space frees up.

Related

How to exit running event-triggered Cloud Function from within itself?

I want to terminate and exit a running Cloud Function. The function was triggered by a Firestore event. What are some ways to do this?
There are several reasons why you might want a Cloud Function to terminate itself, for example to avoid an infinite loop or infinite retries.
To avoid infinite retry loops, set a well-defined end condition that is checked before the function begins processing.
A simple yet effective approach is to discard events with timestamps older than a certain time. This helps to avoid excessive executions when failures are either persistent or longer-lived than expected.
Events are delivered at least once, but a single event may result in multiple function invocations. Avoid depending on exactly-once mechanics and write idempotent functions.
Note that updating the function-triggering Firestore document may create subsequent update events, which may cascade into an infinite loop within your function. To solve this problem, use trigger types that ignore updates (such as document.create), or configure your function to only write to Firestore if the underlying value has changed.
Also, note the limitations for Firestore triggers for Cloud Functions.
You might also want to check this example about Cloud Function Termination.
Do not manually exit a Function; it can cause unexpected behavior.

Highly concurrent AWS Express Step Functions

I have a system that receives records from a Kinesis stream. Lambda consumes the stream and invokes one function per shard; this function takes a batch of records and invokes an asynchronous Express Step Function to process each record. The Step Function contains a Task that relies on a third party. I have set a timeout for this Task, but when it takes longer, a high number of concurrent Step Function executions can still build up because executions are not completing quickly enough, causing throttling of Lambda executions further down the line.
To mitigate the issue I am thinking of implementing a "semaphore" for concurrent Express function executions. There isn't much out there in terms of similar approaches; I found this article, but the approach of checking how many active executions there are at a time only works with Standard Step Functions. If it worked with Express, I imagine I could throw an error in the function that receives the Kinesis record whenever an arbitrary Step Function execution limit is exceeded, causing Kinesis+Lambda to retry until capacity is available. But as I am using the Express workflow, calling ListExecutions is not really an option.
Is there an existing solution for limiting the number of parallel asynchronous Express Step Function executions, or do you see how I could otherwise implement the "semaphore" approach?
Have you considered triggering one Step Function per Lambda invoke and using a Map state to handle the multiple records per batch? The Map state allows you to limit the number of concurrent iterations. This doesn't address multiple executions of the Step Function itself, and could lead to timeout issues if you are pushing the boundary of the five-minute limit for Express workflows.
I think if you find that you need to throttle something across partitions, you are going to be in a world of complex solutions. One could imagine a two-phase-commit system for tracking concurrent executions and handling timeouts, but such solutions are often more complicated than they are worth.
Perhaps the solution is to make adjustments downstream to reduce the concurrency there. If you end up with other Lambdas being invoked too many times at once, you can put SQS in front of them, enable batching, and manage throttling there. In general you should use something like SQS to trigger Lambdas at the point where high concurrency is a problem, and less so at the points that feed into it. In other words, if your current Step Functions can handle the high concurrency, let them, and anything that has issues as a result should be managed at that point.

Fast synchronised cout for multithreading

Recently I ran into a rather common problem with using cout in a multithreaded application, but with a little twist. I've got several callback functions which get called by external hardware via a driver. The main objective of the callback functions is to receive some data, store it in a queue, and signal a processing task as soon as a certain amount of datasets has been collected. The callback functions need to run as fast as possible in order to respond to the hardware in soft real time.
My problem is this: from time to time my queue gets full and I have to handle this case by printing a warning to the console (a hard requirement). As I work with several threads, I've created a wrapper function which uses a mutex to synchronise cout. Unfortunately, in some cases waiting for access to cout takes so much time that my callback function doesn't finish fast enough to respond to the hardware before a timeout. My workaround was to use an atomic variable for each possible error to count the number of occurrences, plus a further task that checks these variables periodically and prints the messages afterwards, but I'm pretty sure this is not the best approach to solve my performance problems.
Are there any general approaches for this type of problem?
Any recommendations how I could improve or simplify my solution?
Thank you in advance
Don't write output in the hot path.
Instead, queue up the things you want to log (preferably raw data rather than a fully formatted string). Have another out-of-band thread running which picks this up and does the actual logging.

Concurrency, tasks

I'm new to the Microsoft Concurrency Runtime (and asynchronous programming in general), and I'm trying to understand what can and can't be done with it. Is it possible to create a task group such that the tasks execute in the order in which they were added, and no task starts until the previous one ends?
I'm trying to understand whether there's a more general and decentralised way of dealing with tasks than chaining several tasks within a single member function. For example, let's say I have a program that creates resources at various points, and the order in which the resources are allocated matters. Could any resource allocation function that is called simply append a task to the end of a central task list, with the result that the tasks execute in the order in which they were added (i.e. the order in which the resource allocation functions were called)?
Thanks,
RobertF
I'm not sure I understand what you're trying to achieve, but are you looking for the Agent or Actor model?
You post messages to an Async Agent and it processes them. It can then send messages to other agents.

capture a call stack and have it execute in a different thread

I need to write a logging API which does the actual logging on a separate thread.
I.e. I have an application which wants to log some information. It calls my API, and the API captures all the arguments etc. and then hands that off to a separate thread to be logged.
The logging API accepts variadic arguments, and therefore my initial thought was to capture the whole call stack and somehow hand it to the thread which will do the logging.
I'm reasonably confident that I can capture the call stack. However, I'm not sure how I'd pass this call stack off to another method.
I'm using g++ on Linux, and it may also have to work with Sun's CC v12 on Solaris.
Any ideas?
You could capture a fixed number of bytes from the call stack, but you would have to copy all that memory even when it's not necessary, and put it on a queue of some sort to pass it to the logging thread. That seems like a lot of work to get right, and quite inefficient.
I assume you're using a separate logging thread to make the logging API more efficient. It's quite probable that it's more efficient in this case to have the logging API extract the variadic parameters, convert them into a simpler representation (for example the string to be logged) and queue that.
Note also that a good logging API shouldn't block, so I'd advise a lock-free queue between the logging API and the logging thread.
If the problem is that you don't know how to hand it off to another thread: the simplest thing is to have a queue (std::deque, probably) of call stacks with a mutex protecting it. When your application has generated a call stack, it locks the mutex, pushes the call stack on, and unlocks the mutex. The logging thread periodically locks the mutex, checks the size of the queue, and if it's not empty, takes a call stack off and processes it.
There are ways to improve efficiency there (e.g. condition variables, using a separate counter so you don't have to lock before checking the size, or even lock-free data structures), but I would recommend not worrying about those until they show up in profiling.
An alternative approach:
Define a macro which prints the function name and some additional information like the file name and line number, using the standard predefined macros. You can even pass some additional log information that you would otherwise use printf for. The macro calls a function that sends the data to your other thread. That thread can wait on a socket/pipe; write the data into it, and read it back using the system calls (write/read/pipe).
Now insert this macro at the start of every function/API. This gives you the call flow (call stack).
Your logging thread can then use the information from this macro to write to a file, display on the console, etc.
#define PRINT_LOG(X) function_to_pass_data_to_thread(X, __FILE__, __LINE__);

void API1()
{
    PRINT_LOG("Entered API1");
    // Your API here
}