How is execution of the callback associated with rte_timer_reset() scheduled on an lcore? If the DPDK app is already using that lcore for other processing, how does timer callback scheduling work on it?
DPDK does not support a scheduler, only polling.
When you use timer objects, you have to call rte_timer_manage() periodically to check for expired timers and invoke their associated callback functions. In the example timer application, a main loop is started on each lcore, and that loop calls rte_timer_manage() periodically.
If you look at the code for rte_timer_manage(), you can see that it builds a list of expired timers (from the internal skip-list data structure) and calls each callback function sequentially, but only for the timers registered to its own lcore id.
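Since the behaviour boils down to "collect the expired timers, then run their callbacks in order on the calling lcore", it can be illustrated with a small DPDK-free model. Everything below (the Timer and Lcore types and manage()) is a hand-rolled stand-in for the real rte_timer API, not DPDK code:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Simplified model of DPDK's per-lcore timer handling (illustrative names,
// not the real API): each "lcore" owns its own timer list, and manage(),
// standing in for rte_timer_manage(), runs the callbacks of expired timers
// sequentially on whatever thread calls it -- nothing is preempted.
struct Timer {
    uint64_t expiry;                 // tick at which the timer fires
    std::function<void()> callback;  // what rte_timer_reset() registered
};

struct Lcore {
    std::vector<Timer> timers;  // DPDK uses a per-lcore skip list instead

    // Called periodically from the lcore's main loop, like rte_timer_manage().
    void manage(uint64_t now) {
        std::vector<Timer> expired;
        for (auto it = timers.begin(); it != timers.end();) {
            if (it->expiry <= now) {
                expired.push_back(*it);  // collect expired timers first...
                it = timers.erase(it);
            } else {
                ++it;
            }
        }
        for (auto& t : expired) t.callback();  // ...then run them in order
    }
}; 
```

Because the callbacks only ever run inside manage(), the app's main loop decides how often timers are checked; other work on the lcore is never interrupted by a timer.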
You can specify SQS as an event source for Lambda functions, with the option of defining a batch window duration.
You can also specify the WaitTimeSeconds for a ReceiveMessage call.
What are the key differences between these two settings?
What are the use cases?
They're fundamentally different.
The receive message wait time setting determines whether your application does long or short polling. You should (almost) always opt for long polling, as it helps reduce your costs; the documentation explains how.
It can be set in 2 different ways:
on the queue level, by setting the ReceiveMessageWaitTimeSeconds attribute
on the request level, by setting the WaitTimeSeconds parameter on individual ReceiveMessage calls
It determines how long your application will wait for a message to become available in the queue before returning an empty result.
On the other hand, you can configure an SQS queue as an event source for Lambda functions by adding it as a trigger.
When creating an SQS trigger, you have 2 optional fields:
batch size (the number of messages in each batch to send to the function)
batch window (the maximum amount of time to gather SQS messages before invoking the function, in seconds)
The batch window setting maps to the MaximumBatchingWindowInSeconds attribute of the SQS event source mapping.
It's the maximum amount of time, in seconds, that the Lambda poller waits to gather the messages from the queue before invoking the function. The batch window just ensures that more messages have accumulated in the SQS queue before the Lambda function is invoked. This increases the efficiency and reduces the frequency of Lambda invocations, helping you reduce costs.
It's important to note that it's defined as a maximum because waiting for the full window is not guaranteed.
As per the docs, your Lambda function may be invoked as soon as any of the below are true:
the batch size limit has been reached
the batching window has expired
the batch reaches the payload limit of 6 MB
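Those three conditions amount to a simple "flush on whichever threshold is hit first" rule. The sketch below is illustrative only (the function and its parameters are made up, not AWS code); the 6 MB constant mirrors the documented payload limit:

```cpp
#include <cstddef>

// Hedged sketch of the documented invocation conditions: the Lambda poller
// invokes the function as soon as ANY of these thresholds is reached.
constexpr std::size_t kMaxPayloadBytes = 6 * 1024 * 1024;  // 6 MB limit

bool should_invoke(std::size_t batched, std::size_t batch_size,
                   double elapsed_s, double batch_window_s,
                   std::size_t payload_bytes) {
    return batched >= batch_size               // batch size limit reached
        || elapsed_s >= batch_window_s         // batching window expired
        || payload_bytes >= kMaxPayloadBytes;  // 6 MB payload limit reached
}
```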
To conclude, both features are used to control how long something waits but the resulting behaviour differs.
In the first case, you're controlling how long the poller (your application) will wait for a message to show up in your SQS queue; as soon as one does, the call returns. You could set this value to 10 seconds, but if a message is detected on the queue after 5 seconds, the call returns then. You can change this value per request, or set a universal value at the queue level. You can take advantage of long (or short) polling with or without Lambda functions, as it's available via the AWS API, console, CLI and any official SDK.
In the second case, you're controlling how long the poller (the built-in Lambda poller) can wait before actually invoking your Lambda to process the messages. You could set this value to 10 seconds, and even if a message is detected on the queue after 5 seconds, it may still not invoke your Lambda. The actual moment your function is invoked also depends on the batch size and payload limits. This value is naturally set at the Lambda level, not per message, and the option is only available when using Lambda functions.
You can't use both together, as long/short polling is for a constantly running application or for one-off calls. A Lambda function cannot poll SQS for more than 15 minutes, and even that requires a manual invocation.
For Lambda functions, you would use native SQS event sourcing and for any other service/application/use case, you would manually integrate SQS.
They're the same in the sense that both aim to help you ultimately reduce costs, but very different in terms of where you can use them.
A NetShareEnum call sometimes takes upwards of 30 seconds, while successful connections generally take less than a second. Is there any way to set a manual timeout?
The documentation is quite silent on the subject. The protocol includes a timeout, but it seems to be the actual connection timeout rather than a failure timeout. I found SMB timeouts, which seem to be configurable to a degree (via registry settings), but I'd rather not mess with a user's default timeouts.
If we can't set a manual timeout, is it acceptable to spawn a worker thread to run the call and kill that thread after a custom timeout (using WaitForSingleObject and TerminateThread)? Is there any possibility of crashing due to killing a thread that is running only that call?
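For comparison, here is a portable sketch of the worker-thread-with-timeout idea using only the C++ standard library (no Win32 calls, so NetShareEnum itself is not shown). On timeout the worker is abandoned (detached) rather than terminated, since TerminateThread can leave locks held and heap state corrupted:

```cpp
#include <chrono>
#include <future>
#include <memory>
#include <optional>
#include <thread>

// Run f on a worker thread and wait up to `timeout` for its result.
// Standard-library stand-in for WaitForSingleObject on a worker thread;
// on timeout we give up and let the detached worker finish on its own,
// instead of killing it the way TerminateThread would.
template <typename F>
auto run_with_timeout(F f, std::chrono::milliseconds timeout)
    -> std::optional<decltype(f())> {
    using R = decltype(f());
    auto task = std::make_shared<std::packaged_task<R()>>(std::move(f));
    auto fut = task->get_future();
    std::thread([task] { (*task)(); }).detach();  // run f off-thread
    if (fut.wait_for(timeout) == std::future_status::timeout)
        return std::nullopt;  // gave up; worker keeps running detached
    return fut.get();
}
```

The trade-off: a slow call still occupies a thread until it returns on its own, but nothing is torn down mid-execution.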
The maximum amount of time the pollForActivityTask method stays open polling for requests is 60 seconds. I am currently scheduling a cron job every minute to call my activity worker file so that my activity worker machine is constantly polling for jobs.
Is this the correct way to have continuous queue coverage?
The way the Java Flow SDK does it: you create an ActivityWorker and give it a task list, domain, activity implementations, and a few other settings, including setPollThreadCount and setTaskExecutorSize. The polling threads long-poll and then hand work over to the executor threads, to avoid blocking further polling. You call start on the ActivityWorker to boot it up, and when you want to shut the workers down, you call one of the shutdown methods (usually best to call shutdownAndAwaitTermination).
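The poller/executor split described above can be sketched in a few dozen lines. This is an illustrative model, not the Flow SDK (the Worker class and its method names are made up); a real poller thread would call submit() each time its long poll returns a task:

```cpp
#include <atomic>
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// Model of the poller/executor split: pollers block on long polls and hand
// each received task to a fixed pool of executor threads via submit(), so a
// slow activity never stalls further polling. Illustrative only.
class Worker {
    std::mutex m;
    std::condition_variable cv;
    std::deque<std::function<void()>> tasks;
    std::atomic<bool> stopping{false};
    std::vector<std::thread> executors;
public:
    explicit Worker(int executor_count) {  // roughly setTaskExecutorSize
        for (int i = 0; i < executor_count; ++i)
            executors.emplace_back([this] {
                for (;;) {
                    std::unique_lock<std::mutex> lk(m);
                    cv.wait(lk, [&] { return stopping || !tasks.empty(); });
                    if (stopping && tasks.empty()) return;  // drained: exit
                    auto t = std::move(tasks.front());
                    tasks.pop_front();
                    lk.unlock();
                    t();  // run the activity outside the lock
                }
            });
    }
    void submit(std::function<void()> t) {  // called by a polling thread
        { std::lock_guard<std::mutex> lk(m); tasks.push_back(std::move(t)); }
        cv.notify_one();
    }
    void shutdown_and_await() {  // roughly shutdownAndAwaitTermination
        { std::lock_guard<std::mutex> lk(m); stopping = true; }
        cv.notify_all();
        for (auto& e : executors) e.join();
    }
};
```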
Essentially your workers are long-lived and need to deal with a few factors:
New versions of Activities
Various tasklists
Scaling independently on tasklist, activity implementations, workflow workers, host sizes, etc.
Handle error cases and deal with polling
Handle shutdowns (in case of deployments and new versions)
I ended up using a solution with another script file that is called by a cron job every minute. This file checks whether an activity worker is already running in the background (if so, I assume a workflow execution is already being processed on the current server).
If no activity worker is there, the previous long poll has completed and we launch the activity worker script again. If an activity worker is already present, the previous poll found a workflow execution and started processing it, so we refrain from launching another activity worker.
I need to build a thread pool with scheduling priorities: all running threads have the same priority in terms of CPU time and OS priority, but when it comes to picking the next task to execute, the one with the highest priority goes first.
I've decided to try boost::asio, as it has a thread pool that looks good. I've looked over the prioritized handlers example in the asio documentation, but I don't like it because it doesn't limit the number of threads and I have to schedule the tasks manually. What I need is a fixed number of threads that take tasks from a queue, so I can create a single pool in my application and then add tasks at any time during the application's lifetime.
What would be sufficient is getting some notification from the asio::io_service when a task is finished; the handler of that notification could go and find the next task with the highest priority, and post it to the service.
Is that possible?
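For reference, the behaviour being asked for (a fixed number of workers that always pick the highest-priority pending task) can be sketched without asio, using plain std::thread and a std::priority_queue. This is a minimal illustration of the requirement under those assumptions, not an asio-based answer:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Fixed-size pool draining a priority queue: whenever a worker becomes
// free, it takes the highest-priority pending task. Tasks can be posted
// at any time during the pool's lifetime.
struct Task {
    int priority;
    std::function<void()> fn;
    bool operator<(const Task& o) const { return priority < o.priority; }
};

class PriorityPool {
    std::mutex m;
    std::condition_variable cv;
    std::priority_queue<Task> q;
    bool stopping = false;
    std::vector<std::thread> threads;
public:
    explicit PriorityPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            threads.emplace_back([this] {
                for (;;) {
                    std::unique_lock<std::mutex> lk(m);
                    cv.wait(lk, [&] { return stopping || !q.empty(); });
                    if (stopping && q.empty()) return;
                    Task t = q.top();  // highest priority pending task
                    q.pop();
                    lk.unlock();
                    t.fn();
                }
            });
    }
    void post(int priority, std::function<void()> fn) {
        { std::lock_guard<std::mutex> lk(m); q.push({priority, std::move(fn)}); }
        cv.notify_one();
    }
    ~PriorityPool() {  // drains remaining tasks, then joins the workers
        { std::lock_guard<std::mutex> lk(m); stopping = true; }
        cv.notify_all();
        for (auto& t : threads) t.join();
    }
};
```

Note the ordering guarantee only applies to tasks still waiting in the queue; once a task has been handed to a worker, it runs to completion regardless of what is posted afterwards.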
I am trying to control a service from within an application. Starting the service via StartService (MSDN) works fine; the service needs about 10 seconds to start, but StartService returns control to the main application immediately.
However, stopping the service via ControlService (MSDN; AFAIK there is no StopService) blocks the main application for the entire time until the service has stopped, which takes about 10 seconds.
Start: StartServiceW(handle, 0, NULL)
Stop: ControlService(handle, SERVICE_CONTROL_STOP, status)
Is there a way to stop a Windows service in a non-blocking / asynchronous fashion?
I would probably look at stopping the service in a new thread. That will eliminate the blocking of your main thread.
The SCM processes control requests in a serialized manner. If any service is busy processing a control request, ControlService() will be blocked until the SCM can process the new request. The documentation states as much:
The SCM processes service control notifications in a serial fashion—it will wait for one service to complete processing a service control notification before sending the next one. Because of this, a call to ControlService will block for 30 seconds if any service is busy handling a control code. If the busy service still has not returned from its handler function when the timeout expires, ControlService fails with ERROR_SERVICE_REQUEST_TIMEOUT.
The service is doing its cleanup in its control handler routine. That's OK for a service that will only take a fraction of a second to exit, but a service that's going to take ten seconds should definitely be setting a status of STOP_PENDING and then cleaning up asynchronously.
If this is your own service, you should correct that problem. I'd start by making sure that all of the cleanup is really necessary; for example, there's no need to free memory before stopping (unless the service is sharing a process with other services). If the cleanup really can't be made fast enough, launch a separate thread (or signal your main thread) to perform the service shutdown and set the service status to STOP_PENDING.
If this is someone else's service, the only solution is to issue the stop request from a separate thread or in a subprocess.