I have a function that I need to run in multiple instances, but each instance depends on variables produced by the previous one. There are 5 variables, produced at different stages of the function, that the next instance needs in order to run, so I want to create 5 instances of the function, because the function that feeds data into this one is much faster.
What I am in the process of doing is creating a class with a buffer, where each stage notifies the others when it has been computed, using condition variables and, of course, a mutex for locking.
What is the fastest way to do this, minimizing any time lost, given that the whole goal of creating multiple instances of this function is to process data in a multi-threaded manner?
I have a requirement where I need to send out an email based on triggers that get activated dynamically. I have outlined my current code architecture in the image below.
In the image below, I have a class Sample.cpp which has a common function that sends an email. This common function is called in a single thread from the function triggerFun in EmailClass. EmailClass is called dynamically by multiple classes, as shown in the image below.
My requirement is to synchronize the usage of the common function across multiple threads: I want only one thread to call the common function at a time. After the first thread has finished using the common function, the second thread should be allowed to execute it, and so on.
Could you please let me know if there is any way I can synchronize the threads' usage of the function common?
[Image: EmailImage]
You can use std::mutex.
#include <mutex>

void common() {
    static std::mutex m;                 // one mutex shared by every caller
    std::lock_guard<std::mutex> lck(m);  // held until common() returns
    // do something
}
I currently have a Firebase Function, and to do its task it needs a key. This key changes every 4-20 days, and I want the functions to be able to update the key themselves. What would be the best way to do this? Getting the key is a slow network call to a 3rd-party API, so I'd rather store it. Currently I have an environment variable that I change myself when I find the functions failing, but I would rather have this process happen automatically.
I don't think I can change the environment variables at run time, so is the only option to store the value in my database and query for it every time I need it? This seems a bit slow, but I'm not sure.
is the only option to store the value in my database and query for that every time I need it?
Cloud Functions is stateless and will not retain any information outside of the code and data that was deployed with the function. So, you will need some sort of persistent storage to hold the key. It doesn't have to be a database. It can be any persistent storage you want.
You can certainly just read the key once (from wherever you choose to store it) and store it in memory if it was not previously read, for as long as you are allowed to keep using it without refreshing the value. Memory does persist for some time per server instance, but it is not shared among all of your function invocations, as each one might run on a different instance.
As informed to me by Paul Rudin on the Google Cloud Slack, you could cache the key as a global variable which is, in practice, often reused: https://cloud.google.com/functions/docs/bestpractices/tips#use_global_variables_to_reuse_objects_in_future_invocations
Use global variables to reuse objects in future invocations
There is no guarantee that the state of a Cloud Function will be preserved for future invocations. However, Cloud Functions often recycles the execution environment of a previous invocation. If you declare a variable in global scope, its value can be reused in subsequent invocations without having to be recomputed.
This way you can cache objects that may be expensive to recreate on each function invocation. Moving such objects from the function body to global scope may result in significant performance improvements. The following example creates a heavy object only once per function instance, and shares it across all function invocations reaching the given instance:
// Global (instance-wide) scope
// This computation runs at instance cold-start
const instanceVar = heavyComputation();

/**
 * HTTP function that declares a variable.
 *
 * @param {Object} req request context.
 * @param {Object} res response context.
 */
exports.scopeDemo = (req, res) => {
  // Per-function scope
  // This computation runs every time this function is called
  const functionVar = lightComputation();
  res.send(`Per instance: ${instanceVar}, per function: ${functionVar}`);
};
If I call one Lambda function with one request, but within that function three calls are made to different functions, would that count as 4 calls, or just one call, since it's based on one request?
So if the count is 4, then (from an economic standpoint) wouldn't it be better to write one long function instead of many small functions, despite it being ill-advised from a design-pattern standpoint?
Every invocation of a Lambda function counts. It doesn't matter whether you call it from the console, from a CLI, from an event source, or from another Lambda function: each call counts as an invocation.
Personally, I would focus on writing my Lambda functions in a way that makes sense and allows me to use them effectively. If you find that costs are a factor later, you can always adjust then.
I have, in a Server object, multiple threads doing the same task. Those threads are initialized with a Server::* routine.
In this routine there is an infinite loop with some processing.
I was wondering whether it is thread safe to use the same method for multiple threads. I'm not worried about the fields of the class; if I want to read or write them, I will use a mutex. But what about the routine itself?
Since a function is an address, will those threads be running in the same memory zone?
Do I need to create a method with the same code for every thread?
PS: I use std::thread(&Server::Task, this)
There is no problem with two threads running the same function at the same time (whether it's a member function or not).
In terms of instructions, it's similar to if you had two threads reading the same field at the same time - that's fine, they both get the same value. It's when you have one writing and one reading, or two writing, that you can start to have race conditions.
In C++ every thread is allocated its own call stack. This means that all local variables which exist only in the scope of a given thread's call stack belong to that thread alone. However, in the case of shared data or resources, such as a global data structure or a database, it is possible for different threads to access these at the same time. One solution to this synchronization problem is to use std::mutex, which you are already doing.
While the function itself occupies a single address in memory, you aren't writing to it from multiple locations: the function's code is immutable, and local variables scoped inside the function are allocated on each thread's own stack.
If your writes are protected and your reads don't fetch stale data, you're as safe as you could possibly need on most architectures and implementations out there.
Behind the scenes, int Server::Task(std::string arg) is very similar to int Server__Task(Server* this, std::string arg). Just as multiple threads can execute the same function, multiple threads can also execute the same member function, even with the same arguments.
A mutex ensures that no conflicting changes are made, and that each thread sees every prior change. But since code does not change, you don't need a mutex for it, just as you don't need a mutex for string literals.
I have a C++ multi-threaded application which runs tasks in separate threads. Each task has an object which handles and stores its output. Each task creates different business logic objects and possibly other threads or thread pools.
What I want is to somehow provide an easy way for any of the business logic objects run by a task to access that task's output, without manually passing the "output" object to each business logic object.
What I see is creating an output singleton factory and storing a task_id in TLS. But the problem is that when the business logic creates a new thread or thread pool, those threads would not have the task_id in their TLS. I would then need access to the parent thread's TLS.
The other way is to simply grab all output since the task's start. There would be output from other tasks mixed in during that time, but at least it's better than nothing...
I'm looking for any suggestions or ideas of clean and pretty way of solving my problem. Thanks.
upd: yeah, it is not a singleton, I agree. I just want to be able to access this object like this:
output << "message";
And that's it. No worrying about passing pointers to the output object between business logic classes. I need a global output object per task.
From the application's point of view, they are not singletons, so why treat the objects like singletons?
I would make a new instance of the output storer and pass a (smart?) pointer to the new thread. The entry function may put the pointer in TLS, thus making the instance global per thread (I don't think that this is a wise design decision, but it is what was asked). When making a new (sub-?)thread, the pointer can again be passed. So, in my view, no singletons or factories are needed.
If I understand you correctly, you want multiple class instances (not necessarily of the same class) to all access a common data pool that needs to be thread safe. I can think of a few ways to do this. The first idea is to put the data pool in a class that each of the other classes contains. This data pool class actually stores its data in a static member, so there is only one copy of the data even though there will be more than one instance of the data pool class. The class then has accessor methods which access this static data pool (so that the sharing is transparent). To make it thread safe, you would require the access to go through a mutex or something like that.