Say I'm working with some simple processes in erl:
1> Fun = fun(F) -> F(F) end.
#Fun<erl_eval.6.82930912>
2> Pid = spawn(fun() -> Fun(Fun) end).
<0.178.0>
3> f(Pid).
What happens when I call f(Pid)? Does the process exit, or do I just lose my reference to it?
According to the documentation, f(Pid) removes the binding of the variable Pid; the process is not stopped.
You can test it this way: suppose you have a gen_server called myserver, based on the skeleton provided by the Emacs Erlang mode.
1> {ok, Pid} = myserver:start_link().
{ok,<0.39.0>}
2> f(Pid).
ok
3> gen_server:call(pid(0,39,0), mycall).
ok
4> gen_server:call(myserver, mycall).
ok
As you can see, even though we called f(Pid), we can still contact the process using its pid or the atom it was registered under (in our case, the module name).
I am using boost.process, trying to spawn a child process and do some communication between the parent and child processes. I create an unnamed socket pair and pass one end to the child process. I still want to use limit_handles to close all other file descriptors, but also preserve one end of this socket pair. How can I achieve this in boost.process? I did not find any example of how to do it.
I think I figured out how to do this. Basically, I need to create a new initializer, like this:
struct PreservedFds : boost::process::detail::handler,
                      boost::process::detail::uses_handles {
    std::vector<int> fds;

    PreservedFds(std::vector<int> pfds) : fds(pfds) {}

    std::vector<int>& get_used_handles() {
        return fds;
    }
};
Then I can initialize my child process with the following:
std::vector<int> pfds{5, 7, 9};
boost::process::child c("/usr/bin/ls", "/home", PreservedFds(pfds),
                        boost::process::limit_handles);
Can someone please explain the difference between Kotlin Coroutine's ExecutorCoroutineDispatcher and CoroutineDispatcher from practical point of view, i.e. in which scenarios to use one against another?
So far I've been using Dispatchers, but (as far as I can see) it can't give me a single background thread. That's the reason I'm using newSingleThreadExecutor().
What I've noticed, though, is that my main process never ends while using ExecutorCoroutineDispatcher (1) (with CoroutineDispatcher it finishes as expected (2)). After some investigation, it appears that I should call close() on the ExecutorCoroutineDispatcher for the main process to finish (3). With CoroutineDispatcher you don't have to do this; it doesn't even have a close() method (4).
Is CoroutineDispatcher closed automatically? Why do we have closure process for ExecutorCoroutineDispatcher, but not for CoroutineDispatcher?
Below is the code I used for testing:
fun main() = runBlocking<Unit> {
    val dispatcher1 = Executors.newSingleThreadExecutor().asCoroutineDispatcher() // (1) <-- main process runs indefinitely w/o closing dispatcher1 (3)
    val dispatcher2 = Dispatchers.Unconfined // (2)

    println("Start")
    launch(dispatcher1) {
        println("Child")
        delay(1000)
        printInfo(coroutineContext, this)
    }.join()
    println("End")

    dispatcher1.close() // (3) <-- need to close dispatcher1 for the main process to finish, otherwise it runs indefinitely
    // dispatcher2.close() // (4) <-- dispatcher2 doesn't have method 'close()'
}
The difference is not in the dispatcher type, but in how the underlying Java ExecutorService is configured. The default shared executors use daemon threads, which don't prevent the JVM from shutting down. If you want to, you can get the same behavior from your own executors:
val myExecutor = Executors.newSingleThreadExecutor { task ->
    Thread(task).also { it.isDaemon = true }
}
val myDispatcher = myExecutor.asCoroutineDispatcher()
suspend fun main() {
    withContext(myDispatcher) {
        println("On my dispatcher")
    }
}
I'm trying to run a list of futures concurrently (instead of in sequence) in Rust async-await (being stabilized soon), until any of them resolves to true.
Imagine having a Vec<File>, and a future to run for each file yielding a bool (may be unordered). Here is a simple sequential implementation:
async fn my_function(files: Vec<File>) -> bool {
    // Run the future on each file; return early if we received true
    for file in files {
        if long_future(file).await {
            return true;
        }
    }
    false
}

async fn long_future(file: File) -> bool {
    // Some long-running task here...
}
This works, but I'd like to run a few of these futures concurrently to speed up the process. I came across buffer_unordered() (on Stream), but couldn't figure out how to use it here. As I understand it, something like join can also be used to run futures concurrently, given that you have a multithreaded pool. But I don't see how that could be used efficiently here.
I attempted something like this, but couldn't get it to work:
let any_true = futures::stream::iter(files)
    .buffer_unordered(4) // Run up to 4 concurrently
    .map(|file| long_future(file).await)
    .filter(|stop| stop) // Only propagate true values
    .next() // Return early on first true
    .is_some();
Along with that, I'm looking for something like any as used in iterators, to replace the if-statement or the filter().next().is_some() combination.
How would I go about this?
I think that you should be able to use select_ok, as mentioned by Some Guy. An example, in which I've replaced the files with a bunch of u32 for illustration:
use futures::future::FutureExt;

async fn long_future(file: u32) -> bool {
    true
}

async fn handle_file(file: u32) -> Result<(), ()> {
    let should_stop = long_future(file).await;
    // Would be better if there were something more descriptive here
    if should_stop {
        Ok(())
    } else {
        Err(())
    }
}

async fn tims_answer(files: Vec<u32>) -> bool {
    let waits = files.into_iter().map(|f| handle_file(f).boxed());
    futures::future::select_ok(waits).await.is_ok()
}
My environment is C++ for Linux-Xenomai on ARM gnueabi. After spawning a new pthread successfully, I discovered that the class instance was out of scope to the thread: accessing the instance's objects, variables, structures, etc. from the thread returned arbitrary values and often caused a segmentation fault.
After days of onerous searching for a solution on the net, I took a guess and tried using the 'this' pointer as the argument to pthread_create. And voilà! The class instance became visible to the thread. The question is: why?
void*(*server_listener_fptr)(void*); // declare the function ptr
server_listener_fptr = reinterpret_cast<void*(*)(void*)>(&UDP_ClientServer::server_listener);
iret = pthread_create(&s_thread, NULL, server_listener_fptr, this);
There is a simple reason why this effectively launches a class instance as an independent thread of the parent process. The debug execution log below sheds some light on the situation. The UDP_ClientServer instance's ::init() method is entered, and it creates a ::server_listener(void*) thread, which is a method of that same UDP_ClientServer instance. The ::init() method that spawned the thread then exits (UDP_ClientServer::init() exit ...), followed shortly by the instance method ::server_listener(void*) announcing itself as a thread (UDP_ClientServer::server_listener(void*) entry ...).
# ./xeno_pruss 37 -INOAUTOENA -FREQ 100
-> -IRQ 37
-> -I_NOAUTOENA
-> -FREQ 100.000000
-> Starting UDP_ClientServer...
-> UDP_ClientServer::init() entry ...
-> UDP Server on wlan0 IP: 192.168.1.10 PORT: 9930
-> UDP Server fd: 3
-> Bind to IP address: 0.0.0.0
-> UDP_ClientServer::init() creating thread ::server_listener(void*) ...
-> UDP_ClientServer::init() exit ...
-> main - Opening server on IRQ 37
-> main - rt_intr_create - interrupt object created on IRQ 37
-> UDP_ClientServer::server_listener(void*) entry ...
-> rt_task_create created task MyIrqServer
-> disabling and reseting the I2C1 peripheral, writing I2C_CON = 0x0
-> disabling and reseting the I2C2 peripheral, writing I2C_CON = 0x0
-> rt_task_start started thread MyIrqServer
-> started real-time interrupt server thread for IRQ37
-> pausing ...
-> *** irq_server entry ***
-> Task name: MyIrqServer
-> initializing the pru subsystem driver
-> prussdrv_open() opened pru interrupt...
-> prussdrv_map_prumem completed...
-> initializing 16 x 32-bit words of p_pru_shared_memu ...
-> current value # p_pru_shared_memu[0] : 0
-> current value # p_pru_shared_memu[0] : 10000000
-> mapped device (Success)
-> *** mem mapped CM_PER registers...
-> enabling I2C1 peripheral clocking, writing CM_PER_I2C1_CLKCTRL = 0x02
-> CM_PER_I2C1_CLKCTRL: 00000002
The thread is created as shown below. The 'this' pointer is provided as the user argument that pthread_create passes to the class instance method ::server_listener.
printf("\t-> UDP_ClientServer::init() creating thread ::server_listener(void*) ...\n");
void*(*server_listener_fptr)(void*); // declare the function ptr
server_listener_fptr = reinterpret_cast<void*(*)(void*)>(&UDP_ClientServer::server_listener);
iret = pthread_create(&s_thread, NULL, server_listener_fptr, this);
The spawned ::server_listener thread never exits, as seen below:
void* UDP_ClientServer::server_listener(void* ptr)
{
    printf("\t-> UDP_ClientServer::server_listener(void*) entry ...\n");
    for(;;) /* Run forever */
    {
This of course gives the programmer the ability to describe complex state machines as robust concurrent, rather than sequential, processes, similar to the methods employed in writing RTL in VHDL or Verilog.
The answer to the question is simply that, for a class

class My_Class
{
public:
    My_Class();
    void func(void);
};

and a declaration of a class object instance

My_Class instance;

a call to a member function on that instance,

instance.func();

is, conceptually, compiled to something like

func(&instance);

where the passed address '&instance' becomes the member function's 'this' pointer.
I'm looking for a way to perform cross-thread operations the way SendMessage allows. In other words, how to have a thread execute some function in another thread. But I want to do it without SendMessage as that requires a window which is not always available.
Synchronously or asynchronously is fine.
.NET does it with the System.Windows.Threading.Dispatcher class, so surely there's a way?
So I'm guessing we're talking about the Windows OS here, right? You should specify that in your question; a solution for Windows might be different from a solution for Linux, for example.
Now, regarding your question: any solution to this problem will force your thread(s) either to wait for some event to happen (a task being enqueued) or to poll for tasks endlessly. So we're talking about either some kind of mutex, a condition variable, or a special sleeping function.
One simple, non-portable way of sending "tasks" to other threads is to use the built-in Win32 mechanism of APC (Asynchronous Procedure Calls). It utilizes the functions QueueUserAPC and SleepEx; I have mini-tested this solution on Windows 10 + Visual Studio 2015:
#include <Windows.h>
#include <functional>
#include <memory>
#include <stdexcept>

namespace details {
    void waitforTask() noexcept {
        SleepEx(INFINITE, TRUE);
    }

    void __stdcall executeTask(ULONG_PTR ptr) {
        if (ptr == 0) {
            return;
        }
        std::unique_ptr<std::function<void()>> scopedPtr(
            reinterpret_cast<std::function<void()>*>(ptr));
        (*scopedPtr)();
    }
}

template<class F, class... Args>
void sendTask(void* threadHandle, F&& f, Args&&... args) {
    auto task = std::make_unique<std::function<void()>>(
        std::bind(std::forward<F>(f), std::forward<Args>(args)...));
    const auto res = QueueUserAPC(&details::executeTask,
                                  threadHandle,
                                  reinterpret_cast<ULONG_PTR>(task.get()));
    if (res == 0) {
        throw std::runtime_error("sendTask failed.");
    }
    task.release();
}
Example use:
std::thread thread([] {
    for (;;) {
        details::waitforTask();
    }
});

sendTask(thread.native_handle(), [](auto literal) {
    std::cout << literal;
}, "hello world");
This solution also shows how to use Win32 without contaminating the business logic, written in C++, with unrelated Win32 code.
This solution can also be adapted into a cross-platform one: instead of using an internal, semi-documented Win32 task queue, one can build one's own task queue with std::queue and std::function<void()>, and instead of sleeping in an alertable state, one can wait on a std::condition_variable. This is what any thread pool does behind the scenes in order to get and execute tasks. If you do want a cross-platform solution, I suggest searching for "C++ thread pool" to see examples of such a task queue.