I have an out-of-process COM server, activated with CLSCTX_LOCAL_SERVER as the context and registered with REGCLS_MULTIPLEUSE as the connection type. This results in a single server process being reused by multiple calls from multiple clients.
I now want to make some changes to the server which unfortunately cannot work with a single process shared amongst clients (there are reasons for this, but they're long-winded). I know you can register the server with REGCLS_SINGLEUSE as the connection type, which creates a new process for the OOP server on each activation. This solves my issue, but is a non-starter in terms of process usage: multiple calls over short periods result in many processes, and this particular server might be hit incredibly often.
Does anyone happen to know of a mechanism to mix those two connection types? Essentially what I want is a single server process per calling process (i.e. client one creates a process, and that process is reused for its subsequent calls; client two then calls the server, and a new process is created for it). I suspect I could achieve it by forcing a REGCLS_SINGLEUSE server to stay open permanently in the client, but that is neither elegant nor possible (since I can't change one of the clients).
Thoughts?
UPDATE
As expected, it seems there is no way to do this. If time and resources permitted, I would most likely convert this to an in-proc solution. For now, though, I'm going with the new behaviour applying to every calling client. Fortunately, the impact of this change is incredibly small and acceptable to the clients. I'll look into more drastic and appropriate changes later.
NOTE
I've marked Hans' reply as the answer, as it does in fact give a solution to the problem which maintains the out-of-process approach. I merely don't have the capacity to implement it.
COM does not support this activation scenario. It is supposed to be covered by an in-process server, so do make sure that isn't the way you want to go, given its rather major advantages.
Using REGCLS_SINGLEUSE is the alternative, but it requires extending your object model to avoid the storm of server instances you would otherwise create. The Application coclass is the boilerplate approach: give it factory methods that return instances of your existing interfaces.
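A rough sketch of what that shape could look like (all names here are hypothetical, not from the original answer):

#include <unknwn.h>

struct IWidget;   // stand-ins for your existing interfaces
struct IReport;

// Hypothetical Application coclass interface. The class is registered
// REGCLS_SINGLEUSE, so each client activation gets its own server
// process; every further object is handed out by these factory methods
// and therefore lives in that same process instead of spawning more.
struct IApplication : IUnknown {
    virtual HRESULT STDMETHODCALLTYPE CreateWidget(IWidget** ppWidget) = 0;
    virtual HRESULT STDMETHODCALLTYPE CreateReport(IReport** ppReport) = 0;
};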
I'll mention a drastically different approach, one I used when I wanted to solve the same problem but required an out-of-process server to bridge a bitness gap. You are not stuck with COM launching the server process for you; a client can start it as well, provided of course that it knows enough about the server's installation location. The client then has complete control over the server instance. The server called CoRegisterClassObject() with an altered CLSID: I XORed part of the GUID with the process ID. The client did the same, so it always connected to the correct server. Extra code was required in the client to ensure it waited long enough for the server to register its object factories. Worked well.
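A minimal sketch of that trick, assuming a command-line handshake (the helper name, CLSID_MyServer, and the synchronization details are illustrative, not from the original):

#include <windows.h>

// Client and server derive the same per-client CLSID by folding the
// client's process id into part of a base GUID.
CLSID MakePerProcessClsid(const CLSID& base, DWORD clientPid)
{
    CLSID clsid = base;
    clsid.Data1 ^= clientPid;   // XOR part of the GUID with the PID
    return clsid;
}

// Client side: launch the server yourself, passing GetCurrentProcessId()
// on its command line, wait until it signals that registration is done
// (e.g. via a named event), then activate with the altered CLSID:
//   CLSID clsid = MakePerProcessClsid(CLSID_MyServer, GetCurrentProcessId());
//   CoCreateInstance(clsid, nullptr, CLSCTX_LOCAL_SERVER,
//                    IID_IMyInterface, (void**)&pItf);
//
// Server side: register the same altered CLSID for the PID it was given:
//   CoRegisterClassObject(clsid, pFactory, CLSCTX_LOCAL_SERVER,
//                         REGCLS_MULTIPLEUSE, &cookie);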
Apologies if this question seems a tad open-ended or too vague; I'm not the best C/C++ programmer (either language is acceptable for solving this problem).
Suppose there are two running processes, a client and a server (they could also be viewed as producer and consumer, but I think client and server fit better here). The server is a "child" (sort of; see below) process of the client, created to act as an offload. Over time the client generates several jobs, which it offloads to the server it created. Depending on the job, the server may or may not send information about job completion back to the client. As an aside, some might suggest that this could be done with threads; for reasons I won't get into, threads will not work here. The client and server cannot assume shared memory (there may or may not be shared memory, since the client and server could be two different machines; the code I'm writing should support both possibilities).
The server has a very long initialization period and thus must always be running, hence the idea of making the second process a server. It must therefore always be listening for messages from the client. A simple pseudo-code/C example is given below.
int main() {
    ...
    pid_t client_pid = getpid();   /* pid of the client (the parent) */
    pid_t pid = fork();

    if (pid == 0) {
        /* Child: the server.
         * Sets up a connection with the client based on whatever
         * connection type has been given (shared memory, sockets,
         * etc.). Nothing is known about the transport except that all
         * communication is handled by the wait_For_Jobs() and
         * generate_Jobs() functions. */
        start_Server(client_pid, connection_type);
        wait_For_Jobs();
    } else {
        /* Parent: the client. Gets the information needed to send
         * messages to the server, then starts producing jobs. */
        contact_info = wait_for_connection();
        generate_Jobs(contact_info);
    }
}
This is a very, very rough outline of what I want. My question relates to the wait_For_Jobs function. Unfortunately, the connection_type will not be known until runtime, so this question might have several different answers depending on which communication method is used (i.e. shared memory, sockets, etc.). For simplicity, then, assume shared memory is the communication type (say, Boost.Interprocess). With this in mind, what is the best and most efficient way for the server to wait for input from the client? One possible approach is a while loop in the fashion given below.
void wait_For_Jobs() {
    while (true) {
        if (check_If_Message_Received_Over_Shared_Memory()) {
            /* handle message */
        }
    }
}
However, I suspect this will be very inefficient; the process is always "spinning its wheels". Somewhat of a fix would be to put the process to sleep at the end of the while loop for a period of time, but that isn't really different from just running the loop (in fact it's the same thing); it just lowers resource usage at the expense of response time. Ideally, the process should sit in some kind of standby mode and start computing once it receives a message. However, I'm not sure how you would do such a thing in C or C++. With that in mind, is there a better alternative?
If you use shared memory, you will need to block on a semaphore that is raised when another request appears.
If you use sockets, use a blocking receive.
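For the shared-memory case, a minimal sketch with Boost.Interprocess (segment and member names are made up; the point is only the blocking wait()/post() pattern, and a real program would create the segment once and handle cleanup):

#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <boost/interprocess/sync/interprocess_semaphore.hpp>
#include <new>

namespace bip = boost::interprocess;

struct SharedArea {
    bip::interprocess_semaphore job_ready;
    /* ... job payload / ring buffer would live here ... */
    SharedArea() : job_ready(0) {}
};

void wait_For_Jobs() {
    /* server side: create and map the segment, then block until posted */
    bip::shared_memory_object shm(bip::open_or_create, "job_shm", bip::read_write);
    shm.truncate(sizeof(SharedArea));
    bip::mapped_region region(shm, bip::read_write);
    SharedArea* shared = new (region.get_address()) SharedArea;

    for (;;) {
        shared->job_ready.wait();   /* sleeps in the kernel, no busy spin */
        /* handle the job now sitting in shared memory */
    }
}

/* Client side, after writing a job into the segment:
 *   shared->job_ready.post();    // wakes the server exactly once per job
 * With sockets instead, the equivalent is simply a blocking recv(). */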
I have a Web Service (in Java) on Oracle WebLogic 10.3 that does all kinds of database queries. Recently I started stress tests. It passed the repetition tests (invoking the WS several thousand times serially), but problems arose when concurrency testing began: as few as 2 concurrent calls result in errors. Proper tests made it look as though the WS wasn't able to handle concurrent calls at all, which obviously should not be the case. Errors included null pointer exceptions, closed connections or prepared statements, etc. I am a bit stumped, especially since I was unable to find any configuration options that could affect this; then again, my knowledge of WLS is quite limited.
Thanks for any suggestions in advance.
The answer you marked as correct is totally wrong.
The web service methods should not be made synchronized in order to be thread-safe.
WebLogic's web service implementations are multithreaded.
It's the same as for servlets:
"Servlets are multithreaded. Servlet-based applications have to recognize and handle this appropriately. If large sections of code are synchronized, an application effectively becomes single threaded, and throughput decreases dramatically."
http://www.ibm.com/developerworks/websphere/library/bestpractices/avoiding_or_minimizing_synchronization_in_servlets.html
You might want to synchronize particular code inside the WS, depending on what it does.
Does it make sense to synchronize a web-service method?
Just so there is a clear answer.
When there are several concurrent calls to a given Web Service (in this case SOAP/JAX-WS was used) on WLS, the same object is used to serve all of them (no pooling or queuing), therefore the implementation must be thread-safe.
EDIT:
To clarify:
Assume there is a class attribute in the WebService implementation class generated by JDeveloper. If you modify this attribute in your web method (and then use it), it will cause synchronization problems when the method is called (i.e. the WS is invoked) concurrently. When I first started creating web services I thought the whole WebService object was created anew for each WS call, but this does not seem to be the case.
I'm writing a client-server application, and one of the requirements is that the server, upon receiving an update from one of the clients, be able to push the new data out to all the other clients. This is a C++ (Qt) application meant to run on Linux (both client and server), but I'm mostly looking for high-level conceptual ideas of how this should work (though low-level thoughts are welcome too).
Server:
It needs to (among its other duties) keep a socket open, listening for incoming packets from potentially n different clients, presumably on a background thread (I haven't written much socket code beyond some rinky-dink examples in school). Upon getting data from a client, it processes it and then spits it out to all its clients, right?
Of course, I'm not sure how it actually does this. I'm guessing it has to keep a persistent connection with every client (at least every active client), but I don't understand even conceptually how to maintain such a connection (or a list of these connections).
So, how should I approach this?
In general when you have multiple clients, there are a few ways to handle this.
First of all, in TCP, when a client connects to you it is placed into a queue until it can be serviced. This is a given; you don't need to do anything except call the accept system call to receive a new client. Once the client is accepted, you'll be given a socket which you use to read and write. Who reads or writes first is entirely dependent on your protocol, and both sides need to know the protocol (which is up to you to define).
Once you've got the socket, you can do a few things. In the simplest case, you just read some data, process it, write back to the socket, close the socket, and serve the next client. Unfortunately this means you can only serve one client at a time, so no "push" updates are possible. Another strategy is to keep a list of all the open sockets; any "update" then simply iterates over the list and writes to each socket. This presents a problem, though, because it only allows push updates (if a client sent a request, who would be watching for it?).
The more advanced approach is to assign one thread to each socket. In this scenario, each time a socket is created you spin up a new thread whose whole purpose is to serve exactly one client. This cuts down on latency and utilizes multiple cores (if available), but is far more difficult to program. It also doesn't scale forever: if you have 10,000 clients connecting, that's 10,000 threads, which gets to be too much. Pushing an update to a single client in this scenario is very simple (a thread just writes to its respective socket). Pushing to all of them at once is a little more tricky (it requires either a thread event or a producer/consumer queue, neither of which is much fun to implement).
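To make the thread-per-socket idea concrete, here is a minimal sketch (POSIX sockets and C++11 threads; the port number, buffer handling, and broadcast-on-every-request behaviour are placeholder choices, and all error handling is omitted):

#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <thread>
#include <mutex>
#include <vector>
#include <algorithm>

std::mutex g_mutex;
std::vector<int> g_clients;          // every connected client's socket

// Write the same payload to every connected client (the "push").
void push_to_all(const char* data, size_t len) {
    std::lock_guard<std::mutex> lock(g_mutex);
    for (int fd : g_clients)
        send(fd, data, len, 0);
}

// One thread per client: read requests, then broadcast updates.
void serve_client(int fd) {
    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0) {
        /* process the request, then push the resulting update */
        push_to_all(buf, (size_t)n);
    }
    std::lock_guard<std::mutex> lock(g_mutex);
    g_clients.erase(std::remove(g_clients.begin(), g_clients.end(), fd),
                    g_clients.end());
    close(fd);
}

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5000);     // arbitrary example port
    bind(listener, (sockaddr*)&addr, sizeof addr);
    listen(listener, 16);
    for (;;) {
        int fd = accept(listener, nullptr, nullptr);
        { std::lock_guard<std::mutex> lock(g_mutex); g_clients.push_back(fd); }
        std::thread(serve_client, fd).detach();
    }
}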
There are, of course, a million other ways to handle this (one process per client, a thread pool, a load-balancing proxy, you name it). Suffice it to say there's no way to cover all of these in one answer. I hope this answers your basic questions, let me know if you need me to clarify anything. It's a very large subject. However if I might make a suggestion, handling multiple clients is a wheel that has been re-invented a million times. There are very good libraries out there that are far more efficient and programmer-friendly than raw socket IO. I suggest libevent, which turns network requests into an event-driven paradigm (much more like GUI programming, which might be nice for you), and is incredibly efficient.
From what I understand, I think you need to keep an infinite loop going (at least until the program terminates) that answers connection requests from your clients. It would be best to add them to an array of some sort, use an event to see when a new client is added to that array, and wait for one of them to send data. Then you do what you have to do with that data and send the result back.
I am currently involved in the development of software that uses distributed computing to detect different events.
The current approach is: a dozen threads run simultaneously on different (physical) computers. Each event is assigned a number, and every thread broadcasts its detected events to the others and filters the relevant events from the incoming stream.
I feel very bad about that, because it looks awful, is hard to maintain, and could lead to performance issues when the system is upgraded.
So I am looking for a flexible and elegant way to handle this IPC, and I think Boost::Signals seems a good candidate; but I have never used it, and I would like to know whether it can provide encapsulation for network communication.
Since I don't know of any solution that will do that, other than Open MPI, if I had to do it I would first use Google's Protocol Buffers as my message container. With it, I could create an abstract base message with fields like source, dest, type, id, etc. Then I would use Boost.Asio to distribute those across the network, or over a named pipe/loopback for local messages. Perhaps, on each physical computer, a dedicated process could run just for distribution: each thread registers with it which message types it is interested in and what its own named pipe is called, and this process would know the IPs of all the other services.
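As a sketch of the envelope idea (in real code this would be a .proto definition serialized with Protocol Buffers; the field names here are assumptions):

#include <cstdint>
#include <string>

// Plain-struct stand-in for the abstract base message.
struct MessageEnvelope {
    uint32_t source;      // id of the emitting thread/host
    uint32_t dest;        // 0 could mean "broadcast"
    uint32_t type;        // event type number; receivers filter on this
    uint64_t id;          // per-source sequence number
    std::string payload;  // serialized event body
};

// The per-machine distributor process would keep something like
//   std::map<uint32_t /*type*/, std::vector<Subscriber>> registry;
// and forward each envelope to the matching local named pipes and to
// the remote distributors over Boost.Asio sockets.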
If you need IPC over the network then boost::signals won't help you, at least not entirely by itself.
You could try using Open MPI.
From an out-of-process COM object (LocalServer32), can I determine the client process that requested the creation of the object? To be specific, I need to get hold of the client process's command line.
This question arises because (due to poor standardisation, implementation and support) the potential 3rd-party clients of the object have a variety of idiosyncrasies which the object needs to work around.
To do this the object needs to be able to identify its current client.
Extending the interface of the COM object so that the client can identify itself is unfortunately not possible; or, to be more precise, the interface can be extended, but I won't be able to get the clients to call the extension.
Having looked into this further I suspect the answer is going to be "NO", but by all means tell me I'm wrong.
Using Process Explorer I can see that the parent process for my COM object is an instance of "svchost.exe", and not the client application.
Because COM server processes are shared by all clients with the same AppID, it's not possible to actually get the PID of the client application. As @Anders said, you can use CoImpersonateClient (or, better, call CoGetCallContext and interrogate the resulting IServerSecurity) to find the account and login session of the caller, but you cannot get at the process itself.
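A minimal sketch of interrogating the call context (error handling trimmed; note this yields the caller's account name, not its PID or command line):

#include <windows.h>
#include <objidl.h>

void LogCallerAccount()   // call from inside a COM method on the server
{
    IServerSecurity* pSec = nullptr;
    if (SUCCEEDED(CoGetCallContext(IID_IServerSecurity, (void**)&pSec))) {
        void* pPrivs = nullptr;
        /* For NTLMSSP/Kerberos, pPrivs points at the client principal
         * name ("DOMAIN\\user"); it must not be freed by the caller. */
        if (SUCCEEDED(pSec->QueryBlanket(nullptr, nullptr, nullptr, nullptr,
                                         nullptr, &pPrivs, nullptr))) {
            const wchar_t* caller = static_cast<const wchar_t*>(pPrivs);
            /* log or branch on 'caller' here */
        }
        pSec->Release();
    }
}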
If you are trying to work around bugs in legacy clients, I would recommend creating a new set of CLSIDs (or IIDs, if you can emulate with shims all the bugs the legacy clients rely on) for new (non-legacy) clients, with VERY strict input validation, and implementing new features only in these new CLSIDs. Legacy clients stick with their older CLSID, behind which you can simply keep the existing legacy implementation (or a bug-for-bug compatible clone).
Maybe CoImpersonateClient()