Concurrent Calls to Oracle WebLogic 10.3 Web Service Problems - web-services

I have a Web Service (in Java) on an Oracle WebLogic 10.3 server that performs all kinds of database queries. Recently I started stress testing it. It passed the repetition tests (invoking the WS several thousand times serially), but problems began to arise when concurrency testing started. As few as two concurrent calls result in errors. Under proper load tests the results looked as if the WS could not handle concurrent calls at all, which obviously should not be the case. Errors included null pointer exceptions, closed connections or prepared statements, and so on. I am a bit stumped by this, especially since I was unable to find any configuration options that could affect it, but then again my knowledge of WLS is quite limited.
Thanks for any suggestions in advance.

The answer you marked as correct is totally wrong.
Web service methods should not be synchronized wholesale in order to be made thread safe.
WebLogic's web service implementation is multithreaded.
It is the same as with servlets:
"Servlets are multithreaded. Servlet-based applications have to recognize and handle this appropriately. If large sections of code are synchronized, an application effectively becomes single threaded, and throughput decreases dramatically."
http://www.ibm.com/developerworks/websphere/library/bestpractices/avoiding_or_minimizing_synchronization_in_servlets.html
You might want to synchronize specific code inside the WS, depending on what it does; a sketch follows.
See also: Does it make sense to synchronize a web-service method?
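For illustration, a minimal sketch (class and field names invented) of synchronizing only the code that touches shared state, rather than the whole web method, so that concurrent calls can still overlap:

    import java.util.HashMap;
    import java.util.Map;
    import javax.jws.WebMethod;
    import javax.jws.WebService;

    @WebService
    public class ReportService {

        // Shared mutable state: every concurrent call sees this same map,
        // so access to it is the only part that needs guarding.
        private final Map<String, Integer> hitCounts = new HashMap<String, Integer>();

        @WebMethod
        public String runReport(String name) {
            // Guard only the critical section, not the whole method.
            synchronized (hitCounts) {
                Integer old = hitCounts.get(name);
                hitCounts.put(name, old == null ? 1 : old + 1);
            }
            // The expensive work stays unsynchronized, so calls overlap freely.
            return buildReport(name);
        }

        private String buildReport(String name) {
            return "report for " + name;   // placeholder for the real work
        }
    }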

Just so there is a clear answer.
When there are several concurrent calls to a given web service (in this case SOAP/JAX-WS was used) on WLS, the same service object handles all of them (no pooling or queueing of instances takes place), therefore the implementation must be thread safe.
EDIT:
To clarify:
Assume there is a class attribute in the web service implementation class generated by JDeveloper. If you modify this attribute in your web method (and then use it), it will cause synchronization problems when the method (i.e. the WS) is called concurrently. When I first started creating web services I thought the whole web service object was created anew for each call, but this does not seem to be the case.
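A minimal sketch of that failure mode and its fix, assuming a plain JAX-WS endpoint with an injected data source (all class, field, and method names here are invented for illustration):

    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.annotation.Resource;
    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.sql.DataSource;

    @WebService
    public class CustomerService {

        @Resource(name = "jdbc/AppDS")   // pool injected by the container; safe to share
        private DataSource dataSource;

        // BROKEN: one service instance handles every concurrent call, so this
        // field is shared. Call B overwrites it while call A is still using it,
        // producing exactly the closed-connection and NPE symptoms above.
        private Connection connection;

        @WebMethod
        public String lookupBroken(String id) throws SQLException {
            connection = dataSource.getConnection();
            try {
                return doQuery(connection, id);
            } finally {
                connection.close();   // may close another call's connection
            }
        }

        // SAFE: a local variable lives on the calling thread's stack, so every
        // concurrent call works with its own connection.
        @WebMethod
        public String lookupSafe(String id) throws SQLException {
            Connection con = dataSource.getConnection();
            try {
                return doQuery(con, id);
            } finally {
                con.close();
            }
        }

        private String doQuery(Connection con, String id) throws SQLException {
            return "result for " + id;   // the real PreparedStatement work is elided
        }
    }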

Related

Out of Process COM Server - One server process per calling process?

I have an out-of-process COM server, specifying CLSCTX_LOCAL_SERVER as the context and REGCLS_MULTIPLEUSE as the connection type. This results in a single server process being reused by multiple calls from multiple clients.
I now want to make some changes to the server which, unfortunately, cannot work with a single process shared amongst clients (there are reasons for this, but they're long-winded). I know you can set the server to use REGCLS_SINGLEUSE as the connection type, which creates a new process for the OOP server on each call. This solves my issue, but is a non-starter in terms of process usage; multiple calls over short periods result in many processes, and this particular server might be hit incredibly often.
Does anyone happen to know of a mechanism to mix those two connection types? Essentially what I want is a single server process per calling process (i.e., client one creates a process, and that process is reused for subsequent calls from that client; when client two calls the server, a new process is created). I suspect I could achieve it by forcing a REGCLS_SINGLEUSE server to stay open permanently in the client, but this is neither elegant nor possible (since I can't change one of the clients).
Thoughts?
UPDATE
As expected, it seems there is no way to do this. If time and resources permitted, I would most likely convert this to an in-proc solution. For now, though, I'm going with the new behaviour for every calling client. Fortunately, the impact of this change is incredibly small and acceptable to the clients. I'll look into more drastic and appropriate changes later.
NOTE
I've marked Hans' reply as the answer, as it does in fact give a solution to the problem which maintains the OOP solution. I merely don't have capacity to implement it.
COM does not support this activation scenario. It is supposed to be covered by an in-process server, so do make sure that isn't the way you want to go, given its rather major advantages.
Using REGCLS_SINGLEUSE is the alternative, but it requires extending your object model to avoid the storm of server instances you would now create. The Application coclass is the boilerplate approach: provide it with factory methods that give you instances of your existing interfaces.
I'll mention a drastically different approach, one I used when I wanted to solve the same problem but required an out-of-process server to bridge a bitness gap. You are not stuck with COM launching the server process for you; a client can start it as well, provided of course that it knows enough about the server's installation location. The client then has complete control over the server instance. The server called CoRegisterClassObject() with an altered CLSID: I XORed part of the GUID with the process ID. The client did the same, so it always connected to the correct server. Extra code was required in the client to ensure it waited long enough to give the server a chance to register its object factories. It worked well.

Is Perforce's C++ P4API thread-safe?

Simple question - is the C++ API provided by Perforce thread-safe? There is no mention of it in the documentation.
By "thread-safe" I mean for server requests from the client. Obviously there will be issues if I have multiple threads trying to set client names and such on the same connection.
But given a single connection object, can I have multiple threads fetching changelists, getting status, translating files through a p4 map, etc.?
Late answer, but... From the release notes themselves:
Known Limitations
The Perforce client-server protocol is not designed to support
multiple concurrent queries over the same connection. For this
reason, multi-threaded applications using the C++ API or the
derived APIs (P4API.NET, P4Perl, etc.) should ensure that a
separate connection is used for each thread or that only one
thread may use a shared connection at a time.
It does not look like the client object has thread affinity, so in order to share a connection between threads, one just has to use a mutex to serialize the calls.
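The pattern is the same in any language; here is a hedged Java sketch of wrapping a single shared connection so calls are serialized (PerforceConnection is a made-up stand-in, not a real P4API type):

    import java.util.concurrent.locks.ReentrantLock;

    // Hypothetical stand-in for one client-server connection.
    interface PerforceConnection {
        String run(String command);
    }

    // Lets many threads share one connection by serializing access to it.
    final class SerializedConnection {
        private final PerforceConnection con;
        private final ReentrantLock lock = new ReentrantLock();

        SerializedConnection(PerforceConnection con) {
            this.con = con;
        }

        String run(String command) {
            lock.lock();   // only one thread issues a query at a time
            try {
                return con.run(command);
            } finally {
                lock.unlock();
            }
        }
    }

The alternative the release notes suggest, one connection per thread, avoids the lock entirely at the cost of holding more connections.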
If the documentation doesn't mention it, then it is not safe.
Making something thread-safe in any sense is often difficult and may result in a performance penalty because of the addition of locks. It wouldn't make sense to go through the trouble and then not mention it in the documentation.

Django with fastcgi and threads

I have a Django app, which spawns a thread to communicate with another server, using Pyro.
Unfortunately, it seems that under fastcgi multiple instances of this thread are fired off, and a dictionary that should be globally constant within my program isn't (sometimes it has the values I expect, sometimes not).
What's the best way to ensure that there's one and only one copy of a dictionary in a django / fastcgi app?
I strongly recommend against relying on global anything in Django. The problem is that, just as you seem to be encountering, the type of deployment determines how (or whether) this global state is shared. To be a style nazi about it, that is a completely different level of abstraction from the code, which is relying on some guarantee of consistent global state.
I'm not experienced with fastcgi, but my understanding is that it, like many other frameworks, has a pre-forked mode and a threaded mode. In pre-forked mode you have separate processes, not threads, running your Python code. That spells nightmare for shared global state.
Barring some fragile workaround, which ought to be possible and which someone may or may not suggest, the only persistence you can really rely on is the database and, to a lesser extent, whatever caching mechanism you choose. You could use the low-level cache API to store and retrieve keys and values.

Scalability implications of converting stateless session beans to POJOs

Imagine a heavily-used service object that's implemented as an EJB 2.1 SLSB, and that also happens to be thread-safe in itself by virtue of having no state whatsoever. All its public methods are transactional (via CMT), most simply requiring a transaction, but some requiring a new transaction.
If I convert this SLSB to a genuine singleton POJO (e.g. using a DI framework), how will that affect the scalability of the application? When the service was a SLSB, the EJB container would manage a pool of instances from which each client would get its own copy, so I'm wondering whether turning it into a singleton POJO will introduce some kind of contention for that single instance.
FWIW, none of this service's methods are synchronized.
Clarification: my motivation for converting the SLSB to a POJO is simplicity of both the object's lifecycle (true singleton versus container-managed) and of the code itself (one interface and one annotated POJO, versus three interfaces, one bean class, and a bunch of XML in ejb-jar.xml).
Also, FWIW, the service in question is one component of a collocated web app running on JBoss 3.x.
If the POJO is truly stateless, or has no conversational state (i.e. its state is immutable), then this will not worsen performance, and may even improve it slightly, since you really are using just one instance from your DI framework rather than a pool from the container. (Even the pool suffers from contention under high load.)
No synchronization is needed for an object that is thread-safe by design, such as one with no state or only immutable state. There will be no contention: threads can freely execute methods on the POJO without synchronizing.
By using just the POJO you also get to see really what is going on in your app, and can be sure there is no hidden "container magic" going on behind the scenes.
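As a hedged illustration (names invented), this is the shape of a service that is safe to share as a singleton: its only field is a final reference to a thread-safe resource, and all mutable state is local to the calling thread:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public final class AccountService {

        private final DataSource dataSource;   // thread-safe, never reassigned

        public AccountService(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public long balanceFor(String accountId) throws SQLException {
            // Everything mutable lives on this thread's stack, so any number
            // of threads can run this method on one instance without locks.
            Connection con = dataSource.getConnection();
            try {
                PreparedStatement ps = con.prepareStatement(
                        "SELECT balance FROM account WHERE id = ?");
                try {
                    ps.setString(1, accountId);
                    ResultSet rs = ps.executeQuery();
                    return rs.next() ? rs.getLong(1) : 0L;
                } finally {
                    ps.close();
                }
            } finally {
                con.close();
            }
        }
    }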
Your POJO seems perfect.
So no, there will be no contention, and your scalability will be fine.
There is no additional cost.
If anything the cost is lower, because you have one instance instead of several, and your scalability is better because you will never hit the limit of your pool (you don't have one).

What are "Jetty 6 Continuations" and how do they compare to the continuations found in programming languages?

I'm looking for an answer that describes a "continuation" mechanism in a web server vs. a programming language.
My understanding is that using continuations, it is trivial to have a "digits of pi" producer communicate with a "digits of pi" consumer, without explicit threading.
I've heard very good things about Jetty continuations. I am curious what others think.
I may have already found my answer, but I'm asking the question here anyway - for the record.
how do they compare to the continuations found in programming languages?
They have nothing in common apart from the name. It's merely a mechanism for freeing the current thread by giving the servlet an API for storing and restoring its state, but it is all managed rather manually, as opposed to real continuations, where the state is captured automatically from the current context.
The prototypical example of where this makes sense is layered (composed) web services, where one service needs to make many requests to other services, and while those requests are in flight the current thread is freed. Upon completion of the requests (which can happen asynchronously on other threads), the servlet's resume method is called, and the servlet then assembles its response from the results of the requests.
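A rough sketch of that composed-service pattern against the Jetty 6 API (org.mortbay.util.ajax); the Backend and Callback types and the backendClient field are invented for illustration, and the exact Continuation semantics are as I understand them from the Jetty 6 documentation:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.mortbay.util.ajax.Continuation;
    import org.mortbay.util.ajax.ContinuationSupport;

    public class ComposedServlet extends HttpServlet {

        // Hypothetical async client for the downstream service.
        interface Backend {
            void fetchAsync(String query, Callback callback);
        }
        interface Callback {
            void done(String result);
        }

        private Backend backendClient;   // wired up elsewhere; hypothetical

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            final Continuation continuation =
                    ContinuationSupport.getContinuation(request, this);

            if (continuation.isNew()) {
                // First dispatch: start the backend request, then park.
                backendClient.fetchAsync(request.getParameter("q"), new Callback() {
                    public void done(String result) {
                        continuation.setObject(result);
                        continuation.resume();   // re-dispatches this request
                    }
                });
            }

            // On the first pass this suspends and frees the thread; after
            // resume() or the timeout the request is retried and falls through.
            continuation.suspend(10000);

            Object result = continuation.getObject();
            response.getWriter().println(result == null ? "timeout" : result);
        }
    }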
According to this page:
"continuations will be replaced by standard Servlet-3.0 suspendable requests once the specification is finalized. Early releases of Jetty-7 are now available that implement the proposed standard suspend/resume API"
I have not used Jetty yet, but it seems that with continuations the server is not required to keep a thread for each client. Normally, when the server is "holding off" (i.e. blocking) on sending a response to a client that continuously polls it with AJAX, it would need a thread per client, which would be a scalability problem.