In our application, a user logged in as an admin can perform any operation. Suppose one admin is modifying a route while, at the same time, a second admin reads the same route and creates an airway bill for it. That would be a problem, and I could not find out how our application handles these concurrent requests.
(We are simply using JDBC transactions.)
I am getting different answers from my team:
1. The web/application server handles these transactions, and it will handle concurrent requests without any issues.
2. Rows will be locked in the database, so there won't be any problem with concurrent requests.
Bottom line: should concurrent requests be handled in code? Should we configure some setting in the web/application server when deploying? Or will the database handle concurrent requests by default through its row-locking mechanism?
If anyone knows where to find the answer, please let me know.
As far as I'm aware, most database engines use some kind of locking during queries, but it differs depending on the engine. I know that InnoDB enforces transaction atomicity (see this Stack Exchange thread), so anything that is wrapped in a transaction won't be interfered with mid-execution. However, there is no guarantee as to which request will reach the database first.
As for the web server/app server: assuming you're using a threaded web server (Apache Tomcat, Jetty, etc.), each request is handled by a separate thread, so I would assume there is no inherent thread safety. In the majority of cases the database will handle your concurrency without complaining. However, I would recommend including some kind of serialisation on the application end as well, in case you decide to change DB implementations somewhere down the road; you will also have more control over how requests are handled.
In short: do both.
As far as I know, most databases have some kind of locking during transactions and queries, but you should check your database's reference documentation for the exact locking method it uses.

As for your web server: I know that Tomcat handles requests concurrently and offers some thread safety for its own resources, but it offers no thread safety for your application. Thus, you should handle it on your own.

For the problem you mentioned above: when you access a route, query the database to check whether it still exists. And while one admin is modifying a route, take some sort of lock on that block of work, so that when the other admin tries to access the same route at the same time, he waits for the transaction to complete. If you are using Java on the server side, look at Java's synchronization mechanisms; for another language, check that language's locking and thread-safety facilities.
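To make the database-side option concrete, here is a minimal sketch of pessimistic row locking with plain JDBC, assuming a database that supports SELECT ... FOR UPDATE (e.g. MySQL/InnoDB or Oracle); the routes table and its columns are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RouteUpdateExample {

    // Hypothetical table/column names; adjust to your schema.
    public void updateRoute(Connection conn, long routeId, String newDestination) throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false); // start a transaction
        try {
            try (PreparedStatement select = conn.prepareStatement(
                    "SELECT id, destination FROM routes WHERE id = ? FOR UPDATE")) {
                select.setLong(1, routeId);
                try (ResultSet rs = select.executeQuery()) {
                    if (!rs.next()) {
                        throw new SQLException("Route " + routeId + " not found");
                    }
                }
            }
            // The row is now locked: a concurrent transaction that also selects
            // this route FOR UPDATE (e.g. before creating an airway bill)
            // blocks here until we commit or roll back.
            try (PreparedStatement update = conn.prepareStatement(
                    "UPDATE routes SET destination = ? WHERE id = ?")) {
                update.setString(1, newDestination);
                update.setLong(2, routeId);
                update.executeUpdate();
            }
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        } finally {
            conn.setAutoCommit(oldAutoCommit);
        }
    }
}
```

With this in place, the second admin's transaction always sees the final, committed state of the route rather than a half-modified one.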
We would like to run our cms::MessageConsumer and cms::MessageProducer on different threads of the same process.
How do we do this safely?
Would having two cms::Connection objects and two cms::Session objects, one each for consumer and producer, be sufficient to guarantee safety? Is this necessary?
Is there shared state between objects at the static library level that would prevent this type of usage?
You should read the JMS v1.1 specification; it calls out clearly which objects are valid to use from multiple threads and which are not. Namely, Session, MessageConsumer and MessageProducer are considered unsafe to share amongst threads. We generally try to make them as thread-safe as we can, but there are certainly ways in which you can get yourself into trouble. It's generally a good idea to use a single Session in each thread, and in general to use a Session for each MessageConsumer / MessageProducer: a Session contains a single dispatch thread, so a Session with many consumers must share that dispatch thread when feeding messages to each consumer, which can add latency depending on the scenario.
I'm answering my own question to supplement Tim Bish's answer, which I am accepting as having provided the essential pieces of information.
From http://activemq.apache.org/cms/cms-api-overview.html
What is CMS?
The CMS API is a C++ corollary to the JMS API in Java which is used to
send and receive messages from clients spread out across a network or
located on the same machine. In CMS we've made every attempt to
maintain as much parity with the JMS api as possible, diverging only
when a JMS feature depended strongly on features in the Java
programming language itself. Even though there are some differences
most are quite minor and for the most part CMS adheres to the JMS
spec, so having a firm grasp on how JMS works should make using CMS
that much easier.
What does the JMS spec say about thread safety?
Download spec here:
http://download.oracle.com/otndocs/jcp/7195-jms-1.1-fr-spec-oth-JSpec/
2.8 Multithreading

JMS could have required that all its objects support concurrent use. Since support for concurrent access typically adds some overhead and complexity, the JMS design restricts its requirement for concurrent access to those objects that would naturally be shared by a multithreaded client. The remainder are designed to be accessed by one logical thread of control at a time.

Table 2-2 JMS Objects that Support Concurrent Use

Destination: YES
ConnectionFactory: YES
Connection: YES
Session: NO
MessageProducer: NO
MessageConsumer: NO

JMS defines some specific rules that restrict the concurrent use of Sessions. Since they require more knowledge of JMS specifics than we have presented at this point, they will be described later. Here we will describe the rationale for imposing them.
There are two reasons for restricting concurrent access to Sessions.
First, Sessions are the JMS entity that supports transactions. It is
very difficult to implement transactions that are multithreaded.
Second, Sessions support asynchronous message consumption. It is
important that JMS not require that client code used for asynchronous
message consumption be capable of handling multiple, concurrent
messages. In addition, if a Session has been set up with multiple,
asynchronous consumers, it is important that the client is not forced
to handle the case where these separate consumers are concurrently
executing. These restrictions make JMS easier to use for typical
clients. More sophisticated clients can get the concurrency they
desire by using multiple sessions.
As far as I know from the Java side, the Connection is thread-safe (and rather expensive to create), but Session and MessageProducer are not thread-safe. Therefore it seems you should create a Session for each of your threads.
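To make that concrete, here is a minimal Java sketch of the session-per-thread pattern, assuming the ActiveMQ JMS client; the broker URL and queue name are placeholders:

```java
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SessionPerThreadExample {
    public static void main(String[] args) throws Exception {
        // The Connection is thread-safe and expensive to create, so share one instance.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        Connection connection = factory.createConnection();
        connection.start();

        Runnable producerTask = () -> {
            try {
                // Session and MessageProducer are NOT thread-safe:
                // create them inside the thread that uses them.
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer =
                        session.createProducer(session.createQueue("EXAMPLE.QUEUE"));
                producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
                TextMessage msg = session.createTextMessage(
                        "hello from " + Thread.currentThread().getName());
                producer.send(msg);
                session.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        };

        Thread t1 = new Thread(producerTask);
        Thread t2 = new Thread(producerTask);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        connection.close();
    }
}
```

Each thread builds its own Session and MessageProducer from the shared Connection, which is exactly the usage Table 2-2 permits.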
I have an API which opens an access database for read and write. The API opens the connection when it's constructed and closes the connection when it's destructed. When the db is opened an .ldb file is created and when it closes it's removed (or disappears).
There are multiple applications using the API to read and write to the access db. I want to know:
Is the .ldb file used to track multiple connections?
Does calling db.close() close all connections or just one instance?
Will there be any sync issues with the above approach?
db.Close() closes one connection. The .ldb file is automatically removed when all connections are closed.
Keep in mind that while Jet databases (i.e. Access) do support multiple simultaneous users, they're not extremely well suited for a very large concurrent user base; for one thing, they are easily corrupted when there are network issues. I'm actually dealing with that right now. If it comes to that, you will want to use a database server.
That said, I've used Jet databases in that way many times.
Not sure what you mean when you say "sync issues".
Yes, it's required for opening the database in shared mode with multiple users. It seems to stand for "Lock Database". See more info on MSDN: Introduction to .ldb files in Access 2000.
Close() closes only one connection, others are unaffected.
Yes, it's possible if you try to write records that another user has locked. However, the data will remain consistent; you will just receive an error about a write conflict.
Actually, MS Access is not the best solution for a multi-connection usage scenario.
You may take a look at SQL Server Compact, which is a lightweight version of MS SQL Server. It runs in-process, supports multiple connections and multithreading, and covers most robust T-SQL features (excluding stored procedures), etc.
As an additional note to otherwise good answers, I would strongly recommend keeping a connection to a dummy table open for the lifetime of the client application.
Closing connections too often and allowing the lock file to be created/deleted every time is a huge performance bottleneck and, in some cases of rapid access to the database, can actually cause queries and inserts to fail.
You can read a bit more in this answer I gave a while ago.
When it comes to performance and reliability, you can get quite a lot out of Access databases providing that you keep some things in mind:
Keep a connection open to a dummy table for the duration of the life of the client (or at least use some timeout that would close the connection after like 20 seconds of inactivity if you don't want to keep it open all the time).
Engineer your client apps to properly close all connections (including the dummy one when it's time to do so), whatever happens (e.g. crash, user shutdown, etc.).
Leaving locks in place is not good, as it could mean that the client has left the database in an unknown state, and could increase the likelihood of corruption if other clients keep leaving stale locks.
Compact and repair the database regularly. Make it a nightly task.
This will ensure that the database is optimised, and that any stale data is removed and open locks properly closed.
Good, stable network connectivity is paramount to data integrity for a file-based database: avoid WiFi like the plague.
Have a way to kick all clients out from the database server itself.
For instance, have a table with a MaintenanceLock field that clients poll regularly (a sketch of such a polling loop follows below). If the field is set, the client should disconnect, after giving the user an opportunity to save their work.
Similarly, when a client app starts, check this field in the database to allow or disallow the client from connecting to it.
Now you can kick out clients at any time without having to go to each user and ask them to close the app. It's also very useful for ensuring that no clients left open overnight are still connected to the database when you run Compact & Repair maintenance on it.
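As a rough sketch of that polling loop, assuming a JDBC driver for Access such as UCanAccess, and a hypothetical Maintenance table holding the MaintenanceLock flag:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MaintenancePoller {
    public static void main(String[] args) throws Exception {
        // Hypothetical database path and table name; UCanAccess is one JDBC driver for Access files.
        String url = "jdbc:ucanaccess://C:/data/app.accdb";
        try (Connection conn = DriverManager.getConnection(url)) {
            while (true) {
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT MaintenanceLock FROM Maintenance")) {
                    if (rs.next() && rs.getBoolean("MaintenanceLock")) {
                        System.out.println("Maintenance requested: save work and disconnect.");
                        break; // give the user a chance to save, then close all connections
                    }
                }
                Thread.sleep(30_000); // poll every 30 seconds
            }
        }
    }
}
```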
I am debugging an ASMX web service that receives "bursts" of requests, i.e. it is likely that the web service will receive 100 asynchronous requests within about 1 or 2 seconds. Each request seems to take about a second to process (this is expected and I'm OK with this performance). What is important, however, is that each request is dealt with sequentially and no parallel processing takes place. I do not want any concurrent request processing, due to the external components called by the web service. Is there any way I can force the web service to handle each request sequentially?
I have seen the maxconnection attribute in machine.config, but this seems to only work for outbound connections, whereas I wish to throttle the incoming connections.
Please note that refactoring into WCF is not an option at this point in time.
We are using IIS6 on Win2003.
What I've done in the past is simply put a lock statement around any access to the external resource I was using. In my case, it was a piece of unmanaged code that claimed to be thread-safe, but which in fact would trash the C runtime library heap if accessed from more than one thread at a time.
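That lock statement is C#'s; the same serialize-every-access idea, sketched in Java with a hypothetical stand-in for the fragile component, looks like this:

```java
public class ExternalResourceWrapper {
    private static final Object LOCK = new Object();

    // Hypothetical stand-in for the non-thread-safe component (e.g. unmanaged/native code).
    static class NonThreadSafeLibrary {
        static String process(String input) {
            return input.toUpperCase(); // placeholder work
        }
    }

    // Every caller funnels through this method, so the fragile component
    // only ever sees one thread at a time.
    public static String callExternalComponent(String input) {
        synchronized (LOCK) {
            return NonThreadSafeLibrary.process(input);
        }
    }
}
```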
Perhaps you should queue the requests up internally and process them one by one?
It may cause the clients to poll for results (if they even need them), but you'd get the sequential pipeline you wanted...
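A rough sketch of that internal queue, shown in Java purely to illustrate the pattern (the equivalent in .NET would be a single-consumer work queue): a single worker thread drains the queue, so requests are processed strictly one at a time no matter how many arrive at once.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SequentialRequestQueue {
    // A single worker thread guarantees strictly sequential processing,
    // no matter how many requests arrive concurrently.
    private static final ExecutorService WORKER = Executors.newSingleThreadExecutor();

    // Hypothetical request handler; each incoming web request submits its work here
    // and (optionally) blocks on the Future if it needs the result.
    public static Future<String> enqueue(String request) {
        return WORKER.submit(() -> {
            Thread.sleep(1000); // simulate the ~1 second of real processing
            return "processed: " + request;
        });
    }

    public static void main(String[] args) throws Exception {
        Future<String> a = enqueue("request-1");
        Future<String> b = enqueue("request-2");
        System.out.println(a.get()); // request-2 does not start until request-1 finishes
        System.out.println(b.get());
        WORKER.shutdown();
    }
}
```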
In IIS7 you can set a limit on the number of connections allowed to a web site. Can you use IIS7?
I want to know the technical reasons why the Lift web framework has high performance and scalability. I know it uses Scala, which has an actor library, but according to the install instructions its default configuration is with Jetty. So does it use the actor library to scale?
Also, is the scalability built in right out of the box? Just add additional servers and nodes and it will automatically scale, is that how it works? Can it handle 500,000+ concurrent connections with supporting servers?
I am trying to create a web services framework for the enterprise level that can beat what is out there and is easy to scale, configurable, and maintainable. My definition of scaling is just adding more servers, and you should be able to accommodate the extra load.
Thanks
Lift's approach to scalability is within a single machine. Scaling across machines is a larger, tougher topic. The short answer there is: Scala and Lift don't do anything to either help or hinder horizontal scaling.
As far as actors within a single machine, Lift achieves better scalability because a single instance can handle more concurrent requests than most other servers. To explain, I first have to point out the flaws in the classic thread-per-request handling model. Bear with me, this is going to require some explanation.
A typical framework uses a thread to service a page request. When the client connects, the framework assigns a thread out of a pool. That thread then does three things: it reads the request from a socket; it does some computation (potentially involving I/O to the database); and it sends a response out on the socket. At pretty much every step, the thread will end up blocking for some time. When reading the request, it can block while waiting for the network. When doing the computation, it can block on disk or network I/O. It can also block while waiting for the database. Finally, while sending the response, it can block if the client receives data slowly and TCP windows get filled up. Overall, the thread might spend 30-90% of its time blocked. It spends 100% of its time, however, on that one request.
A JVM can only support so many threads before it really slows down. Thread scheduling, contention for shared-memory entities (like connection pools and monitors), and native OS limits all impose restrictions on how many threads a JVM can create.
Well, if the JVM is limited in its maximum number of threads, and the number of threads determines how many concurrent requests a server can handle, then the number of concurrent requests the server can handle is capped by the JVM's thread limit.
(There are other issues that can impose lower limits: GC thrashing, for example. Threads are a fundamental limiting factor, but not the only one!)
Lift decouples threads from requests. In Lift, a request does not tie up a thread. Rather, a thread does an action (like reading the request), then sends a message to an actor. Actors are an important part of the story, because they are scheduled via "lightweight" threads. A pool of threads gets used to process messages within actors. It's important to avoid blocking operations inside actors, so these threads get returned to the pool rapidly. (Note that this pool isn't visible to the application; it's part of Scala's support for actors.) A request that's currently blocked on database or disk I/O, for example, doesn't keep a request-handling thread occupied. The request-handling thread is available, almost immediately, to receive more connections.
This method for decoupling requests from threads allows a Lift server to have many more concurrent requests than a thread-per-request server. (I'd also like to point out that the Grizzly library supports a similar approach without actors.) More concurrent requests means that a single Lift server can support more users than a regular Java EE server.
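Lift does this with Scala actors. Purely as an illustration of the same request/thread decoupling, here is a minimal sketch using the standard Servlet 3.0 async API: the container thread returns immediately, and a small worker pool finishes the response later.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/slow", asyncSupported = true)
public class DecoupledServlet extends HttpServlet {
    // Small pool for the slow work; request-handling threads are freed immediately.
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync(); // detach this request from the container thread
        workers.submit(() -> {
            try {
                // Simulated blocking work (database, disk, remote call, ...).
                Thread.sleep(2000);
                ctx.getResponse().getWriter().println("done");
            } catch (Exception e) {
                // log in a real application
            } finally {
                ctx.complete(); // hand the response back to the container
            }
        });
    }
}
```

The container's threads stay free to accept new connections while the slow work is in flight, which is the essence of the scalability argument above.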
@mtnyguard
"Scala and Lift don't do anything to either help or hinder horizontal scaling"
Ain't quite right. Lift is a highly stateful framework. For example, if a user requests a form, he can only post that request to the same machine the form came from, because the form-processing action is saved in the server state.
And this is actually something that hinders scalability, because this behaviour is inconsistent with a shared-nothing architecture.
No doubt Lift is highly performant, but performance and scalability are two different things. So if you want to scale horizontally with Lift, you have to define sticky sessions on the load balancer, which will direct a user to the same machine for the duration of a session.
Jetty may be the point of entry, but the actor ends up servicing the request. I suggest having a look at the Twitter-esque example, 'skitter', to see how you would be able to create a very scalable service. IIRC, this is one of the things that made the Twitter people take notice.
I really like @dre's reply, as he correctly points out that Lift's statefulness is a potential problem for horizontal scalability.
The problem:
Instead of describing the whole thing again, check out the discussion (not the content) on this post: http://javasmith.blogspot.com/2010/02/automagically-cluster-web-sessions-in.html
The solution would be, as @dre said, sticky-session configuration on the load balancer in front, plus adding more instances. But since request handling in Lift is done in a thread + actor combination, you can expect one instance to handle more requests than normal frameworks would. That gives it an edge over merely having sticky sessions in other frameworks, i.e. each individual instance's capacity to process more may help you scale.
You also have the Akka Lift integration, which would be another advantage here.
OK, strange setup, strange question. We've got a Client and an Admin web application for our SaaS app, running on ASP.NET 2.0 / IIS 6. The Admin application can change options displayed in the Client application. When those options are saved in the Admin, we call a web service on the Client (from the Admin) to flush our cache of the options for that specific account.
Recently we started giving our Client application more than one worker process, which causes the cache of options to be cleared on only one of the currently running worker processes.
So I obviously have other avenues for fixing this problem (input is appreciated, however), but my question is: is there any way to target/iterate through each worker process via a web request?
I'm making some assumptions here for this answer....
I'm assuming the client app is using one of the .NET caching classes to store your application's options?
When you say 'flush' do you mean flush them back to a configuration file or db table?
Because the cache objects and data won't be shared between processes, you need a mechanism to signal to the code running in the other worker process that it needs to re-read its options into its cache, or else force the process to restart (which is not exactly convenient and most likely undesirable).
If you don't have access to the client source to modify it to either watch the options config file or the DB table (say, using a SqlCacheDependency), I think you're kind of stuck with this behaviour.
I have full access to admin and client. By cache, I mean .NET's Cache object, and by flush I mean removing the item from the Cache object.
I'm aware that the worker processes don't share cache data. (That's sort of my conundrum.)
The system is built this way to remove the need to hit SQL on every new session that comes in. So I'm trying to find a solution that can just tell each worker process that the cache needs to be cleared without getting SQL involved.
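One SQL-free option is a shared out-of-band signal that every worker process watches; in ASP.NET, a file-based CacheDependency gives you this natively (each process's cached item is evicted when the file changes). Purely to illustrate the watching side of the pattern, here is a rough Java sketch using a signal file; the directory and file name are hypothetical:

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class CacheFlushWatcher {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical signal directory; the admin app touches the signal file to request a flush.
        Path dir = Paths.get("C:/app/flags");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY);

        while (true) {
            WatchKey key = watcher.take(); // blocks until something in the directory changes
            for (WatchEvent<?> event : key.pollEvents()) {
                if ("cache-flush.signal".equals(event.context().toString())) {
                    // Every process running this watcher clears its own in-memory cache.
                    System.out.println("Flush signal received: clearing local cache.");
                }
            }
            key.reset();
        }
    }
}
```

The key point is that each worker process clears its own cache in response to the shared signal, so no process needs to be reached individually via a web request.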