Multithreaded JavaMail: is it possible? - concurrency

JavaMail can only read one message at a time through IMAP. Therefore, I am wondering whether opening several connections and reading emails concurrently is a good workaround. Has anyone done this? I am concerned about the server (Gmail) refusing to open multiple connections or throttling them.

I have tried this, and it seems to work fine; after all, it is perfectly normal (and common) to have two different clients connected to Gmail from different computers. However, I sometimes get random errors from Gmail, and in that case the extra connections are one more factor to debug.

Related

Design a multi-client/server application where clients send messages infrequently

I have to design a server that can send the same objects to many clients. Clients may send requests to the server if they want to update something in the database.
Things that are confusing me:
1. My server should start the program (where I perform some operations and produce 'results', which will be sent to the clients).
2. My server should listen for incoming connections from clients; if there are any, it should accept them and start sending the 'results'.
3. The server should accept as many clients as possible (not more than 100).
4. My 'results' should be secured. I don't want someone to take my 'results' and see what my program logic looks like.
I thought point 1 would be one thread, and point 2 another thread that creates multiple threads within its scope to serve point 3. Point 4 should be handled by my application logic while serialising the 'results', rather than by the server.
Is this a bad idea? If so, where can I improve it?
Thanks
Putting every connection on its own thread is a bad idea, and apparently a common mistake beginners make. Every thread costs roughly 1 MB of memory for its stack, which will bloat your program for no good reason. I asked the very same question before and got a very good answer. I used Boost.Asio; the server/client project was finished months ago and has been running beautifully ever since.
If you use C++ and SSL (to secure your connections), no one will see your logic, since your program is compiled. But you will have to write your own communication protocol/serialization in that case.
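To make the Boost.Asio suggestion concrete, here is a minimal sketch of a single-threaded asynchronous echo server (the port number, buffer size and echo behaviour are arbitrary choices for illustration; a real server would layer SSL and its own message protocol on top):

// Minimal Boost.Asio sketch: one thread, many connections (no thread-per-client).
// The port (9000) and buffer size are arbitrary choices for illustration.
#include <boost/asio.hpp>
#include <array>
#include <memory>

using boost::asio::ip::tcp;

class Session : public std::enable_shared_from_this<Session> {
public:
    explicit Session(tcp::socket socket) : socket_(std::move(socket)) {}
    void start() { do_read(); }

private:
    void do_read() {
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(buf_),
            [this, self](boost::system::error_code ec, std::size_t n) {
                if (!ec) do_write(n);          // echo back whatever arrived
            });
    }
    void do_write(std::size_t n) {
        auto self = shared_from_this();
        boost::asio::async_write(socket_, boost::asio::buffer(buf_, n),
            [this, self](boost::system::error_code ec, std::size_t) {
                if (!ec) do_read();            // wait for the next message
            });
    }
    tcp::socket socket_;
    std::array<char, 4096> buf_;
};

void accept_loop(tcp::acceptor& acceptor) {
    acceptor.async_accept(
        [&acceptor](boost::system::error_code ec, tcp::socket socket) {
            if (!ec) std::make_shared<Session>(std::move(socket))->start();
            accept_loop(acceptor);             // keep accepting new clients
        });
}

int main() {
    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 9000));
    accept_loop(acceptor);
    io.run();   // a single thread services every connection
}

The key point is that io.run() drives all the sockets from one thread, so the memory cost does not grow by a full thread stack per client.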

Winsock IOCP Server Stress Test Issue

I have a Winsock IOCP server written in C++ using TCP/IP connections. I have tested this server locally, using the loopback address with a client simulator, and I have been able to reach upwards of 60,000 clients with no sweat. The issue I am having is when I run the server at my house and the client simulator at a friend's house. Everything works fine up until we hit around 3700 connections; after that, every call to connect() fails on the client side with a return of 10060 (the Winsock timed-out error). Last night this number was 3700, but it has been around 300 before, and we have also seen it near 1000. Whatever the number is, every time we try to simulate it, it fails right around that number (within 10 or so).
Both computers are running Windows 7 Ultimate. We have both modified the TCP/IP registry setting MaxTcpConnections to around 16 million, and we changed the MaxUserPort setting from its default of 5000 to 65k. No useful information is showing up in the Event Viewer. We have both watched Resource Monitor as well: we haven't even reached 1% network utilization, and CPU usage is also close to 0%.
We just got off the phone with our ISP, and they say they are not limiting us in any way, but the guy was kind of unsure and ended up hanging up on us anyway after a 30-minute hold time...
We are trying everything to figure this issue out but cannot come up with a solution. I would be very grateful if someone out there could give us a hand with it.
P.S. Both computers are on Verizon FIOS with the same Verizon router. Another thing to note: the server is using WSAAccept and NOT AcceptEx. The client simulator spreads its connection attempts over many seconds, though, so I am pretty sure the connects are not getting backlogged. We have tried changing the speed at which the client simulator connects, and no matter what speed it is set to, it fails right around the same number each time.
UPDATE
We simulated two separate clients (on two separate machines) on network A, with the server running on network B. Each client was only able to make about half (around 1600) of the connections to the server. We were initially using a port below 1000; this has been changed to one above 50,000. The router logs on both machines showed nothing. We are both using the Actiontec MI424WR Verizon FIOS router. This leads me to believe the problem is not with the client code. The server throws no errors and has no unexpected behavior. Could this be an ISP/router issue?
UPDATE
The solution has been found. The Verizon router we were using (MI424WR revision C) is unable to handle any more than 3700 connections; we confirmed this with a separate set of networks. Thanks for the help, guys!
Thanks
- Rick
I would have guessed that this was a MaxUserPort issue, but you say you've changed that. Did you reboot after changing it?
Run the test on the exact same computers on your local network (this will take the computers out of the equation).
Could the issue be one of your routers not being up to the job?

How can we tell if a remote server is multi-threaded?

My customer did not give me details regarding the nature of his application. It might be multithreaded; it might not be. His server serves SOAP messages (HTTP requests).
Is there any special trick for working out whether the peer is single-threaded or multi-threaded?
I don't want to ask the customer and I don't have access to his server/machine. I want to find out myself.
It's irrelevant. Why do you feel it matters to you?
A more useful question would be:
Can the server accept multiple simultaneous sessions?
The answer is likely to be 'yes, of course' but it's certainly possible to implement a server that's incapable of supporting multiple sessions.
Just because a server supports multiple sessions, it doesn't mean that it's multi-threaded. And, just because it's multi-threaded doesn't mean it will have good performance. When servers need to support many hundreds or thousands of sessions, multi-threading may be a very poor choice for performance.
Are you asking this question because you want to 'overlap' SOAP messages on the same connection - in other words, have three threads send requests and then all three wait for responses? That won't work, because (as with HTTP generally) each request and its response are paired together on a connection. You would need to open three connections in order to have three overlapped messages.
Unfortunately, no, at least not without accessing the machine directly. Multiple connections can even be managed by a single thread; however, the good news is that this is highly unlikely. Most servers use thread pooling and assign a thread to a connection upon the handshake. Is there a particular reason why you need to know? If you're going to work on this server, presumably you'll learn first-hand how it works.
It doesn't matter whether the server is multithreaded or not. There are good and efficient ways to implement I/O multiplexing without threads, such as select(2) and the like, if that's what worries you.
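As a rough illustration of that point, here is a minimal sketch of single-threaded I/O multiplexing with select(2) on POSIX; the port and the echo behaviour are arbitrary choices, and error handling is omitted:

// One process, no threads, yet it serves several clients at once via select(2).
// Note: select() is limited to file descriptors below FD_SETSIZE (usually 1024).
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <vector>

int main() {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                 // arbitrary port for illustration
    bind(listen_fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
    listen(listen_fd, SOMAXCONN);

    std::vector<int> clients;
    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(listen_fd, &readable);
        int maxfd = listen_fd;
        for (int fd : clients) { FD_SET(fd, &readable); if (fd > maxfd) maxfd = fd; }

        select(maxfd + 1, &readable, nullptr, nullptr, nullptr);  // block until something is ready

        if (FD_ISSET(listen_fd, &readable))
            clients.push_back(accept(listen_fd, nullptr, nullptr));  // new connection

        for (auto it = clients.begin(); it != clients.end();) {
            if (FD_ISSET(*it, &readable)) {
                char buf[4096];
                ssize_t n = read(*it, buf, sizeof buf);
                if (n <= 0) { close(*it); it = clients.erase(it); continue; }  // client gone
                write(*it, buf, n);              // echo it back
            }
            ++it;
        }
    }
}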

Increasing SSL handshaking performance

I've got a short-lived client process that talks to a server over SSL. The process is invoked frequently and only runs for a short time (typically less than 1 second). It is intended to be used as part of a shell script that performs larger tasks, and may be invoked quite often.
The SSL handshaking it performs each time it starts up is showing up as a significant performance bottleneck in my tests and I'd like to reduce this if possible.
One thing that comes to mind is taking the session ID and storing it somewhere (kind of like a cookie), then re-using it on the next invocation; however, this makes me uneasy, as I think there could be some security concerns with doing so.
So, I've got a couple of questions,
Is this a bad idea?
Is this even possible using OpenSSL?
Are there any better ways to speed up the SSL handshaking process?
After the handshake, you can get the SSL session information from your connection with SSL_get_session(). You can then use i2d_SSL_SESSION() to serialise it into a form that can be written to disk.
When you next want to connect to the same server, you can load the session information from disk, deserialise it with d2i_SSL_SESSION() and use SSL_set_session() to set it (prior to SSL_connect()).
The on-disk SSL session should be readable only by the user that the tool runs as, and stale sessions should be overwritten and removed frequently.
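A minimal sketch of that flow, assuming you already have an SSL_CTX and a connected TCP socket, and using a hypothetical cache file named session.der; error handling is omitted, and note that with TLS 1.3 the session ticket only arrives after the handshake, so the session may need to be captured a little later (e.g. after the first application read):

// Sketch: OpenSSL client-side session resumption across process invocations.
// "session.der" is a hypothetical cache file, readable only by this user.
#include <openssl/ssl.h>
#include <cstdio>
#include <vector>

void save_session(SSL* ssl, const char* path) {
    SSL_SESSION* sess = SSL_get1_session(ssl);          // refcounted copy of the session
    int len = i2d_SSL_SESSION(sess, nullptr);           // ask for the encoded size
    std::vector<unsigned char> der(len);
    unsigned char* p = der.data();
    i2d_SSL_SESSION(sess, &p);                          // serialise to DER
    FILE* f = std::fopen(path, "wb");
    std::fwrite(der.data(), 1, der.size(), f);
    std::fclose(f);
    SSL_SESSION_free(sess);
}

bool load_session(SSL* ssl, const char* path) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;                               // no cached session yet
    std::fseek(f, 0, SEEK_END);
    long len = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    std::vector<unsigned char> der(len);
    std::fread(der.data(), 1, der.size(), f);
    std::fclose(f);
    const unsigned char* p = der.data();
    SSL_SESSION* sess = d2i_SSL_SESSION(nullptr, &p, len);  // deserialise
    if (!sess) return false;
    SSL_set_session(ssl, sess);                         // must happen before SSL_connect()
    SSL_SESSION_free(sess);                             // the SSL object keeps its own reference
    return true;
}

// Usage on each invocation (ctx and fd are assumed to already exist):
//   SSL* ssl = SSL_new(ctx);
//   SSL_set_fd(ssl, fd);
//   load_session(ssl, "session.der");
//   SSL_connect(ssl);                      // resumes if the server accepts the session
//   if (!SSL_session_reused(ssl))          // a full handshake happened, refresh the cache
//       save_session(ssl, "session.der");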
You should be able to use a session cache securely (which OpenSSL supports); see the documentation for SSL_CTX_set_session_cache_mode, SSL_set_session and SSL_session_reused for more information on how this is achieved.
Could you perhaps use a persistent connection, so the setup is a one-time cost?
You could abstract away the connection logic so your client code still thinks it's doing a connect/process/disconnect cycle.
Interestingly enough, I encountered an issue with OpenSSL handshakes just today. The implementation of RAND_poll on Windows uses the Windows heap APIs as a source of random entropy.
Unfortunately, due to a "bug fix" in Windows 7 (and Server 2008), the heap enumeration APIs (which are debugging APIs, after all) can now take over a second per call once the heap is full of allocations. This means that both SSL connects and accepts can take anywhere from one second to more than a few minutes.
The ticket contains some good suggestions on how to patch OpenSSL to achieve far, far faster handshakes.

How can I simulate a hung web service?

I'm trying to test failure modes of software that interacts with a web service, and I've already had issues reported where problems occur if the software doesn't get a timely response (e.g., it's left waiting a minute or longer). I'd like to simulate this so that I can track down and fix the issues myself, but unplugging the network connection doesn't do the trick, because it returns immediately with a no-route-found error.
What I'd like to know is, is there a simple way I can make a CGI script that accepts a connection but just sits there, keeping the connection alive for several minutes, without doing a while (true) {} type of loop?
How about letting the script sleep for some (very long) time?
I don't know what language you are using for your scripting, but in .NET you could do something like Thread.Sleep(60000); to block for a full minute.
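If a compiled CGI program is an option, the same idea looks roughly like this in C++ (the two-minute delay and the plain-text body are arbitrary choices, and the web server's own CGI timeout may need to be raised so it doesn't kill the request first):

// Minimal CGI-style program that simulates a hung web service:
// it accepts the request, then simply sleeps before sending any response.
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    // Hold the connection open for two minutes so the client under test
    // has to cope with a very slow response.
    std::this_thread::sleep_for(std::chrono::minutes(2));

    // Only now emit the CGI headers and a trivial body.
    std::cout << "Content-Type: text/plain\r\n\r\n"
              << "Sorry for the wait.\n";
    return 0;
}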
Fiddler is excellent for this sort of thing. You can simulate slow connections and, if you want, get it to "break" when a request comes in, so you can simulate a response that never returns.
Go get it from here...
http://www.fiddlertool.com/fiddler/
You will have to idle in some way, since the connection will be closed as soon as your CGI script returns.
If your network equipment supports throttling, you might want to limit outgoing traffic to something ridiculously low.