I was using Jetty 7.6.5 in one of my projects, and now we want to upgrade to Jetty 9.4.7. I found that multiple classes have been removed or changed in the 9.4.7 version.
Example:
httpClient.setConnectorType(HttpClient.CONNECTOR_SELECT_CHANNEL);
ExecutorThreadPool pool = new ExecutorThreadPool(execSvc);
httpClient.setThreadPool(pool);
httpClient.setTimeout(1000);
This code does not compile against Jetty 9. Please help me figure out how to fix it.
Let's answer your questions first:
Connector types do not exist in the same way as they did in Jetty 7.x. The default is NIO-based.
Executors / thread pools now have to be configured on the client before it is started. (Don't set small thread pools!)
There are many flavors of timeout you can set on the HttpClient itself: do you want an idle timeout? A connect timeout? A DNS lookup timeout? Check the javadoc for details.
There are also per-request timeouts available. Again, check the javadoc for details on which kind you want (there's a rough sketch of both below).
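Roughly, the Jetty 9.4 equivalent of your setup looks like the sketch below. Treat it as a starting point, not a definitive recipe: the class name, pool choice and timeout values are placeholders, and you should verify the setters against the 9.4 javadoc for org.eclipse.jetty.client.HttpClient.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import org.eclipse.jetty.client.HttpClient;

    public class ClientSetup {
        public static HttpClient newStartedClient() throws Exception {
            ExecutorService execSvc = Executors.newCachedThreadPool();

            HttpClient httpClient = new HttpClient();        // NIO-based; there is no connector type to choose anymore
            httpClient.setExecutor(execSvc);                 // configure the executor before start()
            httpClient.setConnectTimeout(15_000);            // ms allowed to establish a connection
            httpClient.setIdleTimeout(30_000);               // ms a connection may sit idle before being closed
            httpClient.setAddressResolutionTimeout(15_000);  // ms allowed for DNS lookups
            httpClient.start();                              // start once, reuse it for many requests
            return httpClient;
        }
    }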
Going from the Jetty 7.6.x series to the 9.4.x series is a huge jump. You are skipping nearly 8 years of heavy development and change in your usage (7.x is 2009 vintage; 9.4.x is new as of 2017).
The techniques and mechanisms you were using in the past are likely no longer present.
This is because HttpClient has been updated significantly for HTTP/1.1, HTTP/2, ALPN, Unix sockets, FCGI, WebSocket, etc.
The most important point is to treat the new HttpClient like a web browser: start it once, leave it running, and perform as many requests as you like against it. The worst thing you can do is start it, perform a request or two, then stop it. That kind of usage is not supported and can lead to strange issues with memory/threading/etc. Start it once when you first need it, and don't stop/shutdown that HttpClient instance until your application stops.
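For example, a minimal usage sketch against a long-lived client might look like this (it reuses the hypothetical ClientSetup helper from the sketch above; the URL and timeout are placeholders):

    import java.util.concurrent.TimeUnit;

    import org.eclipse.jetty.client.HttpClient;
    import org.eclipse.jetty.client.api.ContentResponse;

    public class ClientUsage {
        public static void main(String[] args) throws Exception {
            HttpClient httpClient = ClientSetup.newStartedClient();  // started once, kept for the app's lifetime
            try {
                for (int i = 0; i < 3; i++) {
                    ContentResponse response = httpClient.newRequest("http://example.com/api/ping")
                            .timeout(5, TimeUnit.SECONDS)            // per-request total timeout
                            .send();
                    System.out.println(response.getStatus() + " " + response.getContentAsString());
                }
            } finally {
                httpClient.stop();  // only when the application itself shuts down
            }
        }
    }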
I am using a ColdFusion MX8 server, and one of the scheduled tasks had been running for 2 years, but suddenly, since 01/12/2014, the scheduled tasks are not running. When I browse the file in a browser, the file runs successfully without error.
I am not sure whether there is an update or license expiration problem. I am aware that in the middle of this year Adobe ended support for ColdFusion 8.
The most common cause of this kind of problem is external to the server. When you say you browsed to the file and it worked in a browser, it is very important to know whether that test was performed on the server desktop. Knowing that you can browse to the file from your desktop or laptop is of little value.
The most common source of issues like this is a change in the DNS or network stack that is interfering with resolution. For example, if the internal DNS serving your DMZ suddenly starts serving the "external" address, suddenly your server can't browse to your domain. Or the IP served for the domain in question goes from being 127.0.0.1 to some other IP that the server can't access correctly because of a reverse proxy, load balancer, or some other rule. Finally, sometimes Apache or IIS is altered so that an IP that was previously serviced (127.0.0.1 being the most common example) no longer responds.
If it is something intrinsic to the scheduler service, then Frank's advice is pretty good - especially look for "proxy scheduler" entries in the log; they can give you good clues. I would also log the results of a scheduled task to a file, then check the file. If it exists, then your scheduled tasks ARE running - they are just not succeeding. Good luck!
I've seen the cf scheduling service crash in CF8. The rest of CF is unaffected.
Have you tried restarting the server?
Here are your concerns:
Your file (works, since you tested it manually).
Your scheduled task (failed).
Your ColdFusion application/service (any changes here?).
Your server (what about changes at this level?).
To test your problem, create a duplicate task and schedule it. Leave the other one in place (maybe set your new one to run earlier). Use the same file too. See if it completes.
If it doesn't, then you have a larger problem. Since the ColdFusion server sits on top of the JVM, there could be something happening there. Things just don't stop working unless something got corrupted or you got compromised. If you hardened your server by rearranging/renaming the file structure to make it more secure, that would break your task.
So going back: if your test schedule works, then determine what is different between the two. Note that you have logging capabilities; see "Logging abilities for CF8".
If you are not directly in charge of maintaining this server, then I would recommend asking around to see if there was recent maintenance, and if so, what was done to the server.
I have already used Faye with Ruby on Rails; it's almost at zero cost for me, because I'm running Faye on another server connected to my Rails app.
However, I have faced some problems: for example, when a query takes too long on the Rails server, after a while the Faye connection fails and raises an exception.
Now I'm looking into ActionController::Live. Most of the implementations use Redis, which will be a bit costly for my startup; I've also realized I can't do subscribe/publish-style things with ActionController::Live.
My question: should I move over to ActionController::Live or stick with Faye? These are the things I want to accomplish:
Updates after commenting/feed
Notification system, based on pub/sub, similar to Faye.
Exception handling
Scalability: more users means more connections
I know that Faye uses Bayeux, while ActionController::Live uses SSE over HTTP.
Should I consider anything related to Socket.IO or SockJS?
I have already read through some of the questions about this topic on here, like:
Replace Faye with rails 4 server side events? Faye VS rails 4 streaming?
But I need more info:
Here are some notes on why I would stick with Faye, which might bring you closer to an answer to this question:
Browser compatibility
As you read in the related stackoverflow question, Faye has better browser compatibility.
Stability
Rails::Live functionality doesn't seem to be very stable yet. There's currently active development on Rails SSE. As an example, it's quite likely that you'll be affected by this issue.
Threading & blocking vs asynchronous non-blocking
Do you use multi-threading in your application? If you don't, I definitely wouldn't introduce it just for Rails::Live, as it opens up the possibility of non-thread-safe gem issues and limits your server choices.
If you do have multi-threading, each client will keep a thread open to your application. If you run out of threads, your application will be unresponsive/dead. Consider how many threads you need to cater for peak times, with users having multiple browser tabs open, or even DoS attacks where someone opens up a huge number of idle SSE/websocket connections to reach your maximum and take your app down. If you set a high maximum thread count to support many idle connections, you open up the possibility of having that many non-idle threads, which could have its own problems. No SSE/websockets and no comet/long polling is much safer for blocking apps.
From what I understand, your setup runs Faye separately. The Faye server runs Ruby EventMachine or Node.js, which are both asynchronous and non-blocking and do not use a thread for each open connection. It can handle a huge number of concurrent connections without problems.
My opinion is that a normal blocking Rails web application with a separate asynchronous non-blocking server for connections that stay open (to pass messages and make the app live) is the best of both worlds. This is what you have with Rails + Faye.
Update: Action Cable was announced at RailsConf 2015. It runs non-blocking as described above, but it's an integrated official Rails solution. Having a single framework with a massive community, with an integrated non-blocking connection handler for websockets that you can run and configure separately while everything works "out of the box", is a big advantage for Rails.
From the Action Cable readme:
Action Cable is powered by a combination of EventMachine and threads. The framework plumbing needed for connection handling is handled in the EventMachine loop, but the actual channel, user-specified, work is handled in a normal Ruby thread. This means you can use all your regular Rails models with no problem, as long as you haven't committed any thread-safety sins.
To learn more you can read up on ActionCable & Underlying architecture.
I have an Apache2 and Django (mod_wsgi) setup that provides a RESTful API. I have a set of automated tests for this, that executes ~1000 API requests (pure http GET/POST/PUT/DELETE) in sequential order.
The problem is, for every 80 requests or so, I get a strange lag/timeout for exactly 5s or 10s. See timestamp examples here:
Request 1: 2013-08-30T03:49:20.915
Response 1: 2013-08-30T03:49:30.940
Request 2: 2013-08-30T03:50:32.559
Response 2: 2013-08-30T03:50:37.597
I can't figure out why this happens. I have an Apache config with KeepAlive Off (a recommended setting for Django) but otherwise a standard install for Ubuntu 12.04 LTS.
I'm running the tests from the same server the web server is on. I first thought this was some kind of DNS cache issue, but I've added the hostname I'm requesting to /etc/hosts and the problem persists.
The system is idle and has plenty of CPU and memory when these lags/timeouts happen.
The lag is not specific to a certain request (URL); it seems fairly random.
Considering that it's always exactly 5 s or 10 s to the millisecond, it feels like some specific setting somewhere is causing this.
In case it provides some insight, watch my talk from PyCon US.
http://lanyrd.com/2013/pycon/scdyzk/
The talk deals with things like process churn and startup costs. One thing you shouldn't do is set maximum requests if you don't really need it.
Also consider trying New Relic to help diagnose where the issue is. That will save a lot of guessing about whether it is a web application or backend service infrastructure issue.
As far as seeing how such monitoring can help, watch another one of my PyCon talks.
http://lanyrd.com/2012/pycon/spcdg/
This was a DNS issue; adding the domain name I used locally to /etc/hosts actually solved the problem. I just hadn't rebooted the server for the changes to take effect. I thought restarting networking would take care of that, but apparently not.
I have been working with Spring web applications using Jetty/Tomcat app servers for around two years now; however, the thing that still eludes me is how multiple requests are handled in these servers. I understand that Spring is helpful for making singletons, but my understanding is limited to just that.
Can someone point me to a good resource that can help me understand how multiple requests are handled?
This can be answered at so many levels that I have been staring at it for two days trying to figure out how to answer it... so I'll take a somewhat high-level shot at it.
There is this server port that Jetty listens on and some number of acceptor threads whose job it is to get connection objects made between the client and server side. Once you have that connection, it flows through the Jetty handler architecture, doing things like authentication perhaps, or pulling off a session ID and attaching a session object to the request. Then it works its way into the servlet handler, the appropriate servlet is found, and you start dealing with the servlet API. At that point you have a thread allocated to your request for all of the time you are in the servlet API, at least under Servlet 2.5. In Servlet 3.0 you have some async mechanisms available to you, or you can use jetty-continuations as a way to get async support on the Servlet 2.5 API.
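To make those moving parts a bit more concrete, here is a hedged sketch of an embedded Jetty 9 server where the thread pool, acceptor count and selector count are set explicitly. The port, paths, pool sizes and the HelloServlet class are all placeholders for illustration:

    import java.io.IOException;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.util.thread.QueuedThreadPool;

    public class EmbeddedJettySketch {

        // Placeholder servlet: a pooled thread runs doGet() for the life of the request.
        public static class HelloServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                resp.getWriter().println("hello");
            }
        }

        public static void main(String[] args) throws Exception {
            // The pool that request-handling threads are taken from.
            QueuedThreadPool threadPool = new QueuedThreadPool(200, 8);
            Server server = new Server(threadPool);

            // One acceptor thread accepts new connections; two selector threads
            // watch the accepted connections for I/O readiness.
            ServerConnector connector = new ServerConnector(server, 1, 2);
            connector.setPort(8080);
            server.addConnector(connector);

            // The handler chain ends at the servlet handler, which dispatches to the servlet.
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");
            context.addServlet(HelloServlet.class, "/hello");
            server.setHandler(context);

            server.start();
            server.join();
        }
    }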
Anyway, there is a thread pool that the server uses to allocate threads to those connections; those are ultimately the threads spending all their time in the servlet API. The jetty-continuations API and the newer Servlet 3.0 support provide mechanisms to release threads back to the primary thread pool so they can spend time accepting and processing other requests.
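As a sketch of that "release the thread back to the pool" idea, here is a minimal Servlet 3.0 async servlet. The URL pattern, the scheduled executor and the delays are made up for illustration; the point is only that the container thread returns to the pool when doGet() exits, and the response is completed later from another thread:

    import java.io.IOException;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/slow", asyncSupported = true)
    public class SlowServlet extends HttpServlet {

        private final ScheduledExecutorService worker = Executors.newScheduledThreadPool(4);

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response) {
            AsyncContext async = request.startAsync();  // detach the request from the container thread
            async.setTimeout(30_000);                   // ms before the container times the request out

            // Simulate a slow backend call without holding a pool thread.
            worker.schedule(() -> {
                try {
                    async.getResponse().getWriter().println("done");
                } catch (IOException ignored) {
                    // nothing useful to do if the client went away
                } finally {
                    async.complete();                   // hand the connection back to the container
                }
            }, 2, TimeUnit.SECONDS);
        }
    }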
There is obviously a lot more going on under the covers related to usage of the NIO APIs and how Jetty efficiently manages all of this, but maybe this is enough to sate your initial question. If not, feel free to peruse the Jetty docs (http://www.eclipse.org/jetty/documentation/current) or look to the Jetty mailing lists. There has been some discussion of Jetty 9 optimizations, as they relate to what goes on under the covers with HTTP, SPDY, and WebSocket connection handling and processing, in the blogs at Webtide (http://webtide.com/blogs).
(Edited to try to explain better)
We have an agent, written in C++ for Win32. It needs to periodically post information to a server. It must support disconnected operation. That is: the client doesn't always have a connection to the server.
Note: This is for communication between an agent running on desktop PCs, to communicate with a server running somewhere in the enterprise.
This means that the messages to be sent to the server must be queued (so that they can be sent once the connection is available).
We currently use an in-house system that queues messages as individual files on disk, and uses HTTP POST to send them to the server when it's available.
It's starting to show its age, and I'd like to investigate alternatives before I consider updating it.
It must be available by default on Windows XP SP2, Windows Vista and Windows 7, or must be simple to include in our installer.
This product will be installed (by administrators) on a couple of hundred thousand PCs. They'll probably use something like Microsoft SMS or ConfigMgr. In this scenario, "frivolous" prerequisites are frowned upon. This means that, unless the client-side code (or a redistributable) can be included in our installer, the administrator won't be happy. This makes MSMQ a particularly hard sell, because it's not installed by default with XP.
It must be relatively simple to use from C++ on Win32.
Our client is an unmanaged C++ Win32 application. No .NET or Java on the client.
The transport should be HTTP or HTTPS. That is: it must go through firewalls easily; no RPC or DCOM.
It should be relatively reliable, with retries, etc. Protection against replays is a must-have.
It must be scalable -- there's a lot of traffic. Per-message impact on the server should be minimal.
The server end is C#, currently using ASP.NET to implement a simple HTTP POST mechanism.
(The slightly odd one). It must support client-side in-memory queues, so that we can avoid spinning up the hard disk. It must allow flushing to disk periodically.
It must be suitable for use in a proprietary product (i.e. no GPL, etc.).
How is your current solution showing its age?
I would push the logic on to the back end, and make the clients extremely simple.
Messages are simply stored in the file system. Have the client write to c:/queue/{uuid}.tmp. When the file is written, rename it to c:/queue/{uuid}.msg. This makes writing messages to the queue on the client "atomic".
A C++ thread wakes up, scans c:\queue for "*.msg" files, and if it finds one it checks for the server and HTTP POSTs the message to it. When it receives the 200 status back from the server (i.e. the server has got the message), it can delete the file. It only scans for *.msg files; the *.tmp files may still be in the middle of being written, and you'd have a race condition trying to send a .msg file that was still being written. That's what the rename from .tmp is for. I'd also suggest scanning by creation date so early messages go first.
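Your client is C++, but just to pin down the enqueue/drain steps, here is a hedged sketch of the same write-.tmp-then-rename-to-.msg pattern (in Java for brevity; the Sender interface is a hypothetical stand-in for the HTTP POST call, and message ordering is omitted):

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;
    import java.util.UUID;

    public class FileQueue {

        private final Path dir;

        public FileQueue(Path dir) throws IOException {
            this.dir = dir;
            Files.createDirectories(dir);
        }

        // Enqueue: write the payload to {uuid}.tmp, then rename it to {uuid}.msg,
        // so the sender never sees a partially written message.
        public void enqueue(byte[] message) throws IOException {
            String id = UUID.randomUUID().toString();
            Path tmp = dir.resolve(id + ".tmp");
            Files.write(tmp, message);
            Files.move(tmp, dir.resolve(id + ".msg"), StandardCopyOption.ATOMIC_MOVE);
        }

        // Drain: scan only *.msg files, POST each one, delete it only after the server acks.
        public void drain(Sender sender) throws IOException {
            try (DirectoryStream<Path> msgs = Files.newDirectoryStream(dir, "*.msg")) {
                for (Path msg : msgs) {
                    if (sender.post(Files.readAllBytes(msg))) {  // true == HTTP 200 from the server
                        Files.delete(msg);                       // acknowledged, remove from the queue
                    }                                            // otherwise leave it for the next pass
                }
            }
        }

        // Hypothetical transport hook; in the real agent this is the HTTP POST.
        public interface Sender {
            boolean post(byte[] body);
        }
    }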
Your server receives the message, and here it can do any necessary dupe checking. Push this burden onto the server to centralize it. You could simply record every UUID for every message to do duplicate elimination. If that list gets too long (I don't know your traffic volume), perhaps you can cull items older than 30 days (I also don't know how long your clients can remain offline).
This system is simple, but pretty robust. If the file-sending thread gets an error, it will simply try to send the file next time. The only time you should get a duplicate message is in the window between when the client gets the 200 ack from the server and when it deletes the file. If the client shuts down or crashes at that point, you will have a file that has been sent but not removed from the queue.
If your clients are stable, this is a pretty low risk. With dupe checking based on the message ID, you can mitigate that at the cost of some bookkeeping; maintaining a list of UUIDs isn't spectacularly daunting, but again it depends on your message volume and other performance requirements (a tiny sketch of that bookkeeping follows below).
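Your server side is C#/ASP.NET, so treat this Java snippet purely as a sketch of the idea: remember each message UUID with a timestamp, reject repeats, and periodically cull entries older than the retention window (30 days here is just the example figure from above):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.concurrent.ConcurrentHashMap;

    public class SeenMessages {

        private final ConcurrentHashMap<String, Instant> seen = new ConcurrentHashMap<>();
        private final Duration retention;

        public SeenMessages(Duration retention) {
            this.retention = retention;  // e.g. Duration.ofDays(30)
        }

        // Returns true the first time a UUID is offered, false for a duplicate/replayed message.
        public boolean firstTimeSeen(String uuid) {
            return seen.putIfAbsent(uuid, Instant.now()) == null;
        }

        // Run from a scheduled job so the table doesn't grow without bound.
        public void cull() {
            Instant cutoff = Instant.now().minus(retention);
            seen.values().removeIf(timestamp -> timestamp.isBefore(cutoff));
        }
    }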
The fact that you are allowed to work "offline" suggests you have some "slack" in your absolute messaging performance.
To be honest, the requirements listed don't make a lot of sense and show you have a long way to go in your MQ learning. Given that, if you don't want to use MSMQ (probably the easiest overall on Windows -- but with [IMO severe] limitations), then you should look into:
qpid - Decent use of AMQP standard
zeromq - the best technically, IMO, but it also requires the most familiarity with MQ technologies
I'd recommend rabbitmq too, but that's an Erlang server, and the last time I looked it didn't have usable C or C++ libraries. Still, if you are shopping for an MQ, take a look at it...
[EDIT]
I've gone back and reread your reqs as well as some of your comments and think that, for you, perhaps client MQ -> server is not your best option. I would consider letting your client -> server operations be plain HTTP POST or SOAP, and have the HTTP endpoint in turn queue messages on your MQ backend. IOW, abstract the MQ client away into an architecture you have more control over. Then your C++ client would simply speak HTTP (easy), and your HTTP service (likely C# / .NET from reading your comments) can interact with any MQ backend of your choice. If all your HTTP endpoint does is spawn MQ messages, it'll be pretty darned lightweight and can scale through all the traditional load-balancing techniques.
The last time I wanted to do any messaging I used C# and MSMQ. There are MSMQ libraries available that make using MSMQ very easy. It's free to install on both your servers and has never lost a message to this day. It handles reboots etc. all by itself. It's a thing of beauty, and hundreds of thousands of messages are processed daily.
I'm not sure why you ruled out MSMQ and I didn't get point 2.
Quite often for queues we just dump record data into a database table and another process lifts rows out of the table periodically.
How about using the Asynchronous Agents Library that ships with Visual Studio 2010 (part of the C++ Concurrency Runtime)? It is still in beta, though.
http://msdn.microsoft.com/en-us/library/dd492627(VS.100).aspx