Embed a non-web application in Jetty

How can I configure Jetty 6 to start a non-web application (not a servlet)? My Java app is a RabbitMQ consumer listening for AMQP messages over TCP. I could have Jetty's init() call my Main entry point. Is there a better way to do this?

Why not provide a trivial servlet with an init() method and invoke your application from within there? i.e. wrap it within a servlet wrapper that does next to nothing.
It doesn't have to respond to GETs/POSTs etc., although you'd probably find it useful to report application status via a simple HTML page.
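A minimal sketch of what such a wrapper might look like, assuming your application exposes static start/stop entry points (the ConsumerMain class and its methods are placeholders, not from the question):

    // Hypothetical bootstrap servlet: starts the consumer on init, stops it on destroy.
    public class ConsumerBootstrapServlet extends javax.servlet.http.HttpServlet {
        @Override
        public void init() throws javax.servlet.ServletException {
            com.example.ConsumerMain.start();   // open the AMQP connection
        }

        @Override
        public void destroy() {
            com.example.ConsumerMain.stop();    // close it again on shutdown
        }
    }

Mark it load-on-startup in web.xml so the container calls init() at deployment time rather than on the first request.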

You'll need to provide a little more info if you want a complete answer, but there are a few approaches I could suggest that will give different behaviours (you'll need to pick the right one for your use case):
1. Just put the right code in your jetty.xml file. The XML file is a pretty complete execution language, so you can simply call methods on objects. An appropriate static method, along with a <Call> tag, should do the trick.
The downside is that you're not really getting anything from Jetty - you're just tying your startup method into the same startup process that Jetty uses.
2. Build a component that implements the Jetty LifeCycle interface (your best option is to extend AbstractLifeCycle), and then call Server.addLifeCycle() - see the sketch after this list.
That will allow you to open your port when Jetty starts up, shut down cleanly when Jetty stops, etc.
But all you get is that lifecycle. You don't get anything around deployment.
3. Same as option 1, but put it in jetty-web.xml (or jetty-env.xml), which allows you to tie it into the deployment of a WAR file.
It doesn't buy you much over option 1, but if you're trying to deploy an application to an existing Jetty setup, it might help.
4. Same as option 2, but using jetty-web.xml. I'm not sure how well that would work, since I don't think you can attach a LifeCycle to a WebAppContext, but it might work OK - you'd need to do more investigation on that.
5. As per Brian's solution, simply write a servlet with an init() method, set it to initialise on startup, and don't map it to any URLs. Put a call to your entry method inside that init().
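For option 2, a rough sketch of what that component could look like with the Jetty 6 packages (ConsumerMain and its start/stop methods are placeholders for your own code):

    import org.mortbay.component.AbstractLifeCycle;
    import org.mortbay.jetty.Server;

    public class ConsumerLifeCycle extends AbstractLifeCycle {
        @Override
        protected void doStart() throws Exception {
            com.example.ConsumerMain.start();   // open the AMQP connection when Jetty starts
        }

        @Override
        protected void doStop() throws Exception {
            com.example.ConsumerMain.stop();    // close it cleanly when Jetty stops
        }
    }

    class StartEmbedded {
        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);
            server.addLifeCycle(new ConsumerLifeCycle());   // tie the consumer to the server's lifecycle
            server.start();
            server.join();
        }
    }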

Related

Codename One - how to add a function to an existing web service

With the CN1 web service wizard, I created a working server project that I run on my local Tomcat installation. In addition, the CN1 project has the WebServiceProxy.java class that I use to call the web services. So far so good.
During development, there is now a need for a new function within the web service that I did not previously think of. So instead of recreating my whole server using the wizard, I thought I would simply add some code to the files that were created.
On the client side:
WebServiceProxy.java - add the WebServiceProxyCall.WSDefinition and add the function call in both sync and async fashion. The arguments and return type match the definition.
On the server side:
WebServiceProxyServer.java - add the function definition with the required functionality (this works, as I have debugged it locally on the server side).
CN1WebServiceServlet.java - add the definition and add the if statement matching the service name.
When debugging the server and calling the service from the client, it does not reach the breakpoint of the doPost method, so something is terribly off.
What else do I need to change when manually adding a new web service function? Or is this so complicated that I should rather use the web service wizard, create the new server from scratch, and copy all the other functionality from my old server over to the new one?
Thanks and best regards
There is currently no way to do this seamlessly, since the generated protocol is binary for the fastest performance.
The solution is to generate a new class; we usually use the naming convention V2, V3 and onward. That way the first web service is still 100% compatible with devices in production, and you can create a new "more correct" protocol for the newer devices. The implementation classes can derive from one another to increase code reuse.
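A hypothetical illustration of that versioning idea (the class and method names here are made up, not wizard-generated code):

    // V1 stays untouched, so clients already in production keep working.
    public class WebServiceServerV1 {
        public String existingFunction(String arg) {
            return "result for " + arg;
        }
    }

    // V2 derives from V1, reusing the existing implementation and adding only the new function.
    class WebServiceServerV2 extends WebServiceServerV1 {
        public int newFunction(String arg) {
            return arg.length();
        }
    }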

GeoServer is unable to accept concurrent requests when processing files

I am trying to set up GeoServer as a backend to our MVC app. GeoServer works great... except it only lets me do one thing at a time. If I am processing a shapefile, the REST interface and GUI lock up until the job is done processing.
I know there is the option to cluster a GeoServer configuration, but that would only be load balancing, so instead of one read/write operation at a time I would have two... but we need to scale this up to at least 20 concurrent tasks at a time.
All of the references I've seen on the internet talk about limiting the number of concurrent connections, but here only one is ever allowed.
Obviously GeoServer is used in production environments that handle more than one request at the same time; I am just stumped about how to make that happen.
A few weeks ago, my colleague sent the email below to the GeoServer development team. The problem was described as a configuration lock, and we were told that by changing a variable we could release it. The only place I saw this variable was in the source code on GitHub.
Is there a way to specify in one of GeoServer's config files to turn these locks off so I can do concurrent reads/writes? If anybody out there has encountered this before, please help! Thanks!
On Fri, May 16, 2014 at 7:34 PM, Sean Winstead wrote:
Hi,
We are using GeoServer 2.5 RC2. When uploading a shape file via the REST API, the server does not respond to other requests until after the shape file has been processed.
For example, if I start a file upload and then click on the Layers menu item in the web app, the response for the Layers page is not received until after the file upload and processing have completed.
I researched the issue but did not find a suitable cause/answer. I did install the control-flow extension and created a controlflow.properties file in the data directory, but this did not appear to have any effect.
How do I diagnose the cause of this behavior?
Simple, it's the configuration lock. Our configuration subsystem is not able to handle correct concurrent writes, or reads during writes, so there is a whole-instance read/write lock that is taken every time you use the REST API and the user interface; nothing can be done while the lock is in place.
If you want, you can disable it using the system variable GeoServerConfigurationLock.enabled:
-DGeoServerConfigurationLock.enabled=true
but of course we cannot predict what will happen to the configuration if you do that.
Cheers
Andrea
-DGeoServerConfigurationLock.enabled=true refers to a startup parameter given to the java command when GeoServer is first started. Looking at GeoServer's bin/startup.sh and bin\startup.bat, the approved way to do this is via an environment variable named JAVA_OPTS. You will see lines like
if [ -z "$JAVA_OPTS" ]; then
  export JAVA_OPTS="-XX:MaxPermSize=128m"
fi
in startup.sh and
if "%JAVA_OPTS%" == "" (set JAVA_OPTS=-XX:MaxPermSize=128m)
in startup.bat. You will need to make those
... JAVA_OPTS="-DGeoServerConfigurationLock.enabled=true -XX:MaxPermSize=128m"
or define that JAVA_OPTS environment variable similarly before GeoServer is started.
The development team's response of "of course we cannot predict what will happen to the configuration if you do that", however, suggests to me that there may be concurrency issues lurking, which are likely to surface more frequently as you scale up. Maybe you want to think about disconnecting the backend processing of those shape files from the REST requests, using some queueing mechanism, instead of disabling GeoServer's configuration lock.
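A bare-bones sketch of that queueing idea on the MVC-app side, just to show the shape of it (all names are illustrative and none of this is GeoServer API):

    import java.io.File;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ShapefileUploadQueue {
        // A single worker thread serialises the REST uploads, so GeoServer's
        // configuration lock is only ever contended by one writer while the
        // MVC app itself stays responsive.
        private final ExecutorService worker = Executors.newSingleThreadExecutor();

        public void submit(final File shapefile) {
            worker.submit(new Runnable() {
                public void run() {
                    uploadToGeoServer(shapefile);
                }
            });
        }

        private void uploadToGeoServer(File shapefile) {
            // Placeholder: perform the actual REST upload of the shapefile here
            // with whatever HTTP client the MVC app already uses.
        }

        public void shutdown() {
            worker.shutdown();
        }
    }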
Thank you, I figured it out. We didn't even need to do this: the problem was that we were only using one login for the REST interface (admin) instead of making a new user for each repository. Now the locking issue doesn't happen.

BizTalk and the best way to call a web service

I am writing a BizTalk orchestration that will need to call a web service, probably multiple web services, and probably more than once. I see two options before me: one, consume the WSDL in a separate code project and call the web services from code in an expression shape; two, consume it from BizTalk, get the schemas, etc., and call through request/response ports. What is the best practice here? On the one hand, if the WSDL is updated it will be easier to update the code than the schemas and ports, and it seems like a lot of clutter and work to build enough ports for multiple web service calls. On the other hand, all the tuning you can do at the port level (retries being one) makes it robust to call a web service.
Also see this question here, which discusses a third option, viz. using Add Service Reference in BizTalk as an alternative method to import the XSDs.
IMO you would be defeating the point of using BizTalk by using .NET proxies to handle integration. For example:
You are hard-coding the protocol (WCF), and now need to marshal request and response messages to/from your custom code. With a send port, any request-response mechanism can be configured at deployment time - especially useful for unit and integration testing.
You will be losing all of the benefit of BizTalk's message delivery mechanisms, such as retries, backup transports, resuming suspended messages, different maps for different ports, and arguably the whole pub-sub ability (e.g. what if multiple listeners want to listen to the responses from the called web services?).
Where will you store the WCF serviceModel config settings, such as the endpoint etc.? i.e. you've lost the flexibility of binding files.
etc.
So, TL;DR: always use the WCF adapters in BizTalk.
However, that said, I am in agreement that updating the generated items when the consumed service changes can be messy. FWIW, we mitigate some of this as follows:
Always create a separate, empty folder into which to import all of the generated artifacts.
Leave all the generated items 'as-is', i.e. don't be tempted to move the dummy .odx or delete it (since it has the preconfigured Port Types).
Unfortunately this leaves the actions below, which still need to be applied manually:
Remember to change the visibility of the Port Types to public if the artifacts are in a separate assembly to your orchestrations
Promoted and distinguished properties on the imported schemas need to be reapplied (e.g. remember to take screenshots to document them after any change). Possibly this can be simplified or automated by saving and re-pasting the <xs:annotation> section of the schema.
If you are using message contracts in your WCF service, and are reusing the same referenced messages across multiple applications, you will need to manually delete the duplicates created by Add Generated Items and then re-reference the existing schemas (e.g. we have a standard 'response' message back to all BizTalk calls).
Interestingly, you can in fact have a mixture of both. Check out this post by Saravana Kumar.
It uses a passthrough receive and consumes the web service using the DLL on the send port, without going through the pain of creating schemas and web ports.
This gives you all the power of BizTalk (routing the response, send port configuration, etc.) and still the flexibility to change the schema without much fuss.

Programmatically query deployed Jetty 8 applications

Is there a way I can query the list of deployed applications in a Jetty 8 server in code? For instance, can I inject the DeploymentManager and query it?
From a handler, you should have a reference to the Server object, so you should be able to dig around for it from there, or just pass in a reference to it when you create the handler and are knitting things together. Handlers are simple to write and wire up. Look at the StatisticsHandler usage as an example if you like.
From a servlet, that is not really appropriate, since there is a webapp classloader and webapps are classloader-isolated for a reason.
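A rough sketch of the handler approach (the Jetty 8 class and method names below are from memory, so treat them as assumptions and verify against your Jetty version):

    import org.eclipse.jetty.deploy.DeploymentManager;
    import org.eclipse.jetty.server.Handler;
    import org.eclipse.jetty.server.Request;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.handler.AbstractHandler;
    import org.eclipse.jetty.webapp.WebAppContext;

    public class DeployedAppsHandler extends AbstractHandler {
        public void handle(String target, Request baseRequest,
                           javax.servlet.http.HttpServletRequest request,
                           javax.servlet.http.HttpServletResponse response) {
            // A handler has access to the Server it was added to.
            Server server = getServer();

            // Option A: if a DeploymentManager was configured, query it directly.
            for (DeploymentManager dm : server.getBeans(DeploymentManager.class)) {
                System.out.println("Apps known to the DeploymentManager: " + dm.getApps());
            }

            // Option B: walk the handler tree looking for deployed web application contexts.
            for (Handler h : server.getChildHandlersByClass(WebAppContext.class)) {
                System.out.println("Context: " + ((WebAppContext) h).getContextPath());
            }
            // A real handler would also write a response and mark the request handled;
            // this sketch only lists what is deployed.
        }
    }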

clojure/lein/ring: I have two ring handlers doing different things, how do I wrap this into a servlet?

I have a clojure/ring project that has two distinct apps/handlers running on different ports (one for an API and one for a web frontend). They share a lot of code, but each has its own namespace where it does all the work particular to that interface. Now I want to deploy this as a servlet running in Tomcat or something similar (really, it needs to work in any servlet container). I am having trouble, though, because it seems like lein-ring assumes there will be only one handler in the servlet. For instance, in my project.clj I have this line:
:ring {:handler caribou.api/app
:servlet-name "caribou"
:init caribou.api/init}
This is great for the API, but now what about the frontend? Do I need to make another project that includes this one so that it can have its own handler and servlet? Does a servlet always run on one port?
There are basically two things I'm not sure about: I am not coming from a Java background, so I'm not sure what is going on with the servlet approach and what limitations it has, and I am unclear on how exactly Clojure translates into the servlet paradigm, enough to structure this project in a general way.
Any help is appreciated, thanks!
All servlets in the same container are served from the same server and therefore the same port. Usually you identify different servlets by giving them different URI prefixes, such as /servlet1 or /my/servlet.
I don't know if there is anything preventing you from creating separate servlets with Ring, but in general it doesn't seem like a good idea if your entire app is Clojure-based. At the very least, as you have pointed out, the lein-ring plugin enforces that only one servlet is used for the web application.
One thing you could do is create a parent handler that delegates to either the app or the API handler based on the URI. This essentially gives you control without needing to delegate the logic to the Servlet API.