Apache Camel FTP Client Concurrency

I'm having a problem.
I have an application that I'm deploying twice, on two different servers; this application uses the FTP component:
<from uri="ftp://..." />
As I'm deploying it twice (like this), I'm running into concurrency issues: some files on the FTP server are being processed twice, or throw exceptions (FileNotFoundException) when the other node processes them first.
Is there any solution for this?
Thx.

Yes, you can look at setting up those FTP routes in master/slave mode, so only one of them is active at any time. Or you can use a shared idempotent repository as a "lock", so a node can only grab a file if it can get an exclusive lock from that repository.
It's covered in the Camel in Action book, in chapter 17 and in chapter 12 as well.
You can also find some details on the Camel website, though the docs there are not as thorough as the book.
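A minimal sketch of the idempotent-repository option in the Java DSL (the endpoint URI, credentials and the "ftpRepo" bean name are placeholders; the repository just needs to be backed by storage both nodes can reach, such as a shared database):

import org.apache.camel.builder.RouteBuilder;

public class SharedFtpRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // "ftpRepo" must be registered in the Camel registry (for example as a
        // Spring bean) and backed by shared storage, e.g. a JDBC-backed
        // JdbcMessageIdRepository pointing at a database both nodes can see.
        // With idempotent=true, a node only picks up a file if it manages to add
        // the file name to that shared repository first; the other node skips it.
        from("ftp://user@ftpserver/inbox?password=secret"
                + "&idempotent=true"
                + "&idempotentRepository=#ftpRepo")
            .log("Processing ${file:name}")
            .to("direct:processFile"); // hand off to your existing processing route
    }
}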

Related

Suddenly scheduled tasks are not running in coldfusion 8

I am using a ColdFusion MX8 server, and one of the scheduled tasks had been running for 2 years, but suddenly, since 01/12/2014, scheduled tasks are not running. When I browse the file in a browser, the file runs successfully without error.
I am not sure whether there is an update or license expiration problem. I am aware that in the middle of this year Adobe ended support for ColdFusion 8.
The most common cause of this kind of problem is external to the server. When you say you browsed to the file and it worked in a browser, it is very important to know whether that test was performed on the server desktop. Knowing that you can browse to the file from your desktop or laptop is of little value.
The most common source of issues like this is a change in the DNS or network stack that is interfering with resolution. For example, if the internal DNS serving your DMZ suddenly starts serving the "external" address, suddenly your server can't browse to your domain. Or the IP served for the domain in question goes from being 127.0.0.1 to some other IP that the server can't access correctly due to a reverse proxy, load balancer or some other rule. Finally, sometimes Apache or IIS is altered so that an IP that previously was serviced (127.0.0.1 being the most common example) no longer responds.
If it is something intrinsic to the scheduler service then Frank's advice is pretty good - especially look for "proxy scheduler" entries in the log; they can give you good clues. I would also log the results of the scheduled task to a file, then check the file. If it exists, your scheduled tasks ARE running - they are just not succeeding. Good luck!
I've seen the cf scheduling service crash in CF8. The rest of CF is unaffected.
Have you tried restarting the server?
Here are your concerns:
Your file (works, since you tested it manually).
Your scheduled task (failed).
Your ColdFusion application/service (any changes here?).
Your server (what about here?).
To test your problem create a duplicate task and schedule it. Leave the other one in place (maybe set your new one to run earlier). Use the same file too. See if it completes.
If it doesn't, then you have a larger problem. Since the ColdFusion server sits atop the JVM, there could be something happening there. Things just don't stop working unless something got corrupted or you got compromised. If you hardened your server by rearranging/renaming the file structure to make it more secure, that would break your task.
So going back: if your test schedule works, then determine what is different between the two. Note that you have logging capabilities (see "Logging abilities for CF8").
If you are not directly in charge of maintaining this server, then I would recommend asking around to see if there was recent maintenance, and if so, what was done to the server.

Issues with ActiveMQ 3.8.3 (CPP) priorityBackup not working

I am a little new to ActiveMQ, so please bear with me.
I am trying to take advantage of the ActiveMQ priority backup feature for some of my Java and CPP applications. I have two brokers on two different servers (local and remote), and I want the following behavior for my apps.
Always connect to local broker on startup
If local broker goes down, connect to remote
While connected to remote, if local comes back up, we then reconnect to local.
I have had success testing it in the Java apps by simply adding priorityBackup to my URI options,
i.e.
failover:(tcp://local:61616,tcp://remote:61616)?randomize=false&priorityBackup=true
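Roughly, the Java side just hands that URI to the connection factory; a minimal sketch (broker host names and session usage are illustrative):

import javax.jms.Connection;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class PriorityBackupClient {

    public static void main(String[] args) throws Exception {
        // randomize=false keeps the broker order; priorityBackup=true tells the
        // failover transport to prefer the first (local) broker and to switch
        // back to it when it becomes available again.
        String brokerUrl = "failover:(tcp://local:61616,tcp://remote:61616)"
                + "?randomize=false&priorityBackup=true";

        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // ... create producers/consumers on the session as usual ...

        session.close();
        connection.close();
    }
}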
However stuff isn't going as smoothly on the CPP side.
The following works fine in the CPP apps (with basic working failover functionality, i.e. jumping to remote when local goes down):
failover:(tcp://local:61616,tcp://remote:61616)?randomize=false
But updating the URI options with priorityBackup seems to break the failover functionality completely (my apps never fail over to the remote broker; they just stay in some kind of broker-less/limbo state when their local broker goes down):
failover:(tcp://local:61616,tcp://remote:61616)?randomize=false&priorityBackup=true
Is there anything I am missing here? Extra URI options that I should have included?
UPDATE: Transport connector info
<transportConnectors>
<transportConnector name="ClientOpenwire" uri="tcp://0.0.0.0:61616?wireFormat.maxInactivityDuration=7000"/>
<transportConnector name="Broker2BrokerOpenwire" uri="tcp://0.0.0.0:62627?wireFormat.maxInactivityDuration=5000"/>
<transportConnector name="stompConnector" uri="stomp://0.0.0.0:62623"/>
</transportConnectors>
The backup and priorityBackup parameters are handled in completely different ways in the Java and C++ implementations of the library.
The Java implementation works well, but unfortunately the C++ implementation is broken. There are no extra options that can fix this issue; serious changes in the library are required to resolve it.
I was testing this issue using activemq-cpp-library-3.8.3 and brokers in various versions (5.10.0, 5.11.1). The issue is not fixed in the 3.8.4 release.

Geoserver is unable to accept concurrent requests when processing files

I am trying to set up GeoServer as a backend to our MVC app. GeoServer works great... except it only lets me do one thing at a time. If I am processing a shapefile, the REST interface and GUI lock up until the job is done processing.
I know that there is the option to cluster a GeoServer configuration, but that would only be load balancing, so instead of only one read/write operation I would have two... and we need to scale this up to at least 20 concurrent tasks at a time.
All of the references I've seen on the internet talk about locking down the number of concurrent connections, but in my case only 1 is allowed the whole time.
Obviously GeoServer is used in production environments that handle more than 1 request at the same time. I am just stumped about how to make it happen.
A few weeks ago, my colleague sent this email to the GeoServer development team; the problem was described as a configuration lock, and we were told that by changing a variable we could release it. The only place I saw this variable was in the source code on GitHub.
Is there a way to specify, in one of GeoServer's config files, to turn these locks off so I can do concurrent reads/writes? If anybody out there has encountered this before, PLEASE HELP!!! Thanks!
On Fri, May 16, 2014 at 7:34 PM, Sean Winstead wrote:
Hi,
We are using GeoServer 2.5 RC2. When uploading a shapefile via the REST API, the server does not respond to other requests until after the shapefile has been processed.
For example, if I start a file upload and then click on the Layers menu item in the web app, the response for the Layers page is not received until after the file upload and processing have completed.
I researched the issue but did not find a suitable cause/answer. I did install the control flow extension and created a controlflow.properties file in the data directory, but this did not appear to have any effect.
How do I diagnose the cause of this behavior?
Simple, it's the configuration lock. Our configuration subsystem is not able to handle concurrent writes correctly, or reads during writes, so there is a whole-instance read/write lock that is taken every time you use the REST API or the user interface; nothing can be done while the lock is in place.
If you want, you can disable it using the system variable GeoServerConfigurationLock.enabled:
-DGeoServerConfigurationLock.enabled=false
but of course we cannot predict what will happen to the configuration if you do that.
Cheers
Andrea
-DGeoServerConfigurationLock.enabled=false is referring to a startup parameter given to the java command when GeoServer is first started. Looking at GeoServer's bin/startup.sh and bin\startup.bat, the approved way to do this is via an environment variable named JAVA_OPTS. You will see lines like
if [ -z "$JAVA_OPTS" ]; then
export JAVA_OPTS="-XX:MaxPermSize=128m"
fi
in startup.sh and
if "%JAVA_OPTS%" == "" (set JAVA_OPTS=-XX:MaxPermSize=128m)
in startup.bat. You will need to change those to
... JAVA_OPTS="-DGeoServerConfigurationLock.enabled=false -XX:MaxPermSize=128m"
or define that JAVA_OPTS environment variable similarly before GeoServer is started.
The development team's response, "of course we cannot predict what will happen to the configuration if you do that", suggests to me that there may be concurrency issues lurking, which are likely to surface more frequently as you scale up. Maybe you want to think about disconnecting the backend processing of those shapefiles from the REST requests, using some queueing mechanism, instead of disabling GeoServer's configuration lock.
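For illustration, even a simple in-process work queue in front of the GeoServer REST calls would serialize the uploads instead of letting them pile up on the configuration lock. A rough sketch (the RestUploader type is a placeholder for whatever REST client you use, not a GeoServer API):

import java.nio.file.Path;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ShapefileImportQueue {

    // A single worker means at most one REST write hits GeoServer at a time,
    // so uploads queue up here instead of contending for the configuration lock.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    private final RestUploader uploader;

    public ShapefileImportQueue(RestUploader uploader) {
        this.uploader = uploader;
    }

    // Called from the MVC controller; returns immediately.
    public void submit(Path shapefile, String workspace) {
        worker.submit(() -> uploader.uploadShapefile(shapefile, workspace));
    }

    public void shutdown() {
        worker.shutdown();
    }

    // Minimal interface standing in for your (hypothetical) REST client.
    public interface RestUploader {
        void uploadShapefile(Path shapefile, String workspace);
    }
}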
Thank you, I figured it out. We didn't even need to do this: we were only using one login for the REST interface (admin) instead of making a new user for each repository. Now the locking issue doesn't happen.

Asynchronous web services calls with JAX-WS: Use wsimport support for asynchrony or roll my own?

There is an excellent article by Young Yang that explains how to use wsimport to create web service client artifacts that have asynchronous web service calls. Asynchrony requires that the WSDL has the tag
<enableAsyncMapping>true</enableAsyncMapping>
in its bindings section. If you are using the bottom-up approach with JAX-WS annotated Java classes you can't do this directly in the WSDL because the WSDL is a generated artifact on the web server. Instead you use build tools like Ant or Maven to include this binding when wsimport is executed on the WSDL.
The generated client artifacts have asynchronous method calls that return a Future<?> or a Response, which is a Future.
My question, after reading Yang's article, is: why not just roll my own asynchronous web service calls using Executors and Futures? Do the artifacts created by wsimport offer some advantage that I can't see over a roll-your-own approach?
If anyone has experience or insight with both approaches I would appreciate your feedback.
In theory, the generated asynchronous clients wouldn't need to block threads. By passing an AsyncHandler, the system can use NIO to register for an event when the web service call is complete, and it can call that handler. No threads need to block at all.
If you put your synchronous web service call into an executor, it will still end up blocking a thread until the result arrives, although at least this blocking is limited to the thread pool in the executor.
As soon as you have many hundreds of threads floating around, your system performance will degrade due to context switching.
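To make the contrast concrete, here is a rough sketch of the two styles. The HelloPort interface is a hand-written stand-in for what wsimport would generate (real generated ports use JAXB response types rather than String):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import javax.xml.ws.AsyncHandler;
import javax.xml.ws.Response;

public class AsyncCallStyles {

    // Stand-in for a wsimport-generated port; the real one comes from your WSDL.
    interface HelloPort {
        String sayHello(String name);                                       // sync method
        Future<?> sayHelloAsync(String name, AsyncHandler<String> handler); // async mapping
    }

    static void demo(HelloPort port) {
        // Style 1: generated async call with a callback. No application thread
        // sits and waits; the handler fires when the response arrives.
        port.sayHelloAsync("world", new AsyncHandler<String>() {
            @Override
            public void handleResponse(Response<String> response) {
                // runs once the response is available
            }
        });

        // Style 2: roll-your-own, a synchronous call wrapped in an executor.
        // The call still blocks, it just blocks a pool thread instead of the caller.
        ExecutorService pool = Executors.newFixedThreadPool(25);
        Future<String> result = pool.submit(() -> port.sayHello("world"));
    }
}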
Whether the web service library under the hood actually uses NIO is another matter. It doesn't appear to be required by the JAX-WS specification. Using JDK 1.6 and setting a break point server side, I set 100 clients off to call the server. Using JVisualVM I attached to the client and could see that it had created one new thread per call to the server. Rubbish!
Looking around on the web I found that Apache CXF supports limiting the pool of threads used in async calls. Sure enough, using a client generated with CXF and putting the right libraries on the classpath as discussed here, a retest showed that only 25 threads were being used.
So why use the jax-ws API rather than build your own? Because building your own takes more work ;-)
I know this does not directly answer the question, but just to complement one piece of information included in the question:
"Instead you use build tools like Ant or Maven to include this binding when wsimport is executed on the WSDL."
It is possible to generate the asynchronous client by adding a custom XML binding file, using the -b option of wsimport.
Example:
wsimport -p helloAsyncClient -keep http://localhost:8080/helloservice?wsdl -b customAsync.xml
The customAsync.xml content:
<jaxws:bindings
wsdlLocation="http://localhost:8080/helloservice?wsdl"
xmlns:jaxws="http://java.sun.com/xml/ns/jaxws">
<jaxws:enableAsyncMapping>true</jaxws:enableAsyncMapping>
</jaxws:bindings>
It is just one more way to generate the asynchronous client, besides using Ant or Maven :)

Using LoadRunner to Test Server Processes

We currently use LoadRunner for performance testing our web apps, but we also have some server side processes we need to test.
Background:
We call these processes our "engines". One engine receives messages by polling an IBM WebSphere MQ queue for messages. It takes a message off the queue, processes it, and puts the result on an outbound queue. We currently test this engine via a TCL script that reads a file containing the messages, puts the messages on the inbound queue, then polls the outbound queue for the results.
The other engine receives messages via a web service. The web service writes the message to a table in our database. The engine polls the database table for new messages, takes a message and processes it, and puts the result back into the database. We currently test this engine via a VBScript script that reads a file containing the messages, sends the messages to the web service, then keeps querying the web service for the result until it's ready.
Question:
We'd like to do away with the TCL and VBScript scripts and standardize on LoadRunner so that we have one tool to manage all our performance tests.
I know LoadRunner supports a Web Services protocol "out of the box", but I'm not sure how to use it. Does anyone know of any examples of how to use LoadRunner to test a web service?
Does LoadRunner have a protocol for MQ? Is it possible to use a LoadRunner Vuser to drive load (put messages) into an MQ queue? Would we need to purchase something from HP or some other vendor to do this?
Thanks :)
There is an add-in for LoadRunner, in the included software, to interface with MQ Series and put the messages directly on the queue. Web services are fully supported as well, and VBScript is supported too, perhaps using QTPro for the script and a GUI user in LoadRunner?
Colin.
For #1, as an alternative to a Web Services script, you could try recording a Windows Sockets script. I've used LoadRunner to record winsock scripts to test some (Java) APIs. What I did was write a really simple Java API client and then execute that from a Windows batch file. The batch file would then be referenced as the executable when recording a LR script in VUGen.
I'm not sure if VUGen can load a VBScript file for recording, but you might try. Otherwise, you might try wrapping your VBScript in a batch file that can be run by VUGen.
When VUGen records a winsock script, it's basically monitoring the network communication for the process you're recording with. After you're done recording, it'll generate a dump of the network data in a "data.ws" worksheet that you can look at and edit with VUGen. You can parameterize this data worksheet for your load tests.
One can code SOA requests and parse responses within LoadRunner.
See wilsonmar.com/1lrscript.htm.
But bear in mind that TCL and VBScript developed for functional testing have a different architecture and scope than LoadRunner scripts. QTP and WinRunner take over the application.
LoadRunner scripts focus on the exchange of data across the wire. In the case of headless SOA XML, this architectural distinction doesn't matter.
However, it may be easier for you to maintain the VBScript from the GUI, because creating SOA scripts in LoadRunner requires a deeper understanding of message formats than most MQ developers have.
You really have three paths for pushing and popping messages off of an MQ queue using LoadRunner:
(1) MQTester. This is a native MQ protocol add-in for use with LoadRunner.
(2) Winsock. Winsock development is best described as tediously similar to picking fly scat out of ground pepper: tedious, but in the end very rewarding. Out of the box, no additional add-ins are required, except (possibly) license updates.
(3) JMS using a Java virtual user, see http://en.wikipedia.org/wiki/Java_Message_Service. You wind up with a small Java program in the Java template virtual user for LoadRunner. You will have to deal with all of the Java black-magic aspects associated with LoadRunner, but once you nail down the combination of release and installation details you can use virtually the same code to post to just about any JMS provider (not just MQ) with some connection factory settings changed (a rough sketch follows below).
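For illustration, a provider-neutral JMS put-and-get might look like the sketch below; the connection factory setup is the provider-specific piece you would swap in for WebSphere MQ, and the queue names are placeholders:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class EngineJmsVuser {

    // In a LoadRunner Java Vuser this would typically be split across
    // init()/action()/end(); shown as one method here for brevity.
    public void sendAndReceive(ConnectionFactory factory, String payload) throws Exception {
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // The inbound queue the engine polls, and the outbound queue it replies on.
        Queue inbound = session.createQueue("ENGINE.IN");
        Queue outbound = session.createQueue("ENGINE.OUT");

        // Put the test message on the inbound queue...
        MessageProducer producer = session.createProducer(inbound);
        producer.send(session.createTextMessage(payload));

        // ...then wait (up to 30 seconds) for the engine's result on the outbound queue.
        MessageConsumer consumer = session.createConsumer(outbound);
        TextMessage result = (TextMessage) consumer.receive(30000);

        session.close();
        connection.close();
    }
}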
You should be able to do JMS with the web services virtual user as well, but I have not tested that configuration. Look at the JMS section of the run time settings.