I downloaded cxf-client-1.5.4 from the Maven repository; it does not contain any configuration for fail-over features such as a retry strategy. However, during heavy load and slowness in production, the logs led me to suspect that retries were happening.
Is there any retry strategy configured in the CXF plug-in for Grails? If yes, how do I stop the retries?
Stress tests and various measures applied to the application confirmed that there is no retry strategy in Grails cxf-client-1.5.4.
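For anyone who wants to rule out retries at the CXF transport level as well, the HTTP conduit can be configured explicitly. A hedged sketch of a Spring/CXF XML fragment follows; the wildcard conduit name and the timeout values are assumptions, so check them against your service's actual port name and the CXF version you ship:

```xml
<!-- Hypothetical CXF Spring configuration: caps HTTP-level retransmits.
     The conduit name pattern must match your service/port; "*" matches all. -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:http-conf="http://cxf.apache.org/transports/http/configuration">
  <http-conf:conduit name="*.http-conduit">
    <!-- MaxRetransmits="0" prevents CXF from retransmitting the request -->
    <http-conf:client ConnectionTimeout="30000"
                      ReceiveTimeout="60000"
                      MaxRetransmits="0"/>
  </http-conf:conduit>
</beans>
```

With retransmits capped at the conduit, any retries still observed in the logs would have to originate elsewhere (a load balancer, the caller, or application code).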
I'm using kube-aws to set up a production Kubernetes cluster on AWS. Now I'm running into an issue I am unable to recreate in my dev environment.
When a pod is running heavy computation, which in my case happens in bursts, the liveness and readiness probes stop responding. In my local environment, where I run Docker Compose, they work as expected.
The probes return a simple HTTP 204 No Content response using native Go functionality.
Has anyone seen this before?
EDIT:
I'd be happy to provide more information, but I am uncertain what to provide, as there is a tremendous amount of configuration and code involved. Primarily, I'm looking for help with troubleshooting and for pointers on where to look to locate the actual issue.
This web page suggests it's possible to deploy webapps in parallel, using this method:
HandlerList handlerList = createHandlerList(...);
handlerList.setParallelStart(true);
I can't find anything similar in Jetty 9.4.x, and Google turns up little either.
How can I do this with the latest Jetty?
Parallel startup of Handler Collections was removed in...
Jetty 7.6.13.v20130916
Jetty 8.1.14.v20131031
Jetty 9.0.0.v20130308
.. as it was known to cause problems with LifeCycle startup.
The original reason for the parallel start was to improve startup performance.
Jetty 9.2.2.v20140723 introduced the jetty-quickstart concept, which allows a compile-time scan of the webapp to produce a WEB-INF/quickstart-web.xml. That file enables a static load of the webapp (no bytecode scanning, no annotation scanning, no discovery of Servlet spec components occurs).
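To experiment with quickstart on a recent Jetty 9.4.x distribution, the module can be enabled in a Jetty base directory. This is a sketch from my reading of the Jetty docs; verify the module name against your exact version:

```
# From your $JETTY_BASE directory, enable the quickstart module
java -jar $JETTY_HOME/start.jar --add-to-start=quickstart
```

On the next normal startup the quickstart-enabled deployer can generate and then reuse WEB-INF/quickstart-web.xml, which is where the startup-time savings come from.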
I wrote a couple of Visual Studio unit tests to test business logic included in a worker role.
My worker role publishes messages to Azure topics. For this I have specified connection strings in CloudConfig.cfg, and I pick up the settings using RoleEnvironment.GetConfigurationSettingValue(). Since the tests run in their own app domain and not inside the Azure emulator, calls to those functions obviously fail.
What are the best practices out there to handle this scenario?
Instead of using RoleEnvironment.GetConfigurationSettingValue, use CloudConfigurationManager.GetSetting. This will pick up configuration settings from the appropriate config file: the service configuration file if your code is running in the context of a cloud service, or app.config/web.config otherwise.
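In practice that means mirroring the setting into the test project's app.config so CloudConfigurationManager.GetSetting falls back to it when the tests run outside the emulator. A sketch, where the key name is a hypothetical example rather than one from the question:

```xml
<!-- Test project's app.config; the key name here is hypothetical and
     must match the setting name used in the service configuration. -->
<configuration>
  <appSettings>
    <add key="TopicConnectionString"
         value="Endpoint=sb://yournamespace.servicebus.windows.net/;..." />
  </appSettings>
</configuration>
```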
I am currently using Jetty 9.1.4 on Windows.
When I deploy the war file without hot deployment configured and then restart the Jetty service, all client connections during the 5-10 second startup simply wait for the server to finish loading, after which clients are able to view the contents.
Now with hot deployment configured, the default Jetty 404 error page is shown during that 5-10 second loading interval.
Is there any way I can make hot deployment behave like a complete restart, so that client connections wait instead of seeing the 404 error page?
Unfortunately this does not seem to be possible currently after talking with the Jetty developers on IRC #jetty.
One solution I will try is two Jetty instances behind a load-balancing reverse proxy (e.g. nginx), taking one instance down at a time for deployment.
Of course this immediately leads to new requirements (session persistence/sharing) which need to be handled. So in conclusion: there is much work to do in the Java world for zero-downtime deployments.
Edit: I will try this; it seems like a simple enough solution: http://rafaelsteil.com/zero-downtime-deploy-script-for-jetty/ GitHub: https://github.com/rafaelsteil/jetty-zero-downtime-deploy
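A minimal sketch of the nginx side of that two-instance setup; the ports and upstream name are assumptions. The idea is that while one Jetty instance restarts for deployment, nginx sends traffic to the other:

```nginx
upstream jetty_backends {
    server 127.0.0.1:8081;           # Jetty instance A
    server 127.0.0.1:8082 backup;    # Jetty instance B, used while A redeploys
}

server {
    listen 80;
    location / {
        proxy_pass http://jetty_backends;
        # If the active backend is down mid-deployment, retry the other one
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

Session persistence/sharing still has to be solved separately (e.g. sticky sessions or an external session store), as noted above.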
I've started using SOAP UI recently to test web services and it's pretty cool, but it's a huge resource hog.
Is there any way to reduce the amount of resources it uses?
It shouldn't be a resource hog, although I've seen it do this before. I leave it running on my PC all week, while a co-worker with a similar machine (dual-core running XP) has to kill it every few hours because it otherwise keeps consuming CPU. I'd try uninstalling/re-installing. Currently, my instance has been up for 10 days, running a mockservice that I've been hitting very hard (I've sent it thousands of requests). Total CPU time over those 10 days is about an hour and a half, but the "right now" number is about 1%.
There are no popular alternatives, aside from writing your own client in the language of your choice.
If you're testing WCF services, you can run wcftestclient from the Visual Studio command line. It works for locally or remotely hosted services. It's no good for ASMX-style .NET 2.0 SOAP services though.
If you want to test using only JSON, you could use one of the lightweight REST clients, e.g. the Mozilla REST plugin.
We test our SOAP APIs manually with SOAP UI and otherwise use jMeter for automated SOAP API testing. While having a GUI seems attractive at first, I find both applications quite user-unfriendly and time-consuming to work with.
As already suggested, you could do it in code using Java or maybe use a dynamic language like Ruby:
Testing SOAP Webservices with RSpec
SOAP web Services testing in RUBY
As user mitchnull mentions in his comment:
Disabling the browser component (-Dsoapui.jxbrowser.disable=true)
solved the 100% CPU usage issues for me. (when it was enabled, it
periodically went to 100% CPU even when not running any
tests/requests).
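To make that flag permanent rather than passing it on each launch, it can be added to the SoapUI startup script. A sketch against the stock Linux soapui.sh; the variable name and path may differ by version and platform, so verify against your install:

```sh
# In $SOAPUI_HOME/bin/soapui.sh, extend the JVM options before launch:
JAVA_OPTS="$JAVA_OPTS -Dsoapui.jxbrowser.disable=true"
```

On Windows the equivalent edit goes into soapui.bat, appending the same -D flag to the JVM options line.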