How to set Spring Boot RabbitMQ Heartbeat on Cloud Foundry?

I have an application running on Cloud Foundry with Spring Boot (1.5.12) and spring-boot-starter-amqp.
Based on a previous SO answer about setting the heartbeat property on the auto-reconfigured RabbitMQ ConnectionFactory bean, I tried setting the heartbeat property as follows:
cf set-env app spring.rabbitmq.requested-heartbeat 30
cf restage app
However, when viewed through the RabbitMQ Management Console, the connection still indicates the heartbeat is at the default of 60s.
I took a heap dump using the actuator endpoints and looked at the connectionFactory that appears to have been auto-reconfigured by spring-cloud-spring-service-connector. It has the default 60 seconds and ignores the 30 seconds requested.
Is there another environment property that should be used to configure the heartbeat value? If not, I suspect we will wire the CachingConnectionFactory ourselves and set it there.

If the connection is created by Spring Cloud Connectors (i.e. spring-cloud-spring-service-connector), then you will need to customize the connection with Java configuration.
@Configuration
class CloudConfig extends AbstractCloudConfig {

    @Bean
    public RabbitConnectionFactory rabbitFactory() {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put("requestedHeartbeat", 30);
        RabbitConnectionFactoryConfig rabbitConfig = new RabbitConnectionFactoryConfig(properties);
        return connectionFactory().rabbitConnectionFactory(rabbitConfig);
    }
}
More detail is available in the Connectors docs.
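If you instead opt out of auto-reconfiguration and wire the CachingConnectionFactory yourself (as the question suggests), a minimal sketch might look like the following; the host value is a placeholder that would normally come from the bound service in VCAP_SERVICES:
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitHeartbeatConfig {

    // Sketch only: host/credentials are placeholders; on Cloud Foundry they
    // would come from the bound RabbitMQ service (VCAP_SERVICES).
    @Bean
    public CachingConnectionFactory rabbitConnectionFactory() {
        com.rabbitmq.client.ConnectionFactory rabbit = new com.rabbitmq.client.ConnectionFactory();
        rabbit.setHost("rabbit-host-placeholder");
        rabbit.setRequestedHeartbeat(30); // heartbeat interval in seconds
        return new CachingConnectionFactory(rabbit);
    }
}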

Related

Example of embedded Jetty and using Micrometer for stats (without Spring)

I am new to using Micrometer as a metrics/stats producer and I am having a hard time getting it configured correctly with my Jersey/embedded Jetty server. I would like to get Jetty statistics added.
I already have the servlet producing stats for the JVM in a Prometheus format.
Does anyone know of a good working example on how to configure it?
I am not using SpringBoot.
The best way is to look at the Spring Boot code. For example, it binds the Jetty connection metrics:
JettyConnectionMetrics.addToAllConnectors(server, this.meterRegistry, this.tags);
And it uses an ApplicationStartedEvent to find the server reference:
private Server findServer(ApplicationContext applicationContext) {
    if (applicationContext instanceof WebServerApplicationContext) {
        WebServer webServer = ((WebServerApplicationContext) applicationContext).getWebServer();
        if (webServer instanceof JettyWebServer) {
            return ((JettyWebServer) webServer).getServer();
        }
    }
    return null;
}
There are other classes that record the thread usage and SSL handshake metrics.
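Outside Spring, a minimal sketch of wiring those binders by hand could look like this (class names are from micrometer-core and micrometer-registry-prometheus; the port, tags, and the way you expose registry.scrape() are assumptions):
import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.binder.jetty.JettyConnectionMetrics;
import io.micrometer.core.instrument.binder.jetty.JettyServerThreadPoolMetrics;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import org.eclipse.jetty.server.Server;

public class JettyMetricsBootstrap {

    public static void main(String[] args) throws Exception {
        PrometheusMeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);
        Server server = new Server(8080);

        // Per-connector connection metrics, the same binder Spring Boot uses.
        JettyConnectionMetrics.addToAllConnectors(server, registry, Tags.of("app", "demo"));
        // Utilization of the server's thread pool.
        new JettyServerThreadPoolMetrics(server.getThreadPool(), Tags.of("app", "demo")).bindTo(registry);

        server.start();
        // Expose registry.scrape() from your own /metrics resource or servlet.
    }
}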

Pivotal cloud foundry RedisConnectionFactory

Currently I'm using Redis provided by PCF. I'm connecting to it using JedisConnectionFactory from spring-data-redis, providing the needed configuration like this:
@Configuration
public class RedisConfig {

    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        final JedisConnectionFactory jedisConFactory = new JedisConnectionFactory();
        jedisConFactory.setHostName("pivotal-redis-host");
        jedisConFactory.setPort(1234);
        jedisConFactory.setPassword("mySecretPassword");
        return jedisConFactory;
    }
}
Spring Cloud Connectors (spring-cloud-spring-service-connector) provides the AbstractCloudConfig class that can be used to configure various connections. Are there any noticeable benefits to using it instead of JedisConnectionFactory? It looks like less configuration needs to be provided, but is there any other reason?
public class RedisCloudConfig extends AbstractCloudConfig {

    @Bean
    public RedisConnectionFactory redisConnection() {
        return connectionFactory().redisConnectionFactory();
    }
}
Thanks in advance.
The main difference with Spring Cloud Connectors is that it's reading the service information from the Redis service that you bound to your application on Cloud Foundry. It then automatically configures the Redis connection based on that dynamically bound information.
Your example of using JedisConnectionFactory, as well as @avhi's solution, places the configuration information directly into either your source code or application configuration files. In this case, if your service changes, you'd need to reconfigure your app and run cf push again.
With Spring Cloud Connectors, you can change services by simply unbinding and binding a new Redis service through CF, and running cf restart.
In my opinion, you don't even need to define a @Bean configuration specifically.
You can simply use auto-configuration by providing the Redis server details in application.yml or application.properties:
spring:
  redis:
    host: pivotal-redis-host
    port: 1234
    password: mySecretPassword
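Whichever approach supplies the RedisConnectionFactory, the rest of the code can stay the same. A small usage sketch (the configuration class and template bean here are illustrative, not from the question):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;

@Configuration
public class RedisTemplateConfig {

    // The factory is injected whether it comes from the manual Jedis config,
    // Spring Cloud Connectors, or Spring Boot auto-configuration.
    @Bean
    public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory connectionFactory) {
        return new StringRedisTemplate(connectionFactory);
    }
}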

Health check in Cloud Foundry

Does anyone know how I can tell my Cloud Foundry instance to monitor my health endpoint, so that when the endpoint reports the app health is not status: UP, the app is restarted?
The cf CLI 6.24.0 (released Feb 2017) exposed this type of health checking.
In your app manifest, use:
applications:
- name: myapp
  health-check-type: http
  health-check-http-endpoint: /admin/health
Your app needs to return a 200 status code from that path, or an error code when it's not status UP.
You can also use the cf set-health-check command to configure it on existing apps.
Check out this documentation for more details on the different health check types.
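For illustration, here is a minimal custom Spring Boot Actuator HealthIndicator sketch; the downstream check is a placeholder, but when it reports DOWN the health endpoint returns a non-2xx status, which the http health check type treats as a failure:
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class DownstreamHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        boolean reachable = pingDownstream(); // hypothetical check
        return reachable
                ? Health.up().build()
                : Health.down().withDetail("downstream", "unreachable").build();
    }

    private boolean pingDownstream() {
        // Replace with a real connectivity/readiness check.
        return true;
    }
}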
If an app instance dies, Cloud Foundry will, by default, spin up a new instance and try to start it. That resiliency is built into Cloud Foundry.
Actuators are REST endpoints automatically added to your app that allow you to see the app's status and health at runtime.
https://spring.io/guides/gs/actuator-service/
Try Actuators out.
I don't believe that custom URL health checking is available today in CF. If your application instance is no longer healthy and you want to restart it, you can call System.exit(1) and CF will restart it for you.
I've heard rumors of custom health checks possibly coming in the future with the CC v3 API and Diego.
The way to do a health check in PCF:
cf set-health-check APP-NAME <HEALTH-CHECK-TYPE> --endpoint <CUSTOM-HTTP-ENDPOINT>
HEALTH-CHECK-TYPE = process | port | http (ideally http for web apps)
CUSTOM-HTTP-ENDPOINT = /health
Reference: https://docs.cloudfoundry.org/devguide/deploy-apps/healthchecks.html

Unable to launch task from a spring cloud data flow stream

I registered my task app in Spring Cloud Data Flow and created a definition for it, but the status shows 'unknown'. I created the stream and am trying to launch the task through task-sink, and I get an error:
java.lang.IllegalStateException: failed to resolve MavenResource:
How to launch a task from the task-sink? Am I missing something? Any help is appreciated. Another question I have is how do I access the payload sent via TaskLaunchRequest in my task?
S1 http | step1: transformer-rabbit | log
S2 :S1.step1 > filter --expression=payload.contains('CUSTADDRMODRQ_V15') | task-processor | task-sink
task-sink is launching the task provided by the uri in the TaskLaunchRequest. It is looking for the resource, as shown in the log:
OUT Using manager EnhancedLocalRepositoryManager with priority 10.0 for /home/vcap/.m2/repository
OUT Using transporter HttpTransporter with priority 5.0 for https://repo.spring.io/libs-snapshot
and finally failing.
The task is deployed in our repository and, as mentioned, I registered it and created the definition for it as well.
This is in a CF environment and I am using SCDF server 1.0.0.M4.
In the application.properties for the task-sink I am providing maven.remote.repositories.snapshots.url=**
task create fis-ifx-event-task --definition "fis-event-task"
My goal is launching the task from the stream.
Thanks for the information. I am in fact using the BUILD-SNAPSHOT, as I am unable to enable tasks in the 1.0.0.M4 version. Here is the one I am using: spring-cloud-dataflow-server-cloudfoundry-1.0.0.BUILD-20160808.144306-116. I am able to register and create task definitions. The status of the task definition is showing as 'unknown' even when I am using the sample task module provided by your team. But when I initiate the flow of the stream and task-sink tries to launch the task, it is unable to find the Maven resource. When I create the task definition, does the task module get deployed? I don't see any app in Pivotal Apps Manager. As mentioned earlier, I provided maven.remote.repositories.snapshot.url in the application.properties file for the task-sink application. Another thing I observed: when I launch the task manually from the dataflow shell it gives an error CF-UnprocessableEntity(10008): The request is semantically invalid: Unknown field(s): 'staging_disk_in_mb', 'staging_memory_in_mb', and also a message saying 'Source is empty'. Presently the task is supposed to print the timestamp and is not dependent on any input.
TaskProcessor code:
@EnableBinding(Processor.class)
@EnableConfigurationProperties(TaskProcessorProperties.class)
public class TaskProcessor {

    @Autowired
    private TaskProcessorProperties processorProperties;

    public TaskProcessor() {
    }

    @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
    @ELI(level = "info", eventType = ELIEventType.INBOUND)
    public Object setupRequest(String message) {
        Map<String, String> properties = new HashMap<String, String>();
        properties.put("payload", message);
        TaskLaunchRequest request = new TaskLaunchRequest(processorProperties.getUri(), null, properties, null);
        return new GenericMessage<>(request);
    }
}
TaskSink code:
@SpringBootApplication
@EnableTaskLauncher
@EnableBinding(Sink.class)
@EnableConfigurationProperties(TaskSinkProperties.class)
public class FisIfxEventTaskSinkApplication {

    public static void main(String[] args) {
        SpringApplication.run(FisIfxEventTaskSinkApplication.class, args);
    }
}
I provided the stream I am using earlier in the post. The sink is receiving the TaskLaunchRequest with the uri and payload, as you can see here, but is unable to launch the task:
OUT registering [40, java.io.File] with serializer org.springframework.integration.codec.kryo.FileSerializer
2016-08-10T16:08:55.02-0600 [APP/0]
OUT Launching Task for the following resource TaskLaunchRequest{uri='maven://com.xxx:fis.ifx.event-task:jar:1.0-SNAPSHOT', commandlineArguments=[], environmentProperties={payload={"statusCode":0,"fisTopic":"CustomerDataUpdated","payloadId":"CUSTADDRMODRQ_V15","customerIds":[1597304]}}, deploymentProperties={}}
Before I begin, you have a number of questions here. In the future, it's better to break them up into multiple questions so that they are easier to find by other users and easier to answer. That being said:
A little context on the current state of things
In order to understand how things will work, it's important to understand the current state of things. The current releases of the software involved are:
Pivotal Cloud Foundry (PCF) - 1.7.12. This version is required for any task support.
Spring Cloud Task (SCT) - 1.0.2.RELEASE
Spring Cloud Data Flow CF (SCDF) - 1.0.0.BUILD-SNAPSHOT (current as of the date of this post).
Currently PCF 1.7.12+ has all the capabilities to run tasks. You can create v3 applications (the type of application used to launch a task), run it as a task, etc. However, the tooling around that functionality is not currently complete. There is no support for v3 applications in Apps Manager or the CLI. There is a plugin for the CLI that is more of a dev tool that can be used to help with some functions (it will show you logs, etc), but it is not fully functional and requires a specific version of the CLI to work [1]. This is one of the reasons that the task functionality within PCF is still considered experimental.
Spring Cloud Task is currently GA and supports all the functionality needed to effectively run tasks on CF. However, it's important to note that SCT doesn't handle orchestration so the actual launching of tasks on CF is the responsibility of either the user, or Spring Cloud Data Flow (the easier route).
Spring Cloud Data Flow's Cloud Foundry server implementation currently has functionality to launch tasks on PCF in the latest snapshots. We have validated this against 1.7.12 as well as the development branch of 1.8.
The task workflow within SCDF
Tasks are fundamentally different from stream applications within the context of SCDF. When you create a stream definition, you are given the option to deploy it. What this actually does is download the Spring Boot über jars and deploy them to PCF as long-running processes. If they go down, PCF will relaunch them as expected, etc.
Tasks, on the other hand, are not deployed. They are launched. The difference is that while you create a task definition, there is nothing deployed until you click launch. And when the task completes, the software is shut down and cleaned up. So while a stream definition may have states, it's really a one-to-one relationship between the definition and the deployed software. Whereas with a task, you can launch a task definition as many times as you want.
Your issues
Reading through your post, I see a few things that you are struggling with. Let me see if I can help:
Task Definitions within SCDF and launching them via a stream - When launching a task from a stream, the task registry within SCDF is not used. The sink expects the URL for the resource to be within the TaskLaunchRequest (see the sketch after this list).
Apps Manager and tasks - As mentioned above, there is no support for v3 applications in Apps Manager yet so you won't be able to see your tasks there.
Viewing the logs - In order to debug what's going wrong with launching your task on CF, you're going to want to view the logs. To do so, use the v3 CLI plugin mentioned above to view them. It's important to note that you can only tail live logs with the plugin, not view logs that have previously been rendered. Because of that, when testing, you'll want to tail the logs as soon as the app is created, before it's launched.
Error in SCDF Shell - The error you received from the SCDF shell (CF-UnprocessableEntity(10008):...) leads me to wonder if you have both the correct version of PCF (1.7.12+) and the correct version of the following other libraries:
spring-cloud-deployer-cloudfoundry - The latest snapshots
cf-java-client - 2.0.0.M10+
reactor-core - 3.0.0.RC1+
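To make the first point concrete, here is a sketch of building the request with the full artifact URI, mirroring the processor shown in the question; the Maven coordinates and class name below are placeholders, not from your project:
import java.util.Collections;
import org.springframework.cloud.task.launcher.TaskLaunchRequest;
import org.springframework.messaging.support.GenericMessage;

public class TaskLaunchRequestFactory {

    // Sketch only: the sink resolves whatever URI is carried in the request,
    // so these (placeholder) coordinates must be resolvable from the remote
    // repository configured for the task sink.
    public GenericMessage<TaskLaunchRequest> launchRequest(String payload) {
        TaskLaunchRequest request = new TaskLaunchRequest(
                "maven://com.example:sample-task:jar:1.0.0",   // resource URI
                null,                                          // command line arguments
                Collections.singletonMap("payload", payload),  // environment properties
                null);                                         // deployment properties
        return new GenericMessage<>(request);
    }
}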
I hope this helps!
[1] https://github.com/cloudfoundry/v3-cli-plugin
Task support is not available in the 1.0.0.M4 release of SCDF's CF-server. In this release, the task commands/REST APIs should be disabled - see here. For that reason, you won't see any docs related to tasks in the 1.0.0.M4 reference guide.
That said, task support is available/enabled in the BUILD-SNAPSHOT release. If you build the CF-server locally and push it to CF, you can take advantage of the task commands in the shell to create and launch task definitions.

How to define timeout for webservice client in websphere

Our application is hosted in WebSphere, and my web service client (JAX-WS) is making a web service call to a remote server. I need to define a timeout for this call. I tried different ways to set the timeout, with no luck. Here is what I tried:
Map<String, Object> requestContext = ((BindingProvider) binding).getRequestContext();
requestContext.put("com.ibm.websphere.webservices.jaxws.asynctimeout", 15000);
or
Map<String, Object> requestContext = ((BindingProvider) binding).getRequestContext();
requestContext.put(BindingProviderProperties.REQUEST_TIMEOUT, 15000);
requestContext.put(BindingProviderProperties.CONNECT_TIMEOUT, 15000);
None of them works.
Can anyone give a hint on how to set up a timeout for a web service client in WebSphere?
Thanks
Because JAX-WS in WAS relies on Axis2, I believe you could use the standard Axis2 approach. Try this (from the Axis2 docs):
Timeout Configuration
Two timeout instances exist in the transport level, Socket timeout and Connection timeout. These can be configured either at deployment or run time. If configuring at deployment time, the user has to add the following lines in axis2.xml.
For Socket timeout:
<parameter name="SO_TIMEOUT">some_integer_value</parameter>
For Connection timeout:
<parameter name="CONNECTION_TIMEOUT">some_integer_value</parameter>
For runtime configuration, it can be set as follows within the client stub:
...
Options options = new Options();
options.setProperty(HTTPConstants.SO_TIMEOUT, new Integer(timeOutInMilliSeconds));
options.setProperty(HTTPConstants.CONNECTION_TIMEOUT, new Integer(timeOutInMilliSeconds));
// or
options.setTimeOutInMilliSeconds(timeOutInMilliSeconds);
...
If you want more information check: http://axis.apache.org/axis2/java/core/docs/http-transport.html
Also:
http://wso2.org/library/209
http://singztechmusings.wordpress.com/2011/05/07/how-to-configure-timeout-duration-at-client-side-for-axis2-web-services/
If you're using ServiceClient check this thread please: Axis2 ServiceClient options ignore timeout
Please let me know if it worked ;)
You should set the JVM properties mentioned in the following link:
https://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/rwbs_jaxwstimeouts.html
bindingProvider.getRequestContext().put(com.ibm.wsspi.webservices.Constants.CONNECTION_TIMEOUT_PROPERTY, timeout);
bindingProvider.getRequestContext().put(com.ibm.wsspi.webservices.Constants.RESPONSE_TIMEOUT_PROPERTY, timeout);
Set the timeout in seconds, as a String.
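Putting that together, a minimal sketch, assuming the IBM JAX-WS runtime on WebSphere; the port object and the 15-second value are placeholders, and the values are passed as Strings as noted above:
import java.util.Map;
import javax.xml.ws.BindingProvider;

public class TimeoutConfigurer {

    // Applies the WebSphere JAX-WS timeout properties to a generated port proxy.
    public void applyTimeouts(Object port) {
        BindingProvider bindingProvider = (BindingProvider) port;
        Map<String, Object> ctx = bindingProvider.getRequestContext();
        ctx.put(com.ibm.wsspi.webservices.Constants.CONNECTION_TIMEOUT_PROPERTY, "15"); // seconds, as String
        ctx.put(com.ibm.wsspi.webservices.Constants.RESPONSE_TIMEOUT_PROPERTY, "15");   // seconds, as String
    }
}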