I have a Java EE server/client architecture in which the two sides communicate over an SSL connection. Once the connection is made, the client can call the server's web services. My question is: how can I access the client certificate information in the server web service? My server controller is below:
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
#Path("mycontroller")
#Consumes(MediaType.APPLICATION_XML)
#Produces(MediaType.APPLICATION_XML)
public class Controller {
#GET
#Path("dosomething")
public Response doSomething() {
// How can I have access to certificate information here ?
return Response.ok().build();
}
}
I found a way to do what I wanted.
First, the server has to be configured to require client certificate authentication. In my case I use a JBoss server and had to add this to the standalone.xml file:
...
<subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" native="false">
...
<connector name="https" protocol="HTTP/1.1" scheme="https" socket-binding="https" enable-lookups="false" secure="true">
<ssl name="localhost" key-alias="localhost" password="server" certificate-file="${jboss.server.config.dir}/server.jks" certificate-key-file="${jboss.server.config.dir}/server.jks" ca-certificate-file="${jboss.server.config.dir}/truststore.jks" protocol="TLSv1" verify-client="true" />
</connector>
...
</subsystem>
...
Then, in my controller, I had to inject HttpServletRequest, and I could finally obtain an X509Certificate instance containing the certificate information:
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.servlet.http.HttpServletRequest;
import java.security.cert.X509Certificate;

@Path("mycontroller")
@Consumes(MediaType.APPLICATION_XML)
@Produces(MediaType.APPLICATION_XML)
public class Controller {

    @Context
    private HttpServletRequest request;

    @GET
    @Path("dosomething")
    public Response doSomething() {
        // The servlet container exposes the client certificate chain as a request attribute.
        X509Certificate[] certChain = (X509Certificate[]) request.getAttribute("javax.servlet.request.X509Certificate");
        // The client's own certificate is the first entry of the chain.
        X509Certificate certificate = certChain[0];
        return Response.ok().build();
    }
}
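Once the chain is available, the standard java.security.cert.X509Certificate API exposes the usual fields. For example:

// Reading common fields from the client certificate (standard JDK API).
String subjectDn = certificate.getSubjectX500Principal().getName();
String issuerDn = certificate.getIssuerX500Principal().getName();
java.math.BigInteger serialNumber = certificate.getSerialNumber();
java.util.Date notAfter = certificate.getNotAfter();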
If you are looking for the standard certificate information found in the HTTP headers and the HTTP servlet request object (such as client certificate information forwarded by an Apache HTTP reverse proxy), you can inject these.
For example:
@Context private HttpServletRequest servletRequest;
@Context private ServletContext servletContext;
(see Get HttpServletRequest in Jax Rs / Appfuse application? or the Java EE tutorial)
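If the certificate arrives via a reverse proxy rather than a direct TLS connection, a hedged sketch of reading it from a forwarded header could look like this. The header name SSL_CLIENT_CERT is an assumption that depends entirely on your proxy configuration, and some proxies mangle line breaks in the PEM value, so you may need to restore them before parsing:

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

// Assumes the reverse proxy forwards the client certificate as a PEM string
// in a header such as "SSL_CLIENT_CERT" (proxy-specific, adjust to your setup).
String pem = servletRequest.getHeader("SSL_CLIENT_CERT");
if (pem != null && !pem.isEmpty()) {
    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    X509Certificate clientCert = (X509Certificate) cf.generateCertificate(
            new ByteArrayInputStream(pem.getBytes(StandardCharsets.US_ASCII)));
    // e.g. clientCert.getSubjectX500Principal().getName()
}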
If you wish to access the keystore file and load the private key of the certificate, then file access should be done through a JNDI file resource or a JCA adaptor.
But I would advise caution: the application server should handle all of the SSL/TLS connection security; your WAR component just declares that it wants the connection to be "confidential" in the web.xml file. Mixing message-level security and authentication with the application- or transport-level security can break separation of concerns (e.g. keeping authentication attached to the message in a bus or hub scenario).
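For reference, declaring the connection as confidential in web.xml is a single security constraint; a minimal sketch (the URL pattern is a placeholder):

<security-constraint>
    <web-resource-collection>
        <web-resource-name>secured</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>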
I have the following need: I would like to restart my instance every day at 5:00 am, so I read that the best way to automate this would be to use Cloud Scheduler and a Cloud Function, but I am not familiar with these two GCP features.
I created two jobs in Cloud Scheduler, one that STOPs my VM instance at 5:00 am and another that STARTs it at 5:10 am, but I don't know how to proceed in the Cloud Function to complete the process.
Could someone help me with this? Hugs to everyone!
Here is my project's error log from my attempt to implement it:
ERROR
{ "jobName": "projects/my-project/locations/us-central1/jobs/Stop", "#type": "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished", "status": "INTERNAL", "url": "https://us-central1-my-project.cloudfunctions.net/power/stop?zone=us-central1-a&instance=my-instance", "targetType": "HTTP" }
###############
{
insertId: "1klx7n3g18eq5zs"
jsonPayload: {
jobName: "projects/my-project/locations/us-central1/jobs/Stop"
#type: "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished"
status: "INTERNAL"
url: "https://us-central1-my-project.cloudfunctions.net/power/stop?zone=us-central1-a&instance=my-instance"
targetType: "HTTP"
}
httpRequest: {
status: 500
}
resource: {
type: "cloud_scheduler_job"
labels: {
location: "us-central1"
project_id: "my-project"
job_id: "Stop"
}
}
timestamp: "2020-08-07T08:00:06.896367090Z"
severity: "ERROR"
logName: "projects/my-project/logs/cloudscheduler.googleapis.com%2Fexecutions"
receiveTimestamp: "2020-08-07T08:00:06.896367090Z"
}
I have the same use case in my project, and I use this Python function to start and stop a VM; the function handles the paths /start and /stop.
from flask import Flask, request, abort
import os
import logging

app = Flask(__name__)

@app.route('/')
@app.route('/<path:path>')
def power(path=None):
    # these imports are required to reach the Compute Engine API
    from googleapiclient import discovery
    from oauth2client.client import GoogleCredentials
    # the function will use the service account of your Cloud Function
    credentials = GoogleCredentials.get_application_default()
    # this line specifies the API we are going to use, in this case Compute Engine
    service = discovery.build('compute', 'v1', credentials=credentials, cache_discovery=False)
    logging.getLogger('googleapiclient.discovery_cache').setLevel(logging.ERROR)
    # Project ID for this request.
    project = "yourprojectID"  # TODO: update placeholder value.
    zone = request.args.get('zone', None)
    instance = request.args.get('instance', None)
    # call the method to start or stop the instance
    if request.path == "/stop":
        req = service.instances().stop(project=project, zone=zone, instance=instance)
    elif request.path == "/start":
        req = service.instances().start(project=project, zone=zone, instance=instance)
    else:
        abort(418)
    # execute the command
    response = req.execute()
    print(response)
    return "OK"

if __name__ == '__main__':
    app.run(port=3000, debug=True)
requirements.txt file
google-api-python-client
oauth2client
flask
Scheduler config
Create a service account with the functions.invoker permission (Cloud Functions Invoker role) on your function.
Create a new Cloud Scheduler job.
Specify the frequency in cron format.
Specify HTTP as the target type.
Add the URL of your Cloud Function and the HTTP method as usual.
Select the OIDC token option from the Auth header dropdown.
Add the service account email in the Service account text box.
In the Audience field you only need the URL of the function, without any additional parameters.
On Cloud Scheduler, I hit my function using these URLs:
https://us-central1-yourprojectID.cloudfunctions.net/power/stop?zone=us-central1-a&instance=instance-1
https://us-central1-yourprojectID.cloudfunctions.net/power/start?zone=us-central1-a&instance=instance-1
and I used this audience:
https://us-central1-yourprojectID.cloudfunctions.net/power
Please replace yourprojectID in the code and in the URLs.
us-central1 is the region where my function is located, power is the name of my function, us-central1-a is the zone where my instance is located, and instance-1 is the name of my instance.
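For reference, the same Stop job could also be created from the command line with gcloud; this is only a sketch, and the job name, schedule, and service account email below are placeholders:

gcloud scheduler jobs create http stop-instance \
  --schedule="0 5 * * *" \
  --uri="https://us-central1-yourprojectID.cloudfunctions.net/power/stop?zone=us-central1-a&instance=instance-1" \
  --http-method=GET \
  --oidc-service-account-email=scheduler-invoker@yourprojectID.iam.gserviceaccount.com \
  --oidc-token-audience="https://us-central1-yourprojectID.cloudfunctions.net/power"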
Update for Java
Cloud Functions supports Java 11 and Maven, but this is in beta.
First you need these dependencies; please add the following lines to your pom.xml file:
<dependency>
<groupId>com.google.apis</groupId>
<artifactId>google-api-services-compute</artifactId>
<version>beta-rev20200629-1.30.10</version>
</dependency>
<dependency>
<groupId>com.google.api-client</groupId>
<artifactId>google-api-client</artifactId>
<version>1.30.10</version>
</dependency>
Cloud function to START a VM
package com.example;
import com.google.cloud.functions.HttpFunction;
import com.google.cloud.functions.HttpRequest;
import com.google.cloud.functions.HttpResponse;
import java.io.BufferedWriter;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.compute.Compute;
import com.google.api.services.compute.model.Operation;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Arrays;
public class Example implements HttpFunction {

    @Override
    public void service(HttpRequest request, HttpResponse response) throws Exception {
        // Project ID for this request.
        String project = "my-project"; // TODO: Update placeholder value.
        // The name of the zone for this request.
        String zone = "my-zone"; // TODO: Update placeholder value.
        // Name of the instance resource to start.
        String instance = "my-instance"; // TODO: Update placeholder value.

        Compute computeService = createComputeService();
        // you can change the method start to stop
        Compute.Instances.Start xrequest = computeService.instances().start(project, zone, instance);
        Operation xresponse = xrequest.execute();

        // TODO: Change code below to process the `xresponse` object:
        System.out.println(xresponse);

        BufferedWriter writer = response.getWriter();
        writer.write("Done");
    }

    public static Compute createComputeService() throws IOException, GeneralSecurityException {
        HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
        JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();

        GoogleCredential credential = GoogleCredential.getApplicationDefault();
        if (credential.createScopedRequired()) {
            credential = credential.createScoped(Arrays.asList("https://www.googleapis.com/auth/cloud-platform"));
        }

        return new Compute.Builder(httpTransport, jsonFactory, credential)
                .setApplicationName("Google-ComputeSample/0.1")
                .build();
    }
}
For more information you can check this document; it contains examples in different programming languages.
I need to install the RediSearch module on top of a GCP Memorystore Redis instance.
I followed these steps:
docker run -p 6379:6379 redislabs/redisearch:latest
I pushed this Docker image to a Kubernetes cluster and exposed the external IP. I used that external IP and port 6379 as the configuration for my application, but I'm not able to connect to RediSearch.
code:
import java.io.IOException;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import io.redisearch.client.Client;
import io.redisearch.*;
public class RediSearch {

    static Client client = new Client("testdoc1", "clusteripaddress", 8097);
    private static final Logger LOG = LoggerFactory.getLogger(RediSearch.class);

    public interface Options extends PipelineOptions {
        @Description("gcp project id.")
        @Default.String("XXXX")
        String getProjectId();
        void setProjectId(String projectId);
    }

    public static PipelineResult run(Options options) throws IOException {
        Pipeline pipeline = Pipeline.create(options);
        pipeline.apply(Create.of("test"))
            .apply(ParDo.of(new DoFn<String, String>() {
                private static final long serialVersionUID = 1L;

                @ProcessElement
                public void processElement(ProcessContext c) throws Exception {
                    String pubsubmsg = c.element();
                    Schema sc = new Schema()
                        .addTextField("title", 5.0)
                        .addTextField("body", 1.0)
                        .addNumericField("price");
                    client.createIndex(sc, Client.IndexOptions.Default());
                    Map<String, Object> fields = new HashMap<String, Object>();
                    fields.put("title", "hello world");
                    fields.put("body", "lorem ipsum");
                    fields.put("price", 800);
                    fields.put("price", 1337);
                    fields.put("price", 2000);
                    client.addDocument("searchdoc3", fields);
                    SearchResult[] res = client.searchBatch(new Query("hello world").limit(0, 5).setWithScores());
                    for (Document d : res[0].docs) {
                        LOG.info("redisearchlog{}", d.getId().startsWith("search"));
                        LOG.info("redisearchlog1{}", d.getProperties());
                        LOG.info("redisearchlog2{}", d.toString());
                    }
                }
            }));
        return pipeline.run();
    }

    public static void main(String[] args) throws IOException {
        Options options = PipelineOptionsFactory.fromArgs(args).as(Options.class);
        run(options);
    }
}
Error :
redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at redis.clients.jedis.util.Pool.getResource(Pool.java:59)
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:234)
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:15)
at io.redisearch.client.Client._conn(Client.java:137)
at io.redisearch.client.Client.getAllConfig(Client.java:275)
at com.testing.redisearch.RediSearch$1.processElement(RediSearch.java:59)
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Failed connecting to host xxxxxxxxxxx:6379
at redis.clients.jedis.Connection.connect(Connection.java:204)
at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:100)
at redis.clients.jedis.BinaryJedis.connect(BinaryJedis.java:1894)
at redis.clients.jedis.JedisFactory.makeObject(JedisFactory.java:117)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:889)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:424)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:349)
at redis.clients.jedis.util.Pool.getResource(Pool.java:50)
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:234)
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:15)
at io.redisearch.client.Client._conn(Client.java:137)
at io.redisearch.client.Client.getAllConfig(Client.java:275)
at com.testing.redisearch.RediSearch$1.processElement(RediSearch.java:59)
at com.testing.redisearch.RediSearch$1$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:218)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:183)
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:335)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:44)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:49)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:201)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:411)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:380)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:305)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at redis.clients.jedis.Connection.connect(Connection.java:181)
... 31 more
Any solution is appreciated.
There are multiple possible causes for the error JedisConnectionException: Could not get a resource from the pool. According to the answers to this question, the problem is that the connection to RediSearch couldn't be established, be it because Redis is not running, the connection times out, or it cannot be allocated.
Regardless, I have noticed that even though you deploy Redis on port 6379, in your code you are trying to access it on port 8097. Please change your Client declaration to the following and retry the connection:
static Client client = new Client("testdoc1", "<cluster_ip_address>", 6379);
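As a quick way to rule out basic network reachability, independent of RediSearch itself, you could run a minimal check with plain Jedis (which the RediSearch client uses underneath; the host below is a placeholder):

import redis.clients.jedis.Jedis;

// Minimal connectivity check against the exposed Redis endpoint.
try (Jedis jedis = new Jedis("<cluster_ip_address>", 6379)) {
    System.out.println(jedis.ping()); // expect "PONG" if the port is reachable
}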
If you are looking to have the RediSearch module in your Memorystore instance, it appears that it may not be supported yet. You can see in the Google Cloud docs here that, at the time of writing, Redis modules are not supported even for version 5.0.
I have problems with the python-openstackclient library. When I run this code to authenticate with Keystone:
from keystoneclient import session
from keystoneclient.v3 import client
from keystoneclient.auth.identity import v3
password = v3.PasswordMethod(username='idm',password='idm',user_domain_name='idm')
auth = v3.Auth(auth_url='http://127.0.0.1:5000/v3',auth_methods=[password],project_id='idm')
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)
keystone.users.list()
I'm getting this error:
keystoneclient.openstack.common.apiclient.exceptions.Unauthorized: The request you have made requires authentication. (HTTP 401)
But when I try the openstack client program:
openstack user list
It gives me good output.
I have the following environment variables in my .bashrc:
export OS_SERVICE_ENDPOINT=http://127.0.0.1:35357/v3
export OS_AUTH_URL=http://127.0.0.1:5000/v3
export OS_TENANT_NAME=idm
export OS_USERNAME=idm
export OS_PASSWORD=idm
export OS_IDENTITY_API_VERSION=3
export OS_URL=http://127.0.0.1:35357/v3
What could be the problem with that python code?
Thanks!
I had the same problem, but after applying the proposed solution I was getting:
keystoneauth1.exceptions.connection.ConnectFailure: Unable to
establish connection to http://192.0.2.12:35357/v2.0/users:
HTTPConnectionPool(host='192.0.2.12', port=35357): Max retries
exceeded with url: /v2.0/users (Caused by
NewConnectionError(': Failed to establish a new connection:
[Errno 110] Connection timed out',))
Note that my auth_url='https://myopenstack.somewhere.org:13000/v3',
It turns out that the client was discovering and using services on an interface which by default is 'Admin', and that interface is unreachable for me. When forcing the interface to Public, it works:
keystone = client.Client(session=sess, interface='Public')
I managed to do it like this:
from keystoneclient import session
from keystoneclient.v3 import client
from keystoneclient.auth.identity import v3
auth = v3.Password(auth_url='http://127.0.0.1:5000/v3',user_id='idm',password='idm',project_id='2545070293684905b9623095768b019d')
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)
keystone.users.list()
I am trying to write a unit test for a Mule flow that uses the Quartz connector. However, I receive the following XML error stating that Mule doesn't know how to parse the "quartz:connector" element when running the unit test. However, quartz-2.0.2.jar and quartz-1.8.5.jar are both in my classpath, and as you can see below, I have added quartz to the XML namespace and the XSD to the root tag. I have searched many forums, including this one, but I can't find the solution to my error. Please tell me what I am doing incorrectly. I am using Mule Studio 3.5.0 and JDK 1.7 to run this unit test.
Error
org.mule.api.config.ConfigurationException: Line 9 in XML document from URL [file:/C:/Users/smith/Development/MuleStudio_Workspace/funnel-mule-app/funnel-mule-app-batch/funnel-mule-app-batch-int/src/main/app/log_cleanup.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 9; columnNumber: 89; cvc-complex-type.2.4.a: Invalid content was found starting with element 'quartz:connector'. One of '{"http://www.mulesoft.org/schema/mule/core":annotations, "http://www.mulesoft.org/schema/mule/core":description, "http://www.springframework.org/schema/beans":beans, "http://www.springframework.org/schema/beans":bean, "http://www.springframework.org/schema/context":property-placeholder, "http://www.springframework.org/schema/beans":ref, "http://www.mulesoft.org/schema/mule/core":global-property, "http://www.mulesoft.org/schema/mule/core":configuration, "http://www.mulesoft.org/schema/mule/core":notifications, "http://www.mulesoft.org/schema/mule/core":abstract-extension, "http://www.mulesoft.org/schema/mule/core":abstract-mixed-content-extension, "http://www.mulesoft.org/schema/mule/core":abstract-agent, "http://www.mulesoft.org/schema/mule/core":abstract-security-manager, "http://www.mulesoft.org/schema/mule/core":abstract-transaction-manager, "http://www.mulesoft.org/schema/mule/core":abstract-connector, "http://www.mulesoft.org/schema/mule/core":abstract-global-endpoint, "http://www.mulesoft.org/schema/mule/core":abstract-exception-strategy, "http://www.mulesoft.org/schema/mule/core":abstract-flow-construct, "http://www.mulesoft.org/schema/mule/core":flow, "http://www.mulesoft.org/schema/mule/core":sub-flow, "http://www.mulesoft.org/schema/mule/core":abstract-model, "http://www.mulesoft.org/schema/mule/core":abstract-interceptor-stack, "http://www.mulesoft.org/schema/mule/core":abstract-filter, "http://www.mulesoft.org/schema/mule/core":abstract-transformer, "http://www.mulesoft.org/schema/mule/core":processor-chain, "http://www.mulesoft.org/schema/mule/core":custom-processor, "http://www.mulesoft.org/schema/mule/core":invoke, "http://www.mulesoft.org/schema/mule/core":abstract-global-intercepting-message-processor, "http://www.mulesoft.org/schema/mule/core":custom-queue-store, "http://www.mulesoft.org/schema/mule/core":abstract-processing-strategy}' is expected. (org.mule.api.lifecycle.InitialisationException)
at org.mule.config.builders.AbstractConfigurationBuilder.configure(AbstractConfigurationBuilder.java:52)
at org.mule.config.builders.AbstractResourceConfigurationBuilder.configure(AbstractResourceConfigurationBuilder.java:78)
at org.mule.context.DefaultMuleContextFactory.createMuleContext(DefaultMuleContextFactory.java:84)
at org.mule.tck.junit4.AbstractMuleContextTestCase.createMuleContext(AbstractMuleContextTestCase.java:203)
at org.mule.tck.junit4.AbstractMuleContextTestCase.setUpMuleContext(AbstractMuleContextTestCase.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:46)
at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
Mule Flow
<mule xmlns:tracking="http://www.mulesoft.org/schema/mule/ee/tracking" xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:quartz="http://www.mulesoft.org/schema/mule/quartz" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:spring="http://www.springframework.org/schema/beans" xmlns:core="http://www.mulesoft.org/schema/mule/core" version="EE-3.4.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.mulesoft.org/schema/mule/quartz http://www.mulesoft.org/schema/mule/quartz/current/mule-quartz.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/ee/tracking http://www.mulesoft.org/schema/mule/ee/tracking/current/mule-tracking-ee.xsd">
<quartz:connector name="TimeToStart2" validateConnections="true" doc:name="Quartz"/>
<flow name="cleanup_flow" doc:name="cleanup_flow">
<quartz:inbound-endpoint name="LogCleanUpStart" jobName="LogCleanUp" cronExpression="${log.cleanup.cron.start}" repeatInterval="0" responseTimeout="10000" connector-ref="TimeToStart2" doc:name="Scheduler">
<quartz:event-generator-job/>
</quartz:inbound-endpoint>
<set-variable variableName="#['failCounter']" value="#[0]" doc:name="Init Fail Counter"/>
<logger message="Log Cleanup Started" level="INFO" doc:name="StartLogger"/>
<flow-ref name="cleanup_for_loop_body" doc:name="cleanup_for_loop_body_ref"/>
</flow>
</mule>
Mule Unit Test
import static com.jayway.restassured.RestAssured.expect;
import static com.xebialabs.restito.builder.stub.StubHttp.whenHttp;
import static com.xebialabs.restito.builder.verify.VerifyHttp.verifyHttp;
import static com.xebialabs.restito.semantics.Action.status;
import static com.xebialabs.restito.semantics.Action.stringContent;
import static com.xebialabs.restito.semantics.Condition.method;
import static com.xebialabs.restito.semantics.Condition.post;
import static com.xebialabs.restito.semantics.Condition.delete;
import static com.xebialabs.restito.semantics.Condition.uri;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
import org.glassfish.grizzly.http.Method;
import org.glassfish.grizzly.http.util.HttpStatus;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.Ignore;
import org.mule.api.MuleMessage;
import org.mule.api.client.MuleClient;
import org.mule.tck.junit4.FunctionalTestCase;
import com.xebialabs.restito.server.StubServer;
public class LogCleanupTest extends FunctionalTestCase
{
    private StubServer server;

    @Before
    public void start()
    {
        server = new StubServer().run();
    }

    @After
    public void stop()
    {
        server.stop();
    }

    @Override
    /**
     * Return the list of flow names that will be tested
     */
    protected String getConfigResources()
    {
        String flowNames = "src/main/app/log_cleanup.xml, src/test/resources/batch_global_test_config_internal.xml";
        return flowNames;
    }

    /**
     * Make sure that a successful cleanup response does not increment the retry counter.
     */
    @Test
    public void testLCSuccessResponse() throws Exception
    {
        MuleClient client = muleContext.getClient();
        String logURL = "/api/log/cleanup/XYZ/";

        //When a Delete request is made to this Log URL, return an OK response.
        whenHttp(server).match(delete(logURL)).then(stringContent("String response"), status(HttpStatus.OK_200));
    }
}
There are JAR dependencies missing.
Instead of adding the JARs by hand, you'd rather use Maven to bring the Mule Quartz transport JAR into your project, which will bring in all of its needed dependencies. Just make sure to scope the transport as provided.
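As a sketch, the dependency for a Mule 3.x project might look roughly like this; the exact groupId, artifactId, and version are assumptions and should be checked against your Mule version:

<dependency>
    <groupId>org.mule.transports</groupId>
    <artifactId>mule-transport-quartz</artifactId>
    <version>3.5.0</version>
    <scope>provided</scope>
</dependency>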
I tried to create an HTTPProvider in Eclipse, and when it runs in Wowza Media Server it does not load properly; it returns only the Wowza server version.
The Eclipse code is here:
package com.domain.appname;
import java.io.IOException;
import java.io.OutputStream;
import com.wowza.wms.vhost.IVHost;
import com.wowza.wms.http.HTTProvider2Base;
import com.wowza.wms.http.IHTTPRequest;
import com.wowza.wms.http.IHTTPResponse;
public class CreateApp extends HTTProvider2Base {

    public void onHTTPRequest(IVHost inVhost, IHTTPRequest req, IHTTPResponse resp) {
        String ret = req.getQueryString();

        resp.setHeader("Content-Type", "text/xml");
        OutputStream out = resp.getOutputStream();
        byte[] outBytes = ret.toString().getBytes();
        try {
            out.write(outBytes);
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
I also set the VHost file as:
<HTTPProvider>
<BaseClass>com.domain.appname.CreateApp</BaseClass>
<RequestFilters>CreateProducerApp*</RequestFilters>
<AuthenticationMethod>none</AuthenticationMethod>
</HTTPProvider>
Please help
Without seeing your complete VHost.xml, my guess would be that you put your new HTTPProvider last in the list of HTTPProviders.
When the server processes HTTP requests it starts at the first provider and tries each one in turn. The RequestFilter for the provider that returns the server info is "*", which means no providers after it will get called; it is usually the last one. Make sure yours is placed before it.
Please put your HTTPProvider entry in the VHost file before the following block:
<HTTPProvider>
<BaseClass>com.wowza.wms.http.HTTPServerVersion</BaseClass>
<RequestFilters>*</RequestFilters>
<AuthenticationMethod>none</AuthenticationMethod>
</HTTPProvider>
Otherwise all of your requests will return the Wowza version details.
So the end of your final VHost file should look like:
<HTTPProvider>
<BaseClass>com.domain.appname.CreateApp</BaseClass>
<RequestFilters>CreateProducerApp*</RequestFilters>
<AuthenticationMethod>none</AuthenticationMethod>
</HTTPProvider>
<HTTPProvider>
<BaseClass>com.wowza.wms.http.HTTPServerVersion</BaseClass>
<RequestFilters>*</RequestFilters>
<AuthenticationMethod>none</AuthenticationMethod>
</HTTPProvider>