I have an application which provides services using CXF's Servlet transport and Jetty 6.1. This application also needs to consume external services. All services support the WS-Addressing specification (and WS-RM on top of it). To consume an external service, I run a generated service client from the application.
The problem is that when I provide a decoupled endpoint for the client (WS-RM needs this endpoint to receive incoming messages over a separate HTTP connection), CXF starts another Jetty server instance, even though the Servlet transport (which provides the services) and the client (which consumes the external service) share the same bus. I don't need two instances of Jetty (to say nothing of the fact that they can't run on the same HTTP port).
Is there a way I can provide a decoupled endpoint using an existing Jetty server and Servlet transport?
So far, I enable a decoupled endpoint like this:
Client client = ClientProxy.getClient(port);
HTTPConduit httpConduit = (HTTPConduit) client.getConduit();
httpConduit.getClient().setDecoupledEndpoint(
"http://domain.com:port/services/dec_endpoints/TestDecEndpoint");
If I provide a relative path ("/dec_endpoints/TestDecEndpoint", the same way relative paths are used when publishing services via the Servlet transport), the HTTP conduit does not put an absolute address into the SOAP message headers, so this doesn't work either: the server simply cannot send a message to /dec_endpoints/TestDecEndpoint.
OK, I have found a solution myself. You need to specify a relative path for the decoupled endpoint and change the message's addressing properties manually (after the MAPAggregator interceptor, because it is the one that sets up the decoupled destination) so the server can send replies to your address.
So what we have:
decoupled destination using a relative path: /dec_endpoints/SomeDestination
<ReplyTo> header with an absolute path: http://addr.com:port/servlet_path/dec_endpoints/SomeDestination
Here's an example of how the ReplyTo address can be changed:
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;
import org.apache.cxf.ws.addressing.AddressingProperties;
import org.apache.cxf.ws.addressing.ContextUtils;
import org.apache.cxf.ws.addressing.EndpointReferenceType;
import org.apache.cxf.ws.addressing.MAPAggregator;

public class ReplyToInterceptor extends AbstractPhaseInterceptor<Message> {

    public ReplyToInterceptor() {
        // Run after MAPAggregator, which sets up the decoupled destination
        super(Phase.PRE_LOGICAL);
        addAfter(MAPAggregator.class.getName());
    }

    public void handleMessage(Message message) {
        // Rewrite the ReplyTo address to the absolute URL served by the Servlet transport
        AddressingProperties maps = ContextUtils.retrieveMAPs(message, false, true);
        EndpointReferenceType replyTo = maps.getReplyTo();
        replyTo.getAddress().setValue(
                "http://address.com:port/servlet_path/dec_endpoints/SomeDestination");
    }
}
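A minimal wiring sketch (reusing the port variable and placeholder paths from above; this is how I assume the pieces fit together, not an official recipe): register the interceptor on the client's outgoing chain and set the decoupled endpoint to the relative path.
Client client = ClientProxy.getClient(port);
client.getOutInterceptors().add(new ReplyToInterceptor());

HTTPConduit httpConduit = (HTTPConduit) client.getConduit();
// Relative path: the decoupled destination is registered on the existing
// Servlet transport instead of a standalone Jetty instance
httpConduit.getClient().setDecoupledEndpoint("/dec_endpoints/SomeDestination");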
I am using Spring Boot 2 microservices with Spring Cloud Sleuth, with Spring Cloud dependency management at version Greenwich.SR2.
My service is running in an Istio service mesh.
Istio's trace sampling policy is set to 100 (pilot.traceSampling: 100.0).
To use distributed tracing in the mesh, the applications need to forward HTTP headers such as X-B3-TraceId and X-B3-SpanId. This is achieved simply by adding Sleuth. All my HTTP requests are traced correctly, and the Istio sidecar proxies (Envoy) send the traces to the Jaeger backend.
Sleuth is also supposed to work with Spring WebSocket, but my incoming WebSocket requests do not get any trace or span ID from Sleuth; the logs look like [-,,,].
1. Question: Why is Sleuth not working for websocket?
My WS-Config:
@Configuration
@EnableWebSocket
public class WsConfig implements WebSocketConfigurer {

    @Autowired
    WebSocketHandler webSocketHandler;

    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        DefaultHandshakeHandler handshakeHandler = new DefaultHandshakeHandler();
        handshakeHandler.setSupportedProtocols(HANDSHAKE_PROTOCOL);
        registry.addHandler(webSocketHandler, WS_HANDLER_PATH + WILDCARD)
                .setAllowedOrigins("*")
                .setHandshakeHandler(handshakeHandler);
    }
}
My clients are able to connect to my service via WebSocket. I implement the WebSocketHandler interface to handle WS messages.
To get my WS connections picked up by Sleuth, I annotate the method that handles the messages with @NewSpan:
@Override
@NewSpan
public void handleMessage(WebSocketSession session, WebSocketMessage<?> message) {
    // doWork and call other services via HTTP
}
With this, Sleuth creates a trace and span ID and also propagates them to the other services, which are called via the RestTemplate in this method. But these HTTP calls are not sent to Jaeger: the X-B3-Sampled header is always set to 0 by the sidecar.
2. Question: Why are those traces not sent to the tracing backend?
Thank you in advance!
I'm using wso2esb-4.9.0, then wso2-5.0.0, and now working on wso2ei-6.0.0.
I would like to create a secured proxy service that could be used by different clients.
The required security is scenario 5 (sign and encrypt - X.509 authentication): messages are encrypted using the service's (server's) public certificate and signed using the client's private key. Since multiple clients will use the service, each client should sign the message using its own private key.
On the server side, the public certificate of each client should already be in the server's trust store.
On the server side, I can hardcode a Rampart configuration so that it responds correctly to incoming requests from client1 OR from client2. This means that, for now, the only solution I have found to support two clients for the same backend service is to use two proxy services, each configured to verify the signature of exactly one client.
I would like advice or pointers on configuring the server side in a dynamic way, where only one proxy service is used. This proxy service should be able to configure Rampart correctly at run time, so it can decrypt and verify the signature of the incoming message (one proxy for N clients).
Thanks,
So, in fact, nothing extra needs to be done at the Rampart configuration level: the hardcoded configuration is only needed on the server side when it acts as a client and wants to consume something from another party.
Since the incoming request contains information about the certificate, the server will dynamically check its keystore in order to verify the incoming signed message... so once again, just configure Rampart on the service side and on the client side, and let the magic happen.
Thanks to the WSO2 team for a great product suite!
I was wondering how the Dropwizard client module should be implemented.
Source of confusion:
Dropwizard recommends you to separate your project as such:
In general, we recommend you separate your projects into three Maven
modules: project-api, project-client, and project-service.
In the Client section, the documentation shows that you can instantiate the HTTP client provided by Dropwizard within the run method.
@Override
public void run(ExampleConfiguration config, Environment environment) {
    final Client client = new JerseyClientBuilder()
            .using(config.getJerseyClientConfiguration())
            .using(environment)
            .build();
    environment.addResource(new ExternalServiceResource(client));
}
I thought that the client module would wrap the HTTP client, so that any other service could use the client module without caring which HTTP client it uses.
So
What would a client module look like?
When would you instantiate an HTTP client directly within a service's run method (as done in the code snippet above)?
Thanks!
What would a client module look like?
This is heavily dependent on your project scope and structure. For example, in one of my projects, which is heavily database dependent, the client module (or Service class in Dropwizard's terminology) contains my DAO instantiations as well as Hibernate initialization and a bunch of other init stuff (SQS, etc.). I also use the HTTP client, and the Service class is where I initialize it. The reason is that the Service class is the entry point, and this is where you end up instantiating your Resource classes and so on. So having the dependencies instantiated here allows me to pass them into my resources as constructor parameters. If you were using something like Guice, the way to go would be different, since you would have access to injection, etc.
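As a rough illustration of the "client module wraps the HTTP client" idea from the question, here is a minimal sketch. The ExternalServiceClient name, the baseUrl parameter, and the fetchStatus() method are made up for the example, and it assumes the Jersey 1.x Client type that this version of Dropwizard builds:
// Hypothetical wrapper living in the project-client module: resources depend on
// this type and never touch the underlying Jersey client directly.
public class ExternalServiceClient {
    private final Client httpClient;
    private final String baseUrl;

    public ExternalServiceClient(Client httpClient, String baseUrl) {
        this.httpClient = httpClient;
        this.baseUrl = baseUrl;
    }

    public String fetchStatus() {
        // One typed method per remote call
        return httpClient.resource(baseUrl + "/status").get(String.class);
    }
}
In run() you would then construct it with the managed client and pass it into your resources as a constructor parameter, just like the other dependencies mentioned above.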
When would you instantiate an HTTP client directly within a service's run method (as done in the code snippet above)?
The HTTP client shown in the docs and in your question is used when your project itself needs to make HTTP calls. For example, let's say your DW project, or one of the resources you are writing, needs to call a Twitter API over HTTP. This is where the HTTP client comes into play. You can actually use any HTTP client library you want; however, using the ones provided by DW (Apache HttpClient, Jersey Client) allows you to create a 'managed' HTTP client, thereby letting DW start up, shut down, and clean up the HTTP client when the service is shut down. So things like thread pools, connection pools, etc. are all cleaned up by DW when you use its managed HTTP client. In addition, the reason you create this HTTP client inside the run method is that you then have a reference to the Configuration object's instance, which allows you to control the HTTP client's settings via DW's configuration system.
Hope this answers your questions
I have the same Mule web service application deployed in two different versions on the same Mule server. Let's call them MuleApp.1.0 and MuleApp.1.1. The flow is as simple as the web service flow example on the MuleSoft website. Their WSDL URLs are different:
http://www.myhost.com:25101/MuleApp.1.0/Service?wsdl
http://www.myhost.com:25101/MuleApp.1.1/Service?wsdl
Each of them works as expected when the other is not deployed on the Mule server. The issue happens when I have both of them deployed on the same Mule server, like I used to do in WebLogic. In that case I am able to access MuleApp.1.1, but when I try to access MuleApp.1.0, I get the error below:
07-Mar-2013:14:52:57.142 VWILVM3667 [MuleApp.1.1].connector.http.mule.default.receiver.03
WARN org.mule.transport.http.HttpMessageReceiver NA
No receiver found with secondary lookup on connector: connector.http.mule.default with URI key: http://www.myhost.com:25101/MuleApp.1.0/Service
This is supposed to be a very common versioning case. What did I miss in my config?
You can't have two different applications sharing the same HTTP port in the same Mule instance.
So what probably happens is that MuleApp.1.0 doesn't deploy properly (check the logs), which is why there is no endpoint listening on /MuleApp.1.0.
You have a few options:
Use a different port in the two apps.
Put both flows in a single app.
Create a frontal app that listens on port 25101 and on both the /MuleApp.1.0 and /MuleApp.1.1 paths, and dispatches requests to MuleApp.1.0 and MuleApp.1.1 on private ports (say 25102 and 25103).
I finally deployed my application on Tomcat and replaced the HTTP inbound endpoint with a servlet inbound endpoint. I configured web.xml with the servlet class org.mule.transport.servlet.MuleReceiverServlet. Now I am able to deploy multiple applications on the same port.
I have a standalone web service client. When invoking any of the web methods, an additional "cookie" string must be passed to the WS implicitly (not as a web method parameter). The WS on the other end must be able to obtain the string and use it. How can this be achieved?
I invoke the service in the following way:
Service srv = Service.create(new URL(WSDL), QNAME);
myClassPort = srv.getPort(MyClass.class);
What I need is to put some code before the first line which would make the client send this "cookie" string every time I invoke a remote method via myClassPort. Thanks.
By default, JAX-WS web services and clients are stateless. When a client makes a request, the server responds and sets a cookie on the connection if it participates in a session, but the JAX-WS client ignores that cookie, and the server treats each subsequent request as a new interaction. When session maintenance is enabled, the JAX-WS client sends the same cookie with each subsequent request so that the server can keep track of the client session.
So you should not be using either cookies or HTTP sessions with web services. Return a token ID as part of the response; then the client can send that along with the next request.
Anyway:
JAX-WS web service clients must be configured to maintain session information (such as cookies), using the javax.xml.ws.session.maintain property.
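For example, a minimal client-side sketch, assuming the myClassPort proxy from the question and the standard javax.xml.ws.BindingProvider API:
// Ask the JAX-WS runtime to resend the server's session cookie on subsequent calls
((javax.xml.ws.BindingProvider) myClassPort).getRequestContext()
        .put(javax.xml.ws.BindingProvider.SESSION_MAINTAIN_PROPERTY, Boolean.TRUE);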
Other web service stacks may have similar mechanisms.
On the Server Side
JAX-WS uses some handy annotations defined by Common Annotations for the Java Platform (JSR 250) to inject the web service context and to declare lifecycle methods.
WebServiceContext holds the context information pertaining to a request being served.
You don't need to implement javax.xml.rpc.server.ServiceLifecycle. With a JAX-WS web service, all you need to do is mark a field or method with @Resource. The type element MUST be either java.lang.Object or javax.xml.ws.WebServiceContext.
@WebService
public class HelloWorld {

    // Injected by the JAX-WS runtime; gives access to the current request's context
    @Resource
    private WebServiceContext wsContext;

    public void sayHello() {
        MessageContext mc = wsContext.getMessageContext();
        HttpSession session = ((javax.servlet.http.HttpServletRequest)
                mc.get(MessageContext.SERVLET_REQUEST)).getSession();
    }
}
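If the goal is to read the raw "cookie" string the client sent, rather than relying on the servlet session, one possible sketch inside sayHello() uses the transport headers exposed on the MessageContext (java.util.Map and java.util.List; note that header-name casing can vary between runtimes):
// Transport headers exposed by JAX-WS as a Map<String, List<String>>
@SuppressWarnings("unchecked")
Map<String, List<String>> headers =
        (Map<String, List<String>>) mc.get(MessageContext.HTTP_REQUEST_HEADERS);
List<String> cookieValues = (headers != null) ? headers.get("Cookie") : null; // may be null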
There are some misleading answers to this question, so I will attempt to highlight current best practices. Most of these suggestions are part of the OWASP security guidelines, which I strongly recommend that anyone working on web development review.
1) ALWAYS use temporary (session-scoped) cookies.
2) All cookies should be protected and encrypted.
3) Do not pass tokens in request payloads.
4) For any request that returns data which may be sent back to the server, include a nonce (single-use token) in your response.
5) Later requests must include both the nonce and the cookie (see the sketch after this list).
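As a rough illustration of points 4 and 5, here is a minimal server-side sketch of issuing and checking single-use nonces. The class and method names are hypothetical, and a real implementation would also need expiry and per-session scoping:
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical helper: issues single-use tokens and rejects replays
public class NonceRegistry {
    private final SecureRandom random = new SecureRandom();
    private final Set<String> outstanding = ConcurrentHashMap.newKeySet();

    // Called when building a response (point 4): generate and remember a nonce
    public String issue() {
        byte[] bytes = new byte[16];
        random.nextBytes(bytes);
        String nonce = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        outstanding.add(nonce);
        return nonce;
    }

    // Called when handling the follow-up request (point 5): each nonce is valid exactly once
    public boolean consume(String nonce) {
        return nonce != null && outstanding.remove(nonce);
    }
}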
Again, my recommendation is to review the OWASP guidelines and proceed accordingly.
You may want to look into using a service provider for authentication - this is much smarter than brewing your own solution as there are literally a million details that all must be correct. Auth0.com is one of these.