EJB periodically polling a concurrent Queue - concurrency

This is my scenario:
I have two singleton EJBs. The first one receives some data and writes it to a Queue data structure instantiated by the other EJB.
@Singleton
@Startup
public class Client implements IClient {

    @EJB
    IClientInQueue reporter;
    ....

    @Asynchronous
    public void update(String message) {
        StatusMessage m = new StatusMessage();
        reporter.addStatusMessage(m);
    }
}

@Startup
@Singleton
public class ClientInQueue implements IClientInQueue {

    private ConcurrentLinkedQueue<StatusMessage> statusInQueue;

    public void addStatusMessage(StatusMessage m) {
        statusInQueue.add(m); // add element to queue
    }
}
This works fine. Now I want to poll this queue periodically and then perform some dispatching operations.
My issue is that I can't use Runnable in an EJB context. I'm looking to migrate to Spring, but before doing that I want to know whether I'm missing something.
Thanks.

... now I want to poll this queue periodically ...
If you need to execute some code periodically, the Java EE specification provides a service called the Timer Service that is useful in these cases. This service gives you the possibility to execute your code at a defined interval.
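For this polling use case, an automatic timer could look like the sketch below (the bean name and the drainNext() accessor are illustrative assumptions, not part of the question's code):

```java
@Singleton
@Startup
public class QueueDispatcher {

    @EJB
    IClientInQueue reporter;

    // Automatic, non-persistent timer: the container calls this method
    // every 30 seconds; no Runnable or manual thread is needed.
    @Schedule(hour = "*", minute = "*", second = "*/30", persistent = false)
    public void dispatch() {
        StatusMessage m;
        // drainNext() is an assumed accessor on the queue-owning bean
        // that returns null once the queue is empty
        while ((m = reporter.drainNext()) != null) {
            // dispatching logic goes here
        }
    }
}
```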
My issue is that I can't use Runnable in an EJB context.
Since Java EE 7 (JSR 236: Concurrency Utilities), it has also been possible to create managed threads, which allow you to run new threads within a container in a safe way.
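With JSR 236 the same polling could instead use a container-managed executor; a sketch, assuming the container's default ManagedScheduledExecutorService resource and an illustrative bean name:

```java
@Singleton
@Startup
public class ManagedPoller {

    // Default JSR 236 resource provided by Java EE 7+ containers
    @Resource
    private ManagedScheduledExecutorService executor;

    @EJB
    IClientInQueue reporter;

    @PostConstruct
    public void init() {
        // Threads scheduled here are container-managed, unlike a plain
        // new Thread()/Runnable, so this is safe in an EJB context.
        executor.scheduleAtFixedRate(this::poll, 0, 30, TimeUnit.SECONDS);
    }

    private void poll() {
        // drain the queue and dispatch here
    }
}
```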

Related

How to bring SOAP capability to payara/micro in EJB project

I have an EJB project providing web services (both SOAP and REST) running inside a container with payara/micro as the base image. payara/micro does not come with JAX-WS (SOAP support) out of the box; however, by adding
cxf-rt-frontend-jaxws
and
cxf-rt-transports-http
as dependencies to the project, as well as following this tutorial and putting in the following code instead:
@Override
public void loadBus(ServletConfig servletConfig) {
    super.loadBus(servletConfig);
    Bus bus = getBus();
    BusFactory.setDefaultBus(bus);
    Endpoint.publish("/MySoapService", new ASoapService());
}
I was able to make the SOAP interface almost available (the WSDL information is publicly available already), and
http://localhost:8080/<my project name>/services
even listed the available services as well as their methods and endpoint/WSDL/target-namespace information.
But when trying to access the SOAP service via SOAP client, I got on the server side errors with the following line of info:
...
Caused by: java.lang.NullPointerException: null
at com.example.ASoapService.getXxx
...
Where
ASoapService
is actually an EJB. So I tried instead to replace the above code with the following:
@EJB
ASoapService aSoapService;
...
Endpoint.publish("/MySoapService", aSoapService);
During startup of the container, I got:
Caused by: javax.naming.NameNotFoundException: com.example.ASoapServiceF#com.example.ASoapService not found
By checking the logs, I found a possible reason:
When the SOAP part starts up with the following code
Endpoint.publish("/MySoapService", aSoapService);
the EJB container is not yet ready, and thus the lookup of
ASoapService
failed. Is this assumption correct? Because normally you should see something like:
[2018-02-02T14:43:57.821+0000] [] [INFO] [AS-EJB-00054] [javax.enterprise.ejb.container] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1517582637821] [levelValue: 800] Portable JNDI names for EJB ASoapService: [java:global/<my project name>/ASoapService, java:global/<my project name>/ASoapService!com.example.ASoapService]
during start up, which is not the case for my situation.
I am relatively new to the EJB and Glassfish world. Can I somehow force the EJB container to start first? Or does it actually have anything to do with the startup sequence? How do I combine the two?
Thanks in advance.
You shouldn't be trying to force the EJB container to start first. Instead, try one of the following:
instead of @EJB ASoapService aSoapService to inject the EJB, try @Inject ASoapService aSoapService - @Inject should wait for the dependencies and therefore wait until the EJB is available
run the Endpoint.publish method from an object which is initialized after the EJB container is ready, either from a startup singleton EJB or when the CDI application scope is initialized: https://rmannibucau.wordpress.com/2015/03/10/cdi-and-startup/
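A minimal sketch of the startup-singleton variant (the bean name is illustrative, and it assumes the CXF servlet and bus are already set up as in the question):

```java
@Singleton
@Startup
public class SoapPublisher {

    @Inject
    ASoapService aSoapService;

    @PostConstruct
    public void publish() {
        // @PostConstruct runs only once this bean is fully initialized,
        // so the injected EJB reference is available by the time we publish.
        Endpoint.publish("/MySoapService", aSoapService);
    }
}
```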

How do I implement functions in a Bond services definition?

Looking at the Bond Comm documentation, it wasn't clear to me how the functions I define for services are connected to specific functions in my code.
Does it look for a function with the same signature in the project and assign it to the endpoint? Is there some underlying settings file I am missing?
NB: Bond Comm is deprecated. It isn't supported any more, and will be removed from Bond in an upcoming release. Bond-over-gRPC is its replacement.
When using either Bond-over-gRPC or Bond Comm, the generated server-side code is an abstract class with an abstract method for each method in the service definition. To provide your logic for these methods, you inherit from the generated base and provide implementations for all the service methods. Then, typically in your main function, you create a Server (for Bond-over-gRPC) or a Listener (for Bond Comm) and register an instance of the implementation class. This sets up the routing from the IDL service methods to your implementation code.
From the Bond-over-gRPC C# documentation:
Given a service definition like the following:
service Example
{
ExampleResponse ExampleMethod(ExampleRequest);
}
gbc will generate C# classes for gRPC with the --grpc flag:
gbc c# --grpc example.bond
...
To build the service functionality, simply write a concrete service
implementation by subclassing the server base and supplying the
business logic:
public class ExampleServiceImpl : Example.ExampleBase
{
    public override async Task<IMessage<ExampleResponse>> ExampleMethod(
        IMessage<ExampleRequest> param,
        ServerCallContext context)
    {
        ExampleRequest request = param.Payload.Deserialize();
        var response = new ExampleResponse();
        // Service business logic goes here
        return Message.From(response);
    }
}
This service implementation is hooked up to a gRPC server as follows:
var server = new Grpc.Core.Server {
Services = { Example.BindService(new ExampleServiceImpl()) },
Ports = { new Grpc.Core.ServerPort(ExampleHost, ExamplePort, Grpc.Core.ServerCredentials.Insecure) } };
server.Start();
At this point the server is ready to receive requests and route them to the
service implementation.
There are more examples as well:
a standalone C# project
a C# ping/pong example
a C++ "Hello World" example
a C++ ping/pong example
It's worth pointing out that (Bond-over-) gRPC and Bond Comm are neither SOAP nor REST. The question was tagged with web-service, and sometimes people mean SOAP/REST when they talk about web services. I think of both gRPC and Bond Comm as custom binary protocols over TCP, although gRPC is run atop HTTP/2.

How to expose asynchronous SOAP web services in Mulesoft using CXF

I have exposed SOAP web services using CXF in Mulesoft; here are my interface and Java code.
Interface is:
@WebService
public interface HelloWorld {
    String sayHi(String text);
}
The java class is:
@WebService(endpointInterface = "demo.HelloWorld",
            serviceName = "HelloWorld")
public class HelloWorldImpl implements HelloWorld {

    public String sayHi(String text) {
        /*
         * Here I am writing logic that takes 15 seconds, and I do not
         * want the calling web service to have to wait for this
         * process to finish.
         */
        return "Hello " + text;
    }
}
Currently the caller waits for all the logic in the sayHi() method to finish, but I want to do this asynchronously: the sayHi() method continues processing and the caller does not wait for the response. So, any idea how I can achieve this in Mulesoft?
This is a reference in Mulesoft:
https://docs.mulesoft.com/mule-user-guide/v/3.7/building-web-services-with-cxf
Did you consider using a flow processing strategy and tuning your threads? See:
https://docs.mulesoft.com/mule-user-guide/v/3.7/flow-processing-strategies
https://docs.mulesoft.com/mule-fundamentals/v/3.7/flows-and-subflows
http://blogs.mulesoft.com/dev/mule-dev/asynchronous-message-processing-with-mule/
and in the flow you can also use an async scope after your HTTP listener to make it asynchronous:
https://docs.mulesoft.com/mule-user-guide/v/3.7/async-scope-reference
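A rough sketch of the async-scope idea in Mule 3 XML (the flow name, connector config reference, and component class are illustrative assumptions; see the async-scope reference above for the exact element details):

```xml
<flow name="helloFlow">
    <!-- the caller gets a response as soon as the synchronous part of the flow finishes -->
    <http:listener config-ref="HTTP_Listener_Configuration" path="/hello"/>
    <set-payload value="Request accepted"/>
    <!-- everything inside <async> runs on a separate thread; the caller does not wait for it -->
    <async>
        <component class="demo.HelloWorldImpl"/>
    </async>
</flow>
```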

PersistenceContextType.EXTENDED inside Singleton

We are using Jboss 7.1.1 in an application mostly generated by Jboss Forge, however we added a repository layer for all domain related code.
I was trying to create a Startup bean to initialize the database state. I would like to use my existing repositories for that.
My repositories all have an extended PersistenceContext injected into them. I use them from my view beans, which are @ConversationScoped @Stateful beans; by using the extended context my entities remain managed during a conversation.
First I tried this:
@Startup
@Singleton
public class ConfigBean {

    @Inject
    private StatusRepository statusRepository;

    @Inject
    private ZipCodeRepository zipCodeRepository;

    @PostConstruct
    public void createData() {
        statusRepository.add(new Status("NEW"));
        zipCodeRepository.add(new ZipCode("82738"));
    }
}
Example repository:
@Stateful
public class ZipCodeRepository {

    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    private EntityManager em;

    public void add(ZipCode zipCode) {
        em.persist(zipCode);
    }
    ....
}
This ends up throwing a javax.ejb.EJBTransactionRolledbackException on application startup with the following message:
JBAS011437: Found extended persistence context in SFSB invocation call stack but that cannot be used because the transaction already has a transactional context associated with it. This can be avoided by changing application code, either eliminate the extended persistence context or the transactional context. See JPA spec 2.0 section 7.6.3.1.
I struggled to find a good explanation for this; I had actually figured that, since EJBs and their injection are handled by proxies, all the PersistenceContext injection and propagation would be handled automatically. I guess I was wrong.
However, while on this trail of thought I tried the following:
@Startup
@Singleton
public class ConfigBean {

    @Inject
    private SetupBean setupBean;

    @PostConstruct
    public void createData() {
        setupBean.createData();
    }

    @Stateful
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public static class SetupBean {

        @Inject
        private StatusRepository statusRepository;

        @Inject
        private ZipCodeRepository zipCodeRepository;

        public void createData() {
            statusRepository.add(new Status("NEW"));
            zipCodeRepository.add(new ZipCode("82738"));
        }
    }
}
This does the trick. All I did was wrap the code in a stateful session bean that is a static inner class of my singleton bean.
Does anyone understand this behavior? Everything works now, but I'm still a bit puzzled as to why it works this way.
A container-managed extended persistence context can only be initiated
within the scope of a stateful session bean. It exists from the point
at which the stateful session bean that declares a dependency on an
entity manager of type PersistenceContextType.EXTENDED is created, and
is said to be bound to the stateful session bean.
From the posted code it seems ZipCodeRepository isn't itself a stateful bean, but you're calling it from one such bean.
In this case, you are initiating a PersistenceContextType.TRANSACTION context from ConfigBean; it propagates to ZipCodeRepository, which has PersistenceContextType.EXTENDED and tries to join the transaction, hence the exception.
Invocation of an entity manager defined with PersistenceContext- Type.EXTENDED will result in the use of the existing extended
persistence context bound to that component.
When a business method of the stateful session bean is invoked, if the stateful session bean uses container managed transaction
demarcation, and the entity manager is not already associated with the
current JTA transaction, the container associates the entity manager
with the current JTA transaction and calls
EntityManager.joinTransaction. If there is a different persistence
context already associated with the JTA transaction, the container
throws the EJBException.
Whereas in the latter case, you're creating a new transaction in SetupBean for each invocation with TransactionAttributeType.REQUIRES_NEW, and its persistence context is of the extended type, as it's a stateful bean.
Therefore, adding SetupBean as a stateful session bean that initiates a new transaction for each invocation and then calls ZipCodeRepository doesn't result in the exception: ZipCodeRepository joins the same transaction initiated by SetupBean.

Schedule JPA query and access result in a CDI-bean?

Every x minutes I want to query for new instances and cache the results. I currently only need a simple cache solution, so I would like to update a Set in my @ApplicationScoped CacheBean.
I tried:
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
ScheduledFuture<?> sf = scheduler.scheduleAtFixedRate(new Runnable() {
    public void run() {
        //.................
But the thread created couldn't access any contextual instances (InvocationException).
So how to do this the CDI/JPA way?
Using Tomcat 7, Weld, JPA2 - Hibernate.
My recommendation would be to try the version of Tomcat with CDI and JPA already integrated (TomEE). It comes with OpenJPA but you can use Hibernate. Then do your caching with a class like this:
@Singleton
@Startup
public class CachingBean {

    @Resource
    private BeanManager beanManager;

    @Schedule(minute = "*/10", hour = "*")
    private void run() {
        // cache things
    }
}
That component would automatically start when the app starts and would run the above method every ten minutes. See the Schedule docs for details.
UPDATE
Hacked up an example for you. Uses a nice CDI/EJB combination to schedule CDI Events.
Effectively this is a simple wrapper around the BeanManager.fireEvent(Object,Annotations...) method that adds ScheduleExpression into the mix.
@Singleton
@Lock(LockType.READ)
public class Scheduler {

    @Resource
    private TimerService timerService;

    @Resource
    private BeanManager beanManager;

    public void scheduleEvent(ScheduleExpression schedule, Object event, Annotation... qualifiers) {
        timerService.createCalendarTimer(schedule, new TimerConfig(new EventConfig(event, qualifiers), false));
    }

    @Timeout
    private void timeout(Timer timer) {
        final EventConfig config = (EventConfig) timer.getInfo();
        beanManager.fireEvent(config.getEvent(), config.getQualifiers());
    }

    // Doesn't actually need to be serializable, just has to implement it
    private final class EventConfig implements Serializable {

        private final Object event;
        private final Annotation[] qualifiers;

        private EventConfig(Object event, Annotation[] qualifiers) {
            this.event = event;
            this.qualifiers = qualifiers;
        }

        public Object getEvent() {
            return event;
        }

        public Annotation[] getQualifiers() {
            return qualifiers;
        }
    }
}
Then to use it, have Scheduler injected as an EJB and schedule away.
public class SomeBean {

    @EJB
    private Scheduler scheduler;

    public void doit() throws Exception {
        // every five seconds
        final ScheduleExpression schedule = new ScheduleExpression()
                .hour("*")
                .minute("*")
                .second("*/5");
        scheduler.scheduleEvent(schedule, new TestEvent("five"));
    }

    /**
     * Event will fire every five seconds
     */
    public void observe(@Observes TestEvent event) {
        // process the event
    }
}
Full source code and working example, here.
You must know that CDI events are not multi-threaded.
If there are 10 observers and each of them takes 7 minutes to execute, then the total execution time for the one event is 70 minutes. It would do you absolutely no good to schedule that event to fire more frequently than every 70 minutes.
What would happen if you did? It depends on the @Singleton @Lock policy:
@Lock(WRITE) is the default. In this mode the timeout method would essentially be locked until the previous invocation completes. Having it fire every 5 minutes even though you can only process one event every 70 minutes would eventually cause all the pooled timer threads to be stuck waiting on your singleton.
@Lock(READ) allows parallel execution of the timeout method. Events will fire in parallel for a while. However, since they actually take 70 minutes each, within an hour or so we'll run out of threads in the timer pool, just like above.
The elegant solution is to use @Lock(WRITE) and then specify some short timeout like @AccessTimeout(value = 1, unit = TimeUnit.MINUTES) on the timeout method. When the next 5-minute invocation is triggered, it will wait up to 1 minute to get access to the singleton before giving up. This keeps your timer pool from filling up with backed-up jobs -- the "overflow" is simply discarded.
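That overflow-discarding combination might look like the sketch below (the bean and method names are illustrative; the annotations are the ones described above):

```java
@Singleton
@Lock(LockType.WRITE)
public class SerializedCacheRefresher {

    // If a timeout fires while the previous run still holds the write lock,
    // the new invocation waits at most one minute and is then discarded
    // (with a ConcurrentAccessTimeoutException) instead of piling up.
    @AccessTimeout(value = 1, unit = TimeUnit.MINUTES)
    @Schedule(minute = "*/5", hour = "*", persistent = false)
    public void refresh() {
        // long-running cache refresh goes here
    }
}
```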
Instead of passing new Runnable() {....} into scheduler.scheduleAtFixedRate, create a CDI bean that implements Runnable, @Inject that bean, and then pass it to scheduler.scheduleAtFixedRate.
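A sketch of that suggestion (the bean names are illustrative; note the thread still runs outside any request or conversation context, so the Runnable should only touch @ApplicationScoped beans like the CacheBean from the question):

```java
@ApplicationScoped
public class CacheRefresher implements Runnable {

    @Inject
    private CacheBean cacheBean; // the @ApplicationScoped cache from the question

    @Override
    public void run() {
        // run the JPA query and update the cached Set here
    }
}

// Wherever the schedule is set up:
@Inject
private CacheRefresher refresher;

private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

public void start() {
    // the injected bean is a contextual instance, unlike an anonymous Runnable
    scheduler.scheduleAtFixedRate(refresher, 0, 10, TimeUnit.MINUTES);
}
```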
After chatting with David Blevins for a good while, I can acknowledge his answer as a great one that I voted up. Big thanks for all that. Although, David, you forgot to announce your involvement in TomEE, which I know always bothers someone.
Anyway, the solution I went for was suggested by Mark Struberg in #deltaspike (freenode).
As a Deltaspike user I was pleased to do it with Deltaspike. The solution is outlined in this blog post:
http://struberg.wordpress.com/2012/03/17/controlling-cdi-containers-in-se-and-ee/
I had to switch to OWB; see https://issues.apache.org/jira/browse/DELTASPIKE-284
Cheers